
Tuesday, August 31, 2010

Wireless History in Indonesia

Literally, an internet (short for interconnected networking) is a collection of computers connected into a network. Written with a capital 'I', the Internet is the global, publicly accessible computer system that uses TCP/IP as its packet-switching communication protocol. The largest such network of networks is called the Internet. The method of connecting networks together in this way is called internetworking.

The Internet is maintained through bilateral and multilateral agreements and technical specifications (which describe the protocols for transferring data between networks). These protocols are established in discussions of the Internet Engineering Task Force (IETF), which are open to the public. The IETF issues documents known as RFCs (Requests for Comments). Some RFCs are designated Internet Standards by the Internet Architecture Board (IAB). Frequently used Internet protocols include IP, TCP, UDP, DNS, PPP, SLIP, ICMP, POP3, IMAP, SMTP, HTTP, HTTPS, SSH, Telnet, FTP, LDAP, and SSL.

Some popular services on the Internet that use these protocols are email (electronic mail), Usenet, newsgroups, file sharing, the WWW (World Wide Web), Gopher, session access, WAIS, finger, IRC, MUD, and MUSH. Among all of these, email and the World Wide Web are used most often, and many other services are built on top of them, such as mailing lists and weblogs. The Internet also makes real-time services possible, such as web radio and webcasts, which can be accessed around the world. Additionally, via the Internet it is possible for two or more users to communicate directly through instant messaging programs such as Camfrog, Pidgin (Gaim), Trillian, Kopete, Yahoo Messenger, MSN Messenger and Windows Live Messenger.
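Each of these services rides on the transport protocols listed above. As a minimal sketch of that layering (illustrative only; example.com is a stand-in URL), fetching a web page over HTTP from Python looks like this:

# Minimal sketch: the WWW service is HTTP running over a TCP/IP connection.
from urllib.request import urlopen

with urlopen("http://example.com/") as response:  # opens a TCP connection, speaks HTTP
    page = response.read().decode("utf-8", errors="replace")

print(page[:80])  # first few characters of the returned HTML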

Some popular Internet services are based on closed (proprietary) systems, such as IRC, ICQ, AIM, CDDB, and Gnutella.

Countries with the best Internet access include South Korea (where 50% of the population has broadband access) and Sweden. There are two common forms of Internet access: dial-up and broadband. In Indonesia, as in other developing countries where Internet access and PC penetration are still low, approximately 42% of Internet access happens through public facilities such as Internet cafes, cyber cafes, and hotspots. Other public places often used for Internet access are campuses and offices.

In addition to using a PC (personal computer), we can also access the Internet through a mobile phone using a facility called GPRS (General Packet Radio Service). GPRS is a wireless communication standard with a connection speed of up to 115 kbps that supports a wider range of applications (graphics and multimedia). The phone must support GPRS for this facility to be available. GPRS settings on the phone depend on the operator used (Telkomsel, Indosat, XL, 3). Internet access costs are calculated from the amount of data downloaded (per kilobyte).
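Since the charge depends on data volume rather than connection time, estimating a session's cost is simple multiplication. A quick sketch (the per-kilobyte tariff below is made up for illustration; real tariffs depend on the operator):

# Illustrative GPRS billing: cost is proportional to kilobytes downloaded.
TARIFF_PER_KB = 10  # hypothetical operator tariff, in rupiah per kilobyte

def session_cost(kilobytes_downloaded: float) -> float:
    """Return the cost of a GPRS session under volume-based billing."""
    return kilobytes_downloaded * TARIFF_PER_KB

print(session_cost(500))  # a 500 KB session costs 5000 rupiah at this rate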




Tuesday, August 24, 2010

Wireless Wi-Fi Transfer With Resume Capability Possible With WirelessGet

Perhaps the most attractive feature of the WirelessGet software application is its ability to copy files or folders over wireless/Wi-Fi with resume capability and at phenomenal speed, whether from a network drive, hard drive, USB stick, DVD/CD, or any other storage medium (including a wired LAN). As long as you have access and the owner has allowed sharing with you, copying over Wi-Fi is a breeze. Public or private Wi-Fi hotspots and access points are no exception; you just need access to them!

Wireless file sharing is on the increase due to the popularity and integration of Wi-Fi technology in desktop and mobile computers (laptops or notebooks), PDAs, and even mobile phones, which has led to the technology becoming a household name. Most people simply refer to it as wireless, and what a useful technology it is: no longer are we confined to our desktops at home or cooped up in our offices when we need to surf the Internet. We can now be in a park, restaurant, or hotel, or even on trains and airplanes, and still check our email, message family and friends, conduct business, or run our remote servers and manage websites with ease. That said, no new technology matures without teething problems, and Wi-Fi is no different; one of those problems lies in file sharing.

All operating systems, including Microsoft Windows™ with all its sophistication, have a problem when copying files or folders from one laptop to another if the wireless signal is lost. It is not the operating system that is at fault here: sometimes the wireless signal simply drops out halfway, and you end up having to start the copy all over again. While this may not be very common, it is a normal and expected occurrence on a Wi-Fi network. To solve this problem, the WirelessGet application, which currently runs only under Windows™, uses advanced and intelligent mechanisms to copy files safely and securely, with memory use and caching in mind. It also monitors the connection so it knows exactly when the signal was lost, and saves a record of what has already been copied and what remains for the next time you resume copying. WirelessGet does not rely on file or folder sizes alone; it looks deeper into the files themselves, using a sophisticated algorithm and count methods to avoid corrupting files when a resumed copy is initiated.
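WirelessGet's internal algorithm isn't published, but the core idea of resumable copying can be sketched in a few lines. The following is a simplified illustration, not the product's actual code: it appends to a partial destination file and skips the bytes that already arrived:

# Resumable copy sketch: chunked copy that resumes at the size of the
# partial destination file (simplified; not WirelessGet's real algorithm).
import os

CHUNK = 1 << 20  # 1 MiB per read keeps memory use bounded

def resumable_copy(src: str, dst: str) -> None:
    """Copy src to dst, resuming from however many bytes dst already has."""
    done = os.path.getsize(dst) if os.path.exists(dst) else 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(done)                  # skip bytes already transferred
        while chunk := fin.read(CHUNK):
            fout.write(chunk)           # append only the remaining data

# If the Wi-Fi drops mid-copy (an OSError on a network path), simply call
# resumable_copy(src, dst) again once the signal is back.

A real implementation would also verify the partial data, for example with checksums, which is presumably what the article's "count methods" refer to.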

Imagine copying a 4.7 GB DVD full of videos or songs, or worse still, a situation where the only means of access to a backup is over a remote wireless network: a few gigabytes of customer and client orders, details, or files to be backed up from the master server at the end of the day's office hours. You want to get home, but the signal stops and is lost at the last 10 MB of a 10 GB folder. With WirelessGet, you wait a couple of minutes for the signal to come back, copy and paste again, and it will not do it all over again but safely resumes where it stopped, copying only the 10 MB remaining. So you get home maybe two minutes late instead of hours late, if the files are important and you can't leave until they are backed up safely.

WirelessGet is freeware and has many handy features: "Exit when finished" shuts the application down, and "Shut down when finished" shuts the computer down safely. Options such as memory and cache control are all available through the "Drop Zone", a tiny floating window you can right-click to manage those options, and even make disappear by un-ticking the drop zone box. Once you install it, you don't have to do anything special. If you want to copy, whether on the same computer from one folder to another, or from a network drive or computer, just highlight the folder or file as usual, right-click, and press copy. WirelessGet automatically flashes its window asking you to confirm the file or folder name (the task name) you are copying, the original location (the source path), and the destination (save to). By default it copies to C:\Program Files\WirelessGet\Default, so change that by pressing the browse (...) button and picking a folder or drive location. If you want to transfer/copy immediately, that option is the default; if you want to do it later, you can un-tick "Immediately" and choose "Manual", which saves the copying task for a later time and stores it in the main window. When you click that window you will see the task (the name of the file or folder) with a grey icon, which means it is waiting for you to right-click it and either copy (Start, or Start All if there is more than one file) or delete the task (cancel), among other options.

The manual transfer option is similar (to some degree) to resuming the printing of a file or page after you press stop: the next time you start the printer, you see the file again in its task window and can print or cancel. Note, however, that any files or folders must be shared if they come from a wireless network or hard-wired LAN. WirelessGet can also copy from a wired LAN as easily as over wireless, and in most cases faster, without any signal drops. Although that process is less likely to be interrupted, interruptions can still happen, and WirelessGet deals with the situation just the same!

Tests: Using an average Intel Pentium 4 with 1.5 GB of memory and a 54g Wi-Fi network connection, WirelessGet was tested a few times, copying small and large files in parallel with normal Windows copying. When the signal never dropped, copying small files was almost the same, though WirelessGet was slightly faster; but when copying files over 1 GB it was an entirely different story. On average, WirelessGet cut the time to as little as a tenth of a normal copy; in other words, the larger the file, the less time it takes to copy compared to normal use. One advantage is that WirelessGet can be minimized so you can get on with your work. With normal Windows copy, that annoying animation of files flying from one folder to another stays pinned on screen until the transfer is complete, and you daren't close it; if you do, or your machine crashes, you end the transfer with a possibility of corrupt files and disastrous consequences. We have all been there at one time or another!

You can also download it from any reputable download site, such as CNET's Download.com.

WirelessGet has many useful options and features. Version 1.2, which was a commercial release, has now been upgraded to version 1.3, with better help and support, a different and easier install and uninstall process, and, of course, a freeware license. WirelessGet was originally owned by BINT Software, and the Wi-Fi Technology Forum acquired the business, including the site and the wirelessget.com domain, around early 2008. Wi-Fi-TF aims to upgrade to the next version with a complete rewrite that will include, among other capabilities, legal Internet file sharing, and is looking for capable developers to take the development to the next level by mid 2010. Interested developers should email Tahar (contact AT wirelessget.com) with a career history or a brief résumé/CV and either a quote or a partnership proposal.

More about WirelessGet here: http://www.wirelessget.com/



Sony’s new Millimeter-wave Wireless Technology promises a cost-effective solution for high speed data transfers within electronics products

Sony’s new Intra-Connection technology allows manufacturers to simplify circuit boards and eliminates the need for large ICs by providing ultra high speed wireless data transfer (11Gbps over a distance of 14mm) among the components of electronics products. The technology works through electromagnetic waves with high frequencies of 30GHz to 300GHz and wavelengths between 1mm and 10mm, and uses very small antennas merely 1mm in size.
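The quoted band edges follow directly from the relation wavelength = c / frequency; a quick check in Python (illustrative only):

# Check the millimeter-wave band limits using wavelength = c / f.
C = 299_792_458  # speed of light in m/s

for freq_ghz in (30, 300):
    wavelength_mm = C / (freq_ghz * 1e9) * 1000
    print(f"{freq_ghz} GHz -> {wavelength_mm:.1f} mm")
# 30 GHz -> ~10 mm and 300 GHz -> ~1 mm, matching the 1mm-10mm band above.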

Tokyo, Japan, February 8, 2010 – Sony Corporation (‘Sony’) today announced the development of millimeter-wave wireless intra-connection technology that realizes high speed wireless data transfer inside electronic products such as television sets. By replacing complicated wires and internal circuitry with wireless connections, this technology enables a reduction in the size and cost of the ICs and other components used in electronics products, as well as enhanced reliability of the final product.

The advancing functionality of today’s electronics products requires ever increasing quantities of internal data transfer. Once wired connections approach the limit of their data capacity, additional circuitry is required to facilitate larger data transfers; however, this leads to increasingly complicated IC packages, intricate printed circuit boards, and larger IC sizes.

This new wireless intra-connection system is based on millimeter-wave wireless data transfer technology. Millimeter-wave refers to electromagnetic waves with a frequency of 30GHz to 300GHz and a wavelength between 1mm and 10mm. With their high frequency, millimeter-waves are suited to ultra high speed data transfer, while a further advantage is their ability to transfer data using only very small antennas. The high frequency technologies used in this system also draw on Sony’s extensive expertise and years of experience in the field of wireless communications and broadcast products. Specifically, Sony has integrated highly energy efficient millimeter-wave circuits on 40nm-CMOS-LSIs (with an active footprint of just 0.13mm², including both the transmitter and receiver) to realize high speed, 11Gbps data transfer over a distance of 14mm using antennas approximately 1mm in size.
By replacing physical circuitry in electronics products with high speed wireless connections, this new data transfer technology reduces the number of wired connections and minimizes IC use, to simplify the IC package and printed circuit board. Furthermore, because the data transfer occurs without contact, this enhances the reliability of movable and detachable parts within the product.

Sony will proceed with efforts to adopt this technology in a range of electronics products, while continuing its development to meet ever-increasing data-rate requirements.

This technology will be presented at "ISSCC 2010", to be held in San Francisco, California, US, from February 7th, 2010.
Key Features
1. Optimized circuit for intra-connection on CMOS-LSIs
Sony has drawn on its years of experience in radio frequency technologies to realize compact, low power, millimeter-wave circuits optimal for use in intra-connection over CMOS-LSIs. Due to the small footprint of just 0.13mm², the circuits can be built into a single chip at very low cost.

2. Injection lock method realizes small size, low power consumption and sufficient transmission range for intra-connection.
Synchronized detection, which aligns the receiver with the transmitted carrier frequency, is an effective means of providing sufficient transmission range for intra-connection, while also ensuring low power consumption. However, the PLL (Phase Locked Loop) generally used for this synchronization has the disadvantage of requiring large, power-consuming circuitry to transmit at millimeter-wave frequency. By adopting an injection lock system that eliminates PLL, Sony has enabled synchronized detection over small size circuits, while also minimizing power consumption and providing sufficient transmission range for successful intra-connection.

This technology, used together with miniature antennas approximately 1mm in length, enables transmission speeds of 11Gbps over a distance of 14mm, with power consumption of 70mW. It is possible for this distance to be extended to around 50mm using high directivity antennas.

Latest Wireless Technology

Mobile and WLAN services enable users to access full Internet services on handheld devices without cable connections, allowing mobility and convenience.

Mobile phones can provide multiple services including voice, email, text messaging, paging, web access, and voice recognition services. Newer mobile phones incorporate PDA, wireless Internet, email, and global positioning system (GPS) capabilities.

Bluetooth wireless technology uses radio waves to enable mobile devices, such as mobile phones, PDAs and laptops, to establish wireless connections with other devices that are in short range.

Bluetooth-enabled devices don't need to be in line of sight or pointing at each other. And, because they are wireless, Bluetooth devices don't need to plug into your cell phone. To use Bluetooth, all you need to do is have the devices "shake hands", in a sense.

Monday, August 23, 2010

ZTE and ChinaTel Sign Memorandum of Understanding for Global Strategic Partnership

ChinaTel Group, Inc.

SAN DIEGO & SHENZHEN, China--(BUSINESS WIRE)--ChinaTel Group, Inc. (ChinaTel) (OTCBB:CHTL), a US-based provider of high speed wireless broadband and telecommunications infrastructure engineering and construction services, and ZTE Corporation (ZTE) (H share stock code: 0763.HK / A share stock code: 000063.SZ), a leading international telecommunications solutions provider headquartered in the People’s Republic of China (PRC), today announced the signing of a binding memorandum of understanding (MOU) for a strategic partnership to advance both parties’ interests in delivering innovative telecommunications solutions to individual, enterprise and government consumers worldwide. Under the terms of the MOU, ZTE will be the preferred and primary provider of customized equipment, software, consumer products, operational services and financing for the high speed wireless broadband telecommunications networks ChinaTel is deploying in the PRC, Peru, and other markets ChinaTel enters in the future. ChinaTel and ZTE will also work together to analyze consumer demand for new products and solutions, develop business plans to determine financial viability, execute design concepts, and roll out completed products and solutions, including manufacturing, marketing and sales, all with the goal of expanding the reach of wireless broadband access.

ZTE shall treat ChinaTel as its preferred customer in the supply of equipment, consumer products, operational services, solutions and financing. ZTE shall offer ChinaTel a favorable vendor financing proposal for each project identified and use its best efforts to facilitate ChinaTel’s applications for debt financing by banks with which ZTE has relationships. The parties will share equal ownership of intellectual property involved in equipment, software, consumer products, services or solutions developed through their joint efforts.


“ZTE is pleased to add ChinaTel as its strategic partner for development of wireless broadband networks. The markets in which ChinaTel is currently deploying or investigating opportunities include Peru and other Latin American countries where ZTE also wishes to expand its influence, as well as the People’s Republic of China, where ZTE’s local presence offers competitive advantages,” said Mr. Lirong Shi, the CEO of ZTE. “In addition to a vendor and financing relationship, we are excited to work with ChinaTel to develop innovative new products and solutions to meet the expectations of commercial, government and residential subscribers to realize the full potential of wireless multi-media technologies.”

About ChinaTel Group, Inc.

ChinaTel Group, Inc. (ChinaTel), through its controlled subsidiaries, provides fixed telephony, conventional long distance, high-speed wireless broadband and telecommunications infrastructure engineering and construction services. ChinaTel is presently building, operating and deploying networks in Asia and South America: a 3.5GHz wireless broadband system in 29 cities across the People’s Republic of China (PRC) with and for CECT-Chinacomm Communications Co., Ltd., a PRC company that holds a license to build the high speed wireless broadband system; and a 2.5GHz wireless broadband system in cities across Peru with and for Perusat, S.A., a Peruvian company that holds a license to build high speed wireless broadband systems. ChinaTel’s vision remains clear: (i) to acquire and operate wireless broadband networks in key markets throughout the world; (ii) to deliver a new world of communications; and (iii) to invest in building long-lasting relationships with customers and partners to lead the broadband industry in customer service and responsiveness. Our strategy is to build leading-edge IP-leveraged solutions advanced by our worldwide infrastructure and leadership in emerging markets. www.ChinaTelGroup.com

About ZTE Corporation

ZTE is a leading global provider of telecommunications equipment and network solutions. Founded in 1985, ZTE Corporation has been listed as an A-share company on the Shenzhen Stock Exchange since 1997. In December 2004, ZTE was successfully listed on the Main Board of The Stock Exchange of Hong Kong, becoming the first Chinese company to hold both A shares and H shares. Currently, ZTE is the telecom equipment provider with the largest market capitalization and revenue in China’s A-share market.

ZTE has the widest and most complete product range in the world covering virtually every sector of the wireline, wireless, service and terminals markets. The company delivers innovative, custom-made products and services to over 500 operators in more than 140 countries, helping them to achieve continued revenue growth and to shape the future of the world’s communications. Besides its established cooperation with top Chinese telecoms players including China Mobile, China Telecom and China Unicom in China, the company also has developed long-term partnerships with industry-leading operators including France Telecom, Vodafone, Telstra, Telefonica, among others.

Safe Harbor Statement

This press release contains forward-looking statements that involve risks and uncertainties. Actual results, events and performances could vary materially from those contemplated by these forward-looking statements. These statements involve known and unknown risks and uncertainties, which may cause the Company's actual results, expressed or implied, to differ materially from expected results. These risks and uncertainties include, among other things, product demand and market competition. You should independently investigate and fully understand all risks before making an investment decision.

Contacts

Retail Investors
ChinaTel Group, Inc.
Tim Matula
Investor Relations
(Toll Free) 1-877-260-9170
investors@chinatelgroup.com

ChinaTel and ZTE announced the signing of a binding memorandum of understanding (MOU) for a strategic partnership to advance both parties’ interests in delivering telecommunications solutions worldwide.

Sunday, August 22, 2010

Mobile phones and the Internet

The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet service access on mobile phones was limited until prices came down from that model and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion for their BlackBerry product was launched in America. To make efficient use of the small screen, tiny keypad, and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices: the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP.

The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. European and North American use of the Internet was influenced by a large installed base of personal computers, so the growth of mobile phone Internet access was more gradual, but it had reached national penetration levels of 20-30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.

Some concerns have been raised over the historiography of the Internet's development, specifically that it is hard to find documentation of much of the Internet's development, for several reasons, including the lack of centralized documentation for many of the early developments that led to the Internet.

From Gopher to the WWW

As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. Unfortunately, these projects fell short in being able to accommodate all the existing data types and in being able to grow without bottlenecks.

One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex" and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS. Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard. Gopher became the first commonly-used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.

In 1989, while working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread. For his work in developing the World Wide Web, Berners-Lee received the Millennium Technology Prize in 2004. One early popular web browser, modeled after HyperCard, was ViolaWWW.

A potential turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, also known as the Gore Bill. Indeed, Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed in his presidential election campaign.)

Mosaic was eventually superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it. Another important event held on January 11, 1994, was The Superhighway Summit at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications."

24 Hours in Cyberspace, "the largest one-day online event" (February 8, 1996) up to that date, took place on the then-active website, cyber24.com. It was headed by photographer Rick Smolan. A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on January 23, 1997, featuring 70 photos from the project.

Search engines

Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web, but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.

As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular; Yahoo! (founded 1994) and AltaVista (founded 1995) were the respective industry leaders. By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.

Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web developers improve their search rankings, and an entire body of case law has developed around matters that affect search engine rankings, such as the use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.

On June 3, 2009, Microsoft launched its own search engine, Bing, which immediately became popular with the masses searching the Internet. It has multiple sites belonging to separate countries; the United States version, for example, is different from the Australian version. In the US, Bing ranked 17th among all websites out of over 450,000 websites, up from 5120th the week before the official launch, when the website was merely a placeholder. Within the Search Engines category, Bing ranked 4th out of the search engines tracked by Hitwise, and Bing Image Search ranked 15th, for the week ending June 6, 2009.
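PageRank itself is, at heart, a simple iterative computation: a page ranks highly when highly ranked pages link to it. A toy power-iteration sketch on a made-up four-page link graph (not Google's production algorithm, which also handles dangling links and enormous scale):

# Toy PageRank via power iteration on a hypothetical four-page link graph.
links = {  # page -> pages it links to (made-up example data)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

DAMPING = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the ranks settle
    new = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = DAMPING * rank[page] / len(outgoing)
        for target in outgoing:
            new[target] += share  # each page passes rank to its link targets
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "c" ranks highest here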

Dot-com bubble

Suddenly the low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app: it could bring together unrelated buyers and sellers in seamless and low-cost ways. Visionaries around the world developed new business models, and ran to their nearest venture capitalist. While some of the new entrepreneurs had experience in business and economics, the majority were simply people with ideas, and didn't manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.

The dot-com bubble burst on March 10, 2000, when the technology heavy NASDAQ Composite index peaked at 5,048.62 (intra-day peak 5,132.52), more than double its value just a year before. By 2001, the bubble's deflation was running full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit.

Online population forecast

A study conducted by JupiterResearch anticipates that a 38 percent increase in the number of people with online access will mean that, by 2011, 22 percent of the Earth's population will surf the Internet regularly. The report says 1.1 billion people have regular Web access. For the study, JupiterResearch defined online users as people who regularly access the Internet from dedicated Internet-access devices, which exclude cellular telephones.

Opening the network to commerce

The interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNet connections. Some UUCP links still remained connecting to these networks, however, as administrators turned a blind eye to their operation.

During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first commercial dialup ISP in the United States was The World, opened in 1989.

In 1992, Congress allowed commercial activity on NSFNet with the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), permitting NSFNet to interconnect with commercial networks. This caused controversy amongst university users, who were outraged at the idea of noneducational use of their networks. Eventually, it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research.

By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close. In 1994, the NSFNet, now renamed ANSNET (Advanced Networks and Services) and allowing non-profit corporations access, lost its standing as the backbone of the Internet. Both government institutions and competing commercial providers created their own backbones and interconnections. Regional network access points (NAPs) became the primary interconnections between the many networks. The final commercial restrictions ended in May 1995 when the National Science Foundation ended its sponsorship of the Internet backbone.

Internet Engineering Task Force


Requests for Comments (RFCs) started as memoranda addressing the various protocols that facilitate the functioning of the Internet and were previously edited by the late Dr. Postel as part of his IANA functions.

The IETF started in January 1985 as a quarterly meeting of U.S. government funded researchers. Representatives from non-government vendors were invited starting with the fourth IETF meeting in October of that year. In 1992, the Internet Society, a professional membership society, was formed and the IETF was transferred to operation under it as an independent international standards body.

NIC, InterNIC, IANA and ICANN


The first central authority to coordinate the operation of the network was the Network Information Centre (NIC) at Stanford Research Institute (SRI) in Menlo Park, California. In 1972, management of these issues was given to the newly created Internet Assigned Numbers Authority (IANA). In addition to his role as the RFC Editor, Jon Postel worked as the manager of IANA until his death in 1998.

As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by Paul Mockapetris. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.
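The resolver chain that replaced HOSTS.TXT is still visible from ordinary code today. A minimal sketch using the system resolver (example.com is just a placeholder name):

# Resolve a hostname to IP addresses through the system resolver,
# the modern descendant of the old HOSTS.TXT distribution.
import socket

def resolve(name: str) -> list[str]:
    infos = socket.getaddrinfo(name, None)   # consults DNS (or the hosts file)
    return sorted({info[4][0] for info in infos})

print(resolve("example.com"))  # e.g. a list of IPv4/IPv6 addresses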

Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.

In 1998 both IANA and InterNIC were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. The role of operating the DNS system was privatized and opened up to competition, while the central management of name allocations would be awarded on a contract tender basis.

Globalization and 21st century


Since the 1990s, the Internet's governance and organization has been of global importance to commerce. The organizations which hold control of certain technical aspects of the Internet are both the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While formally recognized as the administrators of the network, their roles and their decisions are subject to international scrutiny and objections which limit them. These objections led to ICANN removing itself from its relationship with the University of Southern California in 2000, and finally, in September 2009, gaining autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the Department of Commerce continue until at least 2011. The history of the Internet will now be played out in many ways as a consequence of the ICANN organization.

In its role of forming standards associated with the Internet, the IETF continues to serve as the ad-hoc standards group. It continues to issue Requests for Comments, numbered sequentially from RFC 1 under the ARPANET project. The IETF's precursor was the GADS Task Force, a group of US government-funded researchers in the 1980s. Many of the group's recent developments have been of global necessity, such as the i18n working groups, which develop things like internationalized domain names. The Internet Society has helped to fund the IETF, providing limited oversight.

Use and culture


E-mail and Usenet


E-mail is often called the killer application of the Internet. However, it actually predates the Internet and was a crucial tool in creating it. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is unclear, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.

The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report indicating experimental inter-system e-mail transfers on it shortly after ARPANET's creation. In 1971 Ray Tomlinson created what was to become the standard Internet e-mail address format, using the @ sign to separate user names from host names.
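That user@host format survives unchanged today; parsing it is a one-liner:

# Split an address into its user and host parts, per Tomlinson's @ convention.
user, _, host = "alice@example.com".partition("@")  # example address only
print(user, host)  # -> alice example.com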

A number of protocols were developed to deliver e-mail among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET e-mail system. E-mail could be passed this way between a number of networks, including ARPANET, BITNET and NSFNet, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.

In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNet similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).



TCP/IP becomes worldwide (ISP 4)

CERN, the European Internet, the link to the Pacific and beyond

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system CERNET internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP and the CERN TCP/IP intranets remained isolated from the Internet until 1989.

In 1988 Daniel Karrenberg, from Centrum Wiskunde & Informatica (CWI) in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

At the same time as the rise of internetworking in Europe, ad hoc networking to ARPA and between Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP based network for Australia.

The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNet in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.

Digital divide


While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.

Africa

At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.

In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom (http://www.imul.com), and NSN Network Services of Avon, Colorado (sold in 1997 and now known as Clear Channel Satellite), established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64kbps, serving a Sun host computer and twelve US Robotics dial-up modems.

In 1996 a USAID funded project, the Leland initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Côte d'Ivoire and Benin in 1998.

Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.

There is a wide range of programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.

Asia and Oceania

The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).

In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.

Latin America

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.


Merging the networks and creating the Internet (ISP 3)

TCP/IP

With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman, Gerard LeLann and Louis Pouzin (designer of the CYCLADES network) with important work on this design. The specification of the resulting protocol, RFC 675 (Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal and Carl Sunshine, Network Working Group, December 1974), contains the first attested use of the term internet, as a shorthand for internetworking; later RFCs repeat this use, so the word started out as an adjective rather than the noun it is today.

With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first somewhat crude demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted. On November 22, 1977 a three network demonstration was conducted including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network—all sponsored by DARPA. Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, TCP/IP protocols became the only approved protocol on the ARPANET, replacing the earlier NCP protocol.

ARPANET to several federal wide area networks: MILNET, NSI, and NSFNet


After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE), became heavily involved in Internet research and started development of a successor to ARPANET. In the mid 1980s, all three of these branches developed the first TCP/IP-based wide area networks. NASA developed the NASA Science Network, NSF developed CSNET, and DOE evolved the Energy Sciences Network, or ESNet.

In 1984 NSF developed CSNET exclusively based on TCP/IP. CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. This grew into the NSFNet backbone, established in 1986, and intended to connect and provide access to a number of supercomputing centers established by the NSF.

Transition towards the Internet


The term "internet" was adopted in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as an abbreviation of the term internetworking and the two terms were used interchangeably. In general, an internet was any network using TCP/IP. It was around the time when ARPANET was interlinked with NSFNet in the late 1980s, that the term was used as the name of the network, Internet, being a large and global TCP/IP network.

As interest in widespread networking grew and new applications for it were developed, the Internet's technologies spread throughout the rest of the world. The network-agnostic approach of TCP/IP meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.

Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of e-mail, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple e-mail peering, such as allowing access to FTP sites via UUCP or e-mail.

Finally, the Internet's remaining centralized routing aspects were removed. The EGP routing protocol was replaced by a new protocol, the Border Gateway Protocol (BGP), in order to allow the removal of the NSFNet Internet backbone network. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space, allowing the use of route aggregation to decrease the size of routing tables.
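Route aggregation under CIDR can be illustrated with Python's standard ipaddress module; here four made-up adjacent /24 routes collapse into a single /22 routing-table entry:

# Toy illustration of CIDR route aggregation: four adjacent /24 prefixes
# (documentation addresses, made up for the example) collapse into one /22.
import ipaddress

routes = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]
aggregated = list(ipaddress.collapse_addresses(routes))

print(aggregated)  # [IPv4Network('198.51.100.0/22')] -- one entry, not four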



Networks that led to the Internet (ISP 2)

ARPANET

Promoted to the head of the information processing office at DARPA, Robert Taylor intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles and the Stanford Research Institute at 22:30 hours on October 29, 1969. By December 5, 1969, a 4-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.


ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet Protocols and Systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter Kirstein's research group in the UK, initially at the Institute of Computer Science, London University and later at University College London.

X.25 and public access

Based on ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. While using packet switching, X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.

Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.

The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

UUCP

In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages over a serial line with the nearby University of North Carolina at Chapel Hill. Following the public release of the software, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET, so that, for example, commercial organizations who might provide bug fixes could participate. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.

Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network represented possibly one of the first examples of Internet technology advancing through popular diffusion.

NPL

In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) proposed a national data network based on packet-switching. The proposal was not taken up nationally but by 1970 he had designed and built a packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions.

By 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986.


History of the Internet (ISP)


Before the widespread internetworking that led to the Internet, most communication networks were limited by their nature to allowing communications only between the stations on the local network, and the prevalent computer networking method was based on the central mainframe computer model. Several research programs began to explore and articulate principles of networking between physically separate networks, leading to the development of the packet switching model of digital networking. These research efforts included those of the laboratories of Vinton G. Cerf at Stanford University, Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock at MIT and at UCLA. The research led to the development of several packet-switched networking solutions in the late 1960s and 1970s, including ARPANET, Telenet, and the X.25 protocols. Additionally, public access and hobbyist networking systems grew in popularity, including unix-to-unix copy (UUCP) and FidoNet. These were, however, still disjointed, separate networks, served only by limited gateways between them.

This led to the application of packet switching to develop a protocol for internetworking, in which multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet Protocol Suite, the concept of the network could be separated from its physical implementation. This spread of internetworking began to form the idea of a global network that would be called the Internet, based on standardized protocols officially implemented in 1982. Adoption and interconnection occurred quickly across the advanced telecommunication networks of the western world, and then began to penetrate the rest of the world as the suite became the de facto international standard for the global network. However, the disparity of growth between advanced nations and developing countries led to a digital divide that is still a concern today.

Following the commercialization and introduction of privately run Internet service providers in the 1980s, and the Internet's expansion for popular use in the 1990s, the Internet has had a drastic impact on culture and commerce. This includes the rise of near-instant communication by electronic mail (e-mail), text-based discussion forums, and the World Wide Web. Investor speculation in the new markets provided by these innovations also led to the inflation and subsequent collapse of the dot-com bubble. Despite this, the Internet continues to grow, driven by commerce, ever greater amounts of online information and knowledge, and social networking, known as Web 2.0.

Three terminals and an ARPA



In the 1950s and early 1960s, before the widespread inter-networking that led to the Internet, most communication networks were limited in that they only allowed communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, at Carnegie Mellon University in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Santa Monica, California, on automated theorem proving and artificial intelligence.

A fundamental pioneer in the call for a global network, J.C.R. Licklider, articulated the ideas in his January 1960 paper, Man-Computer Symbiosis.

"A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions."
—J.C.R. Licklider

In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication", one of the first descriptions of a networked future.

In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). The apparent waste of resources this caused made obvious the need for the inter-networking Licklider had identified.

"For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. [...]

I said, it's obvious what to do (But I don't want to do it): If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet."

—Robert W. Taylor, co-writer with Licklider of "The Computer as a Communications Device", in an interview with the New York Times

Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for his successors, such as Lawrence Roberts and Robert Taylor, to further the development of the ARPANET. Licklider later returned to lead the IPTO in 1973 for two years.

Packet switching

At the core of the internetworking problem lay the issue of connecting separate physical networks to form one logical network. During the 1960s, Paul Baran (RAND Corporation) produced a study of survivable networks for the US military. Information transmitted across Baran's network would be divided into what he called 'message-blocks'. Independently, Donald Davies (National Physical Laboratory, UK) proposed and developed a similar network based on what he called packet-switching, the term that would ultimately be adopted. Leonard Kleinrock (MIT) developed the mathematical theory behind this technology. Packet-switching provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.


Packet switching is a rapid store-and-forward networking design that divides messages up into arbitrary packets, with routing decisions made per packet. Early networks used message-switched systems that required rigid routing structures prone to single points of failure. This led Paul Baran's US military-funded research to focus on using message-blocks to include network redundancy, which in turn led to the widespread urban legend that the Internet was designed to resist nuclear attack.
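
The following short Python sketch (illustrative only; the packet size and field layout are our own assumptions, not any historical design) shows the essence of the idea: the message is cut into numbered packets, each packet may take its own route and arrive out of order, and the receiver reassembles the original from the sequence numbers:

    # Illustrative Python sketch of packet switching; the packet size
    # and field layout are assumptions for the example, not any
    # historical design.

    import random

    def packetize(message, size=8):
        # Cut the message into numbered packets of at most `size` characters.
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def network_delivery(packets):
        # Each packet is routed on its own, so arrival order is not guaranteed.
        shuffled = packets[:]
        random.shuffle(shuffled)
        return shuffled

    def reassemble(packets):
        # The receiver restores order from the per-packet sequence numbers.
        return "".join(chunk for _, chunk in sorted(packets))

    message = "packets are routed independently"
    assert reassemble(network_delivery(packetize(message))) == message

Baran's message-blocks and Davies' packets both follow this pattern; the redundancy comes from the fact that any packet can be re-routed around a failed link without tearing down a connection.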


History of Internet Service Providers

Commercial use of the internet began in the early 1990s, with companies such as MindSpring, which started serving a limited number of customers and connections in 1994. Many companies started out small, using homemade software and server facilities in their garages. Users of these services would pay around 20 to 40 dollars per month for a dial-up connection with an average speed of 9.6 kbit/s to 14.4 kbit/s, and these connections were often very unreliable. At the same time, much larger companies such as America Online (AOL) had developed their own networks, using proprietary technology for connecting to the internet. AOL was therefore a separate network from the Internet, and one that no longer exists in that form.

In 1998, the V.90 standard was developed, bringing users connection and download speeds of up to 56 kbit/s. Larger companies began to offer internet services, using advertising to propel the acceptance of the Internet. Internet prices also began to stabilize, with the average price for a dial-up connection being around $19.95 per month for unlimited access.

The battle over broadband access began by the early 2000s. DSL, delivered over existing phone lines, provided a faster, more reliable connection than traditional dial-up access. Cable companies became ISPs by offering broadband services through cable modems. Smaller ISPs, however, did not have access to the cable system, and DSL was too expensive, so many of these smaller companies began using wireless technology to provide broadband access. The use of this wireless technology paved the way for the wireless networks that are in common use today.

As of 2005, the larger ISPs are turning a profit through a combination of wired, wireless, and content services. One major challenge of the near future is free wireless broadband access, possibly provided as a municipal service.

Thursday, August 19, 2010

Wireless Routers

A wireless router is simply a router with a wireless interface that incorporates the functions of a wireless access point. It is generally used to permit access to the Internet or a local computer network without the need for a wired connection. It can work in a cabled LAN (local area network), a wireless-only network, or a combination of both.

Characteristics of a Wireless Router

  1. LAN ports - They work exactly like the ports on a network switch
  2. WAN port - It connects to a WAN (wide area network)
  3. Wireless antennae - The antennae allow the router to link up with other wireless devices for communication
A wireless router could be a regular IP router with an 802.11 interface card and antenna added, or it could be a router specifically designed for wireless use. Most wireless routers also act as firewalls and switches, and provide Network Address Translation (NAT).
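
As a rough idea of what "provide NAT" means, here is a toy Python sketch (the addresses and port numbers are made up for illustration): the router rewrites each outbound connection's private source address to its own public address, and keeps a mapping table so that replies can be translated back to the right LAN host.

    # Toy Python sketch of Network Address Translation; the addresses
    # and port numbers are made up for illustration.

    class NatRouter:
        def __init__(self, public_ip):
            self.public_ip = public_ip
            self.next_port = 40000
            self.table = {}  # public port -> (private ip, private port)

        def outbound(self, private_ip, private_port):
            # Rewrite a LAN source address to the router's public address.
            public_port = self.next_port
            self.next_port += 1
            self.table[public_port] = (private_ip, private_port)
            return self.public_ip, public_port

        def inbound(self, public_port):
            # Translate a reply back to the LAN host that opened the connection.
            return self.table[public_port]

    router = NatRouter("203.0.113.1")
    print(router.outbound("192.168.0.10", 52000))  # -> ('203.0.113.1', 40000)
    print(router.inbound(40000))                   # -> ('192.168.0.10', 52000)

Real routers keep such state per connection and translate in both directions at wire speed; the mapping table is the essential piece.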

If network speed is important to you, be sure to purchase an 802.11a or 802.11g wireless router. If you are comfortable with 11 Mbit/s, you can save money by purchasing an older 802.11b wireless router. If you need to increase the range of your wireless router, consider upgrading it with a better wireless antenna.

In the 802.11 Wi-Fi era, most people refer to wireless routers as "access points". A few of the foremost wireless router manufacturers are Buffalo Technology, D-Link, Linksys, Netgear, 3Com, TP-Link and Belkin.

Wireless router

A wireless router is a device that performs the functions of a router but also includes the functions of a wireless access point. It is commonly used to allow access to the Internet or a computer network without the need for a cabled connection. It can function in a wired LAN (local area network), a wireless-only LAN, or a mixed wired/wireless network. Most current wireless routers have the following characteristics:

  • LAN ports, which function in the same manner as the ports of a network switch
  • A WAN port, to connect to a wide area network, typically one with Internet access. External destinations are accessed using this port. If it is not used, many functions of the router will be bypassed.
  • Wireless antennae. These allow connections from other wireless devices (NICs (network interface cards), wireless repeaters, wireless access points, and wireless bridges, for example), usually using the Wi-Fi standard.



Public Wireless Linux networks

If you don't want to play alone with your wireless equipment, there are lots of people setting up public wireless networks using Linux. I just picked a few of those with interesting info on their pages.
  • WiFiMaps has some maps of public wireless connectivity, and allows you to locate those public wireless Access Points.
  • Linux users in Australia are using the good old Wavelan or the Wavelan IEEE to create point-to-point data links between distant houses. They have set up a mailing list, which is not Wavelan-specific and is very useful.
  • Guerilla Net aims to set up a free network in the Boston area.
  • Consume the net wants to do the same in the London area, and has many mailing lists.
  • Elektrosmog wants fast Internet everywhere, starting in Sweden.
  • Seattle Wireless wants to build a next-generation community wireless network.
  • Personal Telco wants to build alternative communication networks in the Portland area. Their web site contains a mountain of information, such as this Wireless FAQ.
  • NYCwireless wants free public wireless Internet for New York City. They have some mailing lists.
  • BAWUG, the Bay Area (California) Wireless User Group, is pretty active and has some mailing lists.
  • Reseau Citoyen is deploying their wireless network in Brussels, Belgium, and has an extensive amount of information in French.
  • LIVE.COM wants you to enjoy wireless coffee in Mountain View, California.
  • The Shmoo Group has set up a database of public wireless LAN networks.

Wireless LAN Hardware (surveys and reviews)

Various people maintain approximate lists of the hardware that is compatible with Linux:
  • Of course, I list a number of vendors in the various sections of the Howto...
  • Absoval has one of the most exhaustive lists of wireless cards, and lists the compatibility of PrismII cards with their own linux-wlan driver.
  • Hendrik-Jan Heins is now maintaining an updated version of the exhaustive list from Absoval. This is a very difficult task, so don't be surprised if you find minor errors.
  • Personal Telco has a short list of PrismII cards (for which many Linux drivers are available).
  • Seattle Wireless has a pretty long list of cards, but the information on this page is not always correct, so double-check.
  • Kismet Wireless lists cards compatible with Kismet, and the corresponding drivers.
  • Nicolai Langfeldt has a short list of 802.11g cards compatible with Linux.
  • Jacek Pliszka has many tips on how to identify the various cards, especially USB devices.
  • Jason Hecker maintains a list of all Atmel USB devices.
  • Tarmo Järvalt has long lists of cards containing various chipsets, one page per chipset, including some Google Ads.
  • The Linux Wireless wiki has some limited hardware surveys.
  • The NetworkManager team has a complete list of hardware and drivers that work properly with NetworkManager.
Just a few reviews and guides here, not Linux-specific.
  • Most manufacturer web sites are listed in the Howto...
  • Tim Higgins has a huge amount of 802.11 information on his web site (FAQ, articles, reviews, links), which is accurate, detailed and up to date.
  • Practically Networked lists and compares the main Wireless LAN products available on the market. Their list is long, and they have reviewed a lot of products in detail.
  • I've found a really good web page on the different radio products available (now quite outdated).
  • Network Computing has a long and complete article comparing various 802.11 products. Definitely worth a read, even if they don't mention Linux support ;-)
  • PC Magazine/ZDNet has done a short review of 802.11b products. They have tested the latest products from the big names.
  • Toms Networking has frequent detailed reviews of various wireless hardware.
  • Synack Communications has done some testing of the power consumption of some common wireless LANs.

Other web sites of interest (Wireless LAN related)

A random collection of links. I welcome your suggestions...
  • Roger Coudé has developed an impressive package to predict the performance and coverage of a radio system based on the characteristics of the environment.
  • The State University of Ohio has a basic Overview of 802.11.
  • Mark S. Mathews has a nice white paper on 802.11.
  • Intersil (formerly Harris) has a lot of white papers, but they tend to have a very strong bias towards what they are offering.
  • Lots of links about wireless (no longer updated).
  • Ben Gross has more links about wireless (mostly Linux-related, and quite up to date).
  • Jacco Tunnissen has lots of links about wardriving and wireless security.
  • Bernard Aboba has created The Unofficial 802.11 Security Web Page, with many links about security issues in wireless networks and 802.1X.
  • Delbert K. Matlock used to have a very complete page on Linux Bluetooth support, linking to all information available on the net on this subject, but hasn't updated it since 2001.
  • Foo Chun Choong has a web page that links to various Bluetooth research projects and papers.
  • The NTIA maintains a chart of the frequencies in use in the US. Try to find the unlicensed bands ;-)
  • You may also want to check my paper page, especially if you look for either my publications or SWAP information.

Linux and other links

Some personal recommendations on the web...
  • The project I'm currently officially working on for HP is called CoolTown.