Ethernet, IP, And Caching....A Bit Of History, Background, And Insights
Since most of you are probably too young to remember a time before Ethernet and IP, many may be unaware of just how well much of that early work was designed, and what that means for the cost of running an efficient network.
The team that invented Ethernet, and in particular Mr. Metcalfe and Mr. Boggs, really did some impressive work. Some of their initial solutions are still serving us well today.
One of the most important aspects of the original work was a simple and elegant solution to the bandwidth "hog". The core challenge of any shared communications facility is the sharing itself: even in the earliest implementations, a single node monopolizing all the bandwidth was a real risk. Preventing this is part of the basic design.
* Transmission is done in packets, with a maximum packet size defined.
* A minimum wait time is established between packets.
* Each packet transmission is done the same way, giving every node an equal opportunity to be the next to transmit based on random timers.
* When two nodes try to transmit at the same time, a collision is detected and both start a new random timer before trying to transmit again.
* This approach can deliver an efficiency of roughly 95% of the available bandwidth.
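The collision-and-retry rule in the list above can be sketched in a few lines. This is a minimal illustration of the binary exponential backoff used by classic CSMA/CD Ethernet, not any particular implementation; the 51.2 µs slot time is the classic 10 Mbps Ethernet value, and the function name is my own.

```python
import random

def backoff_delay(attempt, slot_time=51.2e-6, max_exponent=10):
    """Binary exponential backoff, as in classic CSMA/CD Ethernet.

    After the n-th collision, a node waits a random number of slot
    times drawn uniformly from [0, 2^min(n, 10) - 1], so repeated
    collisions spread the retransmissions out further and further.
    """
    slots = random.randint(0, 2 ** min(attempt, max_exponent) - 1)
    return slots * slot_time
```

Because both colliding nodes draw their timers independently, the odds that they collide again shrink with every attempt, which is what keeps any one node from monopolizing the wire.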
This is really a very clever approach to what could have been a significant issue. The traditional approach to communications at that time was like a train. Data being sent was put into a long and continuous stream. In voice, this was for the duration of the connection, while in data it was in large blocks. With Ethernet, the train was replaced with individual cars merging into an expressway. This was a major departure from earlier approaches.
Now all of this occurs at layers 1 and 2, independent of any routing or any access to resources off the local area network. By design, the issue of one node hogging the network for its own particular needs is addressed. So if someone uploading photos to their Web album or downloading a video feed from CNN seems to be causing performance issues on the network, it might be a good idea to check for a bad NIC in one of the nodes as a first step, before leasing more bandwidth.
The earliest routers relied heavily on the inherent sharing of Ethernet, but it didn't take long before the smaller access circuits became an issue anyway. This was back when an access line off the router might be dialup, a high speed DDS link operating at 56 Kbps, or something in between. One of the very first enhancements was to move away from simple first-in, first-out (FIFO) queuing in favor of priority queuing. Using the IP header in packets, a simple set of high, medium, and low output buffers was established. Although different vendors implemented different techniques, all used a common approach: the high priority queue is checked most frequently for packets waiting to be transmitted onto the access link, the medium queue less frequently, and the low priority queue least of all. Packets are assigned to a queue by source, destination, protocol, or some combination of these. All advanced priority queuing is still based on this initial approach.
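The scheme just described can be sketched as follows. The 4/2/1 visit weights and the protocol-based classification rules are illustrative assumptions of mine, not taken from any particular vendor's implementation.

```python
from collections import deque

class PriorityQueuing:
    """Sketch of high/medium/low priority queuing: the scheduler
    visits the high queue most often, the medium queue less often,
    and the low queue least often."""

    # Illustrative visit weights -- one drain pass services up to
    # 4 high, 2 medium, and 1 low packet.
    WEIGHTS = {"high": 4, "medium": 2, "low": 1}

    def __init__(self):
        self.queues = {prio: deque() for prio in self.WEIGHTS}

    def classify(self, packet):
        # Stand-in for real classification by source, destination,
        # or protocol; these example rules are arbitrary.
        if packet.get("protocol") == "telnet":
            return "high"
        if packet.get("protocol") == "smtp":
            return "medium"
        return "low"

    def enqueue(self, packet):
        self.queues[self.classify(packet)].append(packet)

    def drain(self):
        """Return packets in transmit order for one scheduler pass."""
        sent = []
        for prio, weight in self.WEIGHTS.items():
            for _ in range(weight):
                if self.queues[prio]:
                    sent.append(self.queues[prio].popleft())
        return sent
```

Note that low-priority traffic still gets serviced on every pass; it just gets a smaller share of the access link, which is exactly the hog-prevention property the article describes.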
So by design, we have a basic way of dealing with bandwidth hogs both on the local area network and in the wide area network. This goes all the way back to the earliest days of IP, back before the Internet was a common resource. The performance issues are still the same, and in most cases bandwidth is not the culprit thanks to that original work.
Of course, there are times when bandwidth is an issue. For that, we can often find a solution with roots even older than Ethernet. It is an interesting truth that in most instances a very few sites out of the many potential locations account for most Web visits. In an office, coworkers all end up visiting the same couple of news sites, the same couple of portals, the same video feeds and flash presentations. While day to day these might change, on any given day it's a small set of locations. On the networking side of brokerage houses back in the early 80s, there was a similar situation. Brokers watched the same stocks. There was a set that everyone watched, then a smaller set unique to a firm, an even smaller set for an office, and finally a couple unique to each broker. In all, about 100 stocks on any given day constituted around 90% of the traffic. Lacking even the speed of today's dialup service, they had to support this demand. The approach was simple - send changes on these stocks as they occurred to the local controller so that requests never left that location.
Today we call this caching, and it is handled via a network appliance. For about the price of one month's DS3 lease, a cache can be installed so that the bulk of the traffic stays off the leased service. As an added benefit, not even a DS3 will outperform a LAN connection - the latency alone prevents this. The result is lower costs and improved performance. The idea has even moved into the traditional broadcast arena - TiVo is one of the best known brands in the North American market. Different objective, but the same basic approach. Caching won't resolve problems with many office applications, but it can be a real answer for issues regarding Web sites.
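The caching idea - serve repeat requests locally so they never cross the leased WAN link - can be reduced to a few lines. This is a bare sketch of the principle, not a real caching appliance; `fetch_remote` is a hypothetical stand-in for the expensive wide-area fetch, and real caches also handle expiry and freshness.

```python
class WebCache:
    """Minimal sketch of a Web cache: only misses cross the
    WAN link; repeat requests are served from local storage."""

    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote  # the expensive WAN fetch
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1        # served at LAN speed, no WAN cost
        else:
            self.misses += 1      # only this path uses the leased link
            self.store[url] = self.fetch_remote(url)
        return self.store[url]
```

With an office full of people hitting the same handful of sites, the hit rate climbs quickly, and that is the whole economic argument: each hit is traffic that never touches the DS3.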
About the Author: Michael is the owner of FreedomFire Communications....including DS3-Bandwidth.com and Business-VoIP-Solution.com. Michael also authors Broadband Nation where you're always welcome to drop in and catch up on the latest BroadBand news, tips, insights, and ramblings for the masses.