Copyright ©1997-2011 Glenn Fleishman except as noted otherwise. All rights reserved. For permission to reprint, contact Glenn Fleishman at glenn at glennf.com. Photo © 2008 Laurence Chen; used with permission.
I've been talking and emailing with Bob Frankston and a couple of other folks since Monday, when I met Bob at the Supernova 2002 conference, about the issues surrounding a network concept known as Quality of Service (abbreviated QoS). (You might recall that Bob was one of the guys behind VisiCalc, a little invention that changed the face of business computing.)
The notion behind QoS is that you need to establish some kind of prioritization for data packets so that more important packets are more likely to make it through an end-to-end QoS implementation (say, from a Wi-Fi-based VoIP phone through your DSL connection to a telco termination to the PSTN, or a video signal from your cable modem over 802.11a to a receiver). The 802.11e task group at the IEEE, for instance, is establishing QoS to ensure that media and voice can be sent over a Wi-Fi connection without jitters or interruptions.
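The prioritization idea can be sketched with a toy strict-priority scheduler. The class names below are loosely modeled on 802.11e-style access categories, but this is an illustration of the concept, not any standard's actual algorithm:

```python
import heapq

# Hypothetical strict-priority queuing sketch: lower number = higher
# priority, so voice drains before bulk data whenever the link is free.
VOICE, VIDEO, BEST_EFFORT = 0, 1, 2

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, priority, packet):
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        """Return the highest-priority waiting packet, or None if idle."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue(BEST_EFFORT, "web-1")
sched.enqueue(VOICE, "rtp-1")
sched.enqueue(BEST_EFFORT, "web-2")
print(sched.dequeue())  # voice jumps the line: rtp-1
```

The cost of this scheme is exactly the complaint in the post: somebody has to decide, for every packet, which class it belongs to.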
The problem with QoS specifically is that it's hard to do over an IP network. TCP/IP and its various parts were designed not to care about the kind of data, only about getting it to the right place. TCP incorporates packet retransmission in case of failure; UDP, which is also part of the TCP/IP universe, doesn't retransmit missing packets, leaving it up to the application that's using UDP to decide whether it needs the missing pieces or not.
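As a minimal illustration of that split, here is UDP's fire-and-forget delivery over loopback in Python; the payload and the use of an OS-assigned port are arbitrary choices for the sketch:

```python
import socket

# UDP hands the application a raw datagram service with no retransmission,
# so a real-time app (voice, video) can simply skip a late or lost packet.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # OS picks a free port
recv.settimeout(2)
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"voice frame 1", addr)  # fire and forget: no ACK, no resend

data, _ = recv.recvfrom(1500)
print(data)                          # b'voice frame 1'
send.close(); recv.close()
```

Had that datagram been dropped, nothing in the protocol would have noticed; a TCP socket in the same position would have retransmitted until acknowledged.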
With a stupid network, as David Isenberg terms it, you just need pools of bandwidth all over the place. You don't need to make one kind of data more or less important than another, as impossible a task as that's generally proven to be. It's one of those "have your cake and eat it, too" problems: if you focus on QoS, you lose many of the best aspects of IP networking; if you don't focus on QoS, then you can't ensure certain classes of data get to where they're going when you want them there.
Here's where I hope I represent Bob, David Reed, and David Isenberg's arguments correctly: it's better to focus on having more bandwidth than more intelligent networks. That is, forget about the fascist task of deciding that certain network traffic is more important than other network traffic. Rather, spend your energy (telcos and chipmakers and network equipment makers) on simply increasing the pool. In a non-prioritized network, more bandwidth means that more different kinds of traffic have an equal chance to get through.
David Reed pointed out, for instance, that a full quality voice signal only needs a few Kbps with modern compression. If you've got 128 Kbps upstream via DSL (as I do), why do you need QoS from yourself to the network? If you've got 11 Mbps via Wi-Fi (raw), why on earth does voice traffic need any help? Bob has said to me several times that there are simple technological fixes, none of them very expensive, that could push out high-speed bandwidth to the home -- fiber, etc., aren't required, just a little electronics of the electrical signal kind.
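Reed's arithmetic is easy to check. The sketch below assumes an 8 Kbps codec ("a few Kbps with modern compression") and a 40-byte IP/UDP/RTP header per 20 ms frame; both figures are illustrative assumptions, not measurements:

```python
# Back-of-envelope check: compressed voice is a sliver of even a modest
# upstream link, before any QoS machinery enters the picture.
upstream_kbps = 128          # DSL upstream from the post
codec_kbps = 8               # assumed modern low-bitrate codec
header_bytes, frame_ms = 40, 20

# Per-packet header overhead, converted to a steady rate in Kbps.
header_kbps = header_bytes * 8 * (1000 / frame_ms) / 1000  # 16 Kbps
total = codec_kbps + header_kbps
print(f"voice uses {total:.0f} of {upstream_kbps} Kbps "
      f"({100 * total / upstream_kbps:.0f}%)")  # 24 of 128 Kbps (19%)
```

Even with headers nearly tripling the codec's own rate, the call fits in under a fifth of the link, which is Reed's point: headroom, not prioritization, does the work.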
(Bob, by the way, knows of what he speaks. He was at Microsoft from 1993 to 1998, and was one of the forces behind spreading home networking as a concept within the company, and thus throughout the industry. He was one of the folks behind HomePNA, which uses plain copper wiring inside the house instead of dedicated Ethernet, and he calls himself -- ruefully -- the father of Network Address Translation (NAT). He regrets pushing the fake, private NAT address technique as a way to expand address space instead of IPv6, which would have offered substantial advantages in mobility and security. But who knew?)
Of course, the current reality is that networking systems are lumpy, and that even when you have ostensibly enough space, poor topology or network design can result in substantial collisions or dropped packets that can enormously reduce network throughput. It doesn't take a lot of dropped packets in a voice call to make a conversation sound choppy.
But Reed and others note that the biggest culprit in that problem isn't on the local end (LAN or DSL/cable-to-Internet) but rather at the point at which you transit your data from the DSL or cable modem pool into an ATM network or other network that takes you onto the Internet. Virtually all broadband is massively oversold for capacity, so until more capacity is brought to broadband termination points, you'll have times of congestion.
More to the point: even if you had QoS from your machine or remote device across your network and over a broadband or digital service connection to the Internet, your traffic still has to contend once it gets there. If you were to pay the telco or cable company for QoS, they couldn't guarantee it to an endpoint, and they would have every motivation to ensure that without you paying for QoS, all your packets are belong to them: you'd certainly see even poorer performance.
Worse, if the phone companies and cable companies get the idea that they should be selling you Voice over IP in the home or business as a network service that can be "guaranteed" and "reliable," you can bet Aunt Nellie that this transforms the nature of the data feed from the Internet that you get. It's no longer just the Internet, but a proprietary network that has different properties.
On the Interesting People list that Dave Farber runs, three posts on this topic appeared this morning. First, Larry Lessig on how prioritization should be fought in favor of neutrality; second, from Karl Auerbach on how the artificial dearth of high bandwidth on the network edges seems to push QoS, probably subversively; and third, from Bob Frankston, whose words I leave you with:
What we need to focus on is the mechanism and the awareness of the concept of connectivity -- the simple commodity out of which it really is trivial to create the current telecommunications services and it is possible to do far more.
Posted by Glennf at December 14, 2002 11:36 AM
I remember way back when, they had this thing called the Porta-Fi. You plugged this speaker into the electric wiring in your house and your GE stereo played music. I never took the thing apart to find out how it worked, but I made the suggestion of superimposing computer signals over the electric lines; I said that all one would need was a filter at the end of the line and then it would work. Shortly thereafter I saw wireless and cable come to fruition. The problem with using the electric lines is that it would give everyone else who might use electric large spikes and high voltages. Well, I know of a guy that is selling lightning protection, and he sells a filter that will give you precisely 117 volts at 60 Hz. Anyway, he is selling the filters, which sort of makes my idea obsolete. I like the cable connection.
Posted by: Frederick D Callis at January 7, 2003 12:24 PM
The problem of QoS transcends "bandwidth"; it needs to consider other properties of a network connection, such as delay and jitter. For example, the May/June 2002 issue of IEEE Internet Computing discusses this in detail, and the emphasis isn't on bandwidth but on delay and jitter. We all know that you only need 56 Kbps of bandwidth to convey an uncompressed G.711 voice signal through a network--this has been done since the 1960s. The problem arises when you start taking that stream, chopping it up into packets, and pushing it through the network being used for everything else. If you can't ensure that those packets will arrive at the destination within an average of 10 ms of each other, your voice quality degrades below that defined by G.114 and G.131 (standards for voice telecommunication established back in the 1950s). This constraint on delivery cannot be solved by bandwidth alone.
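The packetization the comment describes works out roughly as below. The 64 Kbps rate is G.711's nominal bitrate, while the 20 ms frame size and 10 ms jitter target are illustrative assumptions:

```python
# A constant-rate voice stream chopped into packets, plus a jitter buffer
# that trades added delay for tolerance of inter-arrival variation.
codec_bps = 64_000
frame_ms = 20
payload_bytes = codec_bps / 8 * (frame_ms / 1000)   # 160 bytes per packet

# A jitter buffer must hold enough packets to absorb the worst expected
# inter-arrival jitter; time buffered is pure added mouth-to-ear delay.
jitter_ms = 10                                      # target from the comment
buffer_packets = -(-jitter_ms // frame_ms)          # ceiling division
print(payload_bytes, buffer_packets * frame_ms)     # 160.0 bytes, 20 ms added
```

The point the comment is making falls out of the last line: absorbing jitter costs delay, and no amount of extra bandwidth refunds it.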
For example, let's look at Xbox Live and some of the problems with that system. Once you've used Xbox Live to, say, find a game session you want to play in, it really isn't any different from "system link," in that you're no longer talking to Microsoft's servers but to the host's Xbox directly. This presents the greatest limitation of the system, in that the host hopefully has a connection that doesn't impose too much lag.
With all due respect to Mr. Isenberg, Internet gaming, like voice, depends on delay and jitter, not just bandwidth. For example, it is possible to host a game on an Xbox capable of handling a theoretical maximum of 16 players. In this scenario, I would have 15 sessions into my Xbox. With a cable link clocked at a little over 1 Mbps, you would think this would be enough? It would be if the delay weren't so bad. On AT&T, for example, there is one core router, and it's so bad that when I do a tracert to www.microsoft.com, I see delays between 800 and 1200 ms, which kills any ability to host sessions larger than 5 or 6 players.
Posted by: Not Just Bandwidth Mr. Isenberg - John Furrier at December 29, 2002 8:00 AM
QoS comes in different flavors. The biggest use for QoS in my own personal home network is to keep my broadband pipe uncongested. This is the #1 reason for QoS at the user end, and the most beneficial IMHO.
"... it's better to focus on having more bandwidth than more intelligent networks. That is, forget about the fascist task of deciding that certain network traffic is more important than other network traffic. Rather, spend your energy (telcos and chipmakers and network equipment makers) on simply increasing the pool....
David Reed pointed out, for instance, that a full quality voice signal only needs a few Kbps with modern compression. If you've got 128 Kbps upstream via DSL (as I do), why do you need QoS from yourself to the network? If you've got 11 Mbps via Wi-Fi (raw), why on earth does voice traffic need any help? ..."
The problem is that many network protocols are _greedy_. A TCP connection transferring 100 MB of data will consume all available bandwidth (assuming the sending side has capacity). This is implicit in the nature of TCP itself and the way it implements congestion avoidance.
So when you have one or more TCP connections sucking bandwidth, and large buffers on the end of a broadband connection, you get _high latency_.
Without QoS at my linux router connected to my broadband connection, I would have over 2 seconds of round trip latency due to buffering at the cable modem multiplexor/switch.
No matter how big your pipes get, it is still trivial for a small number of wide open TCP connections to suck all the bandwidth available, and thus drive latency much higher.
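That 2-second figure is consistent with simple queueing arithmetic: the delay a full buffer adds is its size divided by the link rate. The rate and buffer size below are assumptions picked to match the comment's scenario, not measured values:

```python
# Queueing delay added by a buffer that greedy TCP flows keep full:
# delay = buffered bits / drain rate of the link behind it.
uplink_bps = 256_000            # assumed cable/DSL upstream rate
buffer_bytes = 64_000           # assumed modem-side buffer, kept full
delay_s = buffer_bytes * 8 / uplink_bps
print(f"{delay_s:.1f} s of queueing delay")   # 2.0 s
```

This is also why the commenter's fix works: shaping traffic at the router keeps the modem's buffer empty, so interactive packets never sit behind two seconds of bulk data.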
QoS allows me to ensure latency is low - no congestion. I can now SSH remotely into my machine over broadband at the same time one roommate is using Gnutella and another is downloading Red Hat ISOs.
QoS is fucking awesome, but perhaps for different reasons than you originally assume in your post (i.e. for QoS across the backbone)
QoS at the user end is immensely valuable - I am reminded of this every day as I tweak my iptables / traffic controller rules for extremely low latency despite high traffic load.
Posted by: coderman at December 18, 2002 10:54 AM
As an economist, I hate to concede that pricing is not a solution. My $.02 is at http://www.corante.com/bottomline/20021201.shtml#15347
Posted by: Arnold Kling at December 16, 2002 6:03 PM
I think there are two issues getting mixed up in the E2E and QoS/bandwidth discussion. One, where are the current bandwidth bottlenecks, and at which points are they easiest to alleviate? Two, what sort of QoS do clients want, and at what price? Let's specifically address the importance of QoS for wireless (802.11e) at the last mile.
With content providers investing heavily in data warehouses, load balanced clusters, acceleration services like Akamai, high-speed connection to the regional pop, etc. there is a frantic rush to eliminate the first mile from the list of bottlenecks.
The middle miles (metro-core-metro) are a hard problem to tackle but are being addressed by VoD and VoIP (e.g. Net2Phone) service providers for their specific applications. This needs both bandwidth and QoS – significant investment for an overall scalable solution will take time.
While there is congestion along the E2E path, there is significant potential congestion at the last hop -- especially for wireless networks. It has been shown that the current distributed coordination function (DCF) mode of operation for 802.11, a variant of ALOHA, does not scale well beyond 45-50% load, causing exponential increases in latency. If the contribution of each mile to latency and jitter is mapped, the 802.11 last hop will be a significant contributor, especially as the number of clients sharing the channel increases (in home and enterprise) and the bandwidth of flows increases (p2p file-sharing and streaming video form a majority of LAN traffic today).
The devil is in the details: the root of the problem with wireless latency is that not only do all users share the same link, but a collision prevents any party from effectively using the channel. Furthermore, 802.11 currently has no method to decide which client to service and for how long (no idea of the flow rate or number of backlogged packets for arbitrary traffic distributions).
The binary exponential back-off scheme leads to unbounded jitter. 802.11e only provides the hooks to implement QoS. It is up to the packet scheduler in the 802.11 access point to provide throughput, delay and jitter bounds and efficiently utilize the channel. A scheduling scheme can deliver QoS from a tighter form of service differentiation to explicit guarantees within the practical channel error rates. As we are talking about a scalable solution, it would be naïve to only consider one voice and one video connection. Solving the last-hop wireless QoS problem is core because it is constrained by the wired connection to the CO and the fundamental problems of wireless medium access control.
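The contention collapse described above can be sketched with a textbook slotted-contention model (not an 802.11 simulation): with n stations each transmitting in a slot with probability p, collisions eat an ever-larger share of slots as n grows:

```python
# Slotted random access: a slot succeeds only if exactly one of the n
# stations transmits; zero transmitters is an idle slot, two or more is
# a collision that wastes the channel for everyone.
def slot_outcomes(n, p):
    idle = (1 - p) ** n
    success = n * p * (1 - p) ** (n - 1)
    collision = 1 - idle - success
    return success, collision

for n in (2, 8, 32):
    s, c = slot_outcomes(n, p=0.1)
    print(f"n={n:2d}  success={s:.2f}  collision={c:.2f}")
```

At n=2 collisions are negligible, but by n=32 (with the same per-station aggressiveness) most slots are wasted, which is the shape of the "doesn't scale past 45-50% load" result cited above.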
Posted by: Rahul Mangharam at December 15, 2002 3:45 PM
I've always presumed those selling QoS planned to sell it to you end to end, particularly for VoIP, Videophone and Video on Demand. Because both endpoints want the QoS, we assume, all it takes is for both of them to have it, and for the first mile providers to also pledge to use QoS based intermediate pipes to reach the other ones. That, in theory, is what you would pay them for.
I agree that it's beyond my own line that we need the QoS, but can you really demonstrate that if we just made lots of inter-network bandwidth, the desire for it would vanish? If some heavy user congests the routers I am going through for their video feed, that's going to cause packets to be lost on my phone call, and I don't like it. I have a Vonage VoIP phone today, and it's pretty good, but I can sometimes tell the slightly longer latency.
Another use for QoS, I think, is sort of an anti-QoS, when polite users (or you yourself) design applications that say, "on this packet, I don't really care about delivery time that much, though I do care about delivery."
That's a good label for non-interactive packets, like E-mail, background file xfer and so on.
Those packets could become a big chunk of the net if a poor man's video on demand were implemented as I describe here
In that case, you want that heavy application to use as much of your bandwidth as it can, but not slow down your interactive applications.
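One concrete form of this self-labeling already exists in IP: a sender can mark its own bulk traffic with a low-priority value in the TOS/DSCP byte, and any router that honors the byte may delay it in favor of interactive traffic. A minimal sketch follows; 0x08 is the classic "maximize throughput" TOS value, and whether any hop actually honors it is entirely up to the networks in the path:

```python
import socket

# The application volunteers that this socket's traffic is delay-tolerant
# by setting the IP TOS byte (works as shown on Linux; platform support
# for IP_TOS varies).
LOW_PRIORITY_TOS = 0x08   # "maximize throughput" / delay-tolerant

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LOW_PRIORITY_TOS)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 8
s.close()
```

This is the polite-user version of QoS: no network operator has to classify anything, because the endpoint that knows the traffic best labels it.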
Posted by: Brad Templeton at December 15, 2002 9:37 AM
As some wise puppy once said, design your network for video and e-mail comes for free. Substitute voice for e-mail and the concept still holds.
Almost all of the problems with broadband services can be traced to ignorance of economics by the people deploying the network and the people regulating the network. The way technology is changing networking, the viable lifetime of a piece of telecom gear seems to be about three years. This comes as quite a shock to the telecom and cable folks who are expecting 10 to 15 year lifetimes at a minimum.
Combine the above with ignorance of last-mile economics (i.e., increasing competition creates increasing per-subscriber costs) and downward pressure on pricing for service, and the natural outgrowth is the discourse on "bandwidth hogs," tiered service based on usage, massive overselling of capacity, and yes, QoS.
There is no easy fix. In order to make the transition to reasonable first-mile infrastructure, it will be necessary to strand the current investments. For example, fiber to the home is one of the better options for high-speed, future-friendly networks. Installing such a network will make competition possible on video services such as those provided by the incumbent cable company. The incumbent cable company will be faced with three options. First is to try to shut down the fiber competitor through legal action. Second is to try to compete with independent infrastructure (see last-mile economics above). And third is to abandon its infrastructure and use the fiber.
The first option is most likely because it is the cheapest for the cable company and preserves most revenue. The second option guarantees the eventual bankruptcy of one of the two providers or the withdrawal of one of the providers. The third option is probably the least painful financially because the infrastructure can be written off and the cost of shared infrastructure is usually lower.
The story is even worse for ILECS and their central office/copper plant infrastructure. The infrastructure is very expensive to maintain and relatively fragile. It takes a great deal of skill and knowledge to keep it running. As customers drop second lines and in some cases first lines for services such as cellular and voice over IP, the ILEC is forced to strand their infrastructure one line at a time. Their investment in central office switches that they were forced to make over the past five years because of the growth of dial-up Internet access is now becoming increasingly idle with the drop in dial-up usage.
So, at the end you have two very powerful industries trying to recoup some of their investment through any means possible (i.e., tariff increases, lobbying, stonewalling consumers) vs. a bunch of upstarts who are trying to make the cost of networking as cheap as possible. In my opinion, this transition isn't going to be a gentle one, because it's damn near impossible to make a living selling just bits with the current level of tinkering necessary to keep networks running. It will take some form of legislative solution to deal with the equipment-obsolescence issue (i.e., accelerated tax write-offs) and the first-mile/natural-monopoly issues. It will be necessary to develop the appropriate technical solutions so that a network covering a small city takes no more than two people to manage; four if you include the help desk. Only then will the economics allow the growth of high-speed networking from the first mile on outward.
Posted by: Eric S. Johansson at December 15, 2002 8:46 AM
Forced scarcity. There's no economic motive to add bandwidth, as that would reduce the price you could charge. Of course, we know that eventually Gilder will be proved right in at least one respect: all that optical fiber will change the nature of telecom, but it'll take some brinksmanship before they start lighting up a lot more of it.
If they were smart, the telecoms and others, they'd realize that they should be treating it as a fungible commodity, not as a scarce resource differentiated by different network characteristics.
Posted by: Glenn Fleishman at December 14, 2002 11:57 AM
"Virtually all broadband is massively oversold for capacity, ..."
There are supposedly large quantities of dark fibre. The price of fibre-termination equipment and routers should be dropping with Moore's law. So why isn't wholesale broadband following Moore's law in both increased capacity and falling price?
To illustrate: 10 years ago I switched from 14.4 Kbps to 28.8 Kbps for my ten-pounds-a-month dial-up Internet account. So now I should be switching from 1.4 Mbps to 2.8 Mbps for the same ten pounds per month.
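A quick check of that arithmetic, assuming a Moore's-law pace of one doubling every 18 months applied over the ten years:

```python
# Ten years of capacity doublings starting from a 28.8 Kbps modem line.
start_kbps = 28.8
months, doubling_months = 120, 18
expected_kbps = start_kbps * 2 ** (months / doubling_months)
print(f"{expected_kbps / 1000:.1f} Mbps")   # 2.9 Mbps
```

About 100x in a decade, which lands almost exactly on the commenter's 2.8 Mbps figure; the gap between that and what the ten pounds actually buys is his point.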
And as one of those people said, the easy answer to QoS is bigger pipes. If I had a solid 1Mbps upload speed, then a 32 or even 64kbps voice channel within that is going to be trivial.
Posted by: Julian Bond at December 14, 2002 11:51 AM