Copyright ©1997-2011 Glenn Fleishman except as noted otherwise. All rights reserved. For permission to reprint, contact Glenn Fleishman at glenn at glennf.com. Photo © 2008 Laurence Chen; used with permission.
Turning technology from mumbo-jumbo into rich tasty gumbo
Since Monday, when I met Bob Frankston at the Supernova 2002 conference, I’ve been talking and emailing with him and a couple of other folks about the issues surrounding a network concept known as Quality of Service (abbreviated QoS). (You might recall that Bob was one of the guys behind VisiCalc, a little invention that changed the face of business computing.)
The notion behind QoS is that you need to establish some kind of prioritization for data packets so that more important packets are more likely to make it through an end-to-end QoS implementation (say from a Wi-Fi-based VOIP phone through your DSL connection to a telco termination to the PSTN, or a video signal from your cable modem over 802.11a to a receiver). The 802.11e task group at the IEEE, for instance, is establishing QoS to ensure that media and voice can be sent over a Wi-Fi connection without jitters or interruptions.
The problem with QoS specifically is that it’s hard to do over an IP network. TCP/IP and its various parts were designed not to care about the kind of data, only about getting it to the right place. TCP incorporates packet retransmission in case of failure; UDP, which is also part of the TCP/IP universe, doesn’t retransmit missing packets, leaving it up to the application that’s using UDP to decide whether it needs the missing pieces or not.
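To make the TCP/UDP distinction concrete, here’s a minimal sketch (mine, not from any of the folks quoted here) of what an application sees over UDP: the transport hands over whatever arrives and never retransmits, so detecting a gap and deciding what to do about it is entirely the application’s job. The sequence numbers and the simulated loss are invented for illustration.

```python
import socket

# Receiver: a plain UDP socket bound to a free local port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # OS picks the port
addr = recv.getsockname()

# Sender: fires off numbered datagrams; "loses" one by never sending it.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):
    if seq == 2:                     # simulate a dropped packet
        continue
    send.sendto(seq.to_bytes(4, "big"), addr)

recv.settimeout(0.5)
got = []
try:
    while True:
        data, _ = recv.recvfrom(4)
        got.append(int.from_bytes(data, "big"))
except socket.timeout:
    pass

# UDP itself says nothing about the hole; the application has to notice.
# A voice app would just play on past it; a file transfer built on UDP
# would have to request packet 2 again itself.
missing = sorted(set(range(5)) - set(got))
print(got, "missing:", missing)
```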
With a stupid network, as David Isenberg terms it, you just need pools of bandwidth all over the place. You don’t need to make one kind of data more or less important than another, as impossible a task as that’s generally proven to be. It’s one of those “have your cake and eat it, too” problems: if you focus on QoS, you lose many of the best aspects of IP networking; if you don’t focus on QoS, then you can’t ensure certain classes of data get to where they’re going when you want them there.
Here’s where I hope I represent Bob, David Reed, and David Isenberg’s arguments correctly: it’s better to focus on having more bandwidth than more intelligent networks. That is, forget about the fascist task of deciding that certain network traffic is more important than other network traffic. Rather, spend your energy (telcos and chipmakers and network equipment makers) on simply increasing the pool. In a non-prioritized network, more bandwidth means that more different kinds of traffic have an equal chance to get through.
David Reed pointed out, for instance, that a full quality voice signal only needs a few Kbps with modern compression. If you’ve got 128 Kbps upstream via DSL (as I do), why do you need QoS from yourself to the network? If you’ve got 11 Mbps via Wi-Fi (raw), why on earth does voice traffic need any help? Bob has said to me several times that there are simple technological fixes, none of them very expensive, that could push out high-speed bandwidth to the home — fiber, etc., aren’t required, just a little electronics of the electrical signal kind.
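The arithmetic behind that argument is easy to check. A quick sketch, where the per-call rate is my own assumption (roughly GSM full-rate compression), not a figure from Reed:

```python
# Back-of-the-envelope math for the bandwidth-over-QoS argument above.
voice_kbps = 13          # assumed per-call rate with modern compression
dsl_up_kbps = 128        # the DSL upstream mentioned in the post
wifi_kbps = 11_000       # raw 802.11b signaling rate

calls_on_dsl = dsl_up_kbps // voice_kbps    # concurrent calls upstream
calls_on_wifi = wifi_kbps // voice_kbps     # calls in raw Wi-Fi bandwidth
print(calls_on_dsl, calls_on_wifi)
```

Even ignoring real-world 802.11b overhead, a single voice call is a rounding error against the raw pipe, which is the point: headroom, not prioritization.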
(Bob, by the way, knows of what he speaks. He was at Microsoft from 1993 to 1998, and was one of the forces behind spreading home networking as a concept within the company, and thus throughout the industry. He was one of the folks behind HomePNA, which uses plain copper wiring inside the house instead of dedicated Ethernet, and he calls himself — ruefully — the father of Network Address Translation (NAT). He regrets pushing the fake, private NAT address technique as a way to expand address space instead of IPv6, which would have offered substantial advantages in mobility and security. But who knew?)
Of course, the current reality is that networking systems are lumpy, and that even when you have ostensibly enough space, poor topology or network design can result in substantial collisions or dropped packets that can enormously reduce network throughput. It doesn’t take a lot of dropped packets in a voice call to make a conversation sound choppy.
But Reed and others note that the biggest culprit in that problem isn’t on the local end (LAN or DSL/cable-to-Internet) but rather at the point at which you transit your data from the DSL or cable modem pool into an ATM network or other network that takes you onto the Internet. Virtually all broadband is massively oversold for capacity, so until more capacity is brought to broadband termination points, you’ll have times of congestion.
More to the point: even if you had QoS from your machine or remote device across your network and over a broadband or digital service connection to the Internet, your traffic still has to contend once it gets there. If you were to pay the telco or cable company for QoS, they can’t guarantee it to an endpoint, and they would have every motivation to ensure that, without you paying for QoS, all of your packets were belong to them: you’d certainly see even poorer performance.
Worse, if the phone companies and cable companies get the idea that they should be selling you Voice over IP in the home or business as a network service that can be “guaranteed” and “reliable,” you can bet Aunt Nellie that this transforms the nature of the data feed from the Internet that you get. It’s no longer just the Internet, but a proprietary network that has different properties.
On the Interesting People list that Dave Farber runs, three posts on this topic appeared this morning. First, Larry Lessig on how prioritization should be fought in favor of neutrality; second, from Karl Auerbach on how the artificial dearth of high bandwidth on the network edges seems to push QoS, probably subversively; and third, from Bob Frankston, whose words I leave you with:
What we need to focus on is the mechanism and the awareness of the concept of connectivity — the simple commodity out of which it really is trivial to create the current telecommunications services and it is possible to do far more.
The Supernova 2002 conference seemed to have power strips everywhere, which meant that everyone with a laptop (which was everyone) had them open and on all the time. Because of the Wi-Fi network, everyone was able to work, blog, and surf constantly. On the other hand, we were in a vault, so cell phones either barely worked or people were good about shutting off the ringer (more likely, they just didn’t work).
Perhaps having power everywhere is not as good an idea as I thought. People had an entirely different relationship to the speakers, though I’m divided on whether that was better, worse, or just the same as at other events. The small size of the event contributed, plus an array of domain experts and opinionated people in the audience (latter category including myself).
Talking to Howard Rheingold about my previous post on how people might get sick of rating things, he said that his thinking is that you’ll opt in to tools that will monitor very specific aspects of your behavior and report on it as part of your opinion on events or individuals.
For instance, he has this example of how people could carpool through a shared rating system: you’d use some device to look where people needed rides on your way into work, and anyone who other drivers had ranked highly as good passengers you might pick up. Likewise, people might refuse to accept rides from low-ranked drivers.
In my initial understanding of this, I was thinking that you would have to take intentional steps to rank the driver or passenger, and that this starts to wear on people. Howard said that in his real vision, systems with permission would monitor this interaction and then report your actions implicitly as opposed to requiring action on your part.
This seems much more likely as an emergent property. For instance, if I control my financial networks, I could let my agent software comb Visa records and make relationships: Glenn likes sushi at these restaurants because he goes there once a week.
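As a toy sketch of that implicit-rating idea (the merchants, dates, and threshold below are all invented): the agent never asks me to rate anything; it just counts recurring charges and treats weekly repetition as an endorsement.

```python
from collections import Counter

# Hypothetical card records an agent might be permitted to comb.
charges = [
    ("2002-11-01", "Saito's Sushi"), ("2002-11-08", "Saito's Sushi"),
    ("2002-11-09", "Pagliacci Pizza"), ("2002-11-15", "Saito's Sushi"),
    ("2002-11-22", "Saito's Sushi"), ("2002-11-23", "Thai Palace"),
]

visits = Counter(merchant for _, merchant in charges)
# A visit nearly every week of the month counts as an implicit "like";
# the cutoff of 4 is arbitrary, chosen only for this example.
likes = [m for m, n in visits.items() if n >= 4]
print(likes)
```

No explicit action was required, which is exactly what makes the emergent version more plausible than ratings fatigue.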
Dan Gillmor and I were just talking about how you’re getting the Rashomon effect in the audience here, as many bloggers are spinning their own versions of what they’re hearing. Even funnier is a very small point: Dan noted that every blog entry mentioning the title of his talk, Journalism 3 point something (3.01b7? 3.1b6?) had a different version number. It was up on the screen for long enough to transcribe, but each blogger seems to have typed in something different.
I’ve been trying to listen rather than blog constantly at Supernova 2002 as it’s hard to type, process, and listen simultaneously. I notice that some serial bloggers in the audience are posting very small chunks. Part of this is because many speakers are using fragments of stump speeches to put us in their mindset and we immediately move into fast-changing questions and histrionics.
I had a long talk with a fellow who used to publish a magazine in which each issue was devoted to a single 7,000-word feature about one person. He found that taking notes was a problem, and that after writing literally 1,000 mass-market (GQ, Esquire, Rolling Stone, etc.) pieces over the course of a decade, he had to relearn how to listen; using a recording device and letting someone talk was the best approach.
The myth of multitasking is that we can accomplish several things with full attention at the same time. Rather, multitasking is an evolutionary outgrowth that allows us to carry out one activity while keeping a nominal awareness of others, an awareness we have stretched into believing is full attention. Evolutionarily, this kept us from being eaten, and thus we survived and became who we are.
My wife hates one feature of my brain in this regard: I can stop listening when I’m distracted but still have heard what she says. If she asks what she said, I can repeat it to her, but my brain doesn’t process it until I’ve resaid it. This kills her — and rightly so — as I can claim I heard her. Nonetheless, I should be listening with my entire brain.
Here at Supernova 2002, listening to Smart Mobs author Howard Rheingold. (Good or bad, I heard him on Tech Nation last night covering a lot of similar ground.)
Rheingold’s vision of the future may require that people constantly express opinions in order to achieve a status among their peers or anonymous strangers. Won’t people get tired of this? Look at reviews on Amazon.com. Look at Slashdot posts. Too much content overwhelms the ability to see anything but an abstract score, even when the best writing rises to the top. This means that your hard-written prose only gets read by a relatively few people, which eventually can make you cynical about contributing.
Pockets of community have to develop, but Rheingold talks about finding out the value of many resources through constant peer contribution, and I’m doubtful that the most likely to contribute will maintain their participation over time.
I broke down and switched my cell phone service from Verizon to Cingular last week. Why? Many, many reasons, including rollover minutes (instead of expire-at-the-end-of-the-month minutes), support for the Sony Ericsson T68i phone (more on that in a second), and GSM/GPRS-based service. The new plan with more features, minutes, etc., will cost me substantially less than my Verizon plan, too.
But here’s where it gets interesting. I signed up for just the $4/month Wireless Internet package, which adds GSM data service to my phone. The T68i handles Bluetooth, as does my iBook with the addition of a $50 D-Link USB adapter. Because Mac OS X 10.2 Jaguar handles Bluetooth with ease as another networking flavor, I can make data calls from my Mac over the phone.
True, it’s at just 9,600 bps — I could pay a lot more for GPRS and get better downstream speeds — but that’s good enough for email and text Web access. The funny part is this: I’m in a hotel in Palo Alto right now avoiding its $2-per-call charges, or whatever it charges (there’s no card explaining the rates here at Rickey’s, a Hyatt property, but I paid $2 per call from my Westin room last week in Santa Clara).
The chain of connections is: iBook -> Bluetooth adapter -> Bluetooth phone -> GSM network -> modem somewhere in a GSM equipment office (yes, an actual modem!) -> Earthlink network. In the case of GPRS, the cell company is the ISP, too; with GSM data, you dial a modem by proxy. Whatever. It works, and, so far as I can tell, the time is coming out of my 3,000 minutes a month of weekend/evening time.