Network neutrality leaped back into the headlines last month, when FCC commissioners held a public hearing at Harvard University to examine whether the commission should institute rules to regulate the way Internet service providers (ISPs) manage traffic on their networks. The panel heard from executives representing the two largest ISPs in the Northeast, Comcast and Verizon, along with Internet pundits, politicians and academics.
The hearing coincided with growing public awareness that Comcast and dozens of other ISPs (most of them cable TV companies) commonly throttle certain forms of traffic to keep their networks from becoming congested. These methods typically target peer-to-peer traffic from BitTorrent, a protocol popular for sharing music and video files, which the ISPs say generates a third or more of their traffic.
Accordingly, BitTorrent has become the debate’s poster child, pushing much of the net neutrality debate into endless arguments over free speech, copyright law and what—if anything—should constitute “legal use” of the Net.
But there’s another side to this debate, one that gets far too little attention. In their attempts to limit BitTorrent and other peer-to-peer file sharing traffic, some ISPs have unwittingly caused collateral damage to other, unrelated businesses and their users. For example, some Web conferencing providers have seen their services slow to a crawl in some regions of the world because of poorly executed traffic management policies. And because ISPs often deny using such practices, diagnosing the problem and restoring normal service can be exceedingly difficult.
My company, Glance Networks, has firsthand experience. Glance provides a simple desktop screen sharing service that thousands of businesses use to show online presentations and web demos to people and businesses worldwide. When a Glance customer hosts a session, bursts of high-speed data are sent each time the person’s screen content changes. The Glance service forwards these data streams to all guests in the session, so they can see what the host sees. The streams need to flow quickly, so everyone’s view stays in sync.
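For readers who want a concrete picture, the sketch below shows the general shape of such a fan-out relay: one host connection produces bursts, and the relay copies each burst to every guest. It is purely illustrative Python, not Glance’s actual code; the port, the message framing, and the role handshake are all invented for the example.

```python
import asyncio

# Hypothetical fan-out relay: each burst of screen-update data from the
# host is copied to every guest so all views stay in sync. Illustrative
# sketch only -- not Glance's actual protocol or implementation.

guests = set()  # StreamWriters for connected guests

async def handle_host(reader):
    # Each read corresponds to a burst of screen-update data from the host.
    while chunk := await reader.read(64 * 1024):
        stale = set()
        for w in guests:
            try:
                w.write(chunk)           # forward the burst to this guest
                await w.drain()          # apply per-guest backpressure
            except ConnectionError:
                stale.add(w)
        guests.difference_update(stale)  # drop guests that disconnected

async def handle_client(reader, writer):
    role = await reader.readline()       # first line declares "host" or "guest"
    if role.strip() == b"host":
        await handle_host(reader)
    else:
        guests.add(writer)
        try:
            await reader.read()          # block until the guest disconnects
        finally:
            guests.discard(writer)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

The key property is the one named in the paragraph above: the bursts must reach every guest quickly, which is exactly what makes this kind of traffic look “aggressive” to a rate-based traffic shaper.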
One day a few years ago, our support line got a spate of calls from customers complaining that our service had suddenly slowed to a crawl. We soon realized the problem was localized to Canada, where nearly everyone gets their Internet service through one of just two ISPs. Sure enough, posts on blogs indicated that both of these ISPs had secretly deployed “traffic shaping” methods to beat back the flow of BitTorrent traffic. But the criteria their methods used to identify those streams were blunt instruments that slowed not only BitTorrent but many other high-speed data streams sent by their customers’ computers.
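We never learned exactly what criteria those shapers applied. But a heuristic as blunt as “any flow uploading faster than X must be file sharing” would produce exactly the collateral damage we saw. The hypothetical classifier below (the threshold, flow names, and numbers are all invented) shows how such a rule sweeps up legitimate screen-sharing bursts right along with BitTorrent uploads.

```python
from collections import defaultdict
from dataclasses import dataclass

# A deliberately blunt, rate-based shaping heuristic. The ISPs' real
# criteria were never disclosed; this is a hypothetical illustration of
# why rate alone cannot distinguish P2P uploads from other fast traffic.

RATE_THRESHOLD = 100_000  # bytes/sec upstream (assumed cutoff)

@dataclass
class FlowStats:
    bytes_sent: int = 0
    seconds: int = 1

flows = defaultdict(FlowStats)

def record_packet(flow_id, size):
    flows[flow_id].bytes_sent += size

def should_throttle(flow_id):
    f = flows[flow_id]
    # Blunt rule: anything sending faster than the threshold is treated
    # as file sharing -- regardless of what the traffic actually is.
    return f.bytes_sent / f.seconds > RATE_THRESHOLD

record_packet("bittorrent-upload", 500_000)    # genuine P2P traffic
record_packet("screen-share-burst", 300_000)   # legitimate screen update
for fid in flows:
    print(fid, "throttled" if should_throttle(fid) else "ok")
```

Run it and both flows get throttled. That, in miniature, is what happened to our customers.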
This experience illustrates why additional rules need to be imposed on ISPs. While we were working the problem, customers were understandably stuck wondering who was telling them the truth. Their ISP was saying “all is well” and that “nothing has changed”, both of which turned out to be wrong. But how were they to know? Their other Web traffic flowed normally. From their perspective, only our service had slowed.
Luckily, we quickly discovered that by changing a few parameters in our service, we could restore normal performance to our Canadian customers. But the Canadian ISPs were no help. For over a year, they denied even using traffic shaping, let alone disclosed what criteria they used to single out “bad” traffic. We were forced to find our own workaround by trial and error.
And there’s the rub.
Imagine for a moment that regional phone companies were allowed to “manage their congestion” by implementing arbitrary methods that block a subset of phone calls on their network. People whose calls got blocked would be at a loss to know why some calls failed to connect, while others continued to go through normally. Such behavior would never be tolerated in our telephony market. Yet we allow ISPs to “manage their congestion” this way today.
In a truly open marketplace, we could expect market forces to drive bad ISPs out of the market. But most ISPs are monopolies, for good reason. Their infrastructure costs are enormous. The right to have a monopoly, however, must always be balanced by regulations that prevent abuse of that right.
Businesses and markets cannot thrive when ISPs secretly delay or discard a subset of their traffic. Networks need to be free of secret, arbitrary traffic management policies. Chronic congestion on its network does not entitle an ISP to selectively block arbitrary classes of traffic.
Some ISPs argue that the solution is to let them offer “tiered” services that guarantee some classes of traffic flow unimpeded, while other classes may be delayed or discarded altogether. I disagree. Tiering treats the symptom rather than the disease: the net works just fine when traffic flows smoothly through the pipes.
The real problem is chronic congestion, which occurs when ISPs sell more capacity than they can deliver. BitTorrent is the ISPs’ sore thumb today (other high-traffic services will be tomorrow) because these ISPs sold their customers flat-rate plans limited by speed, not volume. Amending those plans to include a volume-related surcharge would quickly squelch excess traffic or motivate customers to switch to ISPs with more capacity. Either way, congestion is eliminated.
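To make the arithmetic concrete, consider a hypothetical metered plan; every figure here is invented for illustration only.

```python
# Hypothetical metered pricing: a flat-rate speed tier amended with a
# per-gigabyte surcharge above an included allowance. All numbers are
# invented for illustration.

FLAT_RATE = 40.00        # $/month for the speed tier
INCLUDED_GB = 100        # volume included in the flat rate
SURCHARGE_PER_GB = 0.50  # $/GB beyond the allowance

def monthly_bill(usage_gb):
    overage = max(0.0, usage_gb - INCLUDED_GB)
    return FLAT_RATE + overage * SURCHARGE_PER_GB

print(monthly_bill(60))   # typical user: $40.00, unchanged
print(monthly_bill(400))  # heavy P2P user: $40 + 300 * $0.50 = $190.00
```

Under such a plan, the typical customer pays exactly what they pay today, while the heaviest users either pay for the capacity they consume or curb their usage. Either response relieves the congestion without blocking anyone’s traffic.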
Some readers may object that this practice itself violates the principle of net neutrality. But the net neutrality debate should not be hijacked by people advocating some illusory “right” to flat-rate pricing plans. Businesses on the Web have paid graduated rates according to their consumption for years, metered by the gigabyte or by peak burst rates.
Meanwhile, FCC commissioners need to understand that arbitrary, secret traffic management policies have already harmed businesses unrelated to the peer-to-peer file sharing applications those policies target. These are not hypothetical scenarios; the threat to legitimate Web services that businesses and consumers depend upon daily is real and ongoing.
The FCC must impose rules that prevent ISPs from implementing such policies. ISPs that have oversold capacity must respond with improved pricing plans, not traffic blocking policies. Letting the status quo continue imperils legitimate users of the global information infrastructure on which so many of us depend.