I came across this interesting article on Slashdot a while ago (warning: the link contains technobabble, but I'll do my best to explain it below).
One thing that has become a big issue in the tech world over the past year or so is "net neutrality" – the idea that all traffic on the Internet should be treated equally. It has become particularly acute over the last few months, with Comcast's practice of filtering peer-to-peer traffic (like BitTorrent) being perhaps the best-known recent incident. Those who support ISPs' right to control what goes through their networks argue that the explosion of services like BitTorrent, streaming video and HD content in recent years means today's ISPs are unable to meet capacity demands, and thus should use whatever means necessary to keep bandwidth needs from spiralling out of control. Some have even suggested that ISPs should shape traffic in order to prevent people from illegally swapping files over P2P.
There is some element of truth to this argument, if only because broadband infrastructure in the US is in a woeful state – the FCC's definition of "broadband" covers any data service above 200 kbps. As a point of reference, I had a 256 kbps ADSL line in Singapore…nine years ago. There are plans afoot to raise that floor to 768 kbps, but this still doesn't resolve the major infrastructure and cost issues with getting quality broadband access in the US – people in Asia and Europe are much better off. So I'd say that if ISPs find themselves capacity-constrained, they probably have themselves to blame for not investing in infrastructure while they had the chance.
Or at least, that's what I thought until I read the article above. It claims that the capacity crunch brought on by the advent of P2P services can be relieved not by legislation or political wrangling, but by good old network engineering.
The article should be a good enough summary for the techies who might be reading this, but let me attempt to explain it for the less technically inclined.
At the heart of the problem is the protocol that handles the majority of the Internet's traffic, the Transmission Control Protocol or TCP. (Just as a real-life protocol dictates how two humans might interact, a networking protocol describes how two computers communicate over a network.) Just about all traffic that requires reliable delivery from one endpoint to another uses TCP, and that includes a lot of P2P services like BitTorrent. The article suggests that TCP's built-in congestion control mechanism – the safeguards put in place to ensure that networks aren't flooded with TCP packets – is inherently biased in favour of the kind of traffic P2P applications generate. This is for a couple of reasons:
- P2P applications open multiple connections, and since TCP's congestion controls apply per connection, they aren't really constrained by them
- P2P applications transmit data continuously over long periods of time, while other kinds of traffic like HTTP (web pages) and e-mail tend to come in short, intermittent bursts
The combination of these two factors means that P2P traffic tends to "crowd out" regular TCP traffic on most networks (probably not on your home PC, since TCP congestion management acts on upstream transmissions rather than downstream, but this would certainly be an issue for an ISP). The sketch below illustrates the effect.
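To make the first point concrete, here's a toy calculation of my own (it's not from the article, and the link speed and connection counts are made up). Because TCP shares a bottleneck link roughly equally per connection, an application that opens ten connections ends up with about ten times the bandwidth of one that opens a single connection.

```python
# Toy illustration (my own sketch, not the article's) of why per-flow
# fairness favours applications that open many connections.
#
# TCP congestion control shares a bottleneck link roughly equally per
# *connection*, not per user or per application. So a P2P client with
# 10 connections gets about 10x the bandwidth of a browser with 1.

def per_flow_share(link_capacity_kbps, flows_per_app):
    """Split a link equally among all TCP flows, then sum per application."""
    total_flows = sum(flows_per_app.values())
    share_per_flow = link_capacity_kbps / total_flows
    return {app: n * share_per_flow for app, n in flows_per_app.items()}

if __name__ == "__main__":
    # Hypothetical numbers: a 1 Mbps upstream link shared by a BitTorrent
    # client with 10 active connections and a web browser with 1.
    link = 1000  # kbps
    apps = {"bittorrent (10 flows)": 10, "web browser (1 flow)": 1}
    for app, kbps in per_flow_share(link, apps).items():
        print(f"{app}: ~{kbps:.0f} kbps")
    # The P2P app ends up with ~909 kbps, the browser with ~91 kbps.
```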
The immediate solution the article proposes (or rather, the one proposed by a researcher at BT) is to change TCP to weight applications that use fewer connections more heavily, so they don't get drowned out by P2P traffic. P2P transfer speeds would take a hit while other applications are bursting, but would recover once those bursts are done. There are a few other, longer-term solutions discussed as well, but the main thing the article reinforced for me is that this should be treated as an engineering problem, nothing more. It can't be legislated away.
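Here's a rough sketch of what that kind of weighting could look like, again my own simplification rather than BT's actual proposal (the weights and numbers are made up): shares are allocated in proportion to a per-application weight, so the number of open connections no longer matters.

```python
# Sketch of a weighted allocation, my own simplification of the idea in
# the article: each application's share is proportional to an assigned
# weight rather than to how many TCP connections it has open.

def weighted_share(link_capacity_kbps, app_weights):
    """Split a link in proportion to per-application weights,
    ignoring how many TCP flows each application has open."""
    total_weight = sum(app_weights.values())
    return {app: w / total_weight * link_capacity_kbps
            for app, w in app_weights.items()}

if __name__ == "__main__":
    link = 1000  # kbps, same hypothetical upstream link as before
    # With equal weights, the browser's single flow and the P2P client's
    # ten flows get the same share while both are active.
    weights = {"bittorrent (10 flows)": 1, "web browser (1 flow)": 1}
    for app, kbps in weighted_share(link, weights).items():
        print(f"{app}: ~{kbps:.0f} kbps")
    # Once the browser's burst finishes, the P2P client can take the whole
    # link again, which matches the "take a hit, then recover" behaviour
    # described above.
```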
It also casts ISPs in a somewhat different light, in that they perhaps shouldn't be subject to quite as much vitriol as they get these days. This resonates with me to an extent: as owners of a network, they're obliged to do whatever they can to keep traffic from exploding. However, I still think filtering is a short-term solution at best. It's better to invest in better infrastructure and long-term technical fixes (like the one discussed in the article, whether or not it ends up being feasible) than to keep trying to stop a leaking dam from bursting.