Contributed by: Jonathan Zittrain
The Internet’s framers made simplicity a core value — a risky bet with a high payoff. The bet was risky because a design whose main focus is simplicity may omit elaboration that solves certain foreseeable problems. The simple design that the Internet’s framers settled upon makes sense only with a set of principles that go beyond mere engineering. These principles are not obvious ones — for example, the proprietary networks were not designed with them in mind — and their power depends on assumptions about people that, even if true, could change. The most important are what we might label the procrastination principle and the trust-your-neighbor approach.
The procrastination principle rests on the assumption that most problems confronting a network can be solved later or by others. It says that the network should not be designed to do anything that can be taken care of by its users. Its origins can be found in a 1984 paper by Internet architects David Clark, David Reed, and Jerry Saltzer. In it they coined the notion of an “end-to-end argument” to indicate that most features in a network ought to be implemented at its computer endpoints — and by those endpoints’ computer programmers — rather than “in the middle,” taken care of by the network itself and designed by the network architects. The paper makes a pure engineering argument: features that are not universally useful should not be built into the network, in part because leaving them out keeps the generic network from being tilted toward particular uses. Once the network was optimized for one use, they reasoned, it might not easily be put to other uses with different requirements.
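To make the end-to-end argument concrete, consider the following sketch, in Python. It is a hypothetical illustration rather than anything drawn from the 1984 paper: the network underneath is asked to do nothing but carry unreliable datagrams (plain UDP stands in for it), while sequencing, acknowledgment, and retransmission all live in the endpoint program itself.

    # Illustrative sketch: reliability as an endpoint feature, not a network one.
    # The network (plain UDP here) only moves datagrams; the endpoints themselves
    # add sequence numbers, acknowledgments, and retransmission. All names are
    # hypothetical, chosen for the example.
    import socket
    import struct

    TIMEOUT = 1.0       # seconds to wait for an acknowledgment
    MAX_RETRIES = 5     # give up after this many unacknowledged sends

    def reliable_send(sock, dest, payload, seq):
        """Stop-and-wait: prepend a sequence number, resend until acknowledged."""
        packet = struct.pack("!I", seq) + payload
        sock.settimeout(TIMEOUT)
        for _ in range(MAX_RETRIES):
            sock.sendto(packet, dest)            # the network just carries bytes
            try:
                ack, _ = sock.recvfrom(4)
                if struct.unpack("!I", ack)[0] == seq:
                    return                       # delivery verified end to end
            except socket.timeout:
                continue                         # the endpoint decides to retry
        raise ConnectionError("packet %d was never acknowledged" % seq)

    def reliable_recv(sock):
        """Receive one packet and acknowledge it back to the sender."""
        data, sender = sock.recvfrom(65535)
        seq = struct.unpack("!I", data[:4])[0]
        sock.sendto(struct.pack("!I", seq), sender)  # ack is endpoint to endpoint
        return data[4:]

Nothing in the forwarding path has to change for this to work, and the same two functions run unmodified over any link that can move a datagram. That indifference of the middle to the ends is the modularity described next.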
The end-to-end argument stands for modularity in network design: it allows the network nerds, both protocol designers and ISP implementers, to do their work without giving a thought to network hardware or PC software. More generally, the procrastination principle is an invitation to others to overcome the network’s shortcomings, and to continue adding to its uses.
The assumptions made by the Internet’s framers and embedded in the network — that most problems could be solved later and by others, and that those others themselves would be interested in solving rather than creating problems — arose naturally within the research environment that gave birth to the Internet. For all the pettiness sometimes associated with academia, there was a collaborative spirit present in computer science research labs, in part because the project of designing and implementing a new network — connecting people — can benefit so readily from collaboration.
It is one thing for the Internet to work the way it was designed when deployed among academics whose raison d’être was to build functioning networks. But the network managed an astonishing leap as it continued to work when expanded into the general populace, one which did not share the world-view that informed the engineers’ designs. Indeed, it not only continued to work, but experienced spectacular growth in the uses to which it was put. It is as if the bizarre social and economic configuration of the quasi-anarchist Burning Man festival turned out to function in the middle of a city. What works in a desert is harder to imagine in Manhattan: people crashing on each other’s couches, routinely sharing rides and food, and loosely bartering things of value. Yet by the turn of the twenty-first century, the developed world had found itself with a wildly generative information technology environment.
Maintaining it against its own wild popularity remains one of the central struggles of our time.
Adapted from The Future of the Internet and How to Stop It, <http://www.amazon.com/The-Future-Internet-And-How-Stop/dp/0300151241>
Jonathan Zittrain
George Bemis Professor of Law, Harvard Law School | Harvard Kennedy School of Government
Professor of Computer Science, Harvard School of Engineering and Applied Sciences
Director, HLS Library | Berkman Center for Internet & Society
<http://cyber.law.harvard.edu>