
How the Internet works – a philosophical explanation

By Byron Holland
President and CEO

Andrew Sullivan, a Fellow at Dyn, provides an excellent explanation of how the Internet works.

If you’re involved in the Internet governance world, you likely subscribe to any number of mailing lists. Email inboxes fill daily with discussions of a wide variety of topics in the ecosystem, from ICANN accountability to marketing to Internet infrastructure. Following these important discussions can get – with apologies to the brilliant folks posting there – a little tedious. However, every now and then a real gem is posted.

This was the case this past Saturday when Andrew Sullivan, a Fellow at Dyn and a Canadian deeply involved in the Internet governance world, posted one of the best explanations of how the Internet works that I’ve ever read. It’s both technical and philosophical, and really captures the nuances that are critical to understanding the inner workings of the Internet. While his post was made in the context of the current discussions on ICANN accountability, I believe it’s also valuable to those who aren’t Internet governance geeks.

With Andrew’s permission, I’ve posted the text of the email below:

“Both the Internet and the DNS are at once global and local. The nature of internetworking means that the global Internet is built only of other (inter)networks. Similarly, we usually think of the DNS as a tree structure and we often emphasise the common root as a result. But we can think of it another way: the DNS is made up of a collection of zones operated mostly independently from one another. The Internet is a radically distributed system: almost all of the technical operation is undertaken without any direct co-ordination with anyone, performed by an enormous number of independent operators. This means that interoperation is fundamentally a voluntary thing. In your network, you make your rules, and there is no stick (outside of national law) to make you interoperate with others. Instead, there is only the carrot: if you interoperate, you get the benefits of that interoperation. This is the near-magic that is the functioning of the Internet today.

It turns out that the magic is made a little easier if we have a minimal amount of central co-ordination. In principle, you could do this some other way, but this is how we do it now. IANA’s job is the minimal co-ordination.

So, to allow packets to go from one network to another, it’s necessary to be able to tell one another what network you’re operating (that’s how routing works – BGP announcements do this). And for that to work, when you say, “I’m running this network,” everyone else needs to know what “this network” means. The way we do that is with a common number space, and to have a common number space it is convenient to have a registry as the source of commonality; keeping that registry is IANA’s job.
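(To make that concrete, here is a toy sketch of my own, not Andrew’s: each operator announces the prefixes it runs, and everyone else forwards traffic by longest-prefix match against those announcements. The prefixes and AS numbers are documentation examples, not real assignments, and this is nowhere near a real BGP implementation.)

```python
# Toy illustration of routing against a shared number space (not real BGP).
# The prefixes and AS numbers below are documentation/example values.
import ipaddress

# "Announcements": prefix -> the network that claims to operate it.
announcements = {
    ipaddress.ip_network("192.0.2.0/24"): "AS64500 (example network A)",
    ipaddress.ip_network("198.51.100.0/24"): "AS64501 (example network B)",
    ipaddress.ip_network("198.51.100.128/25"): "AS64502 (more-specific announcement)",
}

def route(destination: str) -> str:
    """Pick the most specific announced prefix covering the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [prefix for prefix in announcements if addr in prefix]
    if not matches:
        return "no route"
    best = max(matches, key=lambda prefix: prefix.prefixlen)  # longest prefix wins
    return announcements[best]

print(route("198.51.100.200"))  # the more-specific /25, AS64502
```

The scheme only works because every operator means the same thing by “198.51.100.0/24”, which is exactly the commonality the number registry provides.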

Similarly, to make it easy for the various networks to connect to one another in a reliable way, they can use common protocols set up in a particular way. To know how to set up the protocols, it’s convenient to have a single place to look up the settings. Keeping the list of those settings – the protocol parameters – is another IANA job.
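(Another small illustration of my own: well-known port numbers are one of the protocol-parameter registries IANA keeps, and on a typical system the service-name database that Python’s standard library consults is derived from it.)

```python
# Look up well-known ports from the local service-name database, which is
# derived from the IANA Service Name and Transport Protocol Port Number Registry.
import socket

for service in ("http", "https", "smtp", "domain"):
    print(service, socket.getservbyname(service, "tcp"))
# Typically prints: http 80, https 443, smtp 25, domain 53
```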

Finally, names that are assigned locally won’t be any use to those on other networks unless the other network users know how to get to those names. To know how to do that, it is convenient to have a place to start looking. Mathematically, a way to do that (and one that is not too hard to implement in computers) is a tree structure, which by definition starts from a common root. That common root is IANA’s job.
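(Again, a sketch of mine rather than Andrew’s: resolution is just a walk down a tree of labels, and the convenience of a common root is that every walk can start from the same place. The zone contents below are invented for the example.)

```python
# A toy name tree: every lookup starts at the same root and walks the
# labels right to left. The addresses are documentation examples.
ROOT = {
    "com": {"example": {"": "192.0.2.7"}},   # example.com
    "ca":  {"cira":    {"": "192.0.2.8"}},   # cira.ca
}

def resolve(name: str) -> str:
    node = ROOT                              # everyone starts at the common root
    for label in reversed(name.split(".")):  # "cira.ca" -> walk "ca", then "cira"
        node = node[label]
    return node[""]

print(resolve("cira.ca"))      # 192.0.2.8
print(resolve("example.com"))  # 192.0.2.7
```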

This job turns out to be special, too, because while the other two registry types have a well-defined policy source outside of ICANN, the policy source for the root zone turns out to be ICANN as well. This fact is (I guess we all know) how we got into the current controversy.

But notice that the DNS itself is a matter of convenience. We _could_ have other naming systems on the Internet. There are peer-to-peer systems that have already been invented and are in fact deployed.

There are alternatives that have been proposed but that, for practical purposes, turn out to depend on the DNS anyway (e.g. the “handles” system from DONA), though they need not. And so on.

Now, because of the nature of the Internet, which relies on all those interconnected networks voluntarily interoperating, the convenience of centralization is a trade-off. You trade a central point of control (IANA) for the advantages of simplicity in protocol design, implementation, and operation. But if the central control is too great – if, for instance, it starts trying to impose controls down through the DNS tree, or it starts trying to demand strict interconnection regimes along geopolitical lines, or whatever – then all the independent networks that are now gaining the benefit of easy interoperation will get less “carrot” than they do today.

The Internet scales the way it does because the overwhelming majority of interconnections from large ISPs are done with a handshake: I want your packets and you want mine, and we peer. If the world decides to make that hard, it changes the business models of all the ISPs. Similarly, the domain name system is a terrible user experience, really, and that’s the reason we have so many hacks on it. But part of the reason it scales so well is because the co-ordination ends at a delegation point: the root zone delegates .com to Verisign, and after that has basically nothing to say about what happens inside com.

Similarly, Verisign delegates anvilwalrusden.com to me, and they don’t have anything to say about what I do in my zone.
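(One last sketch of my own to show where the co-ordination stops: a parent zone records only the delegation, a pointer to the child, and holds nothing about what the child publishes below that cut. The zone data is invented for illustration.)

```python
# Each zone knows only its own delegations; the parent has nothing to say
# about what lives below a cut. Zone contents are invented for illustration.
ZONES = {
    ".": {"delegations": {"com."}},                      # root delegates com
    "com.": {"delegations": {"anvilwalrusden.com."}},    # com delegates the child zone
    "anvilwalrusden.com.": {"records": {"www.anvilwalrusden.com.": "192.0.2.9"}},
}

def authoritative_zone(name: str) -> str:
    """Follow delegations down from the root until no deeper cut matches."""
    zone = "."
    while True:
        cuts = ZONES[zone].get("delegations", set())
        deeper = [cut for cut in cuts if name == cut or name.endswith("." + cut)]
        if not deeper:
            return zone                                  # this zone answers for the name
        zone = deeper[0]

zone = authoritative_zone("www.anvilwalrusden.com.")
print(zone)                                              # anvilwalrusden.com.
print(ZONES[zone]["records"]["www.anvilwalrusden.com."]) # 192.0.2.9
```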

If we start to chip away at that distributed operation by attempting to use ICANN’s policy control over the root zone to impose regulations down the tree, we are attacking the model that has made the Internet work at all. Moreover, we risk driving people away from the domain name system into some other technology – a change that will certainly not happen overnight, and which will lead to balkanization and damage to the system’s usability.

So, I don’t think “the global public interest”, whatever that means, does anything to help us to understand what ICANN should do. ICANN should pay attention to its well-understood and needed functions. It should not go adventuring out into global governance issues that distract from that narrow set of responsibilities. And it should not embrace language that distracts from the narrow responsibilities – lest such language become an attractive nuisance that encourages people to think ICANN has power it never has had and (given the design of the Internet) can’t get.”


About the author
Byron Holland

Byron Holland (MBA, ICD.D) is the president and CEO of the Canadian Internet Registration Authority (CIRA), the national not-for-profit best known for managing the .CA domain and developing new cybersecurity, DNS, and registry services.

Byron is an expert in internet governance and a seasoned entrepreneur. Under Byron’s leadership, CIRA has become one of the leading ccTLDs in the world, with over 3 million domains under management. Over the past decade, he has represented CIRA internationally and held numerous leadership positions within ICANN. He currently sits on the Board of Directors for TORIX, and is a member of the nominations committee for ARIN. He lives in Ottawa with his wife, two sons, and their Australian shepherd, Marley.

The views expressed in this blog are Byron’s opinions on internet-related issues, and are not necessarily those of the organization.
