To keep the DNS running, TLD registries use global Anycast name server networks.
Quick show of hands. How many organizations need to keep a million (or tens of millions of) external websites online? How many need to ensure that they all run as fast as possible? That they never fail? And who can budget somewhere between $5 and $10 per domain per year to do it?
That is the task of a domain registry. I'll admit the cost cited above is a bit of hyperbole, because the budget comes from revenue derived from all of the .CA domain registrations. The point is that a registry keeps the top-level domain (TLD) portion of the authoritative DNS working for everything from the largest bank to the smallest home-based Minecraft server, using the same revenue and investment per customer. To be more specific, in today's market that is the role of a country code top-level domain (ccTLD) registry and a few of the largest generic TLD registries. This includes organizations like CIRA (the Canadian Internet Registration Authority) and our counterparts in other countries around the world.
To keep the DNS running, registries turned years ago to global Anycast name server networks and, when done right, used multiple suppliers for added redundancy. CIRA specifically built out its own network in Internet hubs around the world. This level of global redundancy is sufficient for many organizational websites and should be an absolute requirement for IT services companies that host on behalf of others. However, unlike most organizations, a registry also sometimes needs a strong regional footprint to better serve the local population.
ccTLDs have a public mandate to make the Internet better
In addition to operating the domain registry, many ccTLDs also help improve the country's Internet by supporting technology adoption and a strong local network of Internet exchange points (IXPs). With local IXPs in place, you have another peering location for your Anycast name servers, one that also peers with the large content providers, hosting companies, and ISPs in the country. This works because IXPs are public resources, and every organization that can peer with them should do so for two reasons:
- It gives the organization a more resilient, faster, and lower-cost footprint for websites, VPN road warriors, and other services that rely on the Internet.
- It fulfills an individual’s professional (or socially conscious) mandate to make the world/Internet a better place for everybody. It is similar to the benefits of using and contributing to open source projects.
How do local IXPs impact the registry? With IXP peering, a registry speeds up local responses to DNS queries and gains a measure of DDoS protection by virtue of location: an attacker who isn't connected to the local exchange can't reach the name server through it, and an attacker who is connected can be quickly mitigated, traced to a real-world address, and prosecuted.
There is one other benefit to a strong local DNS footprint: performance. The less distance a query has to travel, the faster the response. Case in point: CIRA ran a study using the RIPE Atlas network to query authoritative DNS providers, and for Canadian queries the CIRA DNS delivered a 102% faster average response time than the average of the others. If in-country visitors matter to your registrants, then the registry should optimize for them while also serving the global need. For many ccTLDs this is the reality, as they support local websites both for international companies and for local businesses built to serve their nearby communities.
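The distance effect is easy to ballpark: signals in optical fiber travel at roughly two-thirds the speed of light, so round-trip propagation delay alone puts a floor under query latency. A minimal sketch, with illustrative (not measured) distances and fiber speed:

```python
# Rough lower bound on DNS round-trip time from propagation delay alone.
# Assumes signals travel through fiber at ~200,000 km/s (about 2/3 of c);
# real latency adds routing, queuing, and server processing on top.

FIBER_KM_PER_S = 200_000  # approximate signal speed in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

# A query answered across an ocean vs. at a nearby IXP (illustrative distances):
print(min_rtt_ms(4000))  # roughly Toronto to Europe: 40.0 ms floor
print(min_rtt_ms(100))   # a node at a local IXP:      1.0 ms floor
```

Forty milliseconds may sound small, but resolvers issue many queries per page load, so shaving that floor down to single digits is visible to end users.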
One cloud of Anycast servers is not enough. A registry needs not only multiple node locations but multiple clouds on different transit, operated under different Autonomous System (AS) numbers. This architecture significantly mitigates the risk of an outage due to network, equipment, or software issues. But in today's environment even that isn't enough: any large registry must have at least two DNS providers. Beyond service redundancy, this helps protect against today's DDoS attacks, which can measure in the hundreds of gigabits per second. The entire network capacity must be architected to withstand the absolute worst-case scenario, because we are not talking about one application or website; we are talking about millions.
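In practice, the two-provider setup shows up in the zone's delegation: the NS set mixes name servers from independently operated networks, so resolvers route around an unreachable Anycast cloud automatically. A hypothetical sketch, where the host names, providers, and AS numbers are made up for illustration:

```
; Delegation for a hypothetical zone split across two independent DNS
; providers, each running its own Anycast cloud on its own AS.
example.ca.    86400  IN  NS  any1.provider-a.net.  ; provider A (illustrative)
example.ca.    86400  IN  NS  any2.provider-a.net.
example.ca.    86400  IN  NS  ns1.provider-b.org.   ; provider B (illustrative)
example.ca.    86400  IN  NS  ns2.provider-b.org.
```

Because resolvers retry across the NS set, a failure confined to one provider's network degrades performance rather than taking the zone offline.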
Speed, reliability and security are obvious – what else?
A registry needs a few things that most organizations may or may not require. The first is the ability to maintain its own IP address blocks to ensure continuity of service regardless of the suppliers chosen today and tomorrow. Because it uses several suppliers, it needs to be able to open future RFQs without risking an interruption of service.
Registries also typically need to perform research and analysis to improve service quality and to contribute to other global technology and governance initiatives. For example, a registry needs real-time reporting in a typical dashboard format, but also access to the raw data and verbose logging. Access to packet capture (PCAP) data lets the DNS service plug directly into in-house monitoring systems and, for instance, contribute to DNS-OARC's Day in the Life of the Internet (DITL) project. Finally, verbose logging also helps with debugging issues.
To close out this section, a DNS provider must be able to accept zone transfers secured with Transaction Signatures (TSIG). Registries are very focused on the security of their zones and are among the few organizations actually following this best practice. TSIG has been around for a long time, yet we are constantly surprised by the number of large organizations still transferring zones in the clear. Suffice it to say that no registry should make that mistake.
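In BIND, for example, locking zone transfers to a TSIG key takes only a few lines of configuration. A minimal sketch, assuming a key already generated with `tsig-keygen`; the key name, secret, and zone below are placeholders:

```
// named.conf fragment on the primary: only TSIG-signed transfer
// requests for the zone will succeed.
key "transfer-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";  // generate with: tsig-keygen transfer-key
};

zone "example.ca" {
    type primary;
    file "example.ca.db";
    // Refuse unsigned transfers; allow only requests signed with transfer-key.
    allow-transfer { key "transfer-key"; };
};
```

The secondary references the same key in a `server` block so that its transfer requests are signed; anything arriving "in the clear" is simply refused.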
Can organizations learn from registries?
If you are a large organization, you should be thinking like a registry when it comes to managing your authoritative DNS. If you host a lot of websites or domains, you should be thinking like a registry. In the past this was unattainable because the cost of setting up a global infrastructure was too high for the benefits accrued; cloud services have since brought those costs down to commodity levels. For those who manage important web properties, we recommend the D-Zone Anycast DNS solution. There is little reason not to investigate enhancing your in-region DNS as well as your global footprint. Just make sure you understand how the DNS provider's footprint and services fit your needs, and how their pricing model works on both a zone and a query volume basis.