Researchers Find Unnecessary Traffic Saturating Key Internet Root Server

From the San Diego Supercomputer Center:

Scientists at the San Diego Supercomputer Center (SDSC) at UCSD analyzing traffic to one of the 13 Domain Name System (DNS) “root” servers at the heart of the Internet found that the server spends the majority of its time dealing with unnecessary queries. DNS root servers provide a critical link between users and the Internet’s routing infrastructure by mapping text host names to numeric Internet Protocol (IP) addresses.
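To illustrate the mapping the root servers anchor, the short Python sketch below asks the local resolver to translate a host name into its numeric addresses. On a cache miss, that resolver ultimately works down from a root server; the host name here is only an example.

```python
import socket

# Resolve a text host name into numeric IP addresses via the local
# DNS resolver (which, on a cache miss, ultimately starts from the
# root servers). "www.example.com" is purely illustrative.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])  # prints an IPv4 or IPv6 address string
```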

Researchers at the Cooperative Association for Internet Data Analysis (CAIDA) at SDSC conducted a detailed analysis of 152 million messages received on Oct. 4, 2002, by a root server in California, and discovered that 98 percent of the queries it received during those 24 hours were unnecessary. The researchers believe that the other 12 DNS root servers likely receive similarly large numbers of bad requests.

Some experts regard the system of 13 DNS root servers, the focus of several studies by CAIDA researchers, as a potential weak link in the global Internet. Spikes in DNS query traffic caused by distributed denial-of-service attacks are routinely handled by root server operators. Occasionally, as on Oct. 21, 2002, all 13 root servers are attacked simultaneously. Although no serious damage resulted from that incident, Richard A. Clarke, chairman of the President’s Critical Infrastructure Protection Board and special advisor to the president for cybersecurity, has expressed concern that attacks against root servers could potentially disrupt the entire Internet. CAIDA researchers will discuss their analysis of the root server and other cybersecurity findings with Clarke during his visit to SDSC on Jan. 28. CAIDA researchers will also present a paper detailing some of their DNS analysis at the 2003 Passive and Active Measurement Workshop April 6–8 at UCSD.

Only about 2 percent of the 152 million queries received by the root server in California on Oct. 4 were legitimate, while 98 percent were classified as unnecessary. CAIDA researchers are seeking to understand why any root server would receive such an enormous number of broken queries daily from lower level servers. “If the system were functioning properly, it seems that a single source should need to send no more than 1,000 or so queries to a root name server in a 24-hour period,” said CAIDA researcher Duane Wessels. “Yet we see millions of broken queries from certain sources.”
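Wessels’s figure of roughly 1,000 queries per source per day can be sanity-checked with a back-of-envelope calculation. The numbers below are illustrative assumptions, not figures from the study: a caching resolver that re-fetches each top-level domain’s records only when they expire needs far fewer root queries than that bound suggests.

```python
# Back-of-envelope: root queries a well-behaved caching resolver
# needs per day. Both numbers are assumptions for illustration only.
num_tlds = 250       # roughly the number of top-level domains circa 2002
ns_ttl_days = 2      # delegation records are typically cached about two days
refreshes_per_day = num_tlds / ns_ttl_days
print(refreshes_per_day)  # about 125, well under the ~1,000-per-day bound
```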

Wessels categorized all the queries received by the California root server on Oct. 4, 2002, into nine types. About 70 percent of all the queries were either identical or were repeat requests for addresses within the same domain. It is as if a telephone user dialed directory assistance to get the phone numbers of certain businesses, then repeated the same directory-assistance calls again and again. Lower-level servers and Internet service providers (ISPs) could save, or cache, these responses from root servers, improving overall Domain Name System performance.
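The caching the researchers recommend can be sketched as a simple map from names to answers with an expiry time. The toy Python class below is only an illustration of the idea, not code from any real name server: a resolver consults the cache first and sends a query upstream only on a miss.

```python
import time

class DNSCache:
    """Toy TTL-bounded cache: the first answer for a name is kept and
    reused until it expires, so repeat queries never reach a root server."""

    def __init__(self):
        self._entries = {}  # name -> (answer, expiry timestamp)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None                  # miss: caller must query upstream
        answer, expires_at = entry
        if time.time() >= expires_at:
            del self._entries[name]      # stale: treat as a miss
            return None
        return answer                    # served locally, no root query sent

    def put(self, name, answer, ttl_seconds):
        self._entries[name] = (answer, time.time() + ttl_seconds)
```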

About 12 percent of the queries received by the root server on Oct. 4 were for nonexistent top-level domains, such as “.elvis”, “.corp”, and “.localhost”. Registered top-level domains include country codes such as “.au” for Australia, “.jp” for Japan, or “.us” for the United States, as well as generic domains such as “.com”, “.net”, and “.edu”. In addition, 7 percent of all the queries already contained an IP address instead of a host name, making the lookup pointless: there was no name left to resolve.
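Both of these categories are straightforward to screen for mechanically. The hypothetical Python check below flags a query whose name is already an IP literal or whose top-level domain is unregistered; the small VALID_TLDS set is a stand-in for the real registry, not an actual list.

```python
import ipaddress

# Tiny stand-in for the real list of registered top-level domains.
VALID_TLDS = {"com", "net", "edu", "org", "au", "jp", "us"}

def classify(query_name: str) -> str:
    """Flag two of the bogus-query types described in the study."""
    try:
        ipaddress.ip_address(query_name)   # e.g. "192.0.2.1": already an address
        return "ip-literal: nothing to resolve"
    except ValueError:
        pass
    tld = query_name.rstrip(".").rsplit(".", 1)[-1].lower()
    if tld not in VALID_TLDS:
        return "nonexistent TLD: ." + tld  # e.g. ".elvis" or ".localhost"
    return "plausible query"

print(classify("192.0.2.1"))        # ip-literal: nothing to resolve
print(classify("myhost.elvis"))     # nonexistent TLD: .elvis
print(classify("www.example.com"))  # plausible query
```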

Researchers believe that many bad requests occur because organizations have misconfigured packet filters and firewalls, security mechanisms intended to restrict certain types of network traffic. When packet filters and firewalls allow outgoing DNS queries but block the resulting incoming responses, software on the inside of the firewall can make the same DNS queries over and over, waiting for responses that can’t get through. Name server operators can use new tools such as dnstop, a software program written by Wessels and available from CAIDA, to detect and warn of these and other misconfigurations, significantly reducing bad requests.
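That failure mode is easy to reproduce in miniature. In the sketch below, a UDP “query” leaves the machine but no response ever arrives, so the client times out and simply asks again; real software behind a misconfigured firewall may never give up. The payload is a placeholder, not a real DNS packet, and 192.0.2.1 is a documentation-only address (RFC 5737) that will never answer, standing in for the dropped responses.

```python
import socket

# Simulate a firewall that passes outgoing DNS queries but silently
# drops the responses: every attempt times out, so the client retries.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
for attempt in range(5):                      # real software may retry forever
    sock.sendto(b"query", ("192.0.2.1", 53))  # outbound packet gets through
    try:
        sock.recvfrom(512)                    # inbound response never arrives
        break
    except socket.timeout:
        print(f"attempt {attempt + 1}: no response, retrying")
```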

Name-service errors in software implementations were reported as early as 1992, but some of these bugs are still with us today. Explosive Internet growth during the last decade has exacerbated the problems caused by defective name servers.

