
IPv6 and the cloud: Not quite NATural

IPv6 works. It's a shame that enterprise software vendors may be ignoring it.

Cloud technology is so impressive at first glance that you might think you could put your whole enterprise into it. In theory you could, but theory often makes simplistic assumptions. The reality of complex enterprise networks is what leads so many organizations to adopt hybrid cloud platforms and preserve many critical in-house systems.

To get all those computers on the Internet, we had to make a deal with the devil, and the deal is called network address translation (NAT). Rather than give everyone their own public IP address, NAT uses a single address as the public front end for a larger range of systems using a private address space, usually 192.168.0.0/16 or 10.0.0.0/8.
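
To make the distinction concrete, here is a minimal sketch (not from the article) using Python's standard ipaddress module to check whether an address falls inside those private ranges; the sample addresses are purely illustrative:

```python
import ipaddress

# RFC 1918 private ranges commonly used behind a NAT
PRIVATE_RANGES = [ipaddress.ip_network("10.0.0.0/8"),
                  ipaddress.ip_network("192.168.0.0/16")]

for addr in ["10.0.0.2", "192.168.1.50", "203.0.113.7"]:   # sample addresses
    ip = ipaddress.ip_address(addr)
    is_private = any(ip in net for net in PRIVATE_RANGES)
    print(addr, "-> private (needs NAT)" if is_private else "-> outside the private ranges")
```

The first two addresses can reach the public Internet only through a NAT; the third falls outside the private ranges (it comes from a block reserved for documentation, standing in here for a public address).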

The related technique of network address and port translation (NAPT) is what allows two hosts with different private IP addresses to communicate with the outside world through the same public IP address, even if they use the same TCP/UDP source port. The NAPT rewrites the source port on each outbound connection to a unique value so that replies can be routed back to the correct internal host.
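
A toy sketch may help show what that bookkeeping looks like. This is purely illustrative (the addresses and ports are hypothetical), not how any particular NAT device implements it:

```python
import itertools

PUBLIC_IP = "203.0.113.7"            # the single public-facing address
next_port = itertools.count(40000)   # pool of public-side source ports
mappings = {}                        # (private_ip, private_port) -> public_port

def translate_outbound(private_ip, private_port):
    """Rewrite an outbound packet's source to the public IP and a unique port."""
    key = (private_ip, private_port)
    if key not in mappings:
        mappings[key] = next(next_port)
    return PUBLIC_IP, mappings[key]

def translate_inbound(public_port):
    """Route a reply arriving at PUBLIC_IP:public_port back to the internal host."""
    for (priv_ip, priv_port), pub_port in mappings.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None   # no mapping: unsolicited inbound traffic has nowhere to go

print(translate_outbound("10.0.0.2", 5000))   # ('203.0.113.7', 40000)
print(translate_outbound("10.0.0.3", 5000))   # same private port, distinct public port
print(translate_inbound(40001))               # ('10.0.0.3', 5000)
```

The last line also hints at the end-to-end problem discussed below: if no mapping already exists, inbound traffic simply cannot be delivered.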

NAT and the related technique of port address translation save a lot of money, time, and administration effort, and the Internet could never have flourished as it has without them. But they break what the Internet’s designers thought of as a fundamental principle: end-to-end connectivity. Under this principle, end nodes on the network communicate seamlessly with one another no matter what the infrastructure is between them.

Forecast for IPv6: Cloudy with chance of NAT

The main reason NATs are so popular, of course, is that IPv4 addresses are scarce and, as Mark Twain said of land, they're not making any more of them. Ideally, the answer is IPv6, the limits of which probably won't be reached in our lifetimes. IPv6 support is widespread, particularly in enterprise products, but it's not universal. IPv6 support in Amazon Web Services EC2 is fairly new, and there are still important holes in IPv6 support across AWS, such as the lack of Elastic IP addresses for IPv6. Certainly, AWS is expanding IPv6 support all the time, but the point is that it still noticeably lags behind IPv4 support.

Most of us need to work around the scarcity of IPv4 addresses. In the meantime, cloud architecture encourages the proliferation of virtual machines and other virtual things that need addresses. Suddenly, it becomes easy and cheap to have multiple instances of a service running to increase the performance and resilience of the service. At peak periods, you may even be able to have the cloud auto-scale an application, which generally means allocating more servers with their own addresses.

Many applications and protocols on the Internet were written assuming end-to-end connectivity and may fail when one of the parties is behind a NAT. Applications built on UDP and other stateless protocols are especially prone to disruption through a NAT, since the NAT has no connection state to tell it how long a mapping should live.

The main and most obvious source of trouble is when applications store IP addresses as data inside packets. For example, suppose a computer with the private address 10.0.0.2 sends a packet through a NAT to a computer on a public address, and the payload includes a reference to another private host, 10.0.0.3. The public computer cannot initiate communication to 10.0.0.3, even though that may be exactly what the application's design calls for. In a world with true end-to-end connectivity, there would be no such limitation.
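
Here is a tiny sketch of that failure mode, with a made-up JSON payload standing in for whatever format a real application might use:

```python
import ipaddress, json

# Hypothetical application message that carries a peer's address in its payload.
payload = json.dumps({"callback_host": "10.0.0.3", "callback_port": 8000})

# The NAT rewrites the IP header (source 10.0.0.2 becomes the public address), but it
# does not parse application payloads, so the private address passes through unchanged.
received = json.loads(payload)
addr = ipaddress.ip_address(received["callback_host"])
print(addr, "is private:", addr.is_private)   # True: unreachable from the public side
```

The receiving host ends up holding an address it can do nothing with, which is why protocol-aware fixes have to dig into the payload itself.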

The list of applications and techniques broken, or at least hampered, by NAT is long, but here are a few of them. Many will say, with justification, that some broken applications are old and there are better ones, that the techniques are unwise, or that the protocol is insecure anyway. The real point is that real users use them and rely on them and will resist changes that will break their running systems.

  • Persistent host-to-address binding: When the address on one side is allocated from a pool by a NAT, the other side can never really know whether that address will be reallocated after a time, breaking the connection.
  • Fragmented packets: If a private host's packets are fragmented before passing through a NAT, they cannot be processed properly, because only the first fragment carries the TCP/UDP header information the NAT needs to translate the rest of the fragments consistently. The session would necessarily be corrupted.
  • Multichannel protocols: Protocols such as FTP, SIP, and H.323 use multiple channels to separate signaling and media (or control and data), and they often exchange addresses in the signaling payloads (see the FTP sketch after this list). There are workarounds, generally application-level gateways, which are protocol-aware filters running in the NAT. These techniques generally require more powerful and expensive NATs that can perform deep packet inspection.
  • Hosts not knowing addresses: NATs prevent hosts from reliably knowing their own addresses as viewed from outside the NAT. Even if they knew their own addresses, they would not know in which contexts those addresses were valid.
  • IPsec woes: IPsec protects the packet contents, including the Layer 3 and 4 headers in some modes, precisely to make them opaque to whoever is sniffing the wire; but a NAT has to rewrite those very headers, which to IPsec is indistinguishable from tampering. There are techniques, even standards, for surmounting the problem, such as encapsulating the traffic in UDP, but they all add complexity and weaken protection.
  • X Windows: The X Window System (colloquially known just as "X") is the windowing UI system for the Unix world. It has a strange architecture in which the client system at which the human user sits and works is the "server," because it serves up the user interface, while the computer on which the programs run, probably a server in the conventional sense, is the "client." Because the design is backward, the connection needs to be initiated backward too, from the "client" to the "server," which a NAT in front of the user's system will block. The usual way to work around this is to tunnel the connection through SSH.
  • SNMP: Simple Network Management Protocol, the Internet standard for monitoring and management of devices on IP networks, needs to address those devices directly, as you might assume, and initiate communications with them. Depending on how the network monitor and the monitored device are connected, there are ways to set up a gateway proxy to deliver SNMP traps. The monitor then sees the traps as coming from the gateway and has no reliable way of knowing which device actually sent them.
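
As a concrete example of the multichannel problem noted above, active-mode FTP sends a PORT command over the control channel that spells out the client's IP address and data port as six decimal numbers (the last two encode the port as high and low bytes). A rough sketch of what a client behind a NAT would advertise:

```python
def build_port_command(ip, port):
    """Format an FTP PORT command: h1,h2,h3,h4,p1,p2 with port = p1*256 + p2."""
    parts = ip.split(".") + [str(port // 256), str(port % 256)]
    return "PORT " + ",".join(parts)

# A client behind a NAT advertises its *private* address on the control channel.
print(build_port_command("10.0.0.2", 50000))   # PORT 10,0,0,2,195,80

# A server on the public Internet can't open a data connection back to 10.0.0.2:50000,
# so the transfer fails unless an FTP-aware application-level gateway rewrites it.
```

This is exactly the case an application-level gateway handles: it spots the PORT command in the payload and rewrites it with the NAT's public address and a mapped port.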

The bottom line in almost all of these cases is that you can make them work, sort of, if you punch holes in firewalls and other security barriers, limit functionality, or set up a tunnel that creates more complexity. 


And then there's reliability

NATs also introduce an element of unreliability. It's hard to design redundancy into NATs, because the state of the translation table is central to the function. If the NAT goes down, the hosts behind it are unreachable.

It’s no surprise that enterprises of any real complexity retain many functions on in-house systems. The problem is not so much the outsourced and public nature of public clouds as cloud architecture itself and address depletion.

It’s tempting to think that fully qualified domain names (which directly correspond to a system, even one behind a NAT) are an effective architectural substitute for raw IP addresses, but this is often not the case. Many sites don’t maintain DNS for local hosts. Many hosts don’t even know their own DNS names. Maintaining an accurate, high-performing DNS is hard. Fundamentally, DNS is not as reliable as IP routing. In the real world, DNS lookups can slow things down; it’s often faster and easier not to bother with it.

Public clouds can mitigate some of these problems with virtual private clouds, which allow customers to use the cloud provider’s systems but run a portion of the network in their own address space and under their complete management. IT can continue to use its own management tools and procedures. It’s far less disruptive and closer to what they know works.

There is also some benefit to taking systems running on conventional architectures in conventional data centers and running them instead in a public cloud. But the advantage is not as profound as with a true cloud architecture built on the services that public clouds offer. Hybrid IT configurations that let users mix on-premises and cloud solutions can also provide more control over addressing and NAT issues.

The answer in the long run is, of course, an Internet and marketplace where customers can assume full IPv6 support. In such a computing environment, any system could address any other, although we’d still have firewalls and other safeguards to control who actually gets in and out of networks and systems. We can expect things to work better because it all will be simpler. 

IPv6 and the cloud: Lessons for leaders

  • Confirm that your IPv6 architecture will work with your selected cloud services before adding cloud services to your infrastructure.
  • Make sure that applications within the existing infrastructure can access cloud services.
  • Consider a hybrid infrastructure to get maximum flexibility and value.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.