jhw: baleful eye (Default)
[personal profile] jhw
Here's how I know I'm reading something about IPv6 that wasn't written by an expert: it contains a line somewhere in it that spells out exactly how many addresses can be represented in 128 bits. Hint: it's a galactic size number, and people who quote it to you usually want you to believe that such a large number of addresses couldn't possibly be exhausted as quickly as the comparatively smaller range of IPv4 addresses. Sometimes, they even try to boggle your mind with analogies involving how many addresses per gram of matter in the solar system, etc.

Well, I'm here to explain why those people need to put on their thinking caps and cogitate a little harder. The IPv6 address space is finite and it's NOT consumed one address at a time.

Sure, the address space in IPv6 is a lot larger than in IPv4, but that space isn't used the same way. Here's the thing: when you've got only 32 bits in a nice flat allocation space, the temptation to start encoding information in your IP addresses is pretty manageable. People just don't try to do much of that, because a 32-bit IPv4 address simply doesn't have enough spare bits to encode much of anything.

With IPv6 addresses, on the other hand, there's a very real temptation to start encoding things in various subfields of the addresses. And the IPv6 addressing architecture itself is probably the first place you see that happening. Most IPv6 addresses are divided into a 64-bit network identifier and a 64-bit interface identifier, and the network identifier is divided further: a field assigned by the RIR, then one assigned by the LIR, and finally the subnetwork identifier.
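To make that concrete, here's a sketch in Python's `ipaddress` module. The 32/16/16 split of the network half below is just one common allocation pattern, not something fixed by the addressing architecture:

```python
import ipaddress

# A typical global unicast address: 64-bit network ID + 64-bit interface ID.
addr = ipaddress.IPv6Address("2001:db8:1234:5678::1")  # documentation prefix
value = int(addr)

network_id = value >> 64               # upper 64 bits
interface_id = value & (2 ** 64 - 1)   # lower 64 bits

# One common carve-up of the network half: a /32 from the RIR,
# the next 16 bits from the LIR, and the last 16 bits as the subnet ID.
rir_block = network_id >> 32
subnet_id = network_id & 0xFFFF

print(hex(rir_block))     # 0x20010db8
print(hex(subnet_id))     # 0x5678
print(hex(interface_id))  # 0x1
```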

There are working groups inside the IETF that have succumbed to the siren call to encode information in the bits of IPv6 addresses. Teredo, 6to4, IRON and NAT64 are some examples of protocols that embed IPv4 addresses in bits carved out between a short prefix assigned by IANA and the subnet identifier. The Host Identity Protocol wants to encode cryptographic material in IPv6 addresses using a format called ORCHID. These are just the IETF efforts I know about that eat surprisingly large blocks of IPv6 address space by encoding things into subfields.
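6to4 is the easiest of these to illustrate: RFC 3056 maps every public IPv4 address to an entire /48 of IPv6 space by placing the 32 IPv4 bits immediately after the 2002::/16 prefix. A sketch (the helper name is mine):

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Return the 2002::/48 prefix that 6to4 (RFC 3056) derives from a
    public IPv4 address: 16 bits of 0x2002, then the 32 IPv4 bits,
    leaving 80 bits for the subnet and interface IDs."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Network((0x2002 << 112 | v4 << 80, 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Note what this costs: a single /16 of IPv6 space is permanently dedicated to mirroring the entire IPv4 internet.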

I predict once the vast teeming masses of Internet application developers really start cranking on IPv6 adoption, the street will find many new and interesting uses for that large address space, and the Internet registries will be inundated with demand for blocks of sufficient size to support their designs. Once you start trying to cram useful information into IP addresses, you pretty quickly discover that the number of available addresses isn't as galactic as those news articles and blog posts you're reading would like you to believe. Now, I feel certain that the experts at the NRO and the regional Internet registries know all this, but I suspect the rest of the Internet engineering community is developing some unhealthy misconceptions as interest in IPv6 spreads.

So... people... please stop gushing about how it's "inconceivable" that we might ever exhaust the free pool of IPv6 addresses like we've now run out of free IPv4 addresses. It's pretty easy to conceive how it could happen, and it would be a good idea to bear that in mind when developing your address plans.

I beg to differ... Let's look at the basic math.

Date: 2011-02-18 08:01 am (UTC)
From: [identity profile] http://www.google.com/profiles/diverflyer
Hey James, interesting piece.

First, the galactic number of which he speaks is 340 undecillion. Now, if James didn't know me (It's Owen from HE, James), he'd probably discount me as a non-expert just for knowing that number. I'm pretty sure he can't (or at least won't) do that in my case.

He's right that it's not consumed one address at a time. It's consumed 18 quintillion addresses per subnet at a time, and, sometimes in even larger units than that, just like IPv4. (These larger units can be called "prefixes" for convenience in this context.) He's also right about something else in the article, but, you have to keep reading to find out what if you don't already know...

Each prefix issued has a length specified in bits. For example, a small ISP gets a /32 prefix (which means that the RIR specifies 32 bits and the ISP gets to specify part or all of the rest). An ISP will give a typical end-site a /48 (meaning the ISP got their 32 bits specified by the RIR, and they specified the next 16 bits, leaving the remaining 16 bits of network numbering to the end-site to assign.) This means the end site has 65,536 subnets that they can assign and each subnet can hold up to 18 quintillion hosts.
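The arithmetic behind those figures is just powers of two:

```python
# /48 end-site assignments available inside one ISP /32:
end_sites_per_isp = 2 ** (48 - 32)
print(f"{end_sites_per_isp:,}")   # 65,536

# /64 subnets available inside one end-site /48:
subnets_per_site = 2 ** (64 - 48)
print(f"{subnets_per_site:,}")    # 65,536

# Host addresses (interface IDs) inside a single /64 subnet:
hosts_per_subnet = 2 ** 64
print(f"{hosts_per_subnet:,}")    # 18,446,744,073,709,551,616 ("18 quintillion")
```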

If you're worried about running out of these ISP assignments, as James apparently is, let's do some math on that...

First, there are a little more than 4 billion /32s, but, we only get 1/8th of that to play with in the current IETF designation (almost 7/8ths of the IPv6 address space is held in reserve in case our first stab at numbering doesn't pan out, so, there's a safety valve in case James is right). So we actually get about 500 million (a little more, technically) /32s to assign to ISPs before we run out of space.

There are currently approximately 30,000-50,000 ISPs on earth. If we opened a new ISP somewhere in the world every day, it would take nearly 1,470,000 years to use up those 500 million /32s.
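Checking that arithmetic (assuming the 2000::/3 global unicast block, which is the "1/8th of the space" in question):

```python
# /32 ISP allocations available inside a /3:
slash32s = 2 ** (32 - 3)
print(f"{slash32s:,}")             # 536,870,912 -- "about 500 million, a little more"

# One brand-new ISP per day, each taking one /32:
years_to_exhaust = slash32s / 365.25
print(f"{years_to_exhaust:,.0f}")  # ~1,469,872 years
```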

Now, the next argument James would make about this is that many ISPs will not get only a /32. This is true.
In fact, I'm personally advocating policy in both the ARIN and APNIC regions that will mean the vast majority of ISPs probably get a /28, and a small handful (probably fewer than 20 worldwide) will get as much as a /16.
So... let's look at that math. There are 65,536 /16s available, but, again, we only get 1/8th of them to play with for now, so, that's 8,192. Set aside 100 of those for the biggest ISPs (in case I'm way off about 20). Let's assume the remaining 50,000 ISPs are divided as 1,000 /20s, 10,000 /24s, and 39,000 /28s. (Probably fewer /20s and /24s and more /28s and a bunch of /32s, but, this way James can't say I wasn't conservative in my estimates.) Measured in /16-equivalents, the 1,000 /20s will consume 62.5, the 10,000 /24s will consume 39.0625, and the 39,000 /28s a whopping 9.5215 (rounded up). So, in total, that's 100 + 62.5 + 39.0625 + 9.5215 = 211.084; let's call it 250 for round numbers, leaving us with 7,942 of the 8,192 still available.
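The same tally in code, counting everything in /16-equivalents (one /20 is 1/16 of a /16, one /24 is 1/256, and so on):

```python
available = 2 ** 16 // 8              # 8,192 /16s in the current 1/8th of the space
consumed = (
    100                               # generous allowance for the biggest ISPs
    + 1_000 / 2 ** (20 - 16)          # 1,000 /20s  -> 62.5
    + 10_000 / 2 ** (24 - 16)         # 10,000 /24s -> 39.0625
    + 39_000 / 2 ** (28 - 16)         # 39,000 /28s -> ~9.5215
)
print(round(consumed, 3))             # 211.084
print(available - 250)                # 7,942 /16s left after rounding up to 250
```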

Oh, by the way, remember that even if we use up that 7,942, there's still almost 7/8ths of the IPv6 space available to use for a more conservative numbering scheme.

Now there is one place where I agree with James. This trend of application-specific, well-known, humongous address blocks has to stop. Either we need to find a way for these applications to share a single app-specific block, or we need to stop using addresses for app-specific data. The good news is that most of the things James mentions above are unlikely to ever leave the laboratory. HIP and ORCHID are the dream of some cryptographic researchers, but, so far, they look like the kind of thing only a cryptographer could love. Teredo, 6to4, and NAT64 are all likely to disappear very shortly after IPv6 becomes ubiquitous, and those addresses can probably be reclaimed within 5-10 years.

It's conceivable that if we keep giving applications addresses for the encoding of data, we could run out because programmers are very good at wasting almost any resource you put in front of them.

However, when you hear people saying that runout is inconceivable, most of them are not non-experts. Most of them are people like me who are focused on address allocation policy for the sake of allocating addresses to machines as intended. To us, any sensible scheme for assigning addresses to machines seems very, very, very unlikely to exhaust the address space. Inconceivable, as a matter of fact. I think the math above backs up that claim.

It is OK to give a /48 to every end site, including residential end-users. All the RIRs have policy that will accept this today, and that policy should be getting even better shortly. It's also OK for an ISP to get at least a /32 and to align their hierarchies on nibble boundaries for human-factors engineering purposes (which is fancy terminology for: even-digit boundaries = fewer mistakes = fewer outages). It's also OK to round up ISP allocations to nibble boundaries for all the same reasons. None of those practices will consume more than 0.5% of the available space (best estimate: 0.45%) in the next 50 years, worldwide, for the purposes of addressing networks and machines.
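"Nibble boundary" alignment just means the prefix length is a multiple of 4 bits, so allocation boundaries fall on whole hexadecimal digits of the address. Rounding an allocation up to a nibble boundary looks like this (a small sketch; the function name is mine):

```python
def round_up_to_nibble(prefix_len: int) -> int:
    """Round a prefix length to the next nibble (4-bit) boundary in
    the direction of a LARGER allocation, i.e. a shorter prefix."""
    return (prefix_len // 4) * 4

# An ISP that justifies a /29 gets rounded up to a /28; a /32 stays put.
for plen in (29, 32, 35, 42):
    print(f"/{plen} -> /{round_up_to_nibble(plen)}")
```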

So, when you hear people preaching that we need to conserve address space, it's important to look at the whole picture. We should be conservative in non-addressing uses of address space. We can, and should, still be reasonably liberal in address allocation and assignment.

Thanks for your time,

Owen DeLong
IPv6 Evangelist
Hurricane Electric

