I'm curious about something, so I'm going to throw this thought experiment out here. For background: I run a pure IPv6 network and dove into v6 ignoring any v4 baggage, so this is more of a devil's advocate question than anything I genuinely believe.
Onto the question: why should I run a /64 subnet and "waste" all those addresses, as opposed to running a /96 or even a /112?
It breaks SLAAC and Android
let's assume I don't care for whatever reason and I'm content with DHCPv6; maybe Android actually supports DHCPv6 in this alternate universe
It breaks RFC3306 aka Unicast-prefix-based multicast groups
No applications I care about are impacted by this breakage
It violates the purity of the spec
I don't care
What advantages does running a /64 provide over smaller subnets? Especially subnets like a /96, where the address count still far exceeds usage, so filling the subnet remains impossible.
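For scale, here's a quick sketch (using Python's `ipaddress` module) of just how oversized each of the candidate sizes is:

```python
import ipaddress

# Compare how many addresses each candidate subnet size holds.
for plen in (64, 96, 112):
    net = ipaddress.IPv6Network(f"2001:db8::/{plen}")
    print(f"/{plen}: {net.num_addresses:.3e} addresses")
# /64 ~1.8e+19, /96 ~4.3e+09, /112 ~6.6e+04
```

Even a /112 leaves ~65k host addresses per subnet, which is the point of the question: at /96 or /112 you still can't come close to filling a subnet.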
🤔 Does it actually break PD? That's actually not an awful reason if it does; it would make sense. Outside of this post I fall into the /64-everywhere crowd, minus the cases for /127. Your gripe with point 2 is fair, although I haven't come across any applications that need it, beyond the applications I've written that use it, because again, IRL I'm in the /64-everywhere crowd. Thanks for the response though.
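For anyone wondering what point 2 concretely breaks: RFC 3306 embeds the unicast prefix in a 64-bit field of the multicast group address, and the spec caps the prefix length at 64, so a /96 simply doesn't fit. A rough sketch of the mapping (function name is mine, not from any library):

```python
import ipaddress

def prefix_based_group(prefix: str, group_id: int,
                       scope: int = 5) -> ipaddress.IPv6Address:
    """Derive an RFC 3306 multicast group from a unicast prefix.
    Layout: FF | flags=3 | scope | reserved | plen | prefix(64) | group-id(32).
    """
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen <= 64, "RFC 3306 requires plen <= 64"
    addr = (0xFF3 << 116) | (scope << 112) | (net.prefixlen << 96) \
        | ((int(net.network_address) >> 64) << 32) | group_id
    return ipaddress.IPv6Address(addr)

# e.g. 2001:db8::/64 with group ID 0x1234, site-local scope
print(prefix_based_group("2001:db8::/64", 0x1234))  # ff35:40:2001:db8::1234
```

Try the same call with `"2001:db8::/96"` and the assertion fires, which is the breakage in a nutshell.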
What advantages does running a /64 provide over smaller subnets?
Nibble boundaries and MAC-48. Way back in the day, like the 90s, the plan was that your IP would be matched to your MAC address, so no matter which network you were connected to, your last 48 bits would stay the same.
But then they needed to start using control bits for various reasons, so a proposal was made to increase the host portion to 60 bits; 60 is a bit of an awkward nibble boundary, though, so they decided to expand it to 64.
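For reference, that 48→64 expansion is what gives us the modified EUI-64 interface IDs that SLAAC uses: split the MAC in half, insert ff:fe, and flip the universal/local bit (RFC 4291, Appendix A). A quick illustrative sketch:

```python
def mac_to_modified_eui64(mac: str) -> str:
    """Build the 64-bit SLAAC interface ID from a MAC-48 address
    (RFC 4291 Appendix A): insert ff:fe in the middle and flip
    the universal/local bit in the first byte."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip the U/L bit
    eui = b[:3] + bytes([0xFF, 0xFE]) + b[3:]      # 48 bits -> 64 bits
    return ":".join(f"{eui[i] << 8 | eui[i+1]:x}" for i in range(0, 8, 2))

print(mac_to_modified_eui64("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
```

So with /64 everywhere, `2001:db8:1::/64` plus that MAC yields `2001:db8:1:0:21a:2bff:fe3c:4d5e`; anything longer than /64 leaves no room for the 64-bit interface ID.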
Ultimately, as the network admin, you can run whatever network size you want. The preset prefix sizes are recommended, not mandatory.
I run a pure IPv6 network and dove into v6 ignoring any v4 baggage
Now for the controversial bit.
You are showing some v4 baggage there by trying to conserve address space; it is not needed.
There are so many addresses available that the currently allocated public block won't be exhausted for over 100 years, and we still have 5 more blocks to use.
I'm not looking for this type of answer. I'm aware of why v6 was designed with /64 subnets, and I'm also aware we don't need to conserve addresses; both of those reasons are why I prefixed my question with the devil's advocate bit. I understand all of this, and then I proceed to describe why MAC-based (or more generally SLAAC) addressing doesn't matter to me, because we have DHCPv6 and DHCPv6 works great. Who needs SLAAC? You cannot convince me to use SLAAC; SLAAC is not important to me or my hypothetical use cases.
...also yes, I'm showing v4 baggage, because again: devil's advocate. This is a thought experiment, not a genuine question, and in it I just think a /64 is dumb; a /96 is much nicer because it's still plenty big while not being quite so excessive. Keep in mind, IRL I'm a firm believer in /64 everywhere and I don't carry v4 baggage. Hypothetical me from this question does, and it's not going away, because 4.3B addresses is still PLENTY when you don't care about the purity of the v6 design.
Standardizing /64 everywhere is great when you want to immediately figure out which part is the "network number" and which part is the "host number".
The standardization also helps in conserving route table space as the routers don't have to care about the last 64 bits of IPv6 addresses, because you are routing /64 networks around, not the hosts. (I believe that's why people do the "reserve /64, assign /127" thing for P2P links.)
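That fixed split is easy to see in code: with /64 everywhere, you can separate the network part from the interface ID with plain bit masks, no per-subnet knowledge needed (a sketch using Python's `ipaddress` module):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8:abcd:12:a1b2:c3d4:e5f6:7")

# With /64 everywhere the split point is fixed:
# top 64 bits = network number, low 64 bits = host (interface ID).
network = ipaddress.IPv6Network((int(addr) >> 64 << 64, 64))
iid = int(addr) & (2**64 - 1)

print(network)      # 2001:db8:abcd:12::/64
print(f"{iid:x}")   # a1b2c3d4e5f60007
```

A router carrying /64 routes only ever looks at those top 64 bits; with mixed /96 and /112 subnets, the split point varies per route.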
This is outside the scope of your question, and old, but I keep seeing something like this as a justification for not caring about conservation the way v4's limitations forced us to. I have questions though.
v4 has been around for ~50 years, right?
Does everyone really believe that we'll replace the protocol before then, so there's no need to worry about a repeat taking place?
Does anyone think it's weird that we're repeating previously unforeseen requirements? IBM said there'd never be a need for more than so many KB of RAM, etc.
Do we think the rate of IP-enabled devices will increase, decrease, or stay the same as we progress through that upcoming 100 years?
I suppose we'll just migrate to a new protocol version when we finally get that colony on Mars, expand all over the universe and run out of the infinite IP space 😉
We may well replace it by then, but when people quote the 100-year figure like I have, we sometimes forget to mention that it refers only to the currently allocated 2000::/3 block.
We have several other blocks reserved for future use, each of which will take several hundred years to use up.
If we find a more efficient way of using address space, we can apply those methods to the other /3 blocks.