
Talk:CIDR notation

From Citizendium

In IPv4 & IPv6 articles

It's in them to some extent; CIDR notation is the standard for IPv6, and subnet masks are deprecated in IPv4.

I've changed "subnet mask" to "network prefix".
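(To make the equivalence concrete, a small illustration using Python's standard `ipaddress` module and documentation-range addresses; not part of the original discussion. A /24 network prefix and the dotted-quad mask 255.255.255.0 carry exactly the same information.)

```python
import ipaddress

# A network written in CIDR notation: prefix length /24
net = ipaddress.ip_network("192.0.2.0/24")

# The deprecated subnet-mask form describes the same network
print(net.prefixlen)   # 24
print(net.netmask)     # 255.255.255.0

# ipaddress accepts the mask form directly and normalizes it to CIDR
same = ipaddress.ip_network("192.0.2.0/255.255.255.0")
print(same == net)     # True
```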

Many routers do this comparison with hardware assistance, and, in some cases, with ternary rather than binary masks. Howard C. Berkowitz 00:44, 27 October 2009 (UTC)
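(A sketch of the binary comparison being described, added for illustration; the addresses are from the documentation range, and the hardware of course does this in parallel rather than in Python.)

```python
# A router's basic prefix match: AND the destination address with the
# mask, then compare the result against the network prefix.
def matches(dest: int, prefix: int, mask: int) -> bool:
    return (dest & mask) == (prefix & mask)

# 192.0.2.0/24 -> a mask of 24 one-bits
MASK24 = 0xFFFFFF00
PREFIX = 0xC0000200                          # 192.0.2.0
print(matches(0xC0000242, PREFIX, MASK24))   # 192.0.2.66 -> True
print(matches(0xC0000342, PREFIX, MASK24))   # 192.0.3.66 -> False
```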

I'm never sure whether to edit an existing article like IPv4, or just write a new one when I need a quick explanation of a topic like CIDR notation. The need for a short explanation came up in an article on an email authentication method (SPF). In this case, an entire article on IPv4, or even CIDR, would be too much. --David MacQuigg 12:07, 27 October 2009 (UTC)
Then let's see if we can extract the CIDR notation part out of three articles and have an article to which all can link. One of the questions is whether that article should discuss the deprecated subnet mask notation. If an IPv4 article keeps subnet masks but links to CIDR, it might overemphasize masks. Maybe the article should be "IP address prefix length description." Howard C. Berkowitz 14:33, 27 October 2009 (UTC)
Sounds good. I can add a section with an example of a subnet mask, basically summarizing what is in Peterson & Davie. If we need more on subnet masks, then a separate article makes sense. That is not my area of expertise, however, so you might have to write that one.
Looks to me like the whole awkward business of subnets and CIDR came out of the inefficiency in allocating the original class A, B, and C networks. If we were doing it from scratch, we would use a notation specifying arbitrary address ranges. This would allow the address space to be subdivided as precisely as any small domain could want (a block of 5 - no problem), while still allowing aggregation of larger blocks in the routing tables. At some point upstream, where router speed is critical, the blocks may need to fall on 2^N (maskable) boundaries, but I don't think that should matter to routers at the campus level. I wonder if it even matters to the backbone routers. Dedicated hardware could compare an address against upper and lower limits about as quickly as it could apply a mask.
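(A sketch of the trade-off being proposed, added for illustration with documentation-range addresses: a two-comparison range test can express an arbitrary block such as 5 addresses, while a mask test can only express aligned blocks of size 2^N, so the smallest maskable block covering those 5 addresses is a /29 holding 8.)

```python
# Range test: works for any block, e.g. 5 addresses 192.0.2.32 .. 192.0.2.36
LOWER, UPPER = 0xC0000220, 0xC0000224
def in_range(dest: int) -> bool:
    return LOWER <= dest <= UPPER

# Mask test: only expresses 2^N blocks on aligned boundaries; the
# smallest maskable block covering those 5 addresses is a /29 (8 addresses)
MASK29 = 0xFFFFFFF8
def in_masked_block(dest: int) -> bool:
    return (dest & MASK29) == (LOWER & MASK29)

print(in_range(0xC0000224))         # 192.0.2.36 -> True
print(in_masked_block(0xC0000226))  # 192.0.2.38 -> True (overshoot: /29 covers .32-.39)
```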
So now we have CIDR notation in SPF authentication records, where it makes no sense at all. --David MacQuigg 17:37, 27 October 2009 (UTC)
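(For context, an illustrative SPF record using CIDR notation would look like the TXT record `v=spf1 ip4:192.0.2.0/24 ~all`, where the /24 authorizes any sender in that block; the check it implies can be sketched with Python's `ipaddress` module. The domain and addresses here are hypothetical.)

```python
import ipaddress

# Hypothetical SPF mechanism: ip4:192.0.2.0/24
authorized = ipaddress.ip_network("192.0.2.0/24")

def spf_ip4_matches(sender_ip: str) -> bool:
    # An SPF ip4 mechanism matches when the connecting address
    # falls inside the stated CIDR block.
    return ipaddress.ip_address(sender_ip) in authorized

print(spf_ip4_matches("192.0.2.17"))    # True
print(spf_ip4_matches("198.51.100.5"))  # False
```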
Maybe a notation as in Routing Policy Specification Language#Range Expressions?
Remember that a real-world router has to make many address comparisons, not just for forwarding but for access control lists, conditional handling (e.g., routing policy), etc. Border Gateway Protocol associates a good deal of information with that address range. Trust me, in the forwarding plane (including filtering) of a gigabit router, lookup performance is important. When you start getting into real-world security and management, you often wind up having to pipeline the packet through five or more processors, some special-purpose. In the Cisco GSRs (12000 series), the autonomous forwarding cards have synchronously running main forwarding processors and a "punt" path to a processor that examines the packet more closely; the timing is tight enough that a packet may have to queue for lookup if it misses a lookup cycle.
When I was doing router design, I was more concerned with the processing of routing protocols and the building of the tables; one of my colleagues was the hardware/firmware lookup specialist, but I know some of the principles. Ternary Content Addressable Memory gives you some very nice capabilities in edge routers, but it's too expensive for backbone routers (see RFC 4098 for definitions of ISP router functions).
I can go through the whole addressing history, and indeed my first book is on addressing architectures. The problems that CIDR addressed were discussed earlier, but really came into serious discussion around RFC 1518. If you go back to the original IPv4, RFC 760, you'll see a brief period where all prefixes were /8. It wasn't long before RFC 791 introduced classful addressing (spit-pfui) but subnets came later. Howard C. Berkowitz 18:49, 27 October 2009 (UTC)
I wish there were a fundamental explanation for why we can't have arbitrary address ranges, at least at the edges where most of the allocations to small networks are done. There would seem to be great benefit to this: near-perfect utilization of the available addresses. I don't doubt your conclusion, I just don't follow the reasoning. As an IC designer, it seems to me that a simultaneous test against an arbitrary upper and lower limit would go as fast as masking and testing for equality. Interesting discussion, but I guess we are way off the topic of CIDR notation in an SPF record. --David MacQuigg 19:16, 27 October 2009 (UTC)
Yes, it is digressing, although it may be grist for something else. I will say that the consensus, in designing IPv6, was that efficient utilization, which had driven IPv4 to more and more ridiculous extents, should be consciously rejected as a desirable goal. The 128-bit IPv6 address is meant not so much to give unique addresses to every ant on the planet as to allow easy allocation, as well as varying levels of aggregation AND local significance. Howard C. Berkowitz 19:19, 27 October 2009 (UTC)