RFC 5952, section 5, recommends the mixed notation for some IPv6 addresses if the address has a certain prefix. However, it is unclear which prefixes qualify, because the RFC only says the notation may be used for prefixes commonly used for IPv4-mapped addresses, which basically means any prefix could be used for this.
Now my question is:
May every IPv6 address be written as an IPv4-mapped IPv6 address?
If not, what are the exact rules for correctly writing an IPv4-mapped IPv6 address?
You may use the IPv4 notation for the last 32 bits of any IPv6 address. The RFC you mention is about the recommended notation; it doesn't specify all correct notations. The RFC that does is https://www.rfc-editor.org/rfc/rfc4291#section-2.2, and it allows the IPv4-style notation for any address. (Note that this isn't called an "IPv4-mapped IPv6 address"; that is actually a special range of addresses that commonly uses this notation style.)
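For illustration, a minimal sketch using Python's ipaddress module (the specific addresses are just examples): the dotted-quad spelling of the last 32 bits and the all-hexadecimal spelling denote the same address, regardless of the prefix.

```python
import ipaddress

# Last 32 bits written in dotted-quad (mixed) notation ...
mixed = ipaddress.ip_address("2001:db8::192.0.2.33")
# ... and the same address written entirely in hexadecimal groups.
hex_only = ipaddress.ip_address("2001:db8::c000:221")

assert mixed == hex_only   # same 128-bit value, two spellings
print(mixed)               # canonical text form chosen by the library
```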
I'm trying to explain base64 to someone, but I'm not grasping the fundamental need for it. And/or it seems like all network protocols would need it, meaning base64 wouldn't be a special thing.
My understanding is that base64 is used to convert binary data to text data so that the protocol sending the text data can use certain bytes as control characters.
This suggests we'll still be using base64 as long as we're sending text data.
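As a minimal sketch of that idea (using Python's standard base64 module; the specific bytes are arbitrary), the encoder turns bytes that a text protocol might treat as control characters into plain printable ASCII:

```python
import base64

payload = bytes([0x00, 0x0D, 0x0A, 0xFF])   # NUL, CR, LF, 0xFF: awkward in a text protocol
encoded = base64.b64encode(payload)          # b'AA0K/w==' -- plain printable ASCII
decoded = base64.b64decode(encoded)          # round-trips back to the original bytes
assert decoded == payload
```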
But why do I only see base64 being used for SMTP and a few other contexts? Don't most commonly-used protocols need to:
- support sending of arbitrary binary data
- reserve some bytes for themselves as control chars
Maybe I should specify that I'm thinking of TCP/IP, FTP, SSH, etc. Yet base64 is not used in those protocols, as far as I know. Why not? How do those protocols solve the same problems? Which then raises the reverse question: why doesn't SMTP use that solution instead of base64?
Note: obviously I have tried looking at WP, other Stack-O questions about base64, etc, and not found an answer to this specific question.
I've seen IPv4 subnet address ranges expressed in a compact form here and there.
For example:
127/24 == 127.0.0.0/24
10/8 == 10.0.0.0/8
10.10.10/24 == 10.10.10.0/24
BTW I can't find any RFC (or any other kind of official or semi-official documentation) that describes it.
Does anyone have some links to share?
I recall this notation being used on Juniper routers as far back as 2001; I'm not sure what RFC, if any, defined it. RFCs do not define the whole planet; somehow, over the years, they came to replace specifications (which are, or were, much more detailed), even though they were originally intended as Requests For Comments. (Gee, I wonder why there are so many bugs in networking gear.)
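For what it's worth, here is a small sketch of how such shorthand can be expanded; the right-padding rule is an assumption drawn from the examples above, not something defined in an RFC.

```python
import ipaddress

def expand_shorthand(prefix: str) -> ipaddress.IPv4Network:
    """Expand shorthand like '10.10.10/24' into '10.10.10.0/24'."""
    addr, length = prefix.split("/")
    octets = addr.split(".")
    octets += ["0"] * (4 - len(octets))          # pad missing octets with zeros
    return ipaddress.IPv4Network(".".join(octets) + "/" + length)

print(expand_shorthand("127/24"))       # 127.0.0.0/24
print(expand_shorthand("10/8"))         # 10.0.0.0/8
print(expand_shorthand("10.10.10/24"))  # 10.10.10.0/24
```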
There are simple network protocol structures (e.g. IPv4, TCP, UDP, ...) which can easily be described in any language via structures. But there are more complicated structures with optional fields/blocks and dynamic block/field sizes (TLV, LV, etc.), e.g. IPv6, SCTP, PROFINET IO (decentralized periphery), ...
My question is: how do I properly describe a protocol's data structure and store that description for future use? E.g. generating structures for different languages, or getting all trees (e.g. in IPv6, Wireshark's ipv6.opt.pdm.delta_last_recv), or getting all fields for a specific block/extension/option of the protocol.
I hope the description is clear. Thanks.
The ASN.1 language was created to solve this and other problems like it. IMHO, the reason you do not see it used often is that the language got very complex and different factions started to use it in different ways (SNMP MIBs, crypto/X.509, etc.), which resulted in ASN.1 compilers being specialized rather than general.
Often, instead of ASN.1, you see a C-struct definition of the packet, or just an RFC packet diagram (you can use the protocol tool to generate one) with some markings (like ...) to indicate variable length.
I guess protobuf technically also qualifies as a language that describes a binary message, though I do not believe it is a general language that can describe any message; it is meant to be used by other protobuf-enabled applications.
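As a small illustration of the distinction in the question (a sketch using Python's struct module, not tied to any particular description language): a fixed-layout header such as IPv4's first 20 bytes can be described by a single format string, which is exactly what stops working once optional or TLV-encoded fields appear.

```python
import struct

# Fixed 20-byte IPv4 header (without options): every field has a fixed
# size and offset, so one format string describes the whole layout.
IPV4_HEADER = struct.Struct("!BBHHHBBH4s4s")

def parse_ipv4_header(data: bytes) -> dict:
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = IPV4_HEADER.unpack(data[:IPV4_HEADER.size])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,   # IHL field is in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }
```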
It is a known fact that there are three blocks of IPv4 Addresses that were chosen to be reserved for private networks:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
(as specified by RFC 1918). However, although I can sort of see why 10.0.0.0 would be a natural choice, I can think of no particular reason why 172.16.0.0 and 192.168.0.0 were chosen among all the possibilities. I tried to google about this but got nothing, and the RFC document did not provide any explanation either. Was it really just a random decision?
As stated by ganeshh.iyer:
10.0.0.0/8 was the old ARPANET, which they picked up on 01-Jan-1983. When they shut down the ARPANET in 1990, the 10.0.0.0/8 block was freed. There was much argument about whether there should ever be private IP space, given that a goal of IPv4 was universal addressing for all hosts on the net.
In the end, practicality won out, and RFC 1597 reserved the now well-known private address spaces. When ARPANET went away, the 10.0.0.0/8 allocation was marked as reserved, and since it was known that the ARPANET was truly gone (the hosts having been moved to MILNET, NSFNET or the Internet), it was decided that this was the best Class A block to allocate.
Note Class A. This was before CIDR. So, the Class A, B and C private address netblocks needed to come out of the correct IP ranges.
I know that 172.16.0.0/12 was picked because it offered the largest contiguous block of Class B (/16) addresses in the IP space that was in a reserved block. 192.0.0.0/24 was always reserved for the same reason that 0.0.0.0/8 and 128.0.0.0/16 were reserved (the first blocks of the old Class C, A and B network blocks), so assigning 192.168.0.0/24 out as private fit well -- 192.0.2.0/24 was already TEST-NET, where you could use addresses in public documentation without fear of someone trying them (see example.com for another example).
Quoted from:
https://supportforums.cisco.com/thread/2014967
https://supportforums.cisco.com/people/ganeshh.iyer
I understand that the HTTP_X_FORWARDED_FOR header is set by proxy servers to identify the IP address of the host that is making the HTTP request through the proxy. I've heard claims that the HTTP_CLIENT_IP header is set for similar purposes.
What is the difference between HTTP_CLIENT_IP and HTTP_X_FORWARDED_FOR?
Why would one have different values than the other?
Where can I find resources on the exact definition of these headers?
Neither of these headers is officially standardised. Therefore:
What is the difference between HTTP_CLIENT_IP and HTTP_X_FORWARDED_FOR? - It is impossible to say. Different proxies may or may not implement them, and the implementations may or may not vary from one proxy to the next. A lack of a standard breeds question marks.
Why would one have different values than the other? - See point 1. However, from a purely practical point of view, the only reason I can see for these having different values is if more than one proxy was involved - the X-Forwarded-For: header might then contain a complete trace of the forwarding chain, whereas the Client-IP: header would contain the actual client IP. This is pure speculation, however.
Where can I find resources on the exact definition of these headers? - You can't. See point 1.
There does seem to be some kind of de-facto standard regarding the X-Forwarded-For: header, but given that there is no RFC that defines it, this cannot be relied upon; see the comment below.
As a side note, the Client-IP: header should by convention be X-Client-IP: since it is a 'user-defined' header.
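For completeness, a sketch of how the de-facto X-Forwarded-For format is usually interpreted (the header names and dictionary shape here are assumptions for illustration; none of this is standardised, and the values are trivially forgeable, so they must never be trusted for security decisions):

```python
def guess_client_ip(headers: dict):
    """Best-effort guess at the original client IP from proxy headers."""
    xff = headers.get("X-Forwarded-For")
    if xff:
        # De-facto convention: comma-separated list, original client first,
        # with each proxy appending the address it received the request from.
        return xff.split(",")[0].strip()
    return headers.get("X-Client-IP") or headers.get("Client-IP")

# Example: request passed through two proxies
print(guess_client_ip({
    "X-Forwarded-For": "203.0.113.7, 198.51.100.10, 192.0.2.44",
}))  # -> 203.0.113.7
```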