Consider this Wireshark trace for an H.225 RAS Registration Request (RRQ):
As you can see, Wireshark decodes requestSeqNum as 25601, but the bytes on the wire are 0x6400, which is 25600. I looked in the ASN.1 PER encoding rules, but I can't find a reason why the value would have to be increased by 1. My question: is Wireshark decoding this correctly, and if so, where can I find this in the spec?
ASN.1 code:
RequestSeqNum ::= INTEGER (1..65535)
In fact, Wireshark consistently adds 1 to requestSeqNum for all H.225 messages.
Never mind, I found it in the ASN.1 PER encoding rules spec (ITU-T X.691):
11.5.7.3 (The two-octet case.) If the "range" has a value greater than or equal to 257 and less than or equal to 64K, then the value ("n" – "lb") shall be encoded in a two-octet bit-field (octet-aligned in the ALIGNED variant) as a non-negative-binary-integer encoding as specified in 11.3.
The lower bound ("lb") is 1 in this case, which explains the off-by-one I was seeing.
I was looking in the wrong place in the spec; I get headaches from reading specifications :)
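To make the arithmetic concrete, here is a tiny sketch (in Python, and obviously not Wireshark's actual dissector code) of how a PER decoder recovers the value from the two octets in the trace:

```python
# PER-constrained INTEGER (1..65535): the wire carries (n - lb), so the
# decoder must add the lower bound back after reading the two octets.
lb = 1                                   # lower bound of RequestSeqNum
encoded = bytes.fromhex("6400")          # the two octets from the trace
n = int.from_bytes(encoded, "big") + lb  # 25600 + 1 = 25601
print(n)                                 # 25601, matching Wireshark
```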
Let's say I'm sending a multipart request (or response). I need to choose a multipart boundary which does not appear in any of my payloads. However, my payloads are large binary files and I am streaming them to the destination. I want to avoid streaming them twice - once to scan for the boundary and once to stream out.
So my question is: is it possible to escape the boundary if it appears in the payload? If so, how?
Don't Panic. Your boundary can be up to 70 characters long. If you go with that maximum and generate it randomly from letters and digits, you have 62⁷⁰ possible boundaries. The chance of that exact 70-character sequence also appearing in your binary files is so infinitesimal that it shouldn't cost you any sleep 😀. The probability of a collision in a 1 GB file is roughly 1-((1-(1/(62^70)))^(10^9)) ≈ 3.4×10⁻¹¹⁷. The human brain can't really fathom how small that number is. For comparison, the number of atoms in the observable universe is estimated to be ~10⁸⁰.
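If you want to double-check that figure, here's a quick sketch (the 10⁹ positions and the 62-character alphabet are the assumptions from above):

```python
from math import expm1, log1p

# Probability that a fixed 70-character alphanumeric boundary appears at
# any of ~10^9 positions: 1 - (1 - 62^-70)^(10^9).
p = 62.0 ** -70                    # match probability at one position
n = 10 ** 9                        # ~1 GB of candidate positions
collision = -expm1(n * log1p(-p))  # numerically stable 1 - (1-p)^n
print(collision)                   # ~3.4e-117
```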
No, it's not possible: MIME defines no escape mechanism for the boundary, so you need to either scan, or live with potential failures.
I'm writing some code to convert an IPv4 address stored in a string to a custom data type (a class with 4 integers, in this case).
I was wondering if I should accept IPs like the one I put in the title, or only IPs with no leading zeros. Let's look at an example.
These two IPs represent the same thing to us humans, and Windows' network configuration, for example, accepts both:
192.56.2.1 and 192.056.2.01
But I was wondering if the second one is actually correct or not.
I mean, according to the RFCs, is the second IP valid?
Thanks in advance.
Be careful: inet_addr(3) is one of Unix's standard APIs for translating a textual representation of an IPv4 address into an internal representation, and it interprets 056 as an octal number:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/inet_addr.html
All numbers supplied as parts in IPv4 dotted decimal notation may be decimal, octal, or hexadecimal, as specified in the ISO C standard (that is, a leading 0x or 0X implies hexadecimal; otherwise, a leading '0' implies octal; otherwise, the number is interpreted as decimal).
Its younger siblings, like inet_ntop(3) and getaddrinfo(3), are specified the same way:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/inet_ntop.html
http://pubs.opengroup.org/onlinepubs/9699919799/functions/getaddrinfo.html
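A quick demonstration of the divergence. Note the assumptions: socket.inet_aton just wraps the platform's C routine, so the first result reflects a typical glibc system, and the strict ipaddress behavior is that of Python 3.9.5 and later:

```python
import ipaddress
import socket

# socket.inet_aton delegates to the platform's C library, which per
# POSIX treats a leading 0 as octal: 056 == 46 and 01 == 1.
packed = socket.inet_aton("192.056.2.01")
print(".".join(str(b) for b in packed))  # 192.46.2.1 -- not 192.56.2.1!

# Stricter parsers refuse the ambiguity outright: recent versions of
# Python's ipaddress module reject leading zeros altogether.
try:
    ipaddress.IPv4Address("192.056.2.01")
except ValueError as err:
    print(err)
```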
Summary
Although textual representations of IP addresses like 192.056.2.01 might be accepted on all platforms, different OSes interpret them differently.
That alone would be enough reason for me to avoid this kind of representation.
Pros
In decimal notation, 056 equals 56, so why not?
Cons
The 0XX format is commonly used for octal notation.
Whatever you decide, just put it in your documentation and it will be OK :)
Whether it is correct or not depends on your implementation.
As you mentioned, Windows considers it correct because it strips any leading zeros when it resolves the IP.
So if you apply the same logic in your program, e.g. storing each part of the IP in your 4-integer class without the leading zeros, it will be correct for your case too. A sketch of that logic follows.
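A minimal sketch of that approach (a hypothetical class, parsing each part as decimal so leading zeros are simply discarded):

```python
class IPv4:
    """Four-integer IPv4 holder; parses each part as decimal,
    which discards any leading zeros."""

    def __init__(self, text: str):
        parts = text.split(".")
        if len(parts) != 4:
            raise ValueError(f"expected 4 dot-separated parts: {text!r}")
        self.octets = []
        for part in parts:
            if not part.isdigit():           # digits only, no signs/spaces
                raise ValueError(f"invalid part: {part!r}")
            n = int(part, 10)                # decimal; drops leading zeros
            if n > 255:
                raise ValueError(f"part out of range: {part!r}")
            self.octets.append(n)

# Both spellings from the question parse to the same four integers:
print(IPv4("192.56.2.1").octets)    # [192, 56, 2, 1]
print(IPv4("192.056.2.01").octets)  # [192, 56, 2, 1]
```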
Textual Representation of IPv4 and IPv6 Addresses is an “Internet-Draft”, which, I guess, is like an RFC wanna-be. (Also, it expired a decade ago, on 2005-08-23, and apparently has not been reissued, so it's not even close to being official.)
Anyway, in Section 2: History it says,
The original IPv4 “dotted octet” format was never fully defined in any RFC, so it is necessary to look at usage, rather than merely find an authoritative definition, to determine what the effective syntax was. The first mention of dotted octets in the RFC series is … four dot-separated parts, each of which consists of “three digits representing an integer value in the range 0 through 255”. A few months later, [[IPV4-NUMB][3]] … used dotted decimal format, zero-filling each encoded octet to three digits.
⋮
Meanwhile, a very popular implementation of IP networking went off in its own direction. 4.2BSD introduced a function inet_aton(), … [which] allowed octal and hexadecimal in addition to decimal, distinguishing these radices by using the C language syntax involving a prefix “0” or “0x”, and allowed the numbers to be arbitrarily long. The 4.2BSD inet_aton() has been widely copied and imitated, and so is a de facto standard for the textual representation of IPv4 addresses. Nevertheless, these alternative syntaxes have now fallen out of use … [and] All the forms except for decimal octets are seen as non-standard (despite being quite widely interoperable) and undesirable.
So, even though [POSIX defines the behavior of inet_addr][4] to interpret a leading zero as octal (and a leading “0x” as hex), it may be safest to avoid it.
P.S. [RFC 790][3] has been obsoleted by [RFC 1700][5], which uses decimal numbers of one, two, or three digits, without leading zeroes.
[3]: https://www.rfc-editor.org/rfc/rfc790 "the Assigned Numbers RFC"
[4]: http://pubs.opengroup.org/onlinepubs/9699919799/functions/inet_addr.html
[5]: https://www.rfc-editor.org/rfc/rfc1700
I want code to represent n bits with n + x bits, non-sequentially. I'd Google it, but my Google-fu isn't working because I don't know the term for it.
For example, the input value in the first column (2 bits) might be encoded as any of the output values in the comma-delimited second column (4 bits) below:
0 1,2,7,9
1 3,8,12,13
2 0,4,6,11
3 5,10,14,15
My goal is to take a list of integer IDs and transform them so that they can still be used for persistent URLs, but can't be iterated/enumerated sequentially, and so that a client cannot determine programmatically whether a URL in a search result set has been visited previously without visiting it again.
I would term this process "encoding". You'll see something similar done to permit the use of communication channels that reserve special symbols, which therefore cannot appear in the data. Examples: uuencoding and base64 encoding.
That said, you still need to ensure that there is only one correct decode (which, at first blush, you appear to have done), and accept the increase in the size of the output (in the case above, the output will be double the size of the input, bit for bit). A sketch of the table-driven approach is below.
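Here is a minimal sketch of that table-driven approach, using the exact mapping from the question. (Beware: a fixed table this small does nothing cryptographically; it only obscures casual enumeration.)

```python
import random

# The mapping from the question: each 2-bit input may be emitted as any
# of four 4-bit outputs, and no output is shared between inputs, so
# there is exactly one correct decode.
CODEWORDS = {
    0: [1, 2, 7, 9],
    1: [3, 8, 12, 13],
    2: [0, 4, 6, 11],
    3: [5, 10, 14, 15],
}
DECODE = {cw: v for v, cws in CODEWORDS.items() for cw in cws}

def encode(value: int) -> int:
    """Encode a 2-bit value as a randomly chosen 4-bit codeword."""
    return random.choice(CODEWORDS[value])

def decode(codeword: int) -> int:
    """Recover the original 2-bit value from its codeword."""
    return DECODE[codeword]

for v in range(4):
    cw = encode(v)
    assert decode(cw) == v
    print(f"{v} -> {cw:2d} -> {decode(cw)}")
```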
I think you'd be better off encrypting the number with a cheap cipher plus a constant secret key stored on your server(s), adding a random character or four at the end, plus a cheap checksum, and simply rejecting anything that doesn't have a valid checksum.
<encrypt(secret)>
<integer>+<random nonsense>
</encrypt>
+
<checksum()>
<integer>+<random nonsense>
</checksum>
Then decrypt the first part (remember, cheap == fast), validate it using the checksum, throw away the random nonsense, and use the integer you stored.
There are probably some cryptographic no-no's here, but let's face it, the cost of this algorithm being broken is a touch on the low side.
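As one sketch of that shape, the third-party cryptography package's Fernet recipe bundles the cipher and an integrity check into a single URL-safe token, so it plays the roles of both the encrypt() and checksum() parts above (the trade-off is token length, roughly 100+ characters):

```python
import secrets

from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

KEY = Fernet.generate_key()  # constant server-side secret; store it safely
f = Fernet(KEY)

def id_to_token(record_id: int) -> str:
    # The integer plus a few bytes of random nonsense, as suggested above.
    payload = f"{record_id}:{secrets.token_hex(4)}".encode()
    return f.encrypt(payload).decode()

def token_to_id(token: str) -> int:
    # Fernet's built-in HMAC is the "checksum": tampered or guessed
    # tokens raise InvalidToken instead of decrypting to garbage.
    payload = f.decrypt(token.encode()).decode()
    return int(payload.split(":")[0])

token = id_to_token(42)
assert token_to_id(token) == 42
try:
    token_to_id(token[:-2] + "xx")  # a tampered token is rejected
except InvalidToken:
    print("rejected")
```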
I am simulating the IEEE 802.11b PHY model and am building the packet header at the physical layer.
As per the literature:
The PLCP LENGTH field shall be an unsigned 16-bit integer that indicates the number of microseconds to transmit the PPDU.
If I assume the packet size to be 1024 bytes, what should the value of the LENGTH field (16 bits wide) be?
The calculation of the LENGTH field depends on the number of bytes to send, as well as on the data rate (5.5 or 11 Mbps). The basic idea of the calculation is:
LENGTH = Time (µs) = (Bytes × 8) / (Data rate in Mbps)
However, you need to read Section 18.2.3.5, Long PLCP LENGTH field in the 802.11b-1999 Standard, pages 15-17. It has the complete details of how to calculate this value, along with several examples. It unambiguously explains how to properly round the data, as well as when the length extension bit in the SERVICE field should be set.
I will not reproduce the text of the section here since it looks like IEEE might be strict about enforcing their copyright. However, if you don't have the standard already, I suggest you download it now from the link above -- it's free!
If you have any questions about interpreting the standard, don't hesitate to ask.
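In the meantime, for the 1024-byte example at 11 Mbps, here is a sketch of the calculation. Caveat: the rounding and extension-bit logic below is my reading of 18.2.3.5, so verify it against the standard's own examples:

```python
import math

def plcp_length_11mbps(num_octets: int) -> tuple[int, int]:
    """Return (LENGTH in µs, length-extension bit) at 11 Mbps."""
    length = math.ceil(num_octets * 8 / 11)  # round time up to whole µs
    # The receiver recovers the octet count as floor(LENGTH * 11/8) - ext,
    # so the extension bit is set whenever that floor overshoots by one.
    ext = 1 if (length * 11) // 8 > num_octets else 0
    return length, ext

print(plcp_length_11mbps(1024))  # (745, 0): 8192 bits / 11 Mbps -> 745 µs
```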
I have some base-64 encoded encrypted data and noticed a fair amount of repetition. In an (approx.) 200-character string, a certain base-64 character is repeated up to 7 times in several separate runs.
Is this a red flag that there is a problem in the encryption? According to my understanding, encrypted data should never show significant repetition, even if the plaintext is entirely uniform (i.e. even if I encrypt 2 GB of nothing but the letter A, there should be no significant repetition in the encrypted version).
According to the binomial distribution, there is about a 2.5% chance that you'd see one character from a set of 64 appear seven times in a series of 200 random characters. That's a small chance, but not negligible. With more information, you might raise your confidence from 97.5% to something very close to 100% … or find that the cipher text really is uniformly distributed.
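A quick way to check that figure, assuming 200 independent, uniformly distributed base-64 characters:

```python
from math import comb

# P(a specific character appears exactly 7 times) ~ Binomial(200, 1/64)
n, k, p = 200, 7, 1 / 64
prob = comb(n, k) * p**k * (1 - p) ** (n - k)
print(f"{prob:.4f}")  # ~0.0248, i.e. about 2.5%
```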
You say that the "character is repeated up to 7 times" in several separate repeated runs. That's not enough information to say whether the cipher text has a bias. Instead, tell us the total number of times the character appeared, and the total number of cipher text characters. For example, "it appeared a total of 3125 times in 1000 runs of 200 characters each."
Also, you need to be sure that you are talking about the raw output of a cipher. Cipher text is often encapsulated in an "envelope" like that defined by the Cryptographic Message Syntax. Of course, this enclosing structure will have predictable patterns.
Well, I guess it depends. Repetition in general is a bad thing if it represents the same data.
Considering you are encoding it, have you looked at the data to see if you have something that repeats at those counts?
To understand this better, you've got to know what kind of encryption it uses.
It could be just a coincidence that they are repeating.
But if the repetition comes from the same data, then it can be a red flag, because frequency counts can then be used to decode it.
What kind of encryption are you using? Home-made or some industry standard?
It depends on how you are encrypting your data.
Base64-encoding a string may count as light obfuscation, but it is NOT encryption. The purpose of Base64 encoding is to allow any sort of binary data to be encoded as a safe ASCII string.
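A quick illustration that encoding preserves patterns rather than hiding them: thirty 'A' bytes encode to the four characters "QUFB" repeated ten times.

```python
import base64

# Base64 merely re-encodes bytes, so repetition in the input survives
# in the output; encryption, done right, would destroy it.
print(base64.b64encode(b"A" * 30).decode())
# QUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFB
```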