Address aggregation of contiguous blocks: why are some addresses unsummable? - networking

There is a question in one of my past exams that says: "An IP operator has received these IP addresses:
• 192.168.1.0/26
• 192.168.1.96/27
• 192.168.1.128/27
• 192.168.1.160/27
Q: Sum the networks that can be summed."
So I tried to sum all of the IP addresses, but it turns out that you can only sum the last two of them, because the addresses from 192.168.1.64 to 192.168.1.95 are missing. But why can you only sum the last two (192.168.1.128/27, 192.168.1.160/27) and not the last three (192.168.1.96/27, 192.168.1.128/27, 192.168.1.160/27)?

To understand the problem, you need to think of the addresses in binary rather than decimal notation. Bear in mind that the slash suffix designates the number of bits in the network address. To be combined, two blocks must match on all but the lowest bit of the network part of the address. This implies that they are numerically adjacent networks, but only half of all numerically adjacent pairs differ on just the lowest bit; the other half differ on some other bit as well. That's just the nature of counting in binary.
Thus, for example, you could combine 10.0.2.0/24 and 10.0.3.0/24 into 10.0.2.0/23, because they match on the first 23 bits. You can't do the same for 10.0.1.0/24 and 10.0.2.0/24, however, because they only match on the first 22 bits.
If you've got three adjacent networks (with the same netmask length), then the one in the middle can definitely be merged with one of its neighbours, and definitely can't be merged with the other one.
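
If you want to check an exercise like this mechanically, Python's standard ipaddress module can do the aggregation; a minimal sketch using the exam's four blocks:

    import ipaddress

    # The four blocks from the exam question.
    blocks = [ipaddress.ip_network(b) for b in (
        "192.168.1.0/26", "192.168.1.96/27",
        "192.168.1.128/27", "192.168.1.160/27")]

    # collapse_addresses merges adjacent, properly aligned networks
    # and leaves everything else untouched.
    for net in ipaddress.collapse_addresses(blocks):
        print(net)

    # Output:
    # 192.168.1.0/26
    # 192.168.1.96/27
    # 192.168.1.128/26   (only the last two blocks were merged)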


Number of valid parentheses: Catalan number explanation

While studying Catalan numbers, some of the applications that I came across were:
the number of possible binary search trees using n nodes,
the number of ways to draw non-intersecting chords using 2*n points on a circle,
the number of ways to arrange n pairs of parentheses.
While I understand how Catalan numbers fit into the solutions of the first two problems, I am not able to understand how they fit into the third.
I couldn't find any other useful resource on the internet which explains the HOW part; everyone just says that it's the solution.
Can someone please explain?
Since others do not seem to agree with me that this question is off-topic, I now decide that it is on topic and provide an answer.
The Wikipedia article is indeed confusing about the "number of ways to arrange n pairs of parentheses" (the second bullet point in this link). Part of the confusion is that the order of the strings of parentheses does not match the order of the binary trees, which you do understand, or the order of many of the other examples.
Here is a way to transform a string of n pairs of correctly matched parentheses into a binary tree with n internal nodes. Consider the left-most parenthesis, which will be a left parenthesis, together with its matching right parenthesis. Turn this pair into a node of the binary tree. The sub-string inside the currently considered parentheses becomes the left child of this node, and the sub-string after (to the right of) the currently considered right parenthesis becomes the right child. Either or both sub-strings may be empty, and the currently considered parentheses are simply removed. If either sub-string is not empty, continue this procedure recursively until all parentheses have been removed.
Here are two examples. Let's start with the string ((())). The considered parentheses are the outermost pair, so the root node gets a left child built from the inner sub-string (()) and an empty right child. Applying the rule to (()) gives a node whose left child comes from () and whose right child is empty, and finally () becomes a node with two empty children (I did not bother with the external leaf nodes). The result is a chain of three internal nodes, each the left child of its parent, which is Wikipedia's left-most binary tree with 3 internal nodes.
Now let's do another string, (())(). Here the considered parentheses are the first ( and its match: they enclose () and are followed by another (). So the root gets a left child built from the enclosed () and a right child built from the trailing (). Each of those () sub-strings then becomes a node with two empty children, giving a root with one left child and one right child, which is the second binary tree in Wikipedia's list.
I hope you now understand. Here is a list of all five possible strings of 3 pairs of parentheses that are correctly paired, in the order that corresponds to Wikipedia's list of binary trees with 3 internal nodes.
((())) (()()) (())() ()(()) ()()()
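
The procedure is also easy to mechanize. Here is a small Python sketch of it (the function name and the tuple representation are my own choices, not Wikipedia's): a tree is a (left, right) pair, and None marks an empty subtree.

    def parens_to_tree(s):
        """Map a correctly matched parenthesis string to a binary tree."""
        if not s:
            return None
        # Find the right-parenthesis matching the left-most left-parenthesis.
        depth = 0
        for i, c in enumerate(s):
            depth += 1 if c == "(" else -1
            if depth == 0:
                break
        # The inside sub-string becomes the left child,
        # the sub-string after it becomes the right child.
        return (parens_to_tree(s[1:i]), parens_to_tree(s[i + 1:]))

    for s in ["((()))", "(()())", "(())()", "()(())", "()()()"]:
        print(s, "->", parens_to_tree(s))

Running it on the five strings prints five structurally distinct trees, in the order above.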

SPOJ PPATH, Converting a given 4 digit prime to another 4 digit prime

In the SPOJ problem PPATH we are given two four-digit prime numbers and we have to convert, in the fewest possible steps, the first prime into the second by changing a single digit at a time, with the number remaining prime at every step. We have to output 'IMPOSSIBLE' if the primes cannot be converted in this fashion.
However, solutions to the problem in which the impossible case is not even considered have been accepted, which leads one to conjecture that every four-digit prime can be converted into any other four-digit prime in the specified manner. I was unable to prove it. Is it true? How can we prove it formally? Also, is there a general result for n-digit primes?
For four-digit numbers this can be verified exhaustively by a program, but for n digits we would have to prove it theoretically.
Well, you have an undirected graph whose vertices are the 4-digit primes, with an edge connecting two numbers that differ in exactly one digit. You are asked to find the shortest path from one vertex to another. The result IMPOSSIBLE is produced when no such path can be found, which would mean the graph has more than one connected component. If you prove that the graph has a single connected component, that guarantees the existence of a path.
I don't know how to prove it in a formal way, but it is very easy to check whether the graph described above has only one connected component. You can write an algorithm whose result can be interpreted as a proof for the specific case of the 4-digit graph.
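
For what it's worth, here is one way that check could look in Python (a sketch; the sieve, the neighbour rule, and the breadth-first search are all spelled out by hand):

    from collections import deque

    # Sieve of Eratosthenes, then keep the 4-digit primes.
    limit = 10000
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit, i):
                is_prime[j] = False
    primes = {n for n in range(1000, limit) if is_prime[n]}

    def neighbours(n):
        """All 4-digit primes differing from n in exactly one digit."""
        s = str(n)
        for pos in range(4):
            for d in "0123456789":
                if d != s[pos] and not (pos == 0 and d == "0"):
                    m = int(s[:pos] + d + s[pos + 1:])
                    if m in primes:
                        yield m

    # Breadth-first search from an arbitrary prime; if it reaches every
    # prime, the graph has a single connected component.
    start = next(iter(primes))
    seen, queue = {start}, deque([start])
    while queue:
        for m in neighbours(queue.popleft()):
            if m not in seen:
                seen.add(m)
                queue.append(m)

    print("connected:", len(seen) == len(primes))  # expected: True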

How to write in network address/netmask the following address space?

I'd like to ask how to write in network address/netmask the following address space:
63.39.191.192 - 63.40.192.223
On paper, I couldn't figure out any way of doing it, so I tried using a network address calculator.
I entered the first IP address and started toying with the netmask.
What I couldn't understand is how the first and last usable address varied based on the netmask.
So, here I am, hoping that you might explain to me how the first and last IP address are determined based on the netmask and how to solve that problem.
There are two things that someone might mean by an address/netmask pair. One option is something that looks like 192.168.0.1/24. This means that the first 24 bits of an acceptable address must match the given address. This is a common way of expressing subnets; however, it is not possible to express your range like this, which means you will not be able to work out a solution in the calculator you linked, since it uses this method as input.
The other way is as a pair of dotted quads. The subnet above would be expressed like this: 192.168.0.1/255.255.255.0. Everything which can be expressed in the first way can be expressed in the second way, but the converse is not true.
To understand how to solve your problem using the second format, you have to know something about binary numbers. Each part of the dotted quad is a number 0-255 and can be expressed as a binary number with eight digits (bits). Thus the whole address is a binary number made up of 32 bits, each of which is either 0 or 1.
A network specification is an address followed by another 32-bit number (the netmask), itself written as an address. What the second number means is this: in each place where its bit is 1, an address has to match the first address on that bit; in each place where the netmask's bit is 0, no match is needed. So you can see how matching the first 24 bits is the same as applying the mask 255.255.255.0, which is a 32-bit number made up of 24 1's followed by 8 0's.
You can also see how some netmasks can't be expressed in the first type. Any netmask which isn't one string of repeated 1's followed by the rest 0's, can't be written like this. The reason for the first type is that most real-world networks do have netmasks of this form.
To construct a netmask of the second type, you can work one byte at a time. The first byte of the address has to match exactly 63, so the address will be 63.x.x.x and the mask will be 255.x.x.x; as before, 255, made up of all 1's, means match every bit. The second byte can be either 39 (00100111 in binary) or 40 (00101000). This can't be expressed as a number plus a set of bits to match: only the first four bits of the two numbers agree, and if we try something like 63.39.x.x/255.240.x.x (240 is 11110000), we will match any second byte from 32 to 47. You should check your previous question to see whether that is acceptable; hopefully you can now figure out more yourself, as long as you understand binary.
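
Written as code, the matching rule is short. Here is a small Python sketch (the 63.32.0.0 network and 255.240.0.0 mask are the values just derived):

    import ipaddress

    def to_int(quad):
        return int(ipaddress.ip_address(quad))

    def matches(addr, network, mask):
        # An address belongs to a network exactly when the masked bits agree.
        return to_int(addr) & to_int(mask) == to_int(network) & to_int(mask)

    # 255.240.0.0 keeps the first 12 bits. 39 (00100111) and 40 (00101000)
    # share their first four bits, so both endpoints match, but so does
    # every second byte from 32 to 47, which is wider than the asked range.
    print(matches("63.39.191.192", "63.32.0.0", "255.240.0.0"))  # True
    print(matches("63.40.192.223", "63.32.0.0", "255.240.0.0"))  # True
    print(matches("63.48.0.0",     "63.32.0.0", "255.240.0.0"))  # False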
If you're not completely sure how binary works, please go away and make sure you really get it before looking into netmasks further. It really will help and it's a very good thing to know about anyway.

Maximizing Stored Information (Entropy?)

So I'm not sure if this question belongs here or maybe Math overflow. In any case, my question is about information theory.
Let's say I have a 16-bit word. There are 65,536 unique configurations of 1's and 0's in that word. What each of those configurations represents is unimportant, since depending on your notation (2's complement vs. signed magnitude, etc.) the same configuration can mean different things.
What I'm wondering is: are there any techniques to store more information than that in a 16-bit word?
My original ideas were things like odd/even parity, but then I realized that's already determined by the configuration... i.e. there is no extra information encoded in that. I'm beginning to wonder if no such thing exists.
EDIT For example, let's say some magical computer (thinking quantum or something here) could understand 0, 1, a. Then obviously we have 3^16 configurations and can now store more than the numbers [0, 65,535]. Are there any other properties of a 16-bit word that you can mess with in order to encode extra information in your bit stream?
EDIT2 I am really struggling to put this into words. Right now, when I look at a 16-bit word in the computer, the property which conveys information to me is the relative ordering of individual 1's and 0's. Is there another property or way of looking at a 16-bit word which would allow more than 2^16 unique "configurations"? (Note it would no longer be a configuration, but 2^16 xxxx's, where xxxx is a noun describing an instance of that property.) The only thing I can really think of is something like looking at the number of 1-to-0 transitions rather than at whether each bit is actually a 1 or 0. But transitions do not yield more than 2^16 combinations, because they are ultimately solely dependent on the configuration of 1's and 0's. I'm looking for properties that would derive from the configuration of 1's and 0's AND something else, thus resulting in MORE than 2^16. Does anyone even know what this would be called if it did exist?
EDIT3 Ok, I got it. My question boils down to this: how do we prove that the configuration of 1's and 0's in a word completely defines it? That is, how do we prove that you need no other information besides the bitmap to show equality between two 16-bit words?
FINAL EDIT
I have an example... If instead of looking at the presence of 1's and 0's we look at the transitions between bits, we can store 2^16 distinct values. If the bit to the left is the same, treat it as a 1; if it transitions, treat it as a 0. Using the 16-bit word as a circularly-linked-list type structure where each link represents 0/1, we basically form a 16-bit word out of the transitions between bits. That is an exact example of what I was looking for, but it still results in 2^16, nothing better. I am convinced that you cannot do better and am marking the correct answer =(
The amount of information in a particular configuration of 16 0/1s is determined by the probability of this configuration (this is called self-information). This can be bigger than 16 bits if the configuration is less likely than 1/(2^16), but that means that some other configurations are more likely than 1/(2^16) and so will contain less information than 16 bits.
To take into account all the possible configurations, you have to use the expected value of self-information (called entropy) of individual configurations. This value will reach its maximum when the probabilities of all configurations are equal (that is 1/(2^16)) and then it will be exactly 16 bits.
So the answer is no, you cannot store more than 16 bits of information in 16 0/1s.
See
http://en.wikipedia.org/wiki/Information_theory
http://en.wikipedia.org/wiki/Self-information
EDIT It is important to realize that a bit does not stand for a 0 or 1; rather, it is a unit of information, namely -log_2 P(w), where P(w) is the probability of a particular configuration.
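
To make the numbers above concrete, here is a small Python check (the skewed distribution is purely an illustrative assumption):

    import math

    def entropy(probs):
        """Expected self-information in bits: H = -sum(p * log2(p))."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    n = 2 ** 16

    # All 2^16 configurations equally likely: entropy is exactly 16 bits.
    print(entropy([1 / n] * n))                        # 16.0

    # A skewed distribution (one configuration takes half the probability):
    # entropy drops well below 16 bits.
    print(entropy([0.5] + [0.5 / (n - 1)] * (n - 1)))  # about 9.0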
You cannot store more than 2 states in one digit of a semiconductor device. You answered it yourself: the only way more information could be fitted into 16 digits is if each digit had more possible values.

error correction code upper bound

If I want to send a d-bit packet and add another r bits of error-correcting code (d > r),
how many errors can I detect and correct at most?
You have 2^d different kinds of packets of length d bits that you want to send. Adding your r bits to them makes them into codewords of length d+r, so you now have 2^d possible codewords you could send. The receiver, however, could get 2^(d+r) different received words (codewords with possible errors). The question then becomes: how do you map those 2^(d+r) received words to the 2^d codewords?
This comes down to the minimum distance of the code. That is, for each pair of codewords, find the number of bits where they differ, then take the smallest of those values.
Let's say you had a minimum distance of 3. You receive a word and notice that it isn't one of the codewords; that is, there's an error. So, for lack of a better decoding algorithm, you flip the first bit and see if it's a codeword. If it isn't, you flip it back and flip the next one. Eventually, you get a codeword. Since all codewords differ in at least 3 positions, you know this codeword is the "closest" to the received word, since you would have to flip at least 2 bits of the received word to get to any other codeword. If you didn't get a codeword by flipping just one bit at a time, you can't figure out where the errors are, since there are multiple codewords you could reach by flipping two bits, but you do know there are at least two errors.
This leads to the general principle that for a minimum distance md, you can detect md-1 errors and correct floor((md-1)/2) errors. Calculating the minimum distance depends on the details of how you generate the codewords, otherwise known as the code. There are various bounds you can use to figure out an upper limit on md based on d and (d+r).
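
As a toy illustration of that principle, here is a short Python sketch (the five-bit codewords are hand-picked for the example):

    from itertools import combinations

    def hamming_distance(a, b):
        return sum(x != y for x, y in zip(a, b))

    def min_distance(code):
        """Smallest pairwise distance over all codeword pairs."""
        return min(hamming_distance(a, b) for a, b in combinations(code, 2))

    code = ["00000", "01011", "10101", "11110"]
    md = min_distance(code)
    print("minimum distance:", md)           # 3
    print("detects up to:", md - 1)          # 2 errors
    print("corrects up to:", (md - 1) // 2)  # 1 error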
Paul mentioned the Hamming Code, which is a good example. It achieves the Hamming bound. For the (7,4) Hamming code, you have 4 bit messages and 7 bit codewords, and you achieve a minimum distance of 3. Obviously*, you are never going to get a minimum distance greater than the number of bits you are adding so this is the very best you can do. Don't get too used to this though. The Hamming code is one of the few examples of a non-trivial perfect code, and most of those have a minimum distance that is less than the number of bits you add.
*It's not really obvious, but I'm pretty sure it's true for non-trivial error correcting codes. Adding one parity bit gets you a minimum distance of two, allowing you to detect an error. The code consisting of {000,111} gets you a minimum distance of 3 by adding just 2 bits, but it's trivial.
You should probably read the wikipedia page on this:
http://en.wikipedia.org/wiki/Error_detection_and_correction
It sounds like you specifically want a Hamming Code:
http://en.wikipedia.org/wiki/Hamming_code#General_algorithm
Using that scheme, you can look up some example values from the linked table.
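
For the curious, here is a minimal (7,4) Hamming code sketch in Python (a hand-rolled toy version of the general algorithm described on that page, with parity bits at the power-of-two positions):

    def encode(d1, d2, d3, d4):
        """Encode 4 data bits into a 7-bit Hamming codeword."""
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

    def correct(word):
        """Locate and fix a single-bit error; the syndrome is its position."""
        c = word[:]
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
        syndrome = s1 + 2 * s2 + 4 * s3  # 0 means no detected error
        if syndrome:
            c[syndrome - 1] ^= 1         # flip the offending bit
        return c

    codeword = encode(1, 0, 1, 1)
    received = codeword[:]
    received[4] ^= 1                     # inject a single-bit error
    assert correct(received) == codeword
    print("single-bit error corrected:", correct(received) == codeword)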
