I am trying to wrap my head around 1's complement checksum error detection as is used in UDP.
My understanding, with a simplified example, of a UDP-like 1's complement checksum error-checking algorithm operating on 8-bit words (I know UDP uses 16-bit words):
Sum all 8-bit words of data, carrying the MSB rollover into the LSB.
Take the 1's complement of this sum, set it as the checksum, and send the datagram.
The receiver adds, with carry rollover, all received 8-bit words of data in the incoming datagram, then adds the checksum.
If the sum = 0xFF, assume no errors. Otherwise an error occurred; throw away the packet.
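The steps above can be sketched in a few lines of Python (the function names are my own, and this is the simplified 8-bit variant, not real UDP):

```python
def ones_complement_sum(words):
    """Sum 8-bit words, folding any carry out of the MSB back into the LSB."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFF) + (total >> 8)  # end-around carry
    return total

def make_checksum(words):
    """Sender: 1's complement of the end-around-carry sum."""
    return ~ones_complement_sum(words) & 0xFF

def verify(words, checksum):
    """Receiver: sum the data words plus the checksum; intact data gives 0xFF."""
    return ones_complement_sum(words + [checksum]) == 0xFF
```

For example, `make_checksum([0x00, 0x01])` is `0xFE`; `verify([0x00, 0x01], 0xFE)` passes, while the corrupted `verify([0x01, 0x01], 0xFE)` fails.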
It is obvious that this algorithm can detect 1-bit errors: if just one bit in an 8-bit data word is corrupted, the sum + checksum will never equal 0xFF. A plain and simple example would be A = 00000000, B = 00000001, then ~(A + B) = 11111110. If A(receiver) = 00000001, B(receiver) = 00000001, the sum + checksum would be 0x00 != 0xFF.
My question is:
It's not too clear to me whether this can detect 2-bit errors. My intuition says no, and a simple example is receiving A = 00000001, B = 00000000 (when A = 00000000, B = 00000001 were sent): the sum + checksum would still be 0xFF, yet there are two bit errors in total across A and B from sender to receiver. If the 2-bit error occurred in the same word, there's a chance it could be detected, but it doesn't seem guaranteed.
How robust is UDP error checking? Does it work for even numbers of bit errors?
Some even-bit changes can be detected, some can't.
Any error that changes the sum will be detected. So a 2-bit error that changes the sum will be detected, but a 2-bit error that does not change the sum will not be detected.
A 2-bit error in a single word (single byte in your simplified example) will change the value of that word, which will change the sum, and therefore will always be detected. Most 2-bit errors across different words will also be detected. But a 2-bit error that flips the same bit position in opposite directions (one 0->1, the other 1->0) in two different words will not change the sum: the change in value from one flipped bit is cancelled out by the equal-but-opposite change in value from the other. That error will not be detected.
Because this checksum is simply an addition, it will also fail to detect the insertion or removal of words whose arithmetic value is zero (and since this is a one's complement computation, that means words whose content is all 0s or all 1s).
It will also fail to detect transpositions of words (because a+b gives the same sum as b+a), or more generally any error that adds the same amount to one word as it subtracts from another (because a+b gives the same sum as (a+n)+(b-n), e.g. 3+3=4+2=5+1). You could consider the transposition and cancelling-error cases to be made up of multiple pairs of same-bit changes.
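Each of these blind spots is easy to demonstrate concretely; a short Python sketch of the 8-bit scheme (names are my own):

```python
def ones_complement_sum(words):
    """Sum 8-bit words with end-around carry."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFF) + (total >> 8)
    return total

# Same bit flipped in opposite directions in two words: undetected.
sent = [0x01, 0x00]
cksum = ~ones_complement_sum(sent) & 0xFF
received = [0x00, 0x01]                      # two bits changed in transit
assert ones_complement_sum(received + [cksum]) == 0xFF   # still passes!

# Transposed words: undetected.
c2 = ~ones_complement_sum([0x12, 0x34]) & 0xFF
assert ones_complement_sum([0x34, 0x12, c2]) == 0xFF     # still passes!

# Inserted all-zeros word: undetected.
assert ones_complement_sum([0x12, 0x00, 0x34, c2]) == 0xFF
```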
I have a few questions about CRC:
For a given CRC polynomial and n bits of data, how can I tell the largest number of bit errors that is guaranteed to be detected?
Is it ALWAYS true that the bigger the polynomial's degree, the more errors the CRC can detect?
Thanks!
I will assume that "How many errors can it detect" is the largest number of bits in error that is always guaranteed to be detected.
You would need to search for the minimum weight codeword of length n for that polynomial, also referred to as its Hamming distance. The number of bit errors that that CRC is guaranteed to detect is one less than that minimum weight. There is no alternative to what essentially amounts to a brute-force search. See Table 1 in this paper by Koopman for some results. As an example, for messages 2974 or fewer bits in length, the standard CRC-32 has a Hamming distance of no less than 5, so any set of 4 bit errors would be detected.
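The search itself is straightforward, just expensive for real-world sizes. Here is a brute-force sketch using a toy CRC-8 generator (x^8 + x^2 + x + 1, a polynomial I picked for illustration): an error pattern goes undetected exactly when the generator divides it, so the Hamming distance is the smallest weight of such a pattern.

```python
from itertools import combinations

GEN, DEGREE = 0x107, 8  # toy CRC-8 generator x^8 + x^2 + x + 1 (example choice)

def crc_remainder(pattern, nbits):
    """Remainder of an nbits-bit polynomial `pattern` divided by GEN over GF(2)."""
    for shift in range(nbits - 1, DEGREE - 1, -1):
        if pattern >> shift & 1:
            pattern ^= GEN << (shift - DEGREE)
    return pattern

def hamming_distance(n, max_w=5):
    """Smallest weight of a nonzero n-bit error pattern divisible by GEN.
    Exhaustively tries every pattern of weight 1, 2, ... up to max_w."""
    for w in range(1, max_w + 1):
        for positions in combinations(range(n), w):
            e = sum(1 << p for p in positions)
            if crc_remainder(e, n) == 0:
                return w  # a weight-w error slips through: HD = w
    return max_w + 1      # no codeword of weight <= max_w found
```

For codewords up to 20 bits, `hamming_distance(20)` comes out to 4 (the generator itself, 0x107, is a weight-4 codeword), so this toy CRC is guaranteed to catch any 1-, 2-, or 3-bit error at that length. For CRC-32 and long messages the same idea applies, but the search space is why results like Koopman's tables are computed once and published.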
Not necessarily. Polynomials can vary widely in quality. Given the performance of a polynomial on a particular message length, you can always find a longer polynomial that performs worse. A well-chosen longer polynomial should in general perform better than a well-chosen shorter one, but even there, for long message lengths you may find that both have a Hamming distance of only two.
It's not a question of 'how many'. It's a question of what proportion and what kind. 'How many' depends on those things and on the length of the input.
Yes.
The AIMD (Additive Increase, Multiplicative Decrease) congestion avoidance algorithm halves the size of the congestion window when a loss has been detected. But what experimental/statistical or theoretical evidence is there to suggest that dividing by 2 is the most efficient method (instead of, say, another numerical value), other than "intuition"? Can someone point me to a publication or journal paper that supports or investigates this claim?
All of the algorithms here
https://en.wikipedia.org/wiki/TCP_congestion_control#Algorithms
alter the congestion window in one form or another, and they all have varying results, which is to be expected.
Can someone point me to a publication or journal paper that supports this or investigates this claim?
Yang Richard Yang & Simon S. Lam investigated exactly this in their paper:
http://www.cs.utexas.edu/users/lam/Vita/Misc/YangLam00tr.pdf
We refer to this window adjustment strategy as general additive increase
multiplicative decrease (GAIMD). We present the (mean) sending rate of a GAIMD
flow as a function of α, β.
The authors parameterized the additive and multiplicative parts of AIMD and then studied them to see if they could be improved on for various TCP flows. The paper goes into a fair amount of depth on what they did and what the effects were. To summarize...
We found that the GAIMD flows were highly TCP-friendly. Furthermore, with β
at 7/8 instead of 1/2, these GAIMD flows have reduced rate fluctuations
compared to TCP flows.
If we believe the paper's conclusion, then there is no reason to think that 2 is a magic bullet. Personally I doubt there is a single best factor, because it depends on too many variables: protocol, types of flow, etc.
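The α/β parameterization the paper studies is simple to state as code. A minimal sketch (the per-RTT granularity and the loss pattern are my simplifications):

```python
def gaimd_update(cwnd, loss, alpha=1.0, beta=0.5):
    """One (G)AIMD step per RTT: grow additively by alpha, or shrink
    multiplicatively to beta * cwnd on loss. Standard TCP is alpha = 1,
    beta = 1/2; the paper pairs e.g. beta = 7/8 with a smaller alpha."""
    return cwnd * beta if loss else cwnd + alpha

# Toy trace with a loss every 10th RTT: the classic TCP sawtooth.
cwnd, trace = 10.0, []
for rtt in range(20):
    cwnd = gaimd_update(cwnd, loss=(rtt % 10 == 9))
    trace.append(cwnd)
```

Starting at 10, the window climbs to 19 over nine RTTs, drops to 9.5 on loss, and repeats; with β = 7/8 the drops are much shallower, which is the reduced rate fluctuation the quote mentions.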
Actually the factor of 2 also occurs in another part of the algorithm: slow start, where the window is doubled every RTT. Slow start is essentially a binary search for the optimal value of the congestion window, where the upper bound is infinity.
When you exit slow start due to packet loss, it is natural to halve the congestion window (since the value from the previous RTT did not cause congestion); in other words, you revert the last iteration of slow start and then fine-tune with a linear search. This is the main reason for halving when exiting slow start.
However the 1/2 factor is also used in CA when the transfer is in steady state, a long time after slow start has ended. There is not a good justification for this. I see it also as a binary search, but downwards, with a finite upper bound equal to the current congestion window; one could say, informally, that it is the opposite of slow start.
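That binary-search reading can be made concrete with a toy model (the fixed `capacity` below is my stand-in for whatever link condition causes loss; all names are mine):

```python
def tcp_trace(capacity, rtts):
    """Toy cwnd evolution: double per RTT in slow start; on 'loss'
    (cwnd exceeding capacity) halve and switch to linear growth."""
    cwnd, ssthresh = 1.0, float("inf")
    out = []
    for _ in range(rtts):
        if cwnd > capacity:        # stand-in for packet loss
            ssthresh = cwnd / 2    # revert the last doubling
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2              # slow start: binary search upward
        else:
            cwnd += 1              # congestion avoidance: linear probe
        out.append(cwnd)
    return out
```

With `capacity=20`, the trace runs 2, 4, 8, 16, 32, then halves back to 16: the halving undoes exactly one doubling, which is the "revert the last iteration" intuition.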
You can also read the document by Van Jacobson (one of the main designers of TCP) "Congestion Avoidance and Control", 1988; appendix D discusses exactly how the halving factor was chosen.
I am currently working on a project that requires data to be sent from A to B. Once B receives the data, it needs to be able to determine if an error occurred during transmission.
I have read about CRC and have decided that CRC16 is right for my needs; I can chop the data into chunks and send a chunk at a time.
However, I am confused about how B will be able to tell if an error occurred. My initial thought was to have A generate a CRC and then send the data to B. Once B receives the data, generate the CRC and send it back to A. If the CRCs match, the transmission was successful. BUT - what if the transmission of the CRC from B to A errors? It seems redundant to have the CRC sent back because it can become corrupted in the same way that the data can be.
Am I missing something or over-complicating the scenario?
Any thoughts would be appreciated.
Thanks,
P
You usually send the checksum with the data. Then you calculate the checksum out of the data on the receiving end, and compare it with the checksum that came along with it. If they don't match, either the data or the checksum was corrupted (unless you're unlucky enough to get a collision) - in which case you should ask for a retransmission.
CRC is error detection only, and note that a CRC can only guarantee detection of a limited number of bit errors. However, you can calculate the probability of a CRC16 collision (an undetected error), and it is relatively small for most practical purposes.
Now, how CRC works is polynomial division over GF(2). The generator polynomial for a CRC16 has degree 16, so the remainder (the CRC value itself) fits in 16 bits (degree at most 15). Polynomials are represented in binary by their coefficients; for example, x^3 + (0)*x^2 + x + 1 = 1011 is a polynomial of degree 3. You append 16 zero bits to your data chunk and divide it by the generator polynomial; the remainder is the CRC value, which you send in place of those zeros. When B repeats the division on the received chunk (data plus CRC), the remainder should come out to 0. If it does not, you have a transmission error.
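A small end-to-end sketch in Python may help. I'm using the XMODEM-style CRC-16 variant (polynomial 0x1021, zero initial value) because its math matches the "remainder comes out to 0" description; in practice you just need both ends to agree on one variant.

```python
def crc16(data: bytes, poly=0x1021) -> int:
    """Bitwise CRC-16 (XMODEM-style: init 0, no reflection)."""
    reg = 0
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            if reg & 0x8000:
                reg = ((reg << 1) ^ poly) & 0xFFFF
            else:
                reg = (reg << 1) & 0xFFFF
    return reg

msg = b"hello world"
frame = msg + crc16(msg).to_bytes(2, "big")   # A appends the CRC and sends

assert crc16(frame) == 0                      # B: remainder 0 -> assume intact
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert crc16(corrupted) != 0                  # any single flipped bit is caught
```

Note that the CRC travels with the data in one direction; B never needs to send the CRC back to A, only (if the check fails) a retransmission request.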
This covers corruption of your CRC value itself as well: if bits are corrupted anywhere in the chunk-plus-CRC, the CRC check will detect the failure (assuming no collision). If the CRC check does not pass, simply send a retransmission request to A; otherwise, continue processing as normal. If a collision occurred, there is no way to tell the data is corrupted until you validate the received data some other way (or send several, hopefully error-free, copies; note that this method incurs a lot of overhead, and redundancy again only works to finite precision).
I read a while back that quantum computers can break most types of hashing and encryption in use today in a very short amount of time (I believe it was mere minutes). How is it possible? I've tried reading articles about it, but I get lost at the point where a quantum bit can be 1, 0, or something else. Can someone explain how this relates to cracking such algorithms in plain English without all the fancy maths?
Preamble: Quantum computers are strange beasts that we really haven't yet tamed to the point of usefulness. The theory that underpins them is abstract and mathematical, so any discussion of how they can be more efficient than classical computers will inevitably be long and involved. You'll need at least an undergraduate understanding of linear algebra and quantum mechanics to understand the details, but I'll try to convey my limited understanding!
The basic premise of quantum computation is quantum superposition. The idea is that a quantum system (such as a quantum bit, or qubit, the quantum analogue of a normal bit) can, as you say, exist not only in the 0 and 1 states (called the computational basis states of the system), but also in any combination of the two (so that each has an amplitude associated with it). When the system is observed by someone, the qubit's state collapses into one of its basis states (you may have heard of the Schrödinger's cat thought experiment, which is related to this).
Because of this, a register of n qubits has 2^n basis states of its own (these are the states that you could observe the register being in; imagine a classical n-bit integer). Since the register can exist in a superposition of all these states at once, it is possible to apply a computation to all 2^n register states rather than just one of them. This is called quantum parallelism.
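A classical simulation makes the "2^n basis states" point concrete: an n-qubit register is described by 2^n complex amplitudes, and putting every qubit through a Hadamard gate spreads the amplitude evenly over all of them. This is only a toy simulation of the linear algebra, not a quantum computer, and the helper names are mine:

```python
# State of an n-qubit register: 2**n complex amplitudes.
n = 3
state = [0j] * (2 ** n)
state[0] = 1 + 0j                          # start in |000>

def hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of the register."""
    s = 2 ** 0.5
    out = state[:]
    for i in range(len(state)):
        if not i >> qubit & 1:             # pair each |...0...> with |...1...>
            j = i | (1 << qubit)
            a, b = state[i], state[j]
            out[i], out[j] = (a + b) / s, (a - b) / s
    return out

for q in range(n):
    state = hadamard(state, q)

# Uniform superposition: every basis state 0..7 now has probability 1/8.
probs = [abs(a) ** 2 for a in state]
```

Any subsequent gate acts on all eight amplitudes in one step, which is the sense in which a computation touches all 2^n register states at once.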
Because of this property of quantum computers, it may seem like they're a silver bullet that can solve any problem exponentially faster than a classical computer. But it's not that simple: the problem is that once you observe the result of your computation, it collapses (as I mentioned above) into the result of just one of the computations – and you lose all of the others.
The field of quantum computation/algorithms is all about trying to work around this problem by manipulating quantum phenomena to extract information in fewer operations than would be possible on a classical computer. It turns out that it's very difficult to contrive a "quantum algorithm" that is faster than any possible classical counterpart.
The example you ask about is that of quantum cryptanalysis. It's thought that quantum computers might be able to "break" certain encryption algorithms: specifically, the RSA algorithm, which relies on the difficulty of finding the prime factors of very large integers. The algorithm which allows for this is called Shor's algorithm, which can factor integers with polynomial time complexity. By contrast the best classical algorithm for the problem has (almost) exponential time complexity, and the problem is hence considered "intractable".
If you want a deeper understanding of this, get a few books on linear algebra and quantum mechanics and get comfortable. If you want some clarification, I'll see what I can do!
Aside: to better understand the idea of quantum superposition, think in terms of probabilities. Imagine you flip a coin and catch it on your hand, covered so that you can't see it. As a very tenuous analogy, the coin can be thought of as being in a superposition of the heads and tails "states": each one has a probability of 0.5 (and, naturally, since there are two states, these probabilities add up to 1). When you take your hand away and observe the coin directly, it collapses into either the heads state or the tails state, and so the probability of this state becomes 1, while the other becomes 0. One way to think about it, I suppose, is a set of scales that is balanced until observation, at which point it tips to one side as our knowledge of the system increases and one state becomes the "real" state.
Of course, we don't think of the coin as a quantum system: for all practical purposes, the coin has a definite state, even if we can't see it. For genuine quantum systems, however (such as an individual particle trapped in a box), we can't think about it in this way. Under the conventional interpretation of quantum mechanics, the particle fundamentally has no definite position, but exists in all possible positions at once. Only upon observation is its position constrained in space (though only to a limited degree; cf. uncertainty principle), and even this is purely random and determined only by probability.
By the way, quantum systems are not restricted to having just two observable states (those that do are called two-level systems). Some have a large but finite number, some have a countably infinite number (such as a "particle in a box" or a harmonic oscillator), and some even have an uncountably infinite number (such as a free particle's position, which isn't constrained to individual points in space).
It's highly theoretical at this point. Quantum Bits might offer the capability to break encryption, but clearly it's not at that point yet.
At the Quantum Level, the laws that govern behavior are different than in the macro level.
To answer your question, you first need to understand how encryption works.
At a basic level, this kind of encryption relies on multiplying two extremely large prime numbers together. The super large result is divisible only by 1, itself, and those two primes.
One way to break encryption is to brute force guess the two prime numbers, by doing prime number factorization.
This attack is slow, and is thwarted by picking larger and larger prime numbers. You hear of key sizes of 40 bits, 56 bits, 128 bits, and now 256, 512 bits and beyond. Those sizes correspond to the size of the number.
The brute force algorithm (in simplified terms) might look like
for (long i = 3; i * i <= key; i += 2)
{
    if (key % i == 0)
    {
        // i divides the key; the first hit found this way is a prime factor
    }
}
So you want to brute force try prime numbers; well that is going to take awhile with a single computer. So you might try grouping a bunch of computers together to divide and conquer. That works, but is still slow for very large keysizes.
How quantum bits address this is that a qubit is both 0 and 1 at the same time. So say you have 3 quantum bits (no small feat, mind you).
With 3 qubits, your register can hold the values 0-7 simultaneously
(000, 001, 010, 011, etc.)
, which includes the primes 2, 3, 5 and 7 at the same time.
So using the simple algorithm above, instead of increasing i by 1 each time, you can just divide once and check
0, 1, 2, 3, 4, 5, 6, 7
all at the same time.
Of course quantum computers aren't at that point yet; there is still lots of work to be done in the field. But this should give you an idea of how, if we could program with qubits, we might go about cracking encryption.
The Wikipedia article does a very good job of explaining this.
In short, if you have N qubits, your quantum computer can be in a superposition of 2^N states at the same time. Conceptually similar to having 2^N CPUs processing with traditional bits (though not exactly the same).
A quantum computer can implement Shor's algorithm, which can quickly perform prime factorization. These encryption systems are built on the assumption that products of large primes cannot be factored in a reasonable amount of time on a classical computer.
Almost all our public-key encryptions (ex. RSA) are based solely on math, relying on the difficulty of factorization or discrete-logarithms. Both of these will be efficiently broken using quantum computers (though even after a bachelors in CS and Math, and having taken several classes on quantum mechanics, I still don't understand the algorithm).
However, hashing algorithms (Ex. SHA2) and symmetric-key encryptions (ex. AES), which are based mostly on diffusion and confusion, are still secure.
In the most basic terms, a normal, non-quantum computer works by operating on bits (states of on or off) using Boolean logic. Do this very fast for lots and lots of bits and you can solve any problem in the class of computable problems.
However, there are "speed limits", namely something called computational complexity. In layman's terms, this means that for a given algorithm, the time (and memory) required to run it has a lower bound. For example, an algorithm that is O(n^2) requires on the order of n^2 time for an input of size n.
However, this kind of goes out the window when we have qubits (quantum bits), which can have "in between" values. Algorithms that would have very high computational complexity (like factoring huge numbers, the key to cracking many encryption algorithms) can be done with much, much lower computational complexity. This is the reason quantum computing would be able to crack encrypted streams orders of magnitude quicker than normal computers.
First of all, quantum computing is still barely out of the theoretical stage. Lots of research is going on, and a few experimental quantum cells and circuits exist, but a "quantum computer" does not yet.
Second, read the wikipedia article: http://en.wikipedia.org/wiki/Quantum_computer
In particular, "In general a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). "
What makes cryptography secure is the use of encryption keys: very long numbers that would take a very, very long time to factor into their constituent primes, and that are long enough that brute-force attempts to try every possible key value would also take too long to complete.
Since quantum computing can (theoretically) represent a lot of states in a small number of qubit cells, and operate on all of those states simultaneously, it seems there is the potential to use quantum computing to perform brute-force try-all-possible-key-values in a very short amount of time.
If such a thing is possible, it could be the end of cryptography as we know it.
Quantum computers etc. are all lies; I don't believe these science fiction magazines.
In fact, the RSA system is based on two prime numbers and their multiplication:
p1, p2 are huge primes, and p1 x p2 = N is the modulus.
The RSA system works
like this:
choose a prime number, maybe small; that is E, the public exponent
(p1 - 1) * (p2 - 1) = R
find a number D that makes E * D = 1 mod(R)
we share (E, N) publicly as the public key
we securely save (D, N) as the private key.
To crack this RSA system, an attacker needs to find the prime factors of N.
*The mass of the universe is close to 10^53 kg.*
The electron mass is 9.10938291 x 10^-31 kilograms.
If we divided the universe into electrons, we could create about 10^84 electrons.
Electrons move slower than light; say each can operate at a frequency of 10^26.
If anybody produced electron-sized parallel RSA prime-factor finders out of all the mass in the universe,
the whole universe could test (10^84) * (10^26) = 10^110 numbers per second.
RSA allows practically unlimited key sizes, say 4096 bits.
4096-bit RSA has about 10^600 possible primes to brute force,
so your universe-mass quantum solver would need to keep testing for about 10^500 years.
RSA vs. universe-mass quantum computer:
1 - 0
Maybe a quantum computer can break 64/128-bit passwords, because a 128-bit password has about 10^39 possible brute-force candidates.
This circuit is a good start for understanding how qubit parallelism works. The 2-qubit input is on the left side: the top qubit is x and the bottom qubit is y. The y qubit is 0 at the input, just like a normal bit. The x qubit, on the other hand, is in superposition at the input. y (+) f(x) stands here for addition modulo 2 (XOR), meaning 1+1=0, 0+1=1+0=1. The interesting part is that, since the x qubit is in superposition, f(x) is f(0) and f(1) at the same time, and we can evaluate the f function for all states simultaneously without using any (time-consuming) loops. With enough qubits we can extend this into endlessly more complicated circuits.
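The action of that circuit, U_f |x>|y> = |x>|y XOR f(x)>, can be simulated classically on the 4-amplitude state vector of two qubits. This is just a toy illustration of the linear algebra (and f below is an arbitrary example function of my choosing):

```python
def f(x):
    return x                 # example function: f(0) = 0, f(1) = 1

s = 2 ** -0.5
# Basis order |x y>: index = 2*x + y.
# Input: x in superposition, y = 0, i.e. (|00> + |10>) / sqrt(2).
state = [s, 0.0, s, 0.0]

# Apply U_f: each basis amplitude moves to |x, y XOR f(x)>.
out = [0.0] * 4
for idx, amp in enumerate(state):
    x, y = idx >> 1, idx & 1
    out[2 * x + (y ^ f(x))] += amp

# Result: (|0, f(0)> + |1, f(1)>) / sqrt(2) -- both values of f computed
# in a single application of the gate, no loop over inputs.
```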
Even more bizarre, in my opinion, is Grover's algorithm. As input we get an unsorted array of n integers. What is the expected runtime of an algorithm that finds the minimum value of this array? Classically, we have to check every one of the n elements, giving an expected runtime on the order of n. Not so for quantum computers: there we can solve this in an expected runtime on the order of sqrt(n), meaning we don't even have to check every element to find the guaranteed solution...