The AIMD (Additive Increase Multiplicative Decrease) congestion avoidance algorithm halves the congestion window when a loss is detected. But what experimental, statistical, or theoretical evidence is there to suggest that dividing by 2 is the most efficient choice (instead of, say, some other value), other than "intuition"? Can someone point me to a publication or journal paper that supports this or investigates this claim?
All of the algorithms here
https://en.wikipedia.org/wiki/TCP_congestion_control#Algorithms
alter the congestion window in one form or another, and they all have varying results, which is to be expected.
Yang Richard Yang & Simon S. Lam investigate exactly this in their paper:
http://www.cs.utexas.edu/users/lam/Vita/Misc/YangLam00tr.pdf
We refer to this window adjustment strategy as general additive increase
multiplicative decrease (GAIMD). We present the (mean) sending rate of a GAIMD
flow as a function of α, β.
The authors parameterized the additive and multiplicative parts of AIMD and then studied them to see if they could be improved on for various TCP flows. The paper goes into a fair amount of depth on what they did and what the effects were. To summarize...
We found that the GAIMD flows were highly TCP-friendly. Furthermore, with β
at 7/8 instead of 1/2, these GAIMD flows have reduced rate fluctuations
compared to TCP flows.
If we believe the paper's conclusion, then there is no reason to believe that 2 is a magic bullet. Personally I doubt there is a single best factor, because it depends on too many variables: protocol, types of flow, etc.
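To make the α/β parameterization concrete, here is a minimal Python sketch of the (G)AIMD update rule as I understand it from the paper; the function name and default values are just for illustration:

def gaimd_update(cwnd, loss, alpha=1.0, beta=0.5):
    # One RTT of (G)AIMD: add alpha segments per RTT, multiply by beta on loss.
    # alpha=1, beta=0.5 is standard TCP Reno; the paper studies other pairs,
    # e.g. beta = 7/8 with a correspondingly smaller alpha chosen so the flow
    # stays TCP-friendly.
    return cwnd * beta if loss else cwnd + alpha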
Actually the factor of 2 also occurs in another part of the algorithm: slow start, where the window is doubled every RTT. Slow start is essentially a binary search for the optimal value of the congestion window, where the upper bound is infinity.
When you exit slow start due to packet loss, it is natural to halve the congestion window (since the value from the previous RTT did not cause congestion); in other words, you revert the last iteration of slow start and then fine-tune with a linear search. This is the main reason for halving when exiting slow start.
However the 1/2 factor is also used in CA when the transfer is in steady state, a long time after slow start has ended. There is not a good justification for this. I see it also as a binary search, but downwards, with a finite upper bound equal to the current congestion window; one could say, informally, that it is the opposite of slow start.
You can also read the document by Van Jacobson (one of the main designers of TCP) "Congestion Avoidance and Control", 1988; appendix D discusses exactly how the halving factor was chosen.
I was studying error detection in computer networks and came across the following methods:
Single-bit parity check
2D parity check
Checksum
Cyclic redundancy check (CRC)
But after studying only a bit (lmao pun), I came across cases where they fail.
The methods fail when:
Single-bit parity check - an even number of bits has been inverted.
2D parity check - an even number of bits are inverted in the same positions.
Checksum - adding an all-zero word to a frame does not change the result, and reordering of words is not detected
(e.g. in the data 10101010 11110000 11001100 10111001, appending an all-zero word to, or reordering,
the four words here leaves the checksum unchanged)
CRC - An n-bit CRC with g(x) = (x+1)*p(x) can detect:
All burst errors of length less than or equal to n.
All burst errors affecting an odd number of bits.
All burst errors of length equal to n + 1 with probability (2^(n-1) − 1)/2^(n-1).
All burst errors of length greater than n + 1 with probability (2^n − 1)/2^n.
[the CRC-32 polynomial will detect all burst errors of length greater than 33 with
probability (2^32 − 1)/2^32; this is equivalent to a 99.99999998% accuracy rate]
Copied from here - https://stackoverflow.com/a/65718709/16778741
As we can see, these methods fail because of some very obvious shortcomings.
So my question is: why were these still allowed and not rectified, and what do we use these days?
It's like the people who made them forgot to cross-check.
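To make the first failure mode above concrete, here is a small Python sketch (made-up frame, purely illustrative) of a single-bit parity check missing a two-bit error:

def parity_bit(bits):
    # Even parity: the check bit makes the total number of 1s even.
    return sum(bits) % 2

frame = [1, 0, 1, 0, 1, 0, 1, 0]
check = parity_bit(frame)

# Flip an even number of bits in transit: the parity stays the same,
# so the receiver cannot detect the corruption.
corrupted = frame[:]
corrupted[0] ^= 1
corrupted[1] ^= 1
assert parity_bit(corrupted) == check   # error goes undetected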
It is a tradeoff between effort and risk. The more redundant bits are added, the smaller the risk of undetected error.
Extra bits mean additional memory or network bandwidth consumption. It depends on the application, which additional effort is justified.
Complicated checksums add some computational overhead as well.
Modern checksum or hash functions can drive the remaining risk to very small ranges tolerable for the vast majority of applications.
Only 0.00000002% of burst errors will be missed. But what is not stated is the likelihood of these burst errors occurring; that number depends on the network implementation. In most cases the likelihood of an undetectable burst error will be very close to zero, or zero for an ideal network.
Multiplying almost zero by almost zero gives something really close to zero.
Undetected errors in CRCs are more of academic interest than practical reality.
When there is packet loss, I know how to calculate the loss event rate p (I read it in the RFC document).
But when there is no packet loss, how do I calculate it? The document says nothing about it.
If the loss event rate p is zero, the denominator of the TFRC throughput equation is 0.
The equation is as follows (the throughput equation from Section 3.1 of the RFC):

X_Bps = s / ( R*sqrt(2*b*p/3) + t_RTO*(3*sqrt(3*b*p/8))*p*(1+32*p^2) )
and the document is rfc5348 : https://www.rfc-editor.org/rfc/rfc5348
I know nothing about TFRC, so this is pure guesswork.
I suppose the available-bandwidth calculation is based on the loss event rate. If no packets have been lost so far, you have zero information about the available bandwidth. Usually in this case the congestion avoidance algorithm increases the bitrate until packet drops start to occur.
In other words, if there have been no packet drops so far, you can assume your available bandwidth is unlimited and use the maximum possible value of the rate's type. This follows from the formula: division by zero gives you infinity in math.
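If it helps, here is a rough Python sketch of that throughput equation with a guard for p == 0. Parameter names follow RFC 5348; the t_RTO = 4*R simplification and the infinity fallback reflect my reading of the above rather than anything the RFC spells out this way:

from math import sqrt, inf

def tfrc_rate(s, R, p, b=1):
    # Average sending rate X_Bps from the RFC 5348 throughput equation.
    # s: segment size (bytes), R: round-trip time (s), p: loss event rate,
    # b: packets acknowledged per ACK.
    if p <= 0:
        return inf          # no loss measured yet: no equation-based limit
    t_RTO = 4 * R           # common simplification used with this equation
    return s / (R * sqrt(2 * b * p / 3) +
                t_RTO * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p * p))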
I read a while back that quantum computers can break most types of hashing and encryption in use today in a very short amount of time (I believe it was mere minutes). How is this possible? I've tried reading articles about it, but I get lost at the part where a quantum bit can be 1, 0, or something else. Can someone explain how this relates to cracking such algorithms in plain English, without all the fancy maths?
Preamble: Quantum computers are strange beasts that we really haven't yet tamed to the point of usefulness. The theory that underpins them is abstract and mathematical, so any discussion of how they can be more efficient than classical computers will inevitably be long and involved. You'll need at least an undergraduate understanding of linear algebra and quantum mechanics to understand the details, but I'll try to convey my limited understanding!
The basic premise of quantum computation is quantum superposition. The idea is that a quantum system (such as a quantum bit, or qubit, the quantum analogue of a normal bit) can, as you say, exist not only in the 0 and 1 states (called the computational basis states of the system), but also in any combination of the two (so that each has an amplitude associated with it). When the system is observed by someone, the qubit's state collapses into one of its basis states (you may have heard of the Schrödinger's cat thought experiment, which is related to this).
Because of this, a register of n qubits has 2^n basis states of its own (these are the states that you could observe the register being in; imagine a classical n-bit integer). Since the register can exist in a superposition of all these states at once, it is possible to apply a computation to all 2^n register states rather than just one of them. This is called quantum parallelism.
Because of this property of quantum computers, it may seem like they're a silver bullet that can solve any problem exponentially faster than a classical computer. But it's not that simple: the problem is that once you observe the result of your computation, it collapses (as I mentioned above) into the result of just one of the computations – and you lose all of the others.
The field of quantum computation/algorithms is all about trying to work around this problem by manipulating quantum phenomena to extract information in fewer operations than would be possible on a classical computer. It turns out that it's very difficult to contrive a "quantum algorithm" that is faster than any possible classical counterpart.
The example you ask about is that of quantum cryptanalysis. It's thought that quantum computers might be able to "break" certain encryption algorithms: specifically, the RSA algorithm, which relies on the difficulty of finding the prime factors of very large integers. The algorithm which allows for this is called Shor's algorithm, which can factor integers with polynomial time complexity. By contrast the best classical algorithm for the problem has (almost) exponential time complexity, and the problem is hence considered "intractable".
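To give a feel for where the quantum speed-up actually sits: everything in Shor's algorithm except the order-finding step is ordinary classical number theory. Here is a minimal Python sketch of that classical part, with the order found by brute force purely for illustration (that brute-force loop is exactly what the quantum subroutine replaces):

from math import gcd

def order(a, n):
    # Exponential-time classical order finding; Shor's quantum subroutine
    # computes this r in polynomial time.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)                  # lucky guess: a shares a factor with n
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                       # unlucky choice of a; pick another
    return gcd(pow(a, r // 2, n) - 1, n)  # a nontrivial factor of n

print(shor_classical_part(15, 7))   # prints 3 (and 15 // 3 == 5)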
If you want a deeper understanding of this, get a few books on linear algebra and quantum mechanics and get comfortable. If you want some clarification, I'll see what I can do!
Aside: to better understand the idea of quantum superposition, think in terms of probabilities. Imagine you flip a coin and catch it on your hand, covered so that you can't see it. As a very tenuous analogy, the coin can be thought of as being in a superposition of the heads and tails "states": each one has a probability of 0.5 (and, naturally, since there are two states, these probabilities add up to 1). When you take your hand away and observe the coin directly, it collapses into either the heads state or the tails state, and so the probability of this state becomes 1, while the other becomes 0. One way to think about it, I suppose, is a set of scales that is balanced until observation, at which point it tips to one side as our knowledge of the system increases and one state becomes the "real" state.
Of course, we don't think of the coin as a quantum system: for all practical purposes, the coin has a definite state, even if we can't see it. For genuine quantum systems, however (such as an individual particle trapped in a box), we can't think about it in this way. Under the conventional interpretation of quantum mechanics, the particle fundamentally has no definite position, but exists in all possible positions at once. Only upon observation is its position constrained in space (though only to a limited degree; cf. uncertainty principle), and even this is purely random and determined only by probability.
By the way, quantum systems are not restricted to having just two observable states (those that do are called two-level systems). Some have a large but finite number, some have a countably infinite number (such as a "particle in a box" or a harmonic oscillator), and some even have an uncountably infinite number (such as a free particle's position, which isn't constrained to individual points in space).
It's highly theoretical at this point. Quantum Bits might offer the capability to break encryption, but clearly it's not at that point yet.
At the Quantum Level, the laws that govern behavior are different than in the macro level.
To answer your question, you first need to understand how encryption works.
At a basic level, RSA-style encryption relies on the product of two extremely large prime numbers. This super-large result is divisible only by 1, itself, and those two primes.
One way to break the encryption is to brute-force guess the two primes by doing prime factorization.
This attack is slow and is thwarted by picking larger and larger primes. You hear of key sizes of 40 bits, 56 bits, 128 bits, and now 256, 512 bits and beyond. Those sizes correspond to the size of the number.
The brute force algorithm (in simplified terms) might look like
for (long i = 3; i <= key / i; i++)
{
    if (key % i == 0)
    {
        // i divides key evenly: we have found a prime factor
    }
}
So you want to brute-force try prime numbers; well, that is going to take a while with a single computer. So you might try grouping a bunch of computers together to divide and conquer. That works, but is still slow for very large key sizes.
How a quantum bit addresses this is that it is both 0 and 1 at the same time. So say you have 3 quantum bits (no small feat, mind you).
With 3 qubits, your program can hold the values 0-7 (000, 001, 010, 011, etc.) simultaneously,
which includes the prime numbers 3, 5 and 7 at the same time.
So, using the simple algorithm above, instead of increasing i by 1 each time, you can just divide once and check
0, 1, 2, 3, 4, 5, 6, 7
all at the same time.
Of course quantum bits aren't at that point yet; there is still lots of work to be done in the field, but this should give you an idea of how, if we could program using quanta, we might go about cracking encryption.
The Wikipedia article does a very good job of explaining this.
In short, if you have N qubits, your quantum computer can be in 2^N states at the same time. Conceptually similar to having 2^N CPUs processing with traditional bits (though not exactly the same).
A quantum computer can implement Shor's algorithm, which can quickly perform prime factorization. Encryption systems are built on the assumption that the product of two large primes cannot be factored in a reasonable amount of time on a classical computer.
Almost all of our public-key encryption (e.g. RSA) is based solely on math, relying on the difficulty of factorization or discrete logarithms. Both of these will be efficiently broken using quantum computers (though even after a bachelor's in CS and Math, and having taken several classes on quantum mechanics, I still don't understand the algorithm).
However, hashing algorithms (e.g. SHA-2) and symmetric-key encryption (e.g. AES), which are based mostly on diffusion and confusion, are still considered secure against quantum attacks; at worst you need longer keys.
In the most basic terms, a normal, non-quantum computer works by operating on bits (states of on or off) using Boolean logic. You do this very fast for lots and lots of bits and you can solve any problem in a class of problems that are computable.
However, there are "speed limits", namely something called computational complexity. In layman's terms this means that, for a given algorithm, the time it takes to run (and the memory space required) has a lower bound. For example, an algorithm that is O(n^2) requires on the order of n^2 time to run for an input of size n.
However, this kind of goes out the window when we have qubits (quantum bits), because operations on qubits can involve "in between" values. Algorithms that would have very high computational complexity (like factoring huge numbers, the key to cracking many encryption algorithms) can be done with much, much lower complexity. This is the reason quantum computing will be able to crack encrypted streams orders of magnitude quicker than normal computers.
First of all, quantum computing is still barely out of the theoretical stage. Lots of research is going on and a few experimental quantum cells and circuits, but a "quantum computer" does not yet exist.
Second, read the wikipedia article: http://en.wikipedia.org/wiki/Quantum_computer
In particular, "In general a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). "
What makes cryptography secure is the use of encryption keys that are very long numbers which would take a very, very long time to factor into their constituent primes, and which are sufficiently long that brute-force attempts to try every possible key value would also take too long to complete.
Since quantum computing can (theoretically) represent a lot of states in a small number of qubit cells, and operate on all of those states simultaneously, it seems there is the potential to use quantum computing to perform brute-force try-all-possible-key-values in a very short amount of time.
If such a thing is possible, it could be the end of cryptography as we know it.
Quantum computers, etc. are all lies. I don't believe these science fiction magazines.
In fact, the RSA system is based on two prime numbers and their multiplication:
p1, p2 are huge primes, and p1 x p2 = N is the modulus.
The RSA system works like this:
choose a prime number, maybe small; this is E, the public exponent
compute (p1-1)*(p2-1) = R
find a number D such that E*D = 1 mod R
we share (E, N) publicly as the public key
we securely keep (D, N) as the private key
To break this RSA system, a cracker needs to find the prime factors of N.
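The scheme above is easy to try with toy numbers. A minimal Python sketch (tiny made-up primes, no padding, purely illustrative):

p1, p2 = 61, 53
N = p1 * p2                  # modulus, published
R = (p1 - 1) * (p2 - 1)
E = 17                       # small public exponent, coprime with R
D = pow(E, -1, R)            # private exponent: E*D = 1 mod R (Python 3.8+)

message = 42
cipher = pow(message, E, N)          # anyone can encrypt with (E, N)
assert pow(cipher, D, N) == message  # only (D, N) decrypts

# The cracker's job: recover p1, p2 from N alone, which for 61*53 is trivial
# but for huge primes is the hard factoring problem discussed above.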
The mass of the universe is about 10^53 kg.
The electron mass is 9.10938291 × 10^-31 kilograms.
If we divided the universe into electrons, we could create about 10^84 electrons.
Electrons move slower than light; say each one can operate at a frequency of 10^26 per second.
If anybody produced electron-sized parallel RSA prime-factor finders from all the mass of the universe,
the whole universe could handle (10^84)*(10^26) = 10^110 numbers per second.
RSA has a practically limitless supply of alternative primes; take 4096-bit RSA:
4096-bit RSA has about 10^600 possible prime numbers to brute force.
So your universe-mass quantum solver would need to keep testing for some 10^500 years.
RSA vs. the universe-mass quantum computer:
1 - 0
Maybe a quantum computer can break 64/128-bit passwords, because a 128-bit password has about 10^39 possible brute-force candidates.
This circuit is a good start to understanding how qubit parallelism works. The 2-qubit input is on the left side: the top qubit is x and the bottom qubit is y. The y qubit is 0 at the input, just like a normal bit. The x qubit, on the other hand, is in superposition at the input. y (+) f(x) here means addition modulo 2 (1+1=0, 0+1=1+0=1). The interesting part is that, since the x qubit is in superposition, f(x) is f(0) and f(1) at the same time, so we can evaluate the function f for all states simultaneously without using any (time-consuming) loops. Given enough qubits, we can branch this into endlessly more complicated circuits.
Even more bizarre, in my opinion, is Grover's algorithm. As input we get an unsorted array of integers of length n. What is the expected runtime of an algorithm that finds the minimum value of this array? Classically we have to check every one of the n elements, giving an expected runtime of n. Not so for quantum computers: on a quantum computer we can solve this in an expected runtime of at most sqrt(n), meaning we don't even have to check every element to find the guaranteed solution...
Let's say I have a slider that can go between 0 and 1. The SoundTransform.volume also ranges between 0 (silent) and 1 (full volume), but if I use a linear mapping, say SoundTransform.volume = slider.volume, the result is not pleasing: the perceived volume changes dramatically in the lower half of the slider and does almost nothing in the upper half.
I really haven't studied the human ear, but I overheard once that human perception is logarithmic, or something similar. What algorithms should I use for setting the SoundTransform.volume?
human perception in general is logarithmic, also when it comes to things such as luminosity, etc. ... this enables us to register small changes to small "input signals" from our environment, or, to put it another way, to always perceive a change of a perceivable physical quantity in relation to its current value ...
thus, you should modify the volume to grow exponentially, like this:
y = (Math.exp(x)-1)/(Math.E-1)
you can try other bases as well:
y = (Math.pow(base,x)-1)/(base-1)
the bigger the value of base is, the stronger the effect: the volume starts growing more slowly at the beginning and grows faster towards the end ...
a slightly simpler approach, giving you similar results (you are only in the interval between 0 and 1, so approximations are quite simple, actually), is to exponentiate the original value, as
y = Math.pow(x, exp);
for exp bigger than 1, the effect is that the output (i.e. the volume in your case) first goes up more slowly, and then faster towards the end ... this is very similar to the exponential functions ... the bigger exp is, the stronger the effect ...
Human hearing is logarithmic, so you want an exponential function (the inverse) to apply to the linear output of your slider. I don't know if human hearing is closer to ln or log:
For Ln:
e^x
For Log:
10^x
You could experiment with other bases too. You will then need to scale your output so that it covers the available range of values.
Update
After a bit of research it seems that base 2 would be appropriate since the power is related to the square of the pressure. If anyone knows better, please correct me.
I think what you want is:
v' = 2^v * a^v - 1
a = ( 2^(log2(m+1)/n) )/2
v is your linear input value ranging from 0..n
v' is your logarithmic value ranging from 0..m
The -1 in the first equation is to give you an output range from 0 instead of 1 (since k^0=1).
The m+1 is to compensate for this so you get 0..m not 0..m+1
You can of course tweak this to suit your requirements.
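A small Python sketch of that mapping (translate to ActionScript as needed; the function and parameter names are mine):

import math

def slider_to_volume(v, n=1.0, m=1.0):
    # v' = 2^v * a^v - 1 = (2a)^v - 1, with a chosen so that (2a)^n = m + 1,
    # i.e. the slider range 0..n maps onto the volume range 0..m.
    a = 2 ** (math.log2(m + 1) / n) / 2
    return (2 * a) ** v - 1

# slider_to_volume(0.0) == 0.0 and slider_to_volume(1.0) == 1.0,
# with an exponential curve in between.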
Hearing is complicated, the perceived loudness varies according to frequency, the duration of the sample, and from person to person. So this cannot be solved mathematically but by trying a variety of functions for the control and picking the one which 'feels' the best.
Do you find at the moment that varying the control at the low end of the range has little effect on the apparent volume, but that the volume increases rapidly at the upper end of the range? Or do you hear the reverse, the volume varies too quickly at the low end and not enough at the high end? Or would you like finer control over the volume at medium levels?
Increased low-volume sensitivity:
SoundTransform.volume = Math.sin(x * Math.PI / 2);
Increased high-volume sensitivity:
SoundTransform.volume = (Math.pow(base,x) - 1)/(base-1);
or
SoundTransform.volume = Math.pow(x, base);
Where base > 1, try different values and see how it feels. Or more drastically, a 90 degree circular arc:
SoundTransform.volume = 1 - Math.sqrt(1-(x * x));
Where x is slider.volume and is between 0 and 1.
Please do let us know how you get on!
Yes, human perception is logarithmic. Considering this, you should adjust the volume exponentially, so that the perceived increase becomes linear. See decibel on Wikipedia.
Android already does this in its audio framework. It uses decibels to adjust the volume: the user steps through levels such as 1 to 7 for the ringtone or 1 to 15 for music.
The set-volume API is called linearly, but the resulting amplitude changes exponentially.
A 3 dB increase means you are doubling the power, but the human ear requires roughly a 6-10 dB increase to perceive the sound as twice as loud.
However, a strictly logarithmic curve, while accurately modeling the human perception of volume, has a usability problem.
When people want a loud volume, the knob becomes too sensitive at the upper end, making it difficult to find the "right" volume.
You've probably had this problem before... 7 is too soft, 8 is too loud, meanwhile 1-3 are inaudible over background noise.
So, I recommend a logarithmic scale, but with a floor at the low end and a soft knee at the top to allow a more linear response, especially in the "loud" part of the knob.
Oh, and make sure the knob goes up to 11. ;)
The human ear indeed perceives sounds on a logarithmic scale of increasing intensity, and because of that, the unit generally used to measure acoustic intensity is the decibel (which is actually used for all sorts of intensities and powers, not just those of sound, and also happens to be a dimensionless unit). The reference level, 0 dB, is usually set to the lower bound of human hearing, and every ten-decibel increase above that is equivalent to an increase in power by a factor of 10.
Note, however, that you should first check with other people and see what they think, just in case; what sounds odd to you may not sound odd to others. If they agree with you, then go right ahead and do it exponentially, but if you're in the minority, then it might just be your own ears that are the problem.
EDIT: Ignore my previous third paragraph. Refer to back2dos's answer if you decide to do it exponentially.
This is a JavaScript function I have for a logarithmic dB scale.
The input is a percentage (0.00 to 1.00) and the maximum value (my implementation uses 12 dB).
The mid point is set to 0.5, and that maps to 0 dB.
When the percentage is zero, the output is negative infinity.
function percentageToDb(p, max) {
    return max * (1 - (Math.log(p) / Math.log(0.5)));
}
we have a particle detector hard-wired to use 16-bit and 8-bit buffers. Every now and then, there are certain [predicted] peaks of particle fluxes passing through it; that's okay. What is not okay is that these fluxes usually reach magnitudes above the capacity of the buffers to store them; thus, overflows occur. On a chart, they look like the flux suddenly drops and begins growing again. Can you propose a [mostly] accurate method of detecting points of data suffering from an overflow?
P.S. The detector is physically inaccessible, so fixing it the 'right way' by replacing the buffers doesn't seem to be an option.
Update: Some clarifications as requested. We use python at the data processing facility; the technology used in the detector itself is pretty obscure (treat it as if it was developed by a completely unrelated third party), but it is definitely unsophisticated, i.e. not running a 'real' OS, just some low-level stuff to record the detector readings and to respond to remote commands like power cycle. Memory corruption and other problems are not an issue right now. The overflows occur simply because the designer of the detector used 16-bit buffers for counting the particle flux, and sometimes the flux exceeds 65535 particles per second.
Update 2: As several readers have pointed out, the intended solution would have something to do with analyzing the flux profile to detect sharp declines (e.g. by an order of magnitude) in an attempt to separate them from normal fluctuations. Another problem arises: can restorations (points where the original flux drops below the overflowing level) be detected by simply running the correction program against the reverted (by the x axis) flux profile?
// Java version of the pseudocode idea: unwrap 16-bit counter values
// into a 32-bit array.
static int[] unwrap(short[] x)
{
    int[] y = new int[x.length];
    y[0] = x[0];
    for (int i = 1; i < x.length; i++)
    {
        y[i] = y[i - 1] + signExtend((short) (x[i] - x[i - 1]));
        // works fine as long as the "real" values of x[i] and x[i-1]
        // differ by less than 1/2 of the span of allowable values
        // of x's storage type (= 32768 in the case of int16);
        // otherwise there is ambiguity.
    }
    return y;
}

static int signExtend(short x)
{
    return x; // widening a short to int sign-extends in Java and in most C compilers
}

// exercise for the reader: write similar code to unwrap 8-bit arrays
// to a 16-bit or 32-bit array
Of course, ideally you'd fix the detector software to max out at 65535 to prevent wraparound of the sort that is causing your grief. I understand that this isn't always possible, or at least isn't always possible to do quickly.
When the particle flux exceeds 65535, does it do so quickly, or does the flux gradually increase and then gradually decrease? This makes a difference in what algorithm you might use to detect this. For example, if the flux goes up slowly enough:
true flux    measurement
    5000        5000
   10000       10000
   30000       30000
   50000       50000
   70000        4465
   90000       24465
   60000       60000
   30000       30000
   10000       10000
then you'll tend to have a large negative drop at times when you have overflowed. A much larger negative drop than you'll have at any other time. This can serve as a signal that you've overflowed. To find the end of the overflow time period, you could look for a large jump to a value not too far from 65535.
All of this depends on the maximum true flux that is possible and on how rapidly the flux rises and falls. For example, is it possible to get more than 128k counts in one measurement period? Is it possible for one measurement to be 5000 and the next to be 50000? If the data is not well-behaved enough, you may only be able to make a statistical judgment about when you have overflowed.
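Since the processing side is in Python, here is a minimal sketch of that "large negative drop" heuristic, assuming 16-bit counters and a plain list of readings (the half-span threshold is the same ambiguity limit mentioned in the unwrap answer above):

def unwrap_counts(readings, span=65536):
    # Whenever a sample drops by more than half the counter range,
    # assume the 16-bit counter wrapped and add another multiple of span.
    corrected, offset, prev = [], 0, None
    for r in readings:
        if prev is not None and prev - r > span // 2:
            offset += span
        corrected.append(r + offset)
        prev = r
    return corrected

# Example: unwrap_counts([50000, 4465, 24465]) -> [50000, 70001, 90001],
# close to the true fluxes in the table above.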
Your question needs to provide more information about your implementation - what language/framework are you using?
Data overflows in software (which is what I think you're talking about) are bad practice and should be avoided. What you are seeing (strange data output) is only one possible side effect of data overflows; it is merely the tip of the iceberg of the sorts of issues you can see.
You could quite easily experience more serious issues like memory corruption, which can cause programs to crash loudly, or worse, obscurely.
Is there any validation you can do to prevent the overflows from occurring in the first place?
I really don't think you can fix it without fixing the underlying buffers. How are you supposed to tell the difference between the sequences of values (0, 1, 2, 1, 0) and (0, 1, 65538, 1, 0)? You can't.
How about using an HMM where the hidden state is whether you are in an overflow and the emissions are observed particle flux?
The tricky part would be coming up with the probability models for the transitions (which will basically encode the time-scale of peaks) and for the emissions (which you can build if you know how the flux behaves and how overflow affects measurement). These are domain-specific questions, so there probably aren't ready-made solutions out there.
But once you have the model, everything else (fitting your data, quantifying uncertainty, simulation, etc.) is routine.
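For what it's worth, here is a toy Python sketch of the decoding step, with two hidden states and the observations crudely discretized into "step" vs. "drop". All of the probabilities are made-up placeholders; they would have to come from your flux model, as noted above:

from math import log

states = ("normal", "overflowed")

# P(next state | current state): overflows are rare and short-lived (made up).
trans = {"normal":     {"normal": 0.95, "overflowed": 0.05},
         "overflowed": {"normal": 0.30, "overflowed": 0.70}}

# P(observation | state): a big negative jump ("drop") is the overflow signature.
emit = {"normal":     {"step": 0.99, "drop": 0.01},
        "overflowed": {"step": 0.40, "drop": 0.60}}

start = {"normal": 0.99, "overflowed": 0.01}

def viterbi(obs):
    # Standard Viterbi: best log-probability path ending in each state.
    best = {s: log(start[s]) + log(emit[s][obs[0]]) for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_best, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda q: best[q] + log(trans[q][s]))
            new_best[s] = best[prev] + log(trans[prev][s]) + log(emit[s][o])
            new_path[s] = path[prev] + [s]
        best, path = new_best, new_path
    return path[max(states, key=lambda s: best[s])]

print(viterbi(["step", "step", "drop", "step", "drop", "step"]))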
You can only do this if the actual jumps between successive values are much smaller than 65536. Otherwise, an overflow-induced valley artifact is indistinguishable from a real valley, and you can only guess. You can try to match overflows to corresponding restorations by simultaneously analysing the signal from the right and from the left (assuming there is a recognizable baseline).
Other than that, all you can do is to adjust your experiment by repeating it with different original particle flows, so that real valleys will not move, but artifact ones move to the point of overflow.