Is there a standard way to do this?
Googling -- "approximate entropy" bits -- uncovers multiple academic papers but I'd like to just find a chunk of pseudocode defining the approximate entropy for a given bit string of arbitrary length.
(In case this is easier said than done and it depends on the application, my application involves 16,320 bits of encrypted data (cyphertext). But encrypted as a puzzle and not meant to be impossible to crack. I thought I'd first check the entropy but couldn't easily find a good definition of such. So it seemed like a question that ought to be on StackOverflow! Ideas for where to begin with de-cyphering 16k random-seeming bits are also welcome...)
See also this related question:
What is the computer science definition of entropy?
Entropy is not a property of the string you got, but of the strings you could have obtained instead. In other words, it qualifies the process by which the string was generated.
In the simple case, you get one string among a set of N possible strings, where each string has the same probability of being chosen as every other, i.e. 1/N. In that situation, the string is said to have an entropy of N. The entropy is often expressed in bits, which is a logarithmic scale: an entropy of "n bits" is an entropy equal to 2^n.
For instance: I like to generate my passwords as two lowercase letters, then two digits, then two lowercase letters, and finally two digits (e.g. va85mw24). Letters and digits are chosen randomly, uniformly, and independently of each other. This process may produce 26*26*10*10*26*26*10*10 = 4569760000 distinct passwords, and all these passwords have equal chances to be selected. The entropy of such a password is then 4569760000, which means about 32.1 bits.
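As a quick sanity check of that arithmetic, here is a minimal Python sketch (the numbers are just the ones from the example above):

    import math

    # 2 lowercase letters, 2 digits, 2 lowercase letters, 2 digits
    combinations = 26 * 26 * 10 * 10 * 26 * 26 * 10 * 10
    print(combinations)             # 4569760000
    print(math.log2(combinations))  # ~32.09 -> "about 32.1 bits"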
Shannon's entropy equation is the standard method of calculation. Here is a simple implementation in Python, shamelessly copied from the Revelation codebase, and thus GPL licensed:
    import math

    def entropy(string):
        "Calculates the Shannon entropy of a string"

        # get probability of chars in string
        prob = [float(string.count(c)) / len(string) for c in dict.fromkeys(list(string))]

        # calculate the entropy
        entropy = -sum([p * math.log(p) / math.log(2.0) for p in prob])

        return entropy

    def entropy_ideal(length):
        "Calculates the ideal Shannon entropy of a string with given length"

        prob = 1.0 / length

        return -1.0 * length * prob * math.log(prob) / math.log(2.0)
Note that this implementation assumes that your input bit-stream is best represented as bytes. This may or may not be the case for your problem domain. What you really want is your bitstream converted into a string of numbers. Just how you decide on what those numbers are is domain specific. If your numbers really are just one and zeros, then convert your bitstream into an array of ones and zeros. The conversion method you choose will affect the results you get, however.
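For example, one way to do that conversion (a sketch, assuming your data arrives as Python bytes and that you want per-bit rather than per-byte entropy) is to expand each byte into '0'/'1' characters before calling the entropy() helper above:

    # expand bytes into a string of '0' and '1' characters
    data = b"\x00\xff\x0f\xf0"   # stand-in for your 16,320-bit ciphertext
    bits = "".join(format(byte, "08b") for byte in data)

    print(entropy(bits))                     # per-bit entropy, at most 1.0 bit per symbol
    print(entropy(data.decode("latin-1")))   # per-byte entropy, at most 8.0 bits per symbol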
I believe the answer is the Kolmogorov Complexity of the string.
Not only is this not answerable with a chunk of pseudocode, Kolmogorov complexity is not a computable function!
One thing you can do in practice is compress the bit string with the best available data compression algorithm.
The more it compresses the lower the entropy.
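As a rough sketch of that idea in Python (using zlib at its highest setting as the "best available" compressor, which is an assumption; a stronger compressor such as LZMA would give a tighter bound):

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        """Compressed size / original size: the closer to 1.0, the higher the apparent entropy."""
        return len(zlib.compress(data, 9)) / len(data)

    print(compression_ratio(b"ab" * 1000))      # tiny ratio: highly predictable, low entropy
    print(compression_ratio(os.urandom(2000)))  # near (or slightly above) 1.0: looks random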
The NIST Random Number Generator evaluation toolkit has a way of calculating "Approximate Entropy." Here's the short description:
Approximate Entropy Test Description: The focus of this test is the frequency of each and every overlapping m-bit pattern. The purpose of the test is to compare the frequency of overlapping blocks of two consecutive/adjacent lengths (m and m+1) against the expected result for a random sequence.
And a more thorough explanation is available from the PDF on this page:
http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html
There is no single answer. Entropy is always relative to some model. When someone talks about a password having limited entropy, they mean "relative to the ability of an intelligent attacker to predict", and it's always an upper bound.
Your problem is, you're trying to measure entropy in order to help you find a model, and that's impossible; what an entropy measurement can tell you is how good a model is.
Having said that, there are some fairly generic models that you can try; they're called compression algorithms. If gzip can compress your data well, you have found at least one model that can predict it well. And gzip is, for example, mostly insensitive to simple substitution. It can handle "wkh" frequently in the text as easily as it can handle "the".
Using the Shannon entropy of a word with this formula: http://imgur.com/a/DpcIH
Here's an O(n) algorithm that calculates it:
    import math
    from collections import Counter

    def entropy(s):
        l = float(len(s))
        return -sum(map(lambda a: (a / l) * math.log2(a / l), Counter(s).values()))
Here's an implementation in Python (I also added it to the Wiki page):
    import numpy as np

    def ApEn(U, m, r):
        """Approximate entropy of the sequence U with pattern length m and tolerance r."""

        def _maxdist(x_i, x_j):
            return max([abs(ua - va) for ua, va in zip(x_i, x_j)])

        def _phi(m):
            x = [[U[j] for j in range(i, i + m - 1 + 1)] for i in range(N - m + 1)]
            C = [len([1 for x_j in x if _maxdist(x_i, x_j) <= r]) / (N - m + 1.0) for x_i in x]
            return (N - m + 1.0) ** (-1) * sum(np.log(C))

        N = len(U)

        return _phi(m) - _phi(m + 1)
Example:
>>> U = np.array([85, 80, 89] * 17)
>>> ApEn(U, 2, 3)
1.0996541105257052e-05
The above example is consistent with the example given on Wikipedia.
Related
I am looking for some precise math on the likelihood of collisions for MD5, SHA1, and SHA256 based on the birthday paradox.
I am looking for something like a graph that says "If you have 10^8 keys, this is the probability. If you have 10^13 keys, this is the probability and so on"
I have looked at tons of articles but I am having a tough time finding something that gives me this data. (Ideal option for me would be a formula or code that calculates this for any provided hash size)
Let's imagine we have a truly random hash function that hashes from strings to n-bit numbers. This means that there are 2^n possible hash codes, and each string's hash code is chosen uniformly at random from all of those possibilities.
The birthday paradox specifically says that once you've seen roughly √(2k) items, there's a 50% chance of a collision, where k is the number of distinct possible outputs. In the case where the hash function hashes to an n-bit output, this means that you'll need roughly 2^(n/2) hashes before you get a collision. This is why we typically pick hashes that output 256 bits; it means that we'd need a staggering 2^128 ≈ 10^38 items hashed before there's a "reasonable" chance of a collision. With a 512-bit hash, you'd need about 2^256 to get a 50% chance of a collision, and 2^256 is approximately the number of protons in the known universe.
The exact formula for the probability of getting a collision with an n-bit hash function and k strings hashed is
1 - (2^n)! / (2^(kn) · (2^n - k)!)
This is a fairly tricky quantity to work with directly, but we can get a decent approximation of this quantity using the expression
1 - e^(-k^2 / 2^(n+1))
So, to get (roughly) a probability p chance of a collision, we can solve to get
p ≈ 1 - e^(-k^2 / 2^(n+1))
1 - p ≈ e^(-k^2 / 2^(n+1))
ln(1 - p) ≈ -k^2 / 2^(n+1)
-ln(1 - p) ≈ k^2 / 2^(n+1)
-2^(n+1) ln(1 - p) ≈ k^2
2^((n+1)/2) √(-ln(1 - p)) ≈ k
As one last approximation, assume we're dealing with very small choices of p. Then ln(1 - p) ≈ -p, so we can rewrite this as
k ≈ 2^((n+1)/2) √p
Notice that there's still a monster 2^((n+1)/2) term here, so for a 256-bit hash that leading term is 2^128.5, which is just enormous. For example, how many items must we see to get a 2^-50 chance of a collision with a 256-bit hash? That would be approximately
2^((256+1)/2) √(2^-50)
= 2^(257/2) · 2^(-50/2)
= 2^(207/2)
= 2^103.5.
So you'd need a staggeringly huge number of hashes to have a vanishingly small chance of getting a collision. Figure that 2^103.5 is about 10^31, which at one nanosecond per hash computed would take you longer than the length of the universe to compute. And after all that, you'd get a success probability of 2^-50, which is about 10^-15.
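To play with these numbers yourself, here is a small Python sketch of the approximation derived above, k ≈ 2^((n+1)/2) · √(-ln(1 - p)); the specific n and p values are just examples:

    import math

    def items_for_collision_probability(n_bits: int, p: float) -> float:
        """Approximate number of hashes needed for collision probability p with an n-bit hash,
        using k ~ 2^((n+1)/2) * sqrt(-ln(1 - p))."""
        return 2 ** ((n_bits + 1) / 2) * math.sqrt(-math.log(1 - p))

    # ~50% chance of an MD5 (128-bit) collision: about 2^64.2 items
    print(math.log2(items_for_collision_probability(128, 0.5)))

    # the worked example above: 256-bit hash, p = 2^-50 -> about 2^103.5 items
    print(math.log2(items_for_collision_probability(256, 2.0 ** -50)))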
In fact, this is precisely why we pick such large numbers of bits for our hashes! It makes it extremely unlikely for a collision to occur by chance.
(Note that the hash functions we have today aren't actually truly random functions, which is why people advise against using MD5, SHA1, and others that have had security weaknesses exposed.)
Hope this helps!
Is there any (simple) random generation function that can work without variable assignment? Most functions I read look like this current = next(current). However currently I have a restriction (from SQLite) that I cannot use any variable at all.
Is there a way to generate a number sequence (for example, from 1 to max) with only n (current number index in the sequence) and seed?
Currently I am using this:
cast(((1103515245 * Seed * ROWID + 12345) % 2147483648) / 2147483648.0 * Max as int) + 1
with Max being 47 and ROWID being n. However, for some seeds the repeat rate is too high (3 unique values out of 47).
In my requirements, repetition is ok as long as it's not too much (<50%). Is there any better function that meets my need?
The question has sqlite tag but any language/pseudo-code is ok.
P.S.: I have tried using linear congruential generators with some a/c/m triplets and Seed * ROWID as the seed, but it does not work well; it's even worse.
EDIT: I currently use this one, but I do not know where it's from. The rate looks better than mine:
((((Seed * ROWID) % 79) * 53) % "Max") + 1
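One way to compare formulas like these is to simulate them outside SQLite and count how many distinct values they produce. A small Python sketch (the Seed and Max values are just examples):

    def candidate(seed: int, rowid: int, max_value: int) -> int:
        """The ((((Seed * ROWID) % 79) * 53) % Max) + 1 formula from above."""
        return ((((seed * rowid) % 79) * 53) % max_value) + 1

    max_value = 47
    for seed in (1, 7, 1234):
        values = [candidate(seed, rowid, max_value) for rowid in range(1, max_value + 1)]
        print(seed, len(set(values)), "unique out of", max_value)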
I am not sure if you still have the same problem but I might have a solution for you.
What you could do is use pseudo-random m-sequence generators based on shifting registers, where you just take a primitive polynomial of high enough order; you don't really need to store any variables.
For more info you can check the wiki page
What you would need to code is just the primitive polynomial shifting equation, and I have checked in an online editor that it is very easy to do. I think the easiest way for you would be to work in binary and use PRBS sequences; depending on how many elements you will have, you can choose your sequence length. For example, this is the implementation for a length of 2^15 = 32768 (PRBS15); the primitive polynomial I took from the wiki page (there you can find the primitive polynomials all the way up to PRBS31, which would be 2^31 = 2.1475e+09).
Basically what you need to do is:
SELECT (((ROWID << 1) | (((ROWID >> 14) & 1) <> ((ROWID >> 13) & 1))) & 0x7fff)
The beauty of this approach is if you take the sequence of the PRBS with longer period than your ROWID largest value you will have unique random index. Very simple. :)
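If it helps to see the idea outside SQL, here is the same PRBS15 step written as a Python sketch; iterating it from any non-zero state visits all 2^15 - 1 non-zero states before repeating:

    def prbs15_step(state: int) -> int:
        """One step of the x^15 + x^14 + 1 LFSR (PRBS15), mirroring the SQL expression above."""
        feedback = ((state >> 14) ^ (state >> 13)) & 1
        return ((state << 1) | feedback) & 0x7FFF

    # walk the full period starting from state 1
    seen = set()
    state = 1
    while state not in seen:
        seen.add(state)
        state = prbs15_step(state)
    print(len(seen))   # 32767 == 2**15 - 1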
If you need help with searching for primitive polynomials, you can see my github repo, which deals exactly with finding primitive polynomials and unique m-sequences. It is currently written in Matlab, but I plan to write it in Python in the next few days.
Cheers!
What about using a good hash function and mapping the result into the [1...Max] range?
Along these lines (in pseudocode); sha1 was added to SQLite 3.17:
sha1(ROWID) % Max + 1
Or use any external C code for hash (murmur, chacha, ...) as shown here
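Outside SQLite, the same idea looks like this in Python (a sketch; hashlib's sha1 stands in for whatever hash function your environment provides):

    import hashlib

    def pseudo_random(rowid: int, max_value: int) -> int:
        """Map ROWID to [1...Max] via a hash, i.e. sha1(ROWID) % Max + 1."""
        digest = hashlib.sha1(str(rowid).encode()).digest()
        return int.from_bytes(digest, "big") % max_value + 1

    print([pseudo_random(rowid, 47) for rowid in range(1, 11)])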
A linear congruential generator with appropriately-chosen parameters (a, c, and modulus m) will be a full-period generator, such that it cycles pseudorandomly through every integer in its period before repeating. Although you may have tried this idea before, have you considered that m is equivalent to max in your case? For a list of parameter choices for such generators, see L'Ecuyer, P., "Tables of Linear Congruential Generators of Different Sizes and Good Lattice Structure", Mathematics of Computation 68(225), January 1999.
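As a small illustration of the full-period property, here is a sketch with deliberately tiny parameters (a = 5, c = 3, m = 64, which satisfy the usual full-period conditions); the generators in L'Ecuyer's tables are what you would actually want in practice:

    def lcg_sequence(a: int, c: int, m: int, seed: int):
        """Yield m successive LCG values; with full-period parameters they form a permutation of 0..m-1."""
        x = seed
        for _ in range(m):
            x = (a * x + c) % m
            yield x

    values = list(lcg_sequence(a=5, c=3, m=64, seed=0))
    print(len(set(values)) == 64)   # True: every residue appears exactly once before repeating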
Note that there are some practical issues to implementing this in SQLite, especially if your SQLite version supports only 32-bit integers and 64-bit floating-point numbers (with 52 bits of precision). Namely, there may be a risk of—
overflow if an intermediate multiplication exceeds 32 bits for integers, and
precision loss if an intermediate multiplication results in a greater-than-52-bit number.
Also, consider why you are creating the random number sequence:
Is the sequence intended to be unpredictable? In that case, a linear congruential generator alone is not enough, and you should generate unique identifiers by other means, such as by combining unique numbers with cryptographically random numbers.
Will the numbers generated this way be exposed in any way to end users? If not, there is no need to obfuscate them by "shuffling" them.
Also, depending on the SQLite API you're using (for your programming language), there may be a way to write a custom function to convert the seed and ROWID to a random unique number. The details, however, depend heavily on the specific SQLite API. Another answer shows an example for Perl.
I would like to have a function f(x) that gives good pseudo-random numbers in uniform distribution according to value x. I am aware of linear congruential generators, however these work in iterations, i.e. I provide the initial seed and then I get a sequence of random values one by one. This is not what I want, because if a want to get let's say 200000th number in the sequence, I have to compute numbers 1 ... 199999. I need a function that is given by one simple formula that uses basic operations such as +, *, mod, etc. I am also aware of hash functions but I didn't find any that suits these needs. I might come up with some function myself, but I'd like to use something that's been tested to give decent pseudo-random values. Is there anything like that being used?
You might consider multiplicative congruential generators. These are linear congruentials without the additive constant: X_{i+1} = (a * X_i) % c for suitable constants a and c. Expanding this out for a few iterations will convince you that X_k = (a^k * X_0) % c, where X_0 is your seed value. This can be calculated in O(log(k)) time using fast modular exponentiation. No need to calculate the first 199,999 values to get the 200,000th; you can find it in something proportional to about 18 steps.
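A quick sketch of that jump-ahead in Python (the parameters below are the classic "minimal standard" MCG values, used here purely for illustration):

    def mcg_jump(seed: int, a: int, c: int, k: int) -> int:
        """k-th value of the MCG X_{i+1} = (a * X_i) % c, computed directly as (a^k * X_0) % c."""
        return (pow(a, k, c) * seed) % c

    a, c, seed = 16807, 2147483647, 42
    # iterating one step at a time...
    x = seed
    for _ in range(200000):
        x = (a * x) % c
    # ...matches jumping straight to the 200,000th value
    print(x == mcg_jump(seed, a, c, 200000))   # True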
Actually, for an LCG with an additive constant it works as well. There is a paper by F. Brown, "Random Number Generation with Arbitrary Stride", Trans. Am. Nucl. Soc. (Nov. 1994). Based on this paper there is a reasonable LCG with decent quality and a log2(N) skip-ahead feature, used by the well-known Monte Carlo package MCNP5. A C++ port is here: https://github.com/Iwan-Zotow/LCG-PLE63/. A further development of this idea (RNG with logarithmic skip-ahead) is the pretty decent family of generators at http://www.pcg-random.org/
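For the LCG with an additive constant, the skip-ahead composes the affine map x -> a*x + c by repeated squaring, in the spirit of Brown's paper (this is a sketch, not the MCNP5 code; the a, c, m values below are only illustrative):

    def lcg_skip(seed: int, a: int, c: int, m: int, k: int) -> int:
        """Advance the LCG X_{i+1} = (a*X_i + c) % m by k steps in O(log k) time."""
        A, C = 1, 0           # accumulated affine map: x -> A*x + C
        h, f = a % m, c % m   # the single-step map, repeatedly squared
        while k:
            if k & 1:
                A, C = (A * h) % m, (C * h + f) % m
            f = (f * (h + 1)) % m
            h = (h * h) % m
            k >>= 1
        return (A * seed + C) % m

    # sanity check against step-by-step iteration
    a, c, m, seed = 1103515245, 12345, 2 ** 31, 987654321
    x = seed
    for _ in range(200000):
        x = (a * x + c) % m
    print(x == lcg_skip(seed, a, c, m, 200000))   # True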
You could use a simple encryption algorithm that can encrypt the numbers 1, 2, 3, ... Since encryption is reversible, each input number will have a unique output. The 200000th number in your sequence is encrypt(key, 200000). Use DES for 64 bit numbers, AES for 128 bit numbers and you can roll your own simple Feistel cipher for 32 bit or 16 bit numbers.
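A toy version of that last suggestion, as a Python sketch: a hypothetical 4-round Feistel network over 32-bit values, where the round function is an arbitrary mixing function chosen for illustration rather than a vetted cipher:

    def feistel32(value: int, keys=(0xA5A5, 0x3C3C, 0x0F0F, 0x9999)) -> int:
        """Bijective 'encryption' of a 32-bit integer, so distinct inputs give distinct outputs."""
        left, right = value >> 16, value & 0xFFFF
        for key in keys:
            # toy round function: multiply-and-xor mixing into 16 bits
            f = ((right * 0x9E37 + key) ^ (right >> 3)) & 0xFFFF
            left, right = right, left ^ f
        return (left << 16) | right

    # the 200,000th "random" number of the sequence is just feistel32(200000)
    print(feistel32(200000))
    # bijectivity check on a small range: no two inputs collide
    print(len({feistel32(i) for i in range(100000)}) == 100000)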
I need to multiply long integer numbers with an arbitrary BASE of the digits using FFT in integer rings. Operands are always of length n = 2^k for some k, and the convolution vector has 2n components, therefore I need a 2n'th primitive root of unity.
I'm not particularly concerned with efficiency issues, so I don't want to use Strassen & Schönhage's algorithm - just computing basic convolution, then some carries, and that's nothing else.
Even though it seems simple to many mathematicians, my understanding of algebra is really bad, so I have lots of questions:
What are essential differences or nuances between performing the FFT in integer rings modulo 2^n + 1 (perhaps composite) and in integer FIELDS modulo some prime p?
I ask this because 2 is a (2n)th primitive root of unity in such a ring, because 2^n == -1 (mod 2^n+1). In contrast, integer field would require me to search for such a primitive root.
But maybe there are other nuances which will prevent me from using rings of such a form for the FFT.
If I picked integer rings, what are sufficient conditions for the existence of a 2^n-th root of unity in such a ring?
All other 2^k-th roots of unity of smaller order could be obtained by squaring this root, right?..
What essential restrictions are imposed on the multiplication by the modulo of the ring? Maybe on their length, maybe on the numeric base, maybe even on the numeric types used for multiplication.
I suspect that there may be some loss of information if the coefficients of the convolution are reduced by the modulo operation. Is it true and why?.. What are general conditions that will allow me to avoid this?
Is there any possibility that just primitive-typed dynamic lists (i.e. long) will suffice for FFT vectors, their product and the convolution vector? Or should I transform the coefficients to BigInteger just in case (and what is the "case" when I really should)?
If a general answer to these question takes too long, I would be particularly satisfied by an answer under the following conditions. I've found a table of primitive roots of unity of order up to 2^30 in the field Z_70383776563201:
http://people.cis.ksu.edu/~rhowell/calculator/roots.html
So if I use 2^30th root of unity to multiply numbers of length 2^29, what are the precision/algorithmic/efficiency nuances I should consider?..
Thank you so much in advance!
I am going to award a bounty to the best answer - please consider helping out with some examples.
First, an arithmetic clue about your identity: 70383776563201 = 1 + 65550 * 2^30. And that long number is prime. There's a lot of insight into your modulus on the page How the FFT constants were found.
Here's a fact of group theory you should know. The multiplicative group of integers modulo N is the product of cyclic groups whose orders are determined by the prime factors of N. When N is prime, there's one cycle. The orders of the elements in such a cyclic group, however, are related to the prime factors of N - 1. Here 70383776563201 - 1 = 2^31 * 3 * 5^2 * 19 * 23, and the divisors of that number are the possible orders of elements.
(1) You don't need a primitive root necessarily, you need one whose order is at least large enough. There are some probabilistic algorithms for finding elements of "high" order. They're used in cryptography for ensuring you have strong parameters for keying materials. For numbers of the form 2^n+1 specifically, they've received a lot of factoring attention and you can go look up the results.
(2) The sufficient (and necessary) condition for an element of order 2^n is illustrated by the example modulus. The condition is that some prime factor p of the modulus has to have the property that 2^n | p - 1.
(3) Loss of information only happens when elements aren't multiplicatively invertible, which isn't the case for the cyclic multiplicative group of a prime modulus. If you work in a modular ring with a composite modulus, some elements are not so invertible.
(4) If you want to use arrays of long, you'll be essentially rewriting your big-integer library.
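To make point (2) concrete: since 2^31 divides p - 1 for the example prime p = 70383776563201, an element of order 2^30 can be found by raising a random base to the power (p - 1) / 2^30 and rejecting it if it has smaller order. A Python sketch:

    import random

    p = 70383776563201        # prime; p - 1 = 65550 * 2^30
    order = 2 ** 30
    assert (p - 1) % order == 0

    def element_of_order(p: int, order: int) -> int:
        """Find w with w^order == 1 and w^(order/2) != 1 (order is a power of two here)."""
        cofactor = (p - 1) // order
        while True:
            g = random.randrange(2, p - 1)
            w = pow(g, cofactor, p)
            if pow(w, order // 2, p) != 1:
                return w

    w = element_of_order(p, order)
    print(pow(w, order, p))        # 1
    print(pow(w, order // 2, p))   # p - 1, i.e. -1 mod p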
Suppose we need to multiply two n-bit integers, where
n = 2^30; m = 2*n; p = 2^n + 1
Now,
w = 2, x = [w^0, w^1, ..., w^{m-1}] (mod p).
The issue is that each x[i] will be too large, and we cannot compute w * a_i in O(1) time.
As a programmer I think it is my job to be good at math, but I am having trouble getting my head round imaginary numbers. I have tried Google and Wikipedia with no luck, so I am hoping a programmer can explain it to me, give me an example of a number squared that is <= 0, some example usage, etc...
I guess this blog entry is one good explanation:
The key word is rotation (as opposed to direction for negative numbers, which are as strange as imaginary numbers when you think about them: less than nothing?)
Like negative numbers modeling flipping, imaginary numbers can model anything that rotates between two dimensions “X” and “Y”. Or anything with a cyclic, circular relationship
Problem: not only am I a programmer, I am a mathematician.
Solution: plow ahead anyway.
There's nothing really magical to complex numbers. The idea behind their inception is that there's something wrong with real numbers. If you've got an expression like x^2 + 4, it is never zero, whereas x^2 - 2 is zero twice. So mathematicians got really angry and wanted there to always be zeroes with polynomials of degree at least one (wanted an "algebraically closed" field), and created some arbitrary number j such that j = sqrt(-1). All the rules sort of fall into place from there (though they are more accurately reorganized differently; specifically, you formally can't actually say "hey this number is the square root of negative one"). If there's that number j, you can get multiples of j. And you can add real numbers to j, so then you've got complex numbers. The operations with complex numbers are similar to operations with binomials (deliberately so).
The real problem with complexes isn't in all this, but in the fact that you can't define a system whereby you can get the ordinary rules for less-than and greater-than. So really, you get to where you don't define it at all. It doesn't make sense in a two-dimensional space. So in all honesty, I can't actually answer "give me an example of a number squared that is <= 0", though "j" makes sense if you treat its square as a real number instead of a complex number.
As for uses, well, I personally used them most when working with fractals. The idea behind the Mandelbrot fractal is that it's a way of graphing z = z^2 + c and its divergence along the real-imaginary axes.
You might also ask why do negative numbers exist? They exist because you want to represent solutions to certain equations like: x + 5 = 0. The same thing applies for imaginary numbers, you want to compactly represent solutions to equations of the form: x^2 + 1 = 0.
Here's one way I've seen them being used in practice. In EE you are often dealing with functions that are sine waves, or that can be decomposed into sine waves. (See for example Fourier Series).
Therefore, you will often see solutions to equations of the form:
f(t) = A*cos(wt)
Furthermore, often you want to represent functions that are shifted by some phase from this function. A 90 degree phase shift will give you a sin function.
g(t) = B*sin(wt)
You can get any arbitrary phase shift by combining these two functions (called inphase and quadrature components).
h(t) = A*cos(wt) + i*B*sin(wt)
The key here is that in a linear system: if f(t) and g(t) solve an equation, h(t) will also solve the same equation. So, now we have a generic solution to the equation h(t).
The nice thing about h(t) is that it can be written compactly as
h(t) = C*exp(i*(wt + theta))
using the fact that exp(i*w) = cos(w) + i*sin(w).
There is really nothing extraordinarily deep about any of this. It is merely exploiting a mathematical identity to compactly represent a common solution to a wide variety of equations.
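If you want to see that identity numerically, Python's cmath module (used here just as a calculator) makes the check a one-liner; the angle is arbitrary:

    import cmath, math

    w = 1.3   # any angle, in radians
    print(cmath.exp(1j * w))                  # roughly (0.2675+0.9636j)
    print(complex(math.cos(w), math.sin(w)))  # same value: exp(i*w) = cos(w) + i*sin(w)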
Well, for the programmer:
    class complex {
    public:
        double real;
        double imaginary;

        complex(double a_real) : real(a_real), imaginary(0.0) { }
        complex(double a_real, double a_imaginary) : real(a_real), imaginary(a_imaginary) { }

        complex operator+(const complex &other) const {
            return complex(
                real + other.real,
                imaginary + other.imaginary);
        }

        complex operator*(const complex &other) const {
            // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
            return complex(
                real * other.real - imaginary * other.imaginary,
                real * other.imaginary + imaginary * other.real);
        }

        bool operator==(const complex &other) const {
            return (real == other.real) && (imaginary == other.imaginary);
        }
    };
That's basically all there is. Complex numbers are just pairs of real numbers, for which special overloads of +, * and == get defined. And these operations really just get defined like this. Then it turns out that these pairs of numbers with these operations fit in nicely with the rest of mathematics, so they get a special name.
They are not so much numbers like in "counting", but more like in "can be manipulated with +, -, *, ... and don't cause problems when mixed with 'conventional' numbers". They are important because they fill the holes left by real numbers, like that there's no number that has a square of -1. Now you have complex(0, 1) * complex(0, 1) == -1.0 which is a helpful notation, since you don't have to treat negative numbers specially anymore in these cases. (And, as it turns out, basically all other special cases are not needed anymore, when you use complex numbers)
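Python ships the same idea out of the box (its built-in complex type is essentially such a pair of floating-point numbers with these operators defined), so the check above works directly:

    z = complex(0, 1)     # the same value as the literal 1j
    print(z * z)          # (-1+0j)
    print(z * z == -1.0)  # True: it mixes with 'conventional' numbers without special cases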
If the question is "Do imaginary numbers exist?" or "How do imaginary numbers exist?" then it is not a question for a programmer. It might not even be a question for a mathematician, but rather a metaphysician or philosopher of mathematics, although a mathematician may feel the need to justify their existence in the field. It's useful to begin with a discussion of how numbers exist at all (quite a few mathematicians who have approached this question are Platonists, fyi). Some insist that imaginary numbers (as the early Whitehead did) are a practical convenience. But then, if imaginary numbers are merely a practical convenience, what does that say about mathematics? You can't just explain away imaginary numbers as a mere practical tool or a pair of real numbers without having to account for both pairs and the general consequences of them being "practical". Others insist in the existence of imaginary numbers, arguing that their non-existence would undermine physical theories that make heavy use of them (QM is knee-deep in complex Hilbert spaces). The problem is beyond the scope of this website, I believe.
If your question is much more down to earth e.g. how does one express imaginary numbers in software, then the answer above (a pair of reals, along with defined operations of them) is it.
I don't want to turn this site into math overflow, but for those who are interested: Check out "An Imaginary Tale: The Story of sqrt(-1)" by Paul J. Nahin. It talks about all the history and various applications of imaginary numbers in a fun and exciting way. That book is what made me decide to pursue a degree in mathematics when I read it 7 years ago (and I was thinking art). Great read!!
The main point is that you add numbers which you define to be solutions to quadratic equations like x^2 = -1. Name one solution to that equation i; the computation rules for i then follow from that equation.
This is similar to defining negative numbers as the solution of equations like 2 + x = 1 when you only knew positive numbers, or fractions as solutions to equations like 2x = 1 when you only knew integers.
It might be easiest to stop trying to understand how a number can be a square root of a negative number, and just carry on with the assumption that it is.
So (using the i as the square root of -1):
(3+5i)*(2-i)
= (3+5i)*2 + (3+5i)*(-i)
= 6 + 10i -3i - 5i * i
= 6 + (10 -3)*i - 5 * (-1)
= 6 + 7i + 5
= 11 + 7i
works according to the standard rules of maths (remembering that i squared equals -1 on line four).
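For anyone who wants to verify the arithmetic, Python's complex literals (which use j rather than i) give the same result:

    print((3 + 5j) * (2 - 1j))   # (11+7j), matching the hand calculation above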
An imaginary number is a real number multiplied by the imaginary unit i. i is defined as:
i == sqrt(-1)
So:
i * i == -1
Using this definition you can obtain the square root of a negative number like this:
sqrt(-3)
== sqrt(3 * -1)
== sqrt(3 * i * i) // Replace '-1' with 'i squared'
== sqrt(3) * i // Square root of 'i squared' is 'i' so move it out of sqrt()
And your final answer is the real number sqrt(3) multiplied by the imaginary unit i.
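Python's cmath module follows exactly this convention, so a quick check (again, just as a calculator) looks like:

    import cmath, math

    print(cmath.sqrt(-3))   # 1.7320508075688772j
    print(math.sqrt(3))     # 1.7320508075688772, i.e. sqrt(3) times the imaginary unit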
A short answer: Real numbers are one-dimensional, imaginary numbers add a second dimension to the equation and some weird stuff happens if you multiply...
If you're interested in finding a simple application and if you're familiar with matrices,
it's sometimes useful to use complex numbers to transform a perfectly real matrix into a triangular one in the complex space, and it makes computation on it a bit easier.
The result is of course perfectly real.
Great answers so far (really like Devin's!)
One more point:
One of the first uses of complex numbers (although they were not called that way at the time) was as an intermediate step in solving equations of the 3rd degree.
link
Again, this is purely an instrument that is used to answer real problems with real numbers having physical meaning.
In electrical engineering, the impedance Z of an inductor is jwL, where w = 2*pi*f (the angular frequency) and j (= sqrt(-1)) means it leads by 90 degrees, while for a capacitor Z = 1/(jwC) = -j/(wC), i.e. a magnitude of 1/(wC) at -90 degrees, so that it lags a simple resistor by 90 degrees.
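For example, in Python (with illustrative component values: f = 50 Hz, L = 10 mH, C = 100 uF), the phase angles come straight out of the complex arithmetic:

    import cmath, math

    f = 50.0              # Hz
    w = 2 * math.pi * f   # angular frequency
    L, C = 10e-3, 100e-6  # 10 mH inductor, 100 uF capacitor

    Z_L = 1j * w * L      # inductor impedance
    Z_C = 1 / (1j * w * C)  # capacitor impedance

    print(math.degrees(cmath.phase(Z_L)))   # +90.0 degrees (leads)
    print(math.degrees(cmath.phase(Z_C)))   # -90.0 degrees (lags)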