Generate very very large random numbers - math

How would you generate a very very large random number? I am thinking on the order of 2^10^9 (one billion bits). Any programming language -- I assume the solution would translate to other languages.
I would like a uniform distribution on [1,N].
My initial thoughts:
- You could randomly generate each digit and concatenate. Problem: even very good pseudorandom generators are likely to develop patterns with millions of digits, right?
- You could perhaps help create large random numbers by raising random numbers to random exponents. Problem: you must make the math work so that the resulting number is still random, and you should be able to compute it in a reasonable amount of time (say, an hour).
- If it helps, you could try to generate a possibly non-uniform distribution on a possibly smaller range (using the real numbers, for instance) and transform. Problem: this might be equally difficult.
Any ideas?

Generate ceil(log2(N)) random bits to get a number M, where M may be up to twice as large as N. Repeat until M is in the range [1, N].
To generate the random bits, you could use a source of true randomness, which is expensive.
Or you could use a cryptographically secure random number generator, for example AES with a random key encrypting a counter for subsequent blocks of bits (i.e. CTR mode). "Cryptographically secure" implies that there can be no noticeable patterns.
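A minimal sketch of this rejection-sampling loop in Python, assuming the standard library's secrets module as the cryptographically secure bit source:

import secrets

def uniform_random(n: int) -> int:
    """Uniform random integer on [1, n] by rejection sampling."""
    bits = n.bit_length()              # enough bits so 2**bits >= n
    while True:
        m = secrets.randbits(bits) + 1 # uniform on [1, 2**bits]
        if m <= n:
            return m                   # accepted; expected < 2 iterations

Each candidate is uniform on a power-of-two range, and rejecting the out-of-range values leaves an exactly uniform distribution on [1, N].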

It depends on what you need the data for. For most purposes, a PRNG is fast and simple. But they are not perfect. For instance, I remember hearing that Monte Carlo simulations of chaotic systems are really good at revealing the underlying pattern in a PRNG.
If that is the sort of thing that you are doing, though, there is a simple trick I learned in grad school for generating lots of random data. Take a large (preferably rapidly changing) file. (Some big data structures from the running kernel are good.) Compress it to increase the entropy. Throw away the headers. Then for good measure, encrypt the result. If you're planning to use this for cryptographic purposes (and you didn't have a perfect entropy data set to work with), then reverse it and encrypt again.
The underlying theory is simple. Information theory tells us that there is no difference between a signal with no redundancy and pure random data. So if we pick a big file (i.e. lots of signal), remove redundancy with compression, and strip the headers, we have a pretty good random signal. Encryption does a really good job at removing artifacts. However, encryption algorithms tend to work forward in blocks. So if someone could, despite everything, guess what was happening at the start of the file, that data is more easily guessable. But then reversing the file and encrypting again means that they would need to know the whole file, and our encryption, to find any pattern in the data.
The reason to pick a rapidly changing piece of data is that if you run out of data and want to generate more, you can go back to the same source again. Even small changes will, after that process, turn into an essentially uncorrelated random data set.
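A rough Python sketch of that compress-strip-encrypt-reverse-encrypt pipeline, assuming the third-party cryptography package for the AES-CTR steps (the input path is whatever large, changing file you picked):

import os
import zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr(data: bytes) -> bytes:
    # Fresh random key and nonce per pass; we only want the mixing effect.
    enc = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()
    return enc.update(data) + enc.finalize()

def entropy_from_file(path: str) -> bytes:
    with open(path, "rb") as f:
        raw = f.read()
    body = zlib.compress(raw, 9)[2:-4]  # strip 2-byte zlib header, 4-byte checksum
    once = aes_ctr(body)                # encrypt forward
    return aes_ctr(once[::-1])          # reverse, then encrypt again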

NTL: A Library for doing Number Theory
This was recommended by my Coding Theory and Cryptography teacher... so I guess it does the work right, and it's pretty easy to use.
RandomBnd, RandomBits, RandomLen -- routines for generating pseudo-random numbers
ZZ RandomLen_ZZ(long l);
// ZZ = pseudo-random number with precisely l bits,
// or 0 if l <= 0.

If you have a random number generator that produces random numbers of X bits at a time, then concatenating blocks [X1, X2, ... Xn] creates the N-bit number you want. As long as each X is random, I don't see why the large number wouldn't be random as well, for all intents and purposes. And if the standard C rand() function is not secure enough, I'm sure there are plenty of other libraries (like the ones mentioned in this thread) whose pseudo-random numbers are "more random".
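A small sketch of that concatenation in Python; random.getrandbits is Mersenne Twister output underneath, so this is for non-cryptographic use only:

import random

def random_n_bits(n_bits: int, block: int = 32) -> int:
    # Concatenate fixed-size random blocks into one big integer.
    value = 0
    for _ in range(n_bits // block):
        value = (value << block) | random.getrandbits(block)
    if n_bits % block:
        value = (value << (n_bits % block)) | random.getrandbits(n_bits % block)
    return value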

even very good pseudorandom generators are likely to develop patterns with millions of digits, right?
From the Wikipedia article on pseudo-random number generation:
A PRNG can be started from an arbitrary starting state using a seed state. It will always produce the same sequence thereafter when initialized with that state. The maximum length of the sequence before it begins to repeat is determined by the size of the state, measured in bits. However, since the length of the maximum period potentially doubles with each bit of 'state' added, it is easy to build PRNGs with periods long enough for many practical applications.
You could perhaps help create large random numbers by raising random numbers to random exponents
I assume you're suggesting something like populating the values of a scientific notation with random values?
E.g.: 1.58901231 x 10^5819203489
The problem with this is that your distribution is going to be logarithmic (or is that exponential? :) - same difference, it isn't even). You will never get a value that has the millionth digit set, yet contains a digit in the one's column.
you could try to generate a possibly non-uniform distribution on a possibly smaller range (using the real numbers, for instance) and transform
Not sure I understand this. Sounds like the same thing as the exponential solution, with the same problems. If you're talking about multiplying by a constant, then you'll get a lumpy distribution instead of a logarithmic (exponential?) one.
Suggested Solution
If you just need really big pseudo-random values with a good distribution, use a PRNG algorithm with a larger state. The period of a PRNG is typically exponential in the number of state bits (the Mersenne Twister, for example, has a period of 2^19937 - 1), so it doesn't take that much state to cover even a really large number.
From there, you can use your first solution:
You could randomly generate each digit and concatenate
Although I'd suggest that you use the full range of values returned by your PRNG (possibly 2^31 or 2^32), and populate a byte array with those values, splitting it up as necessary. Otherwise you might be throwing away a lot of bits of randomness. Also, scaling your values to a range (or using modulo) can easily screw up your distribution, so there's another reason to try to keep the max number of bits your PRNG can return. Be careful to pack your byte array full of the bits returned, though, or you'll again introduce lumpiness to your distribution.
The problem with these solutions, though, is how to fill that (larger than normal) seed state with random-enough values. You might be able to use standard-size seeds (populated via time or GUID-style population), and populate your big-PRNG state with values from the smaller PRNG. This might work if it isn't mission-critical how well distributed your numbers are.
If you need truly cryptographically secure random values, the only real way to do it is use a natural form of randomness, such as that at http://www.random.org/. The disadvantages of natural randomness are availability, and the fact that many natural-random devices take a while to generate new entropy, so generating large amounts of data might be really slow.
You can also use a hybrid and be safe - natural-random seeds only (to avoid the slowness of generation), and PRNG for the rest of it. Re-seed periodically.
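A minimal sketch of that hybrid approach in Python, assuming os.urandom as the natural-entropy seed source, and packing full 32-bit PRNG outputs into a byte buffer so no bits of randomness are thrown away:

import os
import random

rng = random.Random(int.from_bytes(os.urandom(32), "big"))  # entropy-seeded PRNG

def random_big_number(n_bits: int) -> int:
    # Pack complete 32-bit outputs, then trim the excess bits at the end.
    n_bytes = (n_bits + 7) // 8
    buf = bytearray()
    while len(buf) < n_bytes:
        buf += rng.getrandbits(32).to_bytes(4, "big")
    return int.from_bytes(buf[:n_bytes], "big") >> (8 * n_bytes - n_bits)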

Related

Series of numbers with minimized risk of collision

I want to generate some numbers which should share as few common bit patterns as possible, so that collisions are as rare as possible. Until now it's been "simple" hashing with a given number of output bits. However, there is another constraint: I want to minimize the risk that, if you take one number and change it by toggling a small number of bits, you end up with another number you've just generated. Note: I don't want it to be impossible or something, I want to minimize the risk!
How do I calculate that probability for a list of n numbers, where each number has m bits? And, of course, what would be a suitable method to generate those numbers? Any good articles about this?
To answer this question precisely, you need to say what exactly you mean by "collision", and what you mean by "generate". If you just want the strings to be far apart from each other in Hamming distance, you could hope to make an optimal, deterministic set of such strings. It is true that random strings will have this property with high probability, so you could use random strings instead.
When you say
Note: I don't want it to be impossible or something, I want to minimize the risk!
this sounds like an XY problem. If some outcome is the "bad thing" then why do you want it to be possible, but just low probability? Shouldn't you want it not to happen at all?
In short I think you should look up the term "error correcting code". The codewords of any good error correcting code, with any parameters that you feel like, will have the minimal risk of collision in the presence of random noise, for that number of code words of that length, and they can typically be generated very easily using matrix multiplication.
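As a toy illustration of generating codewords by matrix multiplication, here is the classic [7,4] Hamming code, assuming NumPy; a real application would pick a code with whatever parameters it needs:

import itertools
import numpy as np

# Generator matrix of the [7,4] Hamming code; any two distinct codewords
# differ in at least 3 bit positions (minimum Hamming distance 3).
G = np.array([
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
])

# Codeword = message vector times G, mod 2: 16 codewords of 7 bits each.
codewords = [tuple(np.array(m) @ G % 2) for m in itertools.product([0, 1], repeat=4)]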

what options are there for representing numbers with more than 2^81 digits?

I came across an interesting math problem that would require me to do some arithmetic with numbers that have more than 2^81 digits. I know that it's impossible to represent a number this large with a system where there is one memory unit for each digit, but wondered if there were any ways around this.
My initial thought was to use an extremely large base instead of base 10 (decimal). After some thought I believe (but can't verify) that the optimal base would be the square root of the number of digits (so for a number with 2^81 digits you'd use base 2^40-ish), which is an improvement but doesn't scale well and still isn't really practical.
So what options do I have? I know of many arbitrary precision libraries, but are there any that scale to support this sort of arithmetic?
Thanks o7
EDIT: After thinking some more, I realize I may be completely wrong about "the optimal base would be the square root of the number of digits", but a) that's why I'm asking and b) I'm too tired to remember my initial reasoning for the assumption.
EDIT 2: 1,000,000 in base ten = F4240 in base 16 = 3641100 in base 8. In base 16 you need 20 bits to store the number; in base 8 you need 21. So it would seem that by increasing the base you decrease the total number of bits needed. (Again, this could be wrong.)
This is really a compression problem pretending to be an arithmetic problem. What you can do with such a large number depends entirely on its Kolmogorov complexity. If you're required to do computations on such a large number, it's obviously not going to arrive as 2^81 decimal digits; the Kolmogorov complexity would be too high in that case and you couldn't even finish reading the input before the sun goes out. The best way to deal with such a number is via the delayed evaluation and symbolic rational types that a language like Scheme provides. This way a program may be able to answer some questions about the result of computations on the number without actually having to write all those digits out to memory.
I think you should just use scientific notation. You will lose precision, but you cannot store numbers that large without losing precision, because storing 2^81 digits would require more than 10^24 bits (about a thousand billion terabytes), which is much more than you can have nowadays.
that have more than 2^81 digits
A non-fractional number with 2^81 bits will take 3*10^11 terabytes of data. Per number.
That's assuming you want every single digit and data isn't compressible.
You could attempt to compress the data, storing it in some kind of sparse array that allocates memory only for non-zero elements, but that doesn't guarantee the data will fit anywhere.
Such precision is useless and impossible to handle on modern hardware. 2^81 bits would take an insane amount of time simply to walk through the number (9584 trillion years, assuming one byte per second), never mind multiplication or division. I also can't think of any problem that would require precision like that.
Your only option is to reduce precision to the first N significant digits and use floating-point numbers. Since the data won't fit into a double, you'll have to use a bignum library with floating-point support, which provides extremely large floating-point numbers. Since the exponent 2^81 itself fits easily in memory, you can store the beginning of the number using a very big float.
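For instance, a sketch with the mpmath library (one arbitrary-precision float package among many), keeping 50 significant digits while the exponent runs into the hundreds of millions:

from mpmath import mp, mpf

mp.dps = 50              # keep 50 significant decimal digits
x = mpf(2) ** (10**9)    # a billion-bit number, stored approximately
print(x)                 # short mantissa + huge exponent, tiny memory footprint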
1,000,000 in base ten
Regardless of your base, a positive number will take at least floor(log2(number))+1 bits to store. If the base is not a power of 2, it will take even more than floor(log2(number))+1 bits. Changing the numeric base won't reduce the number of required bits.
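You can check this against EDIT 2 above with a couple of lines of Python:

n = 1_000_000
print(n.bit_length())         # 20: the floor(log2(n))+1 minimum
print((len(hex(n)) - 2) * 4)  # 5 hex digits * 4 bits   = 20
print((len(oct(n)) - 2) * 3)  # 7 octal digits * 3 bits = 21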

Math question regarding Python's uuid4

I'm not great with statistical mathematics, etc. I've been wondering, if I use the following:
import uuid
unique_str = str(uuid.uuid4())
double_str = ''.join([str(uuid.uuid4()), str(uuid.uuid4())])
Is double_str squared as unique as unique_str, or just some amount more unique? Also, is there any negative implication in doing something like this (like some birthday problem situation, etc.)? This may sound ignorant, but I simply would not know, as my math spans algebra 2 at best.
The uuid4 function returns a UUID created from 16 random bytes (122 of the 128 bits remain random once the version and variant bits are set), and it is extremely unlikely to produce a collision, to the point at which you probably shouldn't even worry about it.
If for some reason uuid4 does produce a duplicate, it is far more likely to be a programming error, such as a failure to correctly initialize the random number generator, than genuine bad luck. In which case the approach you are using will not make it any better: an incorrectly initialized random number generator can still produce duplicates even with your approach.
If you use the default implementation random.seed(None), you can see in the source that only 16 bytes of randomness are used to initialize the random number generator, so this is an issue you would have to solve first. Also, if the OS doesn't provide a source of randomness, the system time will be used, which is not very random at all.
But ignoring these practical issues, you are basically along the right lines. To use a mathematical approach, we first have to define what you mean by "uniqueness". I think a reasonable definition is the number of ids you need to generate before the probability of generating a duplicate exceeds some probability p. An approximate formula for this is:

n ≈ sqrt(2 * d * ln(1 / (1 - p)))

where d is 2**(16*8) for a single randomly generated uuid and 2**(16*2*8) with your suggested approach. The square root in the formula is indeed due to the Birthday Paradox. But if you work it out, you can see that if you square the range of values d while keeping p constant, then you also square n.
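Plugging in the numbers with a hypothetical helper (using 122 random bits per uuid4 once the fixed version/variant bits are excluded):

import math

def ids_before_collision(random_bits: int, p: float = 0.5) -> float:
    # Birthday-bound estimate: how many ids you can draw before the
    # probability of any duplicate exceeds p.
    d = 2.0 ** random_bits
    return math.sqrt(2 * d * math.log(1 / (1 - p)))

print(ids_before_collision(122))      # one uuid4:         ~2.7e18 ids
print(ids_before_collision(2 * 122))  # two concatenated:  ~6.3e36 ids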
Since uuid4 is based on a pseudo-random number generator, calling it twice is not going to square the amount of "uniqueness" (and may not even add any uniqueness at all).
See also When should I use uuid.uuid1() vs. uuid.uuid4() in python?
It depends on the random number generator, but it's almost squared uniqueness.

Quantum Computing and Encryption Breaking

I read a while back that quantum computers can break most types of hashing and encryption in use today in a very short amount of time (I believe it was mere minutes). How is it possible? I've tried reading articles about it, but I get lost at "a quantum bit can be 1, 0, or something else". Can someone explain how this relates to cracking such algorithms in plain English without all the fancy maths?
Preamble: Quantum computers are strange beasts that we really haven't yet tamed to the point of usefulness. The theory that underpins them is abstract and mathematical, so any discussion of how they can be more efficient than classical computers will inevitably be long and involved. You'll need at least an undergraduate understanding of linear algebra and quantum mechanics to understand the details, but I'll try to convey my limited understanding!
The basic premise of quantum computation is quantum superposition. The idea is that a quantum system (such as a quantum bit, or qubit, the quantum analogue of a normal bit) can, as you say, exist not only in the 0 and 1 states (called the computational basis states of the system), but also in any combination of the two (so that each has an amplitude associated with it). When the system is observed by someone, the qubit's state collapses into one of its basis states (you may have heard of the Schrödinger's cat thought experiment, which is related to this).
Because of this, a register of n qubits has 2^n basis states of its own (these are the states that you could observe the register being in; imagine a classical n-bit integer). Since the register can exist in a superposition of all these states at once, it is possible to apply a computation to all 2^n register states rather than just one of them. This is called quantum parallelism.
Because of this property of quantum computers, it may seem like they're a silver bullet that can solve any problem exponentially faster than a classical computer. But it's not that simple: the problem is that once you observe the result of your computation, it collapses (as I mentioned above) into the result of just one of the computations – and you lose all of the others.
The field of quantum computation/algorithms is all about trying to work around this problem by manipulating quantum phenomena to extract information in fewer operations than would be possible on a classical computer. It turns out that it's very difficult to contrive a "quantum algorithm" that is faster than any possible classical counterpart.
The example you ask about is that of quantum cryptanalysis. It's thought that quantum computers might be able to "break" certain encryption algorithms: specifically, the RSA algorithm, which relies on the difficulty of finding the prime factors of very large integers. The algorithm which allows for this is called Shor's algorithm, which can factor integers with polynomial time complexity. By contrast the best classical algorithm for the problem has (almost) exponential time complexity, and the problem is hence considered "intractable".
If you want a deeper understanding of this, get a few books on linear algebra and quantum mechanics and get comfortable. If you want some clarification, I'll see what I can do!
Aside: to better understand the idea of quantum superposition, think in terms of probabilities. Imagine you flip a coin and catch it on your hand, covered so that you can't see it. As a very tenuous analogy, the coin can be thought of as being in a superposition of the heads and tails "states": each one has a probability of 0.5 (and, naturally, since there are two states, these probabilities add up to 1). When you take your hand away and observe the coin directly, it collapses into either the heads state or the tails state, and so the probability of this state becomes 1, while the other becomes 0. One way to think about it, I suppose, is a set of scales that is balanced until observation, at which point it tips to one side as our knowledge of the system increases and one state becomes the "real" state.
Of course, we don't think of the coin as a quantum system: for all practical purposes, the coin has a definite state, even if we can't see it. For genuine quantum systems, however (such as an individual particle trapped in a box), we can't think about it in this way. Under the conventional interpretation of quantum mechanics, the particle fundamentally has no definite position, but exists in all possible positions at once. Only upon observation is its position constrained in space (though only to a limited degree; cf. uncertainty principle), and even this is purely random and determined only by probability.
By the way, quantum systems are not restricted to having just two observable states (those that do are called two-level systems). Some have a large but finite number, some have a countably infinite number (such as a "particle in a box" or a harmonic oscillator), and some even have an uncountably infinite number (such as a free particle's position, which isn't constrained to individual points in space).
It's highly theoretical at this point. Quantum Bits might offer the capability to break encryption, but clearly it's not at that point yet.
At the Quantum Level, the laws that govern behavior are different than in the macro level.
To answer your question, you first need to understand how encryption works.
At a basic level, RSA encryption is built on the product of two extremely large prime numbers. This super large result is divisible only by 1, itself, and those two primes.
One way to break the encryption is to brute-force guess the two prime numbers by doing prime factorization.
This attack is slow, and is thwarted by picking larger and larger prime numbers. You hear of key sizes of 40 bits, 56 bits, 128 bits, and now 256, 512 bits and beyond. Those sizes correspond to the size of the number.
The brute-force algorithm (in simplified terms) might look like

for (uint64_t i = 3; i < UINT64_MAX; i++)
{
    if (key % i == 0)
    {
        // i is a factor of key (a prime factor if i is prime)
    }
}
So you want to brute-force try prime numbers; well, that is going to take a while with a single computer. So you might try grouping a bunch of computers together to divide and conquer. That works, but is still slow for very large key sizes.
Quantum bits address this by being both 0 and 1 at the same time. So say you have 3 quantum bits (no small feat, mind you).
With 3 qubits, your program can hold the values 0-7 simultaneously (000, 001, 010, 011, etc.), which includes the prime numbers 3, 5, and 7 at the same time.
So, using the simple algorithm above, instead of increasing i by 1 each time, you can just divide once and check 0, 1, 2, 3, 4, 5, 6, 7 all at the same time.
Of course quantum bits aren't to that point yet; there is still lots of work to be done in the field; but this should give you an idea that if we could program using quanta, how we might go about cracking encryption.
The Wikipedia article does a very good job of explaining this.
In short, if you have N bits, your quantum computer can be in 2^N states at the same time. Similar conceptually to having 2^N CPUs processing with traditional bits (though not exactly the same).
A quantum computer can implement Shor's algorithm, which can quickly perform prime factorization. Encryption systems are built on the assumption that the product of two large primes cannot be factored in a reasonable amount of time on a classical computer.
Almost all our public-key encryption (e.g. RSA) is based solely on math, relying on the difficulty of factorization or discrete logarithms. Both of these will be efficiently broken using quantum computers (though even after a bachelor's in CS and math, and having taken several classes on quantum mechanics, I still don't understand the algorithm).
However, hashing algorithms (e.g. SHA-2) and symmetric-key encryption (e.g. AES), which are based mostly on diffusion and confusion, are still secure.
In the most basic terms, a normal, non-quantum computer works by operating on bits (states of on or off) using Boolean logic. You do this very fast for lots and lots of bits and you can solve any problem in a class of problems that are computable.
However, there are "speed limits", namely something called computational complexity. This, in layman's terms, means that for a given algorithm you know that the time it takes to run (and the memory space required) has a minimum bound. For example, an algorithm that is O(n^2) means that for a data size of n it will require on the order of n^2 time to run.
However, this kind of goes out the window when we have qubits (quantum bits), because operations on qubits can have "in between" values. Algorithms that would have very high computational complexity (like factoring huge numbers, the key to cracking many encryption algorithms) can be done with much, much lower computational complexity. This is the reason quantum computing will be able to crack encrypted streams orders of magnitude quicker than normal computers.
First of all, quantum computing is still barely out of the theoretical stage. Lots of research is going on and a few experimental quantum cells and circuits, but a "quantum computer" does not yet exist.
Second, read the wikipedia article: http://en.wikipedia.org/wiki/Quantum_computer
In particular, "In general a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously (this compares to a normal computer that can only be in one of these 2^n states at any one time). "
What makes cryptography secure is the use of encryption keys that are very long numbers that would take a very, very long time to factor into their constituent primes, and keys that are long enough that brute-force attempts to try every possible key value would also take too long to complete.
Since quantum computing can (theoretically) represent a lot of states in a small number of qubit cells, and operate on all of those states simultaneously, it seems there is the potential to use quantum computing to perform brute-force try-all-possible-key-values in a very short amount of time.
If such a thing is possible, it could be the end of cryptography as we know it.
Quantum computers etc. are all lies. I don't believe these science-fiction magazines.
In fact, the RSA system is based on two prime numbers and their multiplication:
p1, p2 are huge primes, and p1*p2 = N, the modulus.
The RSA system works like this:
choose a prime number, maybe small; it's E, the public exponent
(p1-1)*(p2-1) = R
find a number D that makes E*D = 1 mod(R)
we share (E, N) publicly as the public key
we securely save (D, N) as the private key.
To break this RSA system, a cracker needs to find the prime factors of N.
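Those steps, as a toy Python sketch (textbook RSA with tiny primes; real keys are of course thousands of bits):

p1, p2 = 61, 53            # "huge primes" -- tiny here for illustration
N = p1 * p2                # public modulus
R = (p1 - 1) * (p2 - 1)    # (p1-1)*(p2-1)
E = 17                     # small public exponent, coprime to R
D = pow(E, -1, R)          # D such that E*D = 1 mod(R)  (Python 3.8+)

msg = 42
cipher = pow(msg, E, N)           # encrypt with the public key (E, N)
assert pow(cipher, D, N) == msg   # decrypt with the private key (D, N)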
The mass of the universe is on the order of 10^53 kg, and the electron mass is 9.10938291 × 10^-31 kilograms.
If we divided the universe into electrons, we could create about 10^84 electrons.
Electrons move slower than light; their operating frequency could be at most about 10^26 per second.
If anybody produced electron-sized parallel RSA prime-factor finders from all the mass of the universe,
the whole universe could test (10^84)*(10^26) = 10^110 numbers per second.
RSA has practically limitless bits of alternative prime numbers; take 4096 bits:
4096-bit RSA has about 10^600 possible prime factors to brute-force.
So your universe-mass quantum solver would need to keep testing for about 10^500 years.
RSA vs. the universe-mass quantum computer:
1 - 0
Maybe a quantum computer can break 64/128-bit passwords, because a 128-bit password has only about 10^39 possible brute-force candidates.
This circuit is a good start to understanding how qubit parallelism works. The 2-qubit input is on the left side: the top qubit is x and the bottom qubit is y. The y qubit is 0 at the input, just like a normal bit. The x qubit, on the other hand, is in superposition at the input. y (+) f(x) stands here for addition modulo 2, meaning 1+1=0 and 0+1=1+0=1. The interesting part is that since the x qubit is in superposition, f(x) is f(0) and f(1) at the same time, and we can evaluate the function f for all states simultaneously without using any (time-consuming) loops. With enough qubits we can branch this into arbitrarily complicated circuits.
Even more bizarre, IMO, is Grover's algorithm. As input we have an unsorted array of integers of length n. What is the expected runtime of an algorithm that finds the minimum value in this array? Classically, we have to check every one of the n elements, resulting in an expected runtime of n. Not so for quantum computers: on a quantum computer we can solve this in an expected runtime on the order of sqrt(n), which means we don't even have to check every element to find the guaranteed solution...

Pseudo-Random-Number-Generator from a computable normal number

Isn't it easily possible to construct a PRNG in such a fashion? Why is it not done?
That is, as far as I know we could simply have a PRNG that takes a seed n. When you ask for a random bit, it takes the nth digit of the binary expansion of the computable normal number, and increments n.
My first thought was that perhaps we hadn't found a computable normal number, but we have. The remaining thought is that there is a good reason not to: either there's some property of PRNGs that I'm not familiar with that such a method would lack, or it would be impractical somehow, or it is otherwise outstripped by other methods.
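For concreteness, here is the scheme as a Python sketch, using the binary Champernowne constant 0.(1)(10)(11)(100)(101)... (the concatenation of the positive integers in binary, which is known to be normal in base 2):

from itertools import count, islice

def champernowne_bits():
    # Bits of the binary Champernowne constant: 1 10 11 100 101 110 111 ...
    for n in count(1):
        for b in bin(n)[2:]:
            yield int(b)

def normal_number_prng(seed: int, k: int):
    # The questioner's scheme: start at digit `seed`, emit the next k bits.
    return list(islice(champernowne_bits(), seed, seed + k))

print(normal_number_prng(100, 16))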
That would make predicting the output really simple.
Say, for example, you generate the integer 0x54a30b7f. If you have 4 GiB of pi (or random noise or an actual normal number), chances are there's only going to be one (or maybe a handful of) occurrences of that particular integer, and I can predict with reasonably high probability all future numbers. This is a serious problem in the case of cryptographically strong PRNGs. If instead of a simple sequential scan you use some function, I just have to follow the function, which, if it is difficult enough to follow, turns into a PRNG in its own right.
If you are not concerned about the cryptographic strength of your generator, then there are much more compact ways of generating random numbers. Mersenne Twister, for example, has a much larger period without requiring a 4GiB lookup table.
