I'm not studying cryptography yet, and this exercise, in the form it was delivered as homework, was more of an exercise in reading composite functions and the like. Either way, I took a look at part of the source code and didn't understand this.
For RSA encryption, the source code manipulated the string as follows. The message is hashed into a list of integers (int1, int2, int3, ...), and then:
1) Encrypt int1.
2) Subtract the result from int2 (int2 - e(int1)).
3) Take the result modulo the modulus key (n).
4) RSA-transform with a key.
However, the RSA decryption method is done by:
1) RSA_transform
2) Result is added
3) Modulo with n
The part that puzzles me about the RSA decryption is the need for the modulo after the addition and the RSA_transform. If it's needed, shouldn't it be applied in the reverse order of how the chain of operations was carried out in RSA encryption?
Also, an "invert_modulo" function was provided in the source code. I originally believed this to be a key to decrypting the message, but it wasn't so. What could "invert_modulo" be used for?
I cannot understand the first part of your question, as the steps used to hash the string are not clear, and I don't get the third step of your encryption either. As for the second question: invert_modulo is the modular multiplicative inverse.
While working with modular arithmetic we always want our answer to lie in the integer range 0 to M-1 (where M is the number we take the modulo with). Simple operations like addition, multiplication and subtraction are easy to perform: (a + b) MOD M, for example, is well defined under the constraints of modular arithmetic.
The problem arises when we try to divide: (a / b) MOD M.
As you can see, a/b may not always give an integer, so (a / b) does not necessarily lie in the integer range 0 to M-1. To overcome this, we instead find an inverse of b and multiply a by it: (a * b_inverse) MOD M.
b_inverse is defined by: (b * b_inverse) MOD M = 1.
That is, b_inverse is the number in the range 0 to M-1 which, when multiplied by b modulo M, yields 1.
Note: the modular inverse of some numbers does not exist. You can check by taking the GCD of M and the number concerned (in our example, b): if the GCD is not equal to 1, the modular inverse does not exist.
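Here is a small sketch of one standard way to compute b_inverse, via the extended Euclidean algorithm (on Python 3.8+ you can also get it directly as pow(b, -1, M)):

def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(b, M):
    g, x, _ = extended_gcd(b, M)
    if g != 1:
        raise ValueError("no inverse: gcd(b, M) != 1")
    return x % M

# "Divide" a by b modulo M by multiplying with the inverse instead:
M = 17
a, b = 10, 4
b_inverse = mod_inverse(b, M)     # 13, since (4 * 13) % 17 == 1
print((a * b_inverse) % M)        # 11, i.e. (a / b) MOD M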
I've learned the theory of public-key encryption, but I'm missing the connection to the physical world. E.g.:
I've been told that good RSA encryption should rely on prime numbers with 300 decimal digits, but why? Who came up with this number? How long would it take to break such encryption (statistics for different machines)?
I've tried Google, but couldn't find what I wanted. Anyone?
Thanks.
The key to asymmetric cryptography is to have an asymmetric function which allows decrypting a message encrypted with one key, without allowing the other key to be found. In RSA, the function used is based on the factorization of products of prime numbers; however, it is not the only option (elliptic curves are another one, for example).
So, basically, you need two prime numbers to generate an RSA key pair. If you are able to factorize the public modulus and find these prime numbers, you will then be able to find the private key. The whole security of RSA rests on the fact that it is not easy to factorize large composite numbers, which is why the length of the key strongly affects the robustness of the RSA algorithm.
There are competitions to factorize large numbers every year, with nice prizes. The largest RSA modulus factored so far was 768 bits, in 2009. That's why keys of at least 2048 bits should be used now.
As usual, Wikipedia is a good reference on RSA.
All public-key algorithms are based on trapdoor functions, that is, mathematical constructs that are "easy" to compute in one way, but "hard" to reverse unless you also have some additional information (used as the private key), at which point the reverse becomes "easy" too.
"Easy" and "hard" are just qualitative adjectives that are always more formally defined in terms of computational complexity. "Hard" very often refers to computations that cannot be solved in polynomial time O(nx) for some fixed x and where n is the input data.
In the case of RSA, the "easy" function is the modular exponentiation C = Me mod N where the factors of N are kept secret. The "hard" problem is to find the e-th root of C (that is, M). Of course, "hard" does not mean that it is always hard, but (intuitively) that increasing the size of N by a certain factor increases the complexity by a much larger factor.
The sizes of the modulus which are recommended (2048 bits, or 617 decimal digits) relate to the availability of computation power at present time, so that if you stick to them you are assured that it will be extremely expensive for the attacker to break it. For more details, I should refer you to a brilliant answer on cryptography.SE (go and upvote :-)).
Finally, in order to have a trapdoor, N is built to be a composite number. In theory, for improved performance, N may have more than 2 factors, but the general security rule is that all factors must be balanced and have roughly the same size. That means that if you have K factors, and N is B bits long, each factor is roughly B/K bits long.
The problem to solve is not the same as the integer factorization problem, though. The two are related in that if you manage to factor N you can compute the private key by re-doing what the party that generated the key did. Typically, the exponent e used is very small (e.g. 3); it cannot be excluded that someday somebody devises an algorithm to compute e-th roots without factoring N.
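To make the trapdoor concrete, here is a toy example with the classic textbook-sized numbers (nothing like real key sizes; pow(e, -1, ...) needs Python 3.8+):

p, q = 61, 53                       # the secret factors of N
N = p * q                           # 3233, published
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent; computing it used the factors

M = 65
C = pow(M, e, N)                    # "easy" direction: modular exponentiation
assert pow(C, d, N) == M            # also easy, but only with the trapdoor d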
EDIT: Corrected the number of decimal digits for the modulus of a 2048-bit RSA key.
RSA uses the idea of one-way math functions, so that it's easy to encrypt and decrypt if you have the key, but hard (as in it takes lots and lots of CPU cycles) to decrypt if you don't have the key. Even before they thought of using prime numbers, mathematicians identified the need for a one-way function.
The first method they hit upon was the idea that if your "key" is a prime number, and your message is another number, then you can encrypt by multiplying the two together. Someone with the key can easily divide out the prime number and get the message, but for someone without it, recovering the prime-number key from the product is hard.
I need to multiply long integers with an arbitrary BASE of the digits using FFT in integer rings. Operands are always of length n = 2^k for some k, and the convolution vector has 2n components, therefore I need a (2n)-th primitive root of unity.
I'm not particularly concerned with efficiency issues, so I don't want to use the Schönhage-Strassen algorithm: just computing the basic convolution, then some carries, and nothing else.
Even though it seems simple to many mathematicians, my understanding of algebra is really bad, so I have lots of questions:
What are the essential differences or nuances between performing the FFT in integer rings modulo 2^n + 1 (perhaps composite) and in integer FIELDS modulo some prime p?
I ask this because 2 is a (2n)-th primitive root of unity in such a ring, since 2^n == -1 (mod 2^n + 1). In contrast, an integer field would require me to search for such a primitive root.
But maybe there are other nuances which will prevent me from using rings of such a form for the FFT.
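As a quick sanity check of that claim for a small n (my own snippet):

n = 8
p = 2**n + 1                   # 257; that it happens to be prime is incidental
assert pow(2, n, p) == p - 1   # 2^n == -1 (mod 2^n + 1)
assert pow(2, 2 * n, p) == 1   # hence 2 has order 2n: a (2n)-th root of unity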
If I picked integer rings, what are sufficient conditions for the existence of a 2^n-th root of unity in such a ring?
All other 2^k-th roots of unity of smaller order could be obtained by squaring this root, right?..
What essential restrictions are imposed on the multiplication by the modulus of the ring? Maybe on the operands' length, maybe on the numeric base, maybe even on the numeric types used for multiplication.
I suspect that there may be some loss of information if the coefficients of the convolution are reduced by the modulo operation. Is it true and why?.. What are general conditions that will allow me to avoid this?
Is there any possibility that just primitive-typed dynamic lists (i.e. long) will suffice for FFT vectors, their product and the convolution vector? Or should I transform the coefficients to BigInteger just in case (and what is the "case" when I really should)?
If a general answer to these questions takes too long, I would be particularly satisfied with an answer for the following special case. I've found a table of primitive roots of unity of order up to 2^30 in the field Z_70383776563201:
http://people.cis.ksu.edu/~rhowell/calculator/roots.html
So if I use 2^30th root of unity to multiply numbers of length 2^29, what are the precision/algorithmic/efficiency nuances I should consider?..
Thank you so much in advance!
I am going to award a bounty to the best answer - please consider helping out with some examples.
First, an arithmetic clue about your modulus: 70383776563201 = 1 + 65550 * 2^30, and that long number is prime. There's a lot of insight into your modulus on the page How the FFT constants were found.
Here's a fact of group theory you should know. The multiplicative group of integers modulo N is the product of cyclic groups whose orders are determined by the prime factors of N. When N is prime, there's one cycle. The orders of the elements in such a cyclic group are related to the prime factors of N - 1: every element order divides N - 1. Here, 70383776563201 - 1 = 2^31 * 3 * 5^2 * 19 * 23, and the divisors of that number give the possible orders of elements.
(1) You don't necessarily need a primitive root, you need one whose order is at least large enough. There are probabilistic algorithms for finding elements of "high" order; they're used in cryptography to ensure you have strong parameters for keying material. Numbers of the form 2^n + 1 specifically have received a lot of factoring attention, and you can look up the results.
(2) The sufficient (and necessary) condition for an element of order 2^n is illustrated by the example modulus: some prime factor p of the modulus has to have the property that 2^n | p - 1.
(3) Loss of information only happens when elements aren't multiplicatively invertible, which isn't the case for the cyclic multiplicative group of a prime modulus. If you work in a modular ring with a composite modulus, some elements are not so invertible.
(4) If you want to use arrays of long, you'll be essentially rewriting your big-integer library.
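To make (1) and (2) concrete, here is a small sketch (my own code, relying on the factorization above) that finds an element of order 2^30 modulo the prime from the linked table:

p = 70383776563201            # prime: p - 1 = 65550 * 2^30 = 2^31 * 3 * 5^2 * 19 * 23
order = 1 << 30               # we want a primitive 2^30-th root of unity

def find_root_of_unity(p, order):
    # Assumes `order` is a power of two dividing p - 1 (true here).
    cofactor = (p - 1) // order
    for g in range(2, p):
        w = pow(g, cofactor, p)          # the order of w divides `order`
        if pow(w, order // 2, p) != 1:   # ...and is then exactly `order`
            return w

w = find_root_of_unity(p, order)
assert pow(w, order, p) == 1
assert pow(w, order // 2, p) == p - 1    # w^(order/2) == -1 (mod p)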
Suppose we need to calculate the product of two n-bit integers, where
n = 2^30;
m = 2*n; p = 2^n + 1
Now,
w = 2, x = [w^0, w^1, ..., w^{m-1}] (mod p).
The issue is that each x[i] is itself an n-bit number, so we cannot compute w*a_i in O(1) time.
Newbie question:
I am studying ANSI X9.31-1998 to implement the PRNG described in section 2.4, and I am not able to properly understand the representation of the variables used, like "ede".
Is "ede" an operation or a variable?
Why is a * used before X? Is it some kind of standard notation?
Is there any specific document which describes all these?
"A.2.4 Generating Pseudo Random Numbers Using the DEA
Let ede*X(Y) represent the DEA multiple encryption of Y under the key *X.
Let *K be a DEA key pair reserved only for the generation of pseudo random numbers, let V be a 64-bit seed value which is also kept secret, and let XOR be the exclusive-or operator. Let DT be a date/time vector which is updated on each iteration. I is an intermediate value. A 64-bit vector R is generated as follows:
I = ede*K(DT)
R = ede*K(I XOR V) and a new V is generated by V = ede*K(R XOR I).
Successive values of R may be concatenated to produce a pseudo random number of the desired length."
EDE means Encrypt-Decrypt-Encrypt, the usual construction when using Triple DES.
The use of * looks a lot like the subscripting that is common in cryptography articles:
EDE_X(Y), meaning run the EDE algorithm using X as the key.
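For the record, a rough sketch of the A.2.4 step in Python, assuming pycryptodome as the 3DES provider (the key and seed below are random placeholders, not values from the standard):

from os import urandom
from Crypto.Cipher import DES3

def ede(K, block):
    # ede*K(Y): one 64-bit block of triple-DES (Encrypt-Decrypt-Encrypt)
    return DES3.new(K, DES3.MODE_ECB).encrypt(block)

def xor64(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def x931_step(K, V, DT):
    # One iteration of A.2.4: returns (R, updated V).
    I = ede(K, DT)             # I = ede*K(DT)
    R = ede(K, xor64(I, V))    # R = ede*K(I XOR V)
    V = ede(K, xor64(R, I))    # new V = ede*K(R XOR I)
    return R, V

K = urandom(24)                # *K: key reserved only for PRNG use
V = urandom(8)                 # secret 64-bit seed
DT = urandom(8)                # stand-in for the date/time vector
R, V = x931_step(K, V, DT)     # successive R values are concatenated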
I have a table with an auto-increment 32-bit integer primary key in a database, which will produce numbers ranging from 1 to 4294967295.
I would like to keep the convenience of an auto-generated primary key, while having the numbers on the front-end of the application look randomly generated.
Is there a mathematical function which would allow a two-way, one-to-one transformation between one integer and another?
For example a function would take a number, and translate it to another:
1 => 1538645623
2 => 2043145593
3 => 393439399
And another function the way back:
1538645623 => 1
2043145593 => 2
393439399 => 3
I'm not necessarily looking for an implementation here, but rather a hint on what I suppose, must be a well-known mathematical problem somewhere :)
Mathematically this is almost exactly the same problem as cryptography.
You: I want to go from an id (string of bits) to another number (string of bits) and back again in a non-obvious way.
Cryptography: I want to go from plaintext (string of bits) to another string of bits and back again (reversibly) in a non-obvious way.
So for a simple solution, can I suggest just plugging in whatever cryptographic algorithm is most convenient in your language, and encrypting and decrypting your id?
If you wanted to be a bit cleverer you can do what is called "salting" in addition to cryptography. Take your id as a 32 bit (or whatever) number. Concatenate it with a random 32 bit number. Encrypt the result. To reverse, just decrypt, and throw away the random part.
Of course, if someone was seriously attacking this, this might be vulnerable to known plaintext/differential cryptanalysis attacks as you have a very small known plaintext space, but it sounds like you aren't trying to defend against serious attacks.
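To illustrate, here is a toy 4-round Feistel network on 32-bit values, standing in for "whatever algorithm is most convenient". The round function and keys are arbitrary choices of mine; this is obfuscation only, not a vetted cipher:

def _round(half, key):
    # arbitrary 16-bit mixing function, not cryptographically strong
    return (half * 0x9E3779B1 + key) & 0xFFFF

KEYS = [0x3F84, 0x9C2D, 0x51A7, 0xE6B0]    # made-up round keys

def feistel_encrypt(x):
    left, right = x >> 16, x & 0xFFFF
    for k in KEYS:
        left, right = right, left ^ _round(right, k)
    return (left << 16) | right

def feistel_decrypt(y):
    left, right = y >> 16, y & 0xFFFF
    for k in reversed(KEYS):
        left, right = right ^ _round(left, k), left
    return (left << 16) | right

for i in (1, 2, 3):
    assert feistel_decrypt(feistel_encrypt(i)) == i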
First remove the offset of 1, so you get numbers in the range 0 to 2^32 - 2. Let m = 2^32 - 1.
Choose some a that is relatively prime to m. Since it is relatively prime, it has an inverse a' such that a * a' = 1 (mod m). Also choose some b. Choose big numbers to get a good mixing effect.
Then you can compute your desired pseudo-random number by y = (a * x + b) % m, and get back the original by x = ((y - b) * a') % m.
This is essentially one step of a linear congruential generator (LCG) for pseudo-random numbers.
Note that this is not secure, it is only obfuscation. For example, if a user can get two numbers in sequence then he can recover a and b easily.
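A minimal sketch of this in Python (the constants a and b are arbitrary picks of mine; pow(a, -1, m) needs Python 3.8+):

m = 2**32 - 1                 # ids shifted down by 1 fall in 0 .. m - 1
a = 2654435761                # prime, hence coprime to m
b = 104729
a_inv = pow(a, -1, m)         # a' with (a * a') % m == 1

def obfuscate(id_):
    x = id_ - 1               # remove the offset of 1
    return (a * x + b) % m

def deobfuscate(y):
    x = ((y - b) * a_inv) % m
    return x + 1

for i in (1, 2, 3, 4294967295):
    assert deobfuscate(obfuscate(i)) == i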
In most cases web apps use a hash of a randomly generated number as the reference to a table row. This hash can be stored as a number and displayed as a string to the end user.
The hash is unique and serves as the identifier; the id itself is only used inside the application, never shown to the outside world.
Is there a standard way to do this?
Googling -- "approximate entropy" bits -- uncovers multiple academic papers but I'd like to just find a chunk of pseudocode defining the approximate entropy for a given bit string of arbitrary length.
(In case this is easier said than done and depends on the application: my application involves 16,320 bits of encrypted data (ciphertext), encrypted as a puzzle rather than meant to be impossible to crack. I thought I'd first check the entropy, but couldn't easily find a good definition of it. So it seemed like a question that ought to be on StackOverflow! Ideas for where to begin deciphering 16k random-seeming bits are also welcome...)
See also this related question:
What is the computer science definition of entropy?
Entropy is not a property of the string you got, but of the strings you could have obtained instead. In other words, it qualifies the process by which the string was generated.
In the simple case, you get one string among a set of N possible strings, where each string has the same probability of being chosen as every other, i.e. 1/N. In that situation, the string is said to have an entropy of N. The entropy is often expressed in bits, which is a logarithmic scale: an entropy of "n bits" is an entropy equal to 2^n.
For instance: I like to generate my passwords as two lowercase letters, then two digits, then two lowercase letters, and finally two digits (e.g. va85mw24). Letters and digits are chosen randomly, uniformly, and independently of each other. This process may produce 26*26*10*10*26*26*10*10 = 4569760000 distinct passwords, and all these passwords have equal chances of being selected. The entropy of such a password is then 4569760000, which means about 32.1 bits.
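In code, that "bits" figure is just the base-2 logarithm of the number of equally likely passwords:

import math
print(math.log2(26 * 26 * 10 * 10 * 26 * 26 * 10 * 10))   # about 32.09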
Shannon's entropy equation is the standard method of calculation. Here is a simple implementation in Python, shamelessly copied from the Revelation codebase, and thus GPL licensed:
import math

def entropy(string):
    "Calculates the Shannon entropy of a string"
    # get probability of chars in string
    prob = [float(string.count(c)) / len(string) for c in dict.fromkeys(list(string))]
    # calculate the entropy
    entropy = -sum([p * math.log(p) / math.log(2.0) for p in prob])
    return entropy

def entropy_ideal(length):
    "Calculates the ideal Shannon entropy of a string with given length"
    prob = 1.0 / length
    return -1.0 * length * prob * math.log(prob) / math.log(2.0)
Note that this implementation assumes that your input bit-stream is best represented as bytes. This may or may not be the case for your problem domain. What you really want is your bitstream converted into a string of numbers. Just how you decide on what those numbers are is domain specific. If your numbers really are just one and zeros, then convert your bitstream into an array of ones and zeros. The conversion method you choose will affect the results you get, however.
I believe the answer is the Kolmogorov Complexity of the string.
Not only is this not answerable with a chunk of pseudocode, Kolmogorov complexity is not a computable function!
One thing you can do in practice is compress the bit string with the best available data compression algorithm.
The more it compresses the lower the entropy.
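As a rough sketch of that heuristic, using zlib from the standard library as the (far from best available) compressor:

import os
import zlib

def compression_ratio(data: bytes) -> float:
    # compressed size / original size: near 1.0 suggests high entropy
    return len(zlib.compress(data, 9)) / len(data)

print(compression_ratio(b"the" * 1000))     # very redundant: tiny ratio
print(compression_ratio(os.urandom(3000)))  # incompressible: ratio near 1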
The NIST Random Number Generator evaluation toolkit has a way of calculating "Approximate Entropy." Here's the short description:
Approximate Entropy Test Description: The focus of this test is the frequency of each and every overlapping m-bit pattern. The purpose of the test is to compare the frequency of overlapping blocks of two consecutive/adjacent lengths (m and m+1) against the expected result for a random sequence.
And a more thorough explanation is available from the PDF on this page:
http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html
There is no single answer. Entropy is always relative to some model. When someone talks about a password having limited entropy, they mean "relative to the ability of an intelligent attacker to predict", and it's always an upper bound.
Your problem is, you're trying to measure entropy in order to help you find a model, and that's impossible; what an entropy measurement can tell you is how good a model is.
Having said that, there are some fairly generic models that you can try; they're called compression algorithms. If gzip can compress your data well, you have found at least one model that can predict it well. And gzip is, for example, mostly insensitive to simple substitution. It can handle "wkh" frequently in the text as easily as it can handle "the".
Using the Shannon entropy of a word with this formula: H = -sum(p(c) * log2(p(c))), summed over the distinct characters c of the word (http://imgur.com/a/DpcIH).
Here's an O(n) algorithm that calculates it:
import math
from collections import Counter

def entropy(s):
    l = float(len(s))
    return -sum(map(lambda a: (a / l) * math.log2(a / l), Counter(s).values()))
Here's an implementation in Python (I also added it to the Wiki page):
import numpy as np

def ApEn(U, m, r):

    def _maxdist(x_i, x_j):
        return max([abs(ua - va) for ua, va in zip(x_i, x_j)])

    def _phi(m):
        x = [[U[j] for j in range(i, i + m)] for i in range(N - m + 1)]
        C = [len([1 for x_j in x if _maxdist(x_i, x_j) <= r]) / (N - m + 1.0) for x_i in x]
        return (N - m + 1.0) ** (-1) * sum(np.log(C))

    N = len(U)
    return _phi(m) - _phi(m + 1)
Example:
>>> U = np.array([85, 80, 89] * 17)
>>> ApEn(U, 2, 3)
1.0996541105257052e-05
The above example is consistent with the example given on Wikipedia.