We're looking for a way to encrypt a 16 digit number (could be 10-20 digits), with the following requirements:
Output is also a number
Output doesn't double (or greatly increase) the number of digits
Doesn't require pre-storing a massive mapping table
Ok with moderate to low security
Simple and very low security: add something, then XOR the number with another number of similar size. This is only viable if nobody has access to the source code, and anybody who has access to the program (even without source) and can run it on a few samples (0, 1000, 10000, 10000000) will be able to figure it out.
Depending on language:
uint64_t theNumber;
uint64_t cryptbase1= 12345678909876, cryptbase2= 234567890987654;
// encrypt
uint64_t encrypted= (theNumber + cryptbase1) ^ cryptbase2;
// decrypt
uint64_t decrypted= (encrypted ^ cryptbase2) - cryptbase1;
I can imagine a 16 digit to 20 digit encryption algorithm:
Encrypt:
Convert the 16 digit number into its binary representation (54 bits needed).
Use a block cipher algorithm with a small blocksize (e.g. Triple-DES has a block size of 64 bits) to encrypt the 54 bits.
Convert the encrypted 64 bits into its 20 digit representation.
Decrypt:
Convert the 20 digit number into its binary 64 bit representation.
Use the block cipher algorithm to decrypt.
Convert the decrypted 64 bits back into a decimal representation. Written as 20 digits, the leading 4 digits will be 0, leaving the original 16 digits. (A sketch of this scheme follows below.)
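Here is a minimal Java sketch of that scheme, assuming a 24-byte Triple-DES key and the JCE transformation "DESede/ECB/NoPadding"; the class and method names are only illustrative. ECB is acceptable here because each value occupies exactly one block.

import java.math.BigInteger;
import java.nio.ByteBuffer;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class DigitCipher {
    // Encrypt a number of up to 16 decimal digits into a number of up to 20 digits.
    static BigInteger encrypt(long plain, byte[] tripleDesKey) throws Exception {
        Cipher cipher = Cipher.getInstance("DESede/ECB/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(tripleDesKey, "DESede"));
        byte[] block = ByteBuffer.allocate(8).putLong(plain).array(); // one 64-bit block
        return new BigInteger(1, cipher.doFinal(block));              // unsigned: at most 20 digits
    }

    // Decrypt the up-to-20-digit number back to the original value.
    static long decrypt(BigInteger encrypted, byte[] tripleDesKey) throws Exception {
        Cipher cipher = Cipher.getInstance("DESede/ECB/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(tripleDesKey, "DESede"));
        byte[] raw = encrypted.toByteArray();                 // may be 7 to 9 bytes long
        byte[] block = new byte[8];
        int n = Math.min(raw.length, 8);
        System.arraycopy(raw, raw.length - n, block, 8 - n, n); // right-align into the 8-byte block
        return ByteBuffer.wrap(cipher.doFinal(block)).getLong();
    }
}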
You are probably looking at a block cipher with a block size able to hold up to 20 decimal digits. You could use the Hasty Pudding cipher, which has a variable block size, or alternatively you could roll your own simple Feistel cipher with an even number of bits per block. You do not seem to need a very high level of security, so a simple Feistel cipher with four or six rounds would probably be easier.
I use a simple Feistel cipher for integer permutations, and the F function is:
// The F function for the Feistel rounds.
private int F(int num, int round) {
    // XOR with round key.
    num ^= mRoundKeys[round];
    // Square, then XOR the high and low parts.
    num *= num;
    return (num >>> HALF_SHIFT) ^ (num & LOW_16_MASK);
} // end F()
You do not seem to need anything more complex than that. If you want cryptographic security, then use Hasty Pudding, which is a lot more secure.
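To show how an F function like that fits into a complete cipher, here is a hedged sketch of a small balanced Feistel network over 32-bit values with 16-bit halves. The surrounding class, the round-key array, and the number of rounds are illustrative choices, not part of the answer above; the F body is the one quoted.

public class SmallFeistel {
    private static final int HALF_SHIFT = 16;
    private static final int LOW_16_MASK = 0xFFFF;
    private final int[] mRoundKeys;      // one sub-key per round, e.g. four or six rounds

    SmallFeistel(int[] roundKeys) { mRoundKeys = roundKeys; }

    // The F function quoted above: XOR with the round key, square,
    // then fold the high and low 16-bit halves together.
    private int F(int num, int round) {
        num ^= mRoundKeys[round];
        num *= num;
        return (num >>> HALF_SHIFT) ^ (num & LOW_16_MASK); // always fits in 16 bits
    }

    // Encrypt a 32-bit value treated as two 16-bit halves.
    int encrypt(int value) {
        int left = value >>> HALF_SHIFT;
        int right = value & LOW_16_MASK;
        for (int round = 0; round < mRoundKeys.length; round++) {
            int newRight = left ^ F(right, round);
            left = right;
            right = newRight;
        }
        return (left << HALF_SHIFT) | right;
    }

    // Decrypt by running the same rounds with the keys in reverse order.
    int decrypt(int value) {
        int left = value >>> HALF_SHIFT;
        int right = value & LOW_16_MASK;
        for (int round = mRoundKeys.length - 1; round >= 0; round--) {
            int newLeft = right ^ F(left, round);
            right = left;
            left = newLeft;
        }
        return (left << HALF_SHIFT) | right;
    }
}

Because each round only swaps halves and XORs in F's output, decryption works with any F whatsoever; F itself never has to be inverted.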
Any binary block of the appropriate size can be represented as decimal digits.
I've been given (17, 3233) and I need to encrypt the letter 'Z' using its ASCII number (Z = 90).
90^17 mod 3233 = 1668, and that would just work. But I want to know if there is a way that I can just send a single char instead of the integer 1668 and still make it work.
RSA is not a stream cipher. The encrypted result always has the size (bits) of the modulus - in your case 3233.
The number 3233 requires 12 bits - however, one byte/character provides only 8 bits. Hence you can't pack the RSA-encrypted text into one byte. You need at least 2 bytes.
Whether you can pack the integer into a char depends on your definition of a char:
char = (printable) ASCII character
A printable ASCII character usually has 7 bits. You can't store 12 bits in 7 bits.
char = byte
A standard character is equivalent to a byte and stores 8 bits. You can't store 12 bits in 8 bits.
char = Java UTF-16 char
Considering that a Java char is a UTF-16 code unit, you may be able to save the integer as one character; however, storing binary data in a Java UTF-16 char is a very unclean and hackish solution. I strongly recommend not implementing this! Binary data should not be saved in a character (or character array) without proper conversion and encoding (e.g. Base64 or hexadecimal encoding).
All signed character values range from -128 to 127. All unsigned character values range from 0 to 255. So the only way would be to have those numbers inside that range.
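As a small illustration of the byte counts above: the value 1668 fits in 12 bits, so two bytes are enough, and those two bytes can then be sent as hexadecimal or Base64 text. The class name below is made up for the example.

import java.util.Base64;

public class PackDemo {
    public static void main(String[] args) {
        int cipher = 1668;                          // 90^17 mod 3233 from the question

        // Two bytes are enough for any value below 3233 (12 bits).
        byte[] packed = { (byte) (cipher >> 8), (byte) cipher };

        // Transport as text: hexadecimal or Base64, as suggested above.
        System.out.printf("hex:    %02x%02x%n", packed[0] & 0xFF, packed[1] & 0xFF); // 0684
        System.out.println("base64: " + Base64.getEncoder().encodeToString(packed));

        // Unpacking on the receiving side.
        int restored = ((packed[0] & 0xFF) << 8) | (packed[1] & 0xFF);
        System.out.println("restored: " + restored); // 1668
    }
}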
How can I convert a 96-bit key to a 64-bit key? I have a DES key that is 96 bits long (i.e. 745347651281). I want to convert this to 64 bits, which I will use to decrypt a DES ciphertext.
Update:
There was an "original key" encrypted by RSA
The "original key" was decrypted using RSA to give us this (i.e 745347651281)
This (i.e 745347651281) is now supposed to be used to decryped a DES
file.
Note : The "original key" was in hex format which I converted to integer(base 16) before doing RSA decryption.
The key you have displayed is 48 bits in size, not 96 bits, if it is considered to be specified in hexadecimals. A DES key without parity would be 56 bits in size. This means that you have to create the parity bits that are missing. The parity of DES is described as follows:
One bit in each 8-bit byte of the KEY may be utilized for error detection in key generation, distribution, and storage. Bits 8, 16,..., 64 are for use in ensuring that each byte is of odd parity.
Note that the bits are numbered starting at the left with value 1, meaning that the least significant bit of each byte is used for parity. So you have to distribute the bits you have been given over the bytes, and then adjust the parity of each byte by possibly flipping the least significant bit (using XOR with 1).
Usually libraries have support for this kind of operation. In Java you can do this by generating the DES key using SecretKeyFactory for instance.
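A hedged Java sketch of both steps: forcing odd parity in the least significant bit of each byte, then building the key via SecretKeyFactory. The 8 key bytes here are placeholders; the question's 48-bit value does not by itself determine the full 56-bit key, so the real bytes would have to come from however the key material was originally laid out.

import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;

public class DesKeyDemo {
    // Force odd parity in the least significant bit of each byte,
    // as described in the FIPS quote above.
    static void setOddParity(byte[] key) {
        for (int i = 0; i < key.length; i++) {
            int b = key[i] & 0xFE;                           // clear the parity bit
            int ones = Integer.bitCount(b);
            key[i] = (byte) (b | (ones % 2 == 0 ? 1 : 0));   // make the total bit count odd
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder bytes only -- substitute the bytes of your actual key material.
        byte[] keyBytes = { 0x01, 0x23, 0x45, 0x67, (byte) 0x89, (byte) 0xAB, (byte) 0xCD, (byte) 0xEF };

        setOddParity(keyBytes);

        // Build a proper DES key object from the adjusted bytes.
        SecretKey key = SecretKeyFactory.getInstance("DES")
                                        .generateSecret(new DESKeySpec(keyBytes));
        System.out.println("DES key algorithm: " + key.getAlgorithm());
    }
}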
I'm currently in the process of learning about encryption and I'm hoping to find more clarification on what I learned.
Suppose the message "100 dollars should be moved from account 123456 to 555555" was encrypted using aes-128-cbc and a random IV. My professor says it's possible to alter the encrypted text so that when it's decrypted again, the message reads "900 dollars should be moved from account 123456 to 555555". How do you go about doing this?
I tried figuring it out on my own by generating my own key and iv, encrypting the message, then converting it to hex characters to work with. From there can I swap out some characters then decrypt? I tried playing around with this but something always seemed to go wrong.
We're using a basic linux command line for this.
Any help or explanation would be awesome!
Suppose the string was encrypted using a one-time pad and the resulting ciphertext is "B8B7D8CB9860EBD0163507FD00A9F923D45...". We know that the first byte of plaintext, the digit 1, has ASCII code 0x31. The first byte of the ciphertext is 0xB8. If k0 denotes the first byte of the key, then 0x31 xor k0 = 0xB8.
Decoding a one-time pad is just XOR-ing the ciphertext with the key, so the person decoding recovers the first byte of the plaintext as 0x31 = 0xB8 xor k0. If we XOR the first byte of the ciphertext with some mask m0, then the person decoding the ciphertext will get (0xB8 xor m0) xor k0. But this is just (0xB8 xor k0) xor m0, as XOR is commutative and associative, and that last expression reduces to 0x31 xor m0.
Now we want to change the resulting byte to 0x39, the ASCII code for the digit 9. So we need to solve 0x31 xor m0 = 0x39. That is simple: just XOR both sides with 0x31, giving m0 = 0x31 xor 0x39 = 0x08.
The same principle applies when using CBC mode. You can modify the IV in a similar way to change the decoded message.
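A hedged sketch of that IV modification using the standard JCE classes (AES-128-CBC with a random key and IV, as in the question): flipping a bit in the IV flips the same bit in the first plaintext block on decryption, because the first block is recovered as D(C1) XOR IV. The class name is illustrative.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CbcIvFlip {
    public static void main(String[] args) throws Exception {
        String msg = "100 dollars should be moved from account 123456 to 555555";

        // Random 128-bit key and IV, as in the question.
        byte[] key = new byte[16], iv = new byte[16];
        SecureRandom rnd = new SecureRandom();
        rnd.nextBytes(key);
        rnd.nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] ct = enc.doFinal(msg.getBytes(StandardCharsets.US_ASCII));

        // The attacker flips bits in the IV: '1' (0x31) xor '9' (0x39) = 0x08.
        byte[] tamperedIv = iv.clone();
        tamperedIv[0] ^= 0x31 ^ 0x39;

        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(tamperedIv));
        System.out.println(new String(dec.doFinal(ct), StandardCharsets.US_ASCII));
        // Prints "900 dollars should be moved from account 123456 to 555555"
    }
}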
@user515430's reasoning above is based on the fact that every ciphertext C is linearly dependent on the plaintext P (since C = P ⊕ K).
Actually, as @polettix points out, in CBC encryption we have, e.g. for the 6th block of a certain text, C₆ = E(P₆ ⊕ C₅, K) for a given key K; and if E(·) is a good encryption function we should lose such linearity.
But in CBC decryption, the 6th block of plaintext is obtained as P₆ = D(C₆, K) ⊕ C₅, so it is linearly dependent not on C₆, but on C₅.
Rewording: if you want to change a plaintext block in CBC, just change the previous ciphertext block (at the cost of garbling that previous block when it is decrypted).
See also https://crypto.stackexchange.com/q/30407/36884 (for the record, Cryptography StackExchange is the right site for this kind of question).
I am trying to multiply 52 by 1000 and I am getting a negative result
int getNewSum = 52 * 1000;
but this code is outputting a negative result: -13536
An explanation of how two's complement representation works is probably better given on Wikipedia and other places than here. What I'll do here is to take you through the workings of your exact example.
The int type on your Arduino is represented using sixteen bits, in a two's complement representation (bear in mind that other Arduinos use 32 bits for it, but yours is using 16.) This means that both positive and negative numbers can be stored in those 16 bits, and if the leftmost bit is set, the number is considered negative.
What's happening is that you're overflowing the number of bits you can use to store a positive number, and (accidentally, as far as you're concerned) setting the sign bit, thus indicating that the number is negative.
In 16 bits on your Arduino, decimal 52 would be represented in binary as:
0000 0000 0011 0100
(2^5 + 2^4 + 2^2 = 52)
However, the result of multiplying 52 by 1,000 -- 52,000 -- will end up overflowing the magnitude bits of an int, putting a '1' in the sign bit on the end:
*----This is the sign bit. It's now 1, so the number is considered negative.
1100 1011 0010 0000
(typically, computer integer arithmetic and associated programming languages don't protect you against doing things like this, for a variety of reasons, mostly related to efficiency, and mostly now historical.)
Because that sign bit on the left-hand end is set, to convert that number back into decimal from its assumed two's complement representation, we assume it's a negative number, and then first take the one's complement (flipping all the bits):
0011 0100 1101 1111
-- which represents 13,535 -- and add one to it, yielding 13,536, and call it negative: -13,536, the value you're seeing.
If you read up on two's complement/integer representations in general, you'll get the hang of it.
In the meantime, this probably means you should be looking for a bigger type to store your number. Arduino has unsigned integers, and a long type, which will use four bytes to store your numbers, giving you a range from -2,147,483,648 to 2,147,483,647. If that's enough for you, you should probably just switch to use long instead of int.
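The Arduino sketch itself is C++, but as a language-neutral illustration the same 16-bit two's complement wrap-around can be reproduced in Java by truncating the product to a 16-bit short, and avoided by using a wider type:

public class OverflowDemo {
    public static void main(String[] args) {
        // Arduino's 16-bit int keeps only the low 16 bits of 52 * 1000;
        // Java's short has the same width, so a cast shows the same wrap-around.
        short wrapped = (short) (52 * 1000);
        System.out.println(wrapped);        // -13536

        // With a wider type the full result fits.
        long fits = 52L * 1000;
        System.out.println(fits);           // 52000
    }
}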
Matt's answer is already a very good in-depth explanation of what's happening, but for those looking for a more TL;DR practical answer:
Problem:
This happens quite often to Arduino programmers when they try to assign (with the = sign) the result of an arithmetic operation (usually a multiplication) to a normal integer (int). As mentioned, when the result is bigger than the range of the variable's type, overflow happens.
Solution 1:
The easiest solution is to replace the int type with a bigger data type, depending on your needs. As this tutorialspoint.com tutorial explains, there are different integer types we can use:
int:
16 bit: from -32,768 to 32,767
32 bit: from -2,147,483,648 to 2,147,483,647
unsigned int: from 0 to 65,535
long: from -2,147,483,648 to 2,147,483,647
unsigned long: from 0 to 4,294,967,295
Solution 2:
This works only if you have divisions with big enough denominators in your arithmetic. The expression a * b / c is evaluated left to right, so the multiplication is calculated before the division; if you have divisions in your expression, try to encapsulate them with parentheses. For example, if you have a * b / c, replace it with a * (b / c) (keeping in mind that integer division discards the remainder, so the result may differ slightly).
Why is only XOR used in cryptographic algorithms, while other logic gates like OR, AND, and NOR are not used?
It isn't exactly true to say that the logical operation XOR is the only one used throughout all cryptography; however, it is the only one of the basic logic operations that can be used on its own for reversible, two-way encryption.
Here is that explained:
Imagine you have a string of binary digits 10101
and you XOR the string 10111 with it, you get 00010
now your original string is encoded and the second string becomes your key
if you XOR your key with your encoded string you get your original string back.
XOR allows you to easily encrypt and decrypt a string, the other logic operations don't.
If you have a longer string, you can repeat your key until it's long enough.
For example, if your string was 1010010011, you'd simply write your key twice, so it becomes 1011110111, and XOR it with the new string.
Here's a Wikipedia link on the XOR cipher.
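A minimal Java sketch of that repeating-key XOR, using the exact bit strings from the example above (the class name is just illustrative):

public class XorCipherDemo {
    public static void main(String[] args) {
        int message  = 0b1010010011;         // the 10-bit string from the example
        int key      = 0b10111;              // the 5-bit key
        int repeated = (key << 5) | key;     // key written twice: 1011110111

        int encrypted = message ^ repeated;
        int decrypted = encrypted ^ repeated; // XORing again restores the original

        System.out.println(Integer.toBinaryString(encrypted)); // 1100100 (leading zeros dropped)
        System.out.println(Integer.toBinaryString(decrypted)); // 1010010011
    }
}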
I can see 2 reasons:
1) (Main reason) XOR does not leak information about the original plaintext.
2) (Nice-to-have reason) XOR is an involutory function, i.e., if you apply XOR twice, you get the original plaintext back (i.e, XOR(k, XOR(k, x)) = x, where x is your plaintext and k is your key). The inner XOR is the encryption and the outer XOR is the decryption, i.e., the exact same XOR function can be used for both encryption and decryption.
To exemplify the first point, consider the truth-tables of AND, OR and XOR:
And
0 AND 0 = 0
0 AND 1 = 0
1 AND 0 = 0
1 AND 1 = 1 (Leak!)
Or
0 OR 0 = 0 (Leak!)
0 OR 1 = 1
1 OR 0 = 1
1 OR 1 = 1
XOR
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
Everything on the first column is our input (ie, the plain text). The second column is our key and the last column is the result of your input "mixed" (encrypted) with the key using the specific operation (ie, the ciphertext).
Now, imagine an attacker got access to some encrypted byte, say: 10010111, and he wants to get the original plaintext byte.
Let's say the AND operator was used in order to generate this encrypted byte from the original plaintext byte. If AND was used, then we know for certain that every time we see the bit '1' in the encrypted byte then the input (ie, the first column, the plain text) MUST also be '1' as per the truth table of AND. If the encrypted bit is a '0' instead, we do not know if the input (ie, the plain text) is a '0' or a '1'. Therefore, we can conclude that the original plain text is: 1 _ _ 1 _ 111. So 5 bits of the original plain text were leaked (ie, could be accessed without the key).
Applying the same idea to OR, we see that every time we find a '0' in the encrypted byte, we know that the input (ie, the plain text) must also be a '0'. If we find a '1' then we do not know if the input is a '0' or a '1'. Therefore, we can conclude that the input plain text is: _ 00 _ 0 _ _ _. This time we were able to leak 3 bits of the original plain text byte without knowing anything about the key.
Finally, with XOR, we cannot get any bit of the original plaintext byte. Every time we see a '1' in the encrypted byte, that '1' could have been generated from a '0' or from a '1'. Same thing with a '0' (it could come from both '0' or '1'). Therefore, not a single bit is leaked from the original plaintext byte.
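A small Java sketch of that leak analysis for the encrypted byte 10010111: under AND every 1 bit pins the plaintext bit, under OR every 0 bit pins it, and under XOR nothing is pinned (class name illustrative):

public class LeakDemo {
    public static void main(String[] args) {
        int cipher = 0b10010111;    // the encrypted byte from the example above

        StringBuilder andLeak = new StringBuilder();
        StringBuilder orLeak  = new StringBuilder();
        for (int i = 7; i >= 0; i--) {
            int bit = (cipher >> i) & 1;
            // AND: an output bit of 1 forces the plaintext bit to 1.
            andLeak.append(bit == 1 ? '1' : '_');
            // OR: an output bit of 0 forces the plaintext bit to 0.
            orLeak.append(bit == 0 ? '0' : '_');
            // XOR: either plaintext bit is possible for any output bit, so nothing is forced.
        }
        System.out.println("AND leak: " + andLeak);  // 1__1_111
        System.out.println("OR  leak: " + orLeak);   // _00_0___
    }
}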
The main reason is that if a random variable R1 with an unknown distribution is XORed with an independent random variable R2 with a uniform distribution, the result is a random variable with a uniform distribution, so you can easily randomize a biased input, which is not possible with the other binary operators.
The output of XOR always depends on both inputs. This is not the case for the other operations you mention.
I think it's because XOR is reversible. If you want to create a hash, then you'll want to avoid XOR.
XOR is the only gate that's used directly because, no matter what one input is, the other input always has an effect on the output.
However, it is not the only gate used in cryptographic algorithms. That might be true of old-school cryptography, the type involving tons of bit shuffles and XORs and rotating buffers, but for prime-number-based crypto you need all kinds of mathematics that is not implemented through XOR.
XOR acts like a toggle switch where you can flip specific bits on and off. If you want to "scramble" a number (a pattern of bits), you XOR it with a number. If you take that scrambled number and XOR it again with the same number, you get your original number back.
210 XOR 145 gives you 67 <-- Your "scrambled" result
67 XOR 145 gives you 210 <-- ...and back to your original number
When you "scramble" a number (or text or any pattern of bits) with XOR, you have the basis of much of cryptography.
XOR uses fewer transistors (4 NAND gates) than more complicated operations (e.g. ADD, MUL), which makes it good to implement in hardware when gate count is important. Furthermore, XOR is its own inverse, which makes it good for applying key material (the same code can be used for encryption and decryption). The beautifully simple AddRoundKey operation of AES is an example of this.
For symmetric crypto, the only real choices for operations that mix bits with the cipher and do not increase the length are add with carry, add without carry (XOR), and compare (XNOR). Any other operation either loses bits, expands the data, or is not available on CPUs.
The XOR property (a xor b) xor b = a comes in handy for stream ciphers: to encrypt n bits of data, a pseudo-random sequence of n bits is generated using the crypto key and crypto algorithm.
Sender:
Data: 0100 1010 (0x4A)
pseudo random sequence: 1011 1001 (0xB9)
------------------
ciphered data 1111 0011 (0xF3)
------------------
Receiver:
ciphered data 1111 0011 (0xF3)
pseudo random sequence: 1011 1001 (0xB9) (receiver has key and computes same sequence)
------------------
0100 1010 (0x4A) Data after decryption
------------------
Let's consider the three common bitwise logical operators.
Let's say we can choose some number (let's call it the mask) and combine it with an unknown value:
AND is about forcing some bits to zero (those that are set to zero in the mask)
OR is about forcing some bits to one (those that are set to one in the mask)
XOR is more subtle you can't know for sure the value of any bit of the result, whatever the mask you choose. But if you apply your mask two times you get back your initial value.
In other words, the effect of AND and OR is to remove some information, and that's definitely not what you want in cryptographic algorithms (symmetric or asymmetric ciphers, or digital signatures). If you lose information you won't be able to get it back (decrypt), or the signature would tolerate some minute changes in the message, thus defeating its purpose.
All that said, this is true of cryptographic algorithms, not of their implementations. Most implementations of cryptographic algorithms also use many ANDs, usually to extract individual bytes from 32- or 64-bit internal registers.
You typically get code like this (a nearly random extract of aes_core.c):
rk[ 6] = rk[ 0] ^
(Te2[(temp >> 16) & 0xff] & 0xff000000) ^
(Te3[(temp >> 8) & 0xff] & 0x00ff0000) ^
(Te0[(temp ) & 0xff] & 0x0000ff00) ^
(Te1[(temp >> 24) ] & 0x000000ff) ^
rcon[i];
rk[ 7] = rk[ 1] ^ rk[ 6];
rk[ 8] = rk[ 2] ^ rk[ 7];
rk[ 9] = rk[ 3] ^ rk[ 8];
8 XORs and 7 ANDs if I count right
XOR is a basic mathematical operation used in cryptography. It is a logical (bitwise) operation; there are others, such as AND, OR, and NOT, and modular arithmetic is also used. XOR is the most important and the most used.
If the two input bits are the same, the output is 0.
If they are different, the output is 1.
Example:
Message : Hello
Binary Version of Hello : 01001000 01100101 01101100 01101100 01101111
Key-stream : 11000110 10100011 01011010 11001101 00100101
Cipher text using XOR : 10001110 11000110 00110110 10100001 01001010
Applications: The one-time pad (Vernam cipher) uses the exclusive-OR function: the receiver has the same key-stream and receives the ciphertext over a covert transport channel. The receiver then XORs the ciphertext with the key-stream in order to reveal the plaintext Hello. In a one-time pad, the key-stream must be at least as long as the message.
Fact: The one-time pad is the only truly unbreakable encryption.
Exclusive OR is also used in the Feistel structure on which the DES block cipher is built.
Note: with a uniformly random key bit, the XOR output is 0 or 1 with 50% probability each.
I think it's simply because, given some random set of binary numbers, a large number of OR operations would tend towards all ones, and likewise a large number of AND operations would tend towards all zeroes, whereas a large number of XORs produces a random-ish selection of ones and zeroes.
This is not to say that AND and OR are not useful - just that XOR is more useful.
The prevalence of OR/AND and XOR in cryptography is for two reasons:
One, these are lightning-fast instructions.
Two, they are difficult to model using conventional mathematical formulas.