Encryption of a letter with the RSA algorithm

I've been given the public key (17, 3233) and I need to encrypt the letter 'Z' using its ASCII number (Z = 90).
90^17 mod 3233 = 1668, and that works. But I want to know if there is a way that I can send a single char instead of the integer 1668 and still make it work.

RSA is not a stream cipher. The encrypted result always has the size (in bits) of the modulus - in your case 3233.
The number 3233 requires 12 bits - however, one byte/character provides only 8 bits. Hence you can't pack the RSA-encrypted text into one byte; you need at least 2 bytes.
Whether you can pack the integer into a char depends on your definition of a char:
char = (printable) ASCII character
A printable ASCII character provides at most 7 bits. You can't store 12 bits in 7 bits.
char = byte
A standard character is the equivalent of a byte and can store 8 bits. You can't store 12 bits in 8 bits.
char = Java UTF-16 char
Considering that a Java char is a UTF-16 character, you may be able to store the integer as one character; however, storing binary data in a Java UTF-16 char is a very unclean and hackish solution. I strongly recommend not implementing this! Binary data should not be saved in a character (array) without proper conversion and encoding (e.g. Base64 or hexadecimal encoding).
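To make the arithmetic concrete, here is a minimal Java sketch of the textbook-RSA computation above, packing the 12-bit result into two bytes rather than one char (the class and variable names are just placeholders):

import java.math.BigInteger;

public class TextbookRsaDemo {
    public static void main(String[] args) {
        BigInteger e = BigInteger.valueOf(17);    // public exponent
        BigInteger n = BigInteger.valueOf(3233);  // modulus
        BigInteger m = BigInteger.valueOf('Z');   // ASCII 90

        BigInteger c = m.modPow(e, n);            // 90^17 mod 3233
        System.out.println(c);                    // 1668

        // 1668 needs 12 bits, so it takes two bytes, not one char.
        int ct = c.intValue();
        byte[] packed = { (byte) (ct >> 8), (byte) ct };
        System.out.printf("%02x %02x%n", packed[0] & 0xFF, packed[1] & 0xFF); // 06 84
    }
}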

All signed char values range from -128 to 127, and all unsigned char values range from 0 to 255. So the only way would be to keep those numbers inside that range.

Related

Length of AES encrypted data

I have data that needs to be stored encrypted in a database; the maximum length of the data before encryption is 50 chars (English or Arabic). I need to encrypt the data using AES-128 and store the output in the database (as a Base64 string).
How to know the length of the data after encryption?
Try it with your specified algorithm, block size, IV size, and see what size output you get :-)
First it depends on the encoding of the input text. Is it UTF-8? UTF-16?
Let's assume UTF-8, so 1 byte per character means 50 bytes of input data to your encryption algorithm (100 bytes if UTF-16; note that Arabic characters actually take 2 bytes each even in UTF-8).
Then you will pad to the block size of the algorithm. AES, regardless of key size, uses a block of 16 bytes, so we will be padded out to 64 bytes (or 112 for UTF-16).
Then we need to store the IV and header information. That is (usually, with default settings/IV sizes) another 16 bytes, so we are at 80 bytes (or 128 for UTF-16).
Finally we are encoding to Base64. I assume you want the string length, since otherwise it is wasteful to make it into a string. Base64 bloats the data according to the following formula: Ceil(bytes/3) * 4. So for us that is Ceil(80/3) * 4 = 27 * 4 = 108 characters (or 172 for UTF-16).
Again this is all highly dependent on your choices of how you encrypt, what the text is encoded as, etc.
I would try it with your scenario before relying on these numbers for anything useful.
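If you want to verify the walk-through above, a short Java sketch reproduces the 108-character figure; it assumes AES-128 in CBC mode with PKCS#5 padding and a 16-byte IV prepended to the ciphertext, matching the defaults assumed in the answer:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.IvParameterSpec;

public class AesLengthDemo {
    public static void main(String[] args) throws Exception {
        String plaintext = "x".repeat(50);        // 50 ASCII chars -> 50 UTF-8 bytes

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);                             // AES-128
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, kg.generateKey(), new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        System.out.println(ct.length);            // 64: 50 bytes padded to 4 blocks

        // Prepend the IV (the "header"), then Base64-encode the result.
        byte[] stored = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, stored, 0, iv.length);
        System.arraycopy(ct, 0, stored, iv.length, ct.length);
        String b64 = Base64.getEncoder().encodeToString(stored);
        System.out.println(b64.length());         // 108 = Ceil(80/3) * 4
    }
}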

RSA on ASCII message problems with '\0'

I want to encrypt and decrypt ASCII messages using an RSA algorithm written in assembly.
I read that for security and efficiency reasons the encryption is normally not done character-wise; instead a number of characters is grouped and encrypted together (e.g. Wikipedia says that 3 chars are grouped).
Let us assume that we want to encrypt the message "aaa" grouping 2 characters.
"aaa" is stored as 61616100.
If we group two characters and encrypt the resulting halfwords the result for the 6161 block can in fact be something like 0053. This will result in an artificial second '\0' character which corrupts the resulting message.
Is there any way to work around this problem?
Using padding or anything similar is unfortunately not an option since I am required to use the same function for encrypting and decrypting.
The output of RSA is a number. Usually this number is encoded as an octet string (byte array). You should not treat the result as a character string; treat it as a byte array with the same length as the modulus (or at least the length of the modulus in bytes).
Besides possibly containing a zero (a null terminator), the bytes may take any value, including non-printable ones such as control characters and 0x7F. If you want to treat the result as a printable string, convert it to hex or Base64.
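As a sketch of that advice in Java (the helper name and the I2OSP-style left padding are my own illustration, not a standard API):

import java.math.BigInteger;

public class RsaToBytesDemo {
    // Encode an RSA result as a fixed-length byte array,
    // left-padded with zeros to the byte length of the modulus.
    static byte[] toFixedLength(BigInteger c, BigInteger n) {
        int len = (n.bitLength() + 7) / 8;
        byte[] raw = c.toByteArray();      // may carry a sign byte or be short
        byte[] out = new byte[len];
        int copy = Math.min(raw.length, len);
        System.arraycopy(raw, raw.length - copy, out, len - copy, copy);
        return out;
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(3233);
        BigInteger c = BigInteger.valueOf(0x53);   // a ciphertext block like 0x0053
        for (byte b : toFixedLength(c, n))
            System.out.printf("%02x ", b & 0xFF);  // prints: 00 53
    }
}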

Simple way to encrypt 16 digit number

We're looking for a way to encrypt a 16 digit number (could be 10-20 digits), with the following requirements:
Output is also a number
Output doesn't double the number of digits (or greatly increase it)
Doesn't require pre-storing a massive mapping table
Ok with moderate to low security
Simple and very low security: Add something, then XOR the number with another number of similar size. Only viable if nobody has access to the source code. And anybody who has access to the program (even without source) and who can run it with a few samples (0, 1000, 10000, 10000000) will be able to figure it out.
Depending on language:
#include <stdint.h>

uint64_t theNumber;
uint64_t cryptbase1 = 12345678909876, cryptbase2 = 234567890987654;
// encrypt: add, then XOR (unsigned 64-bit overflow wraps, so this stays reversible)
uint64_t encrypted = (theNumber + cryptbase1) ^ cryptbase2;
// decrypt: undo the XOR first, then subtract
uint64_t decrypted = (encrypted ^ cryptbase2) - cryptbase1;
I can imagine a 16 digit to 20 digit encryption algorithm:
Encrypt:
Convert the 16 digit number into its binary representation (54 bits needed).
Use a block cipher algorithm with a small blocksize (e.g. Triple-DES has a block size of 64 bits) to encrypt the 54 bits.
Convert the encrypted 64 bits into its 20 digit representation.
Decrypt:
Convert the 20 digit number into its binary 64 bit representation.
Use the block cipher algorithm to decrypt.
Convert the 64 bits into its 20 digit representation. The leftmost 4 digits have to be 0; the remaining 16 digits are the original number.
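A rough Java sketch of that scheme, assuming Triple-DES (DESede) in ECB mode with no padding; the key handling here is purely illustrative:

import java.math.BigInteger;
import java.nio.ByteBuffer;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class DigitsBlockDemo {
    public static void main(String[] args) throws Exception {
        long plain = 1234567890123456L;              // 16 decimal digits (54 bits)

        SecretKey key = KeyGenerator.getInstance("DESede").generateKey();
        Cipher enc = Cipher.getInstance("DESede/ECB/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key);

        // One 64-bit block in, one 64-bit block out.
        byte[] block = ByteBuffer.allocate(8).putLong(plain).array();
        byte[] ct = enc.doFinal(block);

        // A 64-bit ciphertext always fits in 20 decimal digits (2^64 < 10^20).
        String digits = String.format("%020d", new BigInteger(1, ct));
        System.out.println(digits);

        Cipher dec = Cipher.getInstance("DESede/ECB/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] raw = new BigInteger(digits).toByteArray();
        // toByteArray() may add a sign byte or be short; re-pad to exactly 8 bytes.
        byte[] padded = new byte[8];
        int copy = Math.min(raw.length, 8);
        System.arraycopy(raw, raw.length - copy, padded, 8 - copy, copy);
        System.out.println(ByteBuffer.wrap(dec.doFinal(padded)).getLong()); // 1234567890123456
    }
}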
You are probably looking at a block cipher with a block size able to hold up to 20 decimal digits. You can use Hasty Pudding cipher, which has a variable block size, or alternatively you could roll your own simple Feistel cipher with an even number of bits per block. You do not seem to need a very high level of security, so a simple Feistel cipher with four or six rounds would probably be easier.
I use a simple Feistel cipher for integer permutations, and the F function is:
// The F function for the Feistel rounds.
private int F(int num, int round) {
    // XOR with the round key.
    num ^= mRoundKeys[round];
    // Square, then XOR the high and low halves together.
    num *= num;
    return (num >>> HALF_SHIFT) ^ (num & LOW_16_MASK);
} // end F()
You do not seem to need anything more complex than that. If you want cryptographic security, then use Hasty Pudding, which is a lot more secure.
Any binary block of the appropriate size can be represented as decimal digits.
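To make the round structure concrete, here is a self-contained sketch of a balanced Feistel cipher over 32-bit blocks built around that F function; the round keys, round count, and constants here are illustrative assumptions, not a vetted design:

public class TinyFeistel {
    private static final int HALF_SHIFT = 16;
    private static final int LOW_16_MASK = 0xFFFF;
    private static final int ROUNDS = 6;
    private final int[] mRoundKeys = {0x3A1C, 0x9B2D, 0x5E4F, 0xC071, 0x1883, 0x6D95};

    // Square-and-fold F function, as in the answer above.
    private int F(int num, int round) {
        num ^= mRoundKeys[round];
        num *= num;
        return (num >>> HALF_SHIFT) ^ (num & LOW_16_MASK);
    }

    int encrypt(int block) {
        int left = block >>> 16, right = block & LOW_16_MASK;
        for (int r = 0; r < ROUNDS; r++) {
            int tmp = right;                           // (L, R) -> (R, L ^ F(R))
            right = (left ^ F(right, r)) & LOW_16_MASK;
            left = tmp;
        }
        return (left << 16) | right;
    }

    int decrypt(int block) {
        int left = block >>> 16, right = block & LOW_16_MASK;
        for (int r = ROUNDS - 1; r >= 0; r--) {
            int tmp = left;                            // run the rounds backwards
            left = (right ^ F(left, r)) & LOW_16_MASK;
            right = tmp;
        }
        return (left << 16) | right;
    }

    public static void main(String[] args) {
        TinyFeistel tf = new TinyFeistel();
        int c = tf.encrypt(12345678);
        System.out.println(c + " -> " + tf.decrypt(c)); // round-trips to 12345678
    }
}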

randomblob(N) in sqlite

The documentation (http://www.sqlite.org/lang_corefunc.html) says that it generates an N-byte blob, and it goes on to give an example of using randomblob(16) with hex() for generating unique ids.
But isn't randomblob(8) more than enough for most databases? 8 bytes gives 64 bits, which would give 2^64 different possible values (which will be converted into hex format by hex(randomblob(8))). Why waste the extra 8 bytes here?
GUIDs are defined as having 128 bits.

Hex to Ascii conversion problem

What is the ASCII representation of this hex value: 0x80487d2? Every converter gives me a different answer; hopefully someone can help.
Thanks
0x80487d2 has no ASCII representation.
ASCII only has characters in the range 0 to 127 (inclusive). The hex value 0x80487d2 is well above 127.
That hex value can be split into multiple bytes, but the way this is done depends on whether the machine is little- or big-endian, and regardless, not all of those bytes have an ASCII representation. You won't find 0xd2 on any ASCII character chart (http://www.asciitable.com/).
Assuming that is a literal number (i.e. not some weird or little-endian encoding) the decimal representation is 134514642.
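A small Java sketch showing both points - the byte split (big-endian order is just one assumption) and the decimal value:

public class HexBytesDemo {
    public static void main(String[] args) {
        int value = 0x80487d2;
        // Big-endian byte split: 08 04 87 d2. Note 0x87 and 0xd2 are above 0x7F,
        // so they have no ASCII assignment at all.
        for (int shift = 24; shift >= 0; shift -= 8)
            System.out.printf("%02x ", (value >>> shift) & 0xFF);
        System.out.println();
        System.out.println(value); // 134514642
    }
}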
