What is the ASCII representation of this hex value: 0x80487d2? Every converter gives me a different answer; hopefully someone can help.
Thanks
0x80487d2 has no ASCII representation.
ASCII only covers characters in the range 0 to 127 (inclusive). The hex value 0x80487d2 is well above 127.
That hex value can be split into multiple bytes, but how this is done depends on whether the machine is little- or big-endian, and regardless, not all of those bytes have an ASCII representation. You won't find 0xd2 on any ASCII character chart (http://www.asciitable.com/).
Assuming that is a literal number (i.e. not some weird or little-endian encoding), the decimal representation is 134514642.
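To make that concrete, here is a minimal Python sketch (the variable names are my own) that prints the decimal value and splits it into big-endian bytes, checking which of them fall inside the ASCII range:

    value = 0x80487D2
    print(value)  # 134514642

    # Split into big-endian bytes: 0x08, 0x04, 0x87, 0xD2
    for b in value.to_bytes(4, byteorder="big"):
        # ASCII covers 0..127 only; 0x87 and 0xD2 fall outside it
        print(hex(b), "ASCII" if b < 128 else "not ASCII")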
I want to know a way to encode a plain text of 20 characters into a cipher of 8 characters and decode it back to 20 characters.
The possible constituent characters are:
Just as hexadecimal digits range from '0' to 'F', the characters of our plain text range from '0' to 'A' (base 11).
The required cipher can contain only letters and numbers; it should not contain symbols.
I am looking for a compression technique, or even better a program, that can both encode and decode under these requirements.
Thank you!
I think it's impossible.
11^20 (672749994932560009201) is far larger than 36^8 (2821109907456).
(11 is the number of possible characters '0'-'A'; 36 is the number of possible characters 'A'-'Z' and '0'-'9'.)
By the pigeonhole principle, at least 238469969 (11^20 / 36^8) of the plain texts will be encoded to the same output.
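A quick Python check of that counting argument (just the arithmetic, nothing cipher-specific):

    plaintexts = 11 ** 20   # 20 characters, each one of '0'-'A' (base 11)
    ciphertexts = 36 ** 8   # 8 characters, each a letter or digit

    print(plaintexts)                 # 672749994932560009201
    print(ciphertexts)                # 2821109907456
    # Some ciphertext must stand in for at least this many plaintexts:
    print(plaintexts // ciphertexts)  # 238469969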
I've been given the key (17, 3233) and I need to encrypt the letter 'Z' using its ASCII number (Z = 90).
90^17 mod 3233 = 1668, and that works. But I want to know if there is a way I can send a single char instead of the integer 1668 and still make it work.
RSA is not a stream cipher. The encrypted result always has the size (in bits) of the modulus, in your case 3233.
The number 3233 requires 12 bits; however, one byte/character provides only 8 bits. Hence you can't pack the RSA-encrypted text into one byte. You need at least 2 bytes.
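A minimal Python sketch of the arithmetic with the questioner's toy key (17, 3233):

    e, n = 17, 3233
    m = ord('Z')           # 90
    c = pow(m, e, n)       # modular exponentiation: 90^17 mod 3233
    print(c)               # 1668

    # A ciphertext can be as large as n - 1, which needs 12 bits,
    # so one 8-bit byte is not enough; two bytes always suffice here.
    print(c.to_bytes(2, byteorder="big"))  # b'\x06\x84'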
Whether you can pack the integer into a char depends on your definition of a char:
char = (printable) ASCII character
A printable ASCII character usually has 7 bits. You can't store 12 bits in 7 bits.
char = byte
A standard character is the equivalent of a byte and stores 8 bits. You can't store 12 bits in 8 bits.
char = Java UTF-16 char
Considering that a Java char is a UTF-16 code unit, you may be able to save the integer as one character; however, storing binary data in a Java UTF-16 char is a very unclean and hackish solution. I strongly recommend not implementing this! Binary data should not be saved in a character (array) without proper conversion and encoding (e.g. Base64 or hexadecimal encoding).
All signed character values range from -128 to 127. All unsigned character values range from 0 to 255. So the only way this could work is if the encrypted numbers fell inside that range.
I want to encrypt and decrypt ASCII messages using an RSA algorithm written in assembly.
I read that, for security and efficiency reasons, encryption is normally not applied character-wise; instead, a number of characters are grouped and encrypted together (e.g. Wikipedia says that 3 chars are grouped).
Let us assume that we want to encrypt the message "aaa", grouping 2 characters at a time.
"aaa" is stored as the hex bytes 61 61 61 00 (three 'a' bytes followed by a null terminator).
If we group two characters and encrypt the resulting halfwords, the result for the 6161 block can in fact be something like 0053. This produces an artificial second '\0' byte, which corrupts the resulting message.
Is there any way to work around this problem?
Using padding or anything similar is unfortunately not an option since I am required to use the same function for encrypting and decrypting.
The output of RSA is a number. Usually this number is encoded as an octet string (byte array). You should not treat the result as a character string; you need to treat it as a byte array with the same length as the modulus in bytes.
Besides possibly containing a zero byte (a null terminator), the bytes may take any value, including non-printable values such as control characters and 0x7F. If you want to treat the result as a printable string, convert it to hex or Base64.
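For illustration, a small Python sketch (the block contents are made up) of converting such a byte array into printable form:

    import base64

    # Pretend this is one RSA-encrypted block; note the embedded zero byte.
    block = bytes([0x00, 0x53])

    # Treating it as a C string would stop at the '\0'. Convert instead:
    print(block.hex())                       # 0053
    print(base64.b64encode(block).decode())  # AFM=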
This is something I have been wondering about while reading programming books and in my computer science class at school, where we learned how to convert decimal values into hexadecimal.
Can someone please tell me what the advantages of using hexadecimal values are, and why we use them in programming?
Thank you.
In many cases (e.g. bit masks) you need to think in binary, but binary is hard to read because of its length. Since hexadecimal values can be translated to and from binary much more easily than decimal values, you can look at hex as a kind of shorthand notation for binary.
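For example, a short Python sketch (the flag names are hypothetical) of why hex reads better than decimal for bit masks:

    # Hypothetical permission flags; each hex digit maps to 4 bits.
    READ  = 0x01  # 0b00000001
    WRITE = 0x02  # 0b00000010
    EXEC  = 0x04  # 0b00000100

    mode = READ | WRITE        # 0x03
    print(bool(mode & WRITE))  # True
    print(f"{mode:#010b}")     # 0b00000011; the decimal form (3) hides the bits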
It certainly depends on what you're doing.
Hexadecimal comes as an extension of base 2, which you are probably familiar with as essential to computing.
Check this out for a good discussion of several applications:
https://softwareengineering.stackexchange.com/questions/170440/why-use-other-number-bases-when-programming/
Each hexadecimal digit corresponds 1:1 to a pattern of 4 bits. With experience, you can map them from memory, e.g. 0x8 = 1000, 0xF = 1111, and correspondingly 0x8F = 10001111.
This is a convenient shorthand wherever the bit patterns matter, e.g. in bitmaps or when working with I/O ports. Visualizing the bit pattern of the decimal value 169 is, by comparison, more difficult.
A byte consists of 8 binary digits and is the smallest piece of data that computers normally work with. All other variables a computer works with are constructed from bytes. For example, a single character can be stored in a single byte, and a 32-bit integer consists of 4 bytes.
As bytes are so fundamental we want a way to write down their value as neatly and efficiently as possible. One option would be to use binary, but then we would need a lot of digits. This takes up a lot of space and can be confusing when many numbers are written in sequence:
200 201 202 == 11001000 11001001 11001010
Using hexadecimal notation, we can write every byte using just two digits:
200 == C8
Also, as 16 is a power of 2, it is easy to convert between hexadecimal and binary representations in your head. This is useful as sometimes we are only interested in a single bit within the byte. As a simple example, if the first digit of a hexadecimal representation is 0, we know that the first four binary digits are 0.
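A short Python sketch printing the same bytes in all three notations:

    for value in (200, 201, 202):
        # One byte: eight binary digits, but only two hex digits.
        print(f"{value}  {value:08b}  {value:02X}")
    # 200  11001000  C8
    # 201  11001001  C9
    # 202  11001010  CA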
Besides the difference in how characters are stored, are there any special characters in any language that UTF-32 can display and UTF-8 cannot?
All UTF encodings can represent the same range of code points (0 to 0x10FFFF). So, the same characters can be encoded by any of them.
Whether they can be "displayed" is an entirely different question. That has nothing to do with the encoding; it is a function of the fonts used. I am not sure that any font has glyphs for every single Unicode code point. But I assume you meant "represented".
They do vary in how many bytes they need to represent a given string. UTF-8 is almost always the shortest for non-Asian languages. For Asian languages, UTF-16 might win (I haven't really benchmarked). I can't imagine a realistic case where UTF-32 would be optimal.
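You can compare the sizes yourself; a quick Python sketch (the sample strings are arbitrary):

    for text in ("hello", "grüß dich", "こんにちは"):
        sizes = [len(text.encode(enc))
                 for enc in ("utf-8", "utf-16-le", "utf-32-le")]
        print(text, sizes)
    # hello      [5, 10, 20]
    # grüß dich  [11, 18, 36]
    # こんにちは [15, 10, 20]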
Is there any character one of them can't represent?
In theory: No.
All of those formats can represent all Unicode code points.
In practice: Depends.
The Windows API uses UCS-2 (which is pretty much UTF-16 restricted to its first plane) and doesn't always handle surrogate pairs correctly. So you might want to use UTF-16 to have your program act as "normal" as possible compared to other programs, instead of truncating high-ranging UTF-32 code points manually.
Anything else?
Yes: Use UTF-8!
It's endianness-free, so it avoids byte-order issues, which are a pain in the rear.
Of course, if you're on Windows, you need to convert to UTF-16 before passing strings to the API.
UTF-8, UTF-16 and UTF-32 can all be used to represent all Unicode code points. So no, there are no special characters that can be represented in UTF-32 and not in UTF-8.
1) UTF-8 is backward compatible with ASCII for plain English characters, which can be an advantage when your content consists only of English characters (see the sketch after this list).
2) UTF-8 is good at saving network bandwidth if you have more ASCII characters than non-English characters.
3) UTF-16 can save storage space if you have mostly non-English characters.
I suggest using UTF-8, based on #1 above.
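A minimal Python sketch of point #1, showing that UTF-8 encodes pure ASCII text byte-for-byte identically:

    text = "plain ASCII"

    utf8 = text.encode("utf-8")
    print(utf8 == text.encode("ascii"))  # True: UTF-8 is a superset of ASCII
    print(len(utf8))                     # 11 bytes, one per character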