Using Bit Mask of 32 bits for a string - bitmask

Is there a simple function that can be used on the server to apply a 32-bit mask to a hexadecimal or ordinary character string?
For example: if I have the string 'Apple', can I bit mask it (32-bit)? What should the approach be?

Related

How do I convert a signed 16-bit hexadecimal to decimal?

I have a very long sequence of data stored as unsigned 16-bit PCM (it's audio). Using Frhed, I can view the file as hex. I'd like to convert the hex into decimal. So far I've exported into a csv file and attempted to convert in R using as.integer(). However, this doesn't seem to take into account that the values are signed.
You can convert hex text strings to decimal integers using strtoi. You will need to specify that the base is 16.
HexData = c("A167", "AE8F", "6A99", "0966", "784D", "A637", "102D", "4521")
strtoi(HexData, base=16L)
[1] 41319 44687 27289 2406 30797 42551 4141 17697
This is assuming unsigned data. Your question mentions both signed and unsigned so I am not quite sure which you have.

What is the difference between char and nchar datatype when installing oracle 11g with unicode char set option?

I install oracle 11g with unicode char set option. And I found that I can insert unicode character into CHAR datatype column. So my question is that:
what is the difference between CHAR and NCHAR datatype when installing oracle 11g with unicode option ?
There are two main differences.
The default length semantic. By default,
CHAR(30) != NCHAR(30), but CHAR(30 CHAR) = NCHAR(30).
The default length semantic (as specified by the NLS_LENGTH_SEMANTICS parameter) applies to CHAR but not to NCHAR, and its default value is BYTE. The length of NCHAR is always in characters. This matters because NCHAR(30) will always hold 30 Unicode characters - as will CHAR(30 CHAR) - but CHAR(30) will by default only hold 30 bytes, which may or may not equal 30 Unicode characters.
AL32UTF8 (the default Unicode database character set) and AL16UTF16 (the default NLS_NCHAR_CHARACTERSET) are not equivalent. Both are variable-length Unicode character sets, but they store characters differently, so their storage requirements differ: the former uses 1, 2, 3, and occasionally 4 bytes per character, while the latter uses 2 and occasionally 4 bytes per character. Your mileage will vary depending on the characters you store.
Additionally, NCHAR support is limited in many client applications and some Oracle components, so if you use AL32UTF8 as the database character set, Oracle's advice is to stick to CHAR and not use NCHAR at all.

What causes XOR encryption to return a "blank"?

What causes certain characters to come out blank when using XOR encryption? Furthermore, how can this be compensated for when decrypting?
For instance:
....
void basic_encrypt(char *to_encrypt) {
    while (*to_encrypt) {
        *to_encrypt = *to_encrypt ^ 20;
        to_encrypt++;
    }
}
will return "nothing" for the character k. Clearly, character decay is problematic for decryption.
I assume this is caused by the bit operator, but I am not very good with binary so I was wondering if anyone could explain.
Is it converting an element, k, in this case, to some spaceless ASCII character? Can this be compensated for by choosing some y < x < z operator where x is the operator?
Lastly, if it hasn't been compensated for, is there a realistic decryption strategy for filling in blanks besides guess and check?
'k' has the ASCII value 107 = 0x6B. 20 is 0x14, so
'k' ^ 20 == 0x7F == 127
if your character set is ASCII compatible. 127 is DEL in ASCII, which is a non-printable character, so it won't be displayed if you print it out.
You will have to know the difference between bytes and characters to understand what is happening. On the one hand you have the C char type, which is simply a representation of a byte, not a character.
In the old days each character was mapped to one byte or octet value in a character encoding table, or code page. Nowadays we have encodings that take more bytes for certain characters, e.g. UTF-8, or even encodings that always take more than one byte such as UTF-16. The last two are unicode encodings, which means that each character has a certain number value and the encoding is used to encode this number into bytes.
Many computers will interpret bytes as ISO/IEC 8859-1 or Latin-1, sometimes extended by Windows-1252. These code pages have holes for control characters, or byte values that are simply not used. How these values are handled then depends on the runtime system. Java by default substitutes a ? character in place of the missing character. Other runtimes will simply drop the value or - of course - execute the control code. Some terminals may use the ESC control code to set the color or to switch to another code page (making a mess of the screen).
This is why ciphertext should be converted to another encoding, such as hexadecimals or Base64. These encodings should make sure that the result is readable text. This takes care of the cipher text. You will have to choose a character set for your plain text too, e.g. simply perform ASCII or UTF-8 encoding before encryption.
Getting a zero value from encryption does not matter, because once you XOR again with the same key you get the original value back. Even in the degenerate case where a plaintext byte happens to equal the key:
value XOR key == 0 (when value == key) [encryption]
(value XOR key) XOR key == value [decryption]
If you're using a zero-terminated string mechanism, then you have two main strategies for preventing 'character degradation':
store the length of the string before encryption, and make sure to decrypt at least that number of characters on decryption
check for a zero character only after decoding each character

Qt - How to convert a number into QChar

I have a qulonglong variable and I need to convert it into QChar.
For example, from number 65 I should get 'A'.
Or if there is a solution to make that directly into QString would be good too.
What you need is the QChar constructor.
QChar c((short) n);
Notice that QChar provides 16 bit characters:
The QChar class provides a 16-bit Unicode character. In Qt, Unicode
characters are 16-bit entities without any markup or structure. This
class represents such an entity. It is lightweight, so it can be used
everywhere. Most compilers treat it like an unsigned short.
qlonglong is a 64-bit integer, so you should be very careful with the conversion to short:
qlonglong i = 65;
QString((char)i);

as3crypto issue

I am using the as3crypto library to get the AES algorithm working in a small project that I am doing. This is how I get the cipher:
var cipher:ICipher = Crypto.getCipher("simple-aes-cbc", key, Crypto.getPad("pkcs5"));
As you can see, I am trying to use AES-128 with CBC and the pkcs5 padding.
If my source data is 128 bytes long, the encrypted data comes out as 160 bytes. Can someone tell me why this happens?
Following is a small table that I compiled from a sample program.
Source string length | Encrypted string length
15 | 32
16 | 48
31 | 48
32 | 64
Is it supposed to be like this, or have I made some mistake?
It is supposed to be like that. You asked for PKCS5 padding, which always adds at least one byte of padding, and the padded input must be a whole number of blocks because AES produces output in 16-byte blocks; with half a block, you could not decrypt any of the input at all. That accounts for the rounding up, and the remaining constant 16 bytes most likely come from the "simple-" mode, which prepends the IV to the ciphertext.