Convert IPv6 to integer and open in browser

It's common practice to convert IP addresses, IPv4 in particular, to integer values (95.191.162.12 becomes 1606394380). With IPv4 I can open the integer value in my browser (by typing http://1606394380, or the same with any other scheme), and it works perfectly.
I tried the same with ipv6: http://[2a00:1450:4011:804::1001] works, but its integer representation 55827987829239171056733755306672132097 does not (I tried opening http://[55827987829239171056733755306672132097], http://55827987829239171056733755306672132097, [55827987829239171056733755306672132097]:80, etc.).
Is there any way to address an IPv6 host by its integer value?

RFC 3493, Section 6.3, suggests that the pure-integer form of an IP address, which is valid for inet_addr(), is no longer accepted by the equivalent IPv6-capable function inet_pton(). Quote:
The inet_pton() function does not accept other formats (such as the
octal numbers, hexadecimal numbers, and fewer than four numbers that
inet_addr() accepts).
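If you already have the integer, one practical workaround is to convert it back to colon-hex notation yourself before building the URL. A minimal Python sketch (using the address from the question):

import ipaddress

n = 55827987829239171056733755306672132097
addr = ipaddress.ip_address(n)       # -> IPv6Address('2a00:1450:4011:804::1001')
url = "http://[{}]/".format(addr)    # IPv6 literals must be bracketed in URLs
print(url)                           # http://[2a00:1450:4011:804::1001]/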

Related

Why does one AL32UTF8 character not display the I-acute, yet another one displays the tilde-N?

My Oracle 11g is configured with AL32UTF8
NLS_CHARACTERSET AL32UTF8
Why does the tilde-N display as a tilde-N in the second record, but the acute-I and K
do not display as acute-I and K in the first record?
Additional Information:
The hex code for the acute-I is CD.
When I take the HEX code from the dump and convert it using UNISTR(), the character displays with the accent.
select unistr('\0052\0045\0059\004B\004A\0041\0056\00CD\004B') as hex_to_unicode
from dual;
This is probably more of an issue with whatever client you are using to display the results than with your database. What are you using?
You can check whether the database results are correct using the DUMP function. If the value in your table has the correct byte sequence for your database character set, you're good.
Edit:
OK, I'm pretty sure your data is bad. You're talking about
LATIN CAPITAL LETTER I WITH ACUTE, which is Unicode code point U+00CD. That is not the same as byte 0xCD. You're using database character set AL32UTF8, which uses UTF-8 encoding. The correct UTF-8 encoding for the U+00CD character is the two-byte sequence 0xC38D.
What you have is UTF-8 byte sequence 0xCD4B, which I'm pretty sure is invalid.
The Oracle UNISTR function takes the code point in UCS-2 encoding, which is roughly the same as UTF-16, not UTF-8.
Demonstration here: http://sqlfiddle.com/#!4/7e9d1f/1
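If you want to double-check the byte sequences outside Oracle, a short Python sketch (REYKJAVÍK being the string spelled out by the UNISTR call above) shows the same mismatch:

# U+00CD (Í) is the two-byte UTF-8 sequence C3 8D; the single byte CD
# followed by 4B is not valid UTF-8 at all.
text = "REYKJAV\u00cdK"
print(text.encode("utf-8").hex().upper())    # 5245594B4A4156C38D4B

try:
    bytes.fromhex("CD4B").decode("utf-8")
except UnicodeDecodeError as err:
    print(err)    # invalid continuation byte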

How to represent acute accents in ASCII?

I'm having an encoding problem related to cookies on one of my websites.
A user is inputting Usuário, which has an acute accent, and that's being put in a cookie. The raw hex for the cookie response (for the Usuário string) is:
55 73 75 C3 A1 72 69 6F
When I see it in the browser, it looks like UsuÃ¡rio, which is really messy. I need to fix this up.
Then I went to this website, http://www.rapidtables.com/convert/number/hex-to-ascii.htm, and converted the hex value to see what it would look like, and I got the same garbled output.
Right. This means the hex code is wrong. Then I tried to convert Usuário to ASCII to see how it should be, using this website: http://www.asciitohex.com/, and the result was the very same byte sequence, 55 73 75 C3 A1 72 69 6F.
To my surprise, the hex that should be correct is exactly the one that is showing up messy. Why?
And how do I represent Usuário in ASCII so I can put it in a cookie? Should I manually encode it?
PS: I'm using ASP.NET, just in case it matters.
As of 2015 the standard on the web for storing character data is UTF-8, not ASCII. ASCII only contains 128 characters and does not include any kind of accented character. To add accented characters to those 128 there were many legacy solutions: code pages. Each of them added 128 additional characters to the default ASCII list, allowing 256 different characters to be represented.
The problem was that this didn't properly solve the issue: ASCII-based code pages were more or less incompatible with each other (except for the first 128 characters), and there was usually no way of knowing programmatically which code page was in use.
One of the solutions was UTF-8, which is a way to encode the Unicode character set (containing most of the characters used around the world, and more) while trying to remain compatible with ASCII. The first 128 characters are the same in both cases, but beyond that UTF-8 characters become multi-byte: one character is encoded as a sequence of bytes (usually 2-3, depending on which character needs to be encoded).
The problem arises if you are using some kind of ASCII-based single-byte code page (like ISO-8859-1), which encodes every supported character as a single byte, but your input is actually UTF-8, which encodes accented characters as multiple bytes (you can see this in your hex example: á is encoded as C3 A1, two bytes). If you read those two bytes under an ASCII-based code page that uses a single byte for every character (in Western Europe this is usually ISO-8859-1), each of the two bytes will be rendered as its own, different character.
On the web the default encoding is UTF-8, so your clients will usually send their requests as UTF-8. ASP.NET is Unicode aware, so it can handle these requests. However, somewhere in your code the UTF-8 is accidentally interpreted as ISO-8859-1 and then re-encoded as UTF-8. This can happen at various layers; since you are seeing the issue in a cookie, it probably happens at the cookie layer, which is sometimes problematic (here is how it worked in 2009). You should also double-check that your application uses UTF-8 everywhere else (views, database, etc.) if you want to support accented characters properly.
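To make the mix-up concrete, here is a small Python sketch that reproduces both the cookie bytes and the garbled text from the question (assuming the stray decode happens as ISO-8859-1):

# "Usuário" encoded as UTF-8 yields the cookie bytes from the question;
# decoding those same bytes as ISO-8859-1 produces the mojibake.
text = "Usu\u00e1rio"
raw = text.encode("utf-8")
print(" ".join("{:02X}".format(b) for b in raw))   # 55 73 75 C3 A1 72 69 6F
print(raw.decode("iso-8859-1"))                    # UsuÃ¡rio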

What is this "ÿþA"?

When I read CSV files into R, the resulting data frame has very different dimensions from what I see when I open the file in Excel or Notepad, and the column heading is labeled "ÿþA". What does this mean?
thanks,
The file you are reading is using a UTF-16 or UTF-32 encoding (with a BOM), and the R read.csv function has not been told about it.
As Karsten suggests, you should use the fileEncoding parameter to specify the correct encoding, which I suspect should be "UTF-16LE".
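As a side note, the strange column name itself gives it away: ÿ and þ are the bytes FF FE (the UTF-16LE byte order mark) rendered as Latin-1, followed by the first real character of the header. A small Python sketch (the header row "A,B,C" is just an assumed example):

import codecs

# A tiny UTF-16LE file with a BOM, as Windows tools typically write it.
raw = codecs.BOM_UTF16_LE + "A,B,C\n1,2,3\n".encode("utf-16-le")
print(raw[:4].hex())              # fffe4100 -> BOM, then 'A' encoded as 41 00
print(raw[:3].decode("latin-1"))  # ÿþA      -> what shows up as the column name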
Here is what the R documentation states about encoding:
Encoding
The encoding of the input/output stream of a connection can be specified by name in the same way as it would be given to iconv: see that help page for how to find out what encoding names are recognized on your platform. Additionally, "" and "native.enc" both mean the ‘native’ encoding, that is the internal encoding of the current locale and hence no translation is done.
Re-encoding only works for connections in text mode: reading from a connection with re-encoding specified in binary mode will read the stream of bytes, but mixing text and binary mode reads (e.g. mixing calls to readLines and readChar) is likely to lead to incorrect results.
The encodings "UCS-2LE" and "UTF-16LE" are treated specially, as they are appropriate values for Windows ‘Unicode’ text files. If the first two bytes are the Byte Order Mark 0xFFFE then these are removed as some implementations of iconv do not accept BOMs. Note that whereas most implementations will handle BOMs using encoding "UCS-2" and choose the appropriate byte order, some (including earlier versions of glibc) will not. There is a subtle distinction between "UTF-16" and "UCS-2" (see http://en.wikipedia.org/wiki/UTF-16/UCS-2: the use of surrogate pairs is very rare so "UCS-2LE" is an appropriate first choice.
As from R 3.0.0 the encoding "UTF-8-BOM" is accepted for reading and will remove a Byte Order Mark if present (which it often is for files and webpages generated by Microsoft applications). If it is required (it is not recommended) when writing, it should be written explicitly, e.g. by writeChar("\ufeff", con, eos = NULL) or writeBin(as.raw(c(0xef, 0xbb, 0xbf)), binary_con)
Requesting a conversion that is not supported is an error, reported when the connection is opened. Exactly what happens when the requested translation cannot be done for invalid input is in general undocumented. On output the result is likely to be that up to the error, with a warning. On input, it will most likely be all or some of the input up to the error.
It may be possible to deduce the current native encoding from Sys.getlocale("LC_CTYPE"), but not all OSes record it.
And here is what Wiki states on the BOM:
Byte order mark
The byte order mark (BOM) is a Unicode character used to signal the endianness (byte order) of a text file or stream. Its code point is U+FEFF. BOM use is optional, and, if used, should appear at the start of the text stream. Beyond its specific use as a byte-order indicator, the BOM character may also indicate which of the several Unicode representations the text is encoded in.
Because Unicode can be encoded as 16-bit or 32-bit integers, a computer receiving these encodings from arbitrary sources needs to know which byte order the integers are encoded in. The BOM gives the producer of the text a way to describe the text stream's endianness to the consumer of the text without requiring some contract or metadata outside of the text stream itself. Once the receiving computer has consumed the text stream, it presumably processes the characters in its own native byte order and no longer needs the BOM. Hence the need for a BOM arises in the context of text interchange, rather than in normal text processing within a closed environment.

What causes XOR encryption to return a "blank"?

What causes certain characters to become blank when using XOR encryption? Furthermore, how can this be compensated for when decrypting?
For instance:
....
void basic_encrypt(char *to_encrypt) {
    /* XOR each byte of the string with the key 20 (0x14), in place. */
    while (*to_encrypt) {
        *to_encrypt = *to_encrypt ^ 20;
        to_encrypt++;
    }
}
will return "nothing" for the character k. Clearly, character decay is problematic for decryption.
I assume this is caused by the bit operator, but I am not very good with binary so I was wondering if anyone could explain.
Is it converting an element, k in this case, to some non-printing ASCII character? Can this be compensated for by choosing a key x from some range y < x < z?
Lastly, if it hasn't been compensated for, is there a realistic decryption strategy for filling in blanks besides guess and check?
'k' has the ASCII value 107 = 0x6B. 20 is 0x14, so
'k' ^ 20 == 0x7F == 127
if your character set is ASCII compatible. 127 is DEL in ASCII, which is a non-printable control character, so it won't be displayed if you print it out.
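You can check this quickly; a Python one-liner doing the same arithmetic:

# 0x6B XOR 0x14 == 0x7F, the non-printable DEL control character.
print(hex(ord('k') ^ 20))                 # 0x7f
print(chr(ord('k') ^ 20).isprintable())   # False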
You will have to know the difference between bytes and characters to understand what is happening. On the one hand you have the C char type, which is simply a representation of a byte, not a character.
In the old days each character was mapped to one byte or octet value in a character encoding table, or code page. Nowadays we have encodings that take more bytes for certain characters, e.g. UTF-8, or even encodings that always take more than one byte, such as UTF-16. The last two are Unicode encodings, which means that each character has a certain numeric value and the encoding is used to encode this number into bytes.
Many computers will interpret bytes as ISO/IEC 8859-1 or Latin-1, sometimes extended to Windows-1252. These code pages have holes for control characters, or byte values that are simply not used. Now it depends on the runtime system how these values are handled. Java by default substitutes a ? character in place of the missing character. Other runtimes will simply drop the value or - of course - execute the control code. Some terminals may use the ESC control code to set the color or to switch to another code page (making a mess of the screen).
This is why ciphertext should be converted to another encoding, such as hexadecimal or Base64. These encodings make sure that the result is readable text. That takes care of the ciphertext. You will have to choose a character set for your plaintext too, e.g. simply perform ASCII or UTF-8 encoding before encryption.
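A sketch of that approach (a toy single-byte XOR in Python, just to illustrate encoding the ciphertext before display or transport):

import base64

key = 20
plaintext = "kayak".encode("ascii")
ciphertext = bytes(b ^ key for b in plaintext)

# Display/transport the ciphertext as text instead of raw bytes.
print(ciphertext.hex())                        # 7f756d757f
print(base64.b64encode(ciphertext).decode())   # f3VtdX8=

# XOR-ing again with the same key restores the plaintext.
restored = bytes(b ^ key for b in ciphertext)
print(restored.decode("ascii"))                # kayak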
Getting a zero value from encryption does not matter, because once you re-XOR with the same key you get the original value back. The zero appears when the plaintext byte happens to equal the key:
value XOR value == 0                    [encryption]
(value XOR value) XOR value == value    [decryption]
If you're using a zero-terminated string mechanism, then you have two main strategies for preventing this 'character degradation':
- store the length of the string before encryption and make sure to decrypt at least that number of characters on decryption
- check for a zero character after decoding each character

CR/LF generated by PBEWithMD5AndDES encryption?

Can the encrypted string produced by PBEWithMD5AndDES and then Base64 encoded contain CR and/or LF characters?
Base64 itself consists only of printable characters. However, when it is used as a transfer encoding for email (MIME), the output is split into lines separated by CR-LF.
PBEWithMD5AndDES returns binary data. PBE encryption is defined within the PKCS#5 standard, and this standard does not have a dedicated Base64 encoding scheme. So the question becomes: for which system do you need to Base64 encode the binary data? Wikipedia has a nice section within the Base64 article that explains the various forms.
You may encounter a PBE implementation that returns Base64, without mentioning which of the above schemes is used. In that case you need to somehow figure out which scheme it is. I would suggest searching for it, asking the community, looking at the source or, if all else fails, creating a set of tests on the output.
Fortunately, you are pretty safe if you decode Base64 while ignoring all whitespace. Note that some implementations disregard padding, so add it before decoding, if applicable.
If you perform the Base64 encoding yourself, I would strongly suggest not outputting any whitespace, using only the default alphabet (with the '+' and '/' signs) and always applying padding when required. After that you can always split the result and replace any non-standard characters (especially the '+' and '/' signs, of course), or remove the padding.
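The difference between the plain and the MIME-style flavours is easy to demonstrate; for example, in Python (64 zero bytes used purely as dummy ciphertext):

import base64

data = bytes(64)                     # dummy binary "ciphertext"

plain = base64.b64encode(data)       # one line, no whitespace
mime = base64.encodebytes(data)      # wrapped at 76 characters with newlines

print(b"\n" in plain)   # False
print(b"\n" in mime)    # True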
I was using Java with the Android SDK. I found that the command:
String s = Base64.encodeToString(enc, Base64.DEFAULT);
did line wrapping. It put LF chars into the output string.
I found that:
String s = Base64.encodeToString(enc, Base64.NO_WRAP);
did not put the LF characters into the output string.
