I have a stream (hooked to an Azure blob) which contains strings and integers. The same stream is also consumed by a .NET process.
In C# the writing and reading are done through the type-specific methods of the BinaryWriter and BinaryReader classes, e.g., BinaryWriter.Write("path1;path2") and BinaryReader.ReadString().
In Java, I couldn't find the relevant libraries to achieve the same. Most of the InputStream methods are capable of reading the whole line of the string.
If there are such libraries in Java, please share them with me.
Most of the InputStream methods are capable of reading the whole line of the string.
None of the InputStream methods is capable of doing that.
What you're looking for is DataInputStream and DataOutputStream.
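For completeness, here is a minimal sketch of what reading and writing with those classes looks like. Note that the wire format is Java-specific: writeInt() is big-endian and writeUTF() uses a 2-byte length prefix plus modified UTF-8, which is not the same layout that C#'s BinaryWriter produces (see the next answer).

import java.io.*;

public class DataStreamDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeInt(42);                // 4-byte big-endian int
            out.writeUTF("path1;path2");     // 2-byte length prefix + modified UTF-8
        }

        try (DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            int number = in.readInt();
            String paths = in.readUTF();
            System.out.println(number + " " + paths);
        }
    }
}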
If you are trying to read data generated by BinaryWriter in C#, you are going to have to mess with this at the bit level. The data you actually want is prefixed with an integer giving its length. You can read about how the prefix is generated here:
C# BinaryWriter length prefix - UTF7 encoding
It's worth mentioning that, from what I tested, the length is written least-significant group first. In my case the first two bytes of the file were 0xA0 0x54; converted to binary that is 10100000 01010100. The first byte starts with a 1, so it is not the last byte of the prefix. The second byte starts with a 0, so it is the last (most significant) part of the length. The resulting length is therefore 1010100 (the last byte with its continuation bit removed) followed by 0100000 from the previous byte, which gives 10101000100000, or 10784 bytes. The file I was dealing with was 10786 bytes, so with the two-byte prefix included this is correct.
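Building on that, here is a sketch of how such a prefix could be decoded in Java, assuming the string bytes that follow it were written with BinaryWriter's default UTF-8 encoding. The prefix is a 7-bits-per-byte encoding, least-significant group first, with the high bit set on every byte except the last.

import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BinaryWriterStrings {
    // Reads the 7-bit encoded length prefix that C#'s BinaryWriter.Write(string) emits.
    static int read7BitEncodedInt(DataInputStream in) throws IOException {
        int value = 0;
        int shift = 0;
        int b;
        do {
            b = in.readUnsignedByte();
            value |= (b & 0x7F) << shift;  // low 7 bits carry the payload
            shift += 7;
        } while ((b & 0x80) != 0);         // high bit set means another byte follows
        return value;
    }

    // Reads a length-prefixed string, assuming the default UTF-8 encoding was used.
    static String readDotNetString(DataInputStream in) throws IOException {
        int byteLength = read7BitEncodedInt(in);
        byte[] data = new byte[byteLength];
        in.readFully(data);
        return new String(data, StandardCharsets.UTF_8);
    }
}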
I am programming an ESP8266 Thing dev board using Arduino.
I have a value stored in byte *payload. I want to convert that value and store it in an int variable. I tried different methods but none of them is working fine. Can anyone suggest a good method? Thank you!
How you do this depends entirely upon how you represented the value when you transmitted it via MQTT.
If you transmitted it in binary - for instance, you published the integer as a series of bytes - then you also need to know the byte order and the number of bytes. Most likely it's least-significant-byte first (so if the integer in hex were 0x1234 it would be transmitted as two bytes - 0x34 followed by 0x12) and 32 bits.
If you're transmitting binary between two identical computers running similar software then you'll probably be fine (as long as that never changes), but if the computers differ or the software differs, you're dealing with representations of your integer that will be dependent on the platform you're using. Even using different languages on the two ends might matter - Python might represent an integer one way and C another, even if they're running on identical processors.
So if you transmit in binary you should really choose a machine-independent representation.
If you did transmit in binary and made no attempt at a machine-independent representation, the code would be something like:
byte *payload;        // raw bytes received in the MQTT callback
int payload_length;   // number of bytes in the payload
int result;

if (payload_length < sizeof(int)) {
    // not enough bytes for a full int: handle the error (e.g. ignore the message)
} else {
    result = *(int *)payload;   // reinterpret the first bytes as an int (machine-dependent)
}
That checks to make sure there are enough bytes to represent a binary integer, and then uses a cast to retrieve the integer from the payload.
If you transmitted in binary in a machine-independent format then you'd need to do whatever transformation is necessary for the receiving architecture.
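For example, here is a sketch of decoding a 32-bit value that was transmitted least-significant byte first, independent of the receiving board's native byte order (the function name is just illustrative):

#include <stdint.h>

// Reassemble a 32-bit integer sent least-significant byte first,
// regardless of the native byte order of the receiver.
int32_t decode_le32(const uint8_t *payload) {   // on Arduino, byte* works the same way
    return (int32_t)((uint32_t)payload[0]
                   | ((uint32_t)payload[1] << 8)
                   | ((uint32_t)payload[2] << 16)
                   | ((uint32_t)payload[3] << 24));
}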
I don't really recommend transmitting in binary unless you know what you're doing and have good reasons for it. Most applications today will be fine transmitting as text - which you could say is the machine-independent representation.
The most likely alternative to transmitting in binary is text, which can be a machine-independent format. If you're transmitting an integer as text, your code would look something like this:
byte *payload;                            // raw bytes received in the MQTT callback
int payload_length;                       // number of bytes in the payload
char payload_string[payload_length + 1];  // one extra byte for the terminating '\0'
int result;

memcpy(payload_string, payload, payload_length);  // copy the payload...
payload_string[payload_length] = '\0';            // ...and terminate it as a C string
result = atoi(payload_string);                    // parse the decimal text into an int
This code uses a temporary buffer to copy the payload into. We need to treat the payload like a C string, and C strings have an extra byte on the end - '\0' - which indicates end-of-string. There's no space for this in the payload and an end-of-string indicator may or may not have been sent as part of the payload, so we'll guarantee there's one by copying the payload and then adding one.
After that it's simple to call atoi() to convert the string to an integer.
Don't know if you found an answer yet, but I had the exact same issue and eventually came up with this:
payload[length] = '\0'; // Add a NULL to the end of the char* to make it a string.
int aNumber = atoi((char *)payload);
Pretty simple in the end!
We are developing an application that has to work with data that is encrypted by LoraWan (https://www.lora-alliance.org).
We have already found the documentation on how they encrypt their data and have been reading through it for the past few days (https://www.lora-alliance.org/sites/default/files/2018-04/lorawantm_specification_-v1.1.pdf), but we still can't solve our problem.
We have to use 128-bit AES ECB decryption with zero padding to decrypt the messages, but it's not working: the encrypted messages we are receiving are not long enough for AES-128, so the algorithm throws a "Data is not a complete block" exception on the last line.
An example key we receive looks like this: D6740C0B8417FF1295D878B130784BC5 (not a real key). It is 32 characters long, so 32 bytes as text, but if we treat it as hexadecimal it becomes 16 bytes, which is what is needed for AES-128. This is the code we use to convert the hex from a string:
public static string HextoString(string InputText)
{
    byte[] hex = Enumerable.Range(0, InputText.Length)
                           .Where(x => x % 2 == 0)
                           .Select(x => Convert.ToByte(InputText.Substring(x, 2), 16))
                           .ToArray();
    return System.Text.Encoding.ASCII.GetString(hex);
}
(A small thing to note about the above code: we are not sure which Encoding to use, as we could not find it in the Lora documentation and they have not told us, but depending on this small setting we could be messing up our decryption, though we have tried all the possible combinations: ASCII, UTF8, UTF7, etc.)
An example message we receive is: d3 73 4c, which we assume is also hexadecimal. That is only 6 hex characters, i.e. 3 bytes once decoded, compared to the minimum of 16 bytes we would need to match the key length.
This is the code for Aes 128 decrypt we are using:
private static string Aes128Decrypt(string cipherText, string key)
{
    string decrypted = null;
    var cipherPlainTextBytes = HexStringToByteArray(cipherText);
    //var cipherPlainTextBytes = ForcedZeroPadding(HexStringToByteArray(cipherText));
    var keyBytes = HexStringToByteArray(key);
    using (var aes = new AesCryptoServiceProvider())
    {
        aes.KeySize = 128;
        aes.Key = keyBytes;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.Zeros;
        ICryptoTransform decryptor = aes.CreateDecryptor(aes.Key, aes.IV);
        using (MemoryStream ms = new MemoryStream(cipherPlainTextBytes, 0, cipherPlainTextBytes.Length))
        {
            using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
            {
                using (StreamReader sr = new StreamReader(cs))
                {
                    decrypted = sr.ReadToEnd();
                }
            }
        }
    }
    return decrypted;
}
So obviously this is going to throw "Data is an incomplete block" at sr.ReadToEnd().
As you can see from the commented-out line in the example, we have also tried to "pad" the text to the correct size with a zero-filled byte array of the right length (16 minus the ciphertext length), in which case the algorithm runs fine, but it returns complete gibberish and not the original text.
We have already tried all of the modes of operation and messed around with the padding modes as well. They are not providing us with anything but a ciphertext and a key for that text. No initialization vector either, so we are assuming we are supposed to generate that every time (but for ECB it isn't even needed, IIRC).
What's more, they are able to encrypt and decrypt their messages just fine. What is most puzzling is that I have been googling this for days now and I cannot find a SINGLE example where the CIPHERTEXT is shorter than the key during decryption.
Obviously I have found examples where the message being encrypted is shorter than what is needed, but that is what padding is for on the ENCRYPTION side (right?), so that when you receive the padded message you can tell the algorithm which padding mode was used to bring it to the correct length, and it can then separate the padding from the message. But in all of those cases the received message being decrypted is of the correct length.
So the question is: what are we doing wrong? Is there some way to decrypt ciphertexts that are shorter than the key? Or are they messing up somewhere by producing ciphertexts that are too short?
Thanks for any help.
In AES-ECB, the only valid ciphertext shorter than 16 bytes is the empty one. That 16-byte limit is the block (not key) size of AES, which happens to match the key size for AES-128.
Therefore, the question's
An example message we receive is: d3 73 4c
does not show an ECB-encrypted message (since a comment tells that it comes from a JSON, it can't be bytes that happen to display as hex). And it is way too short to be the FRMPayload of a Join-Accept (per this comment), since the spec says of the latter:
1625 The message is either 16 or 32 bytes long.
Could it be that whatever that JSON message contains is not a full FRMPayload, but a fragment of a packet, encoded as hexadecimal pairs with space separators? As long as it has not been figured out how to build a FRMPayload, there is no point in deciphering it.
Update: If that mystery message is always 3 bytes, and if it is always the same for a given key (or available a single time per key), then per Maarten Bodewes's comment it might be a Key Check Value. The KCV is often the first 3 bytes of the encryption of the all-zero value with the key per the raw block cipher (equivalently: per ECB). Herbert Hanewinkel's JavaScript AES can work fully offline (which is necessary to not expose the key) and can be used to manually validate a hypothesis. It tells that for the 16-byte key given in the question, a KCV would be cd15e1 (or c076fc per the variant in the next section).
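A sketch of how that check could be reproduced in C# (the helper name is illustrative): encrypt one all-zero 16-byte block with the raw cipher in ECB mode, no padding, and keep the first 3 bytes.

// Illustrative helper: compute a KCV as described above.
static byte[] ComputeKcv(byte[] keyBytes)
{
    using (var aes = new AesCryptoServiceProvider())
    {
        aes.Key = keyBytes;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.None;
        byte[] block = aes.CreateEncryptor().TransformFinalBlock(new byte[16], 0, 16);
        return new byte[] { block[0], block[1], block[2] };  // first 3 bytes form the KCV
    }
}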
Also: CreateDecryptor is used to craft a gizmo in charge of the ECB decryption. That's likely incorrect in the context of decrypting a LoraWan payload, which requires ECB encryption for the decryption of some fields:
1626 Note: AES decrypt operation in ECB mode is used to encrypt the join-accept message so that the end-device can use an AES encrypt operation to decrypt the message. This way an end device only has to implement AES encrypt but not AES decrypt.
In the context of decrypting LoraWan packets, you want to communicate with the AES engine using byte arrays, not strings. Strings have an encoding, whereas LoraWan ciphertext and the corresponding plaintext do not. Others seem to have managed to coerce the nice .NET do-it-all crypto API into getting this low-level job done.
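A minimal sketch of that byte-array-only approach, assuming a full 16- or 32-byte join-accept FRMPayload is available (the method name is illustrative; per the spec note above, recovery uses an AES encrypt operation):

private static byte[] RecoverJoinAccept(byte[] payloadBytes, byte[] keyBytes)
{
    using (var aes = new AesCryptoServiceProvider())
    {
        aes.KeySize = 128;
        aes.Key = keyBytes;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.None;  // a real FRMPayload is already a whole number of blocks
        using (ICryptoTransform transform = aes.CreateEncryptor())
        {
            return transform.TransformFinalBlock(payloadBytes, 0, payloadBytes.Length);
        }
    }
}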
In the HextoString code, I vaguely get that the intention, and perhaps the outcome, is that hex becomes the original hex input as a byte array (fully rid of hexadecimal and other encoding sin; in which case the variable hex should be renamed to something along the lines of pure_bytes). But then I'm at a loss about System.Text.Encoding.ASCII.GetString(hex). I'd be surprised if it just created a byte string from a byte array, or turned the key back to hexadecimal for later feeding to HexStringToByteArray in Aes128Decrypt. Plus this makes me fear that any byte in [0x80..0xFF] might turn into 0x3F, which is not nice for the key, the ciphertext, and the corresponding LoraWan payload. These have no character encoding when de-hexified.
My conclusion is that if HexStringToByteArray does what its name suggests, and given the current interface of Aes128Decrypt, HextoString should simply remove whitespace (or is unneeded if HexStringToByteArray removes whitespace on the fly). But my recommendation is to change the interface to use byte arrays, not strings (see previous section).
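For completeness, a sketch of what a whitespace-tolerant hex-to-bytes helper could look like (the name HexToBytes is illustrative; the question's HexStringToByteArray is not shown), so inputs such as "d3 73 4c" and the 32-character key both become raw bytes with no text encoding involved:

// Requires: using System; using System.Linq;
static byte[] HexToBytes(string hex)
{
    string clean = new string(hex.Where(c => !char.IsWhiteSpace(c)).ToArray());
    byte[] bytes = new byte[clean.Length / 2];
    for (int i = 0; i < bytes.Length; i++)
        bytes[i] = Convert.ToByte(clean.Substring(i * 2, 2), 16);
    return bytes;
}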
As an aside: creating an ICryptoTransform object from its key is supposed to be performed once for multiple uses of the object.
I am developing an HTTP server with Netty. On some occasions, the server must answer with a 1x1 transparent pixel. So I hard-coded a transparent GIF pixel in base64 and returned it with the following code:
String pixel_string = new String(Base64.decodeBase64("R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw=="));
HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
response.setContent(ChannelBuffers.copiedBuffer(pixel_string, CharsetUtil.UTF_8));
EDIT: I also set the content type:
response.setHeader(HttpHeaders.Names.CONTENT_TYPE,
"image/gif");
In Chrome, everything is fine. However, Firefox tells me that it cannot display the pixel (which is pretty bad for my app), as the pixel data is invalid.
After much investigation, I finally figured out a fix: changing the charset to ISO-8859-1.
response.setContent(ChannelBuffers.copiedBuffer(
responseBuilder.pixel_string, CharsetUtil.ISO_8859_1));
I don't understand why it works, which makes me think that I may run into trouble in some cases. I tried to change the Firefox preferences (to have UTF-8 as the default), but it doesn't change much.
Why does Firefox accept the ISO-8859-1 encoding and not UTF-8? Can I change that? Does someone have a clue about the origin of the issue and how to make sure it will work whatever the user's settings are?
Thanks
It's not Firefox that's accepting the encoding or not. It's your server.
When you do your base64 decode you produce a string that contains some characters... but what you really produced was bytes that you're then thinking of as characters somehow. Since a Java String is a container that holds a UTF-16 string, in practice what you're doing is taking each byte, treating it as a 16-bit integer and constructing the UTF-16 "string" made up of those code units.
But when you want to put all this on the network, you have to convert your string to bytes, and the argument to copiedBuffer says how to do that. If converting to UTF-8, any character that came from a byte that had the high bit set will end up getting encoded as a two-byte UTF-8 sequence. On the other hand, if converting to ISO-8859-1, the conversion just drops the high byte of each UTF-16 code unit (which in your case is always zero anyway).
So the conversion to ISO-8859-1 produces the actual byte array you got out of base64-decoding, while the conversion to UTF-8 produces.... something else which may or may not actually make any sense depending on the exact byte values.
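A small self-contained illustration of that difference (the byte values below are arbitrary, not the actual GIF data):

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetRoundTrip {
    public static void main(String[] args) {
        byte[] original = { 0x47, 0x49, 0x46, (byte) 0x89 };  // last byte has the high bit set
        // Build a string the way described above: one char per byte value.
        String asString = new String(original, StandardCharsets.ISO_8859_1);

        byte[] iso = asString.getBytes(StandardCharsets.ISO_8859_1);
        byte[] utf8 = asString.getBytes(StandardCharsets.UTF_8);

        System.out.println(Arrays.equals(original, iso));   // true: ISO-8859-1 round-trips every byte
        System.out.println(Arrays.equals(original, utf8));  // false: 0x89 becomes two bytes in UTF-8
    }
}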
The copiedBuffer overload you call is not appropriate for the type of data (binary) you are using. According to the Javadoc of the Netty API, the documentation for the one you are calling says:
Creates a new big-endian buffer whose content is the specified string
encoded in the specified charset.
Which means that your binary data is being "converted" to UTF-8 (which is meaningless). If you try to save the generated file and look at it with a hex editor, you'll probably see that it is corrupted.
Try with something like this (untested code):
static byte[] pixel_data = Base64.decodeBase64("R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==");
HttpResponse response = ...
response.setHeader(HttpHeaders.Names.CONTENT_TYPE, "image/gif");
response.setContent(ChannelBuffers.copiedBuffer(pixel_data));
ASP.NET 4, C#, Oracle 11g
Hi, I'm trying to save the content of an HTML file to an Oracle CLOB column. The HTML file is uploaded to the server through an asp:Upload button. It works fine most of the time; the problem is that sometimes the stream in the button's FileContent property has an odd number of bytes, and the Write method of the CLOB column throws an exception stating that it requires an even number of bytes.
How can I solve this problem? Is there anything I can do to make my HTML files have an even number of bytes? The HTML files are encoded as UTF-8, and changing the encoding does modify the number of bytes, but the count can still come out odd.
Thanks in advance
Edit: For now, I'm just increasing the size of the buffer by 1 when the stream length is an odd number, then writing the stream, at its own length, into the buffer, leaving the last byte of the buffer at its default value. Please advise on any potential problems with doing it this way:
var buffer = new byte[(stream.Length % 2 > 0 ? stream.Length + 1 : stream.Length)];
stream.Read(buffer, 0, (int)stream.Length);
clob.Write(buffer, 0, buffer.Length);
Thanks again
Edit: the previous solution didn't work. The new approach consists of converting the stream to a string, adding a space at the end of the string, and then converting it back to a stream. It's working fine so far. Sorry I can't post the code... it's just that I couldn't figure out how to deal with the 4-spaces-for-code policy on Stack Overflow.
I'm assuming that you are using the System.Data.OracleClient classes (as opposed to Oracle's ODP.NET).
The OracleLob class has no method for writing a string, which I would expect for handling CLOBs. Instead, the documentation says:
The .NET Framework Data Provider for Oracle handles all CLOB and NCLOB
data as Unicode. Therefore, when accessing CLOB and NCLOB data types,
you are always dealing with the number of bytes, where each character
is 2 bytes. For example, if a string of text containing three
characters is saved as an NCLOB on an Oracle server where the
character set is 4 bytes per character, and you perform a Write
operation, you specify the length of the string as 6 bytes, although
it is stored as 12 bytes on the server.
In this context, Unicode means the UTF-16 encoding, which requires 2 bytes for most characters and 4 bytes for characters in the supplementary planes.
So if you have a string, you have to convert it to UTF-16 first:
byte[] utf16Bytes = Encoding.Unicode.GetBytes(str);
clob.Write(utf16Bytes, 0, utf16Bytes.Length);
Or you can use a StreamWriter to achieve the same:
OracleLob clob = ...
using (StreamWriter writer = new StreamWriter(clob, Encoding.Unicode))
{
    writer.Write(str);
}
If your data is in a UTF-8 encoded byte array, then you have to convert it to UTF-16:
byte[] utf8Data = ...
byte[] utf16Data = Encoding.Convert(Encoding.UTF8, Encoding.Unicode, utf8Data);
clob.Write(utf16Data, 0, utf16Data.Length);
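Tying this back to the upload scenario in the question, something along these lines should work (untested sketch; fileUpload and clob stand in for the upload control and the OracleLob from the question):

// Read the uploaded HTML as UTF-8 text, then hand it to the CLOB as UTF-16 bytes.
using (var reader = new StreamReader(fileUpload.FileContent, Encoding.UTF8))
{
    string html = reader.ReadToEnd();
    byte[] utf16Data = Encoding.Unicode.GetBytes(html);
    clob.Write(utf16Data, 0, utf16Data.Length);  // always an even number of bytes
}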
I am trying to create a file format for myself, so I was forming the header for my file. To write a known-length string into a ByteArray, which method should I use, writeUTF() or writeUTFBytes()?
The Flex 3 language reference tells me that writeUTF() prepends the length of the string and can throw a RangeError, whereas writeUTFBytes() does not.
Any suggestions would be appreciated.
The only difference between the two is that writeUTFBytes() doesn't prepend the message with the length of the string (the RangeError is because 65535 is the highest number you can store in 16 bits).
Where you'd use one over the other depends on what you're doing. For example, I use writeUTFBytes() when copying an XML object over to be compressed. In this case, I don't care about the length of the string, and prepending it would just add something extra to the code.
writeUTF() can be useful if you're writing a streaming/network server: because you prefix the message length to the message, you know how many bytes to read on the other end before the message is complete. For example, I have 200 bytes' worth of data. I read the length (a 16-bit integer), which tells me the message is 100 bytes. I read in 100 bytes and I know it's a complete message; everything after it is another message. If the message length said the message was 300 bytes, then I'd know I'd have to wait a bit before I have the full message.
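A small illustrative sketch of the difference (the strings here are arbitrary):

import flash.utils.ByteArray;

var buffer:ByteArray = new ByteArray();

// Length-prefixed: a 16-bit length is written before the UTF-8 bytes.
buffer.writeUTF("hello");

// Raw: just the UTF-8 bytes, so the reader must already know the length.
buffer.writeUTFBytes("world");

buffer.position = 0;
var first:String  = buffer.readUTF();        // reads the prefix, then the 5 bytes
var second:String = buffer.readUTFBytes(5);  // the length must be supplied by the caller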
I think I have found the solution myself. It came to me when I was writing the code to read the data back. Of the corresponding functions for reading from a ByteArray, readUTF() and readUTFBytes(length:uint), the latter requires the length to be passed to it.
So if you know the length of the string you are going to write, you can use writeUTFBytes() and later call readUTFBytes() with that size. Otherwise you can use writeUTF(), letting AS3 write the size of the data, which can then be read back with readUTF() without needing to know the length of the string.
Hope this is useful to someone else as well.