Translating messages from device to LIS via ASCII using checksums - hex

I'm looking for some help please.
I'm trying to communicate with a device over TCP/IP using ASCII. The protocol includes a checksum that consists of two ASCII characters representing a two-digit hexadecimal number in the range 00 through FF.
I know that the hexadecimal number is generated by performing a modulo-256 summation of all previous characters in the frame (that is, over <STX> … <ETX>, inclusive) and then expressing the resulting 8-bit unsigned integer in hexadecimal format.
For example, I know that this checksum is 84, but how is it calculated? <STX>ID_DATA<FS><RS>aMOD<GS>LIS<GS><GS><GS><FS>iIID<GS>333<GS><GS><GS><FS><RS><ETX>84<EOT>
And that being said, what would the checksum be for this? <STX>SMP_REQ<FS><RS>aMOD<GS>LIS<GS><GS><GS><FS>iIID<GS>42731<GS><GS><GS><FS>rSEQ<GS>16<GS><GS><GS><FS><RS><ETX>{chksum}<EOT>
Any guidance is greatly appreciated. :)
TIA!

Better late than never. Here is how to get the checksum:
public const byte STX = 2;
public const byte ETX = 3;
public const byte EOT = 4;
public const byte FS = 28;
public const byte GS = 29;
public const byte RS = 30;
public static byte[] STX_BUFF = { STX };
public static byte[] ETX_BUFF = { ETX };
public static byte[] EOT_BUFF = { EOT };
public static byte[] FS_BUFF = { FS };
public static byte[] GS_BUFF = { GS };
public static byte[] RS_BUFF = { RS };
string checksum = "00";
int byteVal = 0;
int sumOfChars = 0;
string rSEQ = "16"; // sample sequence number taken from the question's frame
string iIID = "42731"; // sample instrument ID taken from the question's frame
string frame = string.Format("{0}SMP_REQ{1}{2}aMOD{3}0500{3}{3}{3}{1}iIID{3}{6}{3}{3}{3}{1}rSEQ{3}{5}{3}{3}{3}{1}{2}{4}", Encoding.ASCII.GetString(STX_BUFF), Encoding.ASCII.GetString(FS_BUFF), Encoding.ASCII.GetString(RS_BUFF), Encoding.ASCII.GetString(GS_BUFF), Encoding.ASCII.GetString(ETX_BUFF), rSEQ, iIID);
// compute the checksum
for (int i = 0; i < frame.Length; i++)
{
byteVal = Convert.ToInt32(frame[i]);
sumOfChars += byteVal;
}
checksum = Convert.ToString(sumOfChars % 256, 16).ToUpper();
if (checksum.Length == 1) checksum = "0" + checksum;
frame += string.Format("{0}{1}", checksum, Encoding.ASCII.GetString(EOT_BUFF));
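As a cross-check against the first example frame: summing the byte values of every character from <STX> (0x02) through <ETX> (0x03) inclusive gives 1924, and 1924 mod 256 = 132 = 0x84, which matches the stated checksum of 84. Applying the same summation to the SMP_REQ frame in the question works out to 2671, giving a checksum of 6F (2671 mod 256 = 111 = 0x6F). Incidentally, sumOfChars % 256 can also be formatted in one step with .ToString("X2"), which pads to two uppercase hex digits.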

Why do I need to send a message twice to trigger Indy's OnExecute event?

I am working on an application that works as a "man in the middle" to analyze a protocol (ISO 8583) sent over TCP/IP.
The main idea is to get the raw binary data and convert it to a string for parsing and decoding the protocol.
For this, I am using the TIdMappedPortTCP component.
I am testing with Hercules.
I am working with:
Windows 11 Home
Embarcadero® C++Builder 10.4 Version 27.0.40680.4203
Delphi and C++ Builder 10.4 Update 2
Indy 10.6.2.0
More context can be found in these questions:
Where can I find a fully working example of a TCP Client and Server for Indy in C++Builder?
Parsing bytes as BCD with Indy C++ Builder
The problem is that I have to send the message twice to trigger the OnExecute event. I think this might be length-related, but I haven't found the issue. Other than that, the program does what is expected of it.
If I use this data in Hercules:
00 04 60 02
equivalent to:
"\x00\x04\x60\x02"
my program processes everything correctly.
Here is the code:
void __fastcall TForm1::MITMProxyExecute(TIdContext *AContext)
{
static int index;
TIdBytes ucBuffer;
UnicodeString usTemp1;
UnicodeString usTemp2;
int calculated_length;
// getting the length in Hexa
calculated_length = ReadMessageLength(AContext);
// reads data
AContext->Connection->IOHandler->ReadBytes(ucBuffer, calculated_length);
// displays string with calculated length and size of the data
usTemp2 = UnicodeString("calculated length = ");
usTemp2 += IntToStr(calculated_length);
usTemp2 += " ucBuffer.Length = ";
usTemp2 += IntToStr(ucBuffer.Length);
Display->Lines->Add(usTemp2);
// converts the binary data into a hex string for visualization
usTemp1 = BytesToHexString(ucBuffer);
// adds an index to distinguish from previous entries.
usTemp2 = IntToStr(index);
usTemp2 += UnicodeString(": ");
usTemp2 += usTemp1;
Display->Lines->Add(usTemp2);
index++;
}
Here is the code for the functions called there. By the way, is there a better way to convert the bytes to a hex string?
// Convert an array of bytes to a hexadecimal string
UnicodeString BytesToHexString(const TBytes& bytes)
{
// Create an empty UnicodeString to store the hexadecimal representation of the bytes
UnicodeString hexString;
// Iterate through each byte in the array
for (int i = 0; i < bytes.Length; i++)
{
// Convert the byte to a hexadecimal string and append it to the result string
hexString += IntToHex(bytes[i], 2);
}
// Return the hexadecimal string
return hexString;
}
// Read the first two bytes of an incoming message and interpret them as the length of the message
int ReadMessageLength(TIdContext *AContext)
{
int calculated_length;
// Use the 'ReadSmallInt' method to read the length of the message from the first two bytes
calculated_length = AContext->Connection->IOHandler->ReadSmallInt();
// converting from hex binary to hex string
UnicodeString bcdLength = UnicodeString().sprintf(L"%04x", calculated_length);
// converting from hex string to int
calculated_length = bcdLength.ToInt();
// decrease length
calculated_length -= 2;
return calculated_length;
}
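(The two length bytes are BCD-coded, which is why I format them with %04x and re-parse the digits as a decimal number; see the BCD question linked above.)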
UPDATE
I have created a class to update the TRichEdit control. But the problem persists: I still need to send the message twice for it to be processed, and the application freezes when I try to close it. This is my class:
class TAddTextToDisplay : public TIdSync {
private:
UnicodeString textToAdd;
public:
__fastcall TAddTextToDisplay(UnicodeString str) {
// Store the input parameters in member variables.
textToAdd = str;
}
virtual void __fastcall DoSynchronize() {
if (!textToAdd.IsEmpty()) {
// Use the input parameters here...
Form1->Display->Lines->Add(textToAdd);
}
}
void __fastcall setTextToAdd(UnicodeString str) {
textToAdd = str;
}
};
And this is how my new OnExecute event looks:
void __fastcall TForm1::MITMProxyExecute(TIdContext *AContext) {
static int index;
TIdBytes ucBuffer;
UnicodeString usTemp1;
UnicodeString usTemp2;
int calculated_length;
int bytes_remaining;
// getting the length in Hexa
calculated_length = ReadMessageLength(AContext);
if (!AContext->Connection->IOHandler->InputBufferIsEmpty()) {
// reads data
AContext->Connection->IOHandler->ReadBytes(ucBuffer, calculated_length);
// displays string with calculated length and size of the data
usTemp2 = UnicodeString("calculated length = ");
usTemp2 += IntToStr(calculated_length);
usTemp2 += " ucBuffer.Length = ";
usTemp2 += IntToStr(ucBuffer.Length);
TAddTextToDisplay *AddTextToDisplay = new TAddTextToDisplay(usTemp2);
AddTextToDisplay->Synchronize();
// converts the binary data into a a Hex String for visualization
usTemp1 = BytesToHexString(ucBuffer);
// adds an index to distinguish from previous entries.
usTemp2 = IntToStr(index);
usTemp2 += UnicodeString(": ");
usTemp2 += usTemp1;
AddTextToDisplay->setTextToAdd(usTemp2);
AddTextToDisplay->Synchronize();
delete AddTextToDisplay;
index++;
}
}
You really should not be reading from the IOHandler directly at all. You are getting your communication out of sync. TIdMappedPortTCP internally reads from the client before firing the OnExecute event, and reads from the target server before firing the OnOutboundData event. In both cases, the bytes received are made available in the TIdMappedPortContext::NetData property, which you are not processing at all.
You need to do all of your parsing using only the NetData, iterating through its bytes looking for complete messages, and saving incomplete messages for future events to finish.
Try something more like this instead:
#include <IdGlobal.hpp>
#include <IdBuffer.hpp>
bool ReadMessageData(TIdBuffer *Buffer, int &Offset, TIdBytes &Data)
{
// has enough bytes?
if ((Offset + 2) > Buffer->Size)
return false;
// read the length of the message from the first two bytes
UInt16 binLength = Buffer->ExtractToUInt16(Offset);
// converting from hex binary to hex string
String bcdLength = String().sprintf(_D("%04hx"), binLength);
// converting from hex string to int
int calculated_length = bcdLength.ToInt() - 2;
// has enough bytes?
if ((Offset + 2 + calculated_length) > Buffer->Size)
return false;
// reads data
Data.Length = calculated_length;
Buffer->ExtractToBytes(Data, calculated_length, false, Offset + 2);
Offset += (2 + calculated_length);
return true;
}
void __fastcall TForm1::MITMProxyConnect(TIdContext *AContext)
{
AContext->Data = new TIdBuffer;
}
void __fastcall TForm1::MITMProxyDisconnect(TIdContext *AContext)
{
delete static_cast<TIdBuffer*>(AContext->Data);
AContext->Data = NULL;
}
void __fastcall TForm1::MITMProxyExecute(TIdContext *AContext)
{
static int index = 0;
TIdBuffer *Buffer = static_cast<TIdBuffer*>(AContext->Data);
Buffer->Write(static_cast<TIdMappedPortContext*>(AContext)->NetData);
Buffer->CompactHead();
TAddTextToDisplay *AddTextToDisplay = NULL;
TIdBytes ucBuffer;
int offset = 0;
while (ReadMessageData(Buffer, offset, ucBuffer))
{
String sTemp = String().sprintf(_D("%d: ucBuffer.Length = %d ucBuffer = %s"), index, ucBuffer.Length, ToHex(ucBuffer).c_str());
if (AddTextToDisplay)
AddTextToDisplay->setTextToAdd(sTemp);
else
AddTextToDisplay = new TAddTextToDisplay(sTemp);
AddTextToDisplay->Synchronize();
++index;
}
delete AddTextToDisplay;
if (offset > 0)
Buffer->Remove(offset);
}
Otherwise, if you want to do your own socket I/O, then you will have to use TIdTCPServer and TIdTCPClient directly instead of using TIdMappedPortTCP.
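As a side note regarding the question about a better way to convert bytes to a hex string: the sketch above simply uses Indy's ToHex() from IdGlobal.hpp, which does the same job as the manual BytesToHexString() loop.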

Do I need to malloc C-style strings?

I recently started using Arduino, so I am still adapting and finding the differences between C/C++ and the Arduino language.
So I have a question for you.
When I see someone using a C-style string in Arduino (char *str), they always initialize it like this (and never free it):
char *str = "Hello World";
In pure C, I would have done something like this:
int my_strlen(char const *str)
{
int i = 0;
while (str[i]) {
i++;
}
return (i);
}
char *my_strcpy(char *dest, char const *src)
{
char *it = dest;
while (*src != 0) {
*it = *src;
it++;
src++;
}
*it = '\0'; /* terminate the copy; without this, the result is not a valid C string */
return (dest);
}
char *my_strdup(char const *s)
{
char *result = NULL;
int length = my_strlen(s);
result = my_malloc(sizeof(char const) * (length + 1));
if (result == NULL) {
return (NULL);
}
my_strcpy(result, s);
return (result);
}
and then initialize it like this:
char *str = my_strdup("Hello World");
my_free(str);
So here is my question: for C-style Arduino strings, is malloc optional, or did these people just get it wrong?
Thank you for your answers.
In C++ it's better to use new[]/delete[] and not mix it with malloc/free.
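For example, a minimal sketch of a strdup-style deep copy using new[]/delete[] (the helper name dupString is just for illustration):
#include <cstring> // strlen, strcpy
// Make a heap-allocated, modifiable copy of a C string.
char *dupString(const char *src)
{
    char *copy = new char[strlen(src) + 1]; // +1 for the terminating '\0'
    strcpy(copy, src);
    return copy;
}
// usage, e.g. inside a function:
char *str = dupString("Hello World");
// ... use str ...
delete[] str; // memory from new[] must be released with delete[]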
On the Arduino there is also the String class, which hides those allocations from you.
However, using dynamic memory allocation on such a constrained platform has its pitfalls, like heap fragmentation (mainly because String overloads the + operator, so everyone overuses it like Serial.println(String{"something : "} + a + b + c + d + ......) and then wonders about mysterious crashes).
More about it on Majenko's blog: The Evils of Arduino String class (Majenko has the highest reputation on the Arduino Stack Exchange).
Basically, with the String class your strdup code would be as simple as this:
String str{"Hello World"};
String copyOfStr = str;
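Note that char *str = "Hello World"; by itself needs no allocation at all: the string literal has static storage duration, so there is nothing to free. You only need a copy (via strdup, new[], or String) when you want to modify the text. In C++ the declaration should really be const char *str = "Hello World";, since writing to a string literal is undefined behavior.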

Convert string as hex to hexadecimal

I have a function that takes a uint64_t variable. Normally I would do this:
irsend.sendNEC(results.value);
results.value is a uint64_t as hexadecimal (I think). If I do this:
String((uint32_t) results.value, HEX)
I get this:
FF02FD
If I do:
irsend.sendNEC(0x00FF02FD)
it works perfectly and is what I want.
Instead of grabbing results.value, I want to write it as a string (because that's what I get from the GET request). How do I make "FF02FD" into 0x00FF02FD?
EDIT:
Maybe this makes it easier to understand:
GET: http://192.168.1.125/code=FF02FD
//Arduino grabs the FF02FD by doing:
for (int i = 0; i < server.args(); i++) {
if (server.argName(i) == "code") {
String code = server.arg(i);
irsend.sendNEC(code);
}
}
This is where I get the error:
no matching function for call to 'IRsend::sendNEC(String&)'
because:
void sendNEC(uint64_t data, uint16_t nbits = NEC_BITS, uint16_t repeat = 0);
Writeup from the comments:
As already suggested, a string containing a hexadecimal value can be converted to an actual integer value using C standard library functions such as "string to unsigned long" (strtoul) or "string to unsigned long long" (strtoull). From an Arduino-style String, you can get the underlying const char* via the c_str() member function. All in all, a hex-string to integer conversion looks like this:
uint64_t StrToHex(const char* str)
{
return (uint64_t) strtoull(str, 0, 16);
}
which can then be called in code as:
for (int i = 0; i < server.args(); i++) {
if (server.argName(i) == "code") {
String code = server.arg(i);
irsend.sendNEC(StrToHex(code.c_str()));
}
}
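Note that no padding is needed to make "FF02FD" into 0x00FF02FD: leading zero digits do not change an integer's value, so strtoull("FF02FD", 0, 16) already yields exactly 0x00FF02FD.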
Addendum: Be careful about using int or long on different platforms. On an Arduino Uno/Nano with an 8-bit microcontroller such as the ATmega328P, an int is an int16_t. On the 32-bit ESP8266 CPU, an int is an int32_t.

Arduino Uno - EEPROM locations not consistent

I was trying to write items to the EEPROM and later read them back. At times, the values I read back were not the same as the ones I put in. I have narrowed it down to an example I can show you. Below I read two addresses into variables.
const int start_add_type = (EEPROM.length() - 10);
const int start_add_id = (EEPROM.length() - 4);
I then look at the values (via RS232)
Serial.begin(9600);
Serial.println(start_add_type);
Serial.println(start_add_id);
of them at the start of setup() and see that I get
1014
1020
I then look again at the end of setup()
Serial.println(start_add_type);
Serial.println(start_add_id);
and I get
1014
818
I cannot see why this should change. I did try declaring them const, e.g.:
const int start_add_type = (EEPROM.length() - 10);
const int start_add_id = (EEPROM.length() - 4);
but this gave the same result. So here I sit, very puzzled about what I must have missed. Has anyone got any ideas?
#include "EEPROM.h"
int start_add_type = (EEPROM.length() - 10);
int start_add_id = (EEPROM.length() - 4);
char ID[7] = "ENCPG2";
char Stored_ID[5];
char Input[10];
//String Type;
void setup()
{
Serial.begin(9600);
Serial.println(start_add_type);
Serial.println(start_add_id);
// start_add = (EEPROM.length() - 10); // use this method to be PCB independent.
for (int i = 0; i < 6; i++)
{
Stored_ID[i] = EEPROM.read(start_add_type + i); // Read the ID from the EEPROM.
}
if (Stored_ID != ID) // Check if the one we have got is the same as the one in this code ID[7]
{
for (int i = 0; i < 6; i++)
{
EEPROM.write(start_add_type + i, ID[i]); // Write the ID into the EEPROM.
}
}
Serial.println(start_add_type);
Serial.println(start_add_id);
}
void loop()
{
}
You are overwriting your memory in this loop:
for (int i = 0; i < 6; i++)
{
Stored_ID[i] = EEPROM.read(start_add_type + i);
}
The Stored_ID array is only 5 bytes long, so the write to Stored_ID[5] also overwrites the start_add_id variable. Hence the weird value 818: 818 equals 0x0332, and 0x32 is the ASCII code of the '2' character of your ID, which replaced the low byte of the original value 1020 (0x03FC).
To fix this issue, declare Stored_ID this way:
char Stored_ID[6];
if (Stored_ID != ID)
This is nonsense: you are comparing two different addresses, which are never equal. If you want to compare the contents, you should do it in a loop (e.g. directly while reading the EEPROM values into Stored_ID[i]).
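A minimal sketch of that approach, comparing while reading (using the variables from your sketch):
bool matches = true;
for (int i = 0; i < 6; i++)
{
    Stored_ID[i] = EEPROM.read(start_add_type + i); // read one ID byte from the EEPROM
    if (Stored_ID[i] != ID[i])
        matches = false; // the stored ID differs from the one compiled into the sketch
}
if (!matches)
{
    for (int i = 0; i < 6; i++)
        EEPROM.write(start_add_type + i, ID[i]); // store the current ID
}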
Alternatively, Stored_ID could be a 0-terminated string as well (declare it as char Stored_ID[7] and terminate it after reading), and you might use
if (strcmp(Stored_ID, ID) != 0)

GCM-AEAD support for Ubuntu system running Linux kernel 3.10

I am trying to implement AEAD sample code for encryption using GCM. But I always get an invalid argument error while setting the key:
static int init_aead(void)
{
printk("Starting encryption\n");
struct crypto_aead *tfm = NULL;
struct aead_request *req;
struct tcrypt_result tresult;
struct scatterlist plaintext[1] ;
struct scatterlist ciphertext[1];
struct scatterlist gmactext[1];
unsigned char *plaindata = NULL;
unsigned char *cipherdata = NULL;
unsigned char *gmacdata = NULL;
u8 *key = kmalloc(16, GFP_KERNEL);
char *algo = "rfc4106(gcm(aes))";
unsigned char *ivp = NULL;
int ret, i, d;
unsigned int iv_len;
unsigned int keylen = 16;
/* Allocating a cipher handle for AEAD */
tfm = crypto_alloc_aead(algo, 0, 0);
init_completion(&tresult.completion);
if(IS_ERR(tfm)) {
pr_err("alg: aead: Failed to load transform for %s: %ld\n", algo,
PTR_ERR(tfm));
return PTR_ERR(tfm);
}
/* Allocating request data structure to be used with AEAD data structure */
req = aead_request_alloc(tfm, GFP_KERNEL);
if(IS_ERR(req)) {
pr_err("Couldn't allocate request handle for %s:\n", algo);
return PTR_ERR(req);
}
/* Setting the callback function to be used when the request completes */
aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, aead_work_done,&tresult);
crypto_aead_clear_flags(tfm, ~0);
/* Set key */
get_random_bytes(key, keylen);
if ((ret = crypto_aead_setkey(tfm, key, 16)) != 0) {
pr_err("Return value for setkey is %d\n", ret);
pr_info("key could not be set\n");
ret = -EAGAIN;
return ret;
}
/* Set authentication tag length */
if(crypto_aead_setauthsize(tfm, 16)) {
pr_info("Tag size could not be authenticated\n");
ret = -EAGAIN;
return ret;
}
/* Set IV size */
iv_len = crypto_aead_ivsize(tfm);
if (!(iv_len)){
pr_info("IV size could not be authenticated\n");
ret = -EAGAIN;
return ret;
}
plaindata = kmalloc(16, GFP_KERNEL);
cipherdata = kmalloc(16, GFP_KERNEL);
gmacdata = kmalloc(16, GFP_KERNEL);
ivp = kmalloc(iv_len, GFP_KERNEL);
if(!plaindata || !cipherdata || !gmacdata || !ivp) {
printk("Memory not availaible\n");
ret = -ENOMEM;
return ret;
}
for (i = 0, d = 0; i < 16; i++, d++)
plaindata[i] = d;
memset(cipherdata, 0, 16);
memset(gmacdata, 0, 16);
for (i = 0,d=0xa8; i < 16; i++, d++)
ivp[i] = d;
sg_init_one(&plaintext[0], plaindata, 16);
sg_init_one(&ciphertext[0], cipherdata, 16);
sg_init_one(&gmactext[0], gmacdata, 16); /* gmacdata is 16 bytes, so the length must be 16, not 128 */
aead_request_set_crypt(req, plaintext, ciphertext, 16, ivp);
aead_request_set_assoc(req, gmactext, 16);
ret = crypto_aead_encrypt(req);
if (ret)
printk("cipher call returns %d \n", ret);
else
printk("Failure \n");
return 0;
}
module_init(init_aead);
module_exit(exit_aead);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("My code for aead encryption test");
On inserting the module I get the following output:
Starting encryption
Return value for setkey is -22
key could not be set
According to the AEAD specification, AEAD uses AES-128 for encryption, hence the block size should be 128 bits.
But my system shows only a 1-byte block size for AEAD:
name : rfc4106(gcm(aes))
driver : rfc4106-gcm-aesni
module : aesni_intel
priority : 400
refcnt : 1
selftest : passed
type : nivaead
async : yes
blocksize : 1
ivsize : 8
maxauthsize : 16
geniv : seqiv
Is the invalid argument error thrown because of the block size? If so, what shall I do to make it work?
The block size of AES is indeed always 128 bits. The block size of GCM is a different matter though. GCM (Galois/Counter Mode) is - as the name suggests - built on top of the CTR (Counter) mode of operation, sometimes also called the SIC (Segmented Integer Counter) mode of operation. This turns AES into a stream cipher. Stream ciphers - by definition - have a block size of one byte (or, more precisely, one bit, but bit-level operations are usually not supported by APIs).
The block size, however, has little to do with the key size given in the call, and that argument does seem to require bytes rather than bits (the unit in which key lengths are usually specified).
The size of the IV should be 12 bytes (the default). Otherwise additional calculations may be needed by the GCM implementation (if those exist at all).
For AES-GCM per RFC 4106 the key must be 20 bytes. I don't know yet why; I've looked into the IPsec source code to see how the encryption is done there.
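For what it's worth, the likely reason: RFC 4106 carries a 4-byte nonce salt along with the AES key, and the kernel's rfc4106 template expects that salt appended to the key; setkey splits off the last 4 bytes and uses them as the fixed part of the per-packet IV. Hence 16 + 4 = 20 bytes for AES-128.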
