IRP_MJ_DEVICE_CONTROL — how to? - serial-port

While coding an app that uses a serial port, I have been compelled during debugging to work with the low-level (link control) protocol.
And here my problems began.
The sniffer gives me these values:
IOCTL_SERIAL_SET_BAUD_RATE 80 25 00 00 means a baud rate of 9600, while 00 c2 01 00 means 115200. How is one supposed to figure that out?
IOCTL_SERIAL_SET_TIMEOUTS 32 00 00 00 05 00 00 00 00 00 00 00 60 09 00 00 00 00 00 00 - what does this mean? What are the values, and what is the range of admissible values? I have read MSDN ("Setting Read and Write Timeouts for a Serial Device", for example): blah-blah-blah, but no concrete values. What should I read? How do I make sense of the sniffer data? And how do I control it?
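A minimal decoding sketch (my illustration, not from the original post): the IOCTL_SERIAL_SET_BAUD_RATE input is a single little-endian DWORD, and the IOCTL_SERIAL_SET_TIMEOUTS input is the five little-endian DWORDs of the SERIAL_TIMEOUTS structure, in declaration order.

#include <stdint.h>
#include <stdio.h>

/* Read a 32-bit little-endian value from a byte buffer. */
static uint32_t le32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(void)
{
    /* IOCTL_SERIAL_SET_BAUD_RATE payloads captured by the sniffer */
    const uint8_t baud_a[] = { 0x80, 0x25, 0x00, 0x00 };  /* 0x00002580 = 9600   */
    const uint8_t baud_b[] = { 0x00, 0xC2, 0x01, 0x00 };  /* 0x0001C200 = 115200 */

    /* IOCTL_SERIAL_SET_TIMEOUTS payload: the five ULONGs of SERIAL_TIMEOUTS,
     * in milliseconds (the multipliers are per byte transferred). */
    const uint8_t timeouts[20] = {
        0x32, 0x00, 0x00, 0x00,   /* ReadIntervalTimeout         = 50   */
        0x05, 0x00, 0x00, 0x00,   /* ReadTotalTimeoutMultiplier  = 5    */
        0x00, 0x00, 0x00, 0x00,   /* ReadTotalTimeoutConstant    = 0    */
        0x60, 0x09, 0x00, 0x00,   /* WriteTotalTimeoutMultiplier = 2400 */
        0x00, 0x00, 0x00, 0x00    /* WriteTotalTimeoutConstant   = 0    */
    };

    printf("baud rates: %u and %u\n", le32(baud_a), le32(baud_b));
    for (int i = 0; i < 5; i++)
        printf("SERIAL_TIMEOUTS field %d = %u\n", i, le32(timeouts + 4 * i));
    return 0;
}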

Related

Lua TCP communication

I have a proprietary client application that sends and receives TCP data packets to/from a network device like this:
Sent: [14 bytes]
01 69 80 10 01 0E 0F 00 00 00 1C 0D 64 82 .i..........d.
Received: [42 bytes] [+00:000]
01 69 80 10 01 2A 00 D0 DC CC 0C BB C0 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 .i...*.......#..................
00 00 00 00 00 00 1C 0D F6 BE ..........
I need to make the same requests with Lua. I've found some working examples (for example) of such communication, but I can't understand what string I should give as the argument to
tcp:send("string");
Should I give it a string of hex digits? I.e.
'01698010010E0F0000001C0D6482'
Or should I first convert the hex to ASCII? If so, then how (zero bytes don't convert to printable characters)?
You should give it the string of bytes you want to send. If you write "016980...", that is a string containing the decimal values 48 (ASCII digit 0), 49 (ASCII digit 1), 54 (ASCII digit 6), and so on, which is not what you want to send. You want to send the decimal values 1, 105 (hex 69), 128 (hex 80), and so on.
Luckily, Lua strings can hold arbitrary bytes (unlike, e.g., Python strings), so you just have to write a string containing those bytes. You can put any byte into a string using \x followed by a 2-digit hex code. So you could write your call like this:
tcp:send("\x01\x69\x80\x10\x01\x0E\x0F\x00\x00\x00\x1C\x0D\x64\x82")
If you are using a Lua version older than 5.2, \x is not available, but you can still use \ and a 3-digit decimal code.

Get consistent md5 checksum of gzipped file using data.table and R.utils

I'm trying to get matching checksums for a file compressed with R.utils::gzip and with the compress = "gzip" argument of data.table::fwrite, but I keep getting different results. Here is an example:
library(data.table); library(R.utils); library(digest)
dt <- data.frame(a = c(1, 2, 3))
fwrite(dt, "r-utils.csv")
gzip("r-utils.csv")
fwrite(dt, "datatable-v1.csv.gz", compress = "gzip")
digest(file = "r-utils.csv.gz")
#> [1] "8d4073f4966f94ac5689c6e85de2d92d"
digest(file = "datatable-v1.csv.gz")
#> [1] "5d58f9eeefb166c6d50ac00f3215e689"
Initially I thought that fwrite was storing the filename and timestamp in the output file (per the usual gzip behaviour without the --no-name option), but that doesn't appear to be the case, since I get the same checksum across different calls to fwrite:
fwrite(dt, "datatable-v2.csv.gz", compress = "gzip")
digest(file = "datatable-v2.csv.gz")
#> [1] "5d58f9eeefb166c6d50ac00f3215e689"
Any ideas on what might be causing the difference?
PS. Incidentally, the checksums of the uncompressed files are the same:
$ md5sum datatable-v1.csv
7e034138dc91aa575d929c1ed65aa67c datatable-v1.csv
$ md5sum r-utils.csv
7e034138dc91aa575d929c1ed65aa67c r-utils.csv
If you look at the bytes, you can see they are different
r-utils.csv.gz
Offset: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000: 1F 8B 08 00 00 00 00 00 00 06 4B E4 E5 32 E4 E5 ..........Kde2de
00000010: 32 E2 E5 32 E6 E5 02 00 21 EB 62 BF 0C 00 00 00 2be2fe..!kb?....
and
datatable-v2.csv.gz
Offset: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
00000000: 1F 8B 08 00 00 00 00 00 00 0A 4B E4 E5 02 00 56 ..........Kde..V
00000010: EF 2F E3 03 00 00 00 1F 8B 08 00 00 00 00 00 00 o/c.............
00000020: 0A 33 E4 E5 32 E2 E5 32 E6 E5 02 00 49 C4 FF 4D .3de2be2fe..ID.M
00000030: 09 00 00 00 ....
So the data.table output is longer. This appears to be because the default compression settings are different. Specifically, it looks like the two methods use a different "window size" parameter: the data.table code uses a windowBits value of 31 (15+16), which includes a trailing checksum in the output, while R.utils::gzip goes through the base R gzfile() function, which uses a windowBits value of -15 (MAX_WBITS), and that negative value means a trailing checksum is not written. I think that accounts for the extra bytes in the data.table output.
Because compression levels, checksums, and gzip headers can all differ, it's not necessarily the case that you will get the same checksum for compressed versions of a data file when two different compression pipelines are used. The data inside can be identical while the actual compressed files differ.
Since these settings live in the C code of the package and of base R, this is not something you will be able to change from R code; these two methods will not return identical output.
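To illustrate, here is a small standalone C sketch (my own, not the data.table or R source) that compresses the same buffer with zlib twice, once with windowBits = 15+16 (gzip wrapper: header plus CRC32/length trailer) and once with windowBits = -15 (raw deflate, no wrapper). That alone is enough to make the outputs, and therefore their md5 sums, differ.

#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Compress in[0..in_len) into out with the given windowBits and return the
 * number of bytes produced (0 on error). 15+16 selects a gzip wrapper,
 * -15 selects a raw deflate stream with no header or trailing checksum. */
static size_t deflate_with_windowbits(const unsigned char *in, size_t in_len,
                                      unsigned char *out, size_t out_cap,
                                      int window_bits)
{
    z_stream strm;
    size_t produced;

    memset(&strm, 0, sizeof(strm));
    if (deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                     window_bits, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        return 0;
    strm.next_in = (Bytef *)in;
    strm.avail_in = (uInt)in_len;
    strm.next_out = out;
    strm.avail_out = (uInt)out_cap;
    deflate(&strm, Z_FINISH);          /* single shot: the input is tiny */
    produced = out_cap - strm.avail_out;
    deflateEnd(&strm);
    return produced;
}

int main(void)
{
    const unsigned char csv[] = "a\n1\n2\n3\n";   /* same payload both times */
    unsigned char buf[256];
    size_t gz, raw;

    gz  = deflate_with_windowbits(csv, sizeof(csv) - 1, buf, sizeof(buf), 15 + 16);
    raw = deflate_with_windowbits(csv, sizeof(csv) - 1, buf, sizeof(buf), -15);

    /* The gzip wrapper adds a 10-byte header and an 8-byte CRC32/length
     * trailer, so identical uncompressed data still yields different files. */
    printf("gzip wrapper: %zu bytes, raw deflate: %zu bytes\n", gz, raw);
    return 0;
}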

AddressSanitizer: heap-use-after-free ANSI C

I am getting this error when I try to clear a linked list
=================================================================
==4574==ERROR: AddressSanitizer: heap-use-after-free on address 0x603000000050 at pc 0x7fcb73b40682 bp 0x7ffffcfd8370 sp 0x7ffffcfd8368
READ of size 8 at 0x603000000050 thread T0
#0 0x7fcb73b40681 in clear_dict ../src/utils_dict.c:83
#1 0x7fcb73b4193c in morsec ../src/morsec.c:37
#2 0x7fcb73b419e2 in main ../src/morsec.c:45
#3 0x7fcb72b8409a in __libc_start_main ../csu/libc-start.c:308
#4 0x7fcb73b40189 in _start (/mnt/c/Code/WEC-01/morsec+0x2189)
0x603000000050 is located 16 bytes inside of 24-byte region [0x603000000040,0x603000000058)
freed by thread T0 here:
#0 0x7fcb72e18fb0 in __interceptor_free (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xe8fb0)
#1 0x7fcb73b405d7 in del_node ../src/utils_dict.c:70
#2 0x7fcb73b40698 in clear_dict ../src/utils_dict.c:84
#3 0x7fcb73b4193c in morsec ../src/morsec.c:37
#4 0x7fcb73b419e2 in main ../src/morsec.c:45
#5 0x7fcb72b8409a in __libc_start_main ../csu/libc-start.c:308
previously allocated by thread T0 here:
#0 0x7fcb72e19330 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xe9330)
#1 0x7fcb73b40266 in new_node ../src/utils_dict.c:20
#2 0x7fcb73b416aa in get_dict ../src/parser.c:50
#3 0x7fcb73b4186d in morsec ../src/morsec.c:19
#4 0x7fcb73b419e2 in main ../src/morsec.c:45
#5 0x7fcb72b8409a in __libc_start_main ../csu/libc-start.c:308
SUMMARY: AddressSanitizer: heap-use-after-free ../src/utils_dict.c:83 in clear_dict
Shadow bytes around the buggy address:
0x0c067fff7fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c067fff8000: fa fa fd fd fd fa fa fa fd fd[fd]fa fa fa 00 00
0x0c067fff8010: 00 fa fa fa 00 00 00 fa fa fa 00 00 00 fa fa fa
0x0c067fff8020: 00 00 00 fa fa fa 00 00 00 06 fa fa 00 00 00 fa
0x0c067fff8030: fa fa 00 00 00 fa fa fa 00 00 00 fa fa fa 00 00
0x0c067fff8040: 01 fa fa fa 00 00 00 fa fa fa 00 00 00 fa fa fa
0x0c067fff8050: 00 00 00 fa fa fa 00 00 00 fa fa fa 00 00 00 fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==4574==ABORTING
Here are the functions I wrote. ASan says the error is in clear_dict. I tried debugging it, and ASan only triggers the third or fourth time clear_dict is run; I really cannot get around this error.
struct s_dict
{
    char *word;
    char *symb;
    struct s_dict *next;
};
typedef struct s_dict t_dict;

t_dict *new_node(char *word, char *symb)
{
    t_dict *new;

    new = NULL;
    if (!(new = (t_dict*)malloc(sizeof(t_dict))))
        return (NULL);
    new->word = word;
    new->symb = symb;
    new->next = NULL;
    return (new);
}

void del_node(t_dict *node)
{
    if (node->word)
        free(node->word);
    node->word = NULL;
    if (node->symb)
        free(node->symb);
    node->symb = NULL;
    if (node->next)
        free(node->next);
    node->next = NULL;
    if (node)
        free(node);
    node = NULL;
}

void clear_dict(t_dict **chain)
{
    t_dict *tmp;

    while (chain && *chain)
    {
        tmp = (*chain)->next; << line 83 in my code
        del_node(*chain);
        *chain = tmp;
    }
}
I don't know what's causing the error; it says
free(): double free detected in tcache 2
[1] 4601 abort (core dumped)
when not using -fsanitize=address
Thanks for taking the time to look at this.
The bug (actually several bugs) is in your del_node(): it shouldn't touch the next node.
As written, it frees node->next, orphaning node->next->word and node->next->symb, and setting up a double free on the next iteration of clear_dict.
P.S. This check and assignment in del_node():
    if (node) // useless
        free(node);
    node = NULL; // useless
are useless: if node were NULL, you would have crashed already when dereferencing node->word. The assignment is also useless, since it only modifies a local variable immediately before the function returns.
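One possible fix, sketched under the assumption that word and symb always point to heap allocations owned by the node (t_dict as defined above): del_node frees only the node it is handed, and clear_dict alone advances through the list.

void del_node(t_dict *node)
{
    if (!node)
        return;
    free(node->word);      /* free(NULL) is a no-op, so no guards needed */
    free(node->symb);
    free(node);            /* do NOT free node->next here */
}

void clear_dict(t_dict **chain)
{
    t_dict *tmp;

    while (chain && *chain)
    {
        tmp = (*chain)->next;   /* remember the rest of the list */
        del_node(*chain);       /* free only the current node */
        *chain = tmp;
    }
}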

Problem with sending HDLC frames by using GSM modem

I have an SL7000 meter and an iRZ GSM modem. When I send over an RS-485 cable, everything works. But when I try to use the GSM modem, I run into issues.
When I send an SNRM frame like this:
7E A0 0A 00 22 00 51 03 93 6A 34 7E
I get a normal UA back.
But when I try to send an SNRM like this:
7E A0 21 00 22 00 51 03 93 6B 21 81 80 12 05 01 80 07 04 00 00 00 02 08 04 00 00 00 01 3D 93 7E (It's from DXDLMSDirector)
I get nothing back. Absolutely nothing!
Maybe there is some trick to using HDLC with a GSM modem? Maybe special delays or something?
If both of these frames work via RS-485 but not via GSM, then there are a couple of possible answers:
1) the addressing you are using is not permitted if this is a separate port
2) if it is the same port on the meter, then the GSM modem is not directing traffic to the same RS-485 address

How to decode an Address Resolution Protocol (ARP) packet [closed]

What does this ARP packet mean, or even just which bytes correspond to which fields?
0000 FF FF FF FF FF FF 00 00 C0 93 19 00 08 06 00 01
0010 08 00 06 04 00 01 00 00 C0 93 19 00 C0 99 B9 64
0020 FF FF FF FF FF FF C0 99 B9 32 00 00 55 00 00 DC
0030 00 6C 00 D6 00 00 00 A3 00 00 00 41
This is on the study guide for a networking exam that I am woefully unprepared for. The textbook says that an ARP packet is 20-24 bytes, which doesn't fit this data and is way too small to be an Ethernet frame. However, the series of hexadecimal FFs definitely matches the Ethernet broadcast address. I'm so confused. Help, please.
That frame is 60 bytes long. The minimum is 64 bytes, and the drivers for most NICs will not hand you the 4-byte CRC at the end of the frame, so that is a valid Ethernet ARP frame. Remember that Ethernet frames are required to be a minimum of 64 bytes (measured from the destination MAC address to the end of the CRC), and they get padded to that length if the upper protocols (i.e. ARP) don't fill the minimum Ethernet payload. Use Wireshark to decode it.
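For what it's worth, here is a rough map (my sketch, based on the standard ARP-over-Ethernet layout, not part of the original answer) of how the dumped bytes line up; everything after the 42nd byte is padding up to the 60-byte minimum.

#include <stdint.h>

/* Packed view of the 42 meaningful bytes of the frame above. */
#pragma pack(push, 1)
struct eth_arp_frame {
    uint8_t  eth_dst[6];  /* FF FF FF FF FF FF  broadcast destination      */
    uint8_t  eth_src[6];  /* 00 00 C0 93 19 00  sender's MAC               */
    uint16_t eth_type;    /* 08 06              EtherType = ARP            */
    uint16_t htype;       /* 00 01              hardware type = Ethernet   */
    uint16_t ptype;       /* 08 00              protocol type = IPv4       */
    uint8_t  hlen;        /* 06                 MAC address length         */
    uint8_t  plen;        /* 04                 IPv4 address length        */
    uint16_t oper;        /* 00 01              operation 1 = ARP request  */
    uint8_t  sha[6];      /* 00 00 C0 93 19 00  sender MAC                 */
    uint8_t  spa[4];      /* C0 99 B9 64        sender IP 192.153.185.100  */
    uint8_t  tha[6];      /* FF FF FF FF FF FF  target MAC (unknown yet)   */
    uint8_t  tpa[4];      /* C0 99 B9 32        target IP 192.153.185.50   */
    /* the remaining 18 bytes of the 60-byte frame are padding */
};
#pragma pack(pop)

int main(void)
{
    /* sanity check: 14-byte Ethernet header + 28-byte ARP payload = 42 bytes */
    return sizeof(struct eth_arp_frame) == 42 ? 0 : 1;
}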

Resources