I have a serial port. Another program writes to it every 0.2 seconds, and each write is always 83 bytes long. The serial port can be read correctly using minicom.
I have a program that can read the serial port. Each time, the byte stream written to the serial port is:
65 66 67 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
Most of the time the data is received correctly. However, sometimes the data gets messed up, and once that happens, almost all of the incoming messages are messed up.
Sometimes, I get output from my code as:
65 0 66 67 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 65 66 67
This is weird: apparently there is a 0 after the first 65, and the last three bytes are 65 66 67, which looks like the beginning of the next message.
The code for opening and configuring the serial port is:
int serial_fd = open(device, O_RDWR | O_NOCTTY | O_NDELAY | O_NONBLOCK);
if (serial_fd < 0) {
    return -1;
}
struct termios options;
tcgetattr(serial_fd, &options);
options.c_cflag |= (CLOCAL | CREAD);
// 8N1 mode
options.c_cflag &= ~CSIZE;
options.c_cflag |= CS8;
options.c_cflag &= ~CSTOPB;
options.c_cflag &= ~PARENB;
options.c_cflag &= ~CRTSCTS;
options.c_lflag &= ~(ICANON | ECHO | ECHONL | ISIG | IEXTEN);
options.c_iflag &= ~(IGNBRK | BRKINT | PARMRK | ISTRIP | INLCR | IGNCR | ICRNL | IXON);
options.c_oflag &= ~OPOST;
cfsetospeed(&options, B115200);
cfsetispeed(&options, B115200);
tcsetattr(serial_fd, TCSANOW, &options);
tcflush(serial_fd, TCIFLUSH);
The code for reading from the serial port and printing out the received message is:
uint8_t *buff, buff_data[100];
buff = buff_data;
int len, fs_sel;
fd_set fs_read;
int expectedLength = 83;
int len_acc = 0;
while (1) {
    FD_ZERO(&fs_read);
    FD_SET(serial_fd, &fs_read);
    fs_sel = select(serial_fd + 1, &fs_read, NULL, NULL, NULL);
    if (fs_sel)
    {
        len = read(serial_fd, buff, expectedLength - len_acc);
        if (len <= 0) continue;
        len_acc += len;
        if (len_acc < expectedLength) {
            buff += len;
            continue;
        }
        len_acc = 0;
        buff = buff_data;
        for (int i = 0; i < expectedLength; i++) {
            std::cout << (int)buff[i] << " ";
        }
        std::cout << std::endl;
    }
}
Thanks for your help.
How can I create a list of 3 lists with specific lengths 20, 30, 40, taking the last 20 elements of data as list1, the last 30 as list2, and the last 40 as list3?
I want to turn
data <- seq(1,100,1)
length.y <- c(20,30,40)
into
y[[1]]=seq(81,100,1)
y[[2]]=seq(71,100,1)
y[[3]]=seq(61,100,1)
I can use a for loop or create a function like this
y <- rep(list(0),3)
for(i in 1:3){
y[[i]] <- data[(length(data)-length.y[i]+1):length(data)]
}
My data is way more complicated than this, so
is there an easier way to get the same result? (using lapply for example)
Using tail, as already suggested in the comments, is an easy way. You can also turn your for loop code into lapply as:
n <- length(data)
lapply(length.y, function(x) data[(n-x + 1):n])
#[[1]]
# [1] 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98
#[19] 99 100
#[[2]]
# [1] 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88
#[19] 89 90 91 92 93 94 95 96 97 98 99 100
#[[3]]
# [1] 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78
#[19] 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96
#[37] 97 98 99 100
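For completeness, the tail approach mentioned above would look something like this (a sketch; tail(data, n) simply keeps the last n elements):
lapply(length.y, function(x) tail(data, x))
which returns the same three lists.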
Using purrr::map:
map(length.y, ~ data[-c(1:(length(data) - .x))])
[[1]]
[1] 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
[[2]]
[1] 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
[[3]]
[1] 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94
[35] 95 96 97 98 99 100
I am trying to control where I execute my MPI code.
There are several ways to do so: taskset, dplace, numactl, or just the mpirun options like --bind-to or -cpu-set.
The machine is shared memory: 16 nodes, each with 2 × 12 cores (so 24 cores per node).
> numactl -H
available: 16 nodes (0-15)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 192 193 194 195 196 197 198 199 200 201 202 203
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 204 205 206 207 208 209 210 211 212 213 214 215
node 2 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 216 217 218 219 220 221 222 223 224 225 226 227
... (I reduce the output)
node 15 cpus: 180 181 182 183 184 185 186 187 188 189 190 191 372 373 374 375 376 377 378 379 380 381 382 383
node distances:
node 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0: 10 50 65 65 65 65 65 65 65 65 79 79 65 65 79 79
1: 50 10 65 65 65 65 65 65 65 65 79 79 65 65 79 79
2: 65 65 10 50 65 65 65 65 79 79 65 65 79 79 65 65
3: 65 65 50 10 65 65 65 65 79 79 65 65 79 79 65 65
4: 65 65 65 65 10 50 65 65 65 65 79 79 65 65 79 79
5: 65 65 65 65 50 10 65 65 65 65 79 79 65 65 79 79
6: 65 65 65 65 65 65 10 50 79 79 65 65 79 79 65 65
7: 65 65 65 65 65 65 50 10 79 79 65 65 79 79 65 65
8: 65 65 79 79 65 65 79 79 10 50 65 65 65 65 65 65
9: 65 65 79 79 65 65 79 79 50 10 65 65 65 65 65 65
10: 79 79 65 65 79 79 65 65 65 65 10 50 65 65 65 65
11: 79 79 65 65 79 79 65 65 65 65 50 10 65 65 65 65
12: 65 65 79 79 65 65 79 79 65 65 65 65 10 50 65 65
13: 65 65 79 79 65 65 79 79 65 65 65 65 50 10 65 65
14: 79 79 65 65 79 79 65 65 65 65 65 65 65 65 10 50
15: 79 79 65 65 79 79 65 65 65 65 65 65 65 65 50 10
My code does not take advantage of the shared memory; I would like to use it as if it were distributed memory. But the processes seem to move and get too far from their data, so I would like to bind them and see if the performance is better.
What I have tried so far:
The classic call: mpirun -np 64 ./myexec param > logfile.log
Now I want to bind the run to the last nodes, let's say 12 to 15, with dplace or numactl (I do not see a major difference between them...):
mpirun -np 64 dplace -c144-191,336-383 ./myexec param > logfile.log
mpirun -np 64 numactl --physcpubind=144-191,336-383 -l ./myexec param > logfile.log
(the main difference between the two is the -l option of numactl, which binds the memory locally as well, but I am not even sure it makes a difference...)
So, both work well and the processes are bound where I wanted, BUT looking closer at each process, it appears that some are allocated to the same core, so they are each using only 50% of that core! This happens even though the number of available cores is larger than the number of processes. This is not good at all.
So I tried adding some mpirun options like --nooversubscribe, but it changes nothing... I do not understand why. I also tried --bind-to none (to avoid conflicts between mpirun and dplace/numactl), -cpus-per-proc 1 and -cpus-per-rank 1... none of it solves the problem.
So I tried with only the mpirun options:
mpirun -cpu-set 144-191 -np 64 ./myexec param > logfile.log
but the -cpu-set option is not well documented, and I cannot find a way to bind one process per core.
The question: can someone help me get one process per core, on the cores that I want?
Omit 336-383 from the list of physical CPUs in the numactl command. Those are the second hardware threads, and having them in the allowed CPU list permits the OS to schedule two processes on the different hardware threads of the same core.
Generally, with Open MPI, mapping and binding are two separate operations. To have both done on a per-core basis, the following options are necessary:
--map-by core --bind-to core
The mapper starts by default from the first core on the first socket. To limit the core choice, pass --cpu-set with the desired range. In your case, the full command should be:
mpirun --cpu-set 144-191 --map-by core --bind-to core -np 64 ./myexec param > logfile.log
You can also pass the --report-bindings option to get a nice graphical visualisation of the bindings (which in your case will be a bit hard to read...)
Note that --nooversubscribe is used to prevent the library from placing more processes than there are slots defined on the node. By default there are as many slots as logical CPUs seen by the OS, therefore passing this option does nothing in your case (64 < 384).
Here is a segment of my data. When I do read.csv(data, sep = " ") I get a data frame with columns and rows. However, this data is all of one type, so I just need either one row, one column, or a vector.
Any help is appreciated.
0 0 0 10 10 10 10 10 10 10 10 20 20 20 20 20 20 20 20 20 30 30 30 30 30 30 30 30 30 40 40 40 40 40 40 50 50 50 50 50 50 60 60 60 60 60 60 60 60 60 60 60 60 70 70 70 70 70 70 70 70 70 80 80 80 80 80 80 80 80 80 90 90 90 90 90 90 90 90 90 100 100 100 100 100 100 100 100 100 110 110 110 110 110 110 110 110 110 110 110 110 120 120 120 120 120 130 130 130 130 130 130 140 140 140 140 140 140 140 140 150 150 150
What about:
scan(data, what = numeric(), sep = " ")
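On the sample above this gives one plain vector instead of a data frame, for example (a sketch, assuming the sample is saved in a file named "mydata.txt", a hypothetical name):
x <- scan("mydata.txt", what = numeric(), sep = " ")
head(x)
# [1]  0  0  0 10 10 10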
When you use read.csv (or any read.xxx function) R understands that you are trying to import a table, so it creates a data frame with columns and rows from the contents of the file. You can read the values directly as a vector with scan, or convert the data frame afterwards:
Load the data:
df <- read.csv(data, sep = " ")
Convert it to a numeric vector:
as.numeric(unlist(df))
I copied and pasted your example into a text file called "test" and was able to import it using this code:
testdf = read.csv('test', sep=" ", header = FALSE)
When I first tried, I just got a bunch of columns with no data.
For me, the key was the argument:
header = FALSE
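If you then want a plain vector rather than a one-row data frame, something like this should work (a sketch, reusing testdf from above):
v <- unname(unlist(testdf))  # flatten the data frame and drop the auto-generated names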
Hope this helps!
I am new to R and I am not sure how to do the following in R. I am able to do it in Excel, but not in R. Can anybody help me with this?
I want to get the cumulative sum of the counter value once it reaches 64.
The following is my data:
x
57
57
57
57
57
57
58
58
58
58
61
61
62
62
1
1
11
16
16
16
16
16
16
22
22
22
27
28
I want the cumulative sum after the count reaches 64. I am not sure how to do that in R.
The following is the output I require:
x
57
57
57
57
57
57
58
58
58
58
61
61
62
62
65
65
76
81
81
81
81
81
81
87
87
87
92
93
Can anybody help with doing this?
Thanks
If your data is resetting at 64 and continuing on, and you want the running total to keep counting past 64, try:
diffs <- c(dat$x[1], diff(dat$x))
diffs[diffs < 0] <- 64 + diffs[diffs < 0]
cumsum(diffs)
An explanation:
The first line takes all the differences from one number to the next, starting with the initial value (in the example case, 57).
The second line finds all negative diff values and changes them to 64 plus what they were: if the counter went from 62 to 2, we need to add on 4 (2 to hit 64 and then 2 more).
The third line takes the cumsum to give the final values.
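Applied to the data above, this reproduces the required output (a sketch, assuming the values are stored in a data frame dat with a column x):
dat <- data.frame(x = c(57, 57, 57, 57, 57, 57, 58, 58, 58, 58, 61, 61, 62, 62,
                        1, 1, 11, 16, 16, 16, 16, 16, 16, 22, 22, 22, 27, 28))
diffs <- c(dat$x[1], diff(dat$x))           # first value, then the step to each following value; the wrap shows up as -61
diffs[diffs < 0] <- 64 + diffs[diffs < 0]   # -61 becomes 3 (2 to reach 64, then 1 more)
cumsum(diffs)
# 57 57 57 57 57 57 58 58 58 58 61 61 62 62 65 65 76 81 81 81 81 81 81 87 87 87 92 93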
I have a structure
struct
{
unsigned char data[6]; // switches
unsigned char name[12]; // entry name
unsigned char desc[16]; // entry description
} TOC; // table of contents
and a vector<unsigned char> midiData of 41 bytes, 34 of which represent the values that should fill the struct TOC above, starting from byte 6 (0-indexed).
So I do:
memcpy(&TOC, &midiData[6], 34);
This compiles, but what I get is the expected characters plus unwanted ones. Where's the problem?
EDIT:
vector midiData contains:
240 117 38 9 85 0 1 0 0 0 0 0 84 101 115 116 32 78 97 109 101 32 49 32 84 101 115 116 32 68 101 115 99 114 105 112 116 105 111 110 247
TOC.name contains: Test Name 1
TOC.desc contains: Test Description5èBVÄÿ5¤¦l