I am trying to control where my MPI code executes.
There are several ways to do so: taskset, dplace, numactl, or mpirun options like --bind-to or -cpu-set.
The machine is shared memory, with 16 NUMA nodes of 2 x 12 cores each (so 24 cores per node):
> numactl -H
available: 16 nodes (0-15)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 192 193 194 195 196 197 198 199 200 201 202 203
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 204 205 206 207 208 209 210 211 212 213 214 215
node 2 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 216 217 218 219 220 221 222 223 224 225 226 227
... (output truncated)
node 15 cpus: 180 181 182 183 184 185 186 187 188 189 190 191 372 373 374 375 376 377 378 379 380 381 382 383
node distances:
node 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
0: 10 50 65 65 65 65 65 65 65 65 79 79 65 65 79 79
1: 50 10 65 65 65 65 65 65 65 65 79 79 65 65 79 79
2: 65 65 10 50 65 65 65 65 79 79 65 65 79 79 65 65
3: 65 65 50 10 65 65 65 65 79 79 65 65 79 79 65 65
4: 65 65 65 65 10 50 65 65 65 65 79 79 65 65 79 79
5: 65 65 65 65 50 10 65 65 65 65 79 79 65 65 79 79
6: 65 65 65 65 65 65 10 50 79 79 65 65 79 79 65 65
7: 65 65 65 65 65 65 50 10 79 79 65 65 79 79 65 65
8: 65 65 79 79 65 65 79 79 10 50 65 65 65 65 65 65
9: 65 65 79 79 65 65 79 79 50 10 65 65 65 65 65 65
10: 79 79 65 65 79 79 65 65 65 65 10 50 65 65 65 65
11: 79 79 65 65 79 79 65 65 65 65 50 10 65 65 65 65
12: 65 65 79 79 65 65 79 79 65 65 65 65 10 50 65 65
13: 65 65 79 79 65 65 79 79 65 65 65 65 50 10 65 65
14: 79 79 65 65 79 79 65 65 65 65 65 65 65 65 10 50
15: 79 79 65 65 79 79 65 65 65 65 65 65 65 65 50 10
My code does not take advantage of the shared memory; I would like to use it as distributed memory. But the processes seem to move and get too far from their data, so I would like to bind them and see if the performance is better.
What I have tried so far:
The classic call: mpirun -np 64 ./myexec param > logfile.log
Now I want to bind the run to the last nodes, let's say 12 to 15, with dplace or numactl (I do not see a major difference between them...):
mpirun -np 64 dplace -c144-191,336-383 ./myexec param > logfile.log
mpirun -np 64 numactl --physcpubind=144-191,336-383 -l ./myexec param > logfile.log
(the main difference between the two is the -l of numactl, which binds the memory, but I am not even sure it makes a difference...)
So, they both work well; the processes are bound where I wanted them. BUT looking closer at each process, it appears that some are allocated on the same core, so each is using only 50% of that core! This happens even though the number of available cores is larger than the number of processes! This is not good at all.
So I tried adding some mpirun options like --nooversubscribe, but it changes nothing... I do not understand that. I also tried --bind-to none (to avoid conflicts between mpirun and dplace/numactl), -cpus-per-proc 1 and -cpus-per-rank 1... none of it solved the problem.
So I tried with mpirun options only:
mpirun -cpu-set 144-191 -np 64 ./myexec param > logfile.log
but the -cpu-set option is not extensively documented, and I did not find a way to bind one process per core.
The Question: can someone help me get one process per core, on the cores that I want?
Omit 336-383 from the list of physical CPUs in the numactl command. Those are the second hardware threads, and having them on the allowed CPU list permits the OS to schedule two processes on the two hardware threads of the same core.
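Applied to the numactl variant from the question, that means (the same command, just without the SMT siblings):
mpirun -np 64 numactl --physcpubind=144-191 -l ./myexec param > logfile.log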
Generally, with Open MPI, mapping and binding are two separate operations. To have both done on a per-core basis, the following options are necessary:
--map-by core --bind-to core
The mapper starts by default from the first core on the first socket. To limit the core choice, pass --cpu-set from-to. In your case, the full command should be:
mpirun --cpu-set 144-191 --map-by core --bind-to core -np 64 ./myexec param > logfile.log
You can also pass the --report-bindings option to get a nice graphical visualisation of the bindings (which in your case will be a bit hard to read...)
Note that --nooversubscribe is used to prevent the library from placing more processes than there are slots defined on the node. By default there are as many slots as logical CPUs seen by the OS, therefore passing this option does nothing in your case (64 < 384).
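If the --report-bindings output is too dense to read, a minimal sanity check (assuming Linux, where /proc/self/status exposes the affinity mask) is to have each rank print its allowed CPU list and look for duplicates:
mpirun --cpu-set 144-191 --map-by core --bind-to core -np 64 \
    grep Cpus_allowed_list /proc/self/status | sort | uniq -d
# uniq -d prints only duplicated lines, so no output means no two ranks share a mask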
I am trying to use the designmatch package for cardinality matching of a treated group (n=88) to two untreated controls each. The output returns 88x3=264 group_id values and 88 t_id, but only 88 c_id (instead of 88x2=176). I understand designmatch does not use replacement by default, so I don't understand why I only get 88 c_id.
out <- bmatch(t_ind = t_ind, near_exact = near_exact, n_controls=2)
out
$obj_total
[1] -88
$obj_dist_mat
NULL
$t_id
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43
[44] 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86
[87] 87 88
$c_id
[1] 108 308 279 131 220 147 231 437 194 278 153 445 383 290 482 105 241 335 238 202 289 301 323 312 159 262 176 315 443 200 377 393
[33] 885 581 927 398 217 117 240 448 263 554 525 854 169 352 317 119 386 414 518 477 424 469 280 286 297 513 316 97 936 609 387 455
[65] 168 702 284 432 349 379 446 543 552 293 851 185 713 501 232 641 997 561 499 310 485 466 675 647
$group_id
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43
[44] 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86
[87] 87 88 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 11 12 12 13 13 14 14 15 15 16 16 17 17 18 18 19 19 20 20 21
[130] 21 22 22 23 23 24 24 25 25 26 26 27 27 28 28 29 29 30 30 31 31 32 32 33 33 34 34 35 35 36 36 37 37 38 38 39 39 40 40 41 41 42 42
[173] 43 43 44 44 45 45 46 46 47 47 48 48 49 49 50 50 51 51 52 52 53 53 54 54 55 55 56 56 57 57 58 58 59 59 60 60 61 61 62 62 63 63 64
[216] 64 65 65 66 66 67 67 68 68 69 69 70 70 71 71 72 72 73 73 74 74 75 75 76 76 77 77 78 78 79 79 80 80 81 81 82 82 83 83 84 84 85 85
[259] 86 86 87 87 88 88
Thanks for any help
Answer
The function does not seem to work properly, so this is likely not possible. The package also does not appear to be actively maintained. My recommendation is to move on to a different package, like MatchIt.
Details
I had an extensive look at the package's source code and made several observations.
The group_id element in the output does not seem to be based on anything
In the output, you indeed see a group_id that appears to have the correct dimensions. However, the numbers don't seem to represent anything meaningful:
group_id_t = 1:(length(t_id))
group_id_c = sort(rep(1:(length(t_id)), n_controls))
group_id = c(group_id_t, group_id_c)
As you can see, they create a vector group_id_t that runs from 1 to length(t_id) (the IDs of the treated group, see t_id in your output). Next, they create a vector group_id_c in which each of those IDs is repeated n_controls times. The final group_id is just the concatenation of the two.
I looked around for a matrix where you could enter this, or one whose number of rows/columns matches the length of group_id. I cannot find one. The numbers in group_id seem to carry no meaning.
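To see it concretely, here is a minimal sketch (my own, just re-running the source lines above with the dimensions from your question):
t_id <- 1:88
n_controls <- 2
group_id_t <- 1:length(t_id)                         # 1, 2, ..., 88
group_id_c <- sort(rep(1:length(t_id), n_controls))  # 1, 1, 2, 2, ..., 88, 88
group_id <- c(group_id_t, group_id_c)                # 264 values, matching your output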
The optimizer seems to optimize for n_controls or fewer
The bmatch function has several steps. First, it calculates some initial parameters. Second, it puts those parameters in an optimizer (in the default case: glpk using Rglpk::Rglpk_solve_LP). Third, it does some calculations to create the output.
When you vary n_controls (1, 2, 10, etc.), it changes only one of the initial parameters (bvec). This parameter essentially carries the information on how many matches should be found, and it is entered as a constraint into the optimizer. However, I get the impression that something is wrong with bvec: it is entered with the condition <=, meaning the optimizer only has to find a solution with n_controls or fewer matches. I tried looking deeper into how the initial parameters are determined, but that is several hundred lines of code, so I gave up.
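As a toy illustration of why a <= constraint allows this (my own sketch, not the package's actual model): give the solver binary selection variables, no reward in the objective for selecting more of them, and an "at most n_controls" constraint, and an all-zero solution is a perfectly valid optimum.
library(Rglpk)
res <- Rglpk_solve_LP(
  obj   = c(0, 0, 0),                    # nothing rewards selecting more controls
  mat   = matrix(1, nrow = 1, ncol = 3), # the constraint x1 + x2 + x3
  dir   = "<=",                          # "at most" rather than "exactly"
  rhs   = 2,                             # analogous to n_controls = 2
  types = c("B", "B", "B"),              # binary selection variables
  max   = TRUE)
res$solution                             # c(0, 0, 0) satisfies the constraint and is optimal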
Final thoughts
The package was last updated on 2018-06-18, which suggests to me that the authors haven't looked at it for a while. You can/should contact them and see what they say. Alternatively, there are other packages, like MatchIt, that have been validated extensively; you could switch to one of those instead.
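For completeness, a hedged sketch of the MatchIt route (the names dat, treat, x1, x2 are illustrative placeholders, not from your data):
library(MatchIt)
m <- matchit(treat ~ x1 + x2, data = dat,
             method = "nearest", ratio = 2)  # 2 controls per treated unit
md <- match.data(m)                          # matched dataset with subclass IDs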
I have to get the order of one vector to sort another vector. The point is, I don't want my function to be stable; in fact, I'd like to randomize the order of equal values. Any idea how to do this in R in finite time? :D
Thanks for any help.
You can do this in base R using order. order will take multiple variables to sort on; if you make the second one a random variable, it will randomize the ties. Here is an example using the built-in iris data. The variable Sepal.Length has several ties for the second-lowest value. Here are some:
iris$Sepal.Length[c(9,39,43)]
[1] 4.4 4.4 4.4
Now let's sort just that variable (stable sort) and then sort with a random secondary sort.
order(iris$Sepal.Length)
[1] 14 9 39 43 42 4 7 23 48 3 30 12 13 25 31 46 2 10 35
[20] 38 58 107 5 8 26 27 36 41 44 50 61 94 1 18 20 22 24 40
[39] 45 47 99 28 29 33 60 49 6 11 17 21 32 85 34 37 54 81 82
[58] 90 91 65 67 70 89 95 122 16 19 56 80 96 97 100 114 15 68 83
[77] 93 102 115 143 62 71 150 63 79 84 86 120 139 64 72 74 92 128 135
[96] 69 98 127 149 57 73 88 101 104 124 134 137 147 52 75 112 116 129 133
[115] 138 55 105 111 117 148 59 76 66 78 87 109 125 141 145 146 77 113 144
[134] 53 121 140 142 51 103 110 126 130 108 131 106 118 119 123 136 132
order(iris$Sepal.Length, sample(150,150))
[1] 14 43 39 9 42 48 7 4 23 3 30 25 31 46 13 12 35 38 107
[20] 10 58 2 8 41 27 61 94 5 36 44 50 26 18 22 99 40 20 47
[39] 24 45 1 33 60 29 28 49 85 11 6 32 21 17 90 81 91 54 34
[58] 37 82 67 122 95 65 70 89 100 96 56 114 80 16 19 97 93 15 68
[77] 143 102 83 115 150 62 71 120 79 84 63 139 86 72 135 74 64 92 128
[96] 149 69 98 127 88 134 101 57 137 73 104 147 124 138 112 129 116 75 52
[115] 133 148 55 111 105 117 59 76 87 66 78 146 141 109 125 145 144 113 77
[134] 140 53 121 142 51 103 126 130 110 108 131 106 136 119 118 123 132
Without the random secondary sort, positions 2, 3, and 4 are in order (stable). With the random secondary sort, they are jumbled.
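If you need this repeatedly, a one-line wrapper is enough (the function name is my own):
order_random_ties <- function(x) order(x, sample(length(x)))
iris$Sepal.Length[order_random_ties(iris$Sepal.Length)]  # sorted, ties in random order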
Try fct_reorder in the forcats package to order one factor by another. If you want to introduce randomness as well, try fct_reorder2 with .y = runif(length(your_vector)).
(I'm apparently thinking in strange directions today - fct_reorder will reorder the levels of a factor. If that's what you are after, this may help. Otherwise, order is the better approach.)
This question already has answers here:
Binning across multiple categories
(2 answers)
Closed 5 years ago.
I am very new to R but have been asked to use it by my professor to analyze our data. We are trying to run a changepoint analysis on a large data set, which I know how to do, but first we want to place the data into time bins of 30 seconds. Our trials are 20 minutes long, so we should get a total of 40 bins. We have columns for time, Flow, and MAP, and we would like to average the Flow and MAP values within each 30-second bin. This would condense 1120-2000 data points into a much cleaner 40 points. We are having trouble binning the data and don't even know where to start. Once binned, we would like to generate a table of those new 40 values (40 for MAP and 40 for Flow) so that we can use the changepoint package to find the changepoint in our set. We believe clip() could possibly be what we need.
Sorry if this is too confusing or too vague, we have no programming experience whatsoever.
Edit: I believe this is different from the bacteria question because I want a direct output into a table, rather than interpolating from a graph and then into a table.
Here is a sample from our data:
RawMin Flow MAP
2.9982 51 77
3.0113 110 80
3.0240 84 77
3.0393 119 75
3.0551 93 75
3.0692 136 73
3.0839 81 73
3.0988 58 72
3.1138 125 71
3.1285 89 72
3.1432 160 73
3.1576 87 74
3.1714 128 74
3.1860 90 74
3.2015 63 76
3.2154 120 76
3.2293 65 76
3.2443 156 78
3.2585 66 78
3.2723 130 78
3.2876 89 77
3.3029 111 77
3.3171 90 75
3.3329 100 76
3.3482 127 76
3.3618 69 78
3.3751 155 78
3.3898 90 79
3.4041 127 80
3.4176 103 80
3.4325 87 79
3.4484 134 78
3.4637 57 77
3.4784 147 78
3.4937 75 78
3.5080 137 78
3.5203 123 78
3.5337 99 80
3.5476 170 80
3.5620 90 79
3.5756 164 78
3.5909 85 78
3.6061 164 77
3.6203 103 77
3.6348 140 79
3.6484 152 79
3.6611 79 80
3.6742 184 82
3.6872 128 81
3.7017 123 82
3.7152 176 81
3.7295 74 81
3.7436 153 80
3.7572 85 80
3.7708 115 79
3.7847 187 78
3.7980 105 78
3.8108 175 78
3.8252 124 79
3.8392 171 79
3.8528 127 78
3.8669 138 79
3.8811 198 79
3.8944 109 80
3.9080 171 80
3.9214 137 79
3.9341 109 81
3.9455 193 83
3.9575 108 85
3.9707 163 84
3.9853 136 82
4.0005 121 81
4.0164 164 79
4.0311 73 79
4.0450 171 78
4.0591 105 79
4.0716 117 79
4.0833 210 81
4.0940 103 85
4.1041 193 88
4.1152 163 84
4.1310 145 82
4.1486 126 79
4.1654 118 77
4.1811 130 75
4.1975 83 74
4.2127 176 73
4.2277 72 74
4.2424 177 74
4.2569 90 75
4.2705 148 76
4.2841 148 77
4.2986 123 77
4.3130 150 76
4.3280 71 77
4.3433 176 76
4.3583 90 76
4.3727 138 77
4.3874 136 79
4.4007 106 80
4.4133 167 83
4.4247 119 87
4.4360 123 88
4.4496 141 85
4.4673 117 84
4.4841 133 80
4.5005 83 79
4.5166 156 77
4.5324 97 77
4.5463 182 77
4.5605 110 79
4.5744 187 80
4.5882 121 81
4.6024 142 81
4.6171 178 81
4.6313 96 80
4.6452 180 80
4.6599 107 80
4.6741 151 79
4.6876 137 80
4.7009 132 82
4.7141 199 80
4.7279 91 81
4.7402 172 83
4.7531 172 80
4.7660 128 84
4.7785 197 83
4.7909 122 84
4.8046 129 84
4.8187 176 82
4.8328 102 81
4.8448 184 81
4.8556 145 83
4.8657 123 84
4.8768 138 86
4.8885 143 82
4.9040 135 81
4.9198 112 78
4.9362 134 77
4.9515 152 76
4.9651 83 76
4.9785 177 78
4.9912 114 79
5.0037 127 80
5.0167 200 81
5.0297 104 81
5.0429 175 81
5.0559 123 81
5.0685 106 81
5.0809 176 81
5.0937 113 82
5.1064 191 81
5.1181 178 79
5.1297 121 79
5.1404 176 80
5.1506 214 83
5.1606 132 85
5.1709 149 83
5.1829 175 80
5.1981 103 79
5.2128 169 76
5.2283 97 75
5.2431 149 74
5.2575 109 74
5.2709 97 74
5.2842 195 75
5.2975 104 75
5.3106 143 77
5.3231 185 76
5.3361 140 77
5.3487 132 78
5.3614 162 79
5.3750 98 78
5.3900 137 78
5.4047 108 76
5.4202 94 76
5.4341 186 75
5.4475 82 77
5.4608 157 80
5.4739 176 81
5.4867 90 83
5.4989 123 86
Assuming RawMin is time in minutes, you could do something like this...
df2 <- aggregate(df, #the data frame
by=list(cut(df$RawMin,seq(0,10,0.5))), #the bins (see below)
mean) #the aggregating function
df2
Group.1 RawMin Flow MAP
1 (2.5,3] 2.998200 51.0000 77.00000
2 (3,3.5] 3.251682 103.5588 76.20588
3 (3.5,4] 3.748994 135.9722 79.75000
4 (4,4.5] 4.240434 132.0857 79.25714
5 (4.5,5] 4.749781 140.1892 80.43243
6 (5,5.5] 5.246556 140.9231 78.89744
Binning is done with the cut function - here into 0.5-minute intervals between 0 and 10, which you might want to change. The bin names are the intervals - e.g. (2.5,3] means greater than 2.5 and less than or equal to 3.
If you don't want RawMin included in the output, just use df[,-1] in the input to aggregate.
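From there the binned means can go straight into the changepoint package, e.g. (a sketch with default settings; cpt.mean and cpts are the package's standard functions):
library(changepoint)
fit_flow <- cpt.mean(df2$Flow)  # changepoint(s) in the binned Flow means
fit_map <- cpt.mean(df2$MAP)    # same for MAP
cpts(fit_flow)                  # estimated changepoint position(s), in bin indices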
I am attempting to bind a subset of rows from one dataframe to subsets of rows from 11 other dataframes, repeatedly, through a large dataset. E.g.
df=JAN df=FEB
Day Jan Day Feb
1 70 1 66
2 70 2 66
3 70 3 66
4 70 4 66
5 70 5 66
6 70 6 66
7 70 7 66
8 70 8 66
9 70 9 66
10 70 10 66
11 70 11 66
12 70 12 66
13 70 13 66
14 70 14 66
15 70 15 66
16 70 16 66
17 70 17 66
18 70 18 66
19 70 19 66
20 70 20 66
21 70 21 66
22 70 22 66
23 70 23 66
24 70 24 66
25 70 25 66
26 70 26 66
27 70 27 66
28 70 28 66
29 70
30 70
31 70
............................
In the example above, what I want to do is cbind rows 1:31 from df Jan with rows 1:28 from df Feb, through to rows 1:31 from df Dec (not shown), then continue the cbind for the next 31 days in Jan (i.e. rows 32:62 from df Jan), then rows 29:56 from df Feb, and so on.
There are 12 data frames in total (one for each month) that take the form as shown. There are 120 months of data in each data frame.
My output should be a single column and look like:
70 (repeated 31 times)
66 (repeated 28 times)
......................
I have trawled this site and others for help, but can't find anything directly applicable here. Any suggestions?
We create a grouping variable with gl for every 2 rows, use that in tapply, and unlist the result to get the expected output (df2 here is the wide month-by-month data frame shown in the other answer below):
unlist(tapply(as.matrix(df2), as.numeric(gl(nrow(df2), 2, nrow(df2)))[row(df2)],
FUN=unlist), use.names=FALSE)
#[1] 70 70 64 64 58 58 66 66 61 61 59 59 53 53 56 56 69 69 77 77 74 74 72 72 71
#[26] 71 57 57 49 49 62 62 66 66 58 58 55 55 44 44 73 73 87 87 69 69 64 64
Update
Based on the updated dataset
lst <- mget(toupper(month.abb[1:2]))
#Here I am using only JAN and FEB, so `[1:2]`
#For the OP's dataset, we need
# lst <- mget(toupper(month.abb))
library(data.table)
DT <- rbindlist(lapply(lst, function(x)
        transform(x, GROUP= cumsum(c(TRUE,diff(Day)<0)))),
        idcol=TRUE, use.names=FALSE) # bind positionally so each month's value column stacks under 'Jan'
unlist(split(DT$Jan, DT$GROUP), use.names=FALSE)
# [1] 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70 70
# [26] 70 70 70 70 70 70 66 66 66 66 66 66 66 66 66 66 66 66 66 66 66 66 66 66 66
# [51] 66 66 66 66 66 66 66 66 66 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
# [76] 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 65 65 65 65 65 65 65 65 65 65
#[101] 65 65 65 65 65 65 65 65 65 65 65 65 65 65 65 65 65 65
data
JAN <- data.frame(Day= rep(1:31, 2), Jan =rep(c(70, 42), each=31))
FEB <- data.frame(Day= rep(1:28, 2), Feb =rep(c(66, 65), each=28))
You can do it like this:
df <- read.table(text = " Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
1 70 64 58 66 61 59 53 56 69 77 74 72
2 70 64 58 66 61 59 53 56 69 77 74 72
3 71 57 49 62 66 58 55 44 73 87 69 64
4 71 57 49 62 66 58 55 44 73 87 69 64")
row_pairs <- lapply(seq(1, nrow(df), by=2), function(x) df[x:(x+1), ])
vec_of_pairs <- do.call(c, lapply(row_pairs, unlist))
unname(vec_of_pairs)
[1] 70 70 64 64 58 58 66 66 61 61 59 59 53 53 56 56 69 69 77 77 74 74 72 72 71 71
[27] 57 57 49 49 62 62 66 66 58 58 55 55 44 44 73 73 87 87 69 69 64 64
The max.print option (see getOption("max.print")) can be used to limit the number of values printed by a single call. For example:
options(max.print=20)
print(cars)
prints only the first 10 rows of the 2 columns. However, max.print doesn't work very well with lists. Especially when they are deeply nested, the amount of output printed to the console can still be practically unbounded.
Is there any way to specify a harder cutoff on how much can be printed to the screen? For example, by specifying a number of lines after which printing is interrupted? Something that also protects against printing huge recursive objects?
Based in part on this question, I would suggest just building a wrapper for print that uses capture.output to regulate what is printed:
print2 <- function(x, nlines = 10, ...)
  cat(head(capture.output(print(x, ...)), nlines), sep = "\n")
For example:
> print2(list(1:10000,1:10000))
[[1]]
[1] 1 2 3 4 5 6 7 8 9 10 11 12
[13] 13 14 15 16 17 18 19 20 21 22 23 24
[25] 25 26 27 28 29 30 31 32 33 34 35 36
[37] 37 38 39 40 41 42 43 44 45 46 47 48
[49] 49 50 51 52 53 54 55 56 57 58 59 60
[61] 61 62 63 64 65 66 67 68 69 70 71 72
[73] 73 74 75 76 77 78 79 80 81 82 83 84
[85] 85 86 87 88 89 90 91 92 93 94 95 96
[97] 97 98 99 100 101 102 103 104 105 106 107 108
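Unlike max.print, this also caps the output for deeply nested lists, e.g.:
print2(list(a = list(b = 1:10000)), nlines = 5)  # prints at most 5 lines, regardless of nesting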