MPI_Scatter values with repetitions

For example, I have 6 MPI nodes forming a 1D grid.
On the master process I have some values for the edges of the grid:
[1 2 3 4 5]
And I want to distribute these values so that each value goes to both nodes adjacent to the corresponding edge. That is, I want the following data distribution among the nodes:
1 | 1 2 | 2 3 | 3 4 | 4 5 | 5
What is the best way to do this? It seems this cannot be done with a single MPI_Scatter call.
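One approach that seems to fit is MPI_Scatterv with per-rank counts and displacements. Below is a minimal sketch, not the only way to do it: rank 0 is assumed to be the master, and it writes each edge value twice into the send buffer so that adjacent ranks' segments can share a value without the segments overlapping. Run with 6 processes to reproduce the example.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* 6 in the example */

    int nedges = size - 1;      /* a 1D grid of N nodes has N-1 edges */
    int counts[size], displs[size];

    /* End nodes touch one edge, interior nodes touch two. */
    for (int i = 0; i < size; i++)
        counts[i] = (i == 0 || i == size - 1) ? 1 : 2;
    displs[0] = 0;
    for (int i = 1; i < size; i++)
        displs[i] = displs[i - 1] + counts[i - 1];

    /* Root stores each edge value twice: {1,1,2,2,3,3,4,4,5,5},
       so the scattered segments never overlap. */
    int *sendbuf = NULL;
    if (rank == 0) {
        sendbuf = malloc(2 * nedges * sizeof *sendbuf);
        for (int i = 0; i < nedges; i++)
            sendbuf[2 * i] = sendbuf[2 * i + 1] = i + 1;
    }

    int recvbuf[2];             /* at most two adjacent edges per node */
    MPI_Scatterv(sendbuf, counts, displs, MPI_INT,
                 recvbuf, counts[rank], MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d:", rank);
    for (int i = 0; i < counts[rank]; i++)
        printf(" %d", recvbuf[i]);
    printf("\n");

    free(sendbuf);
    MPI_Finalize();
    return 0;
}

Each interior rank receives two values and the two end ranks receive one, which is exactly the 1 | 1 2 | 2 3 | 3 4 | 4 5 | 5 layout.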

How do I change the order of multiple grouped values in a row dependent on another variable in that row in R?

I need some help conditionally sorting/switching data based on a factor variable.
I'm not sure whether this is a typical use case that I just can't formulate well enough for a search engine to show me a solution, or whether it is genuinely that niche, but I haven't found anything yet.
I currently have a dataframe like this:
id group a1 a2 a3 a4 b1 b2 b3 b4
1 1 2 6 6 3 4 4 6 4
2 2 5 2 2 2 2 5 2 3
3 1 6 3 3 1 3 6 4 1
4 1 4 8 4 2 7 8 8 9
5 2 3 1 1 4 2 1 1 7
For context this is from a psychological experiment where people went through two variations of a task and the order of those conditions was determined by the experimental group they were assigned to. The columns represent different measurements from different trials and are currently grouped together for the same variable and in chronological order, meaning a1,a2,a3,a4 are essentially the same variable at consecutive time points, same with b1,b2,b3,b4.
I want to split them up by condition so that, regardless of which group (= which order of tasks) someone went through, data from one condition comes first in the dataframe, and columns are still grouped together by variable and in chronological order within that condition. It should essentially look like this:
id group c1a1 c1a2 c2a1 c2a2 c1b1 c1b2 c2b1 c2b2
1 1 2 6 6 3 4 4 6 4
2 2 2 2 5 2 2 3 2 5
3 1 6 3 3 1 3 6 4 1
4 1 4 8 4 2 7 8 8 9
5 2 1 4 3 1 1 7 2 1
So essentially, for group 1 everything stays the same, since they happened to go through the conditions in the same order I want in the new dataframe, while for group 2 the values are switched: the originally second half of the values for each variable is moved in front of the originally first half.
I hope I formulated the problem in a way people can understand.
My real dataset is a bit more complicated: it has 180 columns, or 178 excluding id and group.
I have 13 variables, some of which were measured over two conditions with 5 trials each, and some of which have those 5 trials for each of the 2 main conditions but also have 2 additional measurements per condition, where the order was determined by the same group variable.
(We essentially asked participants to do the task again in two certain ways, which allowed us to see whether they were capable of doing it like that if they wanted to, under the circumstances of both main conditions.)
So there are an additional 4 columns for some variables which need to be treated separately. It should look like this when transformed (x and y are the 2 extra tasks where only b was measured once):
id group c1a1 c1a2 c2a1 c2a2 c1b1 c1b2 c1bx c1by c2b1 c2b2 c2bx c2by
1 1 2 6 6 3 4 4 3 7 6 4 4 2
2 2 2 2 5 2 2 3 4 3 2 5 2 2
3 1 6 3 3 1 3 6 2 2 4 1 1 1
4 1 4 8 4 2 7 8 1 1 8 9 5 8
5 2 1 4 3 1 1 7 8 9 2 1 3 4
What I want to say with this is: I need a pretty general solution.
I already tried writing a function that creates two separate datasets for the groups and then merges them by id, but I got stuck on the automatic creation and naming of columns, which I can't seem to wrap my head around. dplyr is currently loaded and used for some other transformations, but since I'm not really good with it, I need to ask for your help regarding a solution with or without it. I'm still pretty new to R and this is for my bachelor thesis.
Thanks in advance!
Your question leaves a few things unclear that make it hard to answer, but here is a start that might help, or at least help clarify your problem.
It would really help if you could clarify two pieces of information: what types of column rearrangements you need, and how you identify which rows need the transformation.
I'm also wondering whether, instead of manipulating your data in its current shape, it might be more practical to change the shape of your data to better represent it, perhaps using something like pivot_longer(). I don't know how this data will ultimately be used or what the actual values indicate, but it doesn't seem very tidy in its current form; a "longer" table might be more meaningful (a sketch of that idea follows below). I'll still provide what I think is a solution to your listed problem.
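To illustrate the "longer" idea, here is a sketch (untested against your real data): it assumes the toy columns id, group, a1:a4, b1:b4 from your question, a dataframe hypothetically named df, and that, as in your example, the first half of each variable's time points belongs to the first condition a participant saw.

library(dplyr)
library(tidyr)

long <- df %>%
  pivot_longer(
    cols = -c(id, group),
    names_to = c("variable", "time"),
    names_pattern = "([ab])([0-9]+)",
    names_transform = list(time = as.integer)
  ) %>%
  mutate(
    # group 1 saw condition 1 first, group 2 saw condition 2 first
    condition = if_else((group == 1) == (time <= 2), "c1", "c2"),
    # trial number within its condition
    trial = if_else(time <= 2, time, time - 2L)
  )

From there, a pivot_wider() with condition, variable, and trial pasted into the new column names can rebuild the wide layout you describe.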
This creates some example data that looks like it reflects yours in the example table.
# Example data resembling the question's table
ID <- seq_len(10)
group <- sample(1:2, 10, replace = TRUE)
Data <- matrix(sample(1:10, 80, replace = TRUE), nrow = 10, ncol = 8)
DataFrame <- data.frame(ID = ID, Group = group, Data)
You then define the groups of columns that need to be kept together. I can't tell whether there is an automated way for you to indicate which columns are grouped, but this might get bulky if done manually. More information on what your column names actually are, and how they fall into groups, would help.
ColumnGroups <- list(One = c("X1", "X2"), Two = c("X3", "X4"),
                     Three = c("X5", "X6"), Four = c("X7", "X8"))
You can then figure out which rows need to be rearranged using a conditional. Based on your example, I'm assuming the rearranging needs to happen when the group variable equals 2, which is what I've used here.
FlipRows <- DataFrame$Group == 2
You can then have R apply the rearrangement only to the rows that need it, and define the rearrangement by the ordering of the different column groups. I know you asked for a general solution, but it is hard to identify one without knowing what types of column rearrangements you need. If it is always flipping two sets of consecutive column groups, that would be easier to define without having to type it all out (see the sketch after the next code block). What I have done here requires you to manually type out the order of column groups that the rows should be rearranged into. The SortedDataFrame object seems to be what you are looking for, but might not reflect your real data. I removed columns 1 and 2 from this operation since those are ID and group, which you don't want overwritten.
SortedDataFrame <- DataFrame
SortedDataFrame[FlipRows, -c(1, 2)] <- DataFrame[FlipRows, c(ColumnGroups$Two,
    ColumnGroups$One, ColumnGroups$Four, ColumnGroups$Three)]
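If the rearrangement is always flipping consecutive pairs of column groups, the new order can be generated instead of typed out. A sketch reusing the objects above (it assumes an even number of groups):

# swap each consecutive pair of groups: (1,2) -> (2,1), (3,4) -> (4,3), ...
pairs <- matrix(seq_along(ColumnGroups), nrow = 2)
FlippedOrder <- unlist(ColumnGroups[as.vector(pairs[2:1, ])])
SortedDataFrame[FlipRows, -c(1, 2)] <- DataFrame[FlipRows, FlippedOrder]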
This solution won't work if you need to rearrange each row differently, but it is unclear whether that is the case. Try to provide the other info requested here, and let me know where this solution falls short for you.

Problems with stem - can't stem correctly, and with identify in R using plot

I'm trying to use the stem function to create a stem-and-leaf plot, but as far as I can tell it isn't working correctly, and I don't know why.
I'm doing something like this:
d <- c(60, 85, 72, 59, 37, 75, 93, 7, 98, 63, 41, 90, 5, 17, 97)
stem(d, scale = 1)
And I'm getting a stem plot like this:
0 | 577
2 | 7
4 | 19
6 | 0325
8 | 50378
As far as I can tell there is no value 27 in d, nor does 7 appear twice...
It seems to be working incorrectly, and I don't know why. Additionally, I know there should be one more column with the numbers of observations, and my stem plot doesn't have it...
The data are being collapsed into bins of width 20, not 10. You can see that the stem column goes up in steps of 2 (tens digits 0, 2, 4, 6, 8), so all data are collapsed into ranges of 20 represented by the stem steps.
If you lengthen the plot with scale, e.g. stem(d, scale = 2), this becomes obvious and the diagram looks clearly correct. You can see that what looked like a '27' in your plot above is actually the 37:
0 | 57
1 | 7
2 |
3 | 7
4 | 1
5 | 9
6 | 03
7 | 25
8 | 5
9 | 0378

Sum variables conditionally with loop in r

I realize this is a topic that's covered somewhat well but I couldn't find anything that approaches this specific concern:
I have a df with 800 columns: 10 iterations of 80 columns (each column represents an item). Each column is named something like: 1_BL_PRE.1, 1_FU_PRE.1, 1_BL_PRE.1, 1_BL_POST.1
Where the first '1' indicates the item number and the second '1' indicates the iteration number.
What I'm trying to figure out is how to get the sums of specific groups of items from all 10 iterations.
As a short example let's say I want to take the 1st and 3rd item of BL_PRE and get the sum of all 10 iterations for those 2 items - how would I do this?
subject 1_BL_PRE.1 2_BL_PRE.1 3_BL_PRE.1 1_BL_PRE.2 2_BL_PRE.2
1 40002 3 4 3 1 2
2 40004 1 2 3 4 4
3 40006 4 3 3 3 1
4 40008 2 3 1 2 3
5 40009 3 4 1 2 3
Expected output (where A represents the sum of 1_BL_PRE.1, 3_BL_PRE.1, 1_BL_PRE.2 and so on):
subject BL_PRE_A
1 40002 12
2 40004 14
3 40006 15
4 40008 20
5 40009 12
My hunch is that the solution involves a for-loop or lapply (and I'm not familiar at all with either). I'm trying to work with apply(finaldata, 1, function(x) {sum(x ...)}) but I haven't been able to figure out the conditional statement for the sum function.
If there's an implementation with plyr I'd be really curious to see what that looks like. (And if there's a thread that answers this, apologies, just redirect me!)
Edited to include a small example and the code I'm trying to get to work.
Thanks!
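In case it helps, here is a sketch of one loop-free approach. It assumes the naming scheme described above (item_TASK_PHASE.iteration) and the question's finaldata; the item numbers and the BL_PRE task/phase are just the example's values.

# select all iterations of items 1 and 3 of BL_PRE by name pattern
items <- c(1, 3)
pattern <- paste0("^(", paste(items, collapse = "|"), ")_BL_PRE\\.")
cols <- grep(pattern, names(finaldata), value = TRUE)

# row-wise sum across all matched columns
finaldata$BL_PRE_A <- rowSums(finaldata[cols])

# note: if R prepended an "X" to the column names on import,
# adjust the pattern to e.g. "^X(1|3)_BL_PRE\\."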

filter sqlite query based on counts of pairwise interactions

I am trying to filter a somewhat involved sqlite3 query using a pairwise association table. Say I have these tables (where pet_id_x references an id in table pets):
[pets]
id | name | animal_types_id | <additional_info>
1 Spike 2
2 Fluffy 1
3 Whiskers 1
4 Spot 2
5 Garth 2
6 Hamilton 3
7 Dingus 1
8 Scales 3
. . .
. . .
[animal_types]
id | type
1 cat
2 dog
3 lizard
[successful_pairings]
pet_id_1 | pet_id_2
1 4
2 4
2 8
3 2
3 4
4 5
4 6
4 7
5 6
5 7
6 7
. .
. .
A toy example for my query would be to get the names of all dogs which meet certain constraints (from columns within the pets table) and have > 2 successful pairings with other dogs, resulting in:
name | successful pairings
Spot 6
Garth 3
As per the above, the total counts for each id need to be combined from pet_id_1 and pet_id_2 in successful_pairings, as an id may be represented for a given pairing in either column.
I am new to sql syntax, and am having trouble chaining queries together to filter based on conditions distributed across multiple tables.
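For what it's worth, here is a sketch of one way to structure the query with the tables above. It counts every pairing a dog appears in, which is the reading that matches the sample numbers (Spot's 6 includes pairings with non-dogs); any further constraints from pets columns can be added to the WHERE clause.

SELECT p.name, COUNT(*) AS successful_pairings
FROM pets p
JOIN animal_types t ON t.id = p.animal_types_id
JOIN (
    -- normalize the pair table so every id appears in one column
    SELECT pet_id_1 AS pet_id FROM successful_pairings
    UNION ALL
    SELECT pet_id_2 FROM successful_pairings
) sp ON sp.pet_id = p.id
WHERE t.type = 'dog'          -- plus any other constraints on pets
GROUP BY p.id, p.name
HAVING COUNT(*) > 2;

If you instead need to count only dog-dog pairings, keep both ids in the normalized subquery and join the partner id back through pets and animal_types as well.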

Find the upper bound of the minimum number of spanning trees needed to cover all links in the graph

My question:
Let G(V,E) be a fully connected (complete) graph, where V is the set of nodes and E is the set of links.
What is the upper bound (worst case) of the minimum number of spanning trees needed to cover all the links in the graph, if the spanning trees are sorted in lexicographic order?
As an example, for |V|=4, and thus |E|=6, G(V,E) contains the following 16 spanning trees (in lexicographic order); note that labelling the links differently may produce a different order of spanning trees.
1 2 3
1 2 4
1 2 6
1 3 4
1 3 5
1 3 6
1 4 5
1 5 6
2 3 4
2 3 5
2 4 5
2 4 6
2 5 6
3 4 6
3 5 6
4 5 6
In this case, the minimum number of spanning trees needed to cover all the links in the graph will be 5 ({1 2 3}, {1 2 4}, {1 2 6}, {1 3 4}, {1 3 5}). So all the links are included in these 5 spanning trees.
It is easy to count the number of spanning trees for a small graph, but I have problems with larger graphs, e.g., |V| > 4.
Is there any formula to compute the upper bound on the number of spanning trees needed to cover all links in the graph?
Thanks a lot.
Any spanning tree has V-1 edges, and the complete graph has V(V-1)/2 edges in total. So the lower bound is ceiling(V/2).
I think this is also an exact bound.
You should be able to find a combination of spanning trees that do not reuse each other's edges until the last steps. Think in terms of finding a spanning tree, removing its edges while still leaving the reduced graph connected, and repeating, so that new spanning trees can be embedded without destroying the connectivity.
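As a quick worked check of that bound (sketched in LaTeX, using the question's |V| = 4 example):

% each spanning tree covers |V|-1 links, and the complete graph
% has |V|(|V|-1)/2 links, so any cover needs at least
\left\lceil \frac{|V|(|V|-1)/2}{|V|-1} \right\rceil
    = \left\lceil \frac{|V|}{2} \right\rceil
% spanning trees. For |V| = 4 this gives 2, and indeed the two
% edge-disjoint trees {1 2 3} and {4 5 6} from the list above cover
% all six links, so the lexicographic greedy choice of 5 trees in
% the question is not the true minimum.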
