Multiple unions in R

I am trying to do unions on several lists (these are actually GRanges objects, not integer lists, but the principle is the same), basically one big union.
x <- sort(sample(1:20, 9))
y <- sort(sample(10:30, 9))
z <- sort(sample(20:40, 9))
mylists <- c("x", "y", "z")
emptyList <- list()
sapply(mylists, FUN = function(x) { emptyList <- union(emptyList, get(x)) })
That is just returning the list contents.
I need the equivalent of
union(x, union(y, z))
[1] 2 3 5 6 7 10 13 15 20 14 19 21 24 27 28 29 26 31 36 39
but written in an extensible form that does not name each variable explicitly.

A paradigm that will work with GRanges, though not necessarily memory-efficient, is
Reduce(union, list(x, y, z))
The argument might also be a GRangesList(x, y, z) for appropriate values of x etc.
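For the extensible form the question asks for, Reduce can also be combined with mget to look the objects up by name; a minimal sketch reusing the mylists character vector from the question:
# Fetch x, y, z by name from the global environment, then fold union over them
Reduce(union, mget(mylists, envir = globalenv()))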

x<-sort(sample(1:20, 9))
y<-sort(sample(10:30, 9))
z<-sort(sample(20:40, 9))
Both of the below produce the same output
unique(c(x,y,z))
[1] 1 2 4 6 7 8 11 15 17 14 16 18 21 23 26 28 29 20 22 25 31 32 35
union(x,union(y,z))
[1] 1 2 4 6 7 8 11 15 17 14 16 18 21 23 26 28 29 20 22 25 31 32 35

unique(unlist(mget(mylists, globalenv())))
will do the trick. (Possibly changing the environment given in the call to mget, as required.)
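For instance, a minimal sketch of the environment argument mattering (the local() wrapper is only there to create a non-global environment for illustration):
local({
  x <- 1:3; y <- 3:5; z <- 5:7
  mylists <- c("x", "y", "z")
  # environment() here refers to the local frame, not the global one;
  # use.names = FALSE drops the names unlist would otherwise attach
  unique(unlist(mget(mylists, envir = environment()), use.names = FALSE))
})
[1] 1 2 3 4 5 6 7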

I think it would be cleaner to separate the "dereference" part from the n-ary union part, e.g.
dereflist <- function(l) lapply(l, get)
nunion <- function(l) Reduce(union, l)
But if you look at how union works, you'll see that you could also do
nunion <- function(l) unique(do.call(c, l))
which is faster in all the cases I've tested (much faster for long lists).
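A rough way to check the speed claim on your own data (the list sizes here are made up; system.time is base R):
# Build a list of 200 long integer vectors and time both n-ary unions
big <- replicate(200, sample(1e5, 1e4), simplify = FALSE)
system.time(Reduce(union, big))
system.time(unique(do.call(c, big)))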

This can be done by using the reduce function in the purrr package.
purrr::reduce(list(x, y, z), union)

OK, this works, but I am curious why sapply seems to have its own scope:
x <- sort(sample(1:20, 9))
y <- sort(sample(10:30, 9))
z <- sort(sample(20:40, 9))
mylists <- c("x", "y", "z")
emptyList <- vector()
for (f in mylists) { emptyList <- union(emptyList, get(f)) }
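The sapply attempt fails because the anonymous function evaluates in its own environment: emptyList <- ... inside it creates a local binding on each call, and the outer emptyList is never modified. A minimal sketch using the superassignment operator <<- to make the difference visible (shown only to illustrate the scoping; the Reduce idioms above are the cleaner fix):
emptyList <- vector()
invisible(sapply(mylists, function(f) {
  # <<- searches the enclosing environments and reassigns the outer emptyList;
  # a plain <- here would only bind a copy local to this function call
  emptyList <<- union(emptyList, get(f))
}))
emptyList  # now holds the accumulated union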


Split a vector list with M elements into 2 lists of N and M-N elements

I created a vector list, aa, with 50 elements, and I need to split aa into two vector lists called bb and cc: bb has the first 20 elements of aa, while cc has the last 30. How do I do it?
Creation of original vector list
aa <- list(sample(1:50))
aa
#[[1]]
# [1] 29 30 39 45 17 11 43 14 24 34 3 1 28 2 21 23 6 31 5 27 44 7 4 46 49 22 33 38 50 36 15 48 8 16 25 42 13 41 47
#[40] 37 26 32 35 9 18 10 20 40 19 12
Sorry all, I know my question is really basic. Maybe it is because the question is so simple that the solution is not easily found on the internet.
Since I couldn't find an existing question answering this directly, I am adding an answer. We can first extract the vector from the list with [[ and then select ranges of elements within it with [.
bb <- aa[[1]][1:20]
cc <- aa[[1]][21:50]
We can also use head and tail to select the first 20 and the last 30 elements, respectively.
bb <- head(aa[[1]], 20)
cc <- tail(aa[[1]], 30)
We can use split to create a list of vectors
lst1 <- split(aa[[1]], rep(1:2, c(20, 30)))
and extract the vector with [[
lst1[[1]]
lst1[[2]]
It can be extended to any number of splits (a generalized version) where we just need to change the rep, as in the sketch below.
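For example, a minimal sketch of a three-way split (the sizes 10/15/25 are made up for illustration):
# rep builds the grouping vector: 10 ones, 15 twos, 25 threes
sizes <- c(10, 15, 25)
lst2 <- split(aa[[1]], rep(seq_along(sizes), sizes))
lengths(lst2)
# 1  2  3
#10 15 25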

How can I create a matrix in R with random numbers that do not repeat within a row but may repeat across rows?

How can I create a matrix with random numbers that are not repeated within a row, like this:
5 29 24 20 31 33
2 18 35 4 11 21
30 40 22 14 2 28
33 14 4 18 5 10
10 33 15 2 28 18
7 22 9 25 31 20
12 29 31 22 37 26
7 31 34 28 19 23
7 34 11 6 31 28
My code:
matrix(sample(1:42, 60, replace = FALSE), ncol = 6)
But I receive this error message:
Error in sample.int(length(x), size, replace, prob) : cannot take a
sample larger than the population when 'replace = FALSE'
which makes sense: with only the numbers 1~42 available, sample cannot draw 60 values without replacement.
You cannot generate all 60 numbers with one sample call, since you want to allow numbers to repeat across rows. Therefore you have to do one sample per row. @Jav provided very neat code to accomplish this in a comment on the question:
t(sapply(1:10, function(x) sample(1:42, 6, replace = FALSE)))
If you want a different sample in each row, then replicate can help you; but replicate (like pretty much everything else in R) works naturally columnwise, so you have to transpose the result:
t(replicate(10, sample(1:42, 6)))
Note that replace = FALSE is the default, so I didn't include it, and that after transposing, 10 becomes the number of rows and 6 becomes the number of columns.
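As a quick sanity check, a minimal sketch verifying that no row contains a duplicate (the name m is made up here):
m <- t(replicate(10, sample(1:42, 6)))
# anyDuplicated returns 0 for a row with no repeated values
any(apply(m, 1, anyDuplicated) > 0)
[1] FALSE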

Evaluating combinations with a vectorized function in Julia

In Julia, a vectorized function call with the dot syntax is used for element-wise manipulation.
Running f.(x) means f(x[1]), f(x[2]), ... are executed element by element.
However, suppose I have a function that takes two arguments, say g(x, y).
I want g(x[1], y[1]), g(x[2], y[1]), g(x[3], y[1]), ..., g(x[1], y[2]), g(x[2], y[2]), g(x[3], y[2]), ...
Is there any way to evaluate all combination of x and y?
Matt's answer is good, but I'd like to provide an alternative using an array comprehension:
julia> x = 1:5
y = 10:10:50
[i + j for i in x, j in y]
5×5 Array{Int64,2}:
11 21 31 41 51
12 22 32 42 52
13 23 33 43 53
14 24 34 44 54
15 25 35 45 55
In my opinion the array comprehension can often be more readable and more flexible than broadcast and reshape.
Yes, reshape y such that it is orthogonal to x. The . vectorization uses broadcast to do its work. I imagine this as "extruding" singleton dimensions across all the other dimensions.
That means that for vectors x and y, you can evaluate a function over all combinations of their elements simply by reshaping one of them:
julia> x = 1:5
y = 10:10:50
(+).(x, reshape(y, 1, length(y)))
5×5 Array{Int64,2}:
11 21 31 41 51
12 22 32 42 52
13 23 33 43 53
14 24 34 44 54
15 25 35 45 55
Note that the shape of the array matches the orientation of the arguments; x spans the rows and y spans the columns since it was transposed to a single-row matrix.

R: efficiently add up tables whose rows are in a different order

At some point in my code, I get a list of tables that looks much like this:
[[1]]
cluster_size start end number p_value
13 2 12 13 131 4.209645e-233
12 1 12 12 100 6.166824e-185
22 11 12 22 132 6.916323e-143
23 12 12 23 133 1.176194e-139
13 1 13 13 31 3.464284e-38
13 68 13 117 34 3.275941e-37
23 78 23 117 2 4.503111e-32
....
[[2]]
cluster_size start end number p_value
13 2 12 13 131 4.209645e-233
12 1 12 12 100 6.166824e-185
22 11 12 22 132 6.916323e-143
23 12 12 23 133 1.176194e-139
13 1 13 13 31 3.464284e-38
....
While I don't show the full tables here, I know they are all the same size. What I want to do is make one table where I add up the p-values. The problem is that the $cluster_size, $start, $end, and $number columns don't necessarily correspond to the same row in different list elements, so I can't just do a simple sum.
The brute-force way to do this is to: 1) make a blank table, 2) copy in the appropriate $cluster_size, $start, $end, and $number columns from the first table, and pull the correct p-values from all the tables using which() statements. Is there a more clever way of doing this? Or is this pretty much it?
Edit: I was asked for a dput file of the data. It's located here:
http://alrig.com/code/
In the sample case, the order of the rows happens to match. That will not always be the case.
Seems like you can do this in two steps:
1) Convert your list to a data.frame.
2) Use any of the split-apply-combine approaches to summarize.
Assuming your data was named X, here's what you could do:
library(plyr)
#need to convert to data.frame since all of your list objects are of class matrix
XDF <- as.data.frame(do.call("rbind", X))
ddply(XDF, .(cluster_size, start, end, number), summarize, sump = sum(p_value))
#-----
cluster_size start end number sump
1 1 12 12 100 5.550142e-184
2 1 13 13 31 3.117856e-37
3 1 22 22 1 9.000000e+00
...
29 105 23 117 2 6.271469e-16
30 106 22 146 13 7.266746e-25
31 107 23 146 12 1.382328e-25
Lots of other aggregation techniques are covered here. I'd look at the data.table package if your data is large.
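A minimal data.table sketch of the same aggregation (assuming, as above, that X is the list of matrices):
library(data.table)
# rbindlist stacks the per-table data.frames; by= then groups the key columns
XDT <- rbindlist(lapply(X, as.data.frame))
XDT[, .(sump = sum(p_value)), by = .(cluster_size, start, end, number)]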

R sorts a vector of its own accord

df.sorted <- c("binned_walker1_1.grd", "binned_walker1_2.grd", "binned_walker1_3.grd",
"binned_walker1_4.grd", "binned_walker1_5.grd", "binned_walker1_6.grd",
"binned_walker2_1.grd", "binned_walker2_2.grd", "binned_walker3_1.grd",
"binned_walker3_2.grd", "binned_walker3_3.grd", "binned_walker3_4.grd",
"binned_walker3_5.grd", "binned_walker4_1.grd", "binned_walker4_2.grd",
"binned_walker4_3.grd", "binned_walker4_4.grd", "binned_walker4_5.grd",
"binned_walker5_1.grd", "binned_walker5_2.grd", "binned_walker5_3.grd",
"binned_walker5_4.grd", "binned_walker5_5.grd", "binned_walker5_6.grd",
"binned_walker6_1.grd", "binned_walker7_1.grd", "binned_walker7_2.grd",
"binned_walker7_3.grd", "binned_walker7_4.grd", "binned_walker7_5.grd",
"binned_walker8_1.grd", "binned_walker8_2.grd", "binned_walker9_1.grd",
"binned_walker9_2.grd", "binned_walker9_3.grd", "binned_walker9_4.grd",
"binned_walker10_1.grd", "binned_walker10_2.grd", "binned_walker10_3.grd")
One would expect the order of this vector to be 1:length(df.sorted), but that appears not to be the case. It looks like R internally sorts the vector according to its own logic but tries really hard to display it the way it was created (as seen in the output).
order(df.sorted)
[1] 37 38 39 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
[26] 23 24 25 26 27 28 29 30 31 32 33 34 35 36
Is there a way to "reset" the ordering to 1:length(df.sorted)? That way, ordering, and the output of the vector would be in sync.
Use the mixedsort (or mixedorder) function from the gtools package:
require(gtools)
mixedorder(df.sorted)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
[28] 28 29 30 31 32 33 34 35 36 37 38 39
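To actually reorder the vector, either form works; a minimal usage sketch:
# mixedsort returns the sorted vector directly,
# while mixedorder returns the permutation to apply yourself
identical(mixedsort(df.sorted), df.sorted[mixedorder(df.sorted)])
[1] TRUE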
Construct it as an ordered factor:
> df.new <- ordered(df.sorted,levels=df.sorted)
> order(df.new)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ...
EDIT:
After @DWin's comment, I want to add that it is not even necessary to make it an ordered factor; just a factor is enough if you give the right order of levels:
> df.new2 <- factor(df.sorted,levels=df.sorted)
> order(df.new2)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ...
The difference will be noticeable when you use those factors in a regression analysis, where they can be treated differently. The advantage of ordered factors is that they let you use comparison operators such as < and >, which sometimes makes life a lot easier.
> df.new2[5] < df.new2[10]
[1] NA
Warning message:
In Ops.factor(df.new2[5], df.new2[10]) : < not meaningful for factors
> df.new[5] < df.new[10]
[1] TRUE
Isn't this simply the same thing you get with all lexicographic sorts (as with, e.g., ls on directories), where walker10_foo sorts before walker1_foo?
The easiest way around, in my book, is to use a consistent number of digits, i.e. I would change to binned_walker01_1.grd and so on, inserting a 0 for the one-digit counts.
In response to @DWin's comment on Dirk's answer: the data are always putty in your hands. "This is R. There is no if. Only how." -- Simon Blomberg
You can add 0 like so:
df.sorted <- gsub("(walker)([[:digit:]]{1}_)", "\\10\\2", df.sorted)
If you needed to add 00 (padding to three digits), you do it like this:
df.sorted <- gsub("(walker)([[:digit:]]{1}_)", "\\10\\2", df.sorted)
df.sorted <- gsub("(walker)([[:digit:]]{2}_)", "\\10\\2", df.sorted)
...and so on.
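Alternatively, a minimal sketch that pads every walker number to a fixed width in one pass with sprintf (the two-digit width is an assumption matching this file set):
# Extract each walker number with a lookbehind, zero-pad it, and write it back
nums <- regmatches(df.sorted, regexpr("(?<=walker)[0-9]+", df.sorted, perl = TRUE))
regmatches(df.sorted, regexpr("(?<=walker)[0-9]+", df.sorted, perl = TRUE)) <-
  sprintf("%02d", as.integer(nums))
head(df.sorted, 3)
[1] "binned_walker01_1.grd" "binned_walker01_2.grd" "binned_walker01_3.grd"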
