Shift rows of a large matrix horizontally in R

I am looking for a way to shift the rows of a square matrix horizontally. In particular, my question concerns the case where the matrix is very large, say 500*500 or 1000*1000, but I am giving a small 5*5 example here to make it clear. Assume we have the following matrix:
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
16 17 18 19 20
21 22 23 24 25
I would like to shift the rows horizontally so that I get the following matrix, with the empty cells filled with zeros:
1 7 13 19 25
2 8 14 20 0
3 9 15 0 0
4 10 0 0 0
5 0 0 0 0
Writing code for a small matrix such as this is easy in R, but I am looking for a solution that scales to the very large matrices I pointed out above. Any help would be appreciated.

The example suggests you want to create a new matrix whose jth column is the jth row of the original, shifted to the left j-1 places and padded with zeros on the right, as in this calculation with a 10,000 X 10,000 matrix:
n <- 1e4
a <- matrix(seq_len(n^2), n, byrow=TRUE)
system.time({
  b <- matrix(sapply(seq_len(nrow(a)),
                     function(i) c(a[i, i:ncol(a)], rep(0, i - 1))),
              n, n)
})
   user  system elapsed
   0.97    0.00    0.99
(That's using a single thread and reflects a typical run out of many test runs.) One second for a matrix with 100,000,000 entries isn't bad. It's a big RAM hog though, so you might want to modify the code if the input is a sparse matrix so that it outputs a sparse matrix, too.
Reflecting on this, it occurred to me that avoiding the concatenation c() and just copying in place should be faster, assuming one could initialize a matrix of zeros extremely quickly. That turns out to be the case (and the code is even simpler):
system.time({
  b <- matrix(0, nrow(a), ncol(a))
  for (i in seq_len(nrow(a))) b[1:(n + 1 - i), i] <- a[i, i:ncol(a)]
})
   user  system elapsed
   0.62    0.00    0.62
It's about 50% faster. Since the loop overhead will be relatively small and the body of the loop is (presumably) an optimized vector copy, it's unlikely an appreciably faster single-threaded solution exists.
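As noted above, if the input is sparse you would want a sparse output as well. Here is a minimal sketch of that idea, assuming the input is a sparse matrix from the Matrix package (shift_sparse is a made-up name, not part of the original answer): it reads the triplet representation, keeps only the entries on or to the right of the diagonal, and maps a[i, j] to b[j - i + 1, i].
library(Matrix)
shift_sparse <- function(a) {
  tr <- as(a, "TsparseMatrix")   # triplet form; the @i and @j slots are 0-based
  keep <- tr@j >= tr@i           # only entries on or right of the diagonal survive the shift
  sparseMatrix(i = tr@j[keep] - tr@i[keep] + 1,  # new row = how far right of the diagonal
               j = tr@i[keep] + 1,               # new column = the original row
               x = tr@x[keep],
               dims = dim(a))
}
# e.g. shift_sparse(rsparsematrix(1000, 1000, density = 0.01))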

Related

Is there a way to create a permutation of a vector without using the sample() function in R?

I hope you are having a nice day. I would like to know whether there is a way to create a permutation (rearrangement) of the values in a vector in R.
My professor provided us with an assignment in which we are supposed to create functions for a randomization test, one using sample() to create a permutation and one not using the sample() function. So far all of my efforts have been fruitless, as any answer that I can find always resorts to the sample() function. I have tried several other methods, such as indexing with runif() and writing my own functions, but to no avail. Alas, I have accepted defeat and come here for salvation.
While using the sample() function, the code looks like:
#create the groups
a <- c(2, 5, 5, 6, 6, 7, 8, 9)
b <- c(1, 1, 2, 3, 3, 4, 5, 7, 7, 8)
#create a permutation of the combined vector without replacement using the sample() function
permsample <- sample(c(a, b), replace = FALSE)
permsample
[1] 2 5 6 1 7 7 3 8 6 3 5 9 2 7 4 8 1 5
And, for reference, the entire code of my function looks like:
PermutationTtest <- function(a, b, P){
  sample.t.value <- t.test(a, b)$statistic
  perm.t.values <- matrix(rep(0, P), P, 1)
  N <- length(a)
  M <- length(b)
  for (i in 1:P) {
    permsample <- sample(c(a, b), replace = FALSE)
    pgroup1 <- permsample[1:N]
    pgroup2 <- permsample[(N + 1):(N + M)]
    perm.t.values[i] <- t.test(pgroup1, pgroup2)$statistic
  }
  return(mean(perm.t.values))
}
How would I achieve the same thing, but without using the sample() function and within the confines of base R? The only hint my professor gave was "use indices." Thank you very much for your help and have a nice day.
You can use runif() to generate a value between 1 and one more than the number of elements still remaining, and floor() returns the integer part of that number, giving a valid position. At each iteration, I decrease the range of the random number to choose from, append the element at the rn-th position of the remaining vector to the new one, and remove it.
a <- c(2, 5, 5, 6, 6, 7, 8, 9)
b <- c(1, 1, 2, 3, 3, 4, 5, 7, 7, 8)
c <- c(a, b)
index <- length(c)
perm <- c()
for (i in 1:length(c)) {
  rn <- floor(runif(1, min = 1, max = index + 1)) # max = index + 1 so every remaining position 1..index can be drawn
  perm <- append(perm, c[rn])
  c <- c[-rn]
  index <- index - 1
}
It is easier to see what is going on if we use consecutive numbers:
a <- 1:8
b <- 9:17
ab <- c(a, b)
ab
# [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
Now draw 17 (length(ab)) random numbers and use them to order ab:
rnd <- runif(length(ab))
ab[order(rnd)]
# [1] 5 13 11 12 6 1 17 3 10 2 8 16 7 4 9 15 14
rnd <- runif(length(ab))
ab[order(rnd)]
# [1] 14 11 5 15 10 7 13 9 17 8 2 6 1 4 16 12 3
For each permutation just draw another 17 random numbers.
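To plug this into the function from the question, one option is to wrap the idea in a small helper (a sketch; permute_without_sample is just an assumed name, not something the posts define):
permute_without_sample <- function(x) {
  x[order(runif(length(x)))]   # random ranks give a uniformly random rearrangement
}
# inside PermutationTtest, the sample() line could then become:
# permsample <- permute_without_sample(c(a, b))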

R triangular numbers function

While working on a small program for calculating the right triangular number that fulfils an equation, I stumbled over a page that holds documentation on the function Triangular()
Triangular function
When I tried to use this, RStudio said it couldn't find it, and I can't seem to find any other information about which package it could be in.
Does this function even exist and/or are there other ways to fill a vector with triangular numbers?
Here is a base R solution to define your custom triangular number generator, i.e.,
myTriangular <- function(n) choose(seq(n),2)
or
myTriangular <- function(n) cumsum(seq(n)-1)
such that
> myTriangular(10)
[1] 0 1 3 6 10 15 21 28 36 45
If you would like to use Triangular() from package Zseq, then please try
Zseq::Triangular(10)
such that
> Zseq::Triangular(10)
Big Integer ('bigz') object of length 10:
[1] 0 1 3 6 10 15 21 28 36 45
It's pretty easy to do it yourself:
triangular <- function(n) sapply(1:n, function(x) sum(1:x))
So you can do:
triangular(10)
# [1] 1 3 6 10 15 21 28 36 45 55
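For large n you can also skip the sapply() loop entirely with the closed form T_k = k(k+1)/2 (a sketch; the function name is made up):
triangularClosed <- function(n) {
  k <- seq_len(n)
  k * (k + 1) / 2    # T_1, T_2, ..., T_n
}
triangularClosed(10)
# [1]  1  3  6 10 15 21 28 36 45 55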

R: Sample unique X, Y pairs without duplication of Xs and Ys

I am having a hard time figuring out how to program this in R. Given a number of X and Y pairs, such as
X Y
9 1
1 2
12 3
8 4
9 4
4 5
16 6
18 7
5 8
11 9
4 10
6 11
6 12
14 13
18 13
20 13
13 14
20 15
20 16
I need to randomly sample n pairs that fulfil the condition that the Xs and Ys are unique. For instance, if n=3 and using the data above, the combinations (9,1) (4,5) (4,10) or (1,2) (14,13) (20,13) would be invalid because X=4 or Y=13 is duplicated within each of those solutions. However, (9,1) (1,2) and (8,4) would be a valid solution because the Xs and Ys are all unique. Any help will be moooooost welcome.
If you start by sampling (randomizing) the rows of your original data, then subset only those rows where neither X nor Y is duplicated, and finally select the first, last, or any n (= 3) rows (you could use sample again), you should be fine, I think.
set.seed(1) # for reproducibility
head(subset(df[sample(nrow(df)),], !duplicated(X) & !duplicated(Y)), 3)
# X Y
#6 4 5
#7 16 6
#10 11 9
In response to the comment by @Richo64, saying that this approach will not randomly select the pairs:
It does sample the pairs randomly, because the first (innermost) thing I do is
df[sample(nrow(df)),]
which samples the rows of the data randomly. Once we have done that, it is a random process which of, say, the two 4s in column X comes first and therefore remains in the data, because the other 4 is removed as a duplicated entry in X.
The same applies to values in Y.
It's obvious, then, that after the sampling and subsetting you are free to choose any 3 rows of the remaining data, and even if you always selected the first 3 rows, you would still get a random selection that differs every time you run it (except when it coincidentally samples the same rows again).
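For completeness, here is a minimal reproducible version of the answer above (a sketch: df is rebuilt from the pairs listed in the question, and the rows you get will depend on the seed and R version):
df <- data.frame(
  X = c(9, 1, 12, 8, 9, 4, 16, 18, 5, 11, 4, 6, 6, 14, 18, 20, 13, 20, 20),
  Y = c(1, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13, 13, 14, 15, 16)
)
set.seed(1)
head(subset(df[sample(nrow(df)), ], !duplicated(X) & !duplicated(Y)), 3)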

Altering a large distance matrix to be just three columns

I have a large data frame/.csv that is a matrix with 42 columns and 110,357,407 rows. It was derived from the x and y coordinates of two datasets of points, one with 41 points and another with 110,357,407, and the values in the rows represent the distances between these two sets of points (the distance of each point in list 1 to every single point in list 2). The first column is a list of points (from 1 to 110,357,407). An excerpt from the matrix is below.
V1 V2 V3 V4 V5 V6 V7
1 38517.05 38717.8 38840.16 38961.37 39281.06 88551.03 88422.62
2 38514.05 38714.79 38837.15 38958.34 39278 88545.48 88417.09
3 38511.05 38711.79 38834.14 38955.3 39274.94 88539.92 88411.56
4 38508.05 38708.78 38831.13 38952.27 39271.88 88534.37 88406.03
5 38505.06 38705.78 38828.12 38949.24 39268.83 88528.82 88400.5
6 38502.07 38702.78 38825.12 38946.21 39265.78 88523.27 88394.97
7 38499.08 38699.78 38822.12 38943.18 39262.73 88517.72 88389.44
8 38496.09 38696.79 38819.12 38940.15 39259.68 88512.17 88383.91
9 38493.1 38693.8 38816.12 38937.13 39256.63 88506.62 88378.38
10 38490.12 38690.8 38813.12 38934.11 39253.58 88501.07 88372.85
11 38487.14 38687.81 38810.13 38931.09 39250.54 88495.52 88367.33
12 38484.16 38684.83 38807.14 38928.07 39247.5 88489.98 88361.8
13 38481.18 38681.84 38804.15 38925.06 39244.46 88484.43 88356.28
14 38478.21 38678.86 38801.16 38922.04 39241.43 88478.88 88350.75
15 38475.23 38675.88 38798.17 38919.03 39238.39 88473.34 88345.23
16 38472.26 38672.9 38795.19 38916.03 39235.36 88467.8 88339.71
My issue is that I would like to change this matrix into just 3 columns: the first column would be like the first column of the matrix (the 110,357,407 point IDs), the second would identify which of the 41 points each distance refers to, and the third would be the distance between those two points. So it would look something like this:
Back Pres Dist
1 1 3486
2 1 3456
3 1 3483
4 1 3456
5 1 3429
6 1 3438
7 1 3422
8 1 3427
9 1 3428
(After the distances between all of the back points and the first pres value are listed, pres will change to 2 and will eventually work its way up to 41.)
I realize that this will output a hugely ridiculous number of rows, but this is the format that I need to run some processes that are outside of R.
I tried using this code
cols.Output <- data.frame(col = rep(colnames(output3), each = nrow(output3)),
                          row = rep(rownames(output3), ncol(output3)),
                          value = as.vector(output3))
But there wasn’t the same number of rows for each column, so I received an error (and I don’t think it would have really worked for my pres column needs anyway). I tried experimenting with some of the rbind.fill and cbind.fill functions (the one in plyr and the ones that others have come up with on the forum). I also looked into melting and reshaping, but I was very confused about the functions and couldn’t figure out how to implement them appropriately (or whether they are even appropriate for what I need). I would really appreciate any help on this, as I’ve been struggling with it for a long time.
Edit: Just to be a little more clear about what I need. Take these two smaller data sets
back <- 1 dataset with 5 sets of x, y points
pres <- 1 dataset with 3 sets of x, y points
Calculating distances between these two data frames generates the initial matrix:
Back 1 2 3
1 3427 3444 3451
2 3432 3486 3476
3 3486 3479 3486
4 3449 3438 3484
5 3483 3486 3486
And my desired output would look like this:
Back Pres Dist
1 1 3427
2 1 3432
3 1 3486
4 1 3449
5 1 3483
1 2 3444
2 2 3486
3 2 3479
4 2 3438
5 2 3486
1 3 3451
2 3 3476
3 3 3486
4 3 3484
5 3 3486
Yes, it looks like this is the kind of problem generally solved with some combination of melt and cast in the reshape2 package. That said, with 100+ million rows, I'm not sure that that's the most efficient way to go in this case.
You could do it all manually as follows. I'll assume your data frame is called df, and the distances are in columns 2 to 42. See if this works.
d <- unlist(df[-1]) # put all the distances into a vector
newdf <- cbind(expand.grid(back=seq_len(nrow(df)), pres=seq_len(ncol(df) - 1)), d)
This will probably die unless you have tons of memory. The same holds for any simple solution though, since you have about 4.5 billion elements in the vector of distances (41 x 110,357,407). You can work on subsets of the full dataset at a time to get around this problem.
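To see what those two lines produce, here is the same idea applied to the small 5 x 3 example from the question (a sketch; toy is a made-up name for that example data):
toy <- data.frame(Back = 1:5,
                  p1 = c(3427, 3432, 3486, 3449, 3483),
                  p2 = c(3444, 3486, 3479, 3438, 3486),
                  p3 = c(3451, 3476, 3486, 3484, 3486))
d <- unlist(toy[-1])                       # stack the distance columns into one vector
cbind(expand.grid(Back = seq_len(nrow(toy)),
                  Pres = seq_len(ncol(toy) - 1)),
      Dist = d)
# reproduces the Back / Pres / Dist layout shown in the question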
Here's how to use melt on a small example:
require(reshape2)
a <- matrix(rnorm(9), nrow = 3)
a[, 1] <- 1:3 ## Pretending these are one set of points
rownames(a) <- a[, 1] ## We'll put them as rownames instead of a column
melt(a[, -1]) ## And omit that column when melting
If you have memory issues, you could write a for loop and do it in pieces, writing each to a file when they're completed.
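One way to do that piecewise writing (a rough sketch: df and the output file name are assumed, with column 1 holding the Back IDs and columns 2 onward the 41 distance columns):
out_file <- "distances_long.csv"            # hypothetical output path
for (j in 2:ncol(df)) {
  chunk <- data.frame(Back = df[[1]],       # one Pres column at a time, ~110 million rows each
                      Pres = j - 1,
                      Dist = df[[j]])
  write.table(chunk, out_file, sep = ",", row.names = FALSE,
              col.names = (j == 2), append = (j > 2))
}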

Re-sample a data frame with panel dimension

I have a data set consisting of 2000 individuals. For each individual i = 1, ..., 2000, the data set contains n repeated situations. Letting d denote this data set, each row of d is indexed by i and n. Among other variables, d has a variable pid which takes on an identical value for an individual across the different rows (situations).
Taking into consideration the panel nature of the data, I want to re-sample d (as in bootstrap):
with replacement,
store each re-sampled data set as a data frame
I considered using the sample function but could not make it work. I am a new user of R and have no programming skills.
The data set consists of many variables, but all the variables have numeric values. The data set is as follows.
pid x y z
1 10 2 -5
1 12 3 -4.5
1 14 4 -4
1 16 5 -3.5
1 18 6 -3
1 20 7 -2.5
2 22 8 -2
2 24 9 -1.5
2 26 10 -1
2 28 11 -0.5
2 30 12 0
2 32 13 0.5
The first six rows are for the first person, for which pid=1, and the next six rows, with pid=2, are different observations for the second person.
This should work for you:
z <- replicate(100,
               d[d$pid %in% sample(unique(d$pid), 2000, replace = TRUE), ],
               simplify = FALSE)
The result z will be a list of dataframes you can do whatever with.
EDIT: this is a little wordy, but it will deal with duplicated rows. replicate has its obvious use of performing a given operation a set number of times (in the example below, 4). I then sample the unique values of pid (in this case 3 of those values, with replacement) and extract the rows of d corresponding to each sampled value. The combination of do.call to rbind and lapply deals with the duplicates that are not handled well by the above code. Thus, instead of generating data frames with potentially different lengths, this code generates a data frame for each sampled pid and then uses do.call("rbind", ...) to stick them back together within each iteration of replicate.
z <- replicate(4,
               do.call("rbind", lapply(sample(unique(d$pid), 3, replace = TRUE),
                                       function(x) d[d$pid == x, ])),
               simplify = FALSE)
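A small self-contained check using the example data from the question (a sketch; boot is just an assumed name for the result):
d <- data.frame(pid = rep(1:2, each = 6),
                x = seq(10, 32, by = 2),
                y = 2:13,
                z = seq(-5, 0.5, by = 0.5))
set.seed(1)
boot <- replicate(4,
                  do.call("rbind", lapply(sample(unique(d$pid), 2, replace = TRUE),
                                          function(p) d[d$pid == p, ])),
                  simplify = FALSE)
sapply(boot, nrow)   # each replicate keeps all 6 rows of every sampled pid, duplicates included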
