Say I have an even-length vector such as this:
v <- c(1,1,1,1,2,2,2,3,3,3,4,5,6,7)
It is 14 elements long. I wish to randomly sample 7 pairs of elements without replacement, with the rule that no pair may contain two of the same value.
So the following result would be acceptable:
1-2, 1-2, 1-2, 1-3, 3-4, 3-5, 6-7
I am not sure how to do this systematically. Clearly brute force would work, e.g.
set.seed(1)
v=c(1,1,1,1,2,2,2,3,3,3,4,5,6,7)
length(v)
v1<-sample(v)
pairs <- split(v1, ceiling(seq_along(v1)/2))
sapply(pairs, diff)
1 2 3 4 5 6 7
1 1 2 3 -6 -3 3
This shows that no pair has duplicate elements, since no difference is 0. In my case I need to do this thousands of times, and it's not so easy to avoid duplicates. Is there a more efficient way?
v0 <- table(v)
set.seed(2)
out <- replicate(7, sample(names(v0), size=2, prob=v0))
out
# [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,] "1" "2" "4" "1" "3" "2" "6"
# [2,] "5" "1" "7" "7" "2" "1" "1"
I use table(v) and names(v0) so that I'm guaranteed the names and the probs are in the same order. (I didn't want to assume that your actual data is structured identically.) If you need integers, then it's easy enough to use as.integer.
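For example, a minimal sketch of that conversion (out_int is just an illustrative name):
out_int <- matrix(as.integer(out), nrow = nrow(out))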
If you literally need 1-2, then
apply(out, 2, paste, collapse="-")
# [1] "1-5" "2-1" "4-7" "1-7" "3-2" "2-1" "6-1"
I'm confident that this will produce no dupes (because names(v0) is unique and the default replace=FALSE), but here's an empirical test:
set.seed(3)
l <- replicate(1e5, sample(unique(v), size=2, prob=table(v)))
any(l[1,] == l[2,])
# [1] FALSE
Here is a variation of your "brute-force" approach (better known as "hit-or-miss"):
rand.pairs <- function(v, time.out = 1000){
  n <- length(v)
  for(i in 1:time.out){
    v <- sample(v)
    first <- v[1:(n/2)]
    second <- v[(n/2+1):n]
    if(all(first != second)) return(unname(rbind(first, second)))
  }
  NULL
}
The point of time.out is to avoid infinite loops. For some input vectors a solution might be either impossible or too hard to hit upon by chance.
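For instance, here is an input for which no valid pairing exists (five 1s among eight elements, so by the pigeonhole principle at least one of the four pairs must contain two 1s); the call times out and returns NULL:
rand.pairs(c(1, 1, 1, 1, 1, 2, 3, 4))
# NULL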
Example run:
> v <- c(1,1,1,1,2,2,2,3,3,3,4,5,6,7)
> set.seed(1234)
> rand.pairs(v)
[,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,] 6 3 3 7 2 2 5
[2,] 1 4 1 1 3 1 2
It is fast enough to run thousands of times:
> library(microbenchmark)
> microbenchmark(rand.pairs(v))
Unit: microseconds
expr min lq mean median uq max neval
rand.pairs(v) 6.7 7.758 16.17517 12.166 19.747 70.877 100
Your mileage may vary, but if your machine is at all comparable, you should be able to call this function over 50,000 times per second. replicate(10000,rand.pairs(v)) takes much less than a second to run. On the other hand, if you have an input for which the constraints are harder to satisfy, a solution might require more time.
I have the following data
margin1 <- c(72,34,446,40,33,71,2,96)
margin2 <- c(70,36,455,41,36,56,2,98)
probabilities <- matrix(1/8,8,8)
Now I would like to fill the inner cells of an 8x8 matrix by multiplying each row of probabilities by the corresponding element of margin2, following this logic:
matrix <- matrix(0,8,8)
matrix[1,] <- probabilities[1,]*margin2[1]
matrix[2,] <- probabilities[2,]*margin2[2]
matrix[3,] <- probabilities[3,]*margin2[3]
matrix[4,] <- probabilities[4,]*margin2[4]
matrix[5,] <- probabilities[5,]*margin2[5]
matrix[6,] <- probabilities[6,]*margin2[6]
matrix[7,] <- probabilities[7,]*margin2[7]
matrix[8,] <- probabilities[8,]*margin2[8]
However, what makes this difficult is that the inner cells should always be integers. Therefore, I wrote the following rounding function:
rounding <- function(x) {
  output <- matrix(0,8,8)
  for(i in 1:nrow(x)){
    obj <- x[i,]
    y <- floor(obj)
    indices <- tail(order(obj-y), round(sum(obj)) - sum(y))
    y[indices] <- y[indices] + 1
    output[i,] <- y
  }
  x <- output
  return(x)
}
My expected output is the following:
matrix <- rounding(matrix)
While this ensures that the rowSums of the matrix object are equal to margin2, the colSums do not equal margin1. That, however, is exactly what I need. Is there any way to rewrite the rounding function to achieve this?
Provided I have understood you correctly, the problem you're describing is how to fill a matrix given its row and column sums (the "margins", as you call them).
In your particular case, you're trying to fill an 8x8 matrix. You have 64 unknowns but only 8 + 8 - 1 = 15 independent equations (8 row sums plus 8 column sums, minus 1 because the sum of the row sums must equal the sum of the column sums), so there is no unique solution; there are many.
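As a quick sanity check of that last requirement:
sum(margin1) == sum(margin2)
# [1] TRUE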
If matrix values can be rational numbers you can fill the matrix with values margin2_i * margin1_j / sum(margin1) for row i and column j (the denominator could equally be sum(margin2), since the two totals agree), or in R
mat <- margin2 %*% t(margin1) / sum(margin1)
mat
# [,1] [,2] [,3] [,4] [,5] [,6]
#[1,] 6.3476071 2.99748111 39.319899 3.5264484 2.90931990 6.2594458
#[2,] 3.2644836 1.54156171 20.221662 1.8136020 1.49622166 3.2191436
#[3,] 41.2594458 19.48362720 255.579345 22.9219144 18.91057935 40.6863980
#[4,] 3.7178841 1.75566751 23.030227 2.0654912 1.70403023 3.6662469
#[5,] 3.2644836 1.54156171 20.221662 1.8136020 1.49622166 3.2191436
#[6,] 5.0780856 2.39798489 31.455919 2.8211587 2.32745592 5.0075567
#[7,] 0.1813602 0.08564232 1.123426 0.1007557 0.08312343 0.1788413
#[8,] 8.8866499 4.19647355 55.047859 4.9370277 4.07304786 8.7632242
# [,7] [,8]
#[1,] 0.176322418 8.4634761
#[2,] 0.090680101 4.3526448
#[3,] 1.146095718 55.0125945
#[4,] 0.103274559 4.9571788
#[5,] 0.090680101 4.3526448
#[6,] 0.141057935 6.7707809
#[7,] 0.005037783 0.2418136
#[8,] 0.246851385 11.8488665
We can confirm that the row sums of mat are indeed equal to margin2
identical(rowSums(mat), margin2)
#[1] TRUE
and that the column sums of mat are equal to margin1
identical(colSums(mat), margin1)
#[1] TRUE
The problem is more complex if you want to restrict matrix values to only integer values. Here I would refer you to an excellent post on Mathematics that illustrates an iterative solution strategy.
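To give a flavor of the iterative idea, here is a minimal sketch of classic iterative proportional fitting (IPF), which alternately rescales rows and columns until both margins match. Note that ipf is a hypothetical helper, not the integer algorithm from the linked post, and its result is generally fractional:
ipf <- function(seed, rsums, csums, tol = 1e-9, max.iter = 1000) {
  m <- seed
  for (k in 1:max.iter) {
    m <- m * (rsums / rowSums(m))        # rescale rows to match row sums
    m <- t(t(m) * (csums / colSums(m)))  # rescale columns to match column sums
    if (max(abs(rowSums(m) - rsums)) < tol) break
  }
  m
}
m <- ipf(matrix(1, 8, 8), margin2, margin1)
all.equal(rowSums(m), margin2)
# [1] TRUE
Starting from a uniform seed this converges to the same rank-1 solution shown above; other seeds yield other matrices with the same margins. Rounding the output to integers while preserving both margins is exactly the harder problem the linked post addresses.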
I want to compute the cumulative sum for the first (n-1) columns (of an n-column matrix) and then average the values. I created a sample matrix for this task. I have the following matrix
ma = matrix(c(1:10), nrow = 2, ncol = 5)
ma
[,1] [,2] [,3] [,4] [,5]
[1,] 1 3 5 7 9
[2,] 2 4 6 8 10
I wanted to find the following
ans = matrix(c(1,2,2,3,3,4,4,5), nrow = 2, ncol = 4)
ans
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 2 3 4 5
The following is my R function.
ColCumSumsAve <- function(y){
  for(i in seq_len(dim(y)[2]-1)) {
    y[,i] <- cumsum(y[,i])/i
  }
}
ColCumSumsAve(ma)
However, when I run the above function, it doesn't produce any output. Are there any mistakes in the code?
Thanks.
There were several mistakes.
Solution
This is what I tested and what works:
colCumSumAve <- function(m) {
  csum <- t(apply(X=m, MARGIN=1, FUN=cumsum))
  res <- t(Reduce(`/`, list(t(csum), 1:ncol(m))))
  res[, 1:(ncol(m)-1)]
}
Test it with:
> colCumSumAve(ma)
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 2 3 4 5
which is correct.
Explanation:
colCumSumAve <- function(m) {
  # row-wise cumulative sums
  csum <- t(apply(X=m, MARGIN=1, FUN=cumsum))
  # Divide each column i of csum by i.
  # `/` recycles the vector 1:ncol(m) down the columns of a matrix,
  # so I transpose csum first, making the old columns the rows; the
  # recycled vector then lines up with the column index. Transposing
  # the result at the end restores the original orientation.
  res <- t(Reduce(`/`, list(t(csum), 1:ncol(m))))
  # Drop the last column for the answer. This could be done right at
  # the beginning, saving the calculation of the last column, but that
  # step is vectorized and cheap; the `apply` and `Reduce` calls are
  # the speed-limiting parts.
  res[, 1:(ncol(m)-1)]
}
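As a side note, Reduce(`/`, list(a, b)) on a two-element list is simply a / b, so the middle line could equally be written:
res <- t(t(csum) / 1:ncol(m))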
Well, okay, I could then do:
colCumSumAve <- function(m) {
  csum <- t(apply(X=m[, 1:(ncol(m)-1)], MARGIN=1, FUN=cumsum))
  t(Reduce(`/`, list(t(csum), 1:(ncol(m)-1))))
}
or:
colCumSumAve <- function(m) {
  m <- m[, 1:(ncol(m)-1)] # remove last column
  csum <- t(apply(X=m, MARGIN=1, FUN=cumsum))
  t(Reduce(`/`, list(t(csum), 1:ncol(m))))
}
This is actually the more optimized solution, then.
Original Function
Your original function makes only assignments in the for-loop and doesn't return anything.
So I first copied your input into res, processed it with your for-loop, and then returned res.
ColCumSumsAve <- function(y){
  res <- y
  for(i in seq_len(dim(y)[2]-1)) {
    res[,i] <- cumsum(y[,i])/i
  }
  res
}
However, this gives:
> ColCumSumsAve(ma)
[,1] [,2] [,3] [,4] [,5]
[1,] 1 1.5 1.666667 1.75 9
[2,] 3 3.5 3.666667 3.75 10
The problem is that the cumulative sums are calculated down the columns instead of across the rows: cumsum(y[,i]) accumulates within column i, and more generally cumsum treats a matrix like a vector, walking column-wise through it.
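A quick illustration of that column-wise behaviour (note the dim attribute is dropped, too):
cumsum(matrix(1:4, 2))
# [1]  1  3  6 10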
Corrected Original Function
After some fiddling, I realized the correct solution is:
ColCumSumsAve <- function(y){
  # create an empty matrix with the dimensions of y minus the last column
  res <- matrix(NA, nrow(y), ncol(y)-1)
  for (i in 1:(nrow(y))) {           # go through rows
    for (j in 1:(ncol(y)-1)) {       # go through columns
      res[i, j] <- sum(y[i, 1:j])/j  # for each position do this
    }
  }
  res # return the `res`ult by calling it at the end!
}
with the testing:
> ColCumSumsAve(ma)
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 2 3 4 5
Note: dim(y)[2] is ncol(y) and dim(y)[1] is nrow(y); also, instead of seq_len(), 1: is shorter and, I guess, even slightly faster.
Note: My first solution should be faster, since it uses apply, vectorized cumsum, and Reduce; for-loops in R are slower.
Late note: I'm not so sure anymore that the first solution is faster. Since R 3.x, for-loops seem to have gotten faster, and Reduce can be the speed-limiting function, sometimes incredibly slow.
k <- t(apply(ma, 1, cumsum))[, -ncol(ma)]
for (i in 1:ncol(k)){
  k[,i] <- k[,i]/i
}
k
This should work.
All you need is rowMeans:
nc <- ncol(ma) - 1
cbind(ma[,1], sapply(2:nc, function(x) rowMeans(ma[,1:x])))
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 2 3 4 5
Here's how I did it
> t(apply(ma, 1, function(x) cumsum(x) / 1:length(x)))[,-NCOL(ma)]
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 2 3 4 5
This applies the cumsum function row-wise to the matrix ma and then divides by the correct length to get the average (cumsum(x) and 1:length(x) will have the same length). Then simply transpose with t and remove the last column with [,-NCOL(ma)].
The reason why there is no output from your function is because you aren't returning anything. You should end the function with return(y) or simply y as Marius suggested. Regardless, your function doesn't seem to give you the correct response anyway.
Consider the following vector x and list s
x <- c("apples and pears", "one banana", "pears, oranges, and pizza")
s <- strsplit(x, "(,?)\\s+")
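For reference, the pattern splits on optional commas plus whitespace, so s looks like this:
s
# [[1]]
# [1] "apples" "and"    "pears"
#
# [[2]]
# [1] "one"    "banana"
#
# [[3]]
# [1] "pears"   "oranges" "and"     "pizza"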
The desired result will be the following, but please keep reading.
> t(sapply(s, `length<-`, 4))
# [,1] [,2] [,3] [,4]
#[1,] "apples" "and" "pears" NA
#[2,] "one" "banana" NA NA
#[3,] "pears" "oranges" "and" "pizza"
That's fine, it's a good way to do it. But R's vectorization is one of its best features, and I'd like to see if I can do this with recursive indexing, that is, using only [ subscript indexing.
I want to start with the following and use the row and column indices to turn the list s into a 3x4 matrix. So I'm calling cbind on the list s and starting from there.
(cb <- cbind(s))
# s
# [1,] Character,3
# [2,] Character,2
# [3,] Character,4
class(cb[1])
#[1] "list"
is.recursive(cb)
#[1] TRUE
I've gotten this far, but now I'm struggling with the higher dimensions. Here's the first row; from here I want to unlist the rest of the matrix using [ and [[ indexing.
w <- character(nrow(cb)+nrow(cb)^2)
dim(w) <- c(3,4)
w[cbind(1, 1:3)] <- cb[[1]]
# [,1] [,2] [,3] [,4]
#[1,] "apples" "and" "pears" ""
#[2,] "" "" "" ""
#[3,] "" "" "" ""
At level 2 it gets more difficult. I've been doing things like this
> cb[[c(1,2,1), exact = TRUE]]
# Error in cb[[c(1, 2, 1), exact = TRUE]] :
# recursive indexing failed at level 2
> cb[[cbind(1,2,1)]]
# Error in cb[[cbind(1, 2, 1)]] : recursive indexing failed at level 2
Here's an example of how the indexing proceeds. I've tried all kinds of combinations of w[[cbind(1, 1:2)]] and the like.
w[cbind(1, 1:3)] <- cb[[1]]
w[cbind(2, 1:2)] <- cb[[2]]
w[cbind(3, 1:4)] <- cb[[3]]
From the empty matrix w, this produces the result
# [,1] [,2] [,3] [,4]
#[1,] "apples" "and" "pears" ""
#[2,] "one" "banana" "" ""
#[3,] "pears" "oranges" "and" "pizza"
Is it possible to use recursive indexing on all levels, so that I can unlist cb into an empty matrix directly from when it was a list? i.e. put the three w[] <- cb[[]] lines into one.
I'm asking this because it gets to the heart of matrix structures in R. It's about learning the indexing, and not about finding an alternative solution to my problem.
You can use the rbind.fill.matrix function from the plyr package.
library(plyr)
rbind.fill.matrix(lapply(s, rbind))
This returns
1 2 3 4
[1,] "apples" "and" "pears" NA
[2,] "one" "banana" NA NA
[3,] "pears" "oranges" "and" "pizza"
Note that this does use as.matrix internally: rbind.fill.matrix calls matrices[] <- lapply(matrices, as.matrix)
If you wanted to bypass the intermediary steps, you can just use my cSplit function, like this:
cSplit(as.data.table(x), "x", "(,?)\\s+", fixed = FALSE)
# x_1 x_2 x_3 x_4
# 1: apples and pears NA
# 2: one banana NA NA
# 3: pears oranges and pizza
as.matrix(.Last.value)
# x_1 x_2 x_3 x_4
# [1,] "apples" "and" "pears" NA
# [2,] "one" "banana" NA NA
# [3,] "pears" "oranges" "and" "pizza"
Under the hood, however, that still does require creating a matrix and filling it in. It uses matrix indexing to fill in the values, so it is quite fast.
A manual approach would look something like:
myFun <- function(invec, split, fixed = TRUE) {
  s <- strsplit(invec, split, fixed)
  Ncol <- vapply(s, length, 1L)
  M <- matrix(NA_character_, ncol = max(Ncol),
              nrow = length(invec))
  M[cbind(rep(sequence(length(invec)), times = Ncol),
          sequence(Ncol))] <- unlist(s, use.names = FALSE)
  M
}
myFun(x, "(,?)\\s+", FALSE)
# [,1] [,2] [,3] [,4]
# [1,] "apples" "and" "pears" NA
# [2,] "one" "banana" NA NA
# [3,] "pears" "oranges" "and" "pizza"
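Note that the matrix-index assignment inside myFun is precisely the "three w[] <- cb[[]] lines in one" that the question asked for. In terms of the w and cb objects above, the same idea reads as follows (a sketch, with len as an illustrative name):
len <- sapply(cb, length)
w <- matrix("", nrow(cb), max(len))
w[cbind(rep(seq_along(len), len), sequence(len))] <- unlist(cb)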
Speed is not everything, but it certainly should be a consideration for this type of transformation.
Here are some tests of what has been suggested so far:
## The manual approach
fun1 <- function(x) myFun(x, "(,?)\\s+", FALSE)
## The cSplit approach
fun2 <- function(x) cSplit(as.data.table(x), "x", "(,?)\\s+", fixed = FALSE)
## The OP's approach
fun3 <- function(x) {
  s <- strsplit(x, "(,?)\\s+")
  mx <- max(sapply(s, length))
  do.call(rbind, lapply(s, function(x) { length(x) <- mx; x }))
}
## The plyr approach
fun4 <- function(x) {
  s <- strsplit(x, "(,?)\\s+")
  rbind.fill.matrix(lapply(s, rbind))
}
And, for fun, here's another approach, this one using dcast.data.table:
fun5 <- function(x) {
  dcast.data.table(
    data.table(strsplit(x, "(,?)\\s+"))[
      , list(unlist(V1)), by = sequence(length(x))][
      , N := sequence(.N), by = sequence],
    sequence ~ N, value.var = "V1")
}
Testing is on slightly bigger data. Not very big: 12k values.
x <- unlist(replicate(4000, x, FALSE))
length(x)
# [1] 12000
## I expect `rbind.fill.matrix` to be slow:
system.time(fun4(x))
# user system elapsed
# 3.38 0.00 3.42
library(microbenchmark)
microbenchmark(fun1(x), fun2(x), fun3(x), fun5(x))
# Unit: milliseconds
# expr min lq median uq max neval
# fun1(x) 97.22076 100.8013 102.5754 107.8349 166.6632 100
# fun2(x) 115.01466 120.6389 125.0622 138.0614 222.7428 100
# fun3(x) 146.33339 155.9599 158.8394 170.3917 228.5523 100
# fun5(x) 257.53868 266.5994 273.3830 296.8003 346.3850 100
A bit bigger data, but still not what others might consider big: 1.2M values.
X <- unlist(replicate(100, x, FALSE))
length(X)
# [1] 1200000
## Dropping fun3 and fun5 now, though they are very close...
## I wonder how fun5 scales further (but don't have the patience to wait)
system.time(fun5(X))
# user system elapsed
# 31.28 0.43 31.76
system.time(fun3(X))
# user system elapsed
# 31.62 0.33 31.99
microbenchmark(fun1(X), fun2(X), times = 10)
# Unit: seconds
# expr min lq median uq max neval
# fun1(X) 11.65622 11.76424 12.31091 13.38226 13.46488 10
# fun2(X) 12.71771 13.40967 14.58484 14.95430 16.15747 10
The penalty for the cSplit approach would be in terms of having to convert to a "data.table" and the checking of different conditions, but as your data grows, those penalties become less noticeable.
I have a matrix, named "mat", and a smaller matrix, named "center".
temp = c(1.8421,5.6586,6.3526,2.904,3.232,4.6076,4.8,3.2909,4.6122,4.9399)
mat = matrix(temp, ncol=2)
[,1] [,2]
[1,] 1.8421 4.6076
[2,] 5.6586 4.8000
[3,] 6.3526 3.2909
[4,] 2.9040 4.6122
[5,] 3.2320 4.9399
center = matrix(c(3, 6, 3, 2), ncol=2)
[,1] [,2]
[1,] 3 3
[2,] 6 2
I need to compute the (squared Euclidean) distance between each row of mat and every row of center. For example, the distance between mat[1,] and center[1,] can be computed as
diff = mat[1,]-center[1,]
t(diff)%*%diff
[,1]
[1,] 3.92511
Similarly, I can find the distance of mat[1,] and center[2,]
diff = mat[1,]-center[2,]
t(diff)%*%diff
[,1]
[1,] 24.08771
Repeating this process for each row of mat, I end up with
[,1] [,2]
[1,] 3.925110 24.087710
[2,] 10.308154 7.956554
[3,] 11.324550 1.790750
[4,] 2.608405 16.408805
[5,] 3.817036 16.304836
I know how to implement it with for-loops, but I was really hoping someone could tell me how to do it with some kind of apply() function, maybe mapply().
Thanks
apply(center, 1, function(x) colSums((x - t(mat)) ^ 2))
# [,1] [,2]
# [1,] 3.925110 24.087710
# [2,] 10.308154 7.956554
# [3,] 11.324550 1.790750
# [4,] 2.608405 16.408805
# [5,] 3.817036 16.304836
If you want apply for expressiveness of code, that's one thing, but it's still looping, just with different syntax. This can be done without any loop over mat, with only a very small one across center instead. I'd transpose first, because it's wise to get into the habit of getting as much as possible out of the apply statement. (The BrodieG answer is pretty much identical in function.) This works because R automatically recycles the smaller vector along the matrix, and does so much faster than apply or for.
tm <- t(mat)
apply(center, 1, function(m){
  colSums((tm - m)^2)
})
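For a fully loop-free variant, here is a sketch using the standard expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b; it gives the same matrix of squared distances:
outer(rowSums(mat^2), rowSums(center^2), `+`) - 2 * mat %*% t(center)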
Use dist and then extract the relevant submatrix:
ix <- 1:nrow(mat)
as.matrix( dist( rbind(mat, center) )^2 )[ix, -ix]
6 7
# 1 3.925110 24.087710
# 2 10.308154 7.956554
# 3 11.324550 1.790750
# 4 2.608405 16.408805
# 5 3.817036 16.304836
You could use outer as well
d <- function(i, j) sum((mat[i, ] - center[j, ])^2)
outer(1:nrow(mat), 1:nrow(center), Vectorize(d))
This will solve it:
t(apply(mat, 1, function(row){
  d1 <- sum((row-center[1,])^2)
  d2 <- sum((row-center[2,])^2)
  return(c(d1, d2))
}))
Result:
[,1] [,2]
[1,] 3.925110 24.087710
[2,] 10.308154 7.956554
[3,] 11.324550 1.790750
[4,] 2.608405 16.408805
[5,] 3.817036 16.304836
I have two equally long datasets, 'vpXmin' and 'vpXmax', created from 'vp':
> head(vpXmin)
vp
[1,] 253641 2621722
[2,] 253641 2622722
[3,] 253641 2623722
[4,] 253641 2624722
[5,] 253641 2625722
[6,] 253641 2626722
> head(vpXmax)
vp
[1,] 268641 2621722
[2,] 268641 2622722
[3,] 268641 2623722
[4,] 268641 2624722
[5,] 268641 2625722
[6,] 268641 2626722
I want to join each pair of corresponding rows from these datasets using 'rbind', creating a separate matrix for each pair; e.g.
l1<-rbind(vpXmax[1,],vpXmin[1,])
l2<-rbind(vpXmax[2,],vpXmin[2,])
... ...
Even though I'm not familiar with R loops, I want to handle such large data with a loop ... but I failed when trying this:
for (i in 1:length(vp)){rbind(vpXmax[i,],vpXmin[i,])}
Any idea why? Also, please give me some good references for learning the different kinds of loops in R, if any. Thanks in advance.
Maybe something like:
vpXmax <- matrix(1:10,ncol=2)
vpXmin <- matrix(11:20,ncol=2)
l <- lapply(1:nrow(vpXmin),function(i) rbind(vpXmax[i,],vpXmin[i,]) )
Then, instead of l1, l2, etc., you have
l[[1]]
# [,1] [,2]
#[1,] 1 6
#[2,] 11 16
l[[2]]
# [,1] [,2]
#[1,] 2 7
#[2,] 12 17
And although it is probably not ideal, there is one major thing wrong with your initial loop: you aren't assigning your output, so you need to use assign or <- in some way to actually create an object. However, using assign is pretty much a flag to set off alarm bells that there is a better way to do things, and <- would require pre-allocating or other fiddling around.
Nevertheless, it will work, albeit polluting your workspace with l1, l2, ..., ln objects:
for (i in 1:nrow(vpXmax)) {assign(paste0("l",i), rbind(vpXmax[i,],vpXmin[i,]) )}
> l1
# [,1] [,2]
#[1,] 1 6
#[2,] 11 16
> l2
# [,1] [,2]
#[1,] 2 7
#[2,] 12 17
As @ToNoy indicates, it is not obvious what kind of output you want. The easiest way to proceed is to create a list in which each element is the result of rbind-ing the corresponding rows of the two original data frames.
A <- data.frame("a" = runif(100, -1, 0), "b" = runif(100, 0, 1))
Z <- data.frame("a" = runif(100, -2, -1), "b" = runif(100, 1, 2))
output <- vector("list", nrow(A))
for (i in 1:nrow(A)) {
  output[[i]] <- rbind(A[i, ], Z[i, ])
}
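Equivalently, a loop-free sketch of the same pairing using Map (splitting each data frame into its rows and rbind-ing them pairwise):
output <- Map(rbind,
              split(A, seq_len(nrow(A))),
              split(Z, seq_len(nrow(Z))))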