I have a large matrix/dataframe (2500x20) and need, by column, a rolling deviation: the maximum less the minimum of all previous cells, excluding the current one.
I apply my function by column. It shifts the whole column (as a vector) by 1:length, producing a list of shifted vectors, which I convert to a matrix so I can apply max(x)-min(x). This works for a small matrix but runs far too long at the size I need.
The (small) source (provided):
[,1] [,2] [,3] [,4] [,5]
[1,] 55 9 99 0 NA
[2,] 54 7 98 1 NA
[3,] 56 12 97 2 NA
[4,] 53 8 96 3 1
[5,] 57 22 95 4 0
[6,] 52 51 94 5 -1
[7,] 58 6 93 6 NA
[8,] 51 6 93 7 0
[9,] 59 51 92 8 2
[10,] 50 78 91 9 NA
[11,] 60 12 90 10 NA
[12,] 49 5 89 11 -2
Expected outcome:
[,1] [,2] [,3] [,4] [,5]
[1,] 0 0 0 0 0
[2,] 0 0 0 0 0
[3,] 1 2 1 1 0
[4,] 2 5 2 2 0
[5,] 3 5 3 3 0
[6,] 4 15 4 4 1
[7,] 5 44 5 5 2
[8,] 6 45 6 6 2
[9,] 7 45 6 7 2
[10,] 8 45 7 8 3
[11,] 9 72 8 9 3
[12,] 10 72 9 10 3
The first result-row will always be 0 because it has no previous rows it could calculate from.
The second result-row will always be zero because the single previous row is both the maximum and the minimum value.
The last source-row will never influence the result.
What I have so far:
library(data.table)
mytest <- matrix(c(
55, 9,99, 0,NA,
54, 7,98, 1,NA,
56,12,97, 2,NA,
53, 8,96, 3, 1,
57,22,95, 4, 0,
52,51,94, 5,-1,
58, 6,93, 6,NA,
51, 6,93, 7, 0,
59,51,92, 8, 2,
50,78,91, 9,NA,
60,12,90,10,NA,
49, 5,89,11,-2
), ncol=5, byrow = TRUE)
rolling_deviation <- function(a_column){
  # all lags 1 .. n-1 of the column: a list of shifted vectors
  tmp1 = shift(a_column, 1:(length(a_column)-1), NA, "lag")
  # reshape so each column holds all previous values of one position
  tmp2 = matrix(unlist(tmp1), ncol = length(a_column), byrow = TRUE)
  apply(tmp2, 2, function(x){
    x = x[!is.na(x)] # keep only the previous values that exist
    ifelse(length(x)==0, 0, max(x) - min(x))
  })
}
apply(mytest,2,rolling_deviation)
I need this to calculate faster; there will be more rows as observations increase.
How about this?
> system.time(outcome<- apply(mytest,2,rolling_deviation))
user system elapsed
0.014 0.002 0.038
> system.time(outcome1<- setDT(data.frame(mytest))[, lapply(.SD, rolling_deviation)])
user system elapsed
0.002 0.000 0.002
The results are the same:
> outcome
[,1] [,2] [,3] [,4] [,5]
[1,] 0 0 0 0 0
[2,] 0 0 0 0 0
[3,] 1 2 1 1 0
[4,] 2 5 2 2 0
[5,] 3 5 3 3 0
[6,] 4 15 4 4 1
[7,] 5 44 5 5 2
[8,] 6 45 6 6 2
[9,] 7 45 6 7 2
[10,] 8 45 7 8 3
[11,] 9 72 8 9 3
[12,] 10 72 9 10 3
> outcome1
X1 X2 X3 X4 X5
1: 0 0 0 0 0
2: 0 0 0 0 0
3: 1 2 1 1 0
4: 2 5 2 2 0
5: 3 5 3 3 0
6: 4 15 4 4 1
7: 5 44 5 5 2
8: 6 45 6 6 2
9: 7 45 6 7 2
10: 8 45 7 8 3
11: 9 72 8 9 3
12: 10 72 9 10 3
I solved my problem with a nested loop. I was taught that resorting to loops is almost always bad for performance, but I can't find a better way. I need a helper function for my min/max operations because NA arguments are likely and are not an error for my purposes.
rolling_range <- function(in_mat){
  # NA-tolerant min/max: if either argument is NA, return the other one
  ignore_na = function(op, a, b){
    if(is.na(a)){
      return(b)
    }else if(is.na(b)){
      return(a)
    }else{
      return(op(a,b))
    }
  }
  out_mat.min = matrix(NA, nrow = nrow(in_mat), ncol = ncol(in_mat))
  out_mat.max = matrix(NA, nrow = nrow(in_mat), ncol = ncol(in_mat))
  out_mat.result = matrix(0, nrow = nrow(in_mat), ncol = ncol(in_mat))
  for(m in 1:ncol(in_mat)){
    for(n in 2:nrow(in_mat)){
      # carry the running min/max over rows 1 .. n-1 forward one row at a time
      out_mat.min[n,m] = ignore_na(`min`, out_mat.min[(n-1),m], in_mat[(n-1),m])
      out_mat.max[n,m] = ignore_na(`max`, out_mat.max[(n-1),m], in_mat[(n-1),m])
      out_mat.result[n,m] = ifelse(is.na(out_mat.min[n,m]) || is.na(out_mat.max[n,m]),
                                   0, out_mat.max[n,m] - out_mat.min[n,m])
    }
  }
  return(out_mat.result)
}
rolling_range(mytest)
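For reference, the same running-extremes idea can be fully vectorized with cummax/cummin, avoiding the inner loop. This is a sketch of my own (not part of the original solution), assuming NAs should simply be skipped; it encodes them as +/-Inf sentinels so the cumulative functions pass over them:
rolling_range_vec <- function(in_mat) {
  apply(in_mat, 2, function(col) {
    hi <- cummax(ifelse(is.na(col), -Inf, col)) # running max, NAs neutralized
    lo <- cummin(ifelse(is.na(col), Inf, col))  # running min, NAs neutralized
    d <- hi - lo
    d[!is.finite(d)] <- 0 # an all-NA prefix yields -Inf; map it to 0
    c(0, d[-length(d)])   # lag by one row so the current cell is excluded
  })
}
all(rolling_range_vec(mytest) == rolling_range(mytest)) # should be TRUE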
There are 9 treatments and we want to have 7 blocks. In each block, each treatment should appear exactly once.
The 9 treatments are marked as follows:
-Treatment 1 (1-7)
-Treatment 2 (8-14)
-Treatment 3 (15-21)
-Treatment 4 (22-28)
-Treatment 5 (29-35)
-Treatment 6 (36-42)
-Treatment 7 (43-49)
-Treatment 8 (50-56)
-Treatment 9 (57-63)
Each number represents a pot. We want these pots randomised in 7 blocks (columns), but we don't want two pots of the same treatment adjacent to each other (highlighted in grey in the original screenshot, which is not reproduced here).
How would I go about this in R?
If I'm interpreting it correctly, this should work.
We'll do a two-step sampling:
First, sample the treatment group itself, making it much easier to determine whether a particular row in a block has the same treatment group as the same row in the previous block.
Second, sample one from each of the proven-safe groups.
I'll set a random seed here for reproducibility; do not use set.seed(.) in production.
set.seed(42)
nBlocks <- 7
treatments <- list(1:7, 8:14, 15:21, 22:28, 29:35, 36:42, 43:49, 50:56, 57:63)
blocks <- Reduce(function(prev, ign) {
  # resample this block's group order until no row matches the previous block
  while (TRUE) {
    this <- sample(length(treatments))
    if (!any(this == prev)) break
  }
  this
}, seq.int(nBlocks)[-1], init = sample(length(treatments)), accumulate = TRUE)
blocks <- do.call(cbind, blocks)
blocks
# [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,] 1 3 4 2 8 2 1
# [2,] 5 1 2 4 5 7 9
# [3,] 9 8 9 3 1 3 5
# [4,] 7 9 3 6 7 9 3
# [5,] 2 4 8 5 4 1 4
# [6,] 4 7 1 9 6 4 2
# [7,] 8 6 5 7 2 6 8
# [8,] 3 5 6 8 9 5 6
# [9,] 6 2 7 1 3 8 7
Here each column is a "block", and each number represents the treatment group assigned to each row. You can see that no rows contain the same group in subsequent columns.
For instance, the first column ("block 1") will have something from the Treatment 1 group in the first row, Treatment 5 group in row two, etc. Further, inspection will show that all treatments are included in each block column, an inferred requirement of the experimental design.
(FYI, it is theoretically possible that this will take a while, depending on the random draws. Because it repeats per-column, it should be relatively efficient, though. I have no safeguards here against over-long execution, but I don't think they are required: the conditions here do not lend themselves to a high likelihood of "failure" requiring much repetition.)
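As a quick programmatic check of that claim (my addition, not in the original answer):
# TRUE when no row repeats a treatment group across adjacent blocks
all(blocks[, -1] != blocks[, -ncol(blocks)])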
The next step is to convert each of these group numbers into a number from the respective treatment group.
apply(blocks, 1:2, function(ind) sample(treatments[[ind]], 1))
# [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,] 6 17 22 11 54 14 3
# [2,] 30 3 13 22 33 48 58
# [3,] 63 55 61 15 4 21 33
# [4,] 49 60 21 36 43 58 21
# [5,] 12 25 55 32 27 7 25
# [6,] 24 46 4 58 38 28 11
# [7,] 53 38 35 49 11 36 56
# [8,] 16 29 36 56 63 29 40
# [9,] 36 8 47 3 19 50 43
To verify: in the first matrix, the first three rows of block 1 were 1, 5, and 9, which should translate into 1-7, 29-35, and 57-63, respectively. "6" is within 1-7, "30" is within 29-35, and "63" is within 57-63. Inspection will show the remainder to be correct.
Because of the step of determining treatment groups first, it is much simpler to verify/guarantee that you will not repeat treatment groups in a row between two adjacent blocks.
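That spot check can also be automated; a small sketch of mine, storing the apply() result above in a hypothetical pots matrix:
pots <- apply(blocks, 1:2, function(ind) sample(treatments[[ind]], 1))
# every sampled pot should fall inside the range of its assigned group
all(mapply(function(p, g) p %in% treatments[[g]], pots, blocks)) # TRUE by construction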
EDIT
Rules:
The same treatment group may not be on the same row in adjacent columns; and
The same treatment (not group) may not be in any row in adjacent columns.
We can use the same methodology as before. Note that as any group becomes smaller, the iteration time may increase, but I do not expect it to fall into an infinite loop. (However, if you inadvertently have a group of length 1, then ... this will never end.)
nBlocks <- 7
treatments <- list(1:7, 8:14, 15:21, 22:28, 29:35, 36:42, 43:49, 50:56, 57:63)
# helper function for randomized selection of treatments given groups
func <- function(grp) cbind(grp, sapply(treatments[grp], sample, size = 1))
set.seed(42)
func(c(1,3,5))
# grp
# [1,] 1 1
# [2,] 3 19
# [3,] 5 29
And then the same Reduce mindset:
set.seed(42)
blocks <- Reduce(function(prev, ign) {
  # rule 1: resample the group order until no row matches the previous block
  while (TRUE) {
    this1 <- sample(length(treatments))
    if (!any(this1 == prev[,1])) break
  }
  # rule 2: resample the treatments until none appeared in the previous block
  while (TRUE) {
    this2 <- func(this1)
    if (!any(this2[,2] %in% prev[,2])) break
  }
  this2
}, seq.int(nBlocks-1), init = func(sample(length(treatments))), accumulate = TRUE)
blocks <- do.call(cbind, blocks)
groups <- blocks[, seq(1, by = 2, length.out = nBlocks)]
treats <- blocks[, seq(2, by = 2, length.out = nBlocks)]
From this, we have two products (though you will likely only care about the second):
The treatment groups, good to verify rule 1 above: no group may be in the same row in adjacent columns:
groups
# grp grp grp grp grp grp grp
# [1,] 1 3 1 7 8 5 1
# [2,] 5 1 2 8 2 7 3
# [3,] 9 8 5 2 1 4 6
# [4,] 7 9 6 3 4 8 5
# [5,] 2 4 7 9 3 9 4
# [6,] 4 7 4 5 7 1 2
# [7,] 8 6 9 1 9 6 7
# [8,] 3 5 8 6 5 2 9
# [9,] 6 2 3 4 6 3 8
The treatments themselves, for rule 2 above, where no treatment may be in adjacent columns:
treats
#
# [1,] 7 19 2 47 51 33 3
# [2,] 35 4 12 50 8 44 15
# [3,] 60 51 35 10 1 22 41
# [4,] 43 58 41 21 26 55 31
# [5,] 12 24 43 57 17 57 26
# [6,] 27 49 26 34 48 6 11
# [7,] 53 36 62 6 62 36 47
# [8,] 16 33 54 42 32 10 62
# [9,] 37 9 15 27 37 18 56
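Both rules can be verified programmatically as well; a sketch of mine against the groups and treats matrices above:
# rule 1: no group on the same row in adjacent columns
all(groups[, -1] != groups[, -nBlocks])
# rule 2: no treatment shared between any pair of adjacent columns
!any(sapply(seq_len(nBlocks - 1), function(j) any(treats[, j] %in% treats[, j + 1])))
Both should return TRUE by construction.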
Edit 2:
Another rule:
Each treatment group must be seen exactly once in each row and column (requiring a square experimental design).
I think this is effectively generating a sudoku-like matrix of treatment groups, and once that is satisfied, backfill rule #2 (no repeat treatments in adjacent columns). One way (though it is hasty) is suggested by https://gamedev.stackexchange.com/a/138228:
set.seed(42)
# rotation helper: left-rotate x by n positions, wrapping via modulo
# (the original relied on a rot() whose definition was not shown; this
# reconstruction is consistent with the ind matrix printed below)
rot <- function(x, n) x[((seq_along(x) - 1 + n) %% length(x)) + 1]
vec <- sample(9)
ind <- sapply(cumsum(c(0, 3, 3, 1, 3, 3, 1, 3, 3)), rot, x = vec)
apply(ind, 1, function(z) all(1:9 %in% z)) # all rows have all 1-9, no repeats
# [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
apply(ind, 2, function(z) all(1:9 %in% z)) # ... and all columns do, too
# [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
ind
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
# [1,] 1 7 8 3 5 2 4 6 9
# [2,] 5 2 3 6 9 4 8 1 7
# [3,] 9 4 6 1 7 8 3 5 2
# [4,] 7 8 1 5 2 3 6 9 4
# [5,] 2 3 5 9 4 6 1 7 8
# [6,] 4 6 9 7 8 1 5 2 3
# [7,] 8 1 7 2 3 5 9 4 6
# [8,] 3 5 2 4 6 9 7 8 1
# [9,] 6 9 4 8 1 7 2 3 5
This produces a rather fixed style of random group arrangement given the constraints on groups. Since this is a design of experiments, if you're going to use this method (and proximity between blocks is at all a concern), then you should likely randomize the columns and/or rows of the ind matrix before sampling the treatments themselves. (You can do both columns and rows; just do them piece-wise, and it should preserve the constraints.)
ind <- ind[sample(9),][,sample(9)]
ind
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
# [1,] 2 3 8 1 4 7 9 6 5
# [2,] 7 8 4 6 2 9 5 3 1
# [3,] 1 7 9 4 5 6 3 2 8
# [4,] 8 1 6 9 3 4 2 5 7
# [5,] 5 2 7 8 9 1 6 4 3
# [6,] 3 5 1 7 6 8 4 9 2
# [7,] 4 6 3 5 8 2 7 1 9
# [8,] 6 9 5 2 1 3 8 7 4
# [9,] 9 4 2 3 7 5 1 8 6
From here, we can enact rule 2:
treatments <- list(1:7, 8:14, 15:21, 22:28, 29:35, 36:42, 43:49, 50:56, 57:63)
mtx <- do.call(rbind, Reduce(function(prev, ind) {
  # resample this column's treatments until none repeats the previous column's
  while (TRUE) {
    this <- sapply(treatments[ind], sample, size = 1)
    if (!any(prev %in% this)) break
  }
  this
}, asplit(ind, 2)[-1],
  init = sapply(treatments[ind[,1]], sample, size = 1),
  accumulate = TRUE))
mtx
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
# [1,] 11 44 4 52 30 15 23 41 59
# [2,] 16 56 49 3 12 33 39 57 27
# [3,] 52 24 60 40 46 2 20 29 13
# [4,] 1 37 23 63 56 48 32 12 17
# [5,] 24 10 30 16 58 39 50 2 47
# [6,] 49 57 41 25 6 52 11 17 34
# [7,] 59 31 19 14 38 23 47 51 7
# [8,] 41 17 11 33 24 61 5 43 54
# [9,] 29 4 51 45 20 8 58 28 40
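Note that Reduce() walks over the columns of ind while do.call(rbind, ...) stacks the accumulated results as rows, so each row of mtx corresponds to a column of ind, and the no-repeat guarantee holds between consecutive rows; use t(mtx) if you want blocks as columns. A quick check (my addition):
# consecutive rows of mtx should share no treatment values
!any(sapply(seq_len(nrow(mtx) - 1), function(i) any(mtx[i, ] %in% mtx[i + 1, ]))) # TRUE by construction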
I have a matrix:
mat <- matrix(c(2,11,3,1,2,4,55,65,12,4,6,6,7,9,3,23,16,77,5,5,7),ncol = 3, byrow = TRUE)
[,1] [,2] [,3]
[1,] 2 11 3
[2,] 1 2 4
[3,] 55 65 12
[4,] 4 6 6
[5,] 7 9 3
[6,] 23 16 77
[7,] 5 5 7
I want to add a column with a row index. The index starts at 1 and repeats down consecutive rows; each row whose row sum is > 100 gets a new index, and the rows after it start the next one.
Indx [,2] [,3] [,4]
[1,] 1 2 11 3
[2,] 1 1 2 4
[3,] 2 55 65 12
[4,] 3 4 6 6
[5,] 3 7 9 3
[6,] 4 23 16 77
[7,] 5 5 5 7
Using rle:
matRle <- rle(rowSums(mat) > 100)$lengths
cbind(rep(seq(length(matRle)), matRle), mat)
# [,1] [,2] [,3] [,4]
# [1,] 1 2 11 3
# [2,] 1 1 2 4
# [3,] 2 55 65 12
# [4,] 3 4 6 6
# [5,] 3 7 9 3
# [6,] 4 23 16 77
# [7,] 5 5 5 7
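To see what rle is doing here (my illustration): it collapses the over-100 indicator into runs, and rep() then gives each run its own index:
rle(rowSums(mat) > 100)
# Run Length Encoding
#   lengths: int [1:5] 2 1 2 1 1
#   values : logi [1:5] FALSE TRUE FALSE TRUE FALSE
rep(seq(5), c(2, 1, 2, 1, 1))
# [1] 1 1 2 3 3 4 5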
A solution using dplyr.
library(dplyr)
rs <- rowSums(mat) # precompute the row sums used in the condition
mat2 <- mat %>%
  as.data.frame() %>%
  mutate(Indx = cumsum(rs > 100 | lag(rs > 100, default = TRUE))) %>%
  select(Indx, all_of(paste0("V", 1:ncol(mat)))) %>%
  as.matrix()
mat2
# Indx V1 V2 V3
# [1,] 1 2 11 3
# [2,] 1 1 2 4
# [3,] 2 55 65 12
# [4,] 3 4 6 6
# [5,] 3 7 9 3
# [6,] 4 23 16 77
# [7,] 5 5 5 7
cbind(cumsum(replace(a<-rowSums(mat)>100,which(a==1)+1,1))+1,mat)
[,1] [,2] [,3] [,4]
[1,] 1 2 11 3
[2,] 1 1 2 4
[3,] 2 55 65 12
[4,] 3 4 6 6
[5,] 3 7 9 3
[6,] 4 23 16 77
[7,] 5 5 5 7
What does this do? First, flag the rows whose sums are greater than 100:
a <- rowSums(mat) > 100
The row after every row > 100 should start the next index, so replace and cumsum:
cumsum(replace(a, which(a == 1) + 1, 1))
This count starts from zero, so add 1.
We could do this with rleid from data.table:
library(data.table)
cbind(Indx = rleid(rowSums(mat) > 100), mat)
# Indx
#[1,] 1 2 11 3
#[2,] 1 1 2 4
#[3,] 2 55 65 12
#[4,] 3 4 6 6
#[5,] 3 7 9 3
#[6,] 4 23 16 77
#[7,] 5 5 5 7
So I am trying to sum the rows of a matrix, and there are Infs within it. How do I sum the rows, omitting the Infs?
Multiply your matrix by the result of is.finite(m) and call rowSums on the product with na.rm=TRUE. This works because Inf*0 is NaN, which na.rm=TRUE then drops.
m <- matrix(c(1:3,Inf,4,Inf,5:6),4,2)
rowSums(m*is.finite(m),na.rm=TRUE)
Another option: replace the infinite values with NA, then sum with na.rm=TRUE.
A[is.infinite(A)] <- NA
rowSums(A, na.rm=TRUE)
Some benchmarking for comparison:
library(microbenchmark)
rowSumsMethod <- function(A){
  A[is.infinite(A)] <- NA
  rowSums(A, na.rm=TRUE)
}
applyMethod <- function(A){
  apply(A, 1, function(x){ sum(x[!is.infinite(x)]) })
}
rowSumsMethod2 <- function(m){
  rowSums(m*is.finite(m), na.rm=TRUE)
}
rowSumsMethod0 <- function(A){
  A[is.infinite(A)] <- 0
  rowSums(A)
}
A1 <- matrix(sample(c(1:5, Inf), 50, TRUE), ncol=5)
A2 <- matrix(sample(c(1:5, Inf), 5000, TRUE), ncol=5)
microbenchmark(rowSumsMethod(A1),rowSumsMethod(A2),
rowSumsMethod0(A1),rowSumsMethod0(A2),
rowSumsMethod2(A1),rowSumsMethod2(A2),
applyMethod(A1),applyMethod(A2))
Unit: microseconds
expr min lq median uq max neval
rowSumsMethod(A1) 13.063 14.9285 16.7950 19.3605 1198.450 100
rowSumsMethod(A2) 212.726 220.8905 226.7220 240.7165 307.427 100
rowSumsMethod0(A1) 11.663 13.9960 15.3950 18.1940 112.894 100
rowSumsMethod0(A2) 103.098 109.6290 114.0610 122.9240 159.545 100
rowSumsMethod2(A1) 8.864 11.6630 12.5960 14.6955 49.450 100
rowSumsMethod2(A2) 57.380 60.1790 63.4450 67.4100 81.172 100
applyMethod(A1) 78.839 84.4380 92.1355 99.8330 181.005 100
applyMethod(A2) 3996.543 4221.8645 4338.0235 4552.3825 6124.735 100
So Joshua's method wins! And the apply method is clearly slower than the other methods (relatively speaking, of course).
I'd use apply and is.infinite in order to avoid replacing Inf values with NA as in @Hemmo's answer.
> set.seed(1)
> Mat <- matrix(sample(c(1:5, Inf), 50, TRUE), ncol=5)
> Mat # this is an example
[,1] [,2] [,3] [,4] [,5]
[1,] 2 2 Inf 3 5
[2,] 3 2 2 4 4
[3,] 4 5 4 3 5
[4,] Inf 3 1 2 4
[5,] 2 5 2 5 4
[6,] Inf 3 3 5 5
[7,] Inf 5 1 5 1
[8,] 4 Inf 3 1 3
[9,] 4 3 Inf 5 5
[10,] 1 5 3 3 5
> apply(Mat, 1, function(x) sum(x[!is.infinite(x)]))
[1] 12 15 21 10 18 16 12 11 17 17
Try this...
m <- c( 1 ,2 , 3 , Inf , 4 , Inf ,5 )
sum(m[!is.infinite(m)])
Or
m <- matrix( sample( c(1:10 , Inf) , 100 , rep = TRUE ) , nrow = 10 )
sums <- apply( m , 1 , FUN = function(x){ sum(x[!is.infinite(x)])})
> m
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 8 9 7 Inf 9 2 2 6 1 Inf
[2,] 8 7 4 5 9 5 8 4 7 10
[3,] 7 9 3 4 7 3 3 6 9 4
[4,] 7 Inf 2 6 4 8 3 1 9 9
[5,] 4 Inf 7 5 9 5 3 5 9 9
[6,] 7 3 7 Inf 7 3 7 3 7 1
[7,] 5 7 2 1 Inf 1 9 8 1 5
[8,] 4 Inf 10 Inf 8 10 4 9 7 2
[9,] 10 7 9 7 2 Inf 4 Inf 4 6
[10,] 9 4 6 3 9 6 6 5 1 8
> sums
[1] 44 67 55 49 56 45 39 54 49 57
This is a "non-apply" and non-destructive approach. match() maps each finite entry to its position among the finite values and each Inf to NA; indexing back into those values rebuilds the matrix with Inf replaced by NA, which na.rm=TRUE then drops:
vals <- A[is.finite(A)]
rowSums(matrix(vals[match(A, vals)], nrow(A)), na.rm=TRUE)
Although it is reasonably efficient, it is not as fast as Joshua's multiplication method.
I have a matrix
df<-matrix(data=c(3,7,5,0,1,0,0,0,0,8,0,9), ncol=2)
rownames(df)<-c("a","b","c","d","e","f")
[,1] [,2]
a 3 0
b 7 0
c 5 0
d 0 8
e 1 0
f 0 9
and I would like to order the matrix in descending order, first by column 1 and then by column 2, resulting in the matrix
df.ordered<-matrix(data=c(7,5,3,1,0,0,0,0,0,0,9,8),ncol=2)
rownames(df.ordered)<-c("b","c","a","e","f","d")
[,1] [,2]
b 7 0
c 5 0
a 3 0
e 1 0
f 0 9
d 0 8
Any suggestions on how I could achieve this? Thanks.
The order function should do it.
df[order(df[,1],df[,2],decreasing=TRUE),]
To complete the main answer, here is a way to do it programmatically, without having to specify the columns by hand:
set.seed(2013) # preparing my example
mat <- matrix(sample.int(10,size = 30, replace = T), ncol = 3)
mat
[,1] [,2] [,3]
[1,] 5 1 6
[2,] 10 3 1
[3,] 8 8 1
[4,] 8 9 9
[5,] 3 7 3
[6,] 8 8 5
[7,] 10 10 2
[8,] 8 10 7
[9,] 10 1 9
[10,] 9 4 5
As a simple example, let's say I want to use all the columns, in their order of appearance, to sort the rows of the matrix. (One could just as easily give a vector of column indexes to the matrix.)
mat[do.call(order, as.data.frame(mat)),] #could be ..as.data.frame(mat[,index_vec])..
[,1] [,2] [,3]
[1,] 3 7 3
[2,] 5 1 6
[3,] 8 8 1
[4,] 8 8 5
[5,] 8 9 9
[6,] 8 10 7
[7,] 9 4 5
[8,] 10 1 9
[9,] 10 3 1
[10,] 10 10 2
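To sort all columns in descending order with the same programmatic idiom (a sketch of mine), pass decreasing = TRUE through do.call as well:
mat[do.call(order, c(as.data.frame(mat), list(decreasing = TRUE))), ]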
The order function will help you out; try this:
df[order(-df[,1],-df[,2]),]
[,1] [,2]
b 7 0
c 5 0
a 3 0
e 1 0
f 0 9
d 0 8
The minus sign before each column indicates that the order is decreasing. You will get the same result by setting decreasing=TRUE:
df[order(df[,1],df[,2],decreasing=TRUE),]
I have a data.frame with numeric values. How can I replace each row's maximum with 0?
So for example in a row:
10, 1, 3, 4
the output would be
0, 1, 3, 4
I tried:
df1 <- data.frame(df)[apply(df,1,which.max=0)]
but something about it is wrong.
I would be grateful for your help.
How about
replace_max <- function(x){x[which.max(x)] <- 0;x}
t(apply(df, 1, replace_max))
or
library(plyr)
adply(df, 1, replace_max)
EDIT to do rows
EDIT:2 to ensure data.frame
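The t() matters because apply() over rows returns each row's result as a column; a small demonstration on made-up data (my example):
df <- data.frame(a = c(10, 1), b = c(1, 3), c = c(3, 4), d = c(4, 2))
t(apply(df, 1, replace_max))
#   a b c d
# 1 0 1 3 4
# 2 1 3 0 2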
Here's how I would do it:
a <-matrix(round(runif(25)*100,0),ncol=5) #create matrix
my.max <-apply(a,1,which.max) #find max position by row
> a
[,1] [,2] [,3] [,4] [,5]
[1,] 62 14 19 64 40
[2,] 74 83 26 95 14
[3,] 32 69 24 12 67
[4,] 100 57 19 3 16
[5,] 41 6 93 85 67
z <-cbind(1:5,my.max) #create coordinates
a[z] <-0 #replace those entries
> a
[,1] [,2] [,3] [,4] [,5]
[1,] 62 14 19 0 40
[2,] 74 83 26 0 14
[3,] 32 0 24 12 67
[4,] 0 57 19 3 16
[5,] 41 6 0 85 67
Try this:
#Generating a fake dataframe:
df=data.frame(A=c(1:5), B=c(6,111,5,7,10), C=c(11,28,65,7,15) , D=c(21:25))
> df
A B C D
1 1 6 11 21
2 2 111 28 22
3 3 5 65 23
4 4 7 7 24
5 5 10 15 25
n = length(rownames(df))
for(i in 1:n){
  c1 = as.numeric(which.max(df[i,])) # column holding this row's maximum
  df[i,c1] = 0
}
df #output
A B C D
1 1 6 11 0
2 2 0 28 22
3 3 5 0 23
4 4 7 7 0
5 5 10 15 0
How about:
x <- matrix(sample(1:16),nrow=4)
x
[,1] [,2] [,3] [,4]
[1,] 1 12 6 4
[2,] 16 2 13 15
[3,] 11 8 10 7
[4,] 14 9 5 3
x*as.logical(x-apply(x,1,max))
[,1] [,2] [,3] [,4]
[1,] 1 0 6 4
[2,] 0 2 13 15
[3,] 0 8 10 7
[4,] 0 9 5 3
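One caveat with this arithmetic trick (my note): it zeroes every tied maximum in a row, whereas the which.max() approaches clear only the first occurrence:
y <- matrix(c(5, 5, 1, 2), nrow = 1) # a row with a duplicated maximum
y * as.logical(y - apply(y, 1, max))
#      [,1] [,2] [,3] [,4]
# [1,]    0    0    1    2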