I have a dataset with values 0, 1, and 2.
data <- matrix(c(1, 0, 0, 1, 2, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 1, 1, 1), nrow = 5, ncol = 4)
> data
[,1] [,2] [,3] [,4]
[1,] 1 1 0 2
[2,] 0 1 0 0
[3,] 0 0 0 1
[4,] 1 1 0 1
[5,] 2 0 0 1
I would like to create a matrix based on this data such that the value 0 becomes (0, 0), 1 becomes (1, 0), and 2 becomes (0, 1). Below is the code that I'm using:
data.exp <- matrix(NA, nrow = nrow(data)*2, ncol = ncol(data))
for(i in 1:nrow(data)){
  for(j in 1:ncol(data)){
    if(data[i, j] == 1){
      vec <- c(1, 0)
    } else if(data[i, j] == 0){
      vec <- c(0, 0)
    } else {
      vec <- c(0, 1)
    }
    data.exp[(i*2-1):(i*2), j] <- vec
  }
}
> data.exp
[,1] [,2] [,3] [,4]
[1,] 1 1 0 0
[2,] 0 0 0 1
[3,] 0 1 0 0
[4,] 0 0 0 0
[5,] 0 0 0 1
[6,] 0 0 0 0
[7,] 1 1 0 1
[8,] 0 0 0 0
[9,] 0 0 0 1
[10,] 1 0 0 0
Is there a faster way to generate the matrix, data.exp, without having to use a nested for loop in R? As the sample size increases, the nested for loop approach is not very efficient.
apply should be pretty fast for matrices. Create a list, v, with the appropriate replacement pairs and subset it by using the values 0, 1, or 2 in data (plus 1) as indices into v:
v = list(c(0, 0), c(1, 0), c(0, 1))
apply(data, 2, function(i) do.call(cbind, v[i + 1]))
# [,1] [,2] [,3] [,4]
# [1,] 1 1 0 0
# [2,] 0 0 0 1
# [3,] 0 1 0 0
# [4,] 0 0 0 0
# [5,] 0 0 0 1
# [6,] 0 0 0 0
# [7,] 1 1 0 1
# [8,] 0 0 0 0
# [9,] 0 0 0 1
# [10,] 1 0 0 0
Here is an option without any loop
t(
  matrix(
    scan(text = toString(c("0, 0", "1, 0", "0, 1")[data + 1]), sep = ","),
    byrow = TRUE,
    nrow = ncol(data)
  )
)
which gives
[,1] [,2] [,3] [,4]
[1,] 1 1 0 0
[2,] 0 0 0 1
[3,] 0 1 0 0
[4,] 0 0 0 0
[5,] 0 0 0 1
[6,] 0 0 0 0
[7,] 1 1 0 1
[8,] 0 0 0 0
[9,] 0 0 0 1
[10,] 1 0 0 0
A more concise option (thanks to @akrun's contribution):
> matrix(unlist(list(c(0, 0), c(1, 0), c(0, 1))[data + 1]), nrow = nrow(data) * 2)
[,1] [,2] [,3] [,4]
[1,] 1 1 0 0
[2,] 0 0 0 1
[3,] 0 1 0 0
[4,] 0 0 0 0
[5,] 0 0 0 1
[6,] 0 0 0 0
[7,] 1 1 0 1
[8,] 0 0 0 0
[9,] 0 0 0 1
[10,] 1 0 0 0
Start by making two matrices of the same dimensions as data, one which sets all of the 2s to 0 and the other which sets all of the 1s to 0 and all of the 2s to 1. Then interleave the two matrices row by row.
The first part is accomplished using ifelse; for the second part, flodel's answer to this question helps.
Putting it all together, you have
data <- matrix(c(1, 0, 0, 1, 2, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 1, 1, 1), nrow = 5, ncol = 4)
l <- list(ifelse(data < 2, data, 0),
          ifelse(data > 1, 1, 0))
do.call(rbind, l)[order(sequence(sapply(l, nrow))), ]
# [,1] [,2] [,3] [,4]
# [1,] 1 1 0 0
# [2,] 0 0 0 1
# [3,] 0 1 0 0
# [4,] 0 0 0 0
# [5,] 0 0 0 1
# [6,] 0 0 0 0
# [7,] 1 1 0 1
# [8,] 0 0 0 0
# [9,] 0 0 0 1
# [10,] 1 0 0 0
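For a rough speed comparison of the approaches above on a larger input, here is a hedged benchmarking sketch (not part of the original answers; it assumes the microbenchmark package is installed):
library(microbenchmark)
set.seed(42)
big <- matrix(sample(0:2, 100 * 50, replace = TRUE), nrow = 100)  # larger test matrix
v <- list(c(0, 0), c(1, 0), c(0, 1))
microbenchmark(
  apply_version  = apply(big, 2, function(i) do.call(cbind, v[i + 1])),
  unlist_version = matrix(unlist(v[big + 1]), nrow = nrow(big) * 2),
  times = 100
)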
Related
I'm trying to generate a trip pattern using a matrix.
Glastonbury Munich Venice Paris Ibiza Kamar-Taj
[1,] 1 2 3 4 5 6
[2,] 1 2 3 4 5 0
[3,] 1 2 3 4 0 0
[4,] 1 2 3 0 0 0
[5,] 1 2 0 0 0 0
[6,] 3 2 1 4 5 6
[7,] 0 2 3 4 1 0
[8,] 0 1 2 0 0 0
[9,] 5 1 3 2 0 0
Each row represents a single trip, and the number in a cell represents the order in which they visited each location. Zero means they didn't visit that place.
Currently I'm creating it like this:
tripMatrix <- list()
tripMatrix[[ 1 ]] <- c(1, 2, 3, 4, 5, 6)
tripMatrix[[ 2 ]] <- c(1, 2, 3, 4, 5, 0)
tripMatrix[[ 3 ]] <- c(1, 2, 3, 4, 0, 0)
tripMatrix[[ 4 ]] <- c(1, 2, 3, 0, 0, 0)
tripMatrix[[ 5 ]] <- c(1, 2, 0, 0, 0, 0)
tripMatrix[[ 6 ]] <- c(3, 2, 1, 4, 5, 6)
tripMatrix[[ 7 ]] <- c(0, 2, 3, 4, 1, 0)
tripMatrix[[ 8 ]] <- c(0, 1, 2, 0, 0, 0)
tripMatrix[[ 9 ]] <- c(5, 1, 3, 2, 0, 0)
trips <- matrix(unlist(tripMatrix), ncol = 6, byrow = TRUE)
I can only do this for a few rows, but I'd like to generate N rows with different combinations of places visited during a trip.
How can I do this for N rows without having to manually create a list of trips?
Start with a math approach by thinking about the theoretical number of trips. For example, out of the 6 places you can choose 4 to visit (6C4 = 6 choose 4 ways), and these 4 places can be visited in any order, giving 4! (4 factorial) possible sequences. The same applies to the other numbers of places, giving a theoretical total of 6! + 6C5*5! + 6C4*4! + 6C3*3! + 6C2*2! + 6C1 = 1956 possible trips.
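A quick base R check of that count (an illustrative snippet added here, not part of the original answer):
sum(sapply(1:6, function(k) choose(6, k) * factorial(k)))
# [1] 1956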
Here is an option using utils::combn and RcppAlgos::permuteGeneral to generate all 1,956 possibilities:
nc <- 6L
l <- lapply(1L:nc, function(n) combn(1L:nc, n,
  function(x) {
    if (length(x) > 1L) {
      p <- RcppAlgos::permuteGeneral(x, length(x))
      a <- matrix(0L, nrow = nrow(p), ncol = nc)
      a[cbind(c(row(p)), c(p))] <- col(p)
    } else {
      a <- integer(nc)
      a[x] <- 1L
    }
    a
  }, simplify = FALSE))
m <- do.call(rbind, unlist(l, recursive=FALSE))
head(m, 20) gives:
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 1 0 0 0 0 0
[2,] 0 1 0 0 0 0
[3,] 0 0 1 0 0 0
[4,] 0 0 0 1 0 0
[5,] 0 0 0 0 1 0
[6,] 0 0 0 0 0 1
[7,] 1 2 0 0 0 0
[8,] 2 1 0 0 0 0
[9,] 1 0 2 0 0 0
[10,] 2 0 1 0 0 0
[11,] 1 0 0 2 0 0
[12,] 2 0 0 1 0 0
[13,] 1 0 0 0 2 0
[14,] 2 0 0 0 1 0
[15,] 1 0 0 0 0 2
[16,] 2 0 0 0 0 1
[17,] 0 1 2 0 0 0
[18,] 0 2 1 0 0 0
[19,] 0 1 0 2 0 0
[20,] 0 2 0 1 0 0
You could use sample and replicate like this:
N <- 10
unique_trip_locations <- 6
t(replicate(N, sample(0:unique_trip_locations, unique_trip_locations)))
# [,1] [,2] [,3] [,4] [,5] [,6]
# [1,] 2 3 1 5 0 6
# [2,] 5 3 6 2 0 1
# [3,] 2 4 1 3 0 5
# [4,] 1 6 5 3 0 2
# [5,] 6 3 1 4 2 0
# [6,] 1 2 0 6 5 4
# [7,] 5 1 3 0 4 6
# [8,] 0 4 6 2 3 1
# [9,] 0 5 1 3 2 6
#[10,] 5 6 0 1 3 2
However, this has one limitation: 0 can occur at most once per row, just like all the other numbers.
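If trips with any number of unvisited places are needed, one possible sketch (a hypothetical helper, not from the original answer) is to draw the number of visited places first and then a random order for them:
random_trip <- function(n_places = 6) {
  k <- sample(1:n_places, 1)      # how many places this trip visits
  visited <- sample(n_places, k)  # which places are visited
  trip <- integer(n_places)       # unvisited places stay 0
  trip[visited] <- sample(k)      # a random visiting order 1..k
  trip
}
t(replicate(10, random_trip()))   # 10 random trips as rows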
You can use brute force to do it, and in this case it is totally feasible.
I think mine is clearer than the previous answers:
vector <- c(1:6, rep(0, 6))  # this is what you need as combinations
Data <- data.frame()         # initializing the data frame
i <- 1                       # just to check total iterations
N <- 1956                    # number of unique combinations to get
while(nrow(Data) <= N){
  i <- i + 1
  Data <- rbind(Data, sample(vector, 6, replace = F))
  Data <- Data[!duplicated(Data), ]
}
Here is a base R solution without using any additional packages:
# define your own permutation function
perm <- function(x) {
  if (length(x) == 1) return(t(x))
  subset(r <- do.call(expand.grid, replicate(length(x), x, simplify = F)),
         colSums(apply(r, 1, duplicated)) == 0)
}
# define a function that gives all permutations for a given number of positions
f <- function(x, vl) {
  p <- perm(x)
  unname(t(apply(p, 1, function(q) replace(rep(0, vl), q, seq_along(q)))))
}
# generate the desired output
v = 1:6
res <- do.call(rbind,
               Map(f,
                   vl = length(v),
                   unlist(sapply(seq_along(v), function(k) combn(v, k, simplify = F)), recursive = F)))
such that
> head(res,25)
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 1 0 0 0 0 0
[2,] 0 1 0 0 0 0
[3,] 0 0 1 0 0 0
[4,] 0 0 0 1 0 0
[5,] 0 0 0 0 1 0
[6,] 0 0 0 0 0 1
[7,] 2 1 0 0 0 0
[8,] 1 2 0 0 0 0
[9,] 2 0 1 0 0 0
[10,] 1 0 2 0 0 0
[11,] 2 0 0 1 0 0
[12,] 1 0 0 2 0 0
[13,] 2 0 0 0 1 0
[14,] 1 0 0 0 2 0
[15,] 2 0 0 0 0 1
[16,] 1 0 0 0 0 2
[17,] 0 2 1 0 0 0
[18,] 0 1 2 0 0 0
[19,] 0 2 0 1 0 0
[20,] 0 1 0 2 0 0
[21,] 0 2 0 0 1 0
[22,] 0 1 0 0 2 0
[23,] 0 2 0 0 0 1
[24,] 0 1 0 0 0 2
[25,] 0 0 2 1 0 0
Assume I have a large matrix (matrix_1) of 2000 columns. Each cell has a value of 0 or 1. I want to find the best combination of 10 columns. The best combination gives the maximum number of rows with at least one non-zero value, i.e. it maximizes
sum (apply (matrix_2, 1, function(x) any(x == 1)))
where matrix_2 is the submatrix formed by the 10 chosen columns. I cannot go through all possible combinations since that is too computationally intensive (there are 2.758988e+26 of them). Any suggestions?
For an example, take this matrix; it has 4 rows and I am only picking 2 columns at a time:
mat <- matrix (c(1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0), nrow = 4, byrow = FALSE)
mat
# combination of columns 2 and 3 is best: 3 rows with at least a single 1 value
sum (apply (mat[, c(2, 3)], 1, function(x) any (x == 1)))
# combination of columns 1 and 2 is worse: 2 rows with at least a single 1 value
sum (apply (mat[, c(1, 2)], 1, function(x) any (x == 1)))
You could use a function like this...
find10 <- function(mat, n = 10){
  cols <- rep(FALSE, ncol(mat))  # columns to exclude
  rows <- rep(TRUE, nrow(mat))   # rows to include
  for(i in 1:n){
    colsums <- colSums(mat[rows, , drop = FALSE])  # drop = FALSE keeps a matrix even if one row remains
    colsums[cols] <- -1          # to exclude those already accounted for
    maxcol <- which.max(colsums)
    cols[maxcol] <- TRUE
    rows <- rows & !as.logical(mat[, maxcol])
  }
  return(which(cols))
}
It looks for the column with most non-zeros, removes those rows from the comparison, and repeats. It returns the column numbers of the n best columns.
An example...
m <- matrix(sample(0:1,100,prob = c(0.8,0.2),replace=TRUE),nrow=10)
m
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 1 0 0 0 0 0 1 1 0
[2,] 1 0 0 0 0 0 0 0 1 1
[3,] 0 0 0 0 1 0 0 0 0 0
[4,] 0 0 0 1 0 1 0 1 0 1
[5,] 0 0 0 0 1 0 0 1 0 0
[6,] 0 0 0 1 0 1 1 0 0 0
[7,] 0 0 1 0 0 0 0 0 0 0
[8,] 0 0 0 0 0 0 0 0 1 0
[9,] 0 0 0 0 0 0 0 1 0 0
[10,] 0 0 0 0 0 0 0 0 0 0
find10(m,5)
[1] 3 4 5 8 9
It also comes up with 2,3 for the example you give.
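For instance, running it on the 4 x 4 example matrix from the question (assuming mat and find10 are defined as above):
find10(mat, 2)
# [1] 2 3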
I would like to transform a matrix of 0s and 1s into a corresponding matrix that gives the cumulative row sum for non-zero entries. Example input and output is given below:
set.seed(404)
input <- matrix(rbinom(10 * 5, 1, 0.5), ncol = 5, nrow = 5)
output <- data.frame(a = c(1, 1, 1, 1, 0),
b = c(0, 0, 0, 0, 0),
c = c(2, 2, 0, 2, 1),
d = c(3, 0, 0, 3, 2),
e = c(0, 3, 0, 0, 0))
input
# [,1] [,2] [,3] [,4] [,5]
#[1,] 1 0 1 1 0
#[2,] 1 0 1 0 1
#[3,] 1 0 0 0 0
#[4,] 1 0 1 1 0
#[5,] 0 0 1 1 0
output
# a b c d e
#1 1 0 2 3 0
#2 1 0 2 0 3
#3 1 0 0 0 0
#4 1 0 2 3 0
#5 0 0 1 2 0
We can use apply with MARGIN=1 to get the cumsum of each row of 'input', transpose (t) and multiply with 'input' so that the 1 values get replaced by the cumsum output and the 0 values remain the same.
input*t(apply(input, 1, cumsum))
# [,1] [,2] [,3] [,4] [,5]
#[1,] 1 0 2 3 0
#[2,] 1 0 2 0 3
#[3,] 1 0 0 0 0
#[4,] 1 0 2 3 0
#[5,] 0 0 1 2 0
Or we can use rowCumsums from library(matrixStats) to get the cumsum of each row and multiply as before.
library(matrixStats)
input*rowCumsums(input)
# [,1] [,2] [,3] [,4] [,5]
#[1,] 1 0 2 3 0
#[2,] 1 0 2 0 3
#[3,] 1 0 0 0 0
#[4,] 1 0 2 3 0
#[5,] 0 0 1 2 0
I have a matrix in the following format:
set.seed(1)
m = matrix(sample(c(0,0,0,1),25,rep=T), nrow=5)
m[13] = 4
print(m)
[,1] [,2] [,3] [,4] [,5]
[1,] 0 1 0 0 1
[2,] 0 1 0 0 0
[3,] 0 0 4 1 0
[4,] 1 0 0 0 0
[5,] 0 0 1 1 0
Consider [3,3] as a hotspot that we want to 'flatten' by spreading its value across the nearest neighbouring/nearby cells of zero value. In this case that means assigning 1 to cells [2,3], [3,2] and [4,3] so that [3,3] can also be reduced to 1:
[,1] [,2] [,3] [,4] [,5]
[1,] 0 1 0 0 1
[2,] 0 1 1 0 0
[3,] 0 1 1 1 0
[4,] 1 0 1 0 0
[5,] 0 0 1 1 0
Is anyone aware of a matrix/raster operation that can achieve this efficiently, while preserving the sum total of all the cells?
I got interested in this question, so I made an attempt. There probably exists a "rastery" tool for what you're trying to do, but I'm not aware of one.
First, a helper function that finds the indices of elements of a square surrounding a specific element in a matrix:
find_neighbors = function(i, j, n)
{
  tmp = expand.grid(replicate(2, -n:n, simplify = F))
  tmp2 = tmp[rowSums(abs(tmp) < n) < 2, ]
  inds = cbind(tmp2[, 1] + i, tmp2[, 2] + j)
  inds[order(rowSums(abs(cbind(inds[, 1] - i,   ## so that up/down/right/left are filled
                               inds[, 2] - j)))), ]  ## before diagonal neighbours
}
E.g.:
m1 = matrix(0, 7, 8)
m1
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#[1,] 0 0 0 0 0 0 0 0
#[2,] 0 0 0 0 0 0 0 0
#[3,] 0 0 0 0 0 0 0 0
#[4,] 0 0 0 0 0 0 0 0
#[5,] 0 0 0 0 0 0 0 0
#[6,] 0 0 0 0 0 0 0 0
#[7,] 0 0 0 0 0 0 0 0
m1[find_neighbors(3, 4, 1)] = 1
m1[find_neighbors(3, 4, 2)] = 2
m1[find_neighbors(3, 4, 3)] = 3
m1
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#[1,] 3 2 2 2 2 2 3 0
#[2,] 3 2 1 1 1 2 3 0
#[3,] 3 2 1 0 1 2 3 0
#[4,] 3 2 1 1 1 2 3 0
#[5,] 3 2 2 2 2 2 3 0
#[6,] 3 3 3 3 3 3 3 0
#[7,] 0 0 0 0 0 0 0 0
And here is the function that flattens the hotspots. There is a nested loop: the first "for" loops over the hotspots and the second iteratively flattens each hotspot onto its neighbours. Nevertheless, the inner loop is exited once the spot is flattened.
ff = function(mat, thres = 1)
{
  wh = which(mat > thres, T)
  for(r in seq_len(nrow(wh))) {
    for(n in seq_len(max(c(dim(mat) - wh[r, ], wh[r, ] - 1)))) {
      if(mat[wh[r, , drop = F]] <= thres) break       # stop flattening if we are done
      inds = find_neighbors(wh[r, 1], wh[r, 2], n)    # get indices of neighbours
      inds = inds[!((rowSums(inds <= 0) > 0) |        # use only valid indices
                      inds[, 1] > nrow(mat) |
                      inds[, 2] > ncol(mat)), ]
      inds = inds[mat[inds] < thres, , drop = F]      # use indices that are allowed to take values
      tofill = nrow(inds) * thres                     # how many 'units' need to be taken from the hotspot?
      mat[wh[r, , drop = F]] = mat[wh[r, , drop = F]] + sum(mat[inds])  # in case the neighbours of the hotspot
                                                                        # are > 0, just add their value to the hotspot
      if(mat[wh[r, , drop = F]] <= tofill) tofill = mat[wh[r, , drop = F]] - thres  # do we have enough
                                                                                    # 'units' in the hotspot?
      if(tofill > 0) {
        if(tofill < thres) {
          mat[inds[1, , drop = F]] = tofill
          mat[wh[r, , drop = F]] = mat[wh[r, , drop = F]] - tofill
          next
        }
        nr = tofill %/% thres
        mat[inds[seq_len(nr), , drop = F]] = thres
        if((tofill %% thres) > 0) mat[inds[nr + 1, , drop = F]] = tofill %% thres
        mat[wh[r, , drop = F]] = mat[wh[r, , drop = F]] - tofill
      }
    }
  }
  mat
}
And an example:
mm = matrix(0, 11, 9); mm[8, 2] = 12; mm[6, 7] = 4
mm
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
# [1,] 0 0 0 0 0 0 0 0 0
# [2,] 0 0 0 0 0 0 0 0 0
# [3,] 0 0 0 0 0 0 0 0 0
# [4,] 0 0 0 0 0 0 0 0 0
# [5,] 0 0 0 0 0 0 0 0 0
# [6,] 0 0 0 0 0 0 4 0 0
# [7,] 0 0 0 0 0 0 0 0 0
# [8,] 0 12 0 0 0 0 0 0 0
# [9,] 0 0 0 0 0 0 0 0 0
#[10,] 0 0 0 0 0 0 0 0 0
#[11,] 0 0 0 0 0 0 0 0 0
ff(mm)
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
# [1,] 0 0 0 0 0 0 0 0 0
# [2,] 0 0 0 0 0 0 0 0 0
# [3,] 0 0 0 0 0 0 0 0 0
# [4,] 0 0 0 0 0 0 0 0 0
# [5,] 0 0 0 0 0 0 1 0 0
# [6,] 0 1 0 0 0 1 1 0 0
# [7,] 1 1 1 0 0 0 1 0 0
# [8,] 1 1 1 1 0 0 0 0 0
# [9,] 1 1 1 0 0 0 0 0 0
#[10,] 0 1 0 0 0 0 0 0 0
#[11,] 0 0 0 0 0 0 0 0 0
ff(mm, 3)
ff(mm, 5)
ff(mm, 1500)
Hope any of these will be helpful.
Possible outline of an attack.
1) find the hotspots:
hotind <- which (m > 1, arr.ind=TRUE)
2) loop over the rows of hotind to spread:
for (j in 1:nrow(hotind)) {
  hotpoint <- hotind[j, ]
  # for example, divvy up the hot value into the four nearest neighbors
  m[hotpoint[1] - 1, hotpoint[2]] <- m[hotpoint[1], hotpoint[2]] / 4
  # do the same for m[hotpoint[1] + 1, hotpoint[2]] and so on
  m[hotpoint[1], hotpoint[2]] <- 1  # or your choice of final value
}
It sure "feels" to me like there's a way to do this with a smoothing convolution kernel approach, so here's hoping someone posts a slicker method.
I'm using the following code to generate a random matrix with some elements = 1 near the diagonal, the rest = 0. (This is basically a random walk along the main diagonal.)
n <- 20
rw <- matrix(0, ncol = 2, nrow = n)
indx <- cbind(seq(n), sample(c(1, 2), n, TRUE))
rw[indx] <- 1
rw[,1] <- cumsum(rw[, 1])+1
rw[,2] <- cumsum(rw[, 2])+1
rw2 <- subset(rw, (rw[,1] <= 10 & rw[,2] <= 10))
field <- matrix(0, ncol = 10, nrow = 10)
field[rw2] <- 1
field
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 1 1 1 0 0 0 0 0 0
[2,] 0 0 0 1 0 0 0 0 0 0
[3,] 0 0 0 1 0 0 0 0 0 0
[4,] 0 0 0 1 1 1 1 0 0 0
[5,] 0 0 0 0 0 0 1 1 0 0
[6,] 0 0 0 0 0 0 0 1 0 0
[7,] 0 0 0 0 0 0 0 1 0 0
[8,] 0 0 0 0 0 0 0 1 1 1
[9,] 0 0 0 0 0 0 0 0 0 0
[10,] 0 0 0 0 0 0 0 0 0 0
Next, I would like to replace the 0 elements on the right-hand/upper side of the 1 elements with 1. For the above matrix the desired output would be:
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 1 1 1 1 1 1 1 1 1
[2,] 0 0 0 1 1 1 1 1 1 1
[3,] 0 0 0 1 1 1 1 1 1 1
[4,] 0 0 0 1 1 1 1 1 1 1
[5,] 0 0 0 0 0 0 1 1 1 1
[6,] 0 0 0 0 0 0 0 1 1 1
[7,] 0 0 0 0 0 0 0 1 1 1
[8,] 0 0 0 0 0 0 0 1 1 1
[9,] 0 0 0 0 0 0 0 0 0 0
[10,] 0 0 0 0 0 0 0 0 0 0
I have tried
fill <- function(row) {
  first = match(1, row)
  if (is.na(first)) {row = rep(1, 10)} else {row[first:10] = 1}
  return(row)
}
field2 <- apply(field, 1, fill)
field2
But that gives me instead:
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 0 0 0 0 0 0 0 1 1
[2,] 1 0 0 0 0 0 0 0 1 1
[3,] 1 0 0 0 0 0 0 0 1 1
[4,] 1 1 1 1 0 0 0 0 1 1
[5,] 1 1 1 1 0 0 0 0 1 1
[6,] 1 1 1 1 0 0 0 0 1 1
[7,] 1 1 1 1 1 0 0 0 1 1
[8,] 1 1 1 1 1 1 1 1 1 1
[9,] 1 1 1 1 1 1 1 1 1 1
[10,] 1 1 1 1 1 1 1 1 1 1
Can anyone help me fix this?
Cheers,
mce
PS: If the first row is all zeros (as it can happen with the above code) it should be changed to all ones.
Why not just:
t(apply(field,1,cummax))
One instance:
dput(field)
structure(c(0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0), .Dim = c(10L,
10L))
> field
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 0 0 0 0 0 0 0 0 0
[2,] 1 1 1 1 1 1 0 0 0 0
[3,] 0 0 0 0 0 1 0 0 0 0
[4,] 0 0 0 0 0 1 0 0 0 0
[5,] 0 0 0 0 0 1 1 1 1 1
[6,] 0 0 0 0 0 0 0 0 0 0
[7,] 0 0 0 0 0 0 0 0 0 0
[8,] 0 0 0 0 0 0 0 0 0 0
[9,] 0 0 0 0 0 0 0 0 0 0
[10,] 0 0 0 0 0 0 0 0 0 0
The output:
> t(apply(field,1,cummax))
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 0 0 0 0 0 0 0 0 0 0
[2,] 1 1 1 1 1 1 1 1 1 1
[3,] 0 0 0 0 0 1 1 1 1 1
[4,] 0 0 0 0 0 1 1 1 1 1
[5,] 0 0 0 0 0 1 1 1 1 1
[6,] 0 0 0 0 0 0 0 0 0 0
[7,] 0 0 0 0 0 0 0 0 0 0
[8,] 0 0 0 0 0 0 0 0 0 0
[9,] 0 0 0 0 0 0 0 0 0 0
[10,] 0 0 0 0 0 0 0 0 0 0
This should work:
MaxFull <- which.max((apply(field,1,sum) > 0) * (1:10))
rbind(t(apply(field[1:MaxFull,], 1, fill)),matrix(0,ncol=10,nrow=10-MaxFull))
Notice that it uses fill as you defined it.
The help for the value of apply says: "If each call to FUN returns a vector of length n, then apply returns an array of dimension c(n, dim(X)[MARGIN])". So you want the transpose of this. Print statements were added to the fill function below to confirm the operation. You may also want to check whether your function is masking another function; there is a function named fill, but it doesn't matter in this case.
n <- 20
rw <- matrix(0, ncol = 2, nrow = n)
indx <- cbind(seq(n), sample(c(1, 2), n, TRUE))
rw[indx] <- 1
rw[,1] <- cumsum(rw[, 1])+1
rw[,2] <- cumsum(rw[, 2])+1
rw2 <- subset(rw, (rw[,1] <= 10 & rw[,2] <= 10))
field <- matrix(0, ncol = 10, nrow = 10)
field[rw2] <- 1
field
myfill <- function(row) {
  print("Function start")
  print(row)
  first = match(1, row)
  print(paste("Match", first))
  if (is.na(first)) {
    row = rep(1, 10)
  } else {
    row[first:10] = 1
  }
  print(row)
  flush.console()
  return(row)
}
field2 = t(apply(field, 1, myfill))
field2