Hi, I was looking for some help understanding what a probability matrix achieves when sampling. I am having a hard time wrapping my head around what prob[a, b] does. To be honest, the syntax here seems a bit different than in other languages: we pass index vectors to a matrix and it constructs a bigger one (which is kinda cool), but I digress. I currently have the following Bernoulli sampling:
N <- 8
prob <- matrix(c(0.1,0.1,0.5,0.8), nrow=2)
a <- sample(1:2, size=N, replace=TRUE)
b <- rbern(N, ifelse(a == 1, 0.5, 0.1)) + 1  # rbern() from a package such as Rlab or extraDistr
rbern(N, prob = prob[a, b])
What I am unable to understand is that when sampling I am giving an 8x8 matrix of probabilities, so I am not sure which probabilities will be used if I am only asking for 8 observations.
It will simply take the first N values from the matrix used in the prob argument (starting with the first column).
Consider the following code.
N <- 8
m <- matrix(sample(0:1, N^2, replace = TRUE), N, N)
m
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] 0 1 0 1 1 0 0 1
#> [2,] 1 1 1 0 0 0 1 1
#> [3,] 0 1 0 1 0 1 1 1
#> [4,] 1 0 0 1 0 0 0 1
#> [5,] 0 1 0 0 0 0 1 0
#> [6,] 0 1 1 1 0 1 1 0
#> [7,] 1 0 0 0 0 1 1 0
#> [8,] 0 0 1 1 1 1 0 0
rbinom(N, 1, prob = m)
#> [1] 0 1 0 1 0 0 1 0
Only the first N values of m are used for probabilities, so the result of rbinom(N, 1, prob = m) is the same as the first column of m (since all probabilities are either 0 or 1).
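To see the recycling concretely, here is a minimal base-R sketch that mirrors the question's setup (the last line shows element-wise matrix indexing, which is likely what was intended):
set.seed(1)
N <- 8
prob <- matrix(c(0.1, 0.1, 0.5, 0.8), nrow = 2)
a <- sample(1:2, size = N, replace = TRUE)
b <- sample(1:2, size = N, replace = TRUE)
dim(prob[a, b])                              # 8 x 8: two index vectors build the whole grid
identical(prob[a, b][1:N], prob[a, b][, 1])  # TRUE: the first N values are just the first column
length(prob[cbind(a, b)])                    # 8: matrix indexing, one probability per observation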
How can I create dummy variables from a numeric variable in R?
I want to create N dummy variables, such that the numeric variable gives how many zeros come first, counting from the first column. Imagine N = 6. Like this:
x
a 5
b 2
c 4
d 1
e 9
It must become:
1 2 3 4 5 6
a 0 0 0 0 0 1
b 0 0 1 1 1 1
c 0 0 0 0 1 1
d 0 1 1 1 1 1
e 0 0 0 0 0 0
Thank you!
Here's a hacky solution for you
x = c(5,2,4,1,9)
N = 6
out = matrix(1, length(x), N)      # start with all 1s
for (i in 1:length(x))
  out[i, 1:min(x[i], N)] = 0       # zero out the first min(x[i], N) entries of row i
> out
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0 0 0 0 0 1
[2,] 0 0 1 1 1 1
[3,] 0 0 0 0 1 1
[4,] 0 1 1 1 1 1
[5,] 0 0 0 0 0 0
We could do this in a vectorized manner by creating a row/column index and assigning the indexed elements of a pre-built matrix of 1s to 0:
m1 <- matrix(1, ncol = N, nrow = length(x),
dimnames = list(letters[seq_along(x)], seq_len(N)))
x1 <- pmin(x, ncol(m1))
m1[cbind(rep(seq_len(nrow(m1)), x1), sequence(x1))] <- 0
m1
# 1 2 3 4 5 6
#a 0 0 0 0 0 1
#b 0 0 1 1 1 1
#c 0 0 0 0 1 1
#d 0 1 1 1 1 1
#e 0 0 0 0 0 0
data
x <- c(5,2,4,1,9)
N <- 6
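For completeness, the same table can also be produced with a single outer() comparison; this is just a sketch using the x and N above:
out2 <- +outer(x, seq_len(N), `<`)                 # 1 where the column index exceeds x
dimnames(out2) <- list(letters[seq_along(x)], seq_len(N))
out2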
I have a binary vector that holds information on whether or not some event happened for some observation:
v <- c(0,1,1,0)
What I want to achieve is a matrix that holds information on all bivariate pairs of observations in this vector. That is, if two observations both have 0 or both have 1 in this vector v, they should get a 1 in the matrix. If one has 0 and the other has 1, they should get a 0 otherwise.
Hence, the goal is this matrix:
[,1] [,2] [,3] [,4]
[1,] 0 0 0 1
[2,] 0 0 1 0
[3,] 0 1 0 0
[4,] 1 0 0 0
Whether the main diagonal is 0 or 1 does not matter for me.
Is there an efficient and simple way to achieve this that does not require a combination of if statements and for loops? v might be of considerable size.
Thanks!
We can use outer
out <- outer(v, v, `==`)
diag(out) <- 0L # as you don't want to compare each element to itself
out
# [,1] [,2] [,3] [,4]
#[1,] 0 0 0 1
#[2,] 0 0 1 0
#[3,] 0 1 0 0
#[4,] 1 0 0 0
Another option with expand.grid is to create pairwise combinations of v with itself; since the values are only 0 and 1, matching pairs are exactly those whose sum is 0 or 2 (0 + 0 and 1 + 1).
inds <- rowSums(expand.grid(v, v))
matrix(+(inds == 0 | inds == 2), nrow = length(v))
# [,1] [,2] [,3] [,4]
#[1,] 1 0 0 1
#[2,] 0 1 1 0
#[3,] 0 1 1 0
#[4,] 1 0 0 1
Since the diagonal elements are not important for you, I will keep them as they are; if you want to change them, you can use diag as shown in @markus's answer.
Another (slightly less efficient) alternative to outer would be sapply:
out <- sapply(v, function(x){
x == v
})
diag(out) <- 0L
out
[,1] [,2] [,3] [,4]
[1,] 0 0 0 1
[2,] 0 0 1 0
[3,] 0 1 0 0
[4,] 1 0 0 0
microbenchmark on a vector of length 1000:
> test <- microbenchmark("LAP" = sapply(v, function(x){
+ x == v
+ }),
+ "markus" = outer(v, v, `==`), times = 1000, unit = "ms")
> test
Unit: milliseconds
expr min lq mean median uq max neval
LAP 3.973111 4.065555 5.747905 4.573002 6.324607 101.03498 1000
markus 3.515725 3.535067 4.852606 3.694924 4.908930 84.85184 1000
If you allow the main diagonal to be 1, then there are at most two unique rows, v and 1 - v, in this matrix no matter how large v is. Since the matrix is symmetric, it also has at most two such unique columns. This makes it trivial to construct this matrix.
## example `v`
set.seed(0)
v <- sample.int(2, 10, replace = TRUE) - 1L
#[1] 1 0 0 1 1 0 1 1 1 1
## column expansion from unique columns
cbind(v, 1 - v, deparse.level = 0L)[, 2 - v]
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
# [1,] 1 0 0 1 1 0 1 1 1 1
# [2,] 0 1 1 0 0 1 0 0 0 0
# [3,] 0 1 1 0 0 1 0 0 0 0
# [4,] 1 0 0 1 1 0 1 1 1 1
# [5,] 1 0 0 1 1 0 1 1 1 1
# [6,] 0 1 1 0 0 1 0 0 0 0
# [7,] 1 0 0 1 1 0 1 1 1 1
# [8,] 1 0 0 1 1 0 1 1 1 1
# [9,] 1 0 0 1 1 0 1 1 1 1
#[10,] 1 0 0 1 1 0 1 1 1 1
What is the purpose of this matrix?
If there are n0 zeros and n1 ones, the matrix will have dimension (n0 + n1) x (n0 + n1), but only (n0^2 + n1^2) of its entries are 1. So for a long vector v, the matrix is sparse. In fact it is more than sparse, as it also contains a large number of duplicated rows / columns.
Obviously, if you want to store the positions of the 1s in this matrix, you can simply get them without forming this matrix at all.
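For instance, a small sketch of getting those positions directly from v (idx0, idx1 and ones are just illustrative names):
idx0 <- which(v == 0)
idx1 <- which(v == 1)
ones <- rbind(as.matrix(expand.grid(row = idx0, col = idx0)),
              as.matrix(expand.grid(row = idx1, col = idx1)))
nrow(ones)   # n0^2 + n1^2, matching the count above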
Example code here:
> temp2
a b c d e f g h
i 1 1 0 0 0 1 0 1
j 0 1 0 0 0 1 0 1
k 0 1 1 0 0 1 1 1
l 0 0 0 0 1 0 0 1
m 0 0 1 1 0 0 1 1
n 0 0 1 1 0 0 1 1
o 0 0 0 1 0 0 1 1
p 0 0 0 0 1 0 0 1
> dput(temp2)
structure(list(a = c(1, 0, 0, 0, 0, 0, 0, 0), b = c(1, 1, 1,
0, 0, 0, 0, 0), c = c(0, 0, 1, 0, 1, 1, 0, 0), d = c(0, 0, 0,
0, 1, 1, 1, 0), e = c(0, 0, 0, 1, 0, 0, 0, 1), f = c(1, 1, 1,
0, 0, 0, 0, 0), g = c(0, 0, 1, 0, 1, 1, 1, 0), h = c(1, 1, 1,
1, 1, 1, 1, 1)), .Names = c("a", "b", "c", "d", "e", "f", "g",
"h"), class = "data.frame", row.names = c("i", "j", "k", "l",
"m", "n", "o", "p"))
I have this 8x8 grid of 1s and 0s. I need to solve for some grid where each row and each column has exactly one 1 and the rest 0s, but the 1 has to be in a place where the original grid has a 1. It's almost like a sudoku question but not exactly. Any thoughts on how to get started?
I would need some function that can do this for a general grid, not simply this specific one. We can assume that there's always a solution grid, given some starting grid.
Thanks!
Edit: a valid solution
> temp3
a b c d e f g h
i 1 0 0 0 0 0 0 0
j 0 1 0 0 0 0 0 0
k 0 0 0 0 0 1 0 0
l 0 0 0 0 1 0 0 0
m 0 0 0 1 0 0 0 0
n 0 0 1 0 0 0 0 0
o 0 0 0 0 0 0 1 0
p 0 0 0 0 0 0 0 1
EDIT2: given that there are only 8! candidate permutation matrices for an 8x8 grid, I may attempt a brute force / matching approach.
This can be solved as a transportation problem or as an integer programming problem. We also show a one-line solution using only base R which generates random matrices whose rows and columns each sum to 1, filtering out and returning the ones satisfying the additional constraint that each element of the solution matrix be less than or equal to the corresponding element of temp2.
1) transportation problem Using lp.transport in lpSolve we can solve it in one statement:
library(lpSolve)
res <- lp.transport(as.matrix(temp2), "max",
rep("=", 8), rep(1, 8), rep("=", 8), rep(1, 8), integers = 0:1)
res
## Success: the objective function is 8
soln <- array(res$solution, dim(temp2))
# verify
all(colSums(soln)==1) && all(rowSums(soln)==1) && all(temp2>=soln) && all(soln %in% 0:1)
## [1] TRUE
2) integer programming
If X is the solution, we have specified the row and column constraints but have not specified the X <= temp2 constraints, since they will be satisfied automatically: no solution putting a 1 where temp2 has a 0 can achieve the maximum objective of 8.
library(lpSolve)
n <- nrow(temp2)
obj <- unlist(temp2)
const_row <- t(sapply(1:n, function(i) c(row(temp2)) == i)) # each row sums to 1
const_col <- t(sapply(1:n, function(i) c(col(temp2)) == i)) # each col sums to 1
const.mat <- rbind(const_row, const_col)
res <- lp("max", obj, const.mat, "=", 1, all.bin = TRUE)
res
## Success: the objective function is 8
soln <- array(res$solution, dim(temp2))
# verify
all(colSums(soln)==1) && all(rowSums(soln)==1) && all(temp2>=soln) && all(soln %in% 0:1)
## [1] TRUE
(Note that by the same argument we could have relaxed the problem to a linear programming problem, provided we add 0 <= soln[i, j] <= 1 constraints: just as we could omit the soln[i, j] <= temp2[i, j] constraints, the maximization forces the soln elements to be 0 or 1 anyway.)
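For illustration, here is a hedged sketch of that relaxation: the same row/column constraints as in 2), explicit upper bounds (const_ub is just a name for the added bound rows), and no all.bin, so lp() treats the variables as continuous and non-negative:
library(lpSolve)
n <- nrow(temp2)
obj <- unlist(temp2)
const_row <- t(sapply(1:n, function(i) c(row(temp2)) == i))  # each row sums to 1
const_col <- t(sapply(1:n, function(i) c(col(temp2)) == i))  # each col sums to 1
const_ub  <- diag(n * n)                                     # soln[i, j] <= 1
const.mat <- rbind(const_row, const_col, const_ub)
const.dir <- rep(c("=", "<="), c(2 * n, n * n))
const.rhs <- rep(1, 2 * n + n * n)
res <- lp("max", obj, const.mat, const.dir, const.rhs)       # continuous LP
soln <- array(res$solution, dim(temp2))                      # the LP optimum lands on a 0/1 vertex, as noted above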
2a) This approach is longer but does spell out the X <= temp2 constraints explicitly:
n <- nrow(temp2)
obj <- numeric(n*n)
const1 <- diag(n*n) # soln[i,j] <= temp2[i,j]
const2 <- t(sapply(1:n, function(i) c(row(temp2)) == i)) # each row sums to 1
const3 <- t(sapply(1:n, function(i) c(col(temp2)) == i)) # each col sums to 1
const.mat <- rbind(const1, const2, const3)
const.dir <- rep(c("<=", "="), c(n*n, 2*n))
const.rhs <- c(unlist(temp2), rep(1, 2*n))
res <- lp("max", obj, const.mat, const.dir, const.rhs, all.bin = TRUE)
res
## Success: the objective function is 0
soln <- array(res$solution, dim(temp2))
# verify
all(colSums(soln)==1) && all(rowSums(soln)==1) && all(temp2>=soln) && all(soln %in% 0:1)
## [1] TRUE
2b) Note that if X is the solution matrix, then in X <= temp2 only the positions of X corresponding to zeros in temp2 actually constrain, so we can drop from the (2a) formulation any constraint corresponding to a 1 in temp2. With this change all constraints become equality constraints.
n <- nrow(temp2)
obj <- numeric(n*n)
const1 <- diag(n*n)[unlist(temp2) == 0, ]
const2 <- t(sapply(1:n, function(i) c(row(temp2)) == i)) # each row sums to 1
const3 <- t(sapply(1:n, function(i) c(col(temp2)) == i)) # each col sums to 1
const.mat <- rbind(const1, const2, const3)
const.dir <- "="
const.rhs <- c(numeric(nrow(const1)), rep(1, 2*n))
res <- lp("max", obj, const.mat, const.dir, const.rhs, all.bin = TRUE)
res
## Success: the objective function is 0
soln <- array(res$solution, dim(temp2))
# verify
all(colSums(soln)==1) && all(rowSums(soln)==1) && all(temp2>=soln) && all(soln %in% 0:1)
## [1] TRUE
In fact, we could go further and remove the variables that correspond to zero elements of temp2.
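A sketch of that further reduction (keep and soln are illustrative names): keep only the decision variables sitting on a 1 in temp2 and rebuild the row/column constraints on those variables alone:
library(lpSolve)
n <- nrow(temp2)
keep <- which(unlist(temp2) == 1)                 # variables allowed to be 1
obj  <- rep(1, length(keep))
const_row <- t(sapply(1:n, function(i) (c(row(temp2)) == i)[keep]))
const_col <- t(sapply(1:n, function(i) (c(col(temp2)) == i)[keep]))
res <- lp("max", obj, rbind(const_row, const_col), rep("=", 2 * n), rep(1, 2 * n), all.bin = TRUE)
soln <- matrix(0, n, n)
soln[keep[res$solution > 0.5]] <- 1               # map the kept variables back to the full grid
all(colSums(soln)==1) && all(rowSums(soln)==1) && all(temp2>=soln)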
3) r2dtable Here we use r2dtable to generate 10,000 8x8 tables whose rows and columns each sum to 1 and then filter them to pick out only those satisfying the X <= temp2 constraints. With temp2 from the question and the random seed shown, this found 3 solutions. If with different inputs it finds no solutions, then try generating a larger number of random proposals. This approach does not use any packages.
set.seed(123) # for reproducibility
Filter(function(x) all(x <= temp2), r2dtable(10000, rep(1, 8), rep(1, 8)))
giving:
[[1]]
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 1 0 0 0 0 0 0 0
[2,] 0 0 0 0 0 1 0 0
[3,] 0 1 0 0 0 0 0 0
[4,] 0 0 0 0 0 0 0 1
[5,] 0 0 0 0 0 0 1 0
[6,] 0 0 1 0 0 0 0 0
[7,] 0 0 0 1 0 0 0 0
[8,] 0 0 0 0 1 0 0 0
[[2]]
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 1 0 0 0 0 0 0 0
[2,] 0 0 0 0 0 1 0 0
[3,] 0 1 0 0 0 0 0 0
[4,] 0 0 0 0 1 0 0 0
[5,] 0 0 0 1 0 0 0 0
[6,] 0 0 1 0 0 0 0 0
[7,] 0 0 0 0 0 0 1 0
[8,] 0 0 0 0 0 0 0 1
[[3]]
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 1 0 0 0 0 0 0 0
[2,] 0 1 0 0 0 0 0 0
[3,] 0 0 0 0 0 1 0 0
[4,] 0 0 0 0 1 0 0 0
[5,] 0 0 1 0 0 0 0 0
[6,] 0 0 0 0 0 0 1 0
[7,] 0 0 0 1 0 0 0 0
[8,] 0 0 0 0 0 0 0 1
A brute-force way:
m = as.matrix(temp2)
w = data.frame(which(m == 1, arr.ind = TRUE))
combos = as.matrix(do.call(expand.grid, with(w, split(col, row))))
combos[ apply(combos, 1, function(x) !anyDuplicated(x)), ]
1 2 3 4 5 6 7 8
[1,] 1 6 2 8 7 3 4 5
[2,] 1 2 6 8 7 3 4 5
[3,] 1 6 2 8 3 7 4 5
[4,] 1 2 6 8 3 7 4 5
[5,] 1 6 2 8 4 3 7 5
[6,] 1 2 6 8 4 3 7 5
[7,] 1 6 2 8 3 4 7 5
[8,] 1 2 6 8 3 4 7 5
[9,] 1 6 2 5 7 3 4 8
[10,] 1 2 6 5 7 3 4 8
[11,] 1 6 2 5 3 7 4 8
[12,] 1 2 6 5 3 7 4 8
[13,] 1 6 2 5 4 3 7 8
[14,] 1 2 6 5 4 3 7 8
[15,] 1 6 2 5 3 4 7 8
[16,] 1 2 6 5 3 4 7 8
OP claims to only ever need to handle an 8x8 grid, so I guess this performs well enough. Each row of the result is a solution. The first row says that (1,1), (2,6), (3,2) ... is a solution.
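If you also want a solution in the same 0/1 form as the original grid, one way (a sketch; soln is just an illustrative name) is:
sols <- combos[apply(combos, 1, function(x) !anyDuplicated(x)), ]
soln <- matrix(0, nrow(m), ncol(m), dimnames = dimnames(m))
soln[cbind(seq_len(nrow(m)), sols[1, ])] <- 1     # place a 1 at (row, chosen column)
soln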
A variation using data.table:
library(data.table)
m = as.matrix(temp2)
comboDT = setDT(melt(m))[ value == 1, do.call(CJ, split(Var2, Var1)) ][,
rid := .I ][, melt(.SD, id="rid", variable.name="row", value.name="col")]
setkey(comboDT, rid)
comboDT[ .( comboDT[, !anyDuplicated(col), by=rid][(V1), rid]) ]
This works. Let grid be my grid (temp2 from above); then this will return a grid that works:
# create random sufficient grid
grid = as.matrix(temp2)   # the starting grid
counter = 0
while (TRUE) {
counter = counter + 1
if(counter == 10000) {
break
}
rand_grid = matrix(0, nrow = 8, ncol = 8)
indices_avail = seq(1,8,by=1)
for(i in 1:8) {
k = sample(indices_avail, 1)
rand_grid[i, k] = 1
indices_avail = indices_avail[indices_avail != k]
}
if(sum(grid[which(rand_grid == 1)]) == 8) {
break
}
print(counter)
}
This approach will return all valid combinations. First find all matrix row combinations. Then search through exhaustively. This method would have to be improved if your matrix size increased. One simple improvement would be to run the diag test in parallel.
st<-as.matrix(temp2) # make sure we are working with matrices
## This method will return all possible matrices of combinations
## in essence if you have diag(matr) = width matrix than you have
## a valid choice
## Helper function to build all combinations, there may be better way to
## do this but it gets the job done
allCombinationsAux<-function(z,nreg,x){
if(sum(nreg)>1){
innerLoop<-do.call(rbind,lapply(x[nreg&(z!=x)], allCombinationsAux, nreg&(z!=x), x))
ret<-cbind(z,innerLoop )
}
else{
ret<-x[nreg]
}
ret
}
## Build all of the combinations of possible matrices
x <- seq_len(nrow(st))  # the indices 1..n to permute
combs<-do.call(rbind,lapply(x,function(y) allCombinationsAux(y,y!=x,x)))
## iterate through all the possible combinations of matrices, to find out
## which ones have 1s throughout the diag
inds<-which(apply(combs,1,function(x) sum(diag(st[x,]))==8))
lapply(inds,function(x) st[combs[x,],])
While there are great answers already here for the brute-force approach and actually using math, just for kicks, here's a version that guesses and checks lags of the non-matching columns. For the example in question, it actually turns out to be quite quick, and as a bonus, you could find a new answer on any particular run! How fun! To the code:
set.seed(47) # remove this to have more fun
mat.in <- as.matrix(temp2) # we'll work in matrices
mat.out <- diag(8) # a starting guess
dimnames(mat.out) <- dimnames(mat.in) # make our answer pretty
iteration <- 1 # for kicks, a loop counter
while (any((mat.out != mat.in)[as.logical(mat.out)])) {
mat.ref <- mat.out
mat.out <- mat.out[, sample(8)] # make this deterministic if you like
inner <- 1 # don't repeat yourself (too much)
while (any(mat.out != mat.ref) & inner <= 8) {
mat.ref <- mat.out
# find non-matching indices and lag those columns
to.lag <- which((mat.out != mat.in)[as.logical(mat.out)])
i <- 1:8
i[to.lag] <- c(to.lag[length(to.lag)], to.lag[-length(to.lag)])
mat.out <- mat.out[, i]
cat(inner, " ") # let's see what it does
inner <- inner + 1
}
print(iteration) # whoo, scrolling numbers
iteration <- iteration + 1
}
## 1 2 3 [1] 1
## 1 2 3 4 5 6 7 8 [1] 2
## 1 2 [1] 3
## 1 2 3 [1] 4
which, for this particular seed returns
mat.out
## a c e g d b f h
## i 1 0 0 0 0 0 0 0
## j 0 0 0 0 0 1 0 0
## k 0 1 0 0 0 0 0 0
## l 0 0 0 0 1 0 0 0
## m 0 0 1 0 0 0 0 0
## n 0 0 0 0 0 0 1 0
## o 0 0 0 1 0 0 0 0
## p 0 0 0 0 0 0 0 1
It could certainly be optimized further, but it's already pretty quick (without the printing, which slows it down):
Unit: microseconds
expr min lq mean median uq max neval
let's guess 137.796 383.6445 838.2327 693.819 1163.08 2510.436 100
running all 100 times in a fraction of a second. It's quite a bit faster than actual guessing (chopping out the inner loop):
Unit: microseconds
expr min lq mean median uq max neval cld
guess smart 148.997 349.916 848.6314 588.162 1085.841 3117.78 100 a
actually guess 322.458 7341.961 31197.1237 20012.969 47677.501 160250.02 100 b
Note, though, that luck plays a role here, and if there are fewer solutions, it will take longer. If there are no solutions, it will run forever. It could, of course, be optimized to avoid such a fate by making sure it doesn't reuse the same starting permutation provided by sample(8) (a good idea regardless, which I deemed superfluous here as it only runs through a handful of permutations each run anyway). Hack away.
Hi, I'm pretty much stumped trying to figure this out and could use a little help. Basically, I have an n x n matrix where the diagonal is set to a value k and every other value is 0.
1 2 3 4 5
1 k 0 0 0 0
2 0 k 0 0 0
3 0 0 k 0 0
4 0 0 0 k 0
5 0 0 0 0 k
Basically, I need to be able to make two other diagonals in this matrix with the value of 1 so it ends up looking like this:
1 2 3 4 5
1 k 1 0 0 0
2 1 k 1 0 0
3 0 1 k 1 0
4 0 0 1 k 1
5 0 0 0 1 k
So far all I have for code is being able to make the diagonal matrix
m = diag(k, n, n), but I have no idea how to add the two other diagonals. Would I use apply() and cbind() or rbind()?
You can use col and row to create an index to subset and assign the upper and lower diagonals.
k=3
m <- k* diag(6)
m[abs(row(m) - col(m)) == 1] <- 1
m
# [,1] [,2] [,3] [,4] [,5] [,6]
#[1,] 3 1 0 0 0 0
#[2,] 1 3 1 0 0 0
#[3,] 0 1 3 1 0 0
#[4,] 0 0 1 3 1 0
#[5,] 0 0 0 1 3 1
#[6,] 0 0 0 0 1 3
If you wanted the reverse (anti-)diagonals you could use row(m) + col(m) instead.
Try this function; it will make a matrix of dimensions row x col with the numeric k on the main diagonal and 1 on the diagonals directly above and below it.
matfun <- function(k, row = 4, col = 4){
  x <- diag(1, row, col)                       # identity pattern
  # k on the main diagonal, plus x shifted down (subdiagonal) and right (superdiagonal)
  k * x +
    rbind(rep(0, col), x[1:(row - 1), ]) +
    cbind(rep(0, row), x[, 1:(col - 1)])
}
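For example, the 5 x 5 case from the question with k = 3, using the matfun() above:
matfun(3, 5, 5)
#      [,1] [,2] [,3] [,4] [,5]
# [1,]    3    1    0    0    0
# [2,]    1    3    1    0    0
# [3,]    0    1    3    1    0
# [4,]    0    0    1    3    1
# [5,]    0    0    0    1    3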
HTH