I have a matrix where I would like to estimate 4 values:
M = [ var1  corr  0 ]
    [ corr  var2  0 ]
    [ 0     0     0 ]
Basically, the matrix has two variances and a correlation in positions M[1,2] and M[2,1], which should be equal. To this end, I wrote a function for optim as follows:
M <- matrix(c(NA, NA, 0,
              NA, NA, 0,
              0,  0,  0), 3, 3, byrow = TRUE)
objf <- function(pars, model, estimate = TRUE) {
  model$M[is.na(model$M)] <- pars[1:4]
  if (estimate) {
    -logLik(model)
  } else {
    model
  }
}
However, when I run my code the two estimated correlation values come out different, so I was wondering if there is a better way to write my code such that M[1,2] = M[2,1]. I think the line model$M[is.na(model$M)] <- pars[1:4] is the issue, but I'm not sure how to write it better, so if anyone could help me out I'd appreciate it, thanks!
You could change your objective function to have only 3 parameters and replace
model$M[is.na(model$M)] <- pars[1:4]
with
model$M[is.na(model$M)] <- pars[c(1,2,2,3)]
An illustration
M <- matrix(c(NA,NA,0,
NA,NA,0,
0, 0,0), 3, 3, byrow=TRUE)
pars <- 1:3
M[is.na(M)] <- pars[c(1,2,2,3)]
M
[,1] [,2] [,3]
[1,] 1 2 0
[2,] 2 3 0
[3,] 0 0 0
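For completeness, here is a minimal sketch of the 3-parameter version of the objective function (objf3 is just an illustrative name; the model object and the logLik call are assumed to work exactly as in your original code):
objf3 <- function(pars, model, estimate = TRUE) {
  # pars = c(var1, corr, var2); the correlation is written into both
  # off-diagonal NA positions, so M stays symmetric
  model$M[is.na(model$M)] <- pars[c(1, 2, 2, 3)]
  if (estimate) {
    -logLik(model)
  } else {
    model
  }
}
optim would then be called with a 3-element starting vector instead of 4.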
I'm trying to write a function to sort a matrix by rows.
I could write something that loops over the values of a vector, but I couldn't extend it to loop over a matrix.
sww <- function(x) {
  n <- length(x)
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      if (x[i] > x[j]) {
        tmp <- x[i]; x[i] <- x[j]; x[j] <- tmp
      }
    }
  }
  return(x)
}
Does anyone know how to make it loop over an entire matrix?
Edit:
By sorting a matrix by rows I mean turning a matrix like this:
2 1 4   "sorted by row"   1 2 4
5 4 0        -->          0 4 5
Thank you
Edit 1: I know about the built-in R functions but would like to write my own.
Use apply:
m <- matrix(c(2, 5, 1, 4, 4, 0), 2) # test matrix
t(apply(m, 1, sort))
## [,1] [,2] [,3]
## [1,] 1 2 4
## [2,] 0 4 5
If you really want to loop over the rows:
mm <- m
for(i in 1:nrow(m)) mm[i, ] <- sort(m[i, ])
and, of course, you can replace sort with your own version if you wish.
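For instance, a minimal sketch that plugs the sww function from the question into that loop (assuming sww is defined as above):
mm <- m
for (i in 1:nrow(m)) mm[i, ] <- sww(m[i, ])
mm
## [,1] [,2] [,3]
## [1,] 1 2 4
## [2,] 0 4 5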
While the trace of a matrix can be computed as seen below:
sum(diag(matrix))
this may silently give you a result even if the matrix is not square (i.e. not of "n x n" size). Are there any other built-in functions for computing the trace of a matrix?
Package: psych
Function: tr()
Example:
> x <- matrix(replicate(9,1), ncol = 3, nrow = 3)
> x
[,1] [,2] [,3]
[1,] 1 1 1
[2,] 1 1 1
[3,] 1 1 1
> tr(x)
[1] 3
> x <- matrix(replicate(12,1), ncol = 4, nrow = 3)
> x
[,1] [,2] [,3] [,4]
[1,] 1 1 1 1
[2,] 1 1 1 1
[3,] 1 1 1 1
> tr(x)
Fehler in tr(x) : m must be a square matrix
("Fehler" means error)
Moreover
Package: matrixcalc
Function: matrix.trace
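A quick usage sketch (assuming the matrixcalc package is installed):
library(matrixcalc)
x <- diag(3)        # 3 x 3 identity matrix
matrix.trace(x)     # should return 3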
Below is a quick function to test if the object is a matrix and then test if it is also square.
tr <- function(m) {
  total_sum <- 0
  if (is.matrix(m)) {
    row_count <- nrow(m)
    col_count <- ncol(m)
    if (row_count == col_count) {
      total_sum <- sum(diag(m))
      total_sum
    } else {
      message('Matrix is not square')
    }
  } else {
    message('Object is not a matrix')
  }
}
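For example:
tr(matrix(1:9, 3, 3))     # 15
tr(matrix(1:12, 3, 4))    # message: Matrix is not square
tr(1:9)                   # message: Object is not a matrix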
I also found the following package that provides matrix.trace:
matrixcalc
You can also try using eigenvalues, since the trace of a square matrix equals the sum of its eigenvalues:
# example square matrix
H <- matrix(c(2, 1, 1, 3), 2, 2)
# first find the eigenvalues
e <- eigen(H)
# calculate the trace of the matrix and compare it with the sum of the eigenvalues
# function to calculate the trace using the sum of the diagonal
trace <- function(data) sum(diag(data))
trace(H)
# using the sum of the eigenvalues
sum(e$values)
Hope it helps.
So I've estimated a multidimensional IRT model using the TAM package, based on this dataset that I have.
So now that I have the TAM fit object, is there any way to use it to simulate a new dataset that "abides by the rules" of that model I estimated?
Here is something similar, but about lme fit objects:
https://stats.stackexchange.com/questions/11233/how-to-simulate-data-based-on-a-linear-mixed-model-fit-object-in-r
Thanks in advance,
KH
Edit
Now, since TAM version 1.10-0, this is possible using the function IRT.simulate (see the respective help file). Thanks again for the request.
library(TAM)
data(data.gpcm)
psych::describe(data.gpcm)
resp <- data.gpcm
# define three dimensions and different loadings of item categories
# on these dimensions in B loading matrix
I <- 3 # 3 items
D <- 3 # 3 dimensions
# define loading matrix B
# 4 categories for each item (0, 1, 2, 3)
B <- array(0, dim = c(I, 4, D))
for (ii in 1:I){
  B[ii, 1:4, 1] <- 0:3
  B[ii, 1, 2] <- 1
  B[ii, 4, 3] <- 1
}
dimnames(B)[[1]] <- colnames(resp)
B[1, , ]
## > B[1,,]
## [,1] [,2] [,3]
## [1,] 0 1 0
## [2,] 1 0 0
## [3,] 2 0 0
## [4,] 3 0 1
#-- test run
mod1 <- tam.mml(resp, B = B, control = list(snodes = 1000, maxiter = 5))
sim.dat <- IRT.simulate(mod1, nobs = 2000)
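As an optional sanity check (hypothetical usage), the descriptives of the simulated responses can be compared with those of the original data:
psych::describe(resp)
psych::describe(sim.dat)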
Old Solution
I wouldn't say it is impossible. However, for the time being, it is not easy, since it involves handling TAM internal functions and attributes of the estimation object. That is, there is no method yet that lets you extract the response probability function at prespecified trait points.
However, thanks to your request, we are working on exactly this very valuable feature, and I'll update this answer as soon as the method is on CRAN.
For now, let's extend the example from that request (implementing the ConQuest score command in TAM), which Alex also included on the manual page of the tam function as EXAMPLE 20.
data(data.gpcm)
psych::describe(data.gpcm)
resp <- data.gpcm
# define three dimensions and different loadings of item categories
# on these dimensions in B loading matrix
I <- 3 # 3 items
D <- 3 # 3 dimensions
# define loading matrix B
# 4 categories for each item (0, 1, 2, 3)
B <- array(0, dim = c(I, 4, D))
for (ii in 1:I){
  B[ii, 1:4, 1] <- 0:3
  B[ii, 1, 2] <- 1
  B[ii, 4, 3] <- 1
}
dimnames(B)[[1]] <- colnames(resp)
B[1, , ]
## > B[1,,]
## [,1] [,2] [,3]
## [1,] 0 1 0
## [2,] 1 0 0
## [3,] 2 0 0
## [4,] 3 0 1
#-- test run
mod1 <- tam.mml(resp, B = B, control = list(snodes = 1000, maxiter = 5))
Now for the part where we extract the attributes that are necessary for the computation of the response probabilities and generate new testees.
# Extract necessary item attributes
xsi <- mod1$xsi$xsi
A <- mod1$A
B <- mod1$B
maxK <- mod1$maxK
nI <- dim(A)[1]
iIndex <- 1:nI
AXsi <- matrix(0, nrow = nI, ncol = maxK)
# Simulate new testees (mvrnorm comes from the MASS package)
library(MASS)
nnodes <- 2000
theta <- mvrnorm(n = nnodes, mod1$beta, mod1$variance)
The response probabilities can be obtained from a call to an internal function.
# Calculate response probablities and simulate
p <- TAM:::calc_prob.v5(iIndex, A, AXsi, B, xsi, theta, nnodes, maxK, recalc = TRUE)$rprobs
p[,,1] # response probability of testee 1 to each category 0, 1, 2, 3 for all three items
# [,1] [,2] [,3] [,4]
# [1,] 0.06738066 0.8111365 0.1043441 0.0171387
# [2,] 0.02545206 0.4895568 0.3182046 0.1667866
# [3,] 0.04503185 0.5105446 0.3429603 0.1014633
With this, simulate the success cut: draw uniform random numbers and compare them against the cumulative response probabilities to obtain each testee's simulated category for every item.
sim.data <- matrix(runif(nnodes * nI), nrow = nnodes, ncol = nI)
for (pp in 1:nnodes) {
  cat.success.pp <- (sim.data[pp, ] > t(apply(p[, , pp], 1, cumsum)))
  sim.data[pp, ] <- c(cat.success.pp %*% rep(1, maxK))
}
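As a rough check (hypothetical usage), the per-item category frequencies of the simulated responses can be tabulated and compared with the original data:
apply(sim.data, 2, table)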
Best,
Tom
I have two rasters with data as below:
library("raster")
mdata <- raster(matrix(c(0,2,3, 11,12,13), nrow = 2, ncol = 3, byrow = TRUE))
ndata <- raster(matrix(c(0,1,2, 11,14,13), nrow = 2, ncol = 3, byrow = TRUE))
I want to stack them and find the position (layer index) of the maximum, with the criterion that if both rasters have a value of 0 at a cell, the output should be 0. This means that the output raster/matrix should contain either 0, 1 or 2 in this case.
I tried the following code, but it does not do quite what I want.
odata <- stack(mdata, ndata)
e <- which.max(odata)
How can I introduce the criterion that checks whether both matrices have a value of 0 at the same position and assigns 0 there?
I really appreciate your feedback on this. Thanks!
How about this:
Rgames> foo<-matrix(rep(1,6),nr=2,nc=3)
Rgames> foo
[,1] [,2] [,3]
[1,] 1 1 1
[2,] 1 1 1
Rgames> foo[(ndata-mdata >0)] <-2
Rgames> foo[(mdata==0 & ndata==0)] <-0
Rgames> foo
[,1] [,2] [,3]
[1,] 0 1 1
[2,] 1 2 1
I still can't tell from your question whether you want to locate the maximum of mdata or all locations where mdata > ndata, but if you clarify that, it's easy to modify the conditionals to match.
EDIT: discuss extending to N layers of a raster.
To find which layer has the max at each location, I might use an array.
cube <- array(c(data.1, data.2, ...), dim = c(dim(data.1), N))  # for N layers
# and I apologize but I may have loaded this 3-D cube in the wrong order
maxvals <- array(dim = dim(data.1))
for (j in 1:dim(data.1)[1]) {
  for (k in 1:dim(data.1)[2]) {
    maxvals[j, k] <- which.max(cube[j, k, ])
    if (sum(cube[j, k, ]) == 0) maxvals[j, k] <- 0
  }
}
This can probably be done a lot more efficiently with aggregate or plyr tools but I hope this points the way.
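For the original two-raster case, a raster-native sketch (assuming the mdata, ndata and odata objects from the question) might look like this:
library(raster)
odata <- stack(mdata, ndata)
e <- which.max(odata)                  # layer index (1 or 2) of the per-cell maximum
e[(mdata == 0) & (ndata == 0)] <- 0    # cells where both rasters are 0 get 0
as.matrix(e)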
I guess I have a two-level question referring to diag in R and MATLAB.
1) I was wondering if there is already a way to access the different diagonals of a matrix in R, similar to the way it is done in MATLAB (see http://www.mathworks.com/help/techdoc/ref/diag.html).
2) If there is no such function yet, how can my code be improved so that it behaves like R's diag, where
diag(x = 1, nrow, ncol) # returns the values of the diagonal
diag(x) <- value # inserts values on the diagonal
Presently my code returns the elements on the k-th diagonal, but how can it be written so that, when called the second way (above), it also lets me insert values on that diagonal? At the moment I use diag.ind to give me the indices and then use those indices to insert the values on the k-th diagonal.
Here is the code:
# requires plyr for rbind.fill.matrix
library(plyr)

diag.ind <- function(x, k = 0) {
  if (k == '') k <- 0
  x <- as.matrix(x)
  if (dim(x)[2] == dim(x)[1]) {
    stp_pt_r <- dim(x)[1]
    stp_pt_c <- dim(x)[2]
  }
  if (ncol(x) > dim(x)[1]) {
    stp_pt_r <- dim(x)[1]
    stp_pt_c <- stp_pt_r + 1
  }
  if (ncol(x) < dim(x)[1]) {
    stp_pt_c <- dim(x)[2]
    stp_pt_r <- stp_pt_c + 1
  }
  if (k == 0) {
    r <- as.matrix(seq(1, stp_pt_r, by = 1))
    c <- as.matrix(seq(1, stp_pt_c, by = 1))
    ind.r <- cbind(r, c)
  }
  if (k > 0) {
    r <- t(as.matrix(seq(1, stp_pt_r, by = 1)))
    c <- t(as.matrix(seq((1 + k), stp_pt_c, by = 1)))
    ind <- t(rbind.fill.matrix(r, c))
    ind.r <- ind[!is.na(ind[, 2]), ]
  }
  if (k < 0) {
    k <- abs(k)
    r <- t(as.matrix(seq((1 + k), stp_pt_r, by = 1)))
    c <- t(as.matrix(seq(1, stp_pt_c, by = 1)))
    ind <- t(rbind.fill.matrix(r, c))
    ind.r <- ind[!is.na(ind[, 1]), ]
  }
  diag.x <- x[ind.r]
  output <- list(diag.x = diag.x, diag.ind = ind.r)
  return(output)
}
This is kind of clunky and I feel like I must be reinventing the wheel. Thanks in advance for any insight!
After your reply to Andrie this may satisfy:
exdiag <- function(mat, off) {mat[row(mat)+off == col(mat)]}
x <- matrix(1:16, ncol=4)
exdiag(x,1)
#[1] 5 10 15
I was thinking you wanted a function that can assign or return a diagonal or a sub- or super-diagonal matrix. This is the constructor function:
subdiag <- function(vec, size, offset = 0) {
  M <- matrix(0, size, size)
  M[row(M) - offset == col(M)] <- vec
  return(M)
}
> subdiag(1, 5, 1)
[,1] [,2] [,3] [,4] [,5]
[1,] 0 0 0 0 0
[2,] 1 0 0 0 0
[3,] 0 1 0 0 0
[4,] 0 0 1 0 0
[5,] 0 0 0 1 0
Called with only two arguments, you would get a diagonal matrix. You can construct super-diagonal matrices with negative offsets. If this is what you wanted for the constructor, then it should not be too hard to construct a similar subdiag<- replacement function to go along with it, for example as sketched below.
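A minimal sketch of such a replacement function, reusing the same row/col indexing trick (the name and arguments are only illustrative):
`subdiag<-` <- function(M, offset = 0, value) {
  # assign `value` along the chosen sub-/super-diagonal of M
  M[row(M) - offset == col(M)] <- value
  M
}
x <- matrix(0, 4, 4)
subdiag(x, offset = 1) <- 7   # fill the first sub-diagonal with 7
x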
In MATLAB, to assign the values x to the diagonal of A:
n = size(A,1);
A(1:n+1:end) = x
Look up linear indexing.
Although, that might not be what you asked.
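For what it's worth, a rough R analogue of that linear-indexing trick (a sketch, assuming A is a square matrix):
A <- matrix(0, 4, 4)
n <- nrow(A)
# R matrices are stored column-major, so every (n + 1)-th element is a diagonal entry
A[seq(1, n * n, by = n + 1)] <- 1:4
A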