I initially posted this question on stats.stackexchange.com, but it was closed for being focused on programming. Hopefully I can get some help here.
To keep it simple, I will not give many theoretical details here, but my final goal is to implement a Hidden Markov Model in R.
Although I am fine with the theoretical model construction, when I tried to implement it I realized that I do not know basic things about computational statistics. My question goes in that direction.
Let $X$ and $Y$ be random variables such that $X \in \{0, 1\}$ and $Y \mid X = x \sim \mathcal{N}(x, \sigma^2)$, with $\mathbb{P}(X = 0) = p$ and $\mathbb{P}(X = 1) = 1 - p$. If $\mathbb{P}$ denotes distribution, how can I compute
$$\mathbb{P}(X = x \mid Y = y) = \frac{\mathbb{P}(Y = y \mid X = x)\,\mathbb{P}(X = x)}{\mathbb{P}(Y = y)}$$
using R?
I mean, what is the exact meaning of multiplying these two distributions (one discrete and one continuous)? How can I do this using R? The answer is obviously a function of $y$, but how is it represented in my code?
Is there any change if $Y$ is also discrete, for instance if $Y \mid X = x$ follows a discrete distribution whose parameter depends on $x$? How would it affect the implemented code?
I know my questions are not very specific, but I am very lost on how to start. My goal with this question is to understand how I can "translate" what I have written on paper into code.
Translation
The equations describe how to compute the probability distribution of X given an observation Y=y and values for the parameters p and sigma. Ultimately, you want to implement a function p_X_given_Y that takes a value of Y and returns a probability distribution for X. A good place to start is to implement the two functions used on the RHS of the expression. Something like:
p_X <- function (x, p=0.5) { switch(as.character(x), "0"=p, "1"=1-p, 0) }
p_Y_given_X <- function (y, x, sigma=1) { dnorm(y, x, sd=sigma) }
Note that p and sigma are picked arbitrarily here. These functions can then be used to define the p_X_given_Y function:
p_X_given_Y <- function (y) {
  # numerators: for each x \in X
  ps <- sapply(c("0" = 0, "1" = 1),
               function (x) { p_X(x) * p_Y_given_X(y, x) })
  # divide out denominator
  ps / sum(ps)
}
which can be used like:
> p_X_given_Y(y=0)
# 0 1
# 0.6224593 0.3775407
> p_X_given_Y(y=0.5)
# 0 1
# 0.5 0.5
> p_X_given_Y(y=2)
# 0 1
# 0.1824255 0.8175745
These numbers should make intuitive sense (given p=0.5): Y=0 is more likely to come from X=0, Y=0.5 is equally likely to be X=0 or X=1, etc. This is only one way of implementing it: the idea is to return the "distribution of X", which in this case is simply a named numeric vector whose names ("0", "1") correspond to the support of X and whose values correspond to the probability masses.
Some alternative implementations might be:
- a p_X_given_Y(x, y) that also takes a value for x and returns the corresponding probability mass
- a p_X_given_Y(y) that returns another function taking an x argument and returning the corresponding probability mass (i.e., the probability mass function), as sketched below
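For instance, a minimal sketch of the second alternative, reusing the p_X and p_Y_given_X helpers from above (the name p_X_given_Y_fun is just illustrative):
p_X_given_Y_fun <- function (y) {
  ps <- sapply(c("0" = 0, "1" = 1),
               function (x) { p_X(x) * p_Y_given_X(y, x) })
  ps <- ps / sum(ps)
  # return the probability mass function of X | Y = y
  function (x) { unname(ps[as.character(x)]) }
}
which can be used like pmf <- p_X_given_Y_fun(y = 0); pmf(0), giving roughly 0.622 as in the vector version above.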
You have a set of N=400 objects, each having its own coordinates in a, say, 19-dimensional space.
You calculate the (Euclidean) distance matrix (all pairwise distances).
Now you want to select n=50 objects, such that the sum of all pairwise distances between the selected objects is maximal.
I devised a way to solve this by linear programming (code below, for a smaller example), but it seems inefficient to me: I am using N*(N-1)/2 binary variables (one per non-redundant element of the distance matrix) and then a lot of constraints to ensure self-consistency of the solution vector.
I suspect there must be a simpler approach, where only N variables are used, but I can't immediately think of one.
This post briefly mentions some 'Bron–Kerbosch' algorithm, which apparently addresses the distance sum part.
But in that example the sum of distances is a specific number, so I don't see a direct application to my case.
I had a brief look at quadratic programming, but again I could not see the immediate parallel with my case, although the b %*% t(b) matrix, where b is the binary (column) solution vector, could in theory be used to multiply the distance matrix, etc.; I'm really not familiar with this technique.
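For concreteness, if b were the 0/1 selection vector and D the full symmetric distance matrix with zero diagonal (both names hypothetical here), the objective could be written as a quadratic form:
# t(b) %*% D %*% b counts each selected pair twice, hence the division by 2
pairwise_sum <- function (b, D) as.numeric(t(b) %*% D %*% b) / 2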
Could anyone please advise (/point me to other posts explaining) if and how this kind of problem can be solved by linear programming using only N binary variables?
Or provide any other advice on how to tackle the problem more efficiently?
Thanks!
PS: here's the code I referred to above.
require(Matrix)
# distmat defined manually for this example as a sparseMatrix
# (upper triangle of the distance matrix for N = 5 objects)
distmat <- sparseMatrix(i=c(rep(1,4),rep(2,3),rep(3,2),rep(4,1)),j=c(2:5,3:5,4:5,5:5),x=c(0.3,0.2,0.9,0.5,0.1,0.8,0.75,0.6,0.6,0.15))
N = 5
n = 3
# one binary "pair" variable per non-redundant distance, indexed by ID
distmat_summary <- summary(distmat)
distmat_summary["ID"] <- 1:NROW(distmat_summary)
# incidence matrices mapping each object (row) to the pairs (columns) it belongs to
i.mat <- xtabs(~i+ID,distmat_summary,sparse=T)
j.mat <- xtabs(~j+ID,distmat_summary,sparse=T)
ij.mat <- rbind(i.mat,"5"=rep(0,10))+rbind("1"=rep(0,10),j.mat)
ij.mat.rowSums <- rowSums(ij.mat)
ij.diag.mat <- .sparseDiagonal(n=length(ij.mat.rowSums),-ij.mat.rowSums)
colnames(ij.diag.mat) <- dimnames(ij.mat)[[1]]
# constraints linking pair variables to object variables,
# plus the cardinality constraint (exactly n objects selected)
mat <- rbind(cbind(ij.mat,ij.diag.mat),cbind(ij.mat,ij.diag.mat),c(rep(0,NCOL(ij.mat)),rep(1,NROW(ij.mat)) ))
dir <- c(rep("<=",NROW(ij.mat)),rep(">=",NROW(ij.mat)),"==")
rhs <- c(rep(0,NROW(ij.mat)),1-unname(ij.mat.rowSums),n)
# objective: distances on the pair variables, zeros on the object variables
obj <- xtabs(x~ID,distmat_summary)
obj <- c(obj,setNames(rep(0, NROW(ij.mat)), dimnames(ij.mat)[[1]]))
if (length(find.package(package="Rsymphony",quiet=TRUE))==0) install.packages("Rsymphony")
require(Rsymphony)
LP.sol <- Rsymphony_solve_LP(obj,mat,dir,rhs,types="B",max=TRUE)
# selected objects
items.sol <- (names(obj)[(1+NCOL(ij.mat)):(NCOL(ij.mat)+NROW(ij.mat))])[as.logical(LP.sol$solution[(1+NCOL(ij.mat)):(NCOL(ij.mat)+NROW(ij.mat))])]
items.sol
# selected pairs
ID.sol <- names(obj)[1:NCOL(ij.mat)][as.logical(LP.sol$solution[1:NCOL(ij.mat)])]
as.data.frame(distmat_summary[distmat_summary$ID %in% ID.sol,])
This problem is called the p-dispersion-sum problem. It can be formulated using N binary variables, but using quadratic terms. As far as I know, it is not possible to formulate it with only N binary variables in a linear program.
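Concretely, with selection indicators $x_i \in \{0,1\}$ and distances $d_{ij}$, the quadratic formulation is
$$\max \sum_{i=1}^{N} \sum_{j=i+1}^{N} d_{ij}\, x_i x_j \quad \text{subject to} \quad \sum_{i=1}^{N} x_i = n, \qquad x_i \in \{0,1\}.$$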
This paper by Pisinger gives the quadratic formulation and discusses bounds and a branch-and-bound algorithm.
Hope this helps.
I have been trying to compute a bigger function, and one part of it is a while loop with two conditions. For each value of k in a certain range (x_min and x_max are computed within the whole function), I am trying to compute a matrix with values from a distribution in which k itself appears. The while loop checks that the necessary conditions for the distribution are met, while the foreach should run the while loop for every element of k. Since I do not know the exact number of elements in k, I thought the problem might be the predetermination of I. The best I could get was endless computation within the loops (or R simply crashing). I am thankful for any suggestion!
library(foreach)  # provides foreach() and %do%
I <- mat.or.vec(n, 10000)
k <- x[x_min < x & x < x_max]
foreach(k) %do% {
  for (i in 1:100) {
    check = 0
    while (check == 0) {
      I <- replicate(n = 100, rbinom(n = 250, size = 1, prob = k/250))
      if (sum(I[, i]) == k) check = 1
    }
  }
}
Changing the order unfortunately did not work.
It still seems to have problems. I tried to extract the matrix, but it reports "NULL".
n is 250, x_min and x_max are defined by a formula (around 2-8), k is defined as given above, and x contains values between 0 and about 10 (also computed within the formula). I would provide the whole formula, but it is big and I was able to narrow the problems down to these parts, so I wanted to keep the question as simple as possible. Thank you for your help and comments!
In R, I wrote a log-likelihood function containing two recursive calculations. The log-likelihood function works properly (it gives an answer for known values of the parameters), but when I try to maximize it using optim(), it takes too much time. How can I optimize the code? Thanks in advance for ideas.
This is the log-likelihood function for a Markov regime-switching model with a dependence structure based on copula functions.
[The original post displayed formulas, as images, for the quantities g and p computed in the for loop and for the function f; they correspond to the definitions in the code below.]
Some data:
u <- cbind(rt(100,10),rt(100,13))
f function:
f = function(u, p, e1, e2) {
  # Gaussian copula density (dcopula.gauss, from the QRM package) evaluated
  # at the t marginal CDFs, multiplied by the two t marginal densities
  s = diag(2); s[1,2] = p
  ff = dcopula.gauss(cbind(pt(u[,1],e1), pt(u[,2],e2)), Sigma = s) * dt(u[,1],e1) * dt(u[,2],e2)
  return(ff)
}
log-likelihood function:
loglik=function(x){
p11 <- x[1]; p12 <- x[2]; p21 <- x[3]; p22 <- x[4]; p31 <- x[5]; p32 <- x[6]
r <- x[7];  a1 <- x[8];  a2 <- x[9]
s <- x[10]; b1 <- x[11]; b2 <- x[12]
t <- x[13]; c1 <- x[14]; c2 <- x[15]
p1=c(numeric(nrow(u)));p2=c(numeric(nrow(u)));p3=c(numeric(nrow(u)))
g=c(numeric(nrow(u)))
p1_0=.3
p2_0=.3
g[1]<-(p1_0*f(u,r,a1,a2)[1])+(p2_0*f(u,s,b1,b2)[1])+((1-(p1_0+p2_0))*f(u,t,c1,c2)[1])
p1[1]<-((p1_0*p11*f(u,r,a1,a2)[1])+(p2_0*p21*f(u,r,a1,a2)[1])+((1-(p1_0+p2_0))*p31*f(u,r,a1,a2)[1]))/g[1]
p2[1]<-((p1_0*p12*f(u,s,b1,b2)[1])+(p2_0*p22*f(u,s,b1,b2)[1])+((1-(p1_0+p2_0))*p32*f(u,s,b1,b2)[1]))/g[1]
p3[1]<-((p1_0*(1-(p11+p12))*f(u,t,c1,c2)[1])+(p2_0*(1-(p21+p22))*f(u,t,c1,c2)[1])+((1-(p1_0+p2_0))*(1-(p31+p32))*f(u,t,c1,c2)[1]))/g[1]
for(i in 2:nrow(u)){
g[i]<-(p1[i-1]*p11*f(u,r,a1,a2)[i])+(p1[i-1]*p12*f(u,s,b1,b2)[i])+(p1[i-1]*(1-(p11+p12))*f(u,t,c1,c2)[i])+
(p2[i-1]*p21*f(u,r,a1,a2)[i])+(p2[i-1]*p22*f(u,s,b1,b2)[i])+(p2[i-1]*(1-(p21+p22))*f(u,t,c1,c2)[i])+
(p3[i-1]*p31*f(u,r,a1,a2)[i])+(p3[i-1]*p32*f(u,s,b1,b2)[i])+(p3[i-1]*(1-(p31+p32))*f(u,t,c1,c2)[i])
p1[i]<-((p1[i-1]*p11*f(u,r,a1,a2)[i])+(p1[i-1]*p12*f(u,s,b1,b2)[i])+(p1[i-1]*(1-(p11+p12))*f(u,t,c1,c2)[i]))/g[i]
p2[i]<-((p2[i-1]*p21*f(u,r,a1,a2)[i])+(p2[i-1]*p22*f(u,s,b1,b2)[i])+(p2[i-1]*(1-(p21+p22))*f(u,t,c1,c2)[i]))/g[i]
p3[i]<-((p3[i-1]*p31*f(u,r,a1,a2)[i])+(p3[i-1]*p32*f(u,s,b1,b2)[i])+(p3[i-1]*(1-(p31+p32))*f(u,t,c1,c2)[i]))/g[i]
}
return(-sum(log(g)))
}
Optimization:
library(QRM)
library(copula)
start <- c(0,1,0,0,0,0,1,9,7,-1,10,13,1,6,4)  # numeric vector of starting values (optim() expects a vector, not a list)
##
optim(start,loglik,lower=c(rep(0,6),-1,1,1,-1,1,1,-1,1,1),
upper=c(rep(1,6),1,Inf,Inf,1,Inf,Inf,1,Inf,Inf),
method="L-BFGS-B") -> fit
This looks like a question for Stack Overflow.
Something that springs to my mind is:
Define a vector containing the values of f(., ., ., .) in order to avoid performing k*nrow(u) evaluations of the same function, and simply index the entries of interest, as sketched below.
It seems like the loop could be replaced by matrix and/or vector products. However, without further information it is unclear what the code is doing, and it would take eons to extract that information from the code.
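For the first point, a minimal sketch reusing f, u, and the parameter names from the question:
# evaluate each of the three density vectors once, outside the loop
f1 <- f(u, r, a1, a2)
f2 <- f(u, s, b1, b2)
f3 <- f(u, t, c1, c2)
# then, inside loglik, replace every f(u, r, a1, a2)[i] with f1[i],
# every f(u, s, b1, b2)[i] with f2[i], and every f(u, t, c1, c2)[i] with f3[i];
# e.g. the first step of the recursion becomes
g[1] <- p1_0 * f1[1] + p2_0 * f2[1] + (1 - (p1_0 + p2_0)) * f3[1]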