How to implement q-learning in R?

I am learning about Q-learning and found the Wikipedia article and this website.
Following the tutorials and pseudocode, I have written this much in R:
#q-learning example
#http://mnemstudio.org/path-finding-q-learning-tutorial.htm
#https://en.wikipedia.org/wiki/Q-learning
set.seed(2016)
iter=100
dimension=5;
alpha=0.1 #learning rate
gamma=0.8 #exploration/ discount factor
# n x n matrix
Q=matrix( rep( 0, len=dimension*dimension), nrow = dimension)
Q
# R -1 is fire pit,0 safe path and 100 Goal state########
R=matrix( sample( -1:0, dimension*dimension,replace=T,prob=c(1,2)), nrow = dimension)
R[dimension,dimension]=100
R #reward matrix
################
for(i in 1:iter){
  row=sample(1:dimension,1)
  col=sample(1:dimension,1)
  I=Q[row,col] #randomly choosing initial state
  Q[row,col]=Q[row,col]+alpha*(R[row,col]+gamma*max(Qdash-Q[row,col])
  #equation from wikipedia
}
But I have a problem with max(Qdash-Q[row,col]), which according to the website should be Max[Q(next state, all actions)]. How do I programmatically search all actions for the next state?
The second problem is this pseudocode:
Do While the goal state hasn't been reached.
Select one among all possible actions for the current state.
Using this possible action, consider going to the next state.
Get maximum Q value for this next state based on all possible actions.
Compute: Q(state, action) = R(state, action) + Gamma * Max[Q(next state, all actions)]
Set the next state as the current state.
End Do
Is it this?
while(Q<100){
  Q[row,col]=Q[row,col]+alpha*(R[row,col]+gamma*max(Qdash-Q[row,col])
}

This post is by no means a complete implementation of Q-learning in R. It is an attempt to answer the OP with regards to the description of the algorithm in the website linked in the post and in Wikipedia.
The assumption here is that the reward matrix R is as described in the website. Namely that it encodes reward values for possible actions as non-negative numbers, and -1's in the matrix represent null values (i.e., where there is no possible action to transition to that state).
With this setup, an R implementation of the Q update is:
Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])
where
cs is the current state at the current point in the path.
ns is the new state based on a (randomly) chosen action at the current state. This action is chosen from the collection of possible actions at the current state (i.e., for which R[cs,] > -1). Since the state transition itself is deterministic here, the action is the transition to the new state.
For this action resulting in ns, we want to add its maximum (future) value over all possible actions that can be taken at ns. This is the so-called Max[Q(next state, all actions)] term in the linked website and the "estimate of optimal future value" in Wikipedia. To compute this, we want to maximize over the ns-th row of Q but consider only columns of Q for which columns of R at the corresponding ns-th row are valid actions (i.e., for which R[ns,] > -1). Therefore, this is:
max(Q[ns, which(R[ns,] > -1)])
An interpretation of this value is a one-step look ahead value or an estimate of the cost-to-go in dynamic programming.
The equation in the linked website is the special case in which alpha, the learning rate, is 1. We can view the equation in Wikipedia as:
Q[cs,ns] <- (1-alpha)*Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]))
where alpha "interpolates" between the old value Q[cs,ns] and the learned value R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]). As noted in Wikipedia,
In fully deterministic environments, a learning rate of alpha=1 is optimal
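Setting alpha = 1 in the expression above makes the interpolation collapse, recovering exactly the update used on the website (same notation as before):
## special case alpha = 1: the old value Q[cs,ns] drops out entirely
Q[cs,ns] <- R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)])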
Putting it all together into a function:
q.learn <- function(R, N, alpha, gamma, tgt.state) {
  ## initialize Q to be zero matrix, same size as R
  Q <- matrix(rep(0, length(R)), nrow=nrow(R))
  ## loop over episodes
  for (i in 1:N) {
    ## for each episode, choose an initial state at random
    cs <- sample(1:nrow(R), 1)
    ## iterate until we get to the tgt.state
    while (1) {
      ## choose next state from possible actions at current state
      ## Note: if only one possible action, then choose it;
      ## otherwise, choose one at random
      next.states <- which(R[cs,] > -1)
      if (length(next.states)==1)
        ns <- next.states
      else
        ns <- sample(next.states, 1)
      ## this is the update
      Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])
      ## break out of while loop if target state is reached
      ## otherwise, set next.state as current.state and repeat
      if (ns == tgt.state) break
      cs <- ns
    }
  }
  ## return resulting Q normalized by max value
  return(100*Q/max(Q))
}
where the input parameters are:
R is the rewards matrix as defined in the linked website
N is the number of episodes to iterate
alpha is the learning rate
gamma is the discount factor
tgt.state is the target state of the problem.
Using the example in the linked website as a test:
N <- 1000
alpha <- 1
gamma <- 0.8
tgt.state <- 6
R <- matrix(c(-1,-1,-1,-1,0,-1,-1,-1,-1,0,-1,0,-1,-1,-1,0,-1,-1,-1,0,0,-1,0,-1,0,-1,-1,0,-1,0,-1,100,-1,-1,100,100),nrow=6)
print(R)
## [,1] [,2] [,3] [,4] [,5] [,6]
##[1,] -1 -1 -1 -1 0 -1
##[2,] -1 -1 -1 0 -1 100
##[3,] -1 -1 -1 0 -1 -1
##[4,] -1 0 0 -1 0 -1
##[5,] 0 -1 -1 0 -1 100
##[6,] -1 0 -1 -1 0 100
Q <- q.learn(R, N, alpha, gamma, tgt.state)
print(Q)
## [,1] [,2] [,3] [,4] [,5] [,6]
##[1,] 0 0 0.0 0 80 0.00000
##[2,] 0 0 0.0 64 0 100.00000
##[3,] 0 0 0.0 64 0 0.00000
##[4,] 0 80 51.2 0 80 0.00000
##[5,] 64 0 0.0 64 0 100.00000
##[6,] 0 80 0.0 0 80 99.99994
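As a quick usage sketch (not part of the original answer; it simply reuses R, Q and tgt.state from above), the greedy path from any starting state can be read off the learned Q by repeatedly moving to the highest-valued allowed action:
cs <- 3                                     ## any starting state
path <- cs
while (cs != tgt.state) {
  allowed <- which(R[cs,] > -1)             ## actions available in this state
  cs <- allowed[which.max(Q[cs, allowed])]  ## greedily pick the highest-valued one
  path <- c(path, cs)
}
path                                        ## sequence of visited states, ending in tgt.state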

Related

Wrong result from constrOptim function

I'm trying to use constrOptim to minimize the sum of squared errors from a linear multiple regression. The main equation should be D = Beta1*Xa + Beta2*Xb + Beta3*Xc + Beta4*Xd, with D, Xa, Xb, Xc, Xd coming from an imported .csv file, and the Betas are the coefficients I want to find by minimizing the squared errors.
So far I have imported the .csv file into R, named the columns Ds, Xa, Xb, Xc, Xd, and created the objective function, intended to be sum(E^2):
(sum(D) - sum(Beta1*Xa + Beta2*Xb + Beta3*Xc + Beta4*Xd))^2
I then created the matrix 'C' and vector 'd' to configure the constraints that should restrict the Betas to <= 0. I don't know how to find the feasible region, although I've used initial values that made the function work.
Here is the code:
> Tabela= read.table("Simulacao.csv", header=T, sep= ";")
> Tabela
D A B C D.1
1 -1 1 -1 0 0
2 4 0 0 1 -1
3 4 1 0 -1 0
4 0 0 1 0 -1
5 -2 1 0 0 -1
> Ds= Tabela[,1]
> Xa= Tabela[,2]
> Xb= Tabela[,3]
> Xc= Tabela[,4]
> Xd= Tabela[,5]
> simulaf= function(x1,x2,x3,x4) {
+ Ds= Tabela[,1]
+ Xa= Tabela[,2]
+ Xb= Tabela[,3]
+ Xc= Tabela[,4]
+ Xd= Tabela[,5]
+ J=sum(Ds)
+ H=sum(x1*Xa+x2*Xb+x3*Xc+x4*Xd)
+ sx=(J-H)^2
+ return(sx)
+ }
> s= function(x) {simulaf(x[1],x[2],x[3],x[4])}
> d= c(0,0,0,0)
> C= matrix(c(-1,0,0,0,0,-1,0,0,0,0,-1,0,0,0,0,-1),nrow=4,ncol=4,byrow=T)
> constrOptim(c(-1,-1,-1,-1),s,NULL,C,d)
$par
[1] -0.2608199 -0.8981110 -1.1095961 -1.9274866
The result I expect should be:
$par
[1] -0.125 0 -0.5 -0.875
After researching this, my conclusion is that it could be because I'm using bad initial values, a parameterization problem (I don't understand why it's needed), or simply that I have programmed it incorrectly.
What do I need to do to fix this?
The formula for the sum of squared errors is
sum((y - yhat)^2)
and not
(sum(y) - sum(yhat))^2
where yhat is the predicted value.
Also, if your only constraints are that the estimated betas should be negative (which is a bit weird, usually you want them to be positive but never mind), then you don't need constrOptim. Regular optim(method="L-BFGS-B") or nlminb will work with so-called box constraints.
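A minimal sketch of the fix, assuming the vectors Ds, Xa, Xb, Xc, Xd are defined as in the question (the function name sse and the starting values are just placeholders):
sse <- function(b) {
  yhat <- b[1]*Xa + b[2]*Xb + b[3]*Xc + b[4]*Xd
  sum((Ds - yhat)^2)          # squared error per observation, then summed
}
# box constraints: each beta restricted to <= 0, no constrOptim needed
fit <- optim(par = c(-1, -1, -1, -1), fn = sse,
             method = "L-BFGS-B", lower = -Inf, upper = 0)
fit$par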

EM algorithm for multivariate t mixed models

I'm trying to implement an EM algorithm for family data where I'm assuming my observations have a multivariate t distribution. I have only two siblings per family, so all of the family groups have only two observations. Basically I'm trying to follow the E(C)M steps in this article:
https://pdfs.semanticscholar.org/9445/ef865c4eb1431f9cb2abdb5efc1c361172cc.pdf
However, now I'm not sure if EM is doable for this kind of data, since my correlation matrix Psi should be block diagonal for families.
So here's an R example of how my families are structured
fam_id = sort(rep(1:5, 2))
Z= matrix(0, nrow = length(fam_id), ncol = length(unique(fam_id)))
colnames(Z) = unique(fam_id)
k = 1
i = 1
# Random effects dummy matrix
while (k <= ncol(Z)) {
  Z[i:(i+1), k] = c(1, 1)
  k = k + 1
  i = i + 2
}
> Z
1 2 3 4 5
[1,] 1 0 0 0 0
[2,] 1 0 0 0 0
[3,] 0 1 0 0 0
[4,] 0 1 0 0 0
...
The EM algorithm chokes after the 5th iteration, saying that the correlation matrix Psi is not invertible:
Error in solve.default(psi_hat) :
system is computationally singular
If anyone could shed some light on this, I'd be very happy!
Please check this answer on the Stats StackExchange (Cross Validated) site:
https://stats.stackexchange.com/questions/76488/error-system-is-computationally-singular-when-running-a-glm
You are probably ending up with a non-invertible matrix in your 5th iteration.
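A small diagnostic sketch, not from the original answer: before calling solve(), you can check how close the matrix is to singular, and (as a crude stopgap that does change the estimate) add a tiny ridge to the diagonal. Here psi_hat stands for whatever matrix your E-step inverts.
## psi_hat: the (block-diagonal) matrix being inverted in the E-step
if (rcond(psi_hat) < 1e-12) {                      ## reciprocal condition number near 0 => numerically singular
  warning("psi_hat is nearly singular")
  psi_hat <- psi_hat + diag(1e-8, nrow(psi_hat))   ## tiny ridge; crude stopgap only
}
psi_inv <- solve(psi_hat)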

Extract knots, basis, coefficients and predictions for P-splines in adaptive smooth

I'm using the mgcv package to fit some polynomial splines to some data via:
x.gam <- gam(cts ~ s(time, bs = "ad"), data = x.dd,
             family = poisson(link = "log"))
I'm trying to extract the functional form of the fit. x.gam is a gamObject, and I've been reading the documentation but haven't found enough information in order to manually reconstruct the fitted function.
x.gam$smooth contains information about where the knots have been placed;
x.gam$coefficients gives the spline coefficients, but I don't know what order polynomial splines are used and looking in the code has not revealed anything.
Is there a neat way to extract the knots, coefficients and basis used so that one can manually reconstruct the fit?
I don't have your data, so I take the following example from ?adaptive.smooth to show you where you can find the information you want. Note that although this example uses Gaussian rather than Poisson data, only the link function differs; all the rest is the same.
library(mgcv)
x <- 1:1000/1000 # data on [0, 1]
mu <- exp(-400*(x-.6)^2)+5*exp(-500*(x-.75)^2)/3+2*exp(-500*(x-.9)^2)
y <- mu+0.5*rnorm(1000)
b <- gam(y~s(x,bs="ad",k=40,m=5))
Now, all information on smooth construction is stored in b$smooth; we take it out:
smooth <- b$smooth[[1]] ## extract smooth object for first smooth term
knots:
smooth$knots gives you location of knots.
> smooth$knots
[1] -0.081161 -0.054107 -0.027053 0.000001 0.027055 0.054109 0.081163
[8] 0.108217 0.135271 0.162325 0.189379 0.216433 0.243487 0.270541
[15] 0.297595 0.324649 0.351703 0.378757 0.405811 0.432865 0.459919
[22] 0.486973 0.514027 0.541081 0.568135 0.595189 0.622243 0.649297
[29] 0.676351 0.703405 0.730459 0.757513 0.784567 0.811621 0.838675
[36] 0.865729 0.892783 0.919837 0.946891 0.973945 1.000999 1.028053
[43] 1.055107 1.082161
Note that three external knots are placed beyond each side of [0, 1] to construct the spline basis.
basis class
attr(smooth, "class") tells you the type of spline. As you can read from ?adaptive.smooth, for bs = ad, mgcv use P-splines, hence you get "pspline.smooth".
mgcv use 2nd order pspline, you can verify this by checking the difference matrix smooth$D. Below is a snapshot:
> smooth$D[1:6,1:6]
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 1 -2 1 0 0 0
[2,] 0 1 -2 1 0 0
[3,] 0 0 1 -2 1 0
[4,] 0 0 0 1 -2 1
[5,] 0 0 0 0 1 -2
[6,] 0 0 0 0 0 1
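For comparison (a small check, not part of the original answer), a 2nd order difference matrix with the same banded structure can be built directly in base R:
diff(diag(6), differences = 2)   ## rows of 1 -2 1 shifted along, as in smooth$D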
coefficients
You already know that b$coefficients contains the model coefficients:
beta <- b$coefficients
Note this is a named vector:
> beta
(Intercept) s(x).1 s(x).2 s(x).3 s(x).4 s(x).5
0.37792619 -0.33500685 -0.30943814 -0.30908847 -0.31141148 -0.31373448
s(x).6 s(x).7 s(x).8 s(x).9 s(x).10 s(x).11
-0.31605749 -0.31838050 -0.32070350 -0.32302651 -0.32534952 -0.32767252
s(x).12 s(x).13 s(x).14 s(x).15 s(x).16 s(x).17
-0.32999553 -0.33231853 -0.33464154 -0.33696455 -0.33928755 -0.34161055
s(x).18 s(x).19 s(x).20 s(x).21 s(x).22 s(x).23
-0.34393354 -0.34625650 -0.34857906 -0.05057041 0.48319491 0.77251118
s(x).24 s(x).25 s(x).26 s(x).27 s(x).28 s(x).29
0.49825345 0.09540020 -0.18950763 0.16117012 1.10141701 1.31089436
s(x).30 s(x).31 s(x).32 s(x).33 s(x).34 s(x).35
0.62742937 -0.23435309 -0.19127140 0.79615752 1.85600016 1.55794576
s(x).36 s(x).37 s(x).38 s(x).39
0.40890236 -0.20731309 -0.47246357 -0.44855437
basis matrix / model matrix / linear predictor matrix (lpmatrix)
You can get model matrix from:
mat <- predict.gam(b, type = "lpmatrix")
This is an n-by-p matrix, where n is the number of observations and p is the number of coefficients. This matrix has column names:
> head(mat[,1:5])
(Intercept) s(x).1 s(x).2 s(x).3 s(x).4
1 1 0.6465774 0.1490613 -0.03843899 -0.03844738
2 1 0.6437580 0.1715691 -0.03612433 -0.03619157
3 1 0.6384074 0.1949416 -0.03391686 -0.03414389
4 1 0.6306815 0.2190356 -0.03175713 -0.03229541
5 1 0.6207361 0.2437083 -0.02958570 -0.03063719
6 1 0.6087272 0.2688168 -0.02734314 -0.02916029
The first column is all 1's, giving the intercept, while s(x).1 indicates the first basis function for s(x). If you want to see what an individual basis function looks like, you can plot a column of mat against your variable. For example:
plot(x, mat[, "s(x).20"], type = "l", main = "20th basis")
linear predictor
If you want to manually construct the fit, you can do:
pred.linear <- mat %*% beta
Note that this is exactly what you can get from b$linear.predictors or
predict.gam(b, type = "link")
response / fitted values
For non-Gaussian data, if you want the response variable, apply the inverse link function to the linear predictor to map back to the original scale.
Family information is stored in gamObject$family, and gamObject$family$linkinv is the inverse link function. The above example uses the identity link, but for your fitted object x.gam you can do:
x.gam$family$linkinv(x.gam$linear.predictors)
Note this is the same as x.gam$fitted, or
predict.gam(x.gam, type = "response").
Other links
I have just realized that there were quite a lot of similar questions before.
This answer by Gavin Simpson is great, for predict.gam( , type = 'lpmatrix').
This answer is about predict.gam(, type = 'terms').
But anyway, the best reference is always ?predict.gam, which includes extensive examples.

lpSolveAPI in RStudio

I am using the lpSolveAPI package in RStudio. When I type the name of a model with few decision variables, I can read a printout of the current constraints in the model. For example
> lprec
Model name:
COLONE COLTWO COLTHREE COLFOUR
Minimize 1 3 6.24 0.1
THISROW 0 78.26 0 2.9 >= 92.3
THATROW 0.24 0 11.31 0 <= 14.8
LASTROW 12.68 0 0.08 0.9 >= 4
Type Real Real Real Real
Upper Inf Inf Inf 48.98
Lower 28.6 0 0 18
But when I make a model that has more than 9 decision variables, it no longer gives the full summary and I instead see:
> lprec
Model name:
a linear program with 13 decision variables and 258 constraints
Does anyone know how I can see the same detailed summary of the model when there are large numbers of decision variables?
Bonus Question: Is RStudio the best console for working with R?
Here is an example:
>lprec <- make.lp(0,5)
This makes a new model called lprec, with 0 constraints and 5 variables. Even if you call the name now you get:
>lprec
Model name:
C1 C2 C3 C4 C5
Minimize 0 0 0 0 0
Kind Std Std Std Std Std
Type Real Real Real Real Real
Upper Inf Inf Inf Inf Inf
Lower 0 0 0 0 0
The C columns correspond to the 5 variables. Right now there are no constraints and the objective function is 0.
You can add a constraint with
>add.constraint(lprec, c(1,3,4,2,-8), "<=", 0)
This is the constraint C1 + 3*C2 + 4*C3 + 2*C4 - 8*C5 <= 0. Now the print out is:
Model name:
C1 C2 C3 C4 C5
Minimize 0 0 0 0 0
R1 1 3 4 2 -8 <= 0
Kind Std Std Std Std Std
Type Real Real Real Real Real
Upper Inf Inf Inf Inf Inf
Lower 0 0 0 0 0
Anyway the point is that no matter how many constraints, if there are more than 9 variables then I don't get the full print out.
>lprec <- make.lp(0,15)
>lprec
Model name:
a linear program with 15 decision variables and 0 constraints
Write it out to a file for examination
When I work with LPs using lpSolveAPI, I prefer to write them out to a file. The lp format works fine for my needs. I then examine the LP model using any text editor. If you click on the output file in the "Files" panel in RStudio, it will open it too, and you can inspect it.
write.lp(lprec, "lpfilename.lp", "lp") #write it to a file in LP format
You can also write it out as MPS format if you so choose.
Here's the help file on write.lp().
Hope that helps.
Since it is an S3 object of class lpExtPtr, the function called to display it is print.lpExtPtr. If you check its code, you will see that it displays the object differently depending on its size -- details for very big objects would not be very useful. Unfortunately, the threshold cannot be changed.
class(r)
# [1] "lpExtPtr"
print.lpExtPtr
# function (x, ...)
# {
# (...)
# if (n > 8) {
# cat(paste("Model name: ", name.lp(x), "\n", " a linear program with ",
# n, " decision variables and ", m, " constraints\n",
# sep = ""))
# return(invisible(x))
# }
# (...)
You can access the contents of the object with the various get.* functions, as the print method does.
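For example (a sketch; these accessor names are from lpSolveAPI's documentation as I recall it, so double-check them against the package help):
get.objfn(lprec)        # objective coefficients
get.bounds(lprec)       # lower/upper bounds on the variables
get.constr.type(lprec)  # direction of each constraint
dim(lprec)              # number of constraints and decision variables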
Alternatively, you can just change the print method.
# A function to modify functions
patch <- function( f, before, after ) {
f_text <- capture.output(dput(f))
g_text <- gsub( before, after, f_text )
g <- eval( parse( text = g_text ) )
environment(g) <- environment(f)
g
}
# Sample data
library(lpSolveAPI)
r <- make.lp(0,5)
r # Shows the details
r <- make.lp(0,20)
r # Does not show the details
# Set the threshold to 800 variables instead of 8
print.lpExtPtr <- patch( print.lpExtPtr, "8", "800" )
r # Shows the details

How to replace an element of a symmetric matrix randomly?

Suppose I have a matrix like so:
data=matrix(c(1,0,0,0,0,0,1,0,0.6583,0,0,0,1,0,0,0,0.6583,0,1,0,0,0,0,0,1),nrow=5,ncol=5)
[,1] [,2] [,3] [,4] [,5]
[1,] 1 0.0000 0 0.0000 0
[2,] 0 1.0000 0 0.6583 0
[3,] 0 0.0000 1 0.0000 0
[4,] 0 0.6583 0 1.0000 0
[5,] 0 0.0000 0 0.0000 1
How do I create another matrix, say "data2", such that it has the same number of off-diagonal nonzero elements as "data", but in locations other than those in data? The randomly simulated values should be uniform (so runif).
Here is a somewhat clumsy way to do this. It works well for small matrices but would be too slow if you're going to use this for some very high-dimensional problems.
# Current matrix:
data=matrix(c(1,0,0,0,0,0,1,0,0.6583,0,0,0,1,0,0,0,0.6583,0,1,0,0,0,0,0,1),nrow=5,ncol=5)
# Number of nonzero elements in upper triangle:
no.nonzero <- sum(upper.tri(data)*data > 0)
# Generate same number of new nonzero correlations:
new.cor <- runif(no.nonzero, -1, 1)
# Create new diagonal matrix:
p <- dim(data)[1]
data2 <- diag(1, p, p)
### Insert nonzero correlations: ###
# Step 1. Identify the places where the nonzero elements can be placed:
pairs <- (p^2-p)/2                   # Number of elements in upper triangle
combinations <- matrix(NA, pairs, 2) # Matrix containing indices for those elements (i.e. (1,2), (1,3), ... (2,3), ... and so on)
k <- 0
for (i in 1:(p-1)) {
  for (j in (i+1):p) {
    k <- k+1
    combinations[k,] <- c(i,j)
  }
}
# Step 2. Randomly pick indices:
places <- sample(1:k, no.nonzero)
# Step 3. Insert nonzero correlations (symmetrically):
for (i in 1:no.nonzero) {
  data2[combinations[places[i],1], combinations[places[i],2]] <-
    data2[combinations[places[i],2], combinations[places[i],1]] <- new.cor[i]
}
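A more vectorized variant of the same idea (a sketch, under the same assumption that "nonzero" means strictly positive off-diagonal entries):
p   <- nrow(data)
ut  <- which(upper.tri(data))        # linear indices of the upper triangle
nz  <- sum(data[ut] > 0)             # number of nonzero upper-triangle entries
idx <- sample(ut, nz)                # pick new upper-triangle positions at random
data2 <- diag(p)
data2[idx] <- runif(nz, -1, 1)       # fill them with uniform draws
data2 <- data2 + t(data2) - diag(p)  # mirror to keep the matrix symmetric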
I haven't really understood the question. There are two off-diagonal non-zero elements (both 0.6583) in the example, right? Is a matrix with those two elements the result you want in this case?
data=matrix(c(1,0,0,0,0,0,1,0,0.6583,0,0,0,1,0,0,0,0.6583,0,1,0,0,0,0,0,1),nrow=5,ncol=5)
# Convert to vector
data2 <- as.numeric(data)
# Remove diagonal
data2 <- data2[as.logical(upper.tri(data) | lower.tri(data))]
# Remove 0 elements
data2 <- data2[data2 != 0]
data2
