constrOptim function: Optimize R code

I am trying to solve an optimization problem.
Below is the mathematical explanation of the problem and the code I used:
F = {f_1, f_2, ..., f_n}
S = {s_1, s_2, ..., s_m}
Here m is always greater than n, and sum(S) is always greater than sum(F).
Let ST = transpose(S).
Find an n x m matrix P = {p_ij} such that P %*% ST = F, where %*% is matrix multiplication, subject to the following constraints:
p_ij >= 0 for all i and j
sum of p_ij over i = 1, ..., n is at most 1 for each j (i.e. every column of P sums to at most 1).
Since an exact solution may not exist, I try to minimize the error [P %*% ST - F] . [P %*% ST - F], where . is the dot product; in other words, I minimize ||P %*% ST - F||^2 subject to the constraints above.
So this is a constrained optimization problem, for which I use the following code.
F = c(10,10,5)
S = c(8,8,9,8,4)
loss_fun <- function(P){
  T = matrix(S*P, nrow = n, ncol = m, byrow = TRUE)  # reshape the S-weighted P into an n x m matrix
  F2 = rowSums(T)   # predicted values of F
  E = F - F2        # error
  return(sum(E*E))
}
n = length(F)
m = length(S)
P_init = c(rep(0.0001,n*m)) #Initial solution (theta)
# Creating constraint matrix
ui_1 = matrix(0, ncol = n*m, nrow = m)
for (i in 1:m){
  for (j in 1:(n*m)) {
    if (i %% m == j %% m) ui_1[i,j] = -1  # one row per column-sum constraint of P
  }
}
ui_2 = diag(1,ncol = n*m, nrow = m*n)
my_ui <- rbind(ui_1,ui_2)
# Creating constraint vector
my_ci = c(rep(-1,m),rep(0,n*m))
z = constrOptim(P_init,loss_fun,NULL,ui=my_ui, ci=my_ci)
#result
P_final = matrix(z$par, nrow = n, byrow = TRUE)
#verification of result
T = t(S*t(P_final))  # proportion matrix * S; transpose so the multiplication is by row
F2 = rowSums(T)      # predicted values of F
E = F - F2           # error
sum(E*E)
The above code works fine and runs in less than 0.5 seconds on my machine, which has an i5 CPU (4 cores), 8 GB RAM, 64-bit Windows 7, and 64-bit R 3.1.1.
However, when I used the F and S from my real problem, it ran for around 15 hours without producing any output. F has 39 elements and S has 196.
F = c(212,359,186,396,460,449,206,180,383,264,294,179,256,294,173,415,363,323,389,219,298,338,287,434,195,450,120,460,164,395,198,108,72,345,54,450,420,488,262)
S = c(233.81,0,1.13,59.68,0,768.18,12.33,147.56,115.2,537.32,0,144.35,93.63,13.43,48.58,60,78.26,1280,369.62,8.11,0,342.96,452.99,521.72,4995.58,0,0,10.59,8.1,38.89,161.67,186.14,0,83.22,13.89,37.35,2370,0,0,8.61,4.95,6.31,0,1.53,3600,0,12.48,444.26,0,8490,615.25,27.11,402.95,393.46,1.26,0,44.36,728.85,37.61,159.06,103.63,145.38,0.51,0,0,18.6,3.24,44.5,17.46,210,128.03,19.48,340.79,54.79,54.42,48.48,0,44.76,0,0,0,43.19,102.03,0,0,470,0,101,0,9060,6.09,8.33,49.09,0,19.72,170,57.54,128.78,636.01,10.93,38.79,0,0,49.65,173.58,101.96,21.84,2.55,14.55,770,7419.13,216.21,238.15,582.95,57.93,26.97,71.88,4.63,0,31,103.37,570.58,45.79,540,348.9,151.82,207.41,29.56,51.73,92.25,0,0,51.39,25.14,0,0,95.21,298.94,5.77,154.29,280,1666.59,40.19,0,9.37,119.76,0,0,9.17,28.19,67.5,129.62,85.41,24.59,3607.98,0,130.28,99.57,0,0,0,36.23,1140,328.87,0,0,0,40,22.77,0,2.08,0,0,0,14.66,0,102.86,50.06,13.22,62.25,1410,860,930,646.15,0,0,0,0,890,0,0,12.61,86.4,95.35,19.31,87.74
)
The rbind itself takes 3 to 4 seconds, but the real problem is the time taken by constrOptim.
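For a sense of scale: with n = 39 and m = 196 the parameter vector has n*m = 7644 entries and my_ui has m + n*m = 7840 rows; in addition, since grad is NULL, constrOptim falls back to Nelder-Mead, which copes poorly with thousands of parameters. A rough illustration of the matrix size:
## rough size of the constraint matrix for the real data (illustration only)
n <- 39; m <- 196
c(rows = m + n*m, cols = n*m)      # 7840 x 7644
(m + n*m) * (n*m) * 8 / 2^20       # ~457 MB as a dense double matrix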

Because your constraints are simple, you can avoid the big-matrix calculations in the constraint part by using a package that accepts a function as the constraint argument, such as alabama.
loss_fun <- function(P){
  T = matrix(S*P, nrow = n, ncol = m, byrow = TRUE)
  F2 = rowSums(T)   # predicted values of F
  E = F - F2        # error
  return(sum(E*E))
}
n = length(F)
m = length(S)
P_init = c(rep(0.0001, n*m)) #Initial solution (theta)
# Creating the inequality constraint function (this is much faster than my_ui %*% P - my_ci)
hin <- function(P){
  P_mat <- matrix(P, nrow = m)      # m x n; rowSums(P_mat) are the column sums of P
  c(1 - rowSums(P_mat), P)          # constraints: colSums(P) <= 1 and P >= 0
}
library(alabama)
aug_res <- auglag(P_init, loss_fun, hin = hin, control.outer = list(kkt2.check = FALSE))
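A quick check of the result, mirroring the verification code from the question (aug_res$par holds the flattened P, laid out row by row exactly as loss_fun expects):
P_final <- matrix(aug_res$par, nrow = n, byrow = TRUE)
F2 <- rowSums(t(S * t(P_final)))  # predicted values of F
sum((F - F2)^2)                   # residual sum of squares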


Improved inverse transform method for Poisson random variable generation in R

I am reading Section 4.2 in Simulation (2006, 4th ed., Elsevier) by Sheldon M. Ross, which introduces generating a Poisson random variable by the inverse transform method.
Denote p_i = P(X = i) = e^{-λ} λ^i / i!, i = 0, 1, ..., and F(i) = P(X <= i) = Σ_{k=0}^{i} p_k as the PMF and CDF of the Poisson distribution, respectively, which can be computed via dpois(x, lambda) and ppois(x, lambda) in R.
There are two inverse transform algorithms for Poisson: the regular version and the improved one.
The steps for the regular version are as follows:
Simulate an observation U from U(0,1).
Set i = 0 and F = F(0) = p_0 = e^{-λ}.
If U < F, select X = i and terminate.
If U >= F, set i = i + 1 and F = F + p_i, and return to the previous step.
I write and test the above steps as follows:
### write the regular R code
pois_inv_trans_regular = function(n, lambda){
  X = rep(0, n)                     # generate n samples
  for(m in 1:n){
    U = runif(1)
    i = 0; F = exp(-lambda)         # initialize
    while(U >= F){
      i = i + 1; F = F + dpois(i, lambda)   # F = F + p_i
    }
    X[m] = i
  }
  X
}
### test the code (for small λ, e.g. λ=3)
set.seed(0); X = pois_inv_trans_regular(n=10000,lambda=3); c(mean(X),var(X))
# [1] 3.005000 3.044079
Note that the mean and variance of Poisson(λ) are both λ, so the regular version appears to be written and tested correctly.
Next I tried the improved one, which is designed for large λ and described according to the book as follows:
The regular algorithm needs 1 + λ searches on average, i.e. O(λ) computational complexity, which is fine when λ is small but can be greatly improved upon when λ is large.
Indeed, since a Poisson random variable with mean λ is most likely to take on one of the two integer values closest to λ, a more efficient algorithm would first check one of these values, rather than starting at 0 and working upward. For instance, let I = Int(λ) and recursively determine F(I).
Now generate a Poisson random variable X with mean λ by generating a random number U, noting whether or not X <= I by seeing whether or not U <= F(I). Then search downward starting from I in the case where X <= I, and upward starting from I+1 otherwise.
It is said that the improved algorithm needs only 1 + 0.798√λ searches on average, i.e. it has O(√λ) complexity.
I tried to write the R code for the improved version as follows:
### write the improved R code
pois_inv_trans_improved = function(n, lambda){
  X = rep(0, n)                          # generate n samples
  p = function(x) {dpois(x, lambda)}     # PMF: p(x) = P(X = x) = λ^x exp(-λ)/x!
  F = function(x) {ppois(x, lambda)}     # CDF: F(x) = P(X <= x)
  I = floor(lambda)                      # I = Int(λ)
  F1 = F(I); F2 = F(I+1)                 # the two closest values
  for(k in 1:n){
    U = runif(1)
    i = I
    if ( F1 < U & U <= F2 ) {
      i = I + 1
    }
    while (U <= F1){                     # search downward
      i = i - 1; F1 = F1 - p(i)
    }
    while (U > F2){                      # search upward
      i = i + 1; F2 = F2 + p(i)
    }
    X[k] = i
  }
  X
}
### test the code (for large λ, e.g. λ=100)
set.seed(0); X = pois_inv_trans_improved(n=10000,lambda=100); c(mean(X),var(X))
# [1] 100.99900000 0.02180118
The simulation result for c(mean(X), var(X)) is [1] 100.99900000 0.02180118, which is nonsense for the variance part. How should I remedy this issue?
The main problem was that F1 and F2 were modified within the loop and never reset, so eventually a very wide range of U values was treated as falling "in the middle".
The second problem was that in the downward search the p(i) used should correspond to the original i, because F(x) = P(X <= x). Without this, the code hangs for low U.
The easiest fix is to start with i = I + 1. Then the "in the middle" if statement isn't needed.
pois_inv_trans_improved = function(n, lambda){
  X = rep(0, n)                          # generate n samples
  p = function(x) {dpois(x, lambda)}     # PMF: p(x) = P(X = x) = λ^x exp(-λ)/x!
  `F` = function(x) {ppois(x, lambda)}   # CDF: F(x) = P(X <= x)
  I = floor(lambda)                      # I = Int(λ)
  F1 = F(I); F2 = F(I+1)                 # the two closest values
  for(k in 1:n){
    U = runif(1)
    i = I + 1
    # if ( F1 < U & U <= F2 ) {
    #   i = I + 1
    # }
    F1tmp = F1
    while (U <= F1tmp){                  # search downward
      i = i - 1; F1tmp = F1tmp - p(i)
    }
    F2tmp = F2
    while (U > F2tmp){                   # search upward
      i = i + 1; F2tmp = F2tmp + p(i)
    }
    X[k] = i
  }
  X
}
This gives:
[1] 100.0056 102.2380
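As an extra sanity check (not in the original answer), the same inverse transform is available in vectorized form through the Poisson quantile function, which should give a similar mean and variance:
set.seed(0)
Y <- qpois(runif(10000), lambda = 100)  # vectorized inverse transform
c(mean(Y), var(Y))                      # both should be close to 100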

Sum of arrays of different dimension in R

I have this code:
k = 20
rho = 0.5
pi_greco = array(rep(1/k, k), c(1,20,50))
pi_greco_x <- function(k, rho, pi_greco){
  E = array(diag(k), c(20, 20, 50))
  E[k,k,] = 0
  prob = (1-rho)*pi_greco + rho*E
  return(prob)
}
and in prob I need to sum each 1x20 matrix of pi_greco (multiplied by 1-rho) with the corresponding 20x20 matrix of E (multiplied by rho), in order to get 50 different matrices in prob. But how can I do it?
It is like saying that I would like to do
prob = (1-rho)*pi_greco[,,1] + rho*E[,,1]
but for all 50 slices, without using a for loop.
Thanks in advance.
Consider this:
k = 3
rho = 0.5
pi_greco = array(rep(1/k, k), c(1,20,50))
library(abind)  # for abind()
pi_greco_x <- function(k, rho, pi_greco){
  E = array(diag(k), c(k, 20, 50))
  E[k,k,] = 0
  p_rows <- Reduce(
    f = function(a, b){ abind(a, (1-rho)*pi_greco, along = 1) },
    x = 1:k,
    init = NULL
  )
  prob <- p_rows + rho*E
  return(prob)
}
pi_greco_x( k, rho, pi_greco )
I repeat the 1x20x50 array to make a 20x20x50 array by using abind 20 times.
This means the two arrays can now safely be added together. Working with more than two dimensions can be problematic; the typical human brain isn't very used to it.
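A sketch of an alternative that avoids abind altogether, assuming (as in the question) that pi_greco is 1 x 20 x 50, E is 20 x 20 x 50, and the 1 x 20 row should be recycled across the rows of each slice: sweep() can add a 20 x 50 slab over margins 2 and 3.
prob <- sweep(rho * E, c(2, 3), (1 - rho) * pi_greco[1, , ], "+")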

REBayes Error in KWDual MSK_RES_TRM_STALL

I am trying to run the simulation below. Note that this does require Mosek and Rmosek to be installed!
I keep getting the error
Error in KWDual(A, d, w, ...) :
Mosek error: MSK_RES_TRM_STALL: The optimizer is terminated due to slow progress.
How can I resolve the MSK_RES_TRM_STALL error?
Further Research
When looking up the documentation for this I found this:
The optimizer is terminated due to slow progress.
Stalling means that numerical problems prevent the optimizer from making reasonable progress and that it makes no sense to continue. In many cases this happens if the problem is badly scaled or otherwise ill-conditioned. There is no guarantee that the solution will be feasible or optimal. However, often stalling happens near the optimum, and the returned solution may be of good quality. Therefore, it is recommended to check the status of the solution. If the solution status is optimal the solution is most likely good enough for most practical purposes.
Please note that if a linear optimization problem is solved using the interior-point optimizer with basis identification turned on, the returned basic solution is likely to have high accuracy, even though the optimizer stalled.
Some common causes of stalling are a) badly scaled models, b) near feasible or near infeasible problems.
So I checked the final value A, but nothing was in it. I found that if I change the simulations from 1000 to 30 I do get values (A <- sim1(30, 30, setting = 1)), but this is suboptimal.
Reproducible Script
KFE <- function(y, T = 300, lambda = 1/3){
  # Kernel Fourier Estimator: Stefanski and Carroll (Statistics, 1990)
  ks <- function(s, x) exp(s^2/2) * cos(s * x)
  K <- function(t, y, lambda = 1/3){
    k <- y
    for(i in 1:length(y)){
      k[i] <- integrate(ks, 0, 1/lambda, x = (y[i] - t))$value/pi
    }
    mean(k)
  }
  eps <- 1e-04
  if(length(T) == 1) T <- seq(min(y) - eps, max(y) + eps, length = T)
  g <- T
  for(j in 1:length(T))
    g[j] <- K(T[j], y, lambda = lambda)
  list(x = T, y = g)
}
BDE <- function(y, T = 300, df = 5, c0 = 1){
  # Bayesian Deconvolution Estimator: Efron (B'ka, 2016)
  require(splines)
  eps <- 1e-04
  if(length(T) == 1) T <- seq(min(y) - eps, max(y) + eps, length = T)
  X <- ns(T, df = df)
  a0 <- rep(0, ncol(X))
  A <- dnorm(outer(y, T, "-"))
  qmle <- function(a, X, A, c0){
    g <- exp(X %*% a)
    g <- g/sum(g)
    f <- A %*% g
    -sum(log(f)) + c0 * sum(a^2)^.5
  }
  ahat <- nlm(qmle, a0, X = X, A = A, c0 = c0)$estimate
  g <- exp(X %*% ahat)
  g <- g/integrate(approxfun(T, g), min(T), max(T))$value
  list(x = T, y = g)
}
W <- function(G, h, interp = FALSE, eps = 0.001){
  # Wasserstein distance: ||G - H||_W
  H <- cumsum(h$y)
  H <- H/H[length(H)]
  W <- integrate(approxfun(h$x, abs(G(h$x) - H)), min(h$x), max(h$x))$value
  list(W = W, H = H)
}
biweight <- function(x0, x, bw){
  t <- (x - x0)/bw
  (1 - t^2)^2 * ((t > -1 & t < 1) - 0) * 15/16
}
Wasser <- function(G, h, interp = FALSE, eps = 0.001, bw = 0.7){
  # Wasserstein distance: ||G - H||_W
  if(interp == "biweight"){
    yk = h$x
    for (j in 1:length(yk))
      yk[j] = sum(biweight(h$x[j], h$x, bw = bw) * h$y/sum(h$y))
    H <- cumsum(yk)
    H <- H/H[length(H)]
  } else {
    H <- cumsum(h$y)
    H <- H/H[length(H)]
  }
  W <- integrate(approxfun(h$x, abs(G(h$x) - H)), min(h$x), max(h$x),
                 rel.tol = 0.001, subdivisions = 500)$value
  list(W = W, H = H)
}
sim1 <- function(n, R = 10, setting = 0){
  A <- matrix(0, 4, R)
  if(setting == 0){
    G0 <- function(t) punif(t, 0, 6)/8 + 7 * pnorm(t, 0, 0.5)/8
    rf0 <- function(n){
      s <- sample(0:1, n, replace = TRUE, prob = c(1,7)/8)
      rnorm(n) + (1 - s) * runif(n, 0, 6) + s * rnorm(n, 0, 0.5)
    }
  } else {
    G0 <- function(t) 0 + 7 * (t > 0)/8 + (t > 2)/8
    rf0 <- function(n){
      s <- sample(0:1, n, replace = TRUE, prob = c(1,7)/8)
      rnorm(n) + (1 - s) * 2 + s * 0
    }
  }
  for(i in 1:R){
    y <- rf0(n)
    g <- BDE(y)
    Wg <- Wasser(G0, g)
    h <- GLmix(y)
    Wh <- Wasser(G0, h)
    Whs <- Wasser(G0, h, interp = "biweight")
    k <- KFE(y)
    Wk <- Wasser(G0, k)
    A[,i] <- c(Wg$W, Wk$W, Wh$W, Whs$W)
  }
  A
}
require(REBayes)
set.seed(12)
A <- sim1(1000, 1000, setting = 1)
I ran the code and indeed it stalls at the end, but the solution is not any worse than in the preceding cases that solve without stall:
17 1.7e-07 3.1e-10 6.8e-12 1.00e+00 5.345949918e+00 5.345949582e+00 2.4e-10 0.40
18 2.6e-08 3.8e-11 2.9e-13 1.00e+00 5.345949389e+00 5.345949348e+00 2.9e-11 0.41
19 2.6e-08 3.8e-11 2.9e-13 1.00e+00 5.345949389e+00 5.345949348e+00 2.9e-11 0.48
20 2.6e-08 3.8e-11 2.9e-13 1.00e+00 5.345949389e+00 5.345949348e+00 2.9e-11 0.54
Optimizer terminated. Time: 0.62
Interior-point solution summary
Problem status : PRIMAL_AND_DUAL_FEASIBLE
Solution status : OPTIMAL
Primal. obj: 5.3459493890e+00 nrm: 6e+00 Viol. con: 2e-08 var: 0e+00 cones: 4e-09
Dual. obj: 5.3459493482e+00 nrm: 7e-01 Viol. con: 1e-11 var: 4e-11 cones: 0e+00
A quick hack for now that worked for me is to relax the termination tolerances a little bit in the call to GLmix:
control <- list()
control$dparam <- list(INTPNT_CO_TOL_REL_GAP=1e-7,INTPNT_CO_TOL_PFEAS=1e-7,INTPNT_CO_TOL_DFEAS=1e-7)
h <- GLmix(y,control=control,verb=5)
A better solution, as I indicated in the comments, is for the REBayes package not to treat the stall termination code as an error, but to use the solution status/quality instead.
I have modified the return from KWDual to avoid such messages provided that
the status sol$itr$solsta from Mosek is "Optimal"; this is in REBayes v2.2, now on CRAN.
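If you are on an older REBayes version that still turns the stall into an error, one rough workaround (a sketch, not from the package author) is to catch the error inside sim1's loop and skip that replication:
h <- tryCatch(GLmix(y), error = function(e) NULL)  # replaces h <- GLmix(y) inside sim1
if (is.null(h)) next                               # skip this replication (its column in A stays 0)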

Markowitz model / portfolio optimization using local search in R

I am taking baby steps in using metaheuristics for solving constrained optimization problems. I am trying to solve the basic Markowitz mean-variance optimization model (given below) using the NMOF package in R.
Min
lambda * [sum_{i=1}^{N} sum_{j=1}^{N} w_i * w_j * Sigma_ij] - (1 - lambda) * [sum_{i=1}^{N} w_i * mu_i]
subject to
sum_{i=1}^{N} w_i = 1
0 <= w_i <= 1; i = 1, ..., N
where lambda takes values between 0 and 1, and N is the number of assets.
Following is my code (based on the book Numerical Methods and Optimization in Finance):
library(NMOF)
na <- dim(fundData)[2L]
ns <- dim(fundData)[1L]
Sigma <- cov(fundData)
winf <- 0.0
wsup <- 1.0
m <- colMeans(fundData)
resample <- function(x,...) x[sample.int(length(x),...)]
data <- list(R = t(fundData),
             m = m,
             na = dim(fundData)[2L],
             ns = dim(fundData)[1L],
             Sigma = Sigma,
             eps = 0.5/100,
             winf = winf,
             wsup = wsup,
             nFP = 100)
w0 <- runif(data$na); w0 <- w0/sum(w0)
OF <- function(w, data){
  wmu <- crossprod(w, m)
  res <- crossprod(w, data$Sigma)
  res <- tcrossprod(w, res)
  result <- res - wmu
}
neighbour <- function(w, data){
  toSell <- w > data$winf
  toBuy <- w < data$wsup
  i <- resample(which(toSell), size = 1L)
  j <- resample(which(toBuy), size = 1L)
  eps <- runif(1) * data$eps
  eps <- min(w[i] - data$winf, data$wsup - w[j], eps)
  w[i] <- w[i] - eps
  w[j] <- w[j] + eps
  w
}
algo <- list(x0 = w0, neighbour = neighbour, nS = 5000L)
system.time(sol1 <- LSopt(OF, algo, data))
I am not sure how to include lambda in the objective function (OF). The above code does not include lambda in OF. I tried using a for loop, but it resulted in the following error:
OF <- function(w, data){
  lambdaSeq <- seq(.001, 0.999, length = data$nFP)
  for(lambda in lambdaSeq){
    wmu <- crossprod(w, m)
    res <- crossprod(w, data$Sigma)
    res <- tcrossprod(w, res)
    result <- lambda*res - (1-lambda)*wmu
  }
}
Error:
Local Search.
Initial solution:
| | 0%
Error in if (xnF <= xcF) { : argument is of length zero
Timing stopped at: 0.01 0 0.03
It would be nice if someone could help me in this regard.
P.S.: I am also aware that this can be solved using quadratic programming. This is just a first step towards including other constraints.
If I understand correctly, you want to replicate the mean-variance efficient frontier by Local Search? Then you need to run a Local Search for every value of lambda that you want to include in the frontier.
The following example should help you get going. I start by attaching the package and setting up the list data.
require("NMOF")
data <- list(m = colMeans(fundData),   ## expected returns
             Sigma = cov(fundData),    ## expected var of returns
             na = dim(fundData)[2L],   ## number of assets
             eps = 0.2/100,            ## stepsize for LS
             winf = 0,                 ## minimum weight
             wsup = 1,                 ## maximum weight
             lambda = 1)
Next I compute a benchmark for the minimum-variance case (i.e. lambda equals one).
## benchmark: the QP solution
## ==> this will only work with a recent version of NMOF,
##     which you can get by saying:
##     install.packages('NMOF', type = 'source',
##                      repos = c('http://enricoschumann.net/R',
##                                getOption('repos')))
##
require("quadprog")
sol <- NMOF:::minvar(data$Sigma, 0, 1)
Objective function and neighbourhood function. I have slightly simplified both functions (for clarity; using crossprod in the objective function would probably be more efficient).
OF <- function(w, data){
  data$lambda * (w %*% data$Sigma %*% w) -
    (1 - data$lambda) * sum(w * data$m)
}
neighbour <- function(w, data){
  toSell <- which(w > data$winf)
  toBuy <- which(w < data$wsup)
  i <- toSell[sample.int(length(toSell), size = 1L)]
  j <- toBuy[sample.int(length(toBuy), size = 1L)]
  eps <- runif(1) * data$eps
  eps <- min(w[i] - data$winf, data$wsup - w[j], eps)
  w[i] <- w[i] - eps
  w[j] <- w[j] + eps
  w
}
Now we can run Local Search. Since it is a fairly large dataset (200 assets),
you will need a relatively large number of steps to reproduce the QP solution.
w0 <- runif(data$na) ## a random initial solution
w0 <- w0/sum(w0)
algo <- list(x0 = w0, neighbour = neighbour, nS = 50000L)
sol1 <- LSopt(OF, algo, data)
You can compare the weights you get from Local Search with the QP solution.
par(mfrow = c(3,1), mar = c(2,4,1,1), las = 1)
barplot(sol, main = "QP solution")
barplot(sol1$xbest, main = "LS solution")
barplot(sol - sol1$xbest,
        ylim = c(-0.001, 0.001))  ## +/- 0.1%
Finally, if you want to compute the whole frontier, you need to rerun this code for different levels of data$lambda.
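A minimal sketch of that rerun, assuming the objects data, OF, neighbour and algo defined above are still in place (the lambda grid is arbitrary):
lambdas <- seq(0.1, 1, by = 0.1)            # grid of lambda values to trace the frontier
frontier <- vector("list", length(lambdas))
for (i in seq_along(lambdas)) {
  data$lambda <- lambdas[i]
  frontier[[i]] <- LSopt(OF, algo, data)$xbest
}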

R: How to add jitter only on singular matrices within a function?

I have the following function that I need to (m)apply on a list of more than 1500 large matrices (Z) and a list of vectors (p) of the same length. However, I get the error that some matrices are singular, as I already posted here. Here is my function:
kastner <- function(item, p) { print(item)
  imp <- rowSums(Z[[item]])
  exp <- colSums(Z[[item]])
  x = p + imp
  ac = p + imp - exp
  einsdurchx = 1/as.vector(x)
  einsdurchx[is.infinite(einsdurchx)] <- 0
  A = Z[[item]] %*% diag(einsdurchx)
  R = solve(diag(length(p)) - A) %*% diag(p)
  C = ac * einsdurchx
  R_bar = diag(as.vector(C)) %*% R
  rR_bar = round(R_bar)
  return(rR_bar)
}
and my mapply command that also prints the names of the running matrix:
KASTNER <- mapply(kastner, names(Z), p, SIMPLIFY = FALSE)
In order to overcome the singularity problem, I want to add a small amount of jitter to the singular matrices. The problem starts in line 9 of the function, R = solve(diag(length(p))-A) %*% diag(p), as the term (diag(length(p))-A) becomes singular and can't be solved. I tried to add jitter to all Z matrices in the first line of the function using Z <- lapply(Z, function(x) jitter(x, factor = 0.0001, amount = NULL)), but this jitter is very, very small and still produces errors.
Therefore my idea is to check with if/else or something similar whether the matrix diag(length(p))-A is singular (maybe using eigenvalues to check collinearity) and add jitter only to those matrices; otherwise the solve command should be performed as it is. Any ideas how to implement this in the function? Thanks.
Here is some example data, although there is no singularity problem with it, as I was not able to reproduce the error in line 9:
Z <- list("111.2012" = matrix(c(0,0,100,200,0,0,0,0,50,350,0,50,50,200,200,0),
                              nrow = 4, ncol = 4, byrow = TRUE),
          "112.2012" = matrix(c(10,90,0,30,10,90,0,10,200,50,10,350,150,100,200,10),
                              nrow = 4, ncol = 4, byrow = TRUE))
p <- list("111.2012" = c(200, 1000, 100, 10), "112.2012" = c(300, 900, 50, 100))
Edit: a small amount of jitter shouldn't be problematic in my data, as probably more than 80% of the entries in my matrices are zeros and the rest are large values. I am only interested in those large values, but the large number of zeros is probably the reason for the singularity; they are needed, though.
Since you didn't provide a working example I couldn't test this easily, so the burden of proof is on you. :) In any case, it should be a starting point for further tinkering. Comments in the code.
kastner <- function(item, p) { print(item)
  imp <- rowSums(Z[[item]])
  exp <- colSums(Z[[item]])
  x = p + imp
  ac = p + imp - exp
  einsdurchx = 1/as.vector(x)
  einsdurchx[is.infinite(einsdurchx)] <- 0
  # start a chunk that repeats until you get a valid result
  do.jitter <- TRUE # bureaucracy
  while (do.jitter == TRUE) {
    # run the code as usual
    A = Z[[item]] %*% diag(einsdurchx)
    # catch any possible errors; you can even catch the "singularity" error here by
    # specifying error = function(e) e
    R <- tryCatch(solve(diag(length(p)) - A) %*% diag(p), error = function(e) "jitterme")
    # if you were able to solve(), and the result is a matrix (careful if it's a vector!)...
    if (is.matrix(R)) {
      # ... turn the while loop off
      do.jitter <- FALSE
    } else {
      # ... else apply some jitter and repeat by constructing A from a jittered Z[[item]]
      Z[[item]] <- jitter(Z[[item]])
    }
  }
  C = ac * einsdurchx
  R_bar = diag(as.vector(C)) %*% R
  rR_bar = round(R_bar)
  return(rR_bar)
}
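If you prefer an explicit singularity test over the tryCatch approach, a sketch using the reciprocal condition number could replace the while loop (the threshold is an assumption you may want to tune):
M <- diag(length(p)) - A
while (rcond(M) < 1e-12) {              # near-singular: perturb and rebuild
  Z[[item]] <- jitter(Z[[item]])
  A <- Z[[item]] %*% diag(einsdurchx)
  M <- diag(length(p)) - A
}
R <- solve(M) %*% diag(p)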
