Solving a system of non-linear equations with only one unknown - r

I am trying to pin down the boundary between high-skill technology and low-skill technology industries in the Task-Based model of the labor market. The boundary lies at the task for which the prices of production of the two technologies are equal.
I is the task at which P1 - P2 = 0, where
P1 <- (mu_L[i]^(-1)*(w_L/delta)^delta)
P2 <- (mu_H[i]^(-1)*(w_H/alpha)^alpha)
It is part of a dynamic problem in which the wages depend on I and I depends on the wages. The further relevant equations are those for the high-skill and low-skill wages, w_H and w_L below. The labour supplies are based on data: around 0.8 for L_L and 0.2 for L_H. Y is for now set to 40,000, also based on data.
task <- seq(0.01,1, by = 0.01)
alpha <- 0.55
delta <- 0.83
mu_H <- task
mu_L <- 1-task
L_L <- 0.8
L_H <- 0.2
Y <- 40000
I_start <- 0.5
I <- I_start
w_H <- alpha * I * (Y / L_H)
w_L <- delta*(1-I)*(Y / L_L)
By definition, I is the task (i) at which the price of low-skill production minus the price of high-skill production is zero. The value of mu depends on the task, and by finding the value of mu for which the equation equals zero, one can find which task I is. mu_H is equal to i, while mu_L is (1-i), with i indicating the task, drawn from a uniform distribution between 0 and 1.
I have written a for loop to account for the dynamic modelling, but I cannot figure out how to pin down I within it: the non-linear system solvers I have found require two unknowns for two equations, and I cannot work out how to impose the equality-to-zero condition.
I have tried the following approaches:
using range:
range2 <- seq(0.01,1, by = 0.001)
range <- (1 - range2)
range[which(range^(-1)*(w_L/delta)^delta == range2^(-1)*(w_H/alpha)^alpha)]
The issue is that the answer lies between steps 14 and 15, and because the steps are too coarse no exact match is found. Nor is one found when I shrink the step size to 1e-8; at some point I get an error that there are too many steps. If I could add a tolerance level here, that might be the solution, but I don't know how to do that. I tried it like this:
r <- range^(-1)*(w_L/delta)^delta - range2^(-1)*(w_H/alpha)^alpha
rr <- ifelse(r<0.01, print(TRUE), print(FALSE))
data.r <- data.frame(r, rr)
length(unique(data.r$rr))
but the tolerance might be too small. At the same time, the difference must lie between 0 and 1, since mu is limited to between 0 and 1, so I feel that a tolerance of 0.01 is not asking too much. In theory that should work.
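Something along these lines, taking the grid point with the smallest absolute price gap instead of demanding an exact match, is roughly what I have in mind (using the w_L and w_H defined above), but I am not sure it is the right way to do it:
range2 <- seq(0.01, 0.99, by = 0.001)  # candidate tasks i, avoiding the endpoints
gap <- (1/(1 - range2))*(w_L/delta)^delta - (1/range2)*(w_H/alpha)^alpha
range2[which.min(abs(gap))]            # grid point where the price gap is smallest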
I tried the nleqslv package but it gives me answers outside of the range [0,1].
dslnex <- function(x) {
y <- numeric(2)
y[1] <- (x[1]^(-1)*(w_L/delta)^delta)
y[2] <- (x[2]^(-1)*(w_H/alpha)^alpha)
y
}
xstart=c(0.5, 0.5)
nleqslv(xstart, dslnex, control = list())
I am hoping to find the value of mu for which the two price equations are equal, so that I can pin down the task (i) at which this happens.

You can use uniroot() for this:
f <- function(x) {
(1/x)*(w_L/delta)^delta - (1/(1-x))*(w_H/alpha)^alpha
}
uniroot(f, c(1e-2, 1))
Solution:
$root
[1] 0.888271
$f.root
[1] -1.03622
$iter
[1] 9
$init.it
[1] NA
$estim.prec
[1] 6.103516e-05
For more precision (if you need it) you can reduce the tolerance: uniroot(f, c(1e-2, 1), tol = 1e-8) gives a root at 0.8882506
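Since you mention that the wages themselves depend on I, one way to handle the dynamic part is to wrap uniroot() inside a simple fixed-point loop: solve for I at the current wages, update the wages from your wage equations, and repeat until I stops moving. A sketch, using the parameter values and the mu_H = i, mu_L = 1 - i convention from your question (the convergence tolerance and iteration cap are arbitrary choices):
alpha <- 0.55; delta <- 0.83
L_L <- 0.8; L_H <- 0.2; Y <- 40000
I <- 0.5                                  # starting value
for (iter in 1:100) {
  w_H <- alpha * I * (Y / L_H)            # wage equations from the question
  w_L <- delta * (1 - I) * (Y / L_L)
  g <- function(i) (1/(1 - i))*(w_L/delta)^delta - (1/i)*(w_H/alpha)^alpha
  I_new <- uniroot(g, c(1e-6, 1 - 1e-6), tol = 1e-10)$root
  if (abs(I_new - I) < 1e-8) { I <- I_new; break }
  I <- I_new
}
I                                         # fixed point of the wage/threshold system
If the plain update were to oscillate or diverge, you could damp it, e.g. I <- 0.5*I + 0.5*I_new.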


generating random x and y coordinates with a minimum distance

Is there a way in R to generate random coordinates with a minimum distance between them?
E.g. what I'd like to avoid
x <- c(0,3.9,4.1,8)
y <- c(1,4.1,3.9,7)
plot(x~y)
This is a classical problem from stochastic geometry. Completely random points in space, where the numbers of points falling in disjoint regions are independent of each other, correspond to a homogeneous Poisson point process (in this case in R^2, but it could be almost any space).
An important feature is that the total number of points has to be random before you can have independence of the counts of points in disjoint regions.
For the Poisson process points can be arbitrarily close together. If you define a process by sampling the Poisson process until you don't have any points that are too close together you have the so-called Gibbs Hardcore process. This has been studied a lot in the literature and there are different ways to simulate it. The R package spatstat has functions to do this. rHardcore is a perfect sampler, but if you want a high intensity of points and a big hard core distance it may not terminate in finite time... The distribution can be obtained as the limit of a Markov chain and rmh.default lets you run a Markov chain with a given Gibbs model as its invariant distribution. This finishes in finite time but only gives a realisation of an approximate distribution.
In rmh.default you can also simulate conditional on a fixed number of points. Note that when you sample in a finite box there is of course an upper limit to how many points you can fit with a given hard core radius, and the closer you are to this limit the more problematic it becomes to sample correctly from the distribution.
Example:
library(spatstat)
beta <- 100; R = 0.1
win <- square(1) # Unit square for simulation
X1 <- rHardcore(beta, R, W = win) # Exact sampling -- beware it may run forever for some par.!
plot(X1, main = paste("Exact sim. of hardcore model; beta =", beta, "and R =", R))
minnndist(X1) # Observed min. nearest neighbour dist.
#> [1] 0.102402
Approximate simulation
model <- rmhmodel(cif="hardcore", par = list(beta=beta, hc=R), w = win)
X2 <- rmh(model)
#> Checking arguments..determining simulation windows...Starting simulation.
#> Initial state...Ready to simulate. Generating proposal points...Running Metropolis-Hastings.
plot(X2, main = paste("Approx. sim. of hardcore model; beta =", beta, "and R =", R))
minnndist(X2) # Observed min. nearest neighbour dist.
#> [1] 0.1005433
Approximate simulation conditional on number of points
X3 <- rmh(model, control = rmhcontrol(p=1), start = list(n.start = 42))
#> Checking arguments..determining simulation windows...Starting simulation.
#> Initial state...Ready to simulate. Generating proposal points...Running Metropolis-Hastings.
plot(X3, main = paste("Approx. sim. given n =", 42))
minnndist(X3) # Observed min. nearest neighbour dist.
#> [1] 0.1018068
OK, how about this? You just generate random number pairs without restriction and then remove the ones which are too close. This could be a great start for that:
minimumDistancePairs <- function(x, y, minDistance){
i <- 1
repeat{
distance <- sqrt((x-x[i])^2 + (y-y[i])^2) < minDistance # pythagorean theorem
distance[i] <- FALSE # distance to oneself is always zero
if(any(distance)) { # if too close to any other point
x <- x[-i] # remove element from x
y <- y[-i] # and remove element from y
} else { # otherwise...
i = i + 1 # repeat the procedure with the next element
}
if (i > length(x)) break
}
data.frame(x,y)
}
minimumDistancePairs(
c(0,3.9,4.1,8)
, c(1,4.1,3.9,7)
, 1
)
will lead to
x y
1 0.0 1.0
2 4.1 3.9
3 8.0 7.0
Be aware, though, that these are not random numbers anymore (however you solve the problem).
You can use rejection sampling: https://en.wikipedia.org/wiki/Rejection_sampling
The principle is simple: you resample until your data satisfy the condition.
> set.seed(1)
>
> x <- rnorm(2)
> y <- rnorm(2)
> (x[1]-x[2])^2+(y[1]-y[2])^2
[1] 6.565578
> while((x[1]-x[2])^2+(y[1]-y[2])^2 > 1) {
+ x <- rnorm(2)
+ y <- rnorm(2)
+ }
> (x[1]-x[2])^2+(y[1]-y[2])^2
[1] 0.9733252
>
The following is a naive hit-and-miss approach which for some choices of parameters (which were left unspecified in the question) works well. If performance becomes an issue, you could experiment with the package gpuR which has a GPU-accelerated distance matrix calculation.
rand.separated <- function(n,x0,x1,y0,y1,d,trials = 1000){
for(i in 1:trials){
nums <- cbind(runif(n,x0,x1),runif(n,y0,y1))
if(min(dist(nums)) >= d) return(nums)
}
return(NA) #no luck
}
This repeatedly draws samples of size n in [x0,x1]x[y0,y1] and then throws the sample away if it doesn't satisfy the minimum-distance condition. As a safety, trials guards against an infinite loop. If solutions are hard to find or n is large you might need to increase or decrease trials.
For example:
> set.seed(2018)
> nums <- rand.separated(25,0,10,0,10,0.2)
> plot(nums)
runs almost instantly and produces a scatter plot of the 25 well-separated points.
I'm not sure what you are asking.
If you want random coordinates, here:
c(
runif(1,max=y[1],min=x[1]),
runif(1,max=y[2],min=x[2]),
runif(1,min=y[3],max=x[3]),
runif(1,min=y[4],max=x[4])
)

Constrained optimization of a vector

I have a (non-symmetric) probability matrix, and an observed vector of integer outcomes. I would like to find a vector that maximises the probability of the outcomes, given the transition matrix. Simply, I am trying to estimate a distribution of particles at sea given their ultimate distribution on land, and a matrix of probabilities of a particle released from a given point in the ocean ending up at a given point on the land.
The vector that I want to find is subject to the constraint that all components must lie between 0 and 1 and that the sum of the components must equal 1. I am trying to figure out the best optimisation approach for the problem.
My transition matrix and data set are quite large, but I have created a smaller one here:
I used a simulated known at-sea distribution of
msim<-c(.3,.2,.1,.3,.1,0) and a simulated probability matrix (t) to come up with an estimated coastal matrix (Datasim2), as follows:
t<-matrix (c(0,.1,.1,.1,.1,.2,0,.1,0,0,.3,0,0,0,0,.4,.1,.3,0,.1,0,.1,.4,0,0,0,.1,0,.1,.1),
nrow=5,ncol=6, byrow=T)
rownames(t)<-c("C1","C2","C3","C4","C5") ### locations on land
colnames(t)<-c("S1","S2","S3","S4","S5","S6") ### locations at sea
Datasim<-as.numeric (round((t %*% msim)*500))
Datasim2<-c(rep("C1",95), rep("C2",35), rep("C3",90),rep("C4",15),rep("C5",30))
M <-c(0.1,0.1,0.1,0.1,0.1,0.1) ## starting M
I started with a straightforward function as follows:
EstimateSource3<-function(M,Data,T){
EstEndProbsall<-M%*%T
TotalLkhd<-rep(NA, times=dim(Data)[1])
for (j in 1:dim(Data)[1]){
ObsEstEndLkhd<-0
ObsEstEndLkhd<-1-EstEndProbsall[1,] ## likelihood of particle NOT ending up at locations other than the location of interest
IndexC<-which(colnames(EstEndProbsall)==Data$LocationCode[j], arr.ind=T) ## likelihood of ending up at location of interest
ObsEstEndLkhd[IndexC]<-EstEndProbsall[IndexC]
#Total likelihood
TotalLkhd[j]<-sum(log(ObsEstEndLkhd))
}
SumTotalLkhd<-sum(TotalLkhd)
return(SumTotalLkhd)
}
DistributionEstimate <- optim(par = M, fn = EstimateSource3, Data = Datasim2, T=t,
control = list(fnscale = -1, trace=5, maxit=500), lower = 0, upper = 1)
To constrain the sum to 1, I tried using a few of the suggestions posted here: How to set parameters' sum to 1 in constrained optimization
e.g. adding M<-M/sum(M) or SumTotalLkhd<-SumTotalLkhd-(10*pwr) to the body of the function, but neither yielded anything like msim, and in fact, the 2nd solution came up with the error “L-BFGS-B needs finite values of 'fn'”
I thought perhaps the quadprog package might be of some help, but I don’t think I have a symmetric positive definite matrix…
Thanks in advance for your help!
What about this: let D be the distribution on land, M the distribution at sea, and T the transition matrix. You know D and T, and you want to calculate M. You have
D' = M' T
hence D' T' = M' (T T')
and accordingly D'T'(T T')^(-1) = M'
Basically you solve it as when doing linear regression (seems SO does not support math notation: ' is transpose, ^(-1) is ordinary matrix inverse.)
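In R, with the t and msim defined in the question (t is the 5x6 land-by-sea matrix, so D = t %*% M), that calculation can be sketched with a generalized inverse; MASS::ginv() is convenient here because there are more sea locations than land locations, so the system is underdetermined and the ordinary inverse in the formula does not exist:
library(MASS)
D <- as.numeric(t %*% msim)   # the land distribution implied by msim (in practice your observed D)
M_hat <- ginv(t) %*% D        # minimum-norm least-squares solution for M
round(M_hat, 3)
Note that nothing here forces the components of M_hat to be non-negative or to sum to one; that is where the likelihood approach below comes in.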
Alternatively, D may be counts of particles, and now you can ask questions like: what is the most likely distribution of particles at sea. That needs a different approach though.
Well, I have never done such models but think along the following lines. Let M be of length 3 and D of length 2, and T is hence 3x2. We know T and we observe D_1 particles at location 1 and D_2 particles at location 2.
What is the probability that a particle ends up at location 1? It is Pr(D = 1) = M_1 T_11 + M_2 T_21 + M_3 T_31. Analogously, Pr(D = 2) = M_1 T_12 + M_2 T_22 + M_3 T_32. Now you can easily write the log-likelihood of observing D_1 and D_2 particles at locations 1 and 2. The code might look like this:
loglik <- function(M) {
if(M[1] < 0 | M[1] > 1)
return(NA)
if(M[2] < 0 | M[2] > 1)
return(NA)
M3 <- 1 - M[1] - M[2]
if(M3 < 0 | M3 > 1)
return(NA)
D[1]*log(T[1,1]*M[1] + T[2,1]*M[2] + T[3,1]*M3) +
D[2]*log(T[1,2]*M[1] + T[2,2]*M[2] + T[3,2]*M3)
}
T <- matrix(c(0.1,0.2,0.3,0.9,0.8,0.7), 3, 2)
D <- c(100,200)
library(maxLik)
m <- maxLik(loglik, start=c(0.4,0.4), method="BFGS")
summary(m)
I get the answer (0, 0.2, 0.8) when I estimate it but standard errors are very large.
As I said, I have never done this, so I don't know if it makes sense.
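For completeness, a small sketch of assembling the full at-sea vector from the fit above (it assumes the m object and the M3 = 1 - M[1] - M[2] convention from the code):
est <- coef(m)                 # estimates of M[1] and M[2]
M_hat <- c(est, 1 - sum(est))  # append M3 so the components sum to 1
M_hat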

R optimize not giving the finite minimum but Inf when the search interval is wider

I have a problem with optimize().
When I limit the search in a small interval around zero, e.g., (-1, 1), the optimize algorithm gives a finite minimum with a finite objective function value.
But when I make the interval wider to (-10, 10), then the minimum is on the boundary of the interval and the objective is Inf, which is really puzzling for me.
How can this happen and how to fix this? Thanks a lot in advance.
The following is my code.
set.seed(123)
n <- 120
c <- rnorm(n,mean=1,sd=.3);
eps <- rnorm(n,mean=0,sd=5)
tet <- 32
r <- eps * c^tet
x <- matrix(c(c,r), ncol=2)
g <- function(tet, x){
matrix((x[,1]^(-tet))*x[,2],ncol=1)
}
theta <- 37
g_t <- g(theta,x)
f.tau <- function(tau){
exp.tau.g <- exp(g_t %*% tau)
g.exp <- NULL; i <- 1:n
g.exp <- matrix(exp.tau.g[i,] * g_t[i,], ncol=1)
sum.g.exp <- apply(g.exp,2,sum)
v <- t(sum.g.exp) %*% sum.g.exp
return(v)
}
band.tau <- 1;
f <- optimize(f.tau, c(-band.tau, band.tau), tol=1e-20)
print("interval=(-1, 1)"); print(f);
band.tau <- 10;
f <- optimize(f.tau, c(-band.tau, band.tau), tol=1e-20)
print("interval=(-10, 10)"); print(f);
The problem is that your function f.tau(x) is not well behaved. You can see that here:
vect.f <- Vectorize(f.tau)
z1 <- seq(-1,1,by=0.01)
z10 <- seq(-10,10,by=0.01)
par(mfrow=c(2,1), mar=c(2,2,1,1))
plot(z1, log(vect.f(z1)), type="l")
plot(z10,log(vect.f(z10)),type="l")
Note that these are plots of log(f.tau). So there are two problems: f.tau(...) has an extremely large slope on either side of the minimum, and f.tau = Inf for x < -0.6 and x > 1.0, where Inf means that f.tau(...) is greater than the largest number that can be represented on this system. When you set the range to (-1,1) your starting point is close enough to the minimum that optimize(...) manages to converge. When you set the limits to (-10,10) the starting point is too far away. There are examples in the documentation which show a similar problem with functions that are not nearly as ill-behaved as f.tau.
EDIT (Response to OP's comment)
The main problem is that you are trying to optimize a function which has computational infinities in the interval of interest. Here's a way around that.
band.tau <- 10
z <- seq(-band.tau,band.tau,length=1000)
vect.f <- Vectorize(f.tau)
interval <- range(z[is.finite(vect.f(z))])
f <- optimize(f.tau, interval, tol=1e-20)
f
# $minimum
# [1] 0.001615433
#
# $objective
# [,1]
# [1,] 7.157212e-12
This evaluates f.tau(x) at 1000 equally spaced points on (-band.tau,+band.tau), identifies all the values of x where f.tau is finite, and uses that range as the search interval in optimize(...). This works in your case because f.tau(x) does not (appear to...) have asymptotes.

Portfolio optimization with Differential evolution

I am facing an optimization problem: I need to optimize a portfolio for the Omega return measure. I found suggestions that this can be done with differential evolution via DEoptim (Yollin's very nice slides on R tools for portfolio optimization; the original code can be found there).
I tried to adapt this method to my problem (I only changed numbers, so I hope I didn't make any mistakes; full credit to the author for the idea):
optOmega <-function(x,ret,L){ #function I want to optimize and
retu = ret %*% x # x is vector of asset weights
obj = -Omega(retu,L=L,method="simple") #Omega from PerformanceAnalytics
weight.penalty = 100*(1-sum(x))^2
return( obj + weight.penalty )
}
L=0 #Parameter which defines loss
#in Omega calculation
lower = rep(0,30) #I want weight to be in bounds
upper = rep(1,30) # 0<=x<=1
res = DEoptim(optOmega,lower,upper, #I have 30 assets in StockReturn
control=list(NP=2000,itermax=100,F=0.2,CR=0.8),
ret=coredata(StockReturn),L=L)
Omega is calculated as mean(pmax(retu-L,0))/mean(pmax(L-retu,0))
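In plain R that is, as far as I understand what PerformanceAnalytics computes with method="simple":
omega_simple <- function(retu, L = 0) mean(pmax(retu - L, 0)) / mean(pmax(L - retu, 0))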
When the number of assets is very small (5, for example), I get results which pretty much satisfy me: the asset weights add up to 0.999????, which is fairly close to one, and the Omega of such a portfolio is greater than the Omega of any single asset (otherwise, why not invest everything in that single asset). This can be reached with 100 iterations.
But when I increase the number of assets to 30, the result is not satisfying. The sum of the weights comes to 3 or more, and Omega is lower than that of some single assets. I thought this might be due to too small a number of iterations (I used 1000), so I tried 10,000, which is painfully slow. But the result is pretty much the same: the weights add up to far more than 1 and Omega does not seem optimal. With 10 assets the algorithm seems to find weights close to 1, but Omega is lower than that of a single asset.
My PC is quite old (an Intel Core Duo at 2 GHz). Still, is it normal for such an optimization with 1000 iterations to run for ~40 minutes?
What might be the problem here? Is the number of iterations too small, or is my interpretation of the provided algorithm totally wrong? Thank you for your help!
If I comment out the control argument in your call to DEoptim, I have much better results:
the sum of the weights is closer to 1 (it was 3), and the objective is better than for the 1-asset portfolios (it was worse).
# Sample data
library(xts)
n <- 600
k <- 26
StockReturn <- matrix( rnorm(n*k), nc=k )
colnames(StockReturn) <- LETTERS[1:k]
StockReturn <- xts( StockReturn, seq.Date(Sys.Date(), length=n, by=1) )
# Objective
library(PerformanceAnalytics)
optOmega <- function(x, ret = coredata(StockReturn), L=0) {
penalty <- (1-sum(x))^2
x <- x/sum(x)
objective <- -Omega( ret %*% x, L=L, method="simple" )
objective + penalty
}
# Optimization
library(DEoptim)
lower <- rep(0,k)
upper <- rep(1,k)
res <- DEoptim(
optOmega, lower, upper,
# control = list(NP=2000, itermax=100, F=0.2, CR=0.8),
ret = coredata(StockReturn), L = L
)
# Check the results
w <- res$optim$bestmem
sum(w) # Close to 1
w <- w / sum(w)
optOmega(w) # Better (lower) than for the 1-asset portfolios
min( apply( diag(k), 2, optOmega ) )

Generate Random Numbers with Std Dev x and Fixed Product

I want to generate a series of returns x such that the standard deviation of the returns is, say, 0.03 and the product of 1+x is 1. To summarise, there are two conditions on the returns:
1) sd(x) == 0.03
2) prod(1+x) == 1
Is this possible and if so, how can I implement it in R?
Thank you.
A slightly more sophisticated approach is to use knowledge of the log-normal distribution: from ?dlnorm, Var = exp(2*mu + sigma^2)*(exp(sigma^2) - 1). We want the geometric mean to equal 1, so the mean on the log scale should be 0. That gives Var = exp(sigma^2)*(exp(sigma^2)-1); we can't obviously solve this analytically, but we can use uniroot:
Find the correct log-variance:
vfun <- function(s2,v=0.03^2) { exp(s2)*(exp(s2)-1)-v }
s2 <- uniroot(vfun,interval=c(1e-6,100))$root
Generate values:
set.seed(1001)
x <- rnorm(1000,mean=0,sd=sqrt(s2))
x <- exp(x-mean(x))-1 ## centre the log-values so their sum is exactly zero (hence prod(1+x) == 1)
prod(1+x) ## exactly 1
sd(x)
This produces values with a standard deviation not exactly equal to 0.03, but close. If we wanted we could fix this too ...
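One way to do that, as a sketch: keep the log-values centred (so the product of 1+x stays exactly 1 no matter how they are scaled) and choose the scale factor with uniroot() so that sd(x) hits 0.03:
set.seed(1001)
z  <- rnorm(1000)
zc <- z - mean(z)                            # centred log-values, sum(zc) == 0
sd_gap <- function(s) sd(exp(s * zc) - 1) - 0.03
s  <- uniroot(sd_gap, c(1e-6, 1), tol = 1e-12)$root
x  <- exp(s * zc) - 1
prod(1 + x)                                  # exactly 1: it equals exp(s*sum(zc))
sd(x)                                        # 0.03 to numerical precision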
A very simple approach is to simply simulate returns until you have a set that satisfies your requirements. You will need to specify a tolerance for your requirements, though (see here why).
nn <- 10
epsilon <- 1e-3
while ( TRUE ) {
xx <- rnorm(nn,0,0.03)
if ( abs(sd(xx)-0.03)<epsilon & abs(prod(1+xx)-1)<epsilon ) break
}
xx
yields
[1] 0.007862226 -0.011437600 -0.038740969 0.028614022 0.006986953
[6] -0.004131429 0.030846398 -0.037977057 0.046448318 -0.025294236
