Regression in R using vectorization and matrices

I have a vectorization question in R using matrices. I have two columns that need to be regressed against each other using certain indices. The data is
matrix_senttoR = [ ...
0.11 0.95
0.23 0.34
0.67 0.54
0.65 0.95
0.12 0.54
0.45 0.43 ] ;
indices_forR = [ ...
1
1
1
2
2
2 ] ;
Col1 in the matrix is data for, say, MSFT and GOOG (3 rows each) and Col2 is the return of the benchmark StkIndex on the corresponding dates. The data is in matrix format as it is sent from MATLAB.
I currently use
slope <- by(data.frame(matrix_senttoR), indices_forR, FUN = function(x)
  { zyp.sen(X1 ~ X2, data = x)$coeff[2] })
betasFac <- sapply(slope, function(x) x + 0)
I'm using data.frame() above as I could not use cbind(). If I use cbind(), MATLAB gives an error because it doesn't understand that format of data. I'm running these commands from inside MATLAB (http://www.mathworks.com/matlabcentral/fileexchange/5051). You can replace zyp.sen with lm.
by() is slow here (maybe because of data frames?). Is there a better way to do it? It takes 14+ seconds for 150k rows of data. Can I instead use matrix vectorization in R? Thanks.

This could easily be moved to a comment, but:
A few things to consider: I tend to avoid the by() function since its return value is a funky object. Instead, why not add your indices_forR vector to the data.frame?
df <- data.frame(matrix_senttoR)
df$indices_forR <- indices_forR
The plyr package does the work from here:
ddply(df,.(indices_forR),function(x) zyp.sen(X1~X2,data=x)$coeff[2])
You can easily parallelize this operation using doMC or doSnow and the argument .parallel=TRUE to ddply.
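For example, a minimal sketch of the doMC route (assuming a Unix-like machine with doMC installed; doSnow works analogously):
library(doMC)
registerDoMC(cores = 4)   # register a parallel backend for plyr
ddply(df, .(indices_forR),
      function(x) zyp.sen(X1 ~ X2, data = x)$coeff[2],
      .parallel = TRUE)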
If speed is the goal, I would also learn the data.table package (which builds on data.frame and is much faster); a sketch follows the example below. Also, I assume that the slow piece is the zyp.sen() call rather than the by() call, so executing on multiple cores will speed this along.
> dput(df)
structure(list(X1 = c(0.11, 0.23, 0.67, 0.65, 0.12, 0.45), X2 = c(0.95,
0.34, 0.54, 0.95, 0.54, 0.43), indices_forR = c(1, 1, 1, 2, 2,
2)), .Names = c("X1", "X2", "indices_forR"), row.names = c(NA,
-6L), class = "data.frame")
> ddply(df, .(indices_forR), function(x) lm(X1~X2, data=x)$coeff[2])
  indices_forR         X2
1            1 -0.3702172
2            2  0.6324900
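And the promised data.table sketch of the same grouped computation (I am assuming the per-group slope of X1 ~ X2 is what you want; swap lm() back to zyp.sen() as needed). Most of the time will still be spent inside the model fit, so the gain over ddply may be modest:
library(data.table)
dt <- as.data.table(df)
# one slope per group; coef(...)[2] is the X2 coefficient
dt[, .(beta = coef(lm(X1 ~ X2, data = .SD))[2]), by = indices_forR]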

I still think that you are overcomplicating things by moving from MATLAB to R and back. And passing 150k rows of data must be slowing things down considerably.
zyp.sen is actually pretty trivial to port to MATLAB. Here you go:
function [intercept, slope, intercepts, slopes, rank, residuals] = ZypSen(x, y)
% Computes a Theil-Sen estimate of slope for a vector of data.
n = length(x);
slopes = arrayfun(@(i) ZypSlopediff(i, x, y, n), 1:(n - 1), ...
    'UniformOutput', false);
slopes = [slopes{:}];
sni = isfinite(slopes);
slope = median(slopes(sni));
intercepts = y - slope * x;
intercept = median(intercepts);
rank = 2;
residuals = x - slope * y + intercept;
end

function z = ZypSlopediff(i, x, y, n)
z = (y(1:(n - i)) - y((i + 1):n)) ./ ...
    (x(1:(n - i)) - x((i + 1):n));
end
I checked this using R's example(zyp.sen), and it gives the same answer.
x = [0 1 2 4 5]
y = [6 4 1 8 7]
[int, sl, ints, sls, ra, res] = ZypSen(x, y)
You should really do some further checking though, just to be sure.
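For the R side of that comparison, something along these lines should reproduce the same intercept and slope (a sketch, assuming the zyp package is installed):
library(zyp)
x <- c(0, 1, 2, 4, 5)
y <- c(6, 4, 1, 8, 7)
zyp.sen(y ~ x, data.frame(x, y))$coeff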

Related

Combinatorial optimization with discrete options in R

I have a function with five variables that I want to maximize using only a specific set of parameters for each variable.
Are there any methods in R that can do this, other than brute force? (e.g. Particle Swarm Optimization, Genetic Algorithm, Greedy, etc.). I have read about a few packages, but they seem to create their own set of parameters from within a given range. I am only interested in optimizing over the set of options provided.
Here is a simplified version of the problem:
#Example of 5 variable function to optimize
Fn <- function(x){
  a = x[1]
  b = x[2]
  c = x[3]
  d = x[4]
  e = x[5]
  SUM = a + b + c + d + e
  return(SUM)
}
#Parameters for variables to optimize
Vars = list(
  As = c(seq(1.5, 3, by = 0.3)),        #float
  Bs = c(1, 2),                         #Binary
  Cs = c(seq(1, 60, by = 10)),          #Integer
  Ds = c(seq(60, -60, length.out = 5)), #Negative
  Es = c(1, 2, 3)
)
#Full combination
FullCombn= expand.grid(Vars)
Results = data.frame(I = as.numeric(), Sum = as.numeric())
for (i in 1:nrow(FullCombn)){
  ParsI = FullCombn[i, ]
  ResultI = Fn(ParsI)
  Results = rbind(Results, c(I = i, Sum = ResultI))
}
#Best iteration (Largest result)
Best=Results[Results[, 2] == max(Results[, 2]),]
#Best parameters
FullCombn[Best$I,]
Two more possibilities. Both minimize by default, so I flip the sign in your objective function (i.e. return -SUM).
#Example of 5 variable function to optimize
Fn <- function(x, ...){
  a = x[1]
  b = x[2]
  c = x[3]
  d = x[4]
  e = x[5]
  SUM = a + b + c + d + e
  return(-SUM)
}
#Parameters for variables to optimize
Vars = list(
  As = c(seq(1.5, 3, by = 0.3)),        #float
  Bs = c(1, 2),                         #Binary
  Cs = c(seq(1, 60, by = 10)),          #Integer
  Ds = c(seq(60, -60, length.out = 5)), #Negative
  Es = c(1, 2, 3)
)
First, a grid search. Exactly what you did, just convenient. And the implementation allows you to distribute the evaluations of the objective function.
library("NMOF")
gridSearch(fun = Fn,
levels = Vars)[c("minfun", "minlevels")]
## 5 variables with 6, 2, 6, 5, ... levels: 1080 function evaluations required.
## $minfun
## [1] -119
##
## $minlevels
## [1] 3 2 51 60 3
An alternative: a simple Local Search. You start with a valid initial guess, and then move randomly through possible feasible solutions. The key ingredient is the neighbourhood function. It picks one element randomly and then, again randomly, sets this element to one allowed value.
nb <- function(x, levels, ...) {
  i <- sample(length(levels), 1)
  x[i] <- sample(levels[[i]], 1)
  x
}
(There would be better algorithms for neighbourhood functions; but this one is simple and so demonstrates the idea well.)
LSopt(Fn, list(x0 = c(1.8, 2, 11, 30, 2), ## a feasible initial solution
               neighbour = nb,
               nI = 200                   ## iterations
               ),
      levels = Vars)$xbest
## Local Search.
## ##...
## Best solution overall: -119
## [1] 3 2 51 60 3
(Disclosure: I am the maintainer of package NMOF, which provides functions gridSearch and LSopt.)
In response to the comment, a few remarks on Local Search and the neighbourhood function above (nb). Local Search, as implemented in
LSopt, will start with an arbitrary solution, and
then change that solution slightly. This new solution,
called a neighbour, will be compared (by its
objective-function value) to the old solution. If the new solution is
better, it becomes the current solution; otherwise it
is rejected and the old solution remains the current one.
Then the algorithm repeats, for a number of iterations.
So, in short, Local Search is not random sampling, but
a guided random-walk through the search space. It's
guided because only better solutions get accepted, worse ones get rejected. In this sense, LSopt will narrow down on good parameter values.
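In pseudocode-ish R, the core of such a Local Search looks roughly like this (a simplified sketch of the idea, not the actual NMOF implementation):
local_search_sketch <- function(fun, x0, neighbour, nI, ...) {
  xc <- x0                    # current solution
  fc <- fun(xc, ...)          # its objective-function value
  for (i in seq_len(nI)) {
    xn <- neighbour(xc, ...)  # propose a random neighbour
    fn <- fun(xn, ...)
    if (fn <= fc) {           # accept only if not worse
      xc <- xn
      fc <- fn
    }
  }
  list(xbest = xc, OFvalue = fc)
}
## e.g. local_search_sketch(Fn, c(1.8, 2, 11, 30, 2), nb, 200, levels = Vars)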
The implementation of the neighbourhood is not ideal, for two reasons. The first is that a solution may not be changed at all, since I sample from all feasible values, including the current one. For a small set of possible values, as here, it may often happen that the same value is selected again. However, for larger search spaces this inefficiency is typically negligible, since the probability of sampling the same value becomes smaller; often so small that the additional code for testing whether the solution has changed becomes more expensive than the occasionally wasted iteration.
A second thing could be improved, albeit through a more
complicated function. And again, for this small problem it does not matter. In the current neighbourhood, an
element is picked and then set to any feasible value.
But that means that changes from one solution to the
next might be large. Instead of picking any feasible values of the As,
in realistic problems it will often be better to pick a
value close to the current value. For example, when you are at 2.1, either move to 1.8 or 2.4, but not to 3.0. (This reasoning is only relevant, of course, if the variable in question is on a numeric or at least ordinal scale.)
Ultimately, what implementation works well can be
tested only empirically. Many more details are in this tutorial.
Here is one alternative implementation. A solution is now a vector of positions for the original values, e.g. if x[1] is 2, it "points" to 1.8; if x[2] is 1, it points to 1; and so on.
## precompute lengths of vectors in Vars
lens <- lengths(Vars)
nb2 <- function(x, lens, ...) {
  i <- sample(length(lens), 1)
  if (x[i] == 1L) {
    x[i] <- 2
  } else if (x[i] == lens[i]) {
    x[i] <- lens[i] - 1
  } else
    x[i] <- x[i] + sample(c(1, -1), 1)
  x
}
## the objective function now needs to map the
## indices in x back to the levels in Vars
Fn2 <- function(x, levels, ...){
  y <- mapply(`[`, levels, x)
  ## => same as
  ## y <- numeric(length(x))
  ## y[1] <- Vars[[1]][x[1]]
  ## y[2] <- Vars[[2]][x[2]]
  ## ....
  SUM <- sum(y)
  return(-SUM)
}
xbest <- LSopt(Fn2,
               list(x0 = c(1, 1, 1, 1, 1), ## an initial solution
                    neighbour = nb2,
                    nI = 200                ## iterations
               ),
               levels = Vars,
               lens = lens)$xbest
## Local Search.
## ....
## Best solution overall: -119
## map the solution back to the values
mapply(`[`, Vars, xbest)
## As Bs Cs Ds Es
## 3 2 51 60 3
Here is a genetic algorithm solution with package GA.
The key is to write a decode function that enforces the constraints; see the package vignette.
library(GA)
#> Loading required package: foreach
#> Loading required package: iterators
#> Package 'GA' version 3.2.2
#> Type 'citation("GA")' for citing this R package in publications.
#>
#> Attaching package: 'GA'
#> The following object is masked from 'package:utils':
#>
#> de
decode <- function(x) {
  As <- Vars$As
  Bs <- Vars$Bs
  Cs <- Vars$Cs
  Ds <- rev(Vars$Ds)
  # fix real variable As
  i <- findInterval(x[1], As)
  if (x[1L] - As[i] < As[i + 1L] - x[1L])
    x[1L] <- As[i]
  else x[1L] <- As[i + 1L]
  # fix binary variable Bs
  if (x[2L] - Bs[1L] < Bs[2L] - x[2L])
    x[2L] <- Bs[1L]
  else x[2L] <- Bs[2L]
  # fix integer variable Cs
  i <- findInterval(x[3L], Cs)
  if (x[3L] - Cs[i] < Cs[i + 1L] - x[3L])
    x[3L] <- Cs[i]
  else x[3L] <- Cs[i + 1L]
  # fix integer variable Ds
  i <- findInterval(x[4L], Ds)
  if (x[4L] - Ds[i] < Ds[i + 1L] - x[4L])
    x[4L] <- Ds[i]
  else x[4L] <- Ds[i + 1L]
  # fix the remaining integer variable Es
  x[5L] <- round(x[5L])
  setNames(x, c("As", "Bs", "Cs", "Ds", "Es"))
}
Fn <- function(x){
  x <- decode(x)
  # a <- x[1]
  # b <- x[2]
  # c <- x[3]
  # d <- x[4]
  # e <- x[5]
  # SUM <- a + b + c + d + e
  SUM <- sum(x, na.rm = TRUE)
  return(SUM)
}
#Parameters for variables to optimize
Vars <- list(
As = seq(1.5, 3, by = 0.3), # Float
Bs = c(1, 2), # Binary
Cs = seq(1, 60, by = 10), # Integer
Ds = seq(60, -60, length.out = 5), # Negative
Es = c(1, 2, 3)
)
res <- ga(type = "real-valued",
fitness = Fn,
lower = c(1.5, 1, 1, -60, 1),
upper = c(3, 2, 51, 60, 3),
popSize = 1000,
seed = 123)
summary(res)
#> ── Genetic Algorithm ───────────────────
#>
#> GA settings:
#> Type = real-valued
#> Population size = 1000
#> Number of generations = 100
#> Elitism = 50
#> Crossover probability = 0.8
#> Mutation probability = 0.1
#> Search domain =
#> x1 x2 x3 x4 x5
#> lower 1.5 1 1 -60 1
#> upper 3.0 2 51 60 3
#>
#> GA results:
#> Iterations = 100
#> Fitness function value = 119
#> Solutions =
#> x1 x2 x3 x4 x5
#> [1,] 2.854089 1.556080 46.11389 49.31045 2.532682
#> [2,] 2.869408 1.638266 46.12966 48.71106 2.559620
#> [3,] 2.865254 1.665405 46.21684 49.04667 2.528606
#> [4,] 2.866494 1.630416 46.12736 48.78017 2.530454
#> [5,] 2.860940 1.650015 46.31773 48.92642 2.521276
#> [6,] 2.851644 1.660358 46.09504 48.81425 2.525504
#> [7,] 2.855078 1.611837 46.13855 48.62022 2.575492
#> [8,] 2.857066 1.588893 46.15918 48.60505 2.588992
#> [9,] 2.862644 1.637806 46.20663 48.92781 2.579260
#> [10,] 2.861573 1.630762 46.23494 48.90927 2.555612
#> ...
#> [59,] 2.853788 1.640810 46.35649 48.87381 2.536682
#> [60,] 2.859090 1.658127 46.15508 48.85404 2.590679
apply(res@solution, 1, decode) |> t() |> unique()
#> As Bs Cs Ds Es
#> [1,] 3 2 51 60 3
Created on 2022-10-24 with reprex v2.0.2

Simple linear transformation of variable in R: changing the scope of a variable. How to make it right?

I am trying to change the value range of a variable (an array, a set of values) while keeping its properties. I don't know the exact name for this in math, but I mean the kind of transformation in which the variable array keeps exactly the same properties and the spacing between the values stays the same, while the range is different. Maybe the code below will explain what I mean.
I just want to "linearly transpose" (or something?) the values to some other range, and the distribution should remain the same. In other words, I'll just change the range of the variable using the regression equation y = a * x + b. I assume that the transformation is completely linear, the correlation between the variables is exactly 1, and I calculate the new variable (array) from a regression equation, actually a system of equations into which I simply substitute the minimum and maximum of both variables:
minimum.y1 = minimum.x1 * a + b
maximum.y2 = maximum.x2 * a + b
from which I can work out the following code to obtain the a and b coefficients:
# this is my input variable
x <- c(-1, -0.5, 0, 0.5, 1)
# this is the range i want to obtain
y.pred <- c(1,2,3,4,5)
max_y = 5
min_y = 1
min_x = min(x)
max_x = max(x)
c1 = max_x-min_x
c2 = max_y-min_y
a.coeff = c2/c1
b.coeff = a.coeff-min_x
y = x * a.coeff + b.coeff
y
# hey, it works! :)
[1] 1 2 3 4 5
The correlation between the variable before and after the transformation is exactly 1, so we have a basis for further action. Let's turn it into a function:
linscale.to.int <- function(max.lengt, vector) {
  max_y = max.lengt
  min_y = 1
  min_x = min(vector)
  max_x = max(vector)
  c1 = max_x - min_x
  c2 = max_y - min_y
  a.coeff = c2/c1
  b.coeff = a.coeff - min_x
  return(vector * a.coeff + b.coeff)
}
x <- c(-1, -0.5, 0, 0.5, 1)
linscale.to.int(5,x)
[1] 1 2 3 4 5
and it works again. But here's the thing: when I apply this function to a random distribution, like this:
x.rand <- rnorm(50)
y.rand <- linscale.to.int(5,x.rand)
plot(x.rand, y.rand)
or, more visibly, this:
x.rand <- rnorm(500)
y.rand <- linscale.to.int(20,x.rand)
plot(x.rand, y.rand)
I get values of the second variable completely out of range; they should be between 1 and 20, but I get values from roughly -1 to 15:
And now the question arises - what am I doing wrong here? Where do I go wrong with such a transformation?
What you are trying to do is very straightforward using rescale from the scales package (which you will already have installed if you have ggplot2 / tidyverse installed). Simply give it the new minimum / maximum values:
x <- c(-1, -0.5, 0, 0.5, 1)
scales::rescale(x, c(1, 5))
#> [1] 1 2 3 4 5
If you want to have your own function written in base R, the following one-liner should do what you want:
linscale_to_int <- function(y, x) (x - min(x)) * (y - 1) / diff(range(x)) + 1
(Note that it is good practice in R to avoid periods in function names because this can cause confusion with S3 method dispatch)
Testing, we have:
x <- c(-1, -0.5, 0, 0.5, 1)
linscale_to_int(5, x)
#> [1] 1 2 3 4 5
x.rand <- rnorm(50)
y.rand <- linscale_to_int(5, x.rand)
plot(x.rand, y.rand)
y.rand <- linscale_to_int(20, x.rand)
plot(x.rand, y.rand)
Created on 2022-08-31 with reprex v2.0.2
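For reference (an addition, not part of the original answer): the reason the asker's own function drifts out of range is the intercept. Solving the two endpoint equations min_y = a*min_x + b and max_y = a*max_x + b gives b = min_y - a*min_x, whereas the original code uses a.coeff - min_x, which only coincides with the correct value because min(x) happens to be -1 in the toy example. A corrected base-R sketch:
rescale_manual <- function(new_max, x, new_min = 1) {
  a <- (new_max - new_min) / (max(x) - min(x))
  b <- new_min - a * min(x)   # this intercept is what the original function got wrong
  a * x + b
}
range(rescale_manual(20, rnorm(500)))   # stays inside [1, 20]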

Multiple precision Gamma function in R

I need to compute a sum in R that involves the gamma function for each element. When the arguments of the gamma function increase, I get NaN as a result, and I suspect that the problem is numerical, in the evaluation of the gamma function. I have already read the documentation of Rmpfr and gmp, but I only found factorials for integers.
I also post the code here, maybe you have a better idea about the source of the error.
V1<-w1*(b1^a1)/gamma(a1)
VV1<-outer(V1,V1)
a1mat<-outer(a1, a1-1, FUN="+")
b1mat<-outer(b1, b1, FUN="+")
A<-sum(VV1*gamma(a1mat)/(b1mat^a1mat))
w1 is an array of positive real numbers that sums to 1; a1 and b1 are vectors of positive values. A becomes NaN when a1 (and a1mat) gets long and contains large values (~150).
Try working in log-space:
w1 <- c(0.2, 0.3, 0.2, 0.1, 0.1, 0.1)
a1 <- 3:8
b1 <- rep(4, 6)
a1mat <- outer(a1, a1 - 1, "+")
b1mat <- outer(b1, b1, "+")
# working in log-space
logV1 <- log(w1) + a1*log(b1) - lgamma(a1)
logVV1 <- outer(logV1, logV1, "+")
sum(exp(logVV1 + lgamma(a1mat) - a1mat*log(b1mat)))
#> [1] 0.4614941
# compared to original
V1 <- w1*(b1^a1)/gamma(a1)
VV1 <- outer(V1, V1)
sum(VV1*gamma(a1mat)/(b1mat^a1mat))
#> [1] 0.4614941
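If you really do need extended precision rather than the log-space trick, mpfr numbers from the Rmpfr package support gamma() directly. A sketch (assuming Rmpfr is installed, and reusing the toy w1, a1, b1 above; it should agree with the value computed above):
library(Rmpfr)
prec <- 256
w1m <- mpfr(w1, prec); a1m <- mpfr(a1, prec); b1m <- mpfr(b1, prec)
V1m <- w1m * b1m^a1m / gamma(a1m)
A <- mpfr(0, prec)
for (i in seq_along(a1)) {
  for (k in seq_along(a1)) {
    aik <- a1m[i] + a1m[k] - 1    # same as outer(a1, a1 - 1, "+")
    bik <- b1m[i] + b1m[k]        # same as outer(b1, b1, "+")
    A <- A + V1m[i] * V1m[k] * gamma(aik) / bik^aik
  }
}
asNumeric(A)
The explicit double loop avoids relying on outer() for mpfr vectors; for large a1 this will be slower than the log-space version, but it will not overflow.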

Sampling using conditional probability table

I am trying to simulate a certain discrete variable depicting the "true state of the world" (say, "red", "green" or "blue") and an indicator that describes it somewhat imperfectly.
r_names <- c("real_R", "real_G", "real_B")
Let's say I have some prior belief about the distribution of the "reality" variable, which I will use to sample it.
r_probs <- c(0.3, 0.5, 0.2)
set.seed(100)
reality <- sample(seq_along(r_names), 10000, prob=r_probs, replace = TRUE)
Now, let's say I have a conditional probability table that stipulates the distribution of the indicator given each of the "realities":
ri_matrix <- matrix(c(0.7, 0.3, 0,
0.2, 0.6, 0.2,
0.05,0.15,0.8), byrow=TRUE,nrow = 3)
dimnames(ri_matrix) <- list(paste("real", r_names, sep="_"),
paste("ind", r_names, sep="_"))
ri_matrix
># ind_R ind_G ind_B
># real_Red 0.70 0.30 0.0
># real_Green 0.20 0.60 0.2
># real_Blue 0.05 0.15 0.8
Since base::sample() is not vectorized over the prob argument, I have to use:
sample_cond <- function(r, rim){
  unlist(lapply(r, function(x)
    sample(seq_len(ncol(rim)), 1, prob = rim[x, ], replace = TRUE)))
}
Now I can sample my "indicator" variable using the conditional probability matrix
set.seed(200)
indicator <- sample_cond(reality, ri_matrix)
Just to make sure the distributions turned out as expected:
prop.table(table(reality, indicator), margin = 1)
#> indicator
#> reality 1 2 3
#> 1 0.70043610 0.29956390 0.00000000
#> 2 0.19976124 0.59331476 0.20692400
#> 3 0.04365278 0.14400401 0.81234320
Is there a better (i.e. more idiomatic and/or efficient) way to sample a discrete variable conditioned on another discrete random variable?
UPDATE:
As suggested by @Mr.Flick, this is at least 50x faster, because it reuses probability vectors instead of repeatedly subsetting the conditional probability matrix.
sample_cond_group <- function(r, rim){
  il <- mapply(function(x, y) {sample(seq(ncol(rim)), length(x), prob = y, replace = TRUE)},
               x = split(r, r),
               y = split(rim, seq(nrow(rim))))
  unsplit(il, r)
}
You can be a bit more efficient by drawing all the random samples per group with a split/combine type strategy. That might look something like this
simFun <- function(N, r_probs, ri_matrix) {
  stopifnot(length(r_probs) == nrow(ri_matrix))
  ind <- sample.int(length(r_probs), N, prob = r_probs, replace = TRUE)
  grp <- split(data.frame(ind), ind)
  unsplit(Map(function(data, r) {
    draw <- sample.int(ncol(ri_matrix), nrow(data), replace = TRUE, prob = ri_matrix[r, ])
    data.frame(data, draw)
  }, grp, as.numeric(names(grp))), ind)
}
Then you can call it with
simFun(10000, r_probs, ri_matrix)
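A fully vectorized alternative (a sketch of my own, not part of the answer above; whether it beats the split/unsplit approach would need benchmarking): draw one uniform number per observation and compare it against the row-wise cumulative probabilities of the conditional table.
sample_cond_vec <- function(r, rim) {
  cum <- t(apply(rim, 1, cumsum))           # row-wise cumulative probabilities
  u <- runif(length(r))
  rowSums(u > cum[r, , drop = FALSE]) + 1L  # count thresholds exceeded -> category index
}
set.seed(200)
indicator2 <- sample_cond_vec(reality, ri_matrix)
prop.table(table(reality, indicator2), margin = 1)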

Use FFT fft() to compute a Riemann sum

I would like to compute the following Riemann sum approximation by applying the Fast Fourier transform (FFT):
Here is the Psy function that I'm using:
Psyfun <- function(u, T, r, q, sigma, lmbda, meanV, v0, rho){
  j <- as.complex(1i)
  a <- lmbda*meanV
  b <- lmbda
  d <- sqrt((j*rho*sigma*u - b)**2 + (u**2 + j*u)*sigma**2)
  g <- (b - j*rho*sigma*u - d)/(b - j*rho*sigma*u + d)
  ret <- exp(j*u*(r - q)*T)
  ret <- ret*exp((a/sigma**2)*((b - rho*j*sigma*u - d)*T - 2.0*log((1 - g*exp(-d*T))/(1 - g))))
  return(ret*exp((v0/sigma**2)*(b - rho*j*sigma*u - d)*(1 - exp(-d*T))/(1 - g*exp(-d*T))))
}
Here are the sample parameters:
r = 0.025 ,q= 0.01, sigma = 0.2, lmbda = 0.5, meanV = 0.5, v0 = 0.5 , rho = 0.3
I want to compute the values for K and T equal to:
K1=172.77 and T1 = 0.197, K2= 75.63 and T2 = 0.563, K3 = 269.54 and T3 = 0.2648
I implemented the following code to do it:
N = 2^10      # number of subdivisions in [0,a]
alpha = 2     # alpha
delta = 0.25  # delta = a/N, where a is the upper end of w (w in [0,a])
lambda = (2*pi)/(N*delta)
j = seq(1, N, 1)
k = seq(1, N, 1)
b = (lambda*N)/2
strike = -b + (k - 1)*lambda
strike = exp(strike)
res = c()
for (i in 1:N){
  w = delta*(i - 1)   # w = j*delta, but from 1 to N, so w = (i-1)*delta
  w_FC = w - (alpha + 1)*1i
  phi = Psyfun(w_FC, T, r, q, sigma, lmbda, meanV, v0, rho)
  phi = phi*exp(-r1*(T))
  phi = phi/(alpha^2 + alpha - w^2 + 1i*(2*alpha + 1)*w)
  phi = phi*delta*exp(1i*w*b)
  res = rbind(res, phi)
}
Result = Re(fft(res))*exp(-alpha*(-b + (k - 1)*lambda))/pi
I obtain N values, so how do I get the ones that correspond to K1, K2 and K3?
Can anybody recommend a procedure to implement the computation? Thanks
I have no previous experience with Fast Fourier transform (FFT) processing, so I appreciate any tips and pointers related to the mathematics, methods, or code, in addition to advice on how best to approach this programmatically.
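Not a full answer, but one hedged sketch for the last point: the k-th entry of Result corresponds to the log-strike -b + (k-1)*lambda, so each target strike can be mapped back to a grid index (or the output can be interpolated at the strikes you need). Reusing b, lambda, strike and Result from the code above, and assuming that code runs as intended:
K_target <- c(172.77, 75.63, 269.54)
# strike[k] = exp(-b + (k-1)*lambda)  =>  k = (log(K) + b)/lambda + 1
k_near <- round((log(K_target) + b) / lambda) + 1
Result[k_near]                                                # nearest grid points
approx(x = strike, y = as.vector(Result), xout = K_target)$y  # linear interpolation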
