Can't get an R loop to execute

I am starting out in R and trying to get this loop to execute. I am trying to get the loop to calculate consecutive distances between coordinates using a function (Vincenty's formula). 'Distfunc' is the path to the file containing the function. The function is then called to produce 'x' below. All I want is a data frame or a list of the distances between consecutive coordinates. Grateful for any help!
Distfunc <- source("F://Distfunc.R")
for (i in length(Radians)) {
  LatRad1 <- Radians[i, 1]
  LongRad1 <- Radians[i, 2]
  LatRad2 <- Radians[i + 1, 1]
  LongRad2 <- Radians[i + 1, 2]
  x <- gcd.vif(LongRad1, LatRad1, LongRad2, LatRad2)
  print(data.frame(x[i]))
}

Well, without a good description of the problem you are facing and a proper reproducible example it is very difficult to provide any good insight. To start off, see How to make a great R reproducible example?.
There are many things that are not clear in the way you are doing things. First of all, why assign the results of source(...) to the variable Distfunc?
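(For what it is worth, the assignment is unnecessary: source() is normally run for its side effects, so after sourcing the file the function it defines can be called by name.)
# source() runs the file; afterwards gcd.vif() is available directly
source("F://Distfunc.R")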
Anyway, here is some code that I put together in trying to understand this; it runs without problems, but it is not clear that it accomplishes what you expect (since you don't provide much information). In particular, the code uses the implementation of the function gcd.vif by Mario Pineda-Krch (http://www.r-bloggers.com/great-circle-distance-calculations-in-r/). The code below is aimed at clarity, since you mention that you are starting in R.
# Calculates the geodesic distance between two points specified by radian latitude/longitude using
# the Vincenty inverse formula for ellipsoids (vif)
# By Mario Pineda-Krch (http://www.r-bloggers.com/great-circle-distance-calculations-in-r/)
gcd.vif <- function(long1, lat1, long2, lat2) {
  # WGS-84 ellipsoid parameters
  a <- 6378137          # length of major axis of the ellipsoid (radius at equator)
  b <- 6356752.314245   # length of minor axis of the ellipsoid (radius at the poles)
  f <- 1/298.257223563  # flattening of the ellipsoid

  L <- long2 - long1             # difference in longitude
  U1 <- atan((1-f) * tan(lat1))  # reduced latitude
  U2 <- atan((1-f) * tan(lat2))  # reduced latitude
  sinU1 <- sin(U1)
  cosU1 <- cos(U1)
  sinU2 <- sin(U2)
  cosU2 <- cos(U2)

  cosSqAlpha <- NULL
  sinSigma <- NULL
  cosSigma <- NULL
  cos2SigmaM <- NULL
  sigma <- NULL

  lambda <- L
  lambdaP <- 0
  iterLimit <- 100
  while (abs(lambda - lambdaP) > 1e-12 & iterLimit > 0) {
    sinLambda <- sin(lambda)
    cosLambda <- cos(lambda)
    sinSigma <- sqrt((cosU2*sinLambda) * (cosU2*sinLambda) +
                     (cosU1*sinU2 - sinU1*cosU2*cosLambda) * (cosU1*sinU2 - sinU1*cosU2*cosLambda))
    if (sinSigma == 0) return(0)  # co-incident points
    cosSigma <- sinU1*sinU2 + cosU1*cosU2*cosLambda
    sigma <- atan2(sinSigma, cosSigma)
    sinAlpha <- cosU1 * cosU2 * sinLambda / sinSigma
    cosSqAlpha <- 1 - sinAlpha*sinAlpha
    cos2SigmaM <- cosSigma - 2*sinU1*sinU2/cosSqAlpha
    if (is.na(cos2SigmaM)) cos2SigmaM <- 0  # equatorial line: cosSqAlpha = 0
    C <- f/16*cosSqAlpha*(4 + f*(4 - 3*cosSqAlpha))
    lambdaP <- lambda
    lambda <- L + (1-C) * f * sinAlpha *
      (sigma + C*sinSigma*(cos2SigmaM + C*cosSigma*(-1 + 2*cos2SigmaM*cos2SigmaM)))
    iterLimit <- iterLimit - 1
  }
  if (iterLimit == 0) return(NA)  # formula failed to converge

  uSq <- cosSqAlpha * (a*a - b*b) / (b*b)
  A <- 1 + uSq/16384*(4096 + uSq*(-768 + uSq*(320 - 175*uSq)))
  B <- uSq/1024 * (256 + uSq*(-128 + uSq*(74 - 47*uSq)))
  deltaSigma <- B*sinSigma*(cos2SigmaM + B/4*(cosSigma*(-1 + 2*cos2SigmaM^2) -
                B/6*cos2SigmaM*(-3 + 4*sinSigma^2)*(-3 + 4*cos2SigmaM^2)))
  s <- b*A*(sigma - deltaSigma) / 1000

  return(s)  # distance in km
}
# Initialize the variable 'Radians' with random data
Radians <- matrix(runif(20, min = 0, max = 2 * pi), ncol = 2)

lst <- list()  # temporary list to store the results
for (i in seq(1, nrow(Radians) - 1)) {  # loop through the rows of the 'Radians' matrix
  LatRad1 <- Radians[i, 1]
  LongRad1 <- Radians[i, 2]
  LatRad2 <- Radians[i + 1, 1]
  LongRad2 <- Radians[i + 1, 2]
  gcd_vif <- gcd.vif(LongRad1, LatRad1, LongRad2, LatRad2)
  # Store the input data and the results
  lst[[i]] <- c(
    latitude_position_1  = LatRad1,
    longitude_position_1 = LongRad1,
    latitude_position_2  = LatRad2,
    longitude_position_2 = LongRad2,
    GCD = gcd_vif
  )
}
Results <- as.data.frame(do.call(rbind, lst))  # store the input data and the results in a data frame
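If you prefer to avoid the explicit loop, here is a more compact alternative (a sketch reusing the same gcd.vif function and Radians matrix as above):
n <- nrow(Radians)
dists <- sapply(seq_len(n - 1), function(i)
  gcd.vif(Radians[i, 2], Radians[i, 1], Radians[i + 1, 2], Radians[i + 1, 1]))
Results2 <- data.frame(
  latitude_position_1  = Radians[-n, 1],
  longitude_position_1 = Radians[-n, 2],
  latitude_position_2  = Radians[-1, 1],
  longitude_position_2 = Radians[-1, 2],
  GCD = dists
)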

Related

algebra using rasters and dataframes

I want to predict vegetation health using 2 remote sensing vegetation indices (VIs) for multiple tree stands across multiple months. I previously approached this by using a for() loop to iterate through a list of multi-band rasters and calculate the two VIs for each raster (month) using a given equation. I then used raster::extract() to extract the pixels corresponding to each stand. However, I would now like to include some additional variables in my predictions of vegetation health, and I am having trouble integrating them with the same method because they are simply columns in a data frame, not rasters. I'm open to different ways of doing this; I just can't think of any.
example:
# Part 1: Loading libraries and creating some sample data
library(sf)
library(raster)
library(terra)

# polygons to generate random points into
v <- vect(system.file("ex/lux.shp", package = "terra"))
v <- v[c(1:12)]
v_sf <- st_as_sf(v)  # convert 'SpatVector' into 'sf' object

# 5 rasters (months) with 5 bands each
r <- rast(system.file("ex/elev.tif", package = "terra"))
r <- rep(r, 5) * 1:5
names(r) <- paste0("band", 1:5)
ras_list <- list(r, r, r, r, r)

# generating some points (10 forest stands)
pnts <- st_sample(v_sf, size = 10, type = "random")
pnts <- as_Spatial(pnts)

# Part 2: Loop to predict vegetation health using two VI variables
vis <- list()  # empty list to store the VI rasters
for (i in seq_along(ras_list)) {
  b <- ras_list[[i]]
  # vegetation health = 1.23 + (0.45 * VI1) - (0.67 * VI2)
  vis[i] <- 1.23 + 0.45*((b[[4]] + b[[3]] - b[[1]]) / (b[[4]] + b[[3]])) - 0.67*(b[[1]] * b[[3]] - b[[4]])
}

# Part 3: Loop to extract pixel values for each forest stand
vi_vals <- list()  # empty list to store extracted pixel values
for (i in 1:length(vis)) {
  n <- raster(vis[[i]])
  vi_vals[[i]] <- raster::extract(n, pnts, method = "bilinear")
}
This method works fine but as I mentioned above, I now need to repeat the same process using a new equation which incorporates variables that can't be calculated from a raster. These values are simply 3 columns in a dataframe that are identified by a stand ID.
Let's first simplify your example a bit
Example data
library(terra)
v <- vect(system.file("ex/lux.shp", package="terra"))
r <- rast(system.file("ex/elev.tif", package="terra"))
r <- rep(r, 5) * 1:5
names(r) <- paste0("b", 1:5)
ras_list <- list(r,r,r,r,r)
set.seed(1)
pnts <- spatSample(v, 10, "random")
values(pnts) = data.frame(id=10, a=5:14, b=3:12, d=6:15)
Compute VI and extract
vis <- list()
for (i in seq_along(ras_list)) {
  b <- ras_list[[i]]
  vis[[i]] <- 1.23 + 0.45*((b[[4]] + b[[3]] - b[[1]]) / (b[[4]] + b[[3]])) - 0.67*(b[[1]] * b[[3]] - b[[4]])
}
vis <- rast(vis)
names(vis) <- paste0("set", 1:5)
vi_vals <- extract(vis, pnts, method = "bilinear")
And now you can do something with the tree parameters
out <- t(t(vi_vals[,-1])) * pnts$a + pnts$b / pnts$d
It would be more efficient to first extract the values and then apply the function
e <- list()
for (i in seq_along(ras_list)) {
  x <- extract(ras_list[[i]], pnts, method = "bilinear")[, -1]
  e[[i]] <- (1.23 + 0.45*((x$b4 + x$b3 - x$b1) / (x$b4 + x$b3)) - 0.67*(x$b1 * x$b3 - x$b4)) * pnts$a + pnts$b / pnts$d
}
e <- do.call(cbind, e)
The results are not exactly the same. I assume this is because the formula is non-linear, so bilinearly interpolating the already-computed VI raster is not quite the same as applying the formula to bilinearly interpolated band values.
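A quick way to check how large the discrepancy actually is (a sketch; out and e are the two objects computed above):
# largest absolute element-wise difference between the two approaches
max(abs(as.matrix(out) - e))
all.equal(as.matrix(out), e, check.attributes = FALSE)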

How do I convolve() more than two distributions without doubling the result rowcount every time?

I am attempting to convolve() 36 beta distributions. The result distribution is one of the two input distributions to the next successive call to convolve(). After every convolve(), the result has row count = (nrow(vector1)+nrow(vector2)-1). In effect, the row count of the result distribution almost doubles with every call to convolve(). This is very inefficient - it makes runtime impossibly long and consumes large amounts of memory. Is there any way to keep the row count constant?
Code example below ...
# Function from https://stat.ethz.ch/pipermail/r-help/2008-July/168762.html
weighted.var <- function(x, w, na.rm = FALSE) {
  if (na.rm) {
    w <- w[i <- !is.na(x)]
    x <- x[i]
  }
  sum.w <- sum(w)
  sum.w2 <- sum(w^2)
  mean.w <- sum(x * w) / sum(w)
  (sum.w / (sum.w^2 - sum.w2)) * sum(w * (x - mean.w)^2, na.rm = na.rm)
}

# Define beta distribution shape parameters.
s1a <- 3.52;  s1b <- 65.35
s2a <- 1.684; s2b <- 189.12
s3a <- 5.696; s3b <- 32.34
s4a <- 1.81;  s4b <- 185.5

# Define initial set of quantiles.
mQ1 <- matrix(data = seq(0, 1, 1/1000), ncol = 1)

for (i in 1:3) {
  mPDF <- matrix(data = convolve(dbeta(mQ1, s1a, s1b), rev(dbeta(mQ1, s2a, s2b)), type = "open"), ncol = 1L)
  print(paste(nrow(mPDF), ' rows', sep = ''))
  if (i < 3) {
    # Calculate the merged shape parameters directly from mPDF.
    mQ2 <- matrix(data = seq(0, 1L, (1L/(nrow(mPDF) - 1L))), ncol = 1L)
    wtMean <- weighted.mean(mQ2, mPDF)
    wtStd  <- sqrt(weighted.var(mQ2, mPDF))
    s1a <- -1L * ((wtMean*(wtStd^2 + wtMean^2 - wtMean))/wtStd^2)
    s1b <- ((wtStd^2 + wtMean^2 - wtMean)*(wtMean - 1))/wtStd^2
    s2a <- s3a; s2b <- s3b
    mQ1 <- mQ2
  }
} #i
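For reference, type = "open" returns the full linear convolution, so each call returns length(x) + length(y) - 1 values; that is where the near-doubling comes from. One possible way to keep the grid size fixed (a sketch that mirrors the way mQ2 above maps the longer result back onto [0, 1]; whether that rescaling is appropriate is a modelling choice) is to interpolate each result back onto the original 1001-point grid with approx() before the next iteration:
q  <- seq(0, 1, length.out = 1001)
p1 <- dbeta(q, 3.52, 65.35)
p2 <- dbeta(q, 1.684, 189.12)
pdf_open  <- convolve(p1, rev(p2), type = "open")      # length 2001 = 1001 + 1001 - 1
x_open    <- seq(0, 1, length.out = length(pdf_open))  # support of the 'open' result, rescaled to [0, 1]
pdf_fixed <- approx(x_open, pdf_open, xout = q)$y      # re-gridded back to 1001 points
length(pdf_fixed)                                      # stays at 1001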

Markowitz model / portfolio optimization using local search in R

I am taking baby steps in using metaheuristics to solve constrained optimization problems. I am trying to solve the basic Markowitz mean-variance optimization model (given below) using the NMOF package in R.
Min
  lambda * [ sum_{i=1..N} sum_{j=1..N} w_i * w_j * Sigma_ij ] - (1 - lambda) * [ sum_{i=1..N} w_i * mu_i ]
subject to
  sum_{i=1..N} w_i = 1
  0 <= w_i <= 1,  i = 1, ..., N
where lambda takes values between 0 and 1 and N is the number of assets.
Following is my code (Based on Book: Numerical Methods and Optimization in Finance):
library(NMOF)

na <- dim(fundData)[2L]
ns <- dim(fundData)[1L]
Sigma <- cov(fundData)
winf <- 0.0
wsup <- 1.0
m <- colMeans(fundData)

resample <- function(x, ...) x[sample.int(length(x), ...)]

data <- list(R = t(fundData),
             m = m,
             na = dim(fundData)[2L],
             ns = dim(fundData)[1L],
             Sigma = Sigma,
             eps = 0.5/100,
             winf = winf,
             wsup = wsup,
             nFP = 100)

w0 <- runif(data$na); w0 <- w0/sum(w0)

OF <- function(w, data) {
  wmu <- crossprod(w, m)
  res <- crossprod(w, data$Sigma)
  res <- tcrossprod(w, res)
  result <- res - wmu
}

neighbour <- function(w, data) {
  toSell <- w > data$winf
  toBuy  <- w < data$wsup
  i <- resample(which(toSell), size = 1L)
  j <- resample(which(toBuy), size = 1L)
  eps <- runif(1) * data$eps
  eps <- min(w[i] - data$winf, data$wsup - w[j], eps)
  w[i] <- w[i] - eps
  w[j] <- w[j] + eps
  w
}

algo <- list(x0 = w0, neighbour = neighbour, nS = 5000L)
system.time(sol1 <- LSopt(OF, algo, data))
I am not sure how to include lambda in the objective function (OF). The above code does not include lambda in OF. I tried using a for loop, but it resulted in the following error:
OF <- function(w, data) {
  lambdaSeq <- seq(.001, 0.999, length = data$nFP)
  for (lambda in lambdaSeq) {
    wmu <- crossprod(w, m)
    res <- crossprod(w, data$Sigma)
    res <- tcrossprod(w, res)
    result <- lambda*res - (1 - lambda)*wmu
  }
}
Error:
Local Search.
Initial solution:
| | 0%
Error in if (xnF <= xcF) { : argument is of length zero
Timing stopped at: 0.01 0 0.03
It would be nice if someone could help me in this regard.
P.S.: I am also aware that this can be solved using quadratic programming. This is just a first step before including other constraints.
If I understand correctly, you want to replicate the mean-variance efficient frontier by Local Search? Then you need to run a Local Search for every value of lambda that you want to include in the frontier.
The following example should help you get going. I start by attaching the package and setting up the list data.
require("NMOF")
data <- list(m = colMeans(fundData), ## expected returns
Sigma = cov(fundData), ## expected var of returns
na = dim(fundData)[2L], ## number of assets
eps = 0.2/100, ## stepsize for LS
winf = 0, ## minimum weight
wsup = 1, ## maximum weight
lambda = 1)
Next I compute a benchmark for the minimum-variance case (i.e. lambda equals one).
## benchmark: the QP solution
## ==> this will only work with a recent version of NMOF,
## which you can get by saying:
## install.packages('NMOF', type = 'source',
## repos = c('http://enricoschumann.net/R',
## getOption('repos')))
##
require("quadprog")
sol <- NMOF:::minvar(data$Sigma, 0, 1)
Objective function and neighbourhood function. I have slightly simplified both functions (for clarity; using crossprod in the objective function would probably be more efficient).
OF <- function(w, data) {
  data$lambda * (w %*% data$Sigma %*% w) -
    (1 - data$lambda) * sum(w * data$m)
}

neighbour <- function(w, data) {
  toSell <- which(w > data$winf)
  toBuy  <- which(w < data$wsup)
  i <- toSell[sample.int(length(toSell), size = 1L)]
  j <- toBuy[sample.int(length(toBuy), size = 1L)]
  eps <- runif(1) * data$eps
  eps <- min(w[i] - data$winf, data$wsup - w[j], eps)
  w[i] <- w[i] - eps
  w[j] <- w[j] + eps
  w
}
Now we can run Local Search. Since it is a fairly large dataset (200 assets),
you will need a relatively large number of steps to reproduce the QP solution.
w0 <- runif(data$na) ## a random initial solution
w0 <- w0/sum(w0)
algo <- list(x0 = w0, neighbour = neighbour, nS = 50000L)
sol1 <- LSopt(OF, algo, data)
You can compare the weights you get from Local Search with the QP solution.
par(mfrow = c(3,1), mar = c(2,4,1,1), las = 1)
barplot(sol, main = "QP solution")
barplot(sol1$xbest, main = "LS solution")
barplot(sol - sol1$xbest,
        ylim = c(-0.001, 0.001))  ## +/- 0.1%
Finally, if you want to compute the whole frontier, you need to rerun this code for different levels of data$lambda.
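A minimal sketch of that outer loop (reusing data, OF, neighbour and algo from above; the grid of lambda values is arbitrary):
lambdas  <- seq(0, 1, length.out = 21)       ## arbitrary grid of lambda values
frontier <- vector("list", length(lambdas))
for (k in seq_along(lambdas)) {
  data$lambda   <- lambdas[k]                ## update the trade-off parameter
  frontier[[k]] <- LSopt(OF, algo, data)$xbest
}
weights <- do.call(cbind, frontier)          ## one column of weights per lambda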

Multi-data likelihood function and mle2 function from bbmle package in R

I have written a custom likelihood function that fits a multi-data model integrating mark-recapture and telemetry data (sensu Royle et al. 2013, Methods in Ecology and Evolution). The likelihood function is designed to be flexible in terms of whether and how many covariates are specified for the different linear models in the different likelihood components, which is determined by values supplied as function arguments (i.e., the data matrices "detcovs" and "dencovs" in my code). The likelihood function works when I supply it directly to optimization functions (e.g., optim or nlm), but it does not play nice with the mle2 function in the bbmle package.
My problem is that I continually run into the following error: "some named arguments in 'start' are not arguments to the specified log-likelihood function". This is my first attempt at writing custom likelihood functions, so I'm sure there are general coding conventions I'm unaware of that would make such tasks more efficient and more amenable to the mle2 function. Below are my likelihood function, the code creating the starting value objects, and the code calling the mle2 function. Any advice on how to solve the error, and general comments on writing cleaner functions, are welcome. Many thanks in advance.
Edit: As requested, I have simplified the likelihood function and provided code to simulate reproducible data to which the model can be fit. Included in the simulation code are 2 custom functions and use of the raster function from the raster package. Hopefully, I have sufficiently simplified everything to enable others to troubleshoot. Again, many thanks for your help!
Jared
Likelihood function:
CSCR.RSF.intlik2.EXAMPLE <- function(alpha0, sigma, alphas = NULL, betas = NULL, n0, yscr = NULL,
                                     K = NULL, X = X, trapcovs = NULL, Gden = NULL, Gdet = NULL,
                                     ytel = NULL, stel = NULL, dencovs = NULL, detcovs = NULL) {
  #
  # this version of the code handles a covariate on log(Density). This is starting value 5
  #
  # start    = vector of starting values
  # yscr     = nind x ntraps encounter matrix
  # K        = number of occasions
  # X        = trap locations
  # Gden     = matrix with grid cell coordinates for density raster
  # Gdet     = matrix with grid cell coordinates for RSF raster
  # dencovs  = all covariate values for all nGden pixels in density raster
  # trapcovs = covariate value at trap locations
  # detcovs  = all covariate values for all nGrsf pixels in RSF raster
  # ytel     = nguys x nGdet matrix of telemetry fixes in each nGdet pixels
  # stel     = home range center of telemetered individuals, IF you wish to estimate it. Not necessary
  # alphas   = starting values for RSF/detfn coefficients excluding sigma and intercept
  # alpha0   = starting values for RSF/detfn intercept
  # sigma    = starting value for RSF/detfn sigma
  # betas    = starting values for density function coefficients
  # n0       = starting value for number of undetected individuals on log scale
  #
  n0 <- exp(n0)
  nGden <- nrow(Gden)
  D <- e2dist(X, Gden)
  nGdet <- nrow(Gdet)
  alphas <- alphas
  loglam <- alpha0 - (1/(2*sigma*sigma))*D*D + as.vector(trapcovs %*% alphas)  # ztrap recycled over nG
  psi <- exp(as.vector(dencovs %*% betas))
  psi <- psi/sum(psi)
  probcap <- 1 - exp(-exp(loglam))
  # probcap <- (exp(theta0)/(1+exp(theta0)))*exp(-theta1*D*D)
  Pm <- matrix(NA, nrow = nrow(probcap), ncol = ncol(probcap))
  ymat <- yscr
  ymat <- rbind(yscr, rep(0, ncol(yscr)))
  lik.marg <- rep(NA, nrow(ymat))
  for (i in 1:nrow(ymat)) {
    Pm[1:length(Pm)] <- dbinom(rep(ymat[i, ], nGden), rep(K, nGden), probcap[1:length(Pm)], log = TRUE)
    lik.cond <- exp(colSums(Pm))
    lik.marg[i] <- sum(lik.cond*psi)
  }
  nv <- c(rep(1, length(lik.marg) - 1), n0)
  part1 <- lgamma(nrow(yscr) + n0 + 1) - lgamma(n0 + 1)
  part2 <- sum(nv*log(lik.marg))
  out <- -1*(part1 + part2)
  lam <- t(exp(a0 - (1/(2*sigma*sigma))*t(D2) + as.vector(detcovs %*% alphas)))  # recycle zall over all ytel guys
  # lam is now nGdet x nG!
  denom <- rowSums(lam)
  probs <- lam/denom  # each column is the probs for a guy at column [j]
  tel.loglik <- -1*sum(ytel*log(probs))
  out <- out + tel.loglik
  out
}
Data simulation code:
library(raster)
library(bbmle)
e2dist <- function(x, y) {
  i <- sort(rep(1:nrow(y), nrow(x)))
  dvec <- sqrt((x[, 1] - y[i, 1])^2 + (x[, 2] - y[i, 2])^2)
  matrix(dvec, nrow = nrow(x), ncol = nrow(y), byrow = FALSE)
}

spcov <- function(R) {
  v <- sqrt(nrow(R))
  D <- as.matrix(dist(R))
  V <- exp(-D/2)
  cov1 <- t(chol(V)) %*% rnorm(nrow(R))
  Rd <- as.data.frame(R)
  colnames(Rd) <- c("x", "y")
  Rd$C <- as.numeric((cov1 - mean(cov1)) / sd(cov1))
  return(Rd)
}

set.seed(1234)
co <- seq(0.3, 0.7, length = 5)
X <- cbind(rep(co, each = 5),
           rep(co, times = 5))
J <- nrow(X)  # number of traps (defined here so trapcovs below can use it)

B <- 10
co <- seq(0, 1, length = B)
Z <- cbind(rep(co, each = B), rep(co, times = B))
dencovs <- cbind(spcov(Z), spcov(Z)[, 3])  # ordered as reading raster image from left to right, bottom to top
dimnames(dencovs)[[2]][3:4] <- c("dencov1", "dencov2")

denr.list <- vector("list", 2)
for (i in 1:2) {
  denr.list[[i]] <- raster(
    list(x = seq(0, 1, length = 10),
         y = seq(0, 1, length = 10),
         z = t(matrix(dencovs[, i + 2], 10, 10, byrow = TRUE)))
  )
}

B <- 20
co <- seq(0, 1, length = B)
Z <- cbind(rep(co, each = B), rep(co, times = B))
detcovs <- cbind(spcov(Z), spcov(Z)[, 3])  # ordered as reading raster image from left to right, bottom to top
dimnames(detcovs)[[2]][3:4] <- c("detcov1", "detcov2")

detr.list <- vector("list", 2)  # list of detection-covariate rasters
trapcovs <- matrix(0, J, 2)
for (i in 1:2) {
  detr.list[[i]] <- raster(
    list(x = seq(0, 1, length = 20),
         y = seq(0, 1, length = 20),
         z = t(matrix(detcovs[, i + 2], 20, 20, byrow = TRUE)))
  )
  trapcovs[, i] <- extract(detr.list[[i]], X)
}

alpha0 <- -3
sigma <- 0.15
alphas <- c(1, -1)
beta0 <- 3
betas <- c(-1, 1)

pixelArea <- (dencovs$y[2] - dencovs$y[1])^2
mu <- exp(beta0 + as.matrix(dencovs[, 3:4]) %*% betas) * pixelArea
EN <- sum(mu)
N <- rpois(1, EN)
pi <- mu/sum(mu)
s <- dencovs[sample(1:nrow(dencovs), size = N, replace = TRUE, prob = pi), 1:2]

K <- 10
yc <- d <- p <- matrix(NA, N, J)
D <- e2dist(s, X)
loglam <- t(alpha0 - t((1/(2*sigma*sigma))*D*D) + as.vector(trapcovs %*% alphas))
p <- 1 - exp(-exp(loglam))
for (i in 1:N) {
  for (j in 1:J) {
    yc[i, j] <- rbinom(1, K, p[i, j])
  }
}
detected <- apply(yc > 0, 1, any)
yscr <- yc[detected, ]

ntel <- 5
nfixes <- 100
poss.tel <- which(s[, 1] > 0.2 & s[, 1] < 0.8 & s[, 2] > 0.2 & s[, 2] < 0.8)
stel.id <- sample(poss.tel, ntel)
stel <- s[stel.id, ]
ytel <- matrix(NA, ntel, nrow(detcovs))
d <- e2dist(stel, detcovs[, 1:2])
lam <- t(exp(1 - t((1/(2*sigma*sigma))*d*d) + as.vector(as.matrix(detcovs[, 3:4]) %*% alphas)))
for (i in 1:ntel) {
  ytel[i, ] <- rmultinom(1, nfixes, lam[i, ]/sum(lam[i, ]))
}
Specify starting values and call mle2 function:
start1 <- list(alpha0 = alpha0, sigma = sigma, alphas = alphas, betas = betas, n0 = log(N - nrow(yscr)))
parnames(CSCR.RSF.intlik2.EXAMPLE) <- names(start1)
out1 <- mle2(CSCR.RSF.intlik2.EXAMPLE, start = start1, method = "SANN", optimizer = "optim",
             data = list(yscr = yscr, K = K, X = X, trapcovs = trapcovs, Gden = dencovs[, 1:2],
                         Gdet = detcovs[, 1:2], ytel = ytel, stel = stel,
                         dencovs = as.matrix(dencovs[, 3:4]), detcovs = as.matrix(detcovs[, 3:4])))
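For what it is worth, the calling pattern that bbmle documents for likelihoods with vector-valued parameters is to write the function around a single parameter vector, label it with parnames(), and call mle2 with vecpar = TRUE. A toy sketch of that pattern only (the Poisson example and every name in it are made up; this is not a fix for the SCR model above):
library(bbmle)

set.seed(1)
y <- rpois(50, lambda = 4)      # made-up toy data

# negative log-likelihood written in the single-parameter-vector style
nll <- function(pars) {
  lambda <- exp(pars[1])        # pars[1] is log(lambda)
  -sum(dpois(y, lambda, log = TRUE))
}
parnames(nll) <- "loglambda"    # tell mle2 the parameter name(s)

fit <- mle2(nll, start = c(loglambda = 0), vecpar = TRUE)
coef(fit)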

Speeding up this tricky matrix calculation

As of now I am computing some features from a large matrix and doing it all in a for-loop. As expected it's very slow. I have been able to vectorize part of the code, but I'm stuck on one part.
I would greatly appreciate some advice/help!
s1 <- MyMatrix   # dim = c(5167, 256)
fr <- MyVector   # vector of length 256
tw <- 5
fw <- 6
# For each point S(t,f) we need the sub-matrix of points S_hat(i,j),
# i in [t - tw, t + tw], j in [f - fw, f + fw] for the feature vector.
# To avoid edge effects, I pad the original matrix with zeros,
# resulting in a matrix of size nobs+2*tw x nfreqs+2*fw
nobs <- dim(s1)[1]  # note: this is 5167
nf <- dim(s1)[2]    # note: this is 256
sp <- matrix(0, nobs + 2*tw, nf + 2*fw)
t1 <- tw + 1; tn <- nobs + tw
f1 <- fw + 1; fn <- nf + fw
sp[t1:tn, f1:fn] <- s1  # embed the actual matrix into the padding

nfeatures <- 1 + (2*tw + 1)*(2*fw + 1) + 1
fsp <- array(NaN, c(dim(sp), nfeatures))
for (t in t1:tn) {
  for (f in f1:fn) {
    fsp[t, f, 1] <- fr[(f - f1 + 1)]  # this part I can vectorize
    fsp[t, f, 2:(nfeatures - 1)] <- as.vector(sp[(t - tw):(t + tw), (f - fw):(f + fw)])  # this line is the problem
    fsp[t, f, nfeatures] <- var(fsp[t, f, 2:(nfeatures - 1)])
  }
}
fsp[t1:tn, f1:fn, 1] <- t(matrix(rep(fr, (tn - t1 + 1)), ncol = (tn - t1 + 1)))
# vectorized version of the first feature ^
return(fsp[t1:tn, f1:fn, ])  # this is the returned matrix
I assume that the var feature will be easy to vectorize after the 2nd feature is vectorized
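One possible way to vectorize the neighbourhood extraction, sketched under the assumption that a matrix with (tn - t1 + 1) * (fn - f1 + 1) rows and (2*tw + 1) * (2*fw + 1) columns fits in memory (it reuses sp, tw, fw, t1, tn, f1, fn from above):
offs    <- expand.grid(dt = -tw:tw, df = -fw:fw)  # window offsets; dt varies fastest,
                                                  # matching as.vector()'s column-major order
centers <- expand.grid(t = t1:tn, f = f1:fn)      # all (t, f) centres
# one row per centre, one column per offset
neigh <- sapply(seq_len(nrow(offs)), function(k)
  sp[cbind(centers$t + offs$dt[k], centers$f + offs$df[k])])
# row-wise variance without a loop (denominator n - 1, like var())
vfeat <- rowSums((neigh - rowMeans(neigh))^2) / (ncol(neigh) - 1)
# neigh and vfeat can then be reshaped back into the fsp array if needed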
