I'm developing automatic forecasting software with Java & R. The following steps are used in R to forecast the next 18 values:
trends <- scan("c:/data_for_R/trends.dat")
auto.arima(trends)                             # step #2: identify the (p,d,q) order (cf. arima(p,d,q))
trendsarima <- arima(trends, order=c(2,1,3))   # step #3: (2,1,3) was found in step #2
trendsforecasts <- forecast.Arima(trendsarima, h=18)
trendsforecasts
plot.forecast(trendsforecasts)
All I want to know is: how do you combine steps #2 and #3 (preferably into a single command)?
trendsarima <- auto.arima(trends)
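For reference, a minimal sketch of the combined workflow: auto.arima() selects the order and fits the model in one step, and the generic forecast() and plot() calls dispatch to the appropriate methods for the fitted object, so forecast.Arima() and plot.forecast() need not be called directly:
library(forecast)
trends <- scan("c:/data_for_R/trends.dat")
trendsarima <- auto.arima(trends)                # order selection and fitting in one call
trendsforecasts <- forecast(trendsarima, h = 18) # forecast the next 18 values
trendsforecasts
plot(trendsforecasts)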
I tried to generate boson sampling data using the R package BosonSampling. Although it takes a long time to generate samples for larger values of n and m, I tried smaller values but still didn't get any output from the code. I don't know what the problem is.
The documentation is available at:
https://cran.r-project.org/web/packages/BosonSampling/index.html
The code from the documentation:
library('BosonSampling')
library('Rcpp')
set.seed(7)
n <- 10                                  # number of photons
m <- 20                                  # number of modes
A <- randomUnitary(m)[, 1:n]             # first n columns of a random m x m unitary
valueList <- bosonSampler(A, sampleSize = 10, perm = FALSE)$values
valueList
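One way to check whether the sampler is merely slow rather than silently failing is to time a much smaller problem first. The sketch below just reuses the calls from the documentation example with smaller, arbitrarily chosen n and m:
library('BosonSampling')
set.seed(7)
n_small <- 3                                   # fewer photons
m_small <- 6                                   # fewer modes
A_small <- randomUnitary(m_small)[, 1:n_small]
timing <- system.time(
  out <- bosonSampler(A_small, sampleSize = 5, perm = FALSE)
)
print(timing)                                  # how long the small run took
out$values                                     # same $values component as in the documentation code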
I have been working through a population ecology exercise using the popbio package in RStudio that focuses on Leslie matrices. I have successfully created a Leslie matrix with the proper dimensions using the fecundity (mx) and annual survival (sx) values that I calculated from my life table. I am now trying to use the pop.projection function in the popbio package to multiply my Leslie matrix (les.mat) by a starting population vector (N0) over a number of time intervals (4 years). It is my understanding that you should be able to multiply a Leslie matrix by a population vector to calculate the population size after a set number of time intervals. Have I done something wrong here? When I try to run my pop.projection line of code I get the following error message in R:
"> projA <- pop.projection(les.mat,N0,10)
Error in A %*% n : non-conformable arguments"
Could the problem be an issue with my pop.projection call? I am thinking it may be an issue with my N0 argument (the population vector): when I look at my N0 values, they seem to have been saved in R as a numeric type. Should I be converting N0 into its own matrix, or into a vector of some kind, to get my pop.projection line of code to run? Any advice would be greatly appreciated; the short code I have been using is below.
Sx <- c(0.8,0.8,0.7969,0.6078,0.3226,0)
mx <- c(0,0,0.6,1.09,0.2,0)
Fx <- mx # fecundity values
S <- Sx # dropping the first value
F <- Fx
les.mat <- matrix(rep(0,36),nrow=6)
les.mat[1,] <- F
les.mat
for(i in 1:5){
  les.mat[(i+1),i] <- S[i]
}
les.mat
N0 <- c(100,80,64,51,31,10,0)
projA <- pop.projection(les.mat,N0,10)
The function uses matrix multiplication on its first and second arguments, so their dimensions must conform. The les.mat matrix is 6x6, but N0 has length 7. Try
projA <- pop.projection(les.mat, N0[-7], 10) # Delete last value
or
projA <- pop.projection(les.mat, N0[-1], 10) # Delete first value
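As a sanity check on the dimensions (a sketch assuming you drop the last value of N0, as in the first option above), you can inspect the sizes and project one time step by hand with %*% before calling pop.projection():
library(popbio)
N0_6 <- N0[-7]                      # drop the extra entry so the lengths match
dim(les.mat)                        # 6 x 6
length(N0_6)                        # 6
les.mat %*% N0_6                    # one time step of the projection, done by hand
projA <- pop.projection(les.mat, N0_6, 10)
str(projA)                          # inspect the components of the returned list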
I am doing some projects related to statistical simulation in R based on "Introduction to Scientific Programming and Simulation Using R", and in the student projects section (chapter 24) I am working on "The pipe spiders of Brunswick" problem. I am stuck on one part of an evolutionary algorithm, where you need to perturb some data according to the sentence below:
"With probability 0.5 each element of the vector is perturbed, independently
of the others, by an amount normally distributed with mean 0 and standard
deviation 0.1"
What does being "perturbed" really mean here? I don't really know which operation I should be applying to my vector to make this perturbation happen, and I'm not finding any answers to this problem.
Thanks in advance!
# using the most important features, we create a ML model:
m1 <- lm(PREDICTED_VALUE ~ PREDICTER_1 + PREDICTER_2 + PREDICTER_N )
#summary(m1)
#anova(m1)
# after creating the model, we perturb as follows:
#install.packages("perturb") #install the package
library(perturb)
set.seed(1234) # for same results each time you run the code
p1_new <- perturb(m1, pvars = c("PREDICTER_1","PREDICTER_N"), prange = c(1,1), niter = 200) # you can change the number of iterations to any value n; the total number of iterations will then be n+1
p1_new # check the values of p1_new
summary(p1_new)
Perturbing just means adding a small, noisy shift to a number. Your code might look something like this.
x <- sample(10, 10)                        # example vector
ind <- rbinom(length(x), 1, 0.5) == 1      # each element selected with probability 0.5
x[ind] <- x[ind] + rnorm(sum(ind), 0, 0.1) # add N(0, 0.1) noise to the selected elements
rbinom selects the elements to be modified with probability 0.5, and rnorm adds the perturbation.
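If this is going inside an evolutionary algorithm, it may be convenient to wrap the same idea in a small helper. perturb_vec is a hypothetical name; the defaults match the probability (0.5) and standard deviation (0.1) quoted in the question:
# Hypothetical helper: perturb each element with probability p by N(0, sd) noise
perturb_vec <- function(v, p = 0.5, sd = 0.1) {
  ind <- rbinom(length(v), 1, p) == 1                 # which elements get perturbed
  v[ind] <- v[ind] + rnorm(sum(ind), mean = 0, sd = sd)
  v
}
set.seed(42)
perturb_vec(c(1, 2, 3, 4, 5))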
I had previously asked this question trying to get top-down forecasted proportions forecast reconciliation using the hts package. The solution there works great for multi-level hierarchies; however, I get an error when I try to use it on a two-level hierarchy.
library(hts)
# Create the hierarchy
newhts <- hts(htseg1$bts, list(ncol(htseg1$bts)))
# forecast creation adapted from the `combinef()` example
h <- 12
ally <- aggts(newhts)
allf <- matrix(NA, nrow = h, ncol = ncol(ally))
for(i in 1:ncol(ally))
  allf[,i] <- forecast(auto.arima(ally[,i]), h = h, PI = FALSE)$mean
allf <- ts(allf, start = 51)
# Earo Wang's solution to my previous question
hts:::TdFp(allf, nodes = htseg1$nodes)
Error in *.default(fcasts[, 1L], prop) : time-series/vector length mismatch
The problem seems to arise because a two-level hierarchy skips the last if conditional, which has the condition if (l.levels > 2L). The last statement of that conditional includes a piece where prop is multiplied by the time series flist[[k + 1L]], which converts prop into a time series matrix. When this statement is skipped, prop remains a regular matrix, causing the error when the time series vector fcasts[, 1L] is multiplied by the matrix prop.
I understand that TdFp is a non-exported function and therefore may not be as robust as the other functions in the package, but is there any way around this problem? Since it is a relatively simple case, I can code a solution myself, but since hts::forecast.hts() can handle two-level hierarchies with method = "tdfp", I thought there might be a nice clean solution.
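For what it's worth, with only two levels the forecast-proportions disaggregation reduces to a simple share calculation, so a manual sketch along these lines might serve as a workaround. It assumes column 1 of allf is the top-level series and the remaining columns are the bottom level, as aggts() arranges them, and it is my reading of the method rather than output from hts itself:
# Manual top-down forecasted proportions for a two-level hierarchy (sketch)
top    <- as.numeric(allf[, 1])            # top-level base forecasts
bottom <- matrix(allf[, -1], nrow = h)     # bottom-level base forecasts as a plain matrix
prop   <- bottom / rowSums(bottom)         # forecast proportions at each horizon
td_bts <- prop * top                       # row i scaled by the horizon-i top-level forecast
td_bts <- ts(td_bts, start = 51)           # same start used for allf above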
I am trying to compare forecast reconciliation methods from the hts package on previously existing forecasts. The forecast.gts function is not available to me since there is no computationally tractable way to create a user-defined function that returns the values in a forecast object. Because of this, I am using the combinef() function in the package to redistribute the forecasts. I have been able to work out the proper weights to get the wls and nseries methods, and the ols version is the default. I was able to get the "bottom up" method using:
# Creates sample forecasts, taken from `combinef()` example
library(hts)
h <- 12
ally <- aggts(htseg1)
allf <- matrix(NA, nrow = h, ncol = ncol(ally))
for(i in 1:ncol(ally))
  allf[,i] <- forecast(auto.arima(ally[,i]), h = h, PI = FALSE)$mean
allf <- ts(allf, start = 51)
# create the weight vector
numTS <- ncol(allf) # Get the total number of series
numBaseTS <- sum(tail(htseg1$nodes, 1)[[1]]) # Get the number of bottom level series
# Create weights of 0 for all aggregate ts and 1 for the base level
weightVals <- c(rep(0, numTS - numBaseTS), rep(1, numBaseTS))
y.f <- combinef(allf, htseg1$nodes, weights = weightVals)
I was hoping that something like making the first weight 1 and the rest 0 might give me one of the three top-down forecasts, but that just results in a bunch of 0s or NaN values, depending on how you look at it.
combinef(allf, htseg1$nodes, weights = c(1, rep(0, numTS - 1)))
I know the top-down methods aren't the hardest thing to compute manually, and I can just write a function to do that, but are there any tools in the hts package that can help with this? I'd like to keep the data format consistent to simplify my analysis. Most specifically, I would like to get the "top-down forecasted proportions" or tdfp method.
The functions to reconcile the forecasts using the "top-down" method are currently not exported. I should probably export them in the next version to make the "top-down" results as tractable as combinef(). The workaround is as follows:
hts:::TdFp(allf, nodes = htseg1$nodes)
Hope it helps.