For loop using the stratification package in R

I want to write a for loop to get the results for 30 populations.
Those populations were imported by:
nomes <- list.files(pattern = "\\.txt$")  # escape the dot so only .txt files match
dados <- list()
for (i in 1:length(nomes)) {
  dados[[i]] <- read.table(nomes[i])
}
names(dados) <- nomes
So I have all 30 populations in one list of data frames.
I need to find the CV of the samples stratified with the Dalenius-Hodges method. Anyway, that part is not important; the important thing is the loop, which I can't get right. I have tried this:
for (i in nomes) {
  # H = 3
  dh[[i]] <- strata.cumrootf(x = dados[[i]]$V1, Ls = 3, alloc = c(0.5, 0, 0.5), n = 100)
  vardh[[i]] <- sum((dh[[i]]$var * dh[[i]]$Nh / (dh[[i]]$Nh - 1)) *
                    (dh[[i]]$Nh * (dh[[i]]$Nh - dh[[i]]$nh) / dh[[i]]$nh))
  cvdh[[i]] <- sqrt(vardh[[i]]) / sum(dados[[i]]$V1)
}
But it didn't work. The strata.cumrootf function is from the "stratification" package.
The error that appears is this:
Error in dh[[i]] <- strata.cumrootf(x = dados[[i]]$V1, Ls = 3, alloc = c(0.5, :
object 'dh' not found
In addition: Warning message:
'nclass' value has been chosen arbitrarily
Can anybody help me?
Thanks.
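The error message points at the problem: dh (and likewise vardh and cvdh) is never created before the loop tries to assign into dh[[i]], so R stops with "object 'dh' not found". A minimal sketch of a fix, keeping your formula verbatim (I haven't verified the variance expression itself); the 'nclass' warning is separate and only notes that a default value was chosen, which you can silence by passing nclass explicitly:

library(stratification)

# Create the result containers before assigning into them by name
dh    <- list()
vardh <- list()
cvdh  <- list()

for (i in nomes) {
  # H = 3 strata, cumulative root frequency (Dalenius-Hodges) method
  dh[[i]] <- strata.cumrootf(x = dados[[i]]$V1, Ls = 3,
                             alloc = c(0.5, 0, 0.5), n = 100)
  vardh[[i]] <- sum((dh[[i]]$var * dh[[i]]$Nh / (dh[[i]]$Nh - 1)) *
                    (dh[[i]]$Nh * (dh[[i]]$Nh - dh[[i]]$nh) / dh[[i]]$nh))
  cvdh[[i]] <- sqrt(vardh[[i]]) / sum(dados[[i]]$V1)
}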

Error in if (ncol(spc1$amp) > ncol(spc2$amp)) { : argument is of length zero

I am using warbleR in R to do some acoustic analyses. As freq_range couldn't detect all the bottom frequencies very well, I created a data frame manually with the correct bottom frequencies, loaded it into R, and turned it into a selection table. track_freq_contour, compare.methods, and freq_DTW all work fine, although freq_DTW does give a warning message:
Warning message: In (0:(n - 1)) * f : NAs produced by integer overflow
However, if I try to run the cross_correlation function, I get the following error:
Error in if (ncol(spc1$amp) > ncol(spc2$amp)) { :
argument is of length zero
I do not get this error with a selection table whose bottom and top frequencies were added by the freq_range function instead of manually. What could be the issue here? The two selection tables look similar: one was partly made by R through freq_range, and the other has the bottom frequencies added manually (and contains more sound files than the first).
This is part of the code I use:
#Comparing methods for quantitative analysis of signal structure
compare.methods(X = stnew, flim = c(0.6,2.5), bp = c(0.6,2.5), methods = c("XCORR", "dfDTW"))
#Measure acoustic parameters with spectro_analysis
paramsnew <- spectro_analysis(stnew, bp = c(0.6,2), threshold = 20)
write.csv(paramsnew, "new_acoustic_parameters.csv", row.names = FALSE)
#Remove parameters derived from fundamental frequency
paramsnew <- paramsnew[, grep("fun|peakf", colnames(paramsnew), invert = TRUE)]
#Dynamic time warping
dm <- freq_DTW(stnew, length.out = 30, flim = c(0.6,2), bp = c(0.6,2), wl = 300, img = TRUE)
str(dm)
#Spectrographic cross-correlation
xcnew <- cross_correlation(stnew, wl = 300, na.rm = FALSE)
str(xcnew)
Any idea what I'm doing wrong?
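Without seeing the underlying data it is hard to be sure, but "argument is of length zero" in that if () call usually means one of the spectrograms came back empty, which can happen when a selection is invalid. A hedged diagnostic sketch, assuming stnew uses the standard warbleR selection table columns (start, end, bottom.freq, top.freq):

# Check the manually built selection table for problems that could
# produce an empty spectrogram inside cross_correlation
any(is.na(stnew$bottom.freq) | is.na(stnew$top.freq))  # NAs in the frequency columns?
any(stnew$bottom.freq >= stnew$top.freq)               # inverted frequency ranges?
any(stnew$end - stnew$start <= 0)                      # zero-length selections?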

Error in xy.coords when trying to fit ARIMA model, please advise

I'm hoping you might be able to help me with an issue that I'm having when trying to fit an ARIMA model for a school project that I'm working on.
The data that I'm using shows weekly sales figures starting from 2019 and going till 2021. My goal is to produce a forecast for the remainder of 2021 based on those figures. As my dataset is comprised of weekly data and the seasonality based on the ACF and PACF plots seems to occur once a year I've set the "S =" argument from the sarima() function to 52. The problem is that every time I try to run the model, I keep getting an error and I can't figure out any way of getting rid of it.
I've tried the same code with other data sets in the DataCamp environment with "S = 52", and the model runs without a problem. I'm hoping somebody might be able to give me some advice on how to deal with this issue. Thank you!
P.S.
If the "S =" argument is set lower than 35 then the model will run. (Just in case this information might help)
####Load packages####
library(tidyverse)
library(zoo)
library(xts)
library(lubridate)
library(astsa)
library(tseries)
library(forecast)
######Load and inspect the data########
unit_sales <- structure(list(Date = c("30/03/2019", "06/04/2019", "13/04/2019",
"20/04/2019", "27/04/2019", "04/05/2019", "11/05/2019", "18/05/2019",
"25/05/2019", "01/06/2019", "08/06/2019", "15/06/2019", "22/06/2019",
"29/06/2019", "06/07/2019", "13/07/2019", "20/07/2019", "27/07/2019",
"03/08/2019", "10/08/2019", "17/08/2019", "24/08/2019", "31/08/2019",
"07/09/2019", "14/09/2019", "21/09/2019", "28/09/2019", "05/10/2019",
"12/10/2019", "19/10/2019", "26/10/2019", "02/11/2019", "09/11/2019",
"16/11/2019", "23/11/2019", "30/11/2019", "07/12/2019", "14/12/2019",
"21/12/2019", "28/12/2019", "04/01/2020", "11/01/2020", "18/01/2020",
"25/01/2020", "01/02/2020", "08/02/2020", "15/02/2020", "22/02/2020",
"29/02/2020", "07/03/2020", "14/03/2020", "21/03/2020", "28/03/2020",
"04/04/2020", "11/04/2020", "18/04/2020", "25/04/2020", "02/05/2020",
"09/05/2020", "16/05/2020", "23/05/2020", "30/05/2020", "06/06/2020",
"13/06/2020", "20/06/2020", "27/06/2020", "04/07/2020", "11/07/2020",
"18/07/2020", "25/07/2020", "01/08/2020", "08/08/2020", "15/08/2020",
"22/08/2020", "29/08/2020", "05/09/2020", "12/09/2020", "19/09/2020",
"26/09/2020", "03/10/2020", "10/10/2020", "17/10/2020", "24/10/2020",
"31/10/2020", "07/11/2020", "14/11/2020", "21/11/2020", "28/11/2020",
"05/12/2020", "12/12/2020", "19/12/2020", "26/12/2020", "02/01/2021",
"09/01/2021", "16/01/2021", "23/01/2021", "30/01/2021", "06/02/2021",
"13/02/2021", "20/02/2021", "27/02/2021", "06/03/2021", "13/03/2021",
"20/03/2021", "27/03/2021"), Units = c(967053.4, 633226.9, 523264,
473914.2, 418087.5, 504342.2, 477819, 415650, 406972.3, 429791.4,
441724.4, 453221.8, 402005.8, 414993.4, 381457.2, 391218.7, 486925.9,
409791.8, 399217.9, 409210, 478121.2, 495549.1, 503918.3, 535949.5,
517450.4, 523036.8, 616456.9, 665979.3, 705201.5, 700168.1, 763538.8,
875501.2, 886586.6, 967806, 1094195, 1285950.5, 1450436.1, 1592162.8,
2038160.5, 1676988.8, 1026193.7, 820405.5, 738643.9, 669657.6,
720287.7, 673194.1, 754102.5, 639532, 680413.6, 710702, 711722.8,
834036.8, 427817.2, 505849.6, 441047.4, 439411, 487634.1, 594594.8,
548796.7, 565682, 528275.2, 448092, 467780.1, 544160.3, 538275.8,
485055.5, 592097.3, 537514.3, 493381.9, 445280.8, 448111.2, 419263.4,
457125.7, 561169.6, 704575.3, 656423.1, 653751.3, 622937.7, 718022.8,
768901.9, 793443, 814604.2, 876269.3, 982921.8, 1064920.7, 1201494.4,
1337374.9, 1619595.8, 1734773.8, 1624071, 1777832.3, 1648201.9,
1106253.8, 940141.1, 796129.1, 853392.9, 932059.1, 905990.4,
981188.6, 907823.9, 956098.8, 1003966.7, 1331125.5, 805593.6,
799486.2)), class = "data.frame", row.names = c(NA, -105L))
####Convert date column to date format
unit_sales$Date <- as.Date(unit_sales$Date, format ="%d/%m/%Y" )
###Convert to xts object
unit_sales_xts <- xts(unit_sales, unit_sales$Date)
periodicity(unit_sales_xts)
###Convert to ts object
unit_sales_vector <- unit_sales$Units
unit_sales_ts <- ts(unit_sales_vector, start = decimal_date(as.Date("2019-03-30")), frequency = 52)
###Plot data
ts.plot(unit_sales_ts)
###Make data stationary and plot it
ts.plot(diff(log(unit_sales_ts)))
###Plot ACF and PACF
pacf_plot <- pacf(diff(log(unit_sales_ts)), lag.max = 105)
acf_plot <- acf(diff(log(unit_sales_ts)), lag.max = 105)
###Test if data is stationary
adf.test(diff(log(unit_sales_ts)))
###Fit ARIMA model
sarima(unit_sales_ts, p = 1, d = 1, q = 0)
sarima.for(unit_sales_ts, n.ahead = 39, 1,1,0)
###Fit seasonal ARIMA model - THIS IS WHERE THE ERROR OCCURS
sarima(unit_sales_ts, p = 1, d = 1, q = 0, P = 0, D = 1, Q = 0, S = 52)
###Forecast using the above model
sarima.for(unit_sales_ts,n.ahead = 39, p = 1, d = 1, q = 0, P = 0, D = 1, Q = 0, S = 52)
I tested your code and got the same error, so I read through the astsa::sarima() implementation and found these two lines concerning seasonality and your data:
alag <- max(10 + sqrt(num), 3 * S)
nlag <- ifelse(S < 7, 20, 3 * S)
Without reading the whole implementation, I deduce that the package author expects at least 3 times the season length in observations for the parameter to work correctly. That is not the case for your data: with S = 52, 3 * S = 156 exceeds your 105 observations. Whether that is a bug, or just not well documented or properly handled in the code, I cannot tell you. I do not know which version of the package DataCamp runs, nor the update history of the package itself. But we can assume that at least one of these two lines causes the error, since every value of S from 35 upward fails (3 * 35 = 105, exactly your sample size).
One way to work around it is to print the implementation of the function to the console (just type astsa::sarima and hit Enter, without quotes or parentheses), copy it, modify those lines (I tried 2 * instead of 3 *), and assign the result to a function name of your own. Then the code runs. You could also print the function in the DataCamp environment and compare it with your local installation.
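The same edit can also be done programmatically rather than by copy-paste. A sketch of that workaround, assuming the deparsed source contains those lines verbatim (this may differ across astsa versions), and leaving aside whether relaxing the requirement is statistically sound with only 105 observations:

library(astsa)

# Copy sarima, then rewrite its body with 2 * S in place of 3 * S
my_sarima <- sarima
body(my_sarima) <- parse(text = gsub("3 * S", "2 * S",
                                     deparse(body(sarima)),
                                     fixed = TRUE))[[1]]

# The copy keeps astsa's namespace as its environment, so internal
# helpers still resolve; the seasonal fit now runs
my_sarima(unit_sales_ts, p = 1, d = 1, q = 0, P = 0, D = 1, Q = 0, S = 52)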

"Error in checkForRemoteErrors(val) : 2 nodes produced errors; first error: could not find function "wincrqa"

I am currently trying to run a parallelized RQA with the following code.
library(snow)
library(doSNOW)
library(crqa)
my_wincrqa = function(x, y) {
  wincrqa(x, y, windowstep = 1000, windowsize = 2000,
          radius = .2, delay = 4, embed = 2, rescale = 0, normalize = 0,
          mindiagline = 2, minvertline = 2, tw = 0, whiteline = FALSE,
          side = "both", method = "crqa", metric = "euclidean",
          datatype = "continuous")
}
cl <- makeCluster(11, type = "SOCK")
start_time <- Sys.time()
WCRQA_list = clusterMap(cl, my_wincrqa, HR_list, RR_list)
end_time <- Sys.time()
end_time - start_time
Unfortunately, I get this:
Error in checkForRemoteErrors(val) : 2 nodes produced errors; first
error: could not find function "wincrqa"
I know there is probably some error in setting up the parallel processing, but I am not able to resolve it. I also tried a similar thing using the 'parallel' package.
I am happy for any help!
Best,
Johnson
The issue is that you’ve loaded and attached the ‘crqa’ package in your main execution environment, but the cluster nodes are running code in separate, isolated R sessions — they don’t see the same loaded packages or global variables!
The easiest solution is to replace use of wincrqa with a fully qualified name, i.e. to use crqa::wincrqa inside your function.
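Applied to your helper, that looks like this:

my_wincrqa = function(x, y) {
  # Fully qualified call, so worker processes find it without attaching crqa
  crqa::wincrqa(x, y, windowstep = 1000, windowsize = 2000,
                radius = .2, delay = 4, embed = 2, rescale = 0, normalize = 0,
                mindiagline = 2, minvertline = 2, tw = 0, whiteline = FALSE,
                side = "both", method = "crqa", metric = "euclidean",
                datatype = "continuous")
}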
Alternatively, it is possible to attach the ‘crqa’ package on all cluster nodes prior to executing the function:
clusterEvalQ(cl, library(crqa))
WCRQA_list = clusterMap(cl, my_wincrqa, HR_list, RR_list)

Only one processor being used while running NetLogo models using parApply

I am using the 'RNetLogo' package to run sensitivity analyses on my NetLogo model. My model has 24 parameters I need to vary - so parallelising this process would be ideal! I've been following along with the example in Thiele's "Parallel processing with the RNetLogo package" vignette, which uses the 'parallel' package in conjunction with 'RNetLogo'.
I've managed to get R to initialise the NetLogo model across all 12 of my processors, which I've verified using gui=TRUE. The problem comes when I try to run the simulation code across the 12 processors using parApply. This line runs without error, but it only runs on one of the processors (using around 8% of my total CPU power). Here's a mock-up of my R code file; I've included some commented-out code at the end, showing how I run the simulation without trying to parallelise:
### Load packages
library(parallel)
### Set up initialisation function
prepro <- function(dummy, gui, nl.path, model.path) {
library(RNetLogo)
NLStart(nl.path, gui=gui)
NLLoadModel(model.path)
}
### Set up finalisation function
postpro <- function(x) {
NLQuit()
}
### Set paths
# For NetLogo
nl.path <- "C:/Program Files/NetLogo 6.0/app"
nl.jarname <- "netlogo-6.0.0.jar"
# For the model
model.path <- "E:/Model.nlogo"
# For the function "sim" code
sim.path <- "E:/sim.R"
### Set base values for parameters
base.param <- c('prey-max-velocity' = 25,
'prey-agility' = 3.5,
'prey-acceleration' = 20,
'prey-deceleration' = 25,
'prey-vision-distance' = 10,
'prey-vision-angle' = 240,
'time-to-turn' = 5,
'time-to-return-to-foraging' = 300,
'time-spent-circling' = 2,
'predator-max-velocity' = 35,
'predator-agility' = 3.5,
'predator-acceleration' = 20,
'predator-deceleration' = 25,
'predator-vision-distance' = 20,
'predator-vision-angle' = 200,
'time-to-give-up' = 120,
'number-of-safe-zones' = 1,
'number-of-target-patches' = 5,
'proportion-obstacles' = 0.05,
'obstacle-radius' = 2.0,
'obstacle-radius-range' = 0.5,
'obstacle-sensitivity-for-prey' = 0.95,
'obstacle-sensitivity-for-predators' = 0.95,
'safe-zone-attractiveness' = 500
)
## Get names of parameters
param.names <- names(base.param)
### Load the code of the simulation function (name: sim)
source(file=sim.path)
### Convert "base.param" to a matrix, as required by parApply
base.param <- matrix(base.param, nrow=1, ncol=24)
### Get the number of simulations we want to run
design.combinations <- length(base.param[[1]])
already.processed <- 0
### Initialise NetLogo
processors <- detectCores()
cl <- makeCluster(processors)
clusterExport(cl, 'sim')
gui <- FALSE
invisible(parLapply(cl, 1:processors, prepro, gui=gui, nl.path=nl.path, model.path=model.path))
### Run the simulation across all processors, using parApply
sim.result.base <- parApply(cl, base.param, 1, sim,
param.names,
no.repeated.sim = 100,
trace.progress = FALSE,
iter.length = design.combinations,
function.name = "base parameters")
### Run the simulation on a single processor
#sim.result.base <- sim(base.param,
# param.names,
# no.repeated.sim = 100,
# my.nl1,
# trace.progress = TRUE,
# iter.length = design.combinations,
# function.name = "base parameters")
Here's a mock-up of the 'sim' function (adapted from Thiele's paper "Facilitating parameter estimation and sensitivity analyses of agent-based models - a cookbook using NetLogo and R"):
sim <- function(param.set, parameter.names, no.repeated.sim, trace.progress, iter.length, function.name) {
# Some security checks
if (length(param.set) != length(parameter.names))
{ stop("Wrong length of param.set!") }
if (no.repeated.sim <= 0)
{ stop("Number of repetitions must be > 0!") }
if (length(parameter.names) <= 0)
{ stop("Length of parameter.names must be > 0!") }
# Create an empty list to save the simulation results
eval.values <- NULL
# Run the repeated simulations (to control stochasticity)
for (i in 1:no.repeated.sim)
{
# Create a random-seed for NetLogo from R, based on min/max of NetLogo's random seed
NLCommand("random-seed",runif(1,-2147483648,2147483647))
## This is the stuff for one simulation
cal.crit <- NULL
# Set NetLogo parameters to current parameter values
lapply(seq_along(parameter.names), function(x) {NLCommand("set ", parameter.names[x], param.set[x])})
NLCommand("setup")
# This should run "go" until prey-win =/= 5, i.e. when the pursuit ends
NLDoCommandWhile("prey-win = 5", "go")
# Report a value
prey <- NLReport("prey-win")
# Report another value
pred <- NLReport("predator-win")
## Extract the values we are interested in
cal.crit <- rbind(cal.crit, c(prey, pred))
# append to former results
eval.values <- rbind(eval.values,cal.crit)
}
## Make sure eval.values has column names
colnames(eval.values) <- c("PreySuccess", "PredSuccess")
# Return the mean of the repeated simulation results
if (no.repeated.sim > 1) {
return(colMeans(eval.values))
}
else {
return(eval.values)
}
}
I think the problem might lie in the "nl.obj" string that RNetLogo uses to identify the NetLogo instance you want to run the code on; however, I've tried several different methods of fixing this, and I haven't been able to come up with a solution that works.

When I initialise NetLogo across all the processors using the code provided in Thiele's example, I don't set an "nl.obj" value for each instance, so I'm guessing RNetLogo uses some kind of default list? However, in Thiele's original code, the "sim" function requires you to specify which NetLogo instance you want to run it on, so R throws an error when I try to run the final line (Error in checkForRemoteErrors(val) : one node produced an error: argument "nl.obj" is missing, with no default).

I have modified the "sim" function code so that it doesn't require this argument and just accepts the default setting for nl.obj, but then my simulation only runs on a single processor. So I think that, by default, "sim" must only be running the code on a single instance of NetLogo. I'm not certain how to fix it.
This is also the first time I've used the 'parallel' package, so I could be missing something obvious to do with 'parApply'. Any insight would be much appreciated!
Thanks in advance!
I am still in the process of applying a similar technique to perform a Morris elementary-effects screening with my NetLogo model. For me, the parallel execution works fine. I compared your script to mine and noticed that in my version the parApply call of the simulation function (simfun) is wrapped in a function statement (see below); maybe adding that wrapper already solves your issue.
sim.results.morris <- parApply(cl, mo$X, 1, function(x) {simfun(param.set=x,
no.repeated.sim=no.repeated.sim,
parameter.names=input.names,
iter.length=iter.length,
fixed.values=fixed.values,
model.seed=new.model.seed,
function.name="Morris")})

theta.sparse error with lordif

I was wondering whether anyone can help me out.
I am trying to run a DIF analysis on my data but keep getting a theta.sparse error, which I am unsure how to fix. I would really appreciate any help you can give me.
library(lordif)
dat<- read.csv2("OPSO.csv",header=TRUE)
datgender <- read.csv2("DATA.csv",header=TRUE)
group<-datgender$Gender
sink("outputDIFopso.txt")
gender.difopso <- lordif(dat, group, selection = NULL,
criterion = c("Chisqr", "R2", "Beta"),
pseudo.R2 = c("McFadden", "Nagelkerke", "CoxSnell"), alpha = 0.01,
beta.change = 0.1, R2.change = 0.02, maxIter = 10, minCell = 5,
minTheta = -4, maxTheta = 4, inc = 0.1, control = list(), model = "GRM",
anchor = NULL, MonteCarlo = FALSE, nr = 100)
print(gender.difopso)
summary(gender.difopso)
sink()
pdf("graphtestop.pdf")
plot(gender.difopso)
dev.off()
dev.off()
Error in lordif(dat, group, selection = NULL, criterion = c("Chisqr", :
object 'theta.sparse' not found
Thank you :)
You should check the output just before that error line. It will probably say that you have no items flagged for DIF. When that's the case, you should just run the mirt function yourself and extract the theta and ipar objects as necessary.
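A minimal sketch of that fallback, assuming no items get flagged, so a single graded-response-model fit on the combined sample suffices (dat and group are the objects from the question):

library(mirt)

# One-factor GRM on the pooled data
mod   <- mirt(dat, model = 1, itemtype = "graded")
theta <- fscores(mod)                       # person (theta) estimates
ipar  <- coef(mod, simplify = TRUE)$items   # item parameter matrix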
The author could add some case handling for when compare(flags, flags.matrix) is true. At the very least, a warning should be emitted when no items show DIF, the same way one is issued when every item is flagged:
if (ndif == ni) {
warning("all items got flagged for DIF - stopping\n")
}
but there is no corresponding handling when (ndif == 0), even though compare(flags, flag.matrix) evaluates to TRUE.
The implication when all (or none) of the items show DIF is that you would get the same results (same ICC plots, same inferences, etc.) by fitting mirt on the combined sample (no DIF) or separate mirt models for each group (all DIF). So bypassing lordif in those cases is a correct time-saving shortcut.
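Continuing the sketch above, the two equivalent routes would look like this:

# No items flagged: one calibration on the pooled sample
mod.all <- mirt(dat, model = 1, itemtype = "graded")
# All items flagged: separate calibrations per group carry the same information
mods <- lapply(split(dat, group), mirt, model = 1, itemtype = "graded")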
