R: Problems with updating list objects using a while-loop - r

I am currently trying to simulate a (random) market in R using a while-loop that runs while the market is open: while the time is less than 600 minutes.
On this market only one of four events may happen at any time: birth of a supply, birth of a demand, death of a supply or death of a demand.
The waiting times for these events are drawn from exponential distributions using the rexp() command, each with its own intensity. Their amounts and respective prices are each drawn from their own normal distribution (only values greater than 0), and the time is then updated depending on which of the events happens first.
I would then like to update these intensities (using Cox regression). For this to happen I need to store previous information about each of the events, preferably in a list, so that I can for example draw samples from the living supplies and remove them, to imitate a purchase. I basically want to keep track of what is "alive" on the market at a given time. Here is some of my code:
TIME <- 0
count <- 1
...
my.stores <- c()
while(TIME < 600){
  time.supply.birth <- rexp(1, intensity1)
  time.supply.death <- rexp(1, intensity2)
  time.demand.birth <- rexp(1, intensity3)
  time.demand.death <- rexp(1, intensity4)
  case1 <- time.supply.birth == min(time.supply.birth, time.demand.birth, time.supply.death, time.demand.death)
  case2 <- time.supply.death == min(time.supply.birth, time.demand.birth, time.supply.death, time.demand.death)
  case3 <- time.demand.birth == min(time.supply.birth, time.demand.birth, time.supply.death, time.demand.death)
  case4 <- time.demand.death == min(time.supply.birth, time.demand.birth, time.supply.death, time.demand.death)
  TIME <- TIME + time.supply.birth*case1 + time.supply.death*case2 + time.demand.birth*case3 + time.demand.death*case4
  if(case1 == T){
    amount.supply.birth <- rnorm() # with values
    price.supply.birth <- rnorm()
    count.supply.birth.event <- count.supply.birth.event + 1
    my.stores[[count]]$amount.supply.birth <- c(my.stores[[count-1]]$amount.supply.birth, amount.supply.birth)
    my.stores[[count]]$price.supply.birth <- c(my.stores[[count-1]]$price.supply.birth, price.supply.birth)
  } else if(case2 == T) {
    # Death supply event: here a sample from the living supplies should be drawn
  } else if(case3 == T){
    # Similar to case 1
  } else if(case4 == T){
    # similar to case 2
  } else{
  }
  count <- count + 1
}
My problem is that I cannot even store any information in the list, because the while-loop breaks immediately after one iteration, leaving my.stores with length 1. I bet it is something about my indexing into the list, but I'm not sure how to get around it. I get the following error:
Error in my.stores[[count - 1]] :
attempt to select less than one element in get1index <real>
and when I print the list I get the following:
> my.stores[[1]]
$amount.demand.birth
[1] 6.044815
Say I draw a demand.birth with an amount and a price, and then on the next iteration I similarly draw a supply.birth. I would have liked something like:
> my.stores[[1]]
$amount.demand.birth
[1] 6.044815
$amount.supply.birth
[1] 0
$price.demand.birth
[1] 50.78
$price.supply.birth
[1] 0

> my.stores[[2]]
$amount.demand.birth
[1] 6.044815 6.044815
$amount.supply.birth
[1] 0.0000 7.1312
$price.demand.birth
[1] 50.78 50.78
$price.supply.birth
[1]  0 95
Can anyone help me with this, or suggest another approach?
Sorry about the long post.
Cheers!

Since my.stores[[0]] is not valid, did you try:

if (count == 1) {
  my.stores[[count]]$amount.supply.birth <- amount.supply.birth
  my.stores[[count]]$price.supply.birth <- price.supply.birth
} else {
  my.stores[[count]]$amount.supply.birth <- c(my.stores[[count-1]]$amount.supply.birth, amount.supply.birth)
  my.stores[[count]]$price.supply.birth <- c(my.stores[[count-1]]$price.supply.birth, price.supply.birth)
}
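Beyond that indexing fix, it may be simpler to keep a single running event log plus a data frame of the currently "living" supplies, instead of copying the whole history into my.stores on every iteration. Below is a minimal sketch of that idea; the intensities (0.2, 0.1) and the normal parameters are made up for illustration, and only the supply side is written out:

set.seed(1)
TIME <- 0
# one row per currently "living" supply
living.supply <- data.frame(amount = numeric(0), price = numeric(0))
# one row per event; this is the history the Cox regression can use later
events <- data.frame(time = numeric(0), type = character(0),
                     amount = numeric(0), price = numeric(0))
while (TIME < 600) {
  # draw the four competing waiting times (made-up intensities)
  waits <- c(supply.birth = rexp(1, 0.2), supply.death = rexp(1, 0.1),
             demand.birth = rexp(1, 0.2), demand.death = rexp(1, 0.1))
  event <- names(which.min(waits)) # the event that happens first
  TIME  <- TIME + min(waits)
  if (event == "supply.birth") {
    new <- data.frame(amount = abs(rnorm(1, 5, 2)), price = abs(rnorm(1, 50, 10)))
    living.supply <- rbind(living.supply, new)
    events <- rbind(events, data.frame(time = TIME, type = event,
                                       amount = new$amount, price = new$price))
  } else if (event == "supply.death" && nrow(living.supply) > 0) {
    gone <- sample(nrow(living.supply), 1) # imitate a purchase
    events <- rbind(events, data.frame(time = TIME, type = event,
                                       amount = living.supply$amount[gone],
                                       price  = living.supply$price[gone]))
    living.supply <- living.supply[-gone, , drop = FALSE]
  }
  # demand.birth / demand.death: analogous, with a living.demand data frame
}

Sampling a row from living.supply and deleting it imitates a purchase, and because every event is appended to events with its timestamp, the full history stays available for the Cox regression without nesting lists.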

Related

Basic While Loop with If Statement

I'm creating a basic function that runs a while loop and an if statement for an R class and am looking for help.
I don't want to overcomplicate things, so I'd prefer to stick to just the basics with this answer.
I created a basic football score function whose intention is to add 7 points to the score if yards >= 80, add 3 points to the total score if yards >= 60 (with else if), and add nothing to the total score for anything below 60.
This is where I have started:
teamA <- function(drives) {
  i <- 0
  score <- 0
  while (i < drives){
    yards <- sample(0:100,1)
    if (yards >= 80){
      score <- score + 7
    }
    else if (yards >= 60){
      score <- score + 3
    }
    else {
      score <- score
    }
    i <- i + 1
    return (score)
  }
}
teamA(5)
This is obviously not accurate to real football, but I wanted to simplify it for class.
I wanted to make a function where you could specify an amount of drives a team had and compile a score based on a random amount of yards generated by the sample I wrote in the while loop.
Would anyone be able to help fix this code? I'm not very experienced with R and can't think of the best way to solve my issue.
My biggest issue right now is that it seems like I'm only getting one score returned and not compiling a total score.
The problem in your code is that you've placed return(score) in the while loop. return terminates the function and returns the corresponding value. Therefore, your function always gets terminated after the first iteration of your loop.
Another edit I made to your code is to remove the last condition, because it doesn't change the value of score.
set.seed(4)
teamA <- function(drives) {
  i <- 0
  score <- 0
  while (i < drives) {
    yards <- sample(0:100,1)
    if (yards >= 80) {
      score <- score + 7
    } else if (yards >= 60) {
      score <- score + 3
    }
    i <- i + 1
  }
  return(score)
}
teamA(5)
[1] 6
An easy way to debug such functions is to place a browser() in the code and see what happens in the function.
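As a side note (not part of the original answer): because each drive's score does not depend on previous drives, the same function can be written without a loop at all, which is shorter and avoids the return-placement pitfall entirely:

# Loop-free variant (a sketch): draw all drives at once and score them vectorized
teamA_vec <- function(drives) {
  yards <- sample(0:100, drives, replace = TRUE)
  sum(ifelse(yards >= 80, 7, ifelse(yards >= 60, 3, 0)))
}
teamA_vec(5)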

Changing start dates of schedules to optimize resources

I have a bunch of work that needs to be performed at specific time intervals. However, we have limited resources to do that work each day. Therefore, I am trying to optimize the start dates (start dates can only be moved forward, not backward) so that the resources used every day are more or less in line with what we have budgeted for.
These helper functions are used in the example below:
# Function to shift/rotate a vector
shifter <- function(x, n = 1) {
  if (n == 0) x else c(tail(x, -n), head(x, n))
}
# Getting a range of dates
get_date_range <- function(current_date = Sys.Date(), next_planned_date = Sys.Date() + 5)
{
  seq.Date(as.Date(current_date), as.Date(next_planned_date), "days")
}
Assume the following toy dataset: task P1 starts on the 14th and P2 on the 15th. A value of zero means that no work is done on that task on that day.
# EXAMPLE TOY DATASET
library(dplyr)
datain = data.frame(dated = c("2018-12-14", "2018-12-15", "2018-12-16", "2018-12-17"),
                    P1 = c(1,2,0,3), P2 = c(0,4,0,6)) %>%
  mutate(dated = as.character(dated))
# The amount of resources that can be used in a day
max_work = 4
# We will use all possible combinations of start dates to
# search for the best one (date_range_of_all is built elsewhere
# in the full script, e.g. with get_date_range per task)
possible_start_dates <- do.call(expand.grid, date_range_of_all)
# utilisation stores the capacity used for each
# combination of start dates.
# We will use the minimum of these utilisations
utilisation <- NULL     # utilisation difference; absolute value
utilisation_act <- NULL # actual utilisation including negative utilisation
# copy of data for making changes
ndatain <- datain
# Move data across possible start dates and
# calculate the utilisation for each movement
for(i in 1:nrow(possible_start_dates)) # for every combination
{
  for(j in 1:ncol(possible_start_dates)) # for every plan
  {
    # Number of days between the original start date and the candidate
    # (oriz_start_date holds the original planned start dates and is
    # defined elsewhere in the full script)
    days_diff = difftime(oriz_start_date[["Plan_Start_Date"]][j],
                         possible_start_dates[i,j], tz = "UTC", units = "days") %>% as.numeric()
    # Move the start dates
    ndatain[, (j+1)] <- shifter(datain[, (j+1)], days_diff)
  }
  if(is.null(utilisation)) # first iteration
  {
    # calculate the utilisation
    utilisation = c(i, abs(max_work - rowSums(ndatain %>% select(-dated))))
    utilisation_act <- c(i, max_work - rowSums(ndatain %>% select(-dated)))
  } else { # everything except the first iteration
    utilisation = rbind(utilisation, c(i, abs(max_work - rowSums(ndatain %>% select(-dated)))))
    utilisation_act <- rbind(utilisation_act, c(i, max_work - rowSums(ndatain %>% select(-dated))))
  }
}
# convert matrices to data frames
row.names(utilisation) <- paste0("Row", 1:nrow(utilisation))
utilisation <- as.data.frame(utilisation)
row.names(utilisation_act) <- paste0("Row", 1:nrow(utilisation_act))
utilisation_act <- as.data.frame(utilisation_act)
# Total utilisation
tot_util = rowSums(utilisation[-1])
# replace negative utilisation with zero
utilisation_act[utilisation_act < 0] <- 0
tot_util_act = rowSums(utilisation_act[-1])
# Index of all possible start dates producing minimum utilisation changes
indx_min_all = which(min(tot_util) == tot_util)
indx_min_all_act = which(min(tot_util_act) == tot_util_act)
# The candidate start dates that also minimise actual utilisation
candidate_dates <- possible_start_dates[intersect(indx_min_all, indx_min_all_act), ]
# Now check which of them are closest to the current start dates, so that the movement is minimal
time_diff <- c()
for(i in 1:nrow(candidate_dates))
{
  # accumulate the total day-shift for this candidate in the inner loop
  timediff_indv <- 0
  for(j in 1:ncol(candidate_dates))
  {
    diff_days <- difftime(oriz_start_date[["Plan_Start_Date"]][j],
                          candidate_dates[i,j], tz = "UTC", units = "days") %>% as.numeric()
    timediff_indv <- timediff_indv + diff_days
  }
  time_diff <- c(time_diff, timediff_indv)
}
# Alternatives
fin_dates <- candidate_dates[min(time_diff) == time_diff, ]
The above code runs well and produces the expected output; however, it does not scale. I have a very large dataset (two years' worth of work, with more than a thousand different tasks repeating at intervals), and searching through every possible combination is not a viable option. Are there ways I can formulate this as a standard optimization problem and use Rglpk or Rcplex, or some even better approach? Thanks for any input.
Here comes my longest StackOverflow answer ever, but I really like optimization problems. This is a variant of the so-called job shop problem with a single machine, which you might be able to solve with Rcplex if you first formulate it as an LP model. However, these formulations often scale poorly, and computational times can grow exponentially depending on the formulation.

For big problems it is very common to use a heuristic instead, for example a genetic algorithm, which is what I often use in cases like this. It does not guarantee the optimal solution, but it gives us more control over performance versus runtime, and the solution usually scales very well. Basically, it works by creating a large set of random solutions, called the population. We then iteratively update this population by combining solutions to create 'offspring', where better solutions should have a higher probability of creating offspring.
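As an aside on the LP route: in case you want to experiment with it, a call to Rglpk has the general shape below. This is a generic toy LP, not the scheduling model; writing down that model (start-date decision variables plus per-day capacity constraints) is the hard part and is not attempted here.

library(Rglpk)
# toy problem: maximize 2x + 4y + 3z subject to three <= constraints
obj <- c(2, 4, 3)
mat <- matrix(c(3, 2, 1,
                4, 1, 3,
                2, 2, 2), nrow = 3, byrow = TRUE)
dir <- c("<=", "<=", "<=")
rhs <- c(60, 40, 80)
Rglpk_solve_LP(obj, mat, dir, rhs, max = TRUE)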
As a scoring function (to determine which solutions are 'better'), I used the sum of squares of the overcapacity per day, which penalizes very large overcapacity on a day. Note that you can use any scoring function that you want, so you could also penalize under-utilization of capacity if you deem that important.
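For instance, a variant that also penalizes idle capacity might look like the sketch below; idle_weight is a made-up tuning parameter, and the function is intended as a drop-in replacement for score_solution in the code further down.

# Sketch: penalize both overcapacity (squared, as before) and idle capacity
score_solution_balanced <- function(solution, tasks, capacity_per_day, idle_weight = 0.1)
{
  cap_left = capacity_per_day - rowSums(solution[, tasks])
  over = sum(cap_left[cap_left < 0]^2) # squared overcapacity, as in the answer
  idle = sum(cap_left[cap_left > 0])   # linear penalty for unused capacity
  return(over + idle_weight * idle)
}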
The code for the example implementation is shown below. I generated data for 200 days and 80 tasks. It runs in about 10 seconds on my laptop, improving the score of the random solution by over 65%, from 2634 to 913. With an input of 700 days and 1000 tasks, the algorithm still runs in a matter of minutes with the same parameters.
Best solution score per iteration: (plot omitted; the plot() call at the end of the script produces it)
I also included use_your_own_sample_data, which you can set to TRUE to have the algorithm solve a simpler and smaller example to confirm that it gives the expected output:
dated P1 P2 P3 P4 P5 dated P1 P2 P3 P4 P5
2018-12-14 0 0 0 0 0 2018-12-14 0 0 3 1 0
2018-12-15 0 0 0 0 0 2018-12-15 0 3 0 0 1
2018-12-16 0 0 0 0 0 ----> 2018-12-16 0 0 3 1 0
2018-12-17 0 3 3 1 1 2018-12-17 0 3 0 0 1
2018-12-18 4 0 0 0 0 2018-12-18 4 0 0 0 0
2018-12-19 4 3 3 1 1 2018-12-19 4 0 0 0 0
I hope this helps! Let me know if you have more questions regarding this.
CODE
### PARAMETERS -------------------------------------------
n_population = 100 # the number of solutions in a population
n_iterations = 100 # The number of iterations
n_offspring_per_iter = 80 # number of offspring to create per iteration
max_shift_days = 20 # Maximum number of days we can shift a task forward
frac_perm_init = 0.25 # fraction of columns to change from default solution while creating initial solutions
early_stopping_rounds = 100 # Stop if score not improved for this amount of iterations
capacity_per_day = 4
use_your_own_sample_data = FALSE # set to TRUE to use your own test case
### SAMPLE DATA -------------------------------------------------
# datain should contain the following columns:
# dated: A column with sequential dates
# P1, P2, ...: columns with values for workload of task x per date
n_days = 200
n_tasks = 80
set.seed(1)
if(!use_your_own_sample_data)
{
  # my sample data:
  datain = data.frame(dated = seq(Sys.Date()-n_days, Sys.Date(), 1))
  # add some random tasks
  for(i in 1:n_tasks)
  {
    datain[[paste0('P',i)]] = rep(0, nrow(datain))
    rand_start = sample(seq(1, nrow(datain)-5), 1)
    datain[[paste0('P',i)]][seq(rand_start, rand_start+4)] = sample(0:5, 5, replace = T)
  }
} else
{
  # your sample data:
  library(dplyr)
  datain = data.frame(dated = c("2018-12-14", "2018-12-15", "2018-12-16", "2018-12-17", "2018-12-18", "2018-12-19"),
                      P1 = c(0,0,0,0,4,4), P2 = c(0,0,0,3,0,3), P3 = c(0,0,0,3,0,3), P4 = c(0,0,0,1,0,1), P5 = c(0,0,0,1,0,1)) %>%
    mutate(dated = as.Date(dated, format='%Y-%m-%d'))
}
tasks = setdiff(colnames(datain),c("dated","capacity")) # a list of all tasks
# the following vector contains for each task its maximum start date
max_date_per_task = lapply(datain[,tasks],function(x) datain$dated[which(x>0)[1]])
### ALL OUR PREDEFINED FUNCTIONS ----------------------------------
# helper function to shift a task forward in time
shifter <- function(x, n = 1) {
  if (n == 0) x else c(tail(x, n), head(x, -n))
}
# Score a solution
# We calculate the score as the sum of the squares of our overcapacity
# (so we punish very large overcapacity on a day)
score_solution <- function(solution, tasks, capacity_per_day)
{
  cap_left = capacity_per_day - rowSums(solution[, tasks]) # spare capacity per day
  over_capacity = sum(cap_left[cap_left < 0]^2)            # sum of squares of overcapacity (negatives)
  return(over_capacity)
}
# Merge solutions
# Get approx. 50% of the tasks from solution 1 and the remaining tasks from solution 2
merge_solutions <- function(solution1, solution2, tasks)
{
  tasks_from_solution_1 = sample(tasks, round(length(tasks)/2))
  tasks_from_solution_2 = setdiff(tasks, tasks_from_solution_1)
  new_solution = cbind(solution1[, 'dated', drop=F],
                       solution1[, tasks_from_solution_1, drop=F],
                       solution2[, tasks_from_solution_2, drop=F])
  return(new_solution)
}
# Randomize solution
# Create an initial solution by rescheduling a fraction of the tasks
randomize_solution <- function(solution, max_date_per_task, tasks, tasks_to_change = 1/8)
{
  # select some tasks to reschedule
  tasks_to_change = max(1, round(length(tasks)*tasks_to_change))
  selected_tasks <- sample(tasks, tasks_to_change)
  for(task in selected_tasks)
  {
    # shift the task between max_shift_days and 0 days forward
    new_start_date <- sample(seq(max_date_per_task[[task]]-max_shift_days, max_date_per_task[[task]], by='day'), 1)
    new_start_date <- max(new_start_date, min(solution$dated))
    solution[, task] = shifter(solution[, task], as.numeric(new_start_date-max_date_per_task[[task]]))
  }
  return(solution)
}
# sort a population by score (best first)
sort_pop <- function(population)
{
  return(population[order(sapply(population, function(x) {x[['score']]}), decreasing = F)])
}
# return the scores of a population
pop_scores <- function(population)
{
  sapply(population, function(x) {x[['score']]})
}
### RUN SCRIPT -------------------------------
# starting score
print(paste0('Starting score: ', score_solution(datain, tasks, capacity_per_day)))
# Create initial population
population = vector('list', n_population)
for(i in 1:n_population)
{
  # create initial solutions by making changes to the initial solution
  solution = randomize_solution(datain, max_date_per_task, tasks, frac_perm_init)
  score = score_solution(solution, tasks, capacity_per_day)
  population[[i]] = list('solution' = solution, 'score' = score)
}
population = sort_pop(population)
score_per_iteration <- score_solution(datain, tasks, capacity_per_day)
# Run the algorithm
for(i in 1:n_iterations)
{
  cat(paste0('\n---- Iteration ', i, ' -----\n')) # cat interprets \n; print would not
  # create some random perturbations in the population
  for(j in 1:10)
  {
    sol_to_change = sample(2:n_population, 1)
    new_solution <- randomize_solution(population[[sol_to_change]][['solution']], max_date_per_task, tasks)
    new_score <- score_solution(new_solution, tasks, capacity_per_day)
    population[[sol_to_change]] <- list('solution' = new_solution, 'score' = new_score)
  }
  # Create offspring; first determine which solutions to combine.
  # The probability that a solution is selected to create offspring
  # decreases with its score (with some smoothing)
  probs = sapply(population, function(x) {x[['score']]})
  if(max(probs) == min(probs)){stop('No diversity in population left')}
  probs = 1-(probs-min(probs))/(max(probs)-min(probs))+0.2
  # create combinations
  solutions_to_combine = lapply(1:n_offspring_per_iter, function(y){
    sample(seq(length(population)), 2, prob = probs)})
  for(j in 1:n_offspring_per_iter)
  {
    new_solution <- merge_solutions(population[[solutions_to_combine[[j]][1]]][['solution']],
                                    population[[solutions_to_combine[[j]][2]]][['solution']],
                                    tasks)
    new_score <- score_solution(new_solution, tasks, capacity_per_day)
    population[[length(population)+1]] <- list('solution' = new_solution, 'score' = new_score)
  }
  population = sort_pop(population)
  population = population[1:n_population]
  print(paste0('Best score: ', population[[1]][['score']]))
  score_per_iteration = c(score_per_iteration, population[[1]][['score']])
  if(i > early_stopping_rounds + 1)
  {
    if(score_per_iteration[[i]] == score_per_iteration[[i - early_stopping_rounds]])
    {
      stop(paste0("Score not improved in the past ", early_stopping_rounds, " rounds. Halting algorithm."))
    }
  }
}
plot(x=seq(0,length(score_per_iteration)-1),y=score_per_iteration,xlab = 'iteration',ylab='score')
final_solution = population[[1]][['solution']]
final_solution[,c('dated',tasks)]
And indeed, as we expect, the algorithm turns out to be very good in reducing the number of days with a very high overcapacity:
final_solution = population[[1]][['solution']]
# number of days with workload higher than 10 in the initial solution
sum(rowSums(datain[, tasks]) > 10)
[1] 19
# number of days with workload higher than 10 in our solution
sum(rowSums(final_solution[, tasks]) > 10)
[1] 1

Performance R applying indicator and rbinding xts

I'm new to R and currently poking the thing with a stick till it does what I need done. Unfortunately I've hit a wall with some performance issues.
My problem is that I need a CCI indicator calculated on minute periods but refreshed every second for the "current" minute of the iteration.
My implementation works but is incredibly slow: for 4 days of forex data on EUR/USD, based on second periods, I need almost 15 minutes to apply the indicator.
I did read some stuff about preallocation and slow rbind operations, and I already reduced my rbind calls by refactoring the loops, but this didn't improve the performance. So I assume I'm losing the time elsewhere.
Since I don't know anyone who is proficient in R, I'm posting my code here and hoping for some help.
What I do is basically loop over my second data, accumulate the seconds into minutes and calculate the CCI; once I have done that for periode minutes, I then refresh the last minute bar every second.
addCCIToData <- function(bars, periode) {
  # bars is OHLC based on second periods
  # periode is the number of periods for the CCI calculation
  require(xts)
  require(quantmod)
  bars <- as.xts(bars)
  bars$CCI <- 0
  x <- 1
  ## scope is the time of observation == periode
  scope <- list()
  for (i in 1:periode) {
    scope[[i]] <- 1 # save time by preallocating?
  }
  y <- nrow(bars)
  lastminute <- 0
  createdBarCount <- 1
  enoughData <- FALSE
  zeit1 <- as.POSIXlt(time(bars[x]))
  while (x < y) {
    zeit <- as.POSIXlt(time(bars[x]))
    if (zeit$min != lastminute) {
      zeit2 <- zeit
      lastminute <- zeit$min
      zeit1 <- as.POSIXlt(time(bars[x])) # reset zeit1 because of new 1-minute bar
      createdBarCount <- createdBarCount + 1
      if (createdBarCount > periode && enoughData == FALSE) {
        enoughData = TRUE
        i = 2
        dataPeriodeMinus1 = scope[[1]]
        while (i <= periode - 1) {
          dataPeriodeMinus1 = rbind(dataPeriodeMinus1, scope[[i]])
          i = i + 1
        }
        createdBarCount <- periode
      } else if (enoughData == TRUE) {
        newScope <- list()
        for (i in 1:(periode - 1)) { # note: the original 1:periode-1 parses as (1:periode)-1
          newScope[i] <- scope[i + 1]
        }
        scope = newScope
        i = 2
        dataPeriodeMinus1 = scope[[1]]
        while (i <= periode - 1) {
          dataPeriodeMinus1 = rbind(dataPeriodeMinus1, scope[[i]])
          i = i + 1
        }
        createdBarCount <- periode
      }
    }
    a <- as.character(zeit1)
    b <- as.character(as.POSIXlt(time(bars[x])))
    c <- paste(a, b, sep = "::")
    scope[createdBarCount] <- list(to.minutes(OHLC(bars[c]), 1, "CCI")) # merge the seconds to minutes
    if (enoughData == TRUE) {
      data = rbind(dataPeriodeMinus1, scope[[periode]])
      # i = 2
      # while(i <= periode) { # improve!, we need only the last bar to be binded here
      #   data = rbind(data, scope[[i]]) # internet says this is slow
      #   i = i + 1
      # }
      # bars[[x,5]] = SMA(data$CCI.Close, periode)[[periode]][[1]]
      bars[[x, 5]] = CCI(data[, c("CCI.High", "CCI.Close", "CCI.Low")], periode, SMA)[[periode]]
    }
    x <- x + 1
  }
  bars
}
Edit: fixed code.
Edit 2: Test data can be obtained from here: Testdata
It can be loaded using the command load("path/to/file"). Then just call addCCIToData(bars_seconds["2015-01-05 00:00:00::2015-01-05 02:00:00"], 14) after sourcing the above function. I really do think that the continuous merging of seconds into minute bars is the time-consuming task. How can I optimize that?
Edit 3: It seems that the calculation of the CCI is also taking some time. For the complete set of test data I need:
357 s without the CCI calculation
902 s with the CCI calculation
Thank you very much!
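One general note on the rbind point raised above: growing an xts object with rbind inside a loop copies everything accumulated so far on every call, so n binds cost roughly O(n^2). Collecting the pieces in a list and binding once is usually far cheaper. A minimal sketch of the two patterns (not a fix for the CCI logic itself):

library(xts)
# dummy one-row xts pieces standing in for the per-minute bars
pieces <- lapply(1:1000, function(i) xts(matrix(rnorm(1)), order.by = Sys.time() + i))

# slow: each rbind copies the whole accumulated object
slow <- pieces[[1]]
for (i in 2:length(pieces)) slow <- rbind(slow, pieces[[i]])

# fast: collect in a list, bind once at the end
fast <- do.call(rbind, pieces)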

For a growing data feed in R, how can two time lengths be calculated based on "time to peak" and "time back to baseline"?

How can the following be accomplished with R?

1. Connect a constantly changing data source (e.g. https://goo.gl/XCM6yG) to R,
2. measure the time from when prices start to rise consistently from the initial baseline range to the peak (represented by the green horizontal line), and
3. measure the time from the peak back to the baseline range (the teal line).

Note: "departure from the baseline range" (unless there is a better mathematical way) is defined as at least the most recent 5 prices all being more than 3 standard deviations above the mean of the latest 200 prices.
This is a really vague question with an unknown use case, but... here we go.

1. Monitoring in what way? The length? That's what I did.
2. Once the vector has over 200 values we can take the mean, so we need some control flow for that part.
3. I added in some noise that forces the behaviour you want to detect (ifelse(i %in% 996:1000, 100, 0) means: if the iterator is between 996 and 1000, add 100 to the random normal I generated). We set a counter and check whether each value is above 3 sd of the vector values; if so, we record the time.
4. At each input of the data, check whether the current value is the max value. This is trickier, since we would really have to look at the trend, which is beyond the scope of my assistance; that part is up to you to figure out, since I don't fully understand the requirement.
vec <- vecmean <- val5 <- c()
counter <- 0
for(i in 1:1000){
  vec[i] <- rnorm(1) + ifelse(i %in% 996:1000, 100, 0)
  Sys.sleep(.001) # change to 1 second
  #1
  cat('The vector has', length(vec), 'values within...\n')
  #2
  if(length(vec) > 200){
    vecmean <- c(vecmean, mean(vec[(i-200):i]))
    cat('The mean of the last 200 observations is ',
        format(vecmean[length(vecmean)], digits = 2), '\n')
    #3
    upr <- vecmean[length(vecmean)] + 3*sd(vec)
    if(vec[i] > upr){
      counter <- counter + 1
    } else{
      counter <- 0
    }
    if(counter > 4){
      cat('Last 5 values greater than 3sd above the rolling mean!\n')
      val5 <- Sys.time()
      cat("Timestamp:", as.character(val5), '\n')
    }
  }
  # 4
  theMax <- max(vec)
  if(vec[i] == theMax & !is.null(val5)){
    valMax <- Sys.time()
    valDiff <- valMax - val5
    cat('The time difference between the first flag and second is', as.character(valDiff), '\n')
  }
}
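A note on point 3 (a sketch, not part of the original answer): the hand-rolled mean over the last 200 values can be replaced with the running-window helpers from the TTR package, assuming the feed is accumulated in a numeric vector called prices:

library(TTR)
prices <- cumsum(rnorm(1000))   # stand-in for the accumulated feed
mu  <- runMean(prices, n = 200) # rolling mean of the latest 200 prices
sdv <- runSD(prices, n = 200)   # rolling sd of the latest 200 prices
above <- prices > mu + 3 * sdv  # NA for the first 199 observations
# "departure from baseline": the 5 most recent prices all above the band
departed <- isTRUE(all(tail(above, 5)))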

Programming state switches in R

I am trying to write a program that switches a state from A to B and vice versa.
rnumbers <- data.frame(replicate(5,runif(2000, 0, 1)))
Imagine this data frame of random numbers drawn from a uniform distribution, except with 10000 rows instead of 20.
Setting the probabilities of going to state A and to state B:
dt <- c(.02)
A <- dt*1
B <- dt*.5
Next I make a function that goes through the data frame rnumbers, putting in a 0 if the number is less than B and a 1 if it is less than A.
step_generator <- function(x){
  step <- ifelse(x < B, 0, ifelse(x < A, 1, NA))
  return(step)
}
state <- apply(rnumbers, 2, step_generator)
This essentially gives me what I want: a data frame whose columns contain 0, 1, or NA depending on the value of the random number in rnumbers. However, I am missing a couple of things.
1) I would like to keep track of how long each state lasts. What I mean is: imagine each row as a time step of size dt (dt <- c(.02) as above); I want to be able to plot "state vs. time". To address this, this is what I tried:
state1 <- transform(state, time = rep(dt))
state2 <- transform(state1, cumtime = cumsum(time))
This gets me close to what I want: cumtime goes from .02 to .4. However, I want the clock to start at 0 in the 1st row and add .02 for every subsequent row.
2) I need to know how long each state lasts. Essentially, I want to go through each column and ask how much time (cumsum) each state lasts. That would give me a distribution of times for state A and state B, which I want stored in another data frame.
I think this makes sense, if anything is unclear please let me know and I will clarify.
Thanks for any and all help!
The range between "number is less than .02*1 and greater than .02*.5" is very narrow, so if you set the simulation up this way, most of the first row will most probably be zero. And you cannot really hope to get success with ifelse when the conditions have any look-back features; that function doesn't allow "back-indexing".
rstate <- rnumbers # copy the structure
rstate[] <- NA     # preserve structure with NA's
# Init:
rstate[1, ] <- rnumbers[1, ] < .02 & rnumbers[1, ] > 0.01
step_generator <- function(col, rnum){
  for (i in 2:length(col)) {
    if (rnum[i] < B) {
      col[i] <- 0
    } else if (rnum[i] < A) {
      col[i] <- 1
    } else {
      col[i] <- col[i-1]
    }
  }
  return(col)
}
# Run for each column index:
for(cl in 1:5){
  rstate[, cl] <- step_generator(rstate[, cl], rnumbers[, cl])
}
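For the two follow-up requests in the question, here is a sketch (assuming rstate and dt from above): a clock that starts at 0 can be built directly from the row index, and rle() gives the length of every unbroken run of a state, which multiplied by dt yields the duration distribution per column.

# clock starting at 0: row i corresponds to time (i - 1) * dt
times <- (seq_len(nrow(rstate)) - 1) * dt

# duration of each unbroken run of a state, per column:
# rle() returns the value of each run and its length in rows
durations <- lapply(rstate, function(col) {
  r <- rle(col)
  data.frame(state = r$values, duration = r$lengths * dt)
})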
