Performing an Interval Regression in R

I am trying to run an interval regression, where my dependent variable, y, is made up of 14 intervals representing incomes. I have 5000 observations and six independent variables with which I am trying to predict y.
I am trying to follow the steps performed here:
http://www.karlin.mff.cuni.cz/~pesta/NMFM404/interval.html#References
I actually have y at its exact values, but I am trying to learn how to do an interval regression from this data. So, first I convert y into intervals.
Income[Income < 10000] <- 1
Income[Income > 10000 & Income < 20001] <- 2
Income[Income > 20000 & Income < 30001] <- 3
...
Income[Income > 300000] <- 14
Okay, fine. From the above link, I should actually convert it into two variables corresponding to the lower bound and the upper bound of each interval. I have to imagine that isn't the only way, but for now I am following those directions.
lIncome <- rep(0, 5000)  # lower income bound
uIncome <- rep(0, 5000)  # upper income bound
for (i in 1:5000){
  if (Income[i] == 1){
    lIncome[i] = 0
    uIncome[i] = 10000
  }
  if (Income[i] == 2){
    lIncome[i] = 10001
    uIncome[i] = 20000
  }
  ...
  if (Income[i] == 14){
    lIncome[i] = 300001
    uIncome[i] = Inf
  }
}
So now I have columns lIncome and uIncome which correspond to the levels of income, and this part works fine. Perhaps it is problematic that my last interval goes to infinity, but even if I cap it at 500000 I still get errors.
The instructions next say to incorporate the Surv() function.
So, I perform:
TEST <- Surv(lIncome, uIncome, event = rep(3,5000))
However, my errors start now. I get:
Warning message:
In Surv(lIncome, uIncome, event = rep(3, 5000)) :
Invalid status value, converted to NA
If I try
TEST <- Surv(lIncome, uIncome, event = rep(2,5000))
it works, but then:
m <- survreg(TEST ~ Age + AgeSq + ... , dist="gaussian")
gives:
Error in survreg(TEST ~ Age + AgeSq + NoDegree, dist = "gaussian") :
Invalid survival type
First of all, I am not sure why changing the 3 to a 2 makes Surv() work. And even if I change the Inf value to 500000 (or any appropriate number), setting the event code to 2 (or any other number) does not resolve the issue.
Second, even when I get past that part, the fact that survreg() fails leaves me puzzled.
Right now, my approach is to play around with my intervals to see if I can get it to work somehow, and then go from there. I am also looking more closely at the documentation for ?Surv and ?survreg.
Any help is very appreciated though, thank you.
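For reference, a hedged sketch of the fix that usually applies here: when Surv() gets (time, time2, event) without a type argument it assumes counting-process data, whose status must be coded 0/1 (or 1/2), which is most likely why 3 is rejected while 2 is accepted, and survreg() then refuses the counting-type object. The event code 3 ("interval censored") is only meaningful together with type = "interval". An alternative that avoids the event vector entirely is type = "interval2", where an NA upper bound marks a right-censored (open-ended) bracket:
library(survival)
# Open-ended top bracket: NA (not Inf) signals right-censoring for type = "interval2"
uIncome[is.infinite(uIncome)] <- NA
# Interval regression: Gaussian survreg() on an interval-censored outcome
# (covariate names taken from the question)
m <- survreg(Surv(lIncome, uIncome, type = "interval2") ~ Age + AgeSq + NoDegree,
             dist = "gaussian")
summary(m)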

Related

R: quickly simulate unbalanced panel with variable that depends on lagged values of itself

I am trying to simulate monthly panels of data where one variable depends on lagged values of that variable in R. My solution is extremely slow. I need around 1000 samples of 2545 individuals, each of whom is observed monthly over many years, but the first sample took my computer 8.5 hours to construct. How can I make this faster?
I start by creating an unbalanced panel of people with different birth dates, monthly ages, and variables xbsmall and error that will be compared to determine the Outcome. All of the code in the first block is just data setup.
# Setup:
library(plyr)
# Would like to have 2545 people (nPerson).
#Instead use 4 for testing.
nPerson = 4
# Minimum and maximum possible ages and birth dates
AgeMin = 10
AgeMax = 50
BornMin = 1950
BornMax = 1963
# Person-specific characteristics
ind = data.frame(
  id = 1:nPerson,
  BornYear = floor(runif(length(1:nPerson), min=BornMin, max=BornMax+1)),
  BornMonth = ceiling(runif(length(1:nPerson), min=0, max=12))
)
# Make an unbalanced panel of people over age 10 up to year 1986
# panel = ddply(ind, ~id, transform, AgeMonths = BornMonth)
panel = ddply(ind, ~id, transform, AgeMonths = (AgeMin*12):((1986-BornYear)*12 + 12-BornMonth))
# Set up some random variables to approximate the data generating process
panel$xbsmall = rnorm(dim(panel)[1], mean=-.3, sd=.45)
# Standard normal error for probit
panel$error = rnorm(dim(panel)[1])
# Placeholders
panel$xb = rep(0, dim(panel)[1])
panel$Outcome = rep(0, dim(panel)[1])
Now that we have data, here is the part that is slow (around a second on my computer for only 4 individuals, but hours for thousands). Each month, a person gets two draws (xbsmall and error) from two different normal distributions (these were done above), and Outcome == 1 if xbsmall > -error. However, if Outcome equals 1 in the previous month, then Outcome in the current month equals 1 if xbsmall + 4.47 > -error. I use xb = xbsmall + 4.47 in the code below (xb is the "linear predictor" in a probit model). I ignore the first month for each person for simplicity. For your information, this is simulating a probit DGP (but that is not necessary to know to solve the problem of computation speed).
# Outcome == 1 if and only if xb > -error
# The hard part: xb includes information about the previous month's outcome
start_time = Sys.time()
for(i in 1:nPerson){
  # Determine the range of monthly ages to loop over for this person
  AgeMonthMin = min(panel$AgeMonths[panel$id==i], na.rm=T)
  AgeMonthMax = max(panel$AgeMonths[panel$id==i], na.rm=T)
  # Loop over the monthly ages for this person and determine the outcome
  for(t in (AgeMonthMin+1):AgeMonthMax){
    # Indicator for whether Outcome was 1 last month
    panel$Outcome1LastMonth[panel$id==i & panel$AgeMonths==t] = panel$Outcome[panel$id==i & panel$AgeMonths==t-1]
    # xb = xbsmall + 4.47 if Outcome was 1 last month
    # Otherwise, xb = xbsmall
    panel$xb[panel$id==i & panel$AgeMonths==t] = with(panel[panel$id==i & panel$AgeMonths==t,], xbsmall + 4.47*Outcome1LastMonth)
    # Outcome == 1 if xb > -error
    panel$Outcome[panel$id==i & panel$AgeMonths==t] =
      ifelse(panel$xb[panel$id==i & panel$AgeMonths==t] > - panel$error[panel$id==i & panel$AgeMonths==t], 1, 0)
  }
}
end_time = Sys.time()
end_time - start_time
My thoughts for reducing computing time:
Something with cumsum()
Some wonderful panel data function that I do not know about
Find a way to make the t loop go through the same starting and ending points for each individual and then somehow use plyr::ddply() or dplyr::gather_by()
Iterative solution: make an educated guess about the value of Outcome at each monthly age (say, the mode) and somehow adjust values that do not match the previous month. This would work better in my real application because xbsmall has a very clear trend in age.
Do the simulation only for smaller samples and then estimate the effect of sample size on the values I need (the distributions of regression coefficient estimates not calculated here)
One approach is to use a split-apply-combine method. I take out the for(t in (AgeMonthMin+1):AgeMonthMax) loop and put the contents in a function:
generate_outcome <- function(x) {
  AgeMonthMin <- min(x$AgeMonths, na.rm = TRUE)
  AgeMonthMax <- max(x$AgeMonths, na.rm = TRUE)
  for (i in 2:(AgeMonthMax - AgeMonthMin + 1)){
    x$xb[i] <- x$xbsmall[i] + 4.47 * x$Outcome[i - 1]
    x$Outcome[i] <- ifelse(x$xb[i] > - x$error[i], 1, 0)
  }
  x
}
where x is a dataframe for one person. This allows us to simplify the panel$id==i & panel$AgeMonths==t construct. Now we can just do
out <- lapply(split(panel, panel$id), generate_outcome)
out <- do.call(rbind, out)
and all.equal(panel$Outcome, out$Outcome) returns TRUE. Computing 100 persons took 1.8 seconds using this method, compared to 1.5 minutes in the original code.
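For completeness, the same split-apply-combine step can also be written with dplyr; this is only a sketch of an equivalent call (it assumes dplyr >= 1.0 for group_modify()):
library(dplyr)
# Apply generate_outcome() to each person's block of rows; group_modify()
# strips the id column before calling the function and re-attaches it afterwards.
out2 <- panel %>%
  group_by(id) %>%
  group_modify(~ generate_outcome(.x)) %>%
  ungroup()
all.equal(out$Outcome, out2$Outcome)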

Calculating Value at Risk with performanceanalytics package

I tried to calculate the Value at Risk for a list of stock returns. There are 1000 observations, but I wanted to calculate it like the following:
VaR for observation:
1 to 500
2 to 501
3 to 502
4 to 503
and 500 to 999
As you can see, the result would be 500 calculations.
To solve the problem I tried to use an if condition with a for loop, like this:
if(x < 501 & y < 1000){for(i in KO.Returns){VaR(KO.Returns[x: y], p = 0.95, method = "historical")}}
If I use the mentioned code I get the following error message:
VaR calculation produces unreliable result (inverse risk) for column 1:
I think the problem is in your data. When you specify your window, the calculation of historical VaR sorts the data and picks out the quantile corresponding to your confidence level (the 5% left tail for p = 0.95). Sometimes your data will not have a negative value at that quantile, in which case historical VaR is meaningless (your losses cannot be a positive value; a loss is always negative). Hence the error.
I have been trying to reproduce similar errors using the following code:
library(PerformanceAnalytics)
data("edhec")
data = edhec[, 5]
valat = rollapply(data = data, width = 20,
                  FUN = function(x) VaR(x, p = 0.95, method = "historical"),
                  by.column = TRUE)
valat
But when I change the confidence level to p = 0.99, I stop getting the error. So, maybe you can try to change your confidence level and see.
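For the windows described in the question, a minimal sketch (assuming KO.Returns is a plain numeric vector or xts series holding the 1000 returns) is to slide a 500-observation window forward one step at a time:
library(PerformanceAnalytics)
# One historical VaR per window: observations 1:500, 2:501, ..., 500:999
var_series <- sapply(1:500, function(start) {
  window <- KO.Returns[start:(start + 499)]
  as.numeric(VaR(window, p = 0.95, method = "historical"))
})
length(var_series)  # 500 estimates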

ChoiceModelR, none=TRUE, "Error in betadraw[good, ] = newbeta[good, ]"

I am using ChoiceModelR for hierarchical multinomial logit. I want to get estimates for the utility of the outside good (which follows a normal distribution). The outside good has no covariates, unlike the inside goods (e.g. it cannot have a price or brand dummy), so I set list(none=TRUE) and do not add this no-choice alternative to the X data (as described in the documentation of ChoiceModelR) but only to the y (choice) data.
The iterations start normally, then at some point it stops and says
"Error in betadraw[good, ] = newbeta[good, ] : NAs are not allowed in subscripted assignments".
This likely happens because in line 388 of the choicemodelr function, the "good" subscript is NA.
I looked at some other questions about choicemodelr, and also about NAs in subscripts, but my guess is that my problem is specific to this function, in the sense that some quantities in the iteration probably get so large or small that "good" ends up being NA.
Below is a very simple example. I generate data with 3 products with varying attributes. In half of the periods product 3 is not offered. The 2000 consumers have normally distributed preferences over 3 attributes (and a preference for the outside good). A logit error is added to be consistent with the model. The outside good is indexed as product 4 (both when 3 and when 2 products are in the choice set).
How could I avoid the NA error? Am I doing something wrong, or is it a general bug in the function?
I also searched online for examples setting the option none=TRUE, but I did not find a reproducible one. Perhaps this option alone is the problematic part, as there is no problem recovering the true parameters if I set none=FALSE and do not let customers choose the outside option.
So the code which results in the NA bug is the following:
library("ChoiceModelR")
library("MASS")
set.seed(36)
# Set demand pars
beta_mu = c(-3,4,1)
beta_sigma = diag(c(1,1,1))
alfa_mu = 5 #outside good mean utility
alfa_sigma = 2 #outside good sd
# Three/two products, 3 vars (2 continuous,1 dummy)
threeprod <- list()
twoprod <- list()
purchase <- list()
for (t in 1:1000){
  threeprod[[t]] = cbind(rep(t,3), c(1,1,1), c(1,2,3), runif(3), runif(3), ceiling(runif(3,-0.5,0.5)))
  purchase[[t]] = which.max(rbind(threeprod[[t]][,c(4,5,6)] %*% mvrnorm(1,beta_mu,beta_sigma) +
                                    matrix(-log(-log(runif(3))), 3, 1),
                                  rnorm(1,alfa_mu,alfa_sigma)))
  threeprod[[t]] = cbind(threeprod[[t]], c(purchase[[t]],0,0))
}
for (t in 1001:2000){
  twoprod[[t]] = cbind(rep(t,2), c(1,1), c(1,2), runif(2), runif(2), ceiling(runif(2,-0.5,0.5)))
  purchase[[t]] = which.max(rbind(twoprod[[t]][,c(4,5,6)] %*% mvrnorm(1,beta_mu,beta_sigma) +
                                    matrix(-log(-log(runif(2))), 2, 1),
                                  rnorm(1,alfa_mu,alfa_sigma)))
  if (purchase[[t]] == 3) {purchase[[t]] <- 4}
  twoprod[[t]] = cbind(twoprod[[t]], c(purchase[[t]],0))
}
X <- rbind(do.call(rbind,threeprod),do.call(rbind,twoprod))
xcoding <- c(1,1,1)
mcmc = list(R = 5000, use = 2000)
options = list(none=TRUE, save=TRUE, keep=5)
out = choicemodelr(X, xcoding, mcmc = mcmc,options = options)
You have to sort the data by ID, Set, and Alt; that solved the error (the same one you got). The rows have to be sorted by respondent ID, then by set (question) number, and then by alternative within a given question.
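In other words, a short sketch based on the data built above (column 1 is the respondent ID, column 2 the choice set, column 3 the alternative): sort X before calling choicemodelr().
# Sort rows by ID, then set, then alternative, as choicemodelr() expects
X <- X[order(X[, 1], X[, 2], X[, 3]), ]
out <- choicemodelr(X, xcoding, mcmc = mcmc, options = options)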

How to simulate daily stock returns in R

I need to simulate a stock's daily returns. I am given that r = (P(t+1) - P(t))/P(t) is normally distributed with mean µ = 1% and sd σ = 5%. P(t) is the stock price at the end of day t. I need to simulate 100,000 instances of such daily returns.
Since I am a new R user, how do I set up t for this example? I am assuming P should be set up as:
P <- rnorm(100000, .01, .05)
r=(P(t+1)-P(t))/P(t)
You are getting it wrong: from what you wrote, the mean and the sd apply to the return, not to the price. I furthermore assume that the mean is quoted on an annual basis (a 1% rate of return from one day to the next is just... huge!) and that t moves along a range of 252 trading days per year.
With these hypotheses, you can get a series of daily returns in R with:
r = rnorm(100000, .01/252, .005)
Assuming the model you mentioned, you can get the series of prices P (containing 100001 elements; I will take P[1] = 100, change it to your own value if needed):
factor = 1 + r
temp = 100
P = c(100, sapply(1:100000, function(u){
  p = factor[u]*temp
  temp <<- p
  p
}))
The return parameters you mention (mean = 0.01 and sd = 0.05) will however lead to an exploding stock price (unrealistic model and parameters). Be careful to check that the cumulative product of the factors does not overflow to Inf.
Here is the result for the first 1000 values of P, representing 4 years:
plot(1:1000, P[1:1000])
One of the classical models (which does not mean this model is realistic) assumes that the observed log returns follow a normal distribution.
Hope this helps.
I see you already have an answer, and ColonelBeauvel might have more domain knowledge than I do (assuming this is business or finance homework). I approached it a bit differently and am going to post a commented transcript. His method uses the <<- operator, which is considered a somewhat suspect strategy in R, although I must admit it seems quite elegant in this application. I suspect my method will probably be a lot faster if you ever get into doing large-scale simulations.
Starting with your code:
P <- rnorm(100000, .01, .05)
# r=(P(t+1)-P(t))/P(t) definition, not R code
# inference: P_t+1 = r_t*P_t + P_t = P_t*(1+r_t)
# So, all future P's will be determined by P_1 and r_t
Since P_2 will be P_1*(1+r_1), P_3 will be P_1*(1+r_1)*(1+r_2), i.e. a continued product of the vector (1+r), for which there is a vectorized function.
P <- P_1*cumprod(1+r)
#Error: object 'P_1' not found
P_1 <- 100
P <- P_1*cumprod(1+r)
#Error: object 'r' not found
# So the random simulation should have been for `r`, not P
r <- rnorm(100000, .01, .05)
P <- P_1*cumprod(1+r)
plot(P)
#Error in plot.window(...) : infinite axis extents [GEPretty(-inf,inf,5)]
str(P)
This occurred because the cumulative product went above the limits of numerical capacity and got assigned to Inf (infinity). Let's be a little more careful:
r <- rnorm(300, .01, .05)
P <- P_1*cumprod(1+r)
plot(P)
The strategy below iteratively updates the price at time t as 'temp' and multiplies it by a single value. It's likely to be a lot slower.
r = rnorm(100000, .01/252, .005)
factor = 1 + r
temp = 100
P = c(100, sapply(1:300, function(u){
  p = factor[u]*temp
  temp <<- p
  p
}))
> system.time( {r <- rnorm(10000, .01/250, .05)
+ P <- P_1*cumprod(1+r)
+ })
user system elapsed
0.001 0.000 0.002
> system.time({r = rnorm(10000, .01/252, .05)
+ factor = 1 + r
+ temp = 100
+ P = c(100, sapply(1:300, function(u){
+ p = factor[u]*temp
+ temp<<-p
+ p
+ }))})
user system elapsed
0.079 0.004 0.101
To simulate the daily log returns of a stock, use the following method:
Consider working with 256 days of daily stock data.
Load the original data into R.
Create another data.frame holding the log returns.
Code:
logr <- data.frame(Date=gati$Date[1:255], Shareprice=gati$Adj.Close[1:255], LogReturn=log(gati$Adj.Close[1:255]/gati$Adj.Close[2:256]))
gati is the dataset; Date and Adj.Close are the variables. Notice the [] index ranges.
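As a side note, the same log returns can be computed with diff() on the log prices, which avoids hand-writing the index ranges (this assumes, as above, that gati$Adj.Close is ordered from most recent to oldest):
# log(P[t]/P[t+1]) for t = 1..255, identical to the LogReturn column above
logret <- -diff(log(gati$Adj.Close[1:256]))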
P <- rnorm(100000, .01, .05)
r=(P(t+1)-P(t))/P(t)
The second line translates directly into:
r <- (P[-1] - P[-length(P)]) / P[-length(P)] # (1:5)[-1] gives 2:5; (1:5)[-5] gives 1:4
Stock returns are not normally distributed for simple returns ("R"), given their -1 lower bound per compounded period. However, log returns ("r") generally are. The below is adapted from #42's post above. There don't seem to be any worked examples in #Rstats of simulating from a log mean ("expected return") and log stdev ("risk"), so I've included one here for those looking for a "Monte Carlo simulation using log expected return and log standard deviation", which are normally distributed and have no lower bound at -1. Note: to simulate a portfolio from this single example, you would need to loop thousands of times, i.e. stack 100k paths like the one below and average a single slice to calculate the portfolio's average expected return at a chosen forward month. The below should give a good basis for doing so.
startPrice = 100
forwardPeriods = 12*10 # 10 years * 12 months with Month-over-Month E[r]
factor = exp(rnorm(forwardPeriods, .04, .10)) # Monthly Expected Ln Return = .04 and Expected Monthly Risk = .1
temp = startPrice
P = c(startPrice, sapply(1:forwardPeriods, function(u){p = factor[u]*temp; temp <<- p; p}))
plot(P, type = "b", xlab = "Forward End of Month Prices", ylab = "Expected Price from Log E[r]", ylim = c(0,max(P)))
n <- length(P)
logRet <- log(P[-1]/P[-n])
# Notice, with many samples this nearly matches our initial log E[r] and stdev(r)
mean(logRet)
# [1] 0.04540838
sqrt(var(logRet))
# [1] 0.1055676
If tested with a negative log expected return, the price should not fall below zero, whereas the other examples will return negative prices when the expected return is negative. The code I've shared here can be tested to confirm that negative prices do not occur in the simulation; a quick check along those lines is sketched after the output below.
min(P)
# [1] 100
max(P)
# [1] 23252.67
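To illustrate that claim, a quick check with a negative monthly expected log return (same simulation scheme as above, only the mean changes):
# exp() is always positive, so the price can approach zero but never cross it
factor_neg <- exp(rnorm(forwardPeriods, -0.04, 0.10))
temp <- startPrice
P_neg <- c(startPrice, sapply(1:forwardPeriods, function(u){p = factor_neg[u]*temp; temp <<- p; p}))
min(P_neg) > 0  # always TRUE: a product of positive factors stays positive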
Horizontal axis is number of days, and vertical axis is price.
n_prices <- 1000
volatility <- 0.2
amplitude <- 10
chng <- amplitude * rnorm(n_prices, 0, volatility)
prices <- cumsum(chng)
plot(prices, type='l')

Optimizing weights for average across 3 variables

I am trying to create a machine-learning ensemble, and I have code that makes predictions based on occupation (Ocp), age (Age), and gender (Gender).
I want to average the 3 predictions into a final prediction, but I am not sure how I should optimize the weights to minimize the RMSE.
I know that Gender should dominate the data set.
Here is my attempt at the code:
temp <- NA; temp2 <- NA; temp3 <- NA
for (i in seq_len(11)) {
  for (j in seq_len(11)){
    temp2 = ((i-1)/10)*(((j-1)/10)*movie_pred2[,1]+((11-j)/10)*movie_pred2[,2]) +
      ((11-i)/10)*movie_pred[,3]
    temp2[temp2 > 5] = 5
    temp2[temp2 < 1] = 1
    temp[j] <- (sum((temp2 - tsind2[,2])^2)/length(tsind2[,2]))^.5
  }
  temp3[i,j] = temp[j]
}
I now get the following error and warning:
Error in temp3[i, j] = temp[j] : incorrect number of subscripts on matrix
In ((i - 1)/10) * (((j - 1)/10) * movie_pred2[, 1] + ((11 - ... :
longer object length is not a multiple of shorter object length
Your code begins:
> temp3<- NA
.. then some other stuff and ends
> temp3[i,j] = temp[j]
but it doesn't matter what dimensions or size your result temp has: you can't push dimensioned data into an object with NULL dimensions.
>dim(temp3)
NULL
You probably want something like:
>temp3=matrix(NA, i,j)
>temp3[,j] <- something
Now, firstly, I'm sorry I can't be more helpful or specific, but it's nearly impossible to interpret the rest of your code without an example of the input data. Secondly, unless this is a homework assignment or self-learning, I recommend you investigate the many R packages that will calculate the RMSE and/or do ensemble learning for you, e.g. the train function in caret.
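For what it's worth, here is a rough sketch of the same grid search with the result matrix pre-allocated as suggested above; it assumes movie_pred, movie_pred2, and tsind2 from the question have matching numbers of rows (the length warning suggests they currently may not):
# 11 x 11 grid of candidate weights; rmse_grid[i, j] holds the RMSE of the
# weighted average for the (i, j) combination used in the question's formula
rmse_grid <- matrix(NA, nrow = 11, ncol = 11)
for (i in seq_len(11)) {
  for (j in seq_len(11)) {
    pred <- ((i - 1)/10) * (((j - 1)/10) * movie_pred2[, 1] + ((11 - j)/10) * movie_pred2[, 2]) +
      ((11 - i)/10) * movie_pred[, 3]
    pred <- pmin(pmax(pred, 1), 5)  # clamp to the 1-5 rating scale, as in the question
    rmse_grid[i, j] <- sqrt(mean((pred - tsind2[, 2])^2))
  }
}
which(rmse_grid == min(rmse_grid), arr.ind = TRUE)  # best (i, j) weight combination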
