R: Finding solutions for new x values with nlmrt

Good day,
I have tried to figure this out, but I really can't!! I'll supply an example of my data in R:
x <- c(36,71,106,142,175,210,246,288,357)
y <- c(19.6,20.9,19.8,21.2,17.6,23.6,20.4,18.9,17.2)
table <- data.frame(x,y)
library(nlmrt)
curve <- "y~ a + b*exp(-0.01*x) + (c*x)"
ones <- list(a=1, b=1, c=1)
Then I use wrapnls to fit the curve and to find a solution:
solve <- wrapnls(curve, data=table, start=ones, trace=FALSE)
This is all fine and works for me. Then, using the following, I obtain a prediction of y for each of the x values:
predict(solve)
But how do I find the prediction of y for new x values? For instance:
new_x <- c(10, 30, 50, 70)
I have tried:
predict(solve, new_x)
predict(solve, 10)
It just gives the same output as:
predict(solve)
I really hope someone can help! I know that if I took the fitted values of a, b, and c from 'solve' and substituted them, together with the desired x value, into the curve formula, I would be able to do this by hand, but I'm wondering if there is a simpler option, ideally one that doesn't require plotting the data first.

predict() requires the new data to be a data.frame with column names that match the variable names used in your model (whether your model has one or many variables). All you need to do is use
predict(solve, data.frame(x=new_x))
# [1] 18.30066 19.21600 19.88409 20.34973
And that will give you predictions for just those 4 values. It's somewhat unfortunate that any mistake in specifying the new data results in the fitted values for the original model being returned; an error message probably would have been more useful, but oh well.
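For comparison, the manual route mentioned in the question also works and needs no plotting: pull the fitted parameters out with coef() and evaluate the model formula yourself. A minimal sketch, assuming solve, new_x, and the model from above:
# coef() works here because wrapnls returns an nls-class fit
p <- coef(solve)
p[["a"]] + p[["b"]]*exp(-0.01*new_x) + p[["c"]]*new_x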

Related

How to create a loop for Regression

I just started using R for statistical purposes and I appreciate any kind of help.
My task is to make calculations on one index and 20 stocks from the index. The data contains 22 columns (DATE, INDEX, S1 .... S20) and about 4000 rows (one row per day).
First I imported the .csv file, called it "dataset", and calculated log returns this way, repeating it for all stocks S1-S20 plus the INDEX:
n <- nrow(dataset)
S1 <- dataset$S1
S1_logret <- log(S1[2:n])-log(S1[1:(n-1)])
Secondly, I stored the data in a data.frame:
logret_data <- data.frame(INDEX_logret, S1_logret, S2_logret, S3_logret, S4_logret, S5_logret, S6_logret, S7_logret, S8_logret, S9_logret, S10_logret, S11_logret, S12_logret, S13_logret, S14_logret, S15_logret, S16_logret, S17_logret, S18_logret, S19_logret, S20_logret)
Then I ran the regression (S1 to S20) using the log returns:
S1_Reg1 <- lm(S1_logret~INDEX_logret)
I couldn't figure out how to write the code in a more efficient way and use some function for repetition.
In a further step I have to run a cross-sectional regression for each day in a selected interval. It is impossible to do manually, and R should provide some quick solution. I am quite unsure about how to do this part, but I would also like to use some kind of loop for the previous calculations.
Yet I lack the necessary R coding knowledge. Any kind of help to the point, or advice on literature or tutorials, is highly appreciated! Thank you!
You could provide all the separate dependent variables in a matrix to run your regressions. Something like this:
#example data
Y1 <- rnorm(100)
Y2 <- rnorm(100)
X <- rnorm(100)
df <- data.frame(Y1, Y2, X)
#run all models at once
lm(as.matrix(df[c('Y1', 'Y2')]) ~ X)
Output:
Call:
lm(formula = as.matrix(df[c("Y1", "Y2")]) ~ X)

Coefficients:
             Y1        Y2
(Intercept)  -0.15490  -0.08384
X            -0.15026  -0.02471
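If you would rather keep your original layout and use an explicit loop, as the question asks, here is one possible sketch (assuming dataset has the columns DATE, INDEX, S1 ... S20 described above):
# Log returns for every column except DATE; diff(log(p)) is the same as
# log(p[2:n]) - log(p[1:(n-1)])
logret <- as.data.frame(lapply(dataset[-1], function(p) diff(log(p))))
# One regression per stock: each stock's log returns on the index's
fits <- lapply(paste0("S", 1:20), function(s) lm(reformulate("INDEX", s), data=logret))
names(fits) <- paste0("S", 1:20)
lapply(fits, summary)  # inspect all 20 results at once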

Dynamic linear regression loop for different order summation

I've been trying hard to recreate this model in R: the lag model from Farhani (2012), which was given as an equation image in the original post.
I've tried many things, such as building the formula with cumsum and paste; that would not work, as I could not assign the strings to the correct variables because R kept treating L as a function. I then tried to do it manually (I'm only looking for p, q = 1, 2, 3, 4, 5), but after starting I realized how inefficient that is.
This is essentially what I am trying to do
model5 <- vector("list",20)
#p=1-5, q=0
model5[[1]] <- dynlm(DLUSGDP~L(DLUSGDP,1))
model5[[2]] <- dynlm(DLUSGDP~L(DLUSGDP,1)+L(DLUSGDP,2))
model5[[3]] <- dynlm(DLUSGDP~L(DLUSGDP,1)+L(DLUSGDP,2)+L(DLUSGDP,3))
model5[[4]] <- dynlm(DLUSGDP~L(DLUSGDP,1)+L(DLUSGDP,2)+L(DLUSGDP,3)+L(DLUSGDP,4))
model5[[5]] <- dynlm(DLUSGDP~L(DLUSGDP,1)+L(DLUSGDP,2)+L(DLUSGDP,3)+L(DLUSGDP,4)+L(DLUSGDP,5))
I'm also trying to do this for regressing DLUSGDP on DLWTI (my oil variable) for p=0, q=1-5 and also for p=1-5, q=1-5.
cumsum would not work, as it would sum the variables rather than treating them as independent regressors.
My goal is to run these models and then use IC to determine which should be analyzed further.
I hope you understand my problem and any help would be greatly appreciated.
I think this is what you are looking for:
reformulate(paste0("L(DLUSGDP,", 1:n,")"), "DLUSGDP")
where n is some order you want to try. For example,
n <- 3
reformulate(paste0("L(DLUSGDP,", 1:n,")"), "DLUSGDP")
# DLUSGDP ~ L(DLUSGDP, 1) + L(DLUSGDP, 2) + L(DLUSGDP, 3)
Then you can construct your model fitting by
model5 <- vector("list", 20)
for (i in 1:20) {
  form <- reformulate(paste0("L(DLUSGDP,", 1:i, ")"), "DLUSGDP")
  model5[[i]] <- dynlm(form)
}
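The same construction extends to the cases with the oil variable: put lags of DLWTI on the right-hand side too. A sketch for one (p, q) pair, assuming both series exist as the zoo/ts objects dynlm expects:
p <- 2; q <- 3  # example orders; loop over 1:5 for both as needed
rhs <- c(paste0("L(DLUSGDP,", 1:p, ")"), paste0("L(DLWTI,", 1:q, ")"))
reformulate(rhs, "DLUSGDP")
# DLUSGDP ~ L(DLUSGDP, 1) + L(DLUSGDP, 2) + L(DLWTI, 1) + L(DLWTI, 2) + L(DLWTI, 3)
# (for p = 0, drop the DLUSGDP lag terms from rhs; likewise for q = 0)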

How to find a percentile that maximizes the correlation coefficient between two vectors?

Suppose I have two continuous vectors like:
set.seed(123)
df <- data.frame(x = rnorm(100),
                 y = rnorm(100, 3, 5))
with(df, cor(x,y))
My question is how to find the percentile of x that maximizes the absolute correlation between x and y after subsetting, as in:
perc <- quantile(df$x, 0.3)
df1 <- subset(df, x > perc)
with(df1, cor(x,y))
Namely, how do I find perc?
This problem is ill defined. Take your example data set and the function you want to find the maximum of (copied from #coffeinjunky):
set.seed(123)
df <- data.frame(x = rnorm(100),
y = rnorm(100,3,5))
findperc <- function(prop, dat) {
perc <- quantile(dat$x, prop)
with(subset(dat, dat$x > perc), abs(cor(x,y)))
}
Now plot the result of findperc for percentiles between 0 and 1.
x <- seq(0,1,0.01)
plot(x,sapply(x,findperc,df),type="l")
The circled point in the original plot marks the solution found by optimize, as in #coffeinjunky's answer. This is clearly only a local maximum. The warning from #Thierry, "You need to rethink the question. As soon as x and y contain only 2 elements the correlation will be either 1 or -1", is clearly applicable on the right-hand side of the plot.
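For reference, a sketch of how to mark that point yourself, assuming the curve from the plot call above is still on the device:
# Overlay the optimize solution from the other answer on the curve
opt <- optimize(findperc, lower=0, upper=1, maximum=TRUE, dat=df)
points(opt$maximum, opt$objective, col="red", cex=2)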
In general, the fact that you are getting moderate to high correlations when starting from independently generated random variables should warn you that your results are spurious and your method suspect.
Well, why not take your question literally, and just search for it? For instance, try:
findperc <- function(prop, dat) {
  perc <- quantile(dat$x, prop)
  with(subset(dat, dat$x > perc), abs(cor(x, y)))
}
optimize(findperc, lower=0, upper=1, maximum=T, dat=df)
This defines a function that computes the absolute correlation between your vectors after subsetting at the corresponding percentile (which here is a single value), just as in your example code. I then feed this function to a one-dimensional optimizer, which searches for the input that produces the maximum output.
Edit: thanks to #A. Webb's answer I learned that optimize performs a local search (golden-section search with successive parabolic interpolation) rather than a grid search. I had thought that this was the main difference between optim and optimize, a clearly wrong assumption I should have checked myself. However, to provide a grid-search solution that gets you closer to the global maximum, one could use the following:
x <- seq(0,0.97,0.01)
x[which.max(sapply(x, findperc, dat=df))]
Note that I have cut x off at 97% here. This ensures that at least 3 observations remain in the subset (given a sample size of 100).
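If you want more precision than the grid spacing allows, one option is to hand the grid winner back to optimize as a narrow bracket; a sketch, reusing findperc and df from above:
grid <- seq(0, 0.97, 0.01)
best <- grid[which.max(sapply(grid, findperc, dat=df))]
# Polish locally within one grid step of the best grid point
optimize(findperc, lower=max(0, best-0.01), upper=min(0.97, best+0.01),
         maximum=TRUE, dat=df)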

How to predict using a locally smoothed mean?

(Statistics beginner here).
I have some training data (x,y), and wish to make prediction for new data x_new.
Now let's assume I have the data for the plot below, but I do not know how y is computed. I would therefore like to use the data I have to calculate, for any given x, the local mean of the y data, as this seems like the best guess I can make.
install.packages("gplots")
library("gplots")
x <- abs(rnorm(500))
y <- rnorm(500, mean=2*x, sd=2+2*x)
bandplot(x,y)
Is there an R function to predict y for a given x using the locally smoothed mean (shown here in red thanks to the function bandplot), or something similar?
wapply from gplots returns the locally smoothed mean as a table for x and y.
x <- 1:1000
y <- rnorm(1000, mean=1, sd=1 + x/1000 )
wapply(x,y,mean)
To predict, one would need, I guess, to find the closest x in the table returned by wapply and then read off the corresponding local mean of y.
For a value a, the closest x will be given by the index:
index = which(abs(wapply(x,y,mean)$x-a)==min(abs(wapply(x,y,mean)$x-a)))
then the prediction should be:
pred = wapply(x,y,mean)$y[index]
Or, written as a function that computes the wapply table only once:
locally_smoothed_mean_prediction <- function(a) {
  tab <- wapply(x, y, mean)
  tab$y[which(abs(tab$x - a) == min(abs(tab$x - a)))]
}
> locally_smoothed_mean_prediction(600)
[1] 1.055642
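A possible refinement: precompute the wapply table and interpolate with approx() instead of snapping to the nearest tabulated x. A sketch, using the x and y from above:
tab <- wapply(x, y, mean)          # smoothed table, computed once
pred <- function(a) approx(tab$x, tab$y, xout=a)$y  # linear interpolation
pred(600)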

Create function to automatically create plots from summary(fit <- lm( y ~ x1 + x2 +... xn))

I am running the same regression several times with small alterations of the x variables. After determining the fit and significance of each variable in this linear regression model, my aim is to view all the major plots. Instead of having to create each plot one by one, I want a function that loops through my variables (x1 ... xn) from the following fit.
fit <- lm(y ~ x1 + x2 + ... + xn)
The plots I want to create for all x are:
1) x versus y, for all x in the formula above
2) x versus predicted y
3) x versus residuals
4) x versus time, where time is not a variable used in the regression but is provided in the data frame the data comes from.
I know how to access the coefficients from fit; however, I am not able to take the coefficient names from the summary and reuse them in a function for creating the plots, as the names are character strings.
I hope my question has been clearly described and hasn't been asked already.
Thanks!
Create some mock data
dat <- data.frame(x1=rnorm(100), x2=rnorm(100,4,5), x3=rnorm(100,8,27),
                  x4=rnorm(100,-6,0.1), t=(1:100)+runif(100,-2,2))
dat <- transform(dat, y=x1+4*x2+3.6*x3+4.7*x4+rnorm(100,3,50))
Make the fit
fit <- lm(y~x1+x2+x3+x4, data=dat)
Compute the predicted values
dat$yhat <- predict(fit)
Compute the residuals
dat$resid <- residuals(fit)
Get a vector of the variable names
vars <- names(coef(fit))[-1]
A plot can be made using this character representation of the name if you use it to build a string version of a formula and translate that with as.formula. The four plots are below, and they are wrapped in a loop over all the vars. Additionally, this is surrounded by setting ask to TRUE so that you get a chance to see each plot. Alternatively, you can arrange multiple plots on the screen, or write them all to files to review later.
opar <- par(ask=TRUE)
for (v in vars) {
  plot(as.formula(paste("y~",v)), data=dat)
  plot(as.formula(paste("yhat~",v)), data=dat)
  plot(as.formula(paste("resid~",v)), data=dat)
  plot(as.formula(paste("t~",v)), data=dat)
}
par(opar)
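If you want this wrapped up as the single function the question asks for, here is one possible sketch (the name plot_fit is made up; it assumes a fitted lm and a data frame holding the time column, as in the mock data above):
plot_fit <- function(fit, dat, timevar="t") {
  yname <- all.vars(formula(fit))[1]  # response variable name
  dat$yhat <- predict(fit)
  dat$resid <- residuals(fit)
  opar <- par(ask=TRUE)
  on.exit(par(opar))
  for (v in names(coef(fit))[-1]) {
    for (lhs in c(yname, "yhat", "resid", timevar))
      plot(as.formula(paste(lhs, "~", v)), data=dat)
  }
}
plot_fit(fit, dat)  # using fit and dat from above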
The coefficients are stored in the fit objects as you say, but you can access them generically in a function by referring to them this way:
x <- 1:10
y <- x*3 + rnorm(10)  # one noise term per observation
plot(x,y)
fit <- lm(y~x)
fit$coefficient[1] # intercept ($ partial matching resolves this to 'coefficients')
fit$coefficient[2] # slope
str(fit) # a lot of info, but you can see how the fit is stored
My guess is that when you say you know how to access the coefficients, you are getting them from summary(fit), which is a bit harder to work with than taking them directly from the fit. By using fit$coefficient[1] etc. you don't need to hard-code the name of the variable in your function.
Three options to directly answer what I think was the question: How to access the coefficients using character arguments:
x <- 1:10
y <- x*3 + rnorm(10)
fit <- lm(y~x)
# 1
fit$coefficient["x"]
# 2
coefname <- "x"
fit$coefficient[coefname]
# 3
coef(fit)[coefname]
If the question was instead how to plot the various functions, then you should supply a sufficiently complete construction (in R), with a well-specified set of objects, to allow the methods to be demonstrated.
