I have a time series of returns. I found the optimal order c(p, d, q) for an ARIMA(p, d, q) model (I called it returns.order), so that I finally got:
returns.arima <- arima(returns, order = returns.order)
This is not an lm object, so I don't get the following diagnostic plots when I call plot(returns.arima):
but instead I get just the inverse unit roots:
So my questions are the following:
Is there a way to convert my object into an lm-like one, so that I can produce this output easily?
If not, let's say I managed to get the residuals' Q-Q plot with the following:
qqnorm(residuals(returns.arima))
however, I didn't manage to find a way to label the outliers as in the first picture I posted.
Does somebody know how to do it?
What about the Residuals vs Leverage, Scale-Location and Cook's Distance plots?
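The closest I can imagine is labelling the extreme points by hand, roughly like the untested sketch below (labelling the 3 largest residuals is arbitrary), but I was hoping for something closer to what plot.lm produces automatically:
res <- residuals(returns.arima)
qq <- qqnorm(res, main = "Normal Q-Q")   # returns the theoretical (x) and sample (y) quantiles
qqline(res)
# pick out the 3 most extreme residuals and label them with their index
extreme <- order(abs(res), decreasing = TRUE)[1:3]
text(qq$x[extreme], qq$y[extreme], labels = extreme, pos = 4)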
I want to plot a graph for an exponential function, but I am completely lost as I have never previously used MATLAB or Scilab.
I have been researching a bit and now know how to plot linear functions, but I don't know how to plot the exponential function. I tried and kept getting errors such as "inconsistent row/column dimensions". The equation is y(t) = 2 - 2*t*exp(-t) - 2*exp(-t).
Here is how to do what you want on [0, 1]. The errors you get are likely due to a missing dot in the element-wise product operator (.*):
t = linspace(0, 1, 1000);               // 1000 evenly spaced points on [0, 1]
plot(t, 2 - 2*t.*exp(-t) - 2*exp(-t))   // .* gives the element-wise product of t and exp(-t)
With the 'survminer' package I have been able to construct adjusted curves using a Cox model, but this only shows me the survival function. When I try to pass "events" or "cumhaz" to the fun= option, it still gives me the same survival function. I found this link:
https://github.com/kassambara/survminer/issues/287
Does anyone have any suggestions?
I took the advice of Chung30916 in the comment chain and used the following code:
library(dplyr)
plotdata2 <- plotdata %>%
  mutate(cumhaz = 1 - surv)   # 1 - S(t), i.e. the cumulative incidence
to make a cumulative incidence curve, but, forgive my inexperience, how do I proceed from here? Do I just plot the graph in ggplot2 using the strata (2 groups in my case), with time on the x-axis and cumhaz on the y-axis?
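Something like this rough sketch is what I have in mind (assuming plotdata2 ends up with columns time, cumhaz and strata; those column names are my guess):
library(ggplot2)
# step curve of the cumulative incidence over time, one line per stratum
ggplot(plotdata2, aes(x = time, y = cumhaz, colour = strata)) +
  geom_step() +
  labs(x = "Time", y = "Cumulative incidence")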
Thanks
I'm having trouble plotting two abline() calls on a graph of log10 brain mass versus log10 body mass. I'm following someone else's script, but it just doesn't work for me. This is what I have:
This is the graph produced:
Why are the lines off like that? Are the values I'm getting for the intercept and slope incorrect, or am I using the wrong ones? I've done other examples of this and it's worked OK, but I've always ended up using the first model, never the second one, so I'm not sure if I'm using the right values.
If you want to represent a linear regression of the log of the brain mass against the log of the body mass, the code is:
model <- lm(log10(brain)~log10(body))
then
abline(model$coefficients[1], model$coefficients[2])
When you don't know which arguments to pass to a function, read its help page. For abline(), the first argument (a) is the intercept and the second one (b) is the slope.
Currently, your model uses log10(brain), log10(body) and class.
If you want to assess the quality of your model, look at the residuals.
plot(model)
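For instance, a small sketch that puts the four standard diagnostic plots on one page:
par(mfrow = c(2, 2))   # arrange the four diagnostic plots in a 2x2 grid
plot(model)
par(mfrow = c(1, 1))   # reset the plotting layout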
You can also just use the result of your lm like this:
model <- lm(log10(brain) ~ log10(body))
plot(log10(brain) ~ log10(body))
abline(model, col = 2)
Hi guys, my problem is possibly either a stats or a programming issue. I have two xts time series over mostly overlapping time periods, and I'm simply plotting a regression of their log differences:
logdiff <- merge.xts(diff(log(ts1)),diff(log(ts2)))
plot(logdiff[,1],logdiff[,2])
abline(lm(logdiff[,1]~logdiff[,2]),col=2)
which gives me this plot
So, just on an intuitive level, I would rather the regression line fit the wider range of data points, even if the result it's giving me is technically the correct one on a least-squares basis. Is there any built-in capability to do this "broader regression", or do I have to resort to manual fudging?
I think you are plotting y as a function of x, but regressing x as a function of y.
Try abline(lm(logdiff[,2]~logdiff[,1]),col=2) -- and yes, using column names instead of indices is a good idea.
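A sketch of that with named columns (the names ts1 and ts2 here are just placeholders for whatever your series are called):
library(xts)   # for coredata()
d <- setNames(as.data.frame(coredata(logdiff)), c("ts1", "ts2"))
fit <- lm(ts2 ~ ts1, data = d)   # regress y (column 2) on x (column 1), matching the plot
plot(ts2 ~ ts1, data = d)
abline(fit, col = 2)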
I want to ask some general questions about what is possible with regression in R.
For instance, I have data on two variables for 58 regions. I want to carry out the whole regression process (assumption checks, model fitting and diagnostics) for each region, but get the overall result with one command, that is, without a loop.
I already know that I can use the lmList function to do all the model fitting in one call. However, I do not know whether it is possible to get a normal Q-Q plot of the residuals for all 58 regressions in one go.
Does anyone have an idea whether this is feasible? If so, what kind of functions might I need?
Depends what you mean by "one command", and why you want to avoid loops. How about:
library(nlme)
L <- lmList(y ~ x | region, data = yourData)   # one lm fit per region
lapply(L, plot, which = 2)                     # which = 2 is the normal Q-Q plot
should work; however, it will spit out 58 plots in sequence. If you try to capture them all on a single page you'll probably get errors about too-small margins.
You have lots of other choices based on working on the list of regressions that lmList returns. For example,
library(plyr)
qqDat <- ldply(L, function(x) as.data.frame(qqnorm(residuals(x), plot.it = FALSE)))
# plot.it = FALSE computes the quantiles without drawing 58 separate plots
will give you a data frame containing the Q-Q plot information (expected and observed values) for each group in the data.
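From there, a sketch of how you could get all 58 Q-Q plots onto one page with ggplot2 (ldply should store the group labels in a column called .id; adjust the name if yours differs):
library(ggplot2)
# one small panel per region; x = theoretical quantiles, y = observed residuals
ggplot(qqDat, aes(x = x, y = y)) +
  geom_point(size = 0.5) +
  facet_wrap(~ .id)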