Actuarial survival analysis, divided into intervals - r

I'm trying to create an actuarial survival analysis in R (I'm following some worked examples). I think the best way to do this is using the survival package. So something like:
library(survival)
surv.test <- survfit(Surv(TIME, STATUS) ~ 1, data = test)
However, to get the correct answer I will need to divide the TIME variable into 365-day intervals, and I can't quite work out how to do this so that it matches the given result.
As far as I can make out, there is no option within the survfit function that will do this. I went through several documented examples, and none of them were trying to create a stairstep type of plot (there is a type='interval' option, but it seems to do something different). So I guess I need to regroup my data before I apply the survival function?
Any ideas?
P.S.: In SPSS this would be INTERVAL = THRU 10000 BY 365; in Stata, intervals(365) ... connect(stairsteps)

I am guessing that you want to divide the TIME variable into intervals because you want to plot a Kaplan-Meier curve. In R, that isn't necessary: you can just call plot on the survfit object. For example,
# Kaplan-Meier fit of the ovarian data, stratified by treatment
s <- survfit(Surv(futime, fustat) ~ rx, data = ovarian)
plot(s)
I think I understand your question a little better now. The reason you are getting a thick black line is that you have a lot of censoring, and a + is plotted at every single point where there is censoring; you can turn this off with mark.time=FALSE. (You can see the other options in ?survival:::plot.survfit.)
However, if you still want to aggregate by year, simply divide your follow-up time by 365 and round up; ceiling is used to round up. Here is an example of aggregating at different time scales, with the censoring marks suppressed.
par(mfrow = c(1, 3))
# The same fit aggregated by day, month, and year
plot(survfit(Surv(ceiling(futime), fustat) ~ rx, data = ovarian),
     col = c('blue', 'red'), main = 'Day', mark.time = FALSE)
plot(survfit(Surv(ceiling(futime / 30), fustat) ~ rx, data = ovarian),
     col = c('blue', 'red'), main = 'Month', mark.time = FALSE)
plot(survfit(Surv(ceiling(futime / 365), fustat) ~ rx, data = ovarian),
     col = c('blue', 'red'), main = 'Year', mark.time = FALSE)
par(mfrow = c(1, 1))
But I think that plotting the Kaplan-Meier curve without the censoring symbols will look very nice and provide more insight.

Hurray, I should be able to post the images now:
1) this is what the basic R survival plot looks like at the moment
2) and this is what it should look like (the SPSS example)

That was exactly what I was missing! Thanks!
Solution:
vas.surv <- survfit(Surv(ceiling(TIME / 365), STATUS) ~ 1, conf.type = "none", data = vasectomy)
plot(vas.surv, ylim = c(0.975, 1), mark.time = FALSE, xlab = "Years", ylab = "Cumulative Survival")
A nice touch would be to display the days on the x-axis instead of the years (as in the SPSS example), but I'm not too bothered about this.
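For the record, that touch is a one-liner in base graphics: suppress the default axis and relabel the year positions in days. This is a sketch assuming the vas.surv object above; the tick positions are illustrative.
plot(vas.surv, ylim = c(0.975, 1), mark.time = FALSE, xaxt = "n",
     xlab = "Days", ylab = "Cumulative Survival")
axis(1, at = 0:10, labels = (0:10) * 365)  # label the year ticks in days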

Related

Phase plot in R

I want to make a phase plot like this https://en.wikipedia.org/wiki/Phase_portrait from a non-linear time series in R. Any ideas?
Thank you
You haven't given many details, but I suggest you look at the phaseR package.
I use it to draw isoclines and a flow field for predator-prey models.
My graphs look like this (right one):
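To give a flavour, here is a minimal sketch of that kind of use, not the code behind my graphs. It assumes phaseR >= 0.3 (where the limit arguments are named xlim/ylim) and a hand-rolled Lotka-Volterra system.
library(phaseR)

# Derivative function in the deSolve-style form phaseR expects:
# function(t, y, parameters) returning list(c(dprey, dpredator))
predprey <- function(t, y, parameters) {
  alpha <- parameters[1]; beta  <- parameters[2]
  delta <- parameters[3]; gamma <- parameters[4]
  dx <- alpha * y[1] - beta * y[1] * y[2]   # prey: growth minus predation
  dy <- delta * y[1] * y[2] - gamma * y[2]  # predator: conversion minus death
  list(c(dx, dy))
}

pars <- c(2, 1, 0.5, 1)
flowField(predprey, xlim = c(0, 5), ylim = c(0, 5),
          parameters = pars, points = 15, add = FALSE)
nullclines(predprey, xlim = c(0, 5), ylim = c(0, 5), parameters = pars)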

R plotting strangeness with large dataset

I have a data frame with several million points in it - each having two values.
When I plot this like this:
plot(myData)
All the points are plotted, but the plot is quite busy, so I thought I'd plot it as a line:
plot(myData, type="l")
But while the x-axis doesn't change (i.e. it still runs from 0 to 7e+07), the actual plotting stops at about 3e+07, and I don't get a proper line plot either.
Is there a limitation on line plotting?
Update
If I use
plot(myData, type="h")
I get correct and usable output, but I still wonder why the type="l" option fails so badly.
Further update
I am plotting a time series - here is one output using type="h":
That's perfectly usable, but having a line would allow me to compare several outputs.
Graphical representation of high-dimensional data is a growing issue in data analysis. The problem, actually, is not creating the graph; the problem is making the graph capable of communicating information that we can turn into useful knowledge. Allow me to illustrate the point with a dataset of a million observations, which is not that big.
x <- rnorm(10^6, 0, 1)
y <- rnorm(10^6, 0, 1)
Let's plot it. R can easily manage such a problem. But can we? Probably not.
After all, what kind of information can we extract from a hard stain of ink? Probably no more than a tasseographer trying to divine the future in the patterns of tea leaves, coffee grounds, or wine sediments.
plot(x, y)
A different approach is represented by the smoothScatter function, which creates a density plot of bivariate data. Here we create two examples.
First, with defaults.
smoothScatter(x, y)
Second, with the bandwidth specified to be a little larger than the default, and with the five points in the lowest-density regions shown using a different symbol (pch = 3).
smoothScatter(x, y, bandwidth = c(5, 1) / (1 / 3), nrpoints = 5, pch = 3)
As you can see, the problem is not solved. Nevertheless, we get a better grasp of the distribution of our data. This kind of approach is still in development, and several related issues are still being discussed and refined. If this approach looks more suitable for representing your big dataset, I suggest you visit this blog, which discusses the issue thoroughly.
For what it's worth, all the evidence I have is that this computer, even though it was a lump of big iron, ran out of memory.
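If memory is indeed the limit, one hedged workaround is to thin the series before drawing the line. The sketch below assumes myData is a two-column data frame already ordered by its first column.
idx <- seq(1, nrow(myData), by = 100)  # keep every 100th row
plot(myData[idx, ], type = "l")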

Forest plot from cox object

Please be tolerant :) I am a dummy user of R, and I am using the code and sample data to learn how to make the forest plot shown in the previous post -
Optimal/efficient plotting of survival/regression analysis results
I was wondering: is it possible to set a user-defined x-axis scale with the code shown there? Up to now the x-axis scale is determined automatically.
Thank you for any tips.
I'm unimpressed with the precision of the documentation, since one might assume that the limits argument takes values on the relative-risk scale rather than on the log-transformed scale; one gets a ridiculous result if that is done. That quibble notwithstanding, it's relatively easy to use that parameter to create an expanded plot:
install.packages('devtools')  # then use it to get the current package
# execute the install and load of the package referenced at the top of that answer
print(forest_model(lung_cox, limits = log(c(0.5, 50))))
Trying for a lower limit of 0 on the relative-risk scale is not sensible, since it would imply a value of -Inf on the log-transformed scale. Trying a very low value, say log(0.001), confuses the pretty printing of the scale in my tests.
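For readers without the referenced post at hand, here is a hedged reconstruction of the kind of setup involved; the model formula is illustrative, not the original.
library(survival)
library(forestmodel)

# Illustrative Cox model on the survival package's lung data
lung_cox <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)

# limits is interpreted on the log scale, as discussed above
print(forest_model(lung_cox, limits = log(c(0.5, 50))))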

R - Adding series to multiple plots

I have the following plot:
plot.ts(returns)
I have another data frame ma_sd which contains the rolling SD from moving averages of the above returns. The df is structured exactly like returns. Is there a simple way to add each line to the corresponding panel?
lines(1:N, ma_sd) seemed intuitive, but it does not work.
Thanks
The only way I can see to do this is to plot them separately. This code is a bit clunky, but it gives you full flexibility to specify labels and axis ranges. You can build on this.
# Three stacked panels sharing one x-axis
par(mfrow = c(3, 1), oma = c(5, 4, 4, 2), mar = c(0, 0, 0, 0))
time <- as.data.frame(matrix(1:nrow(returns), nrow(returns), 3))
plot(time[, 1], returns[, 1], type = 'l', xaxt = 'n')
points(time[, 1], ma_sd[, 1], type = 'l', col = 'red')
plot(time[, 2], returns[, 2], type = 'l', xaxt = 'n')
points(time[, 2], ma_sd[, 2], type = 'l', col = 'red')
plot(time[, 3], returns[, 3], type = 'l')
points(time[, 3], ma_sd[, 3], type = 'l', col = 'red')
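The same idea reads more cleanly as a loop; this is a sketch under the same assumptions (returns and ma_sd are both N x 3).
par(mfrow = c(3, 1), oma = c(5, 4, 4, 2), mar = c(0, 0, 0, 0))
for (j in 1:3) {
  # suppress the x-axis on all but the bottom panel
  plot(returns[, j], type = 'l', xaxt = if (j < 3) 'n' else 's')
  lines(ma_sd[, j], col = 'red')
}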

R: update plot [xy]lims with new points() or lines() additions?

Background:
I'm running a Monte Carlo simulation to show that a particular process (a cumulative mean) does not converge over time, and often diverges wildly in simulation (the expectation of the random variable = infinity). I want to plot about 10 of these simulations on a line chart, where the x-axis has the iteration number and the y-axis has the cumulative mean up to that point.
Here's my problem:
I'll run the first simulation (each simulation having 10,000 iterations) and build the main plot based on its current range. But often one of the later simulations will have a range a few orders of magnitude larger than the first one, so the plot flies outside of the original range. So, is there any way to dynamically update the ylim or xlim of a plot upon adding a new set of points or lines?
I can think of two workarounds for this:
1. Store each simulation, then pick the one with the largest range and build the base graph off of that (not elegant, and I'd have to store a lot of data in memory, but it would probably be laptop-friendly). [EDIT: as Marek points out, this is not a memory-intense example, but if you know of a nice solution that would support far more iterations, such that memory becomes an issue (think high-dimensional walks that require much, much larger MC samples for convergence), then jump right in.]
2. Find a seed that appears to build a nice-looking version of it, and set the ylim manually, which would make the demonstration reproducible.
Naturally I'm holding out for something more elegant than my workarounds. Hoping this isn't too pedestrian a problem, since I imagine it's not uncommon with simulations in R. Any ideas?
I'm not sure if this is possible using base graphics; if someone has a solution, I'd love to see it. However, graphics systems based on grid (lattice and ggplot2) allow the graphics object to be saved and updated. It's insanely easy in ggplot2.
require(ggplot2)
make some data:
foo <- data.frame(data = rnorm(100), numb = seq_len(100))
make an initial ggplot object and plot it:
p <- ggplot(foo, aes(numb, data)) + geom_line()
p
make some more data and add it to the plot
foo <- data.frame(data = rnorm(200), numb = seq_len(200))
p <- p + geom_line(aes(numb, data), data = foo, colour = "red")  # colour outside aes() so the line is actually red
plot the new object
p
I think (1) is the best option. I actually don't think it's inelegant; I think it would be more computationally intensive to redraw the plot every time you hit a point greater than xlim or ylim.
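A minimal sketch of option (1) in base graphics, using the absolute value of Cauchy draws (whose mean is infinite) as a stand-in for the asker's process:
set.seed(1)
n.sim <- 10; n.iter <- 10000

# Each column is one simulation's running cumulative mean
sims <- replicate(n.sim, cumsum(abs(rcauchy(n.iter))) / seq_len(n.iter))

# Fix the axes from the global range, then add each path
plot(NULL, xlim = c(1, n.iter), ylim = range(sims),
     xlab = "Iteration", ylab = "Cumulative mean")
for (j in seq_len(n.sim)) lines(sims[, j], col = j)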
Also, I saw in Peter Hoff's book on Bayesian statistics a cool use of ts() instead of lines() for cumulative sums/means. It looks pretty spiffy.
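A hedged guess at what that idiom looks like (not taken from the book): wrapping the running mean in ts() makes plot() dispatch to plot.ts(), which draws a line with sensible axes by default.
x <- rnorm(1000)
plot(ts(cumsum(x) / seq_along(x)), ylab = "Cumulative mean")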
