survival package, right-censored data - R

I need to account for right-censored data in the analysis of my dataset. I am using the survival package; the data record cancer treatment tactics and when each patient last checked in with my client's clinic.
Is there a suggested method, or manipulation of the standard survival package, to account for right-censored data?
Our rows are unique individual patients...
Here are our columns that are filled out:
our treatment type (constant)
days since original diagnosis
'censored', the number of patients who were last heard from on that day. We are now uncertain whether they are still alive or dead, since they stopped attending the clinic; they should be removed from the probability estimate at all future time points.
the number of patients who died on that day (days counted from original diagnosis)
So, do you recommend a manipulation of the standard survival package, or another package entirely? I have seen survSNP, survPRESMOOTH and survBIVAR, which may perhaps help. I want to avoid recalculating individual columns/fields and creating new R objects, since this is a small part of a very large dataset.
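For right censoring specifically, the standard survival package needs no modification; the event indicator in Surv() handles it. Here is a minimal, hedged sketch that expands per-day counts into one row per patient and fits a Kaplan-Meier curve; the data frame agg and its columns (treatment, day, n_died, n_censored) are assumed names standing in for your real fields:

library(survival)

## `agg` holds one row per day: counts of deaths and censorings.
n_each <- agg$n_died + agg$n_censored
long <- data.frame(
  treatment = rep(agg$treatment, n_each),
  time      = rep(agg$day, n_each),
  event     = unlist(mapply(function(d, cns) c(rep(1, d), rep(0, cns)),
                            agg$n_died, agg$n_censored, SIMPLIFY = FALSE))
)

## event = 1 is a death, event = 0 a patient lost to follow-up;
## survfit() drops censored patients from the risk set after their
## last visit, which is exactly the behaviour you describe.
fit <- survfit(Surv(time, event) ~ treatment, data = long)
summary(fit)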

Related

Create a new dataframe to do piecewise linear regression on percentages after doing serial crosstabs in R

I am working with R. I need to identify the predictors of higher Active trial start percentage over time (StartDateMonthsYrs). I will do linear regression with Percent.Active as the dependent variable.
My original dataframe is attached, and my obtained Active trial start percentage over time (named Percent.Active) is presented here.
So, I need to assess whether federally sponsored, industry sponsored or other sponsored trials were associated with a higher active trial start percentage over time. I have many other variables that I need to assess, but this is a sample of my data.
I am thinking of doing many crosstabs for each variable (e.g. Federal & Active, then Industry & Active, etc.) in each month (maybe with the help of lapply), then accumulating the obtained percentages in the second sheet, and running the analysis based on that.
My code for linear regression is as follows:
q.lm0 <- lm(Percent.Active ~ Time.point + xyz, data = data.percentage)
summary(q.lm0)
I'm a little bit confused. You write 'associated'. If you really want to look for association, then yes, a crosstab might be possible, and sufficient, as association is not the same as causation (which is further derived from correlation, if there is a theory behind it). If you are looking for correlation, and insights over time, a plain regression with the lm function is not useful.
If you want a regression-type analysis, there are packages in R like the plm package, which can deal with panel data, and you clearly have panel data (time points, trial labels of interest, and repeated time points for those labels). Look at this post for information about the package: https://stackoverflow.com/questions/2804001/panel-data-with-binary-dependent-variable-in-r
I'm writing this because your Percent.Active variable is only a binary 0/1 outcome, and I'm not sure whether that is on purpose. However, even if your outcome is not binary, the plm package might help, and you will find other packages mentioned in that post.
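To make the suggestion concrete, here is a rough plm sketch; df, Trial, Month, Sponsor and Percent.Active are all assumed names, not taken from the question:

library(plm)

## Declare the panel structure: each trial observed over many months.
pdat <- pdata.frame(df, index = c("Trial", "Month"))

## A random-effects model keeps the time-invariant sponsor indicator;
## a fixed-effects ("within") model would drop it.
re <- plm(Percent.Active ~ Sponsor + as.numeric(Month),
          data = pdat, model = "random")
summary(re)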

Anomaly detection within a dataset in R

I would like to detect patterns within a weather dataset of around 10'000 data points. I have around 40 possible predictors (temperature, humidity etc.) which may explain good or bad weather the next day (dependent variable). Normally, I would apply classical machine learning methods like Random Forest to build and test models for classifying the whole dataset and find reliable predictors to forecast the next day's weather.
My task though is different. I want to find predictors and their parameters which "guarantee" me good or bad weather in a subset of the overall data. I am not interested in describing the whole dataset but finding the pattern of predictors (and their parameters) that give me good or bad weather indications. So I am trying to find, for example, 100 datapoints with 100% good weather if certain predictors are set to certain levels. I am not interested in the other 9'900 datapoints.
It is, in effect, the task of trying all combinations and calibrations of the predictors to find a subset of the overall data points that can be predicted with very high accuracy.
How would you do this systematically?
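One hedged way to do this systematically is to treat it as subgroup discovery rather than whole-dataset classification: grow a deep, unpruned classification tree and keep only the near-pure leaves; the split conditions on the path to each such leaf are the predictor settings you are after. All names below (weather, good_next_day, the class level "good") are assumptions:

library(rpart)

## Deep, unpruned tree: cp = 0 lets it split freely, minbucket keeps
## every leaf large enough to be a meaningful subset (~100 points).
tree <- rpart(good_next_day ~ ., data = weather, method = "class",
              control = rpart.control(cp = 0, minbucket = 100))

## Estimated probability of "good" in each observation's leaf.
leaf_prob <- predict(tree, type = "prob")[, "good"]

## Days falling in (near-)pure "good" leaves; print(tree) shows the
## split rules that define those leaves.
pure_good <- weather[leaf_prob >= 0.99, ]
nrow(pure_good)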

How to construct dataframe for time series data using ensemble learning methods

I am trying to predict the Bitcoin price at t+5, i.e. 5 minutes ahead, using 11 technical indicators up to time t, which can all be calculated from the open, high, low, close and volume values of the Bitcoin time series (see my full data set here). As far as I know, it is not necessary to manipulate the data frame when using algorithms like regression trees, support vector machines or artificial neural networks, but when using ensemble methods like random forests (RF) and boosting, I have heard that it is necessary to rearrange the data frame in some way, because ensemble methods draw repeated random samples from the training data, in which case the sequence of the Bitcoin time series would be ruined. So, is there a way to rearrange the data frame such that the time series remains in chronological order every time repeated samples are drawn from the training data?
I was provided with an explanation of how to construct the data frame here, and possibly here, too, but unfortunately I didn't really understand these explanations, because I didn't see a visual example of the to-be-constructed data frame and because I wasn't able to identify the relevant line of code. So, if someone could show me how to rearrange the data frame using an example data frame, I would be very thankful. As an example data frame, you might consider using the built-in airquality data frame in R (I think it contains time series data), the data I provided above, or any other data frame you think is best.
Many thanks!
There is no problem with resampling for ML algorithms. To capture (auto)correlation, just add columns with lagged values of the time series. E.g. in the case of a univariate time series x[t], where t is time in minutes, you add columns with the lagged values x[t - 1], x[t - 2], ..., x[t - n]. The more lags you add, the more history is accounted for at model training.
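A minimal sketch of that idea; btc and its close column are assumed names for your data frame:

## Each row carries its own recent history as lag columns, so random
## resampling by a random forest or boosting no longer breaks the
## chronology: the history travels with the row.
n_lags <- 5
for (k in seq_len(n_lags)) {
  btc[[paste0("close_lag", k)]] <- c(rep(NA, k), head(btc$close, -k))
}

## Target: the close 5 minutes ahead (t + 5).
btc$close_lead5 <- c(tail(btc$close, -5), rep(NA, 5))

btc <- na.omit(btc)  # drop edge rows whose lags/lead do not exist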
You can find a very basic working example here: Prediction using neural networks
More advanced stuff with Keras is here: Time series prediction using RNN
However, just for your information, a special message by Mr Chollet and Mr Allaire from the above-mentioned article:
NOTE: Markets and machine learning
Some readers are bound to want to take the techniques we’ve introduced
here and try them on the problem of forecasting the future price of
securities on the stock market (or currency exchange rates, and so
on). Markets have very different statistical characteristics than
natural phenomena such as weather patterns. Trying to use machine
learning to beat markets, when you only have access to publicly
available data, is a difficult endeavor, and you’re likely to waste
your time and resources with nothing to show for it.
Always remember that when it comes to markets, past performance is not
a good predictor of future returns – looking in the rear-view mirror
is a bad way to drive. Machine learning, on the other hand, is
applicable to datasets where the past is a good predictor of the
future.

Event duration prediction in R? Need tips for methods and packages

I have a general question of methodology.
I have to create a model to predict when, i.e. after how many days of therapy, a patient reaches a certain value. I have data on this value from tight laboratory controls, and also information on some influencing variables. But now I'm at a loss and I don't know how best to proceed, so that in the end there is something that can be used to predict, for new patients, when this threshold is reached, or, as a binary variable, when the value is no longer detectable.
The more I read about the topic, the more unsure I am about the optimal method.
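One common framing, offered as a hedged sketch rather than the definitive method: treat it as a time-to-event problem, where the "event" is the lab value crossing your threshold and patients who never cross it during follow-up are right-censored. All names (patients, days_on_therapy, reached_threshold, age, dose) are made up:

library(survival)

## Which variables are associated with reaching the threshold sooner?
fit <- coxph(Surv(days_on_therapy, reached_threshold) ~ age + dose,
             data = patients)
summary(fit)

## Predicted curve for a new patient: probability the threshold has
## not yet been reached by each day of therapy.
new_pat <- data.frame(age = 60, dose = 10)
plot(survfit(fit, newdata = new_pat),
     xlab = "days of therapy", ylab = "P(threshold not reached)")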

two-sided censored model in R (similar to Zelig's Tobit)?

Is there a model for dependent variables that are censored on both sides? And if so, is there an implementation in R? I am only aware of tobit models (e.g. in the Zelig package), but they're obviously only censored on the left side... I wonder if it even makes sense to truncate on both sides...
There's a difference between truncation and censoring, and you need to be aware of which is the case before you start modeling. In a nutshell: censoring means events can be detected, but the measurements are not known completely (i.e. in your case you know neither the exact beginning nor the exact end of the time interval during which subjects were at risk for the event you're considering). Truncation means events can be observed only if another condition is fulfilled: a popular example is survival in a retirement home that only accepts people over 65 to take up residence; entry into the study population is then truncated at age 65.
If you have both left- and right-censored data, or data that are simultaneously right- and left-censored, the technical term you are looking for is interval censored. ?Surv in the survival package will show you how to define interval-censored observations for modelling time-to-event in that case.
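A small illustration of that coding with made-up numbers; an NA marks the open side of an interval:

library(survival)

d <- data.frame(left  = c(2, NA, 5, 7, 1, 4),
                right = c(4,  3, NA, 7, 2, 6))

## type = "interval2": NA in `left` means left-censored, NA in `right`
## means right-censored, left == right means an exact event time.
fit <- survreg(Surv(left, right, type = "interval2") ~ 1,
               data = d, dist = "weibull")
summary(fit)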
In a very real sense most of the observational studies on "free-range human" populations are doubly censored... i.e. we do not observe the individuals over all of their lifespans. Here is a citation to a PhD thesis that seems to lay out the statistical terminology well. Furthermore, several of the packages in R will function properly when set up for interval censoring or left-censoring, including packages survival, NADA, sand (from their DOE website) and several others for which you can search at Baron's website with appropriate search strategies in this link that sets up that page to get both functions and r-help entries.
Edit: Adding comments to address the clarification that this is about truncation rather than censoring.
If one is looking to fit truncated distributions, then look at the gamlss package, or create a suitable density for a doubly-truncated distribution and use fitdistr in the MASS package.
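For the MASS route, a hedged sketch: hand fitdistr a hand-rolled density for a normal distribution truncated to [a, b] (the bounds, toy sample and starting values are all made up):

library(MASS)

a <- 0; b <- 10

## Truncated-normal density: renormalise by the mass inside [a, b].
dtnorm <- function(x, mean, sd) {
  dnorm(x, mean, sd) / (pnorm(b, mean, sd) - pnorm(a, mean, sd))
}

## Toy doubly-truncated sample.
set.seed(1)
x <- rnorm(5000, mean = 4, sd = 3)
x <- x[x >= a & x <= b]

fit <- fitdistr(x, dtnorm, start = list(mean = 4, sd = 3),
                method = "L-BFGS-B", lower = c(-Inf, 1e-3))
fit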
