R 'longitudinal' package - exporting shrinkage objects

I am working with multivariate longitudinal data, and was hoping to use the 'longitudinal' package to look at dynamical correlation between variables. I have been able to run the code fine until the end:
[example using stored data from the package]:
library("longitudinal")
data(tcell)
dynpc <- dyn.pcor(tcell.34, lambda=0)
class(dynpc)
(the object is of class "shrinkage")
So here is my problem: how do I export this data to a .txt or .csv file? The function gives me a matrix of correlations, along with other information.
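A "shrinkage" object is essentially a numeric matrix (here, the dynamical partial correlations) with a class attribute and some extra attributes attached, so one way is to drop those and write out the plain matrix. A minimal sketch; the attribute name at the end is an assumption about how the package stores the shrinkage information:

library(longitudinal)
data(tcell)
dynpc <- dyn.pcor(tcell.34, lambda = 0)

## drop the "shrinkage" class, leaving a plain numeric matrix
pc_matrix <- unclass(dynpc)

## write it out as .csv or tab-delimited .txt
write.csv(pc_matrix, file = "dynpc.csv")
write.table(pc_matrix, file = "dynpc.txt", sep = "\t", quote = FALSE)

## the extra information lives in the attributes, e.g.
attr(dynpc, "lambda")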

Related

How to reinterpolate data to the same length with the 'dtwclust' package

I'm trying to do a hierarchical cluster dendrogram of time series in R using the dtwclust package. For my test dataset I have 7 columns with unequal lengths. The dtwclust package offers a way to equalize the lengths using its reinterpolate function. However, I get this error when I try to use it with the following code:
data <- reinterpolate(my_data, new.length = max(lengths(my_data)))
Error in check_consistency(x, "ts") : There are missing values in the series.
I do not know if this means that the data table is not organized properly. Can anyone suggest how to read in the data (I just imported it using RStudio), or any other way to resolve this issue? I tried the analysis using the data provided in the package and it worked as advertised.
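The error says the series contain NA values: dtwclust expects a list of series (or a matrix) with no missing values, and reading unequal-length columns into a data frame typically pads the shorter ones with NA. One possible fix is to strip that padding from each column first; a sketch, assuming my_data is such an NA-padded data frame:

library(dtwclust)

## turn the padded data frame into a list of series, dropping the NA padding
series_list <- lapply(my_data, function(col) col[!is.na(col)])

## with no NAs left, reinterpolate can stretch every series to a common length
data <- reinterpolate(series_list, new.length = max(lengths(series_list)))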

Store regression models in dataframe

I run a large number of regression analyses using ols and cph (different models, sensitivity analyses, etc.), which takes around two hours on my computer. Therefore, I would like to save these models so that I don't have to re-run the same analyses every time I want to work with them. The models all have very structured names, so I can create a list of names as follows:
model.names <- ls()[grep("^im", ls())]   # names of all objects starting with "im"
But how can I use this to save those models? Could they be placed into a data frame?
I think you are looking for save()
save writes an external representation of R objects to the specified file. The objects can be read back from the file at a later date by using the function load or attach (or data in some cases).
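A sketch of that approach, assuming the fitted models are workspace objects whose names start with "im" as in the question:

## collect the model names as above, then write every model to one file ...
model.names <- ls()[grep("^im", ls())]
save(list = model.names, file = "models.RData")

## ... and restore them in a later session without refitting
load("models.RData")

If you would rather have a single object than a data frame, mget(model.names) returns the models as a named list, which can be stored with saveRDS() and read back with readRDS().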

How do I organize my data for time series cluster analysis using the dtwclust package?

I'm trying to do a hierarchical cluster dendrogram of time series in R using the dtwclust package. For my test dataset I have 4 columns with unequal lengths. The dtwclust package offers a way to equalize the lengths using its reinterpolate function. However, I get this error when I try to use it with the following code:
data <- reinterpolate(fshdtw, new.length = max(lengths(fshdtw)))
Error in check_consistency(x, "ts") : There are missing values in the series.
This makes me think (I could be wrong) that the data table is not organized properly. Can anyone suggest how to read in the data (I just imported it using RStudio), or any other way to resolve this issue? PS: I tried the analysis using the data provided in the package and it worked as advertised. Thanks in advance!
The data file is here https://www.dropbox.com/s/dih39ji0zop9xa0/fshdtw.txt?dl=0
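Once the series are NA-free (see the sketch under the previous related question), the reinterpolation and a hierarchical DTW clustering could look roughly like this; the file layout, the number of clusters, and the linkage method here are assumptions for illustration:

library(dtwclust)

## read the linked file; each column is one series, padded with NA (assumed layout)
fshdtw <- read.table("fshdtw.txt", header = TRUE)
series <- lapply(fshdtw, function(col) col[!is.na(col)])

## stretch all series to a common length, then cluster hierarchically with DTW
series <- reinterpolate(series, new.length = max(lengths(series)))
hc <- tsclust(series, type = "hierarchical", k = 3,
              distance = "dtw_basic",
              control = hierarchical_control(method = "average"))
plot(hc)   # dendrogram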

Imputation package "mi" output

I am using the "mi" package for imputation of missing values. I have run the following code:
'mi' package code
library(mi)
imp_rd<-mi(rd1) ## rd1 is my data file containing 7 variables.
summary(imp_rd)
hist(imp_rd)
Now I want to save the output of imp_rd (which holds my imputed data) as a .csv file. Can anyone help me with this?
If you want to export the imputed data sets generated by the model that mi estimated, a good way to do it is the mi2stata function, which lets you export to either .dta or .csv format.
But remember not to think in terms of exporting "one" imputed data set. The whole point of multiple imputation is that you get several different imputed data sets, which allow you to account for the uncertainty induced by the missing data you originally had.
So be sure to specify how many imputed data sets you want to export and the path where you want to save the imputed data. In the following example I chose to generate 10 imputed data sets.
library(mi)
imp_rd <- mi(rd1)
mi2stata(imp_rd, m = 10, "pathtofile/imp_rd.csv")
Hope you find this useful.
If your output object is a data frame you can use:
write.csv(imp_rd, file = "imp_rd.csv")
This should save the file in .csv format in your working directory (write.csv always uses a comma separator, so sep does not need to be specified).
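If what you actually want is the completed data itself as data frames, newer versions of the mi package also provide a complete() method; a hedged sketch (argument names and the return type may differ between mi versions):

library(mi)
imp_rd <- mi(rd1)

## complete() returns the imputed ("completed") data sets; with m > 1 it is a list
completed <- complete(imp_rd, m = 10)

## write each completed data set to its own .csv file
for (i in seq_along(completed)) {
  write.csv(completed[[i]], file = paste0("imp_rd_", i, ".csv"), row.names = FALSE)
}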

Export kmeans clustering results to .csv

I've done a k-means clustering on my data, imported from .csv. Is there any way to export the clustering results back to a .csv file? Because after the k-means clustering is done, the class of the result is not a data frame but kmeans.
In most R package help files there is a subheading called "Value" that describes the output of the function. I have not used kmeans recently, but I believe you want something like this:
kmeansresults <- kmeans(dataframe, centers = 3)   # centers = the number of clusters you want
x <- kmeansresults$cluster                        # the component is $cluster (not $clusters)
write.csv(x, file="name_of_file.csv")
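If the goal is to keep the original rows together with their assigned cluster, one way (assuming dataframe is the data that was clustered) is to bind the cluster vector back onto it before exporting:

## attach each row's cluster assignment to the original data and export
clustered <- cbind(dataframe, cluster = kmeansresults$cluster)
write.csv(clustered, file = "clustered_data.csv", row.names = FALSE)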
