I'm doing some cluster analysis on the MLTobs from the LifeTables package and have come across a tricky problem with the Year variable in the mlt.mx.info data frame. Year contains the period that each life table covers, in intervals. Here's a table of the data:
1751-1754 1755-1759 1760-1764 1765-1769 1770-1774 1775-1779 1780-1784 1785-1789 1790-1794
1 1 1 1 1 1 1 1 1
1795-1799 1800-1804 1805-1809 1810-1814 1815-1819 1816-1819 1820-1824 1825-1829 1830-1834
1 1 1 1 1 2 3 3 3
1835-1839 1838-1839 1840-1844 1841-1844 1845-1849 1846-1849 1850-1854 1855-1859 1860-1864
4 1 5 3 8 1 10 11 11
1865-1869 1870-1874 1872-1874 1875-1879 1876-1879 1878-1879 1880-1884 1885-1889 1890-1894
11 11 1 12 2 1 15 15 15
1895-1899 1900-1904 1905-1909 1908-1909 1910-1914 1915-1919 1920-1924 1921-1924 1922-1924
15 15 15 1 16 16 16 2 1
1925-1929 1930-1934 1933-1934 1935-1939 1937-1939 1940-1944 1945-1949 1947-1949 1948-1949
19 19 1 20 1 22 22 3 1
1950-1954 1955-1959 1956-1959 1958-1959 1960-1964 1965-1969 1970-1974 1975-1979 1980-1984
30 30 2 1 40 40 41 41 41
1983-1984 1985-1989 1990-1994 1991-1994 1992-1994 1995-1999 2000-2003 2000-2004 2005-2006
1 42 42 1 1 44 3 41 22
2005-2007
14
As you can see, some of the intervals sit within other intervals. Thankfully none of them overlap. I want to simplify the intervals so that intervals such as 1992-1994 and 1991-1994 all go into 1990-1994.
An idea might be to take the modulo of each interval's years and sort them into their new intervals that way, but I'm unsure how to do this with the interval data type. If anyone has any ideas I'd really appreciate the help. Ultimately I want to create a histogram or barplot to illustrate this nicely.
If I understand your problem, you'll want something like this:
bottom <- seq(1750, 2010, 5)  # lower bounds of the 5-year bins

library(dplyr)

new_df <- mlt.mx.info %>%
  arrange(Year) %>%
  mutate(year2 = as.numeric(substr(Year, 6, 9))) %>%  # end year of each interval
  mutate(new_year = paste0(bottom[findInterval(year2, bottom)], "-",
                           bottom[findInterval(year2, bottom) + 1] - 1))

View(new_df)
What this does is create bins and output a new column (new_year) labelled by the bottom of each bin. So everything from 1750-1754 will correspond to a new value of 1750-1754 (in string form; the original is an integer type, and I'm not sure how to fix that). Does this do what you want? Double-check the results, but it looks right to me.
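If the eventual goal is a barplot of how many life tables fall into each simplified interval, here is a minimal sketch (assuming the new_df and new_year names produced by the snippet above):
# Count life tables per simplified 5-year interval; las = 2 rotates the labels
barplot(table(new_df$new_year), las = 2, cex.names = 0.7)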
Is it possible to split episodes by a given variable in survival analysis in R, similar to Stata's stsplit used in the following way: stsplit var, at(0) after(time=time)?
I am aware that the survival package allows one to split episodes by given cut points such as c(0,5,10,15) in survSplit, but if a variable, say time of divorce, differs for each individual, then providing cut points for each individual would be impossible, and the split would have to be based on the value of a variable (say graduation, or divorce, or job termination).
Is anyone aware of a package, or does anyone know of a resource I might be able to tap into?
Perhaps the Epi package is what you are looking for. It offers multiple ways to cut/split the follow-up time using Lexis objects. Here is the documentation of cutLexis().
After some poking around, I think tmerge() in the survival package can achieve what stsplit var can do, which is to split episodes not just by given cut points (the same for all observations) but by when an event occurs for an individual.
This is the only way I knew how to split data:
id<-c(1,2,3)
age<-c(19,20,29)
job<-c(1,1,0)
time<-age-16 ## create time since age 16 ##
data<-data.frame(id,age,job,time)
id age job time
1 1 19 1 3
2 2 20 1 4
3 3 29 0 13
## simple split by time ##
## 0 to up 2 years, 2-5 years, 5+ years ##
data2 <- survSplit(data, cut=c(0,2,5), end="time", start="start",
                   event="job")
id age start time job
1 1 19 0 2 0
2 1 19 2 3 1
3 2 20 0 2 0
4 2 20 2 4 1
5 3 29 0 2 0
6 3 29 2 5 0
7 3 29 5 13 0
However, if I want to split by a certain variable, such as when each individual finished school, each person might have a different cut point (they finished school at different ages).
## split by time dependent variable (age finished school) ##
d1<-data.frame(id,age,time,job)
scend<-c(17,21,24)-16
d2<-data.frame(id,scend)
## create start/stop time ##
base<-tmerge(d1,d1,id=id,tstop=time)
## create time-dependent covariate ##
s1 <- tmerge(base, d2, id=id,
             finish=tdc(scend))
id age time job tstart tstop finish
1 1 19 3 1 0 1 0
2 1 19 3 1 1 3 1
3 2 20 4 1 0 4 0
4 3 29 13 0 0 8 0
5 3 29 13 0 8 13 1
I think tmerge() is more or less comparable to the stsplit function in Stata.
I've got a mess of data and am trying to wrangle it into shape efficiently. Here's a simplified short sample of the general format of my data.frame right now. The main difference is that I have a few more data labels like Label1 for my sampling units; each has a set of data similar to the data.frame I'm including, but in my situation they are all in the same data.frame. I don't think that will complicate the reformatting, so I've included just a single sampling unit of mock data here. The StatsType levels Ave, Max, and Min are effectively nested within MeasureType.
tastycheez <- data.frame(
  Day=rep((1:3),9),
  StatsType=rep(c(rep("Ave",3),rep("Max",3),rep("Min",3)),3),
  MeasureType=rep(c("Temp","H2O","Tastiness"),each=9),
  Data_values=1:27,
  Label1=rep("SamplingU1",27))
Ultimately, I would like a data frame where for each sampling unit and each Day there are columns holding the Data_values for my categories, like this:
Day Label1 Ave.Temp Ave.H2O Ave.Tastiness Max.Temp ...
1 SamplingU1 1 10 19 4 ...
2 SamplingU1 2 11 20 5 ...
I think some combination of functions from reshape, dplyr, tidyr, and/or data.table could do the job, but I can't figure out how to code it. Here's what I've tried:
First, I spread the tastycheez (yum!), and that got me partway:
test<-spread(tastycheez,StatsType,Data_values)
Now I'm trying to spread it again or to cast, but with no luck:
test2<-spread(test,MeasureType,(Ave,Max,Min))
test2 <- recast(Day ~ MeasureType+c(Ave,Max,Min), data=test)
(I also tried melting the tastycheez but the results were a sticky, gooey mess and my tongue got burnt. That doesn't seem to be the right function for this.)
If you hate my puns please excuse them, I really can't figure this out!
Here are a couple related questions:
Combining two subgroups of data in the same dataframe
How can I spread repeated measures of multiple variables into wide format?
reshape2: You could use dcast from reshape2:
library(reshape2)
dcast(tastycheez,
      Day + Label1 ~ paste(StatsType, MeasureType, sep="."),
      value.var = "Data_values")
which gives
Day Label1 Ave.H2O Ave.Tastiness Ave.Temp Max.H2O Max.Tastiness Max.Temp Min.H2O Min.Tastiness Min.Temp
1 1 SamplingU1 10 19 1 13 22 4 16 25 7
2 2 SamplingU1 11 20 2 14 23 5 17 26 8
3 3 SamplingU1 12 21 3 15 24 6 18 27 9
tidyr: Stealing @DavidArenburg's comment, here's the tidyr way:
library(tidyr)
tastycheez %>%
  unite(temp, StatsType, MeasureType, sep = ".") %>%
  spread(temp, Data_values)
which gives
Day Label1 Ave.H2O Ave.Tastiness Ave.Temp Max.H2O Max.Tastiness Max.Temp Min.H2O Min.Tastiness Min.Temp
1 1 SamplingU1 10 19 1 13 22 4 16 25 7
2 2 SamplingU1 11 20 2 14 23 5 17 26 8
3 3 SamplingU1 12 21 3 15 24 6 18 27 9
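As a side note, spread() has since been superseded in tidyr; assuming tidyr >= 1.0.0 is available, a rough equivalent with pivot_wider() would be:
library(tidyr)
# names_from can take several columns; names_sep glues them into names like "Ave.Temp"
tastycheez %>%
  pivot_wider(names_from = c(StatsType, MeasureType),
              values_from = Data_values,
              names_sep = ".")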
I have the following:
head(dataframe):
ID Result Days
1 70 0
1 80 23
2 90 15
2 89 30
2 99 40
3 23 24
etc.
What I am trying to do is create a spaghetti plot with the above dataset. What I use is this:
interaction.plot(dataframe$Days, dataframe$ID, dataframe$Result, xlab="Time", ylab="Results", legend=F)
but none of the patient lines are continuous, even when they were supposed to be one long line.
Also I want to convert the above dataframe to something like this:
ID Result Days
1 70 0
1 80 23
2 90 0
2 89 15
2 99 25
3 23 0
etc. (I am trying to take the first (or minimum) Days value for each ID and have their days start from zero and count up.) Also, in the spaghetti plot I want all patients to have the same color IF a condition is met, and another color if the condition is not met.
Thank you for your time and patience.
How about this, using ggplot2 and data.table
# libs
library(ggplot2)
library(data.table)
# your data
df <- data.table(ID=c(1,1,2,2,2,3),
                 Result=c(70,80,90,89,99,23),
                 Days=c(0,23,15,30,40,24))
# adjust each ID to start at day 0, sort
df <- merge(df, df[, list(min_day=min(Days)), by=ID], by='ID')
df[, adj_day:=Days-min_day]
df <- df[order(ID, Days)]
# plot
ggplot(df, aes(x=adj_day, y=Result, color=factor(ID))) +
  geom_line() + geom_point() +
  theme_bw()
Contents of updated data.frame (actually a data.table):
ID Result Days min_day adj_day
1 70 0 0 0
1 80 23 0 23
2 90 15 15 0
2 89 30 15 15
2 99 40 15 25
3 23 24 24 0
You can handle the color coding easily using scale_color_manual().
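For example, here is a minimal sketch of that idea (the threshold of 85 is just a made-up condition for illustration): flag each ID by whether its Result ever exceeds 85, then map that flag to two colors.
# Hypothetical condition: does this patient's Result ever exceed 85?
df[, meets_cond := max(Result) > 85, by = ID]
ggplot(df, aes(x = adj_day, y = Result, group = ID, color = meets_cond)) +
  geom_line() + geom_point() +
  scale_color_manual(values = c("TRUE" = "firebrick", "FALSE" = "grey40")) +
  theme_bw()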
I am trying to do a two way mixed factorial ANOVA with repeated measures. From:
aov(Estimate ~ Dose*Visit, data = AUClast)
I get 3 sums of squares: two main effects (Visit and Dose) and their interaction (Dose:Visit) which I figured out by hand are correct.
Both Dose and Visit are explanatory variables, with Dose being a between-subjects variable with 4 levels (3, 10, 30, 100) and Visit being a within-subjects variable (repeated measure) with 2 levels (1 and 28). Also, the subject ID variable is 'Animal'.
I want to include one more effect in the result but do not know how. The desired effect is the variance between Animal within Dose, or as SAS puts it, Animal(Dose). The SS is calculated by:
sum((mean(Animal_ik) - mean(Dose_i))^2)
where k indexes the animals within dose i (average the Estimates over Visit 1 and Visit 28 for each Animal, subtract the mean Estimate of the animals in that Dose, square the difference, and sum over all Animals in the study).
Does anyone know how to adjust the formula accordingly to include the Animal(Dose) effect?
Thanks in advance for the help and sorry if all of this is too unspecific.
If I understand you correctly, I have a suggestion. First, a sample data set:
#sample data
set.seed(15)
AUClast <- data.frame(
  expand.grid(
    Animal=1:3,
    Dose=c(3,10,30,100),
    Visit=c(1,28)
  ), Estimate=runif(24)
)
Now we calculate the Animal(Dose) term as requested. First, we split the data into dose groups; then, for each dose, we subtract the overall dose mean from the mean for each animal. Then we sum the squares of those differences. Finally, we expand them back out to the dose groups using unsplit.
animaldose <- unsplit(lapply(split(AUClast, AUClast$Dose), function(x) {
  rep(
    sum((tapply(x$Estimate, x$Animal, mean) - mean(x$Estimate))^2),
    nrow(x))
}), AUClast$Dose)
And we can see what that looks like next to the original data.frame:
cbind(AUClast, animaldose)
Which gives the result
Animal Dose Visit Estimate animaldose
1 1 3 1 0.60211404 0.1181935
2 2 3 1 0.19504393 0.1181935
3 3 3 1 0.96645873 0.1181935
4 1 10 1 0.65090553 0.1641363
5 2 10 1 0.36707189 0.1641363
6 3 10 1 0.98885921 0.1641363
7 1 30 1 0.81519341 0.0419291
8 2 30 1 0.25396837 0.0419291
9 3 30 1 0.68723085 0.0419291
10 1 100 1 0.83142902 0.1881314
11 2 100 1 0.10466936 0.1881314
12 3 100 1 0.64615091 0.1881314
13 1 3 28 0.50909039 0.1181935
14 2 3 28 0.70662857 0.1181935
15 3 3 28 0.86231366 0.1181935
16 1 10 28 0.84178515 0.1641363
17 2 10 28 0.44744372 0.1641363
18 3 10 28 0.96466695 0.1641363
19 1 30 28 0.14118707 0.0419291
20 2 30 28 0.77671251 0.0419291
21 3 30 28 0.80372740 0.0419291
22 1 100 28 0.79334595 0.1881314
23 2 100 28 0.35756312 0.1881314
24 3 100 28 0.05800106 0.1881314
So you can see each dose group has its own adjustment.
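For what it's worth, the same per-dose quantity can also be computed with dplyr grouping; this is just an alternative sketch (assuming dplyr >= 1.0 for the .groups argument), not part of the approach above, and it should reproduce the animaldose values shown.
library(dplyr)
animal_ss <- AUClast %>%
  group_by(Dose) %>%
  mutate(dose_mean = mean(Estimate)) %>%                      # overall mean of each dose group
  group_by(Dose, Animal) %>%
  summarise(sq_dev = (mean(Estimate) - first(dose_mean))^2,   # squared deviation of each animal's mean
            .groups = "drop_last") %>%
  summarise(animaldose = sum(sq_dev), .groups = "drop")       # sum over animals within each dose
left_join(AUClast, animal_ss, by = "Dose")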
I am trying to plot the time series x_t = A + (-1)^t * B.
To do this I am using the following code. The problem is that the ggplot output is wrong.
require(ggplot2)
set.seed(42)
N <- 2
A <- sample(1:20, N)
B <- rnorm(N)
X <- c(A+B, A-B)
dat <- sapply(1:N, function(n) X[rep(c(n,N+n),20)], simplify=FALSE)
dat <- data.frame(t=rep(1:20,N), w=rep(A,each=20), val=do.call(c,dat))
ggplot(data=dat, aes(x=t, y=val, color=factor(w))) +
  geom_line() + facet_grid(w~., scales="free")
Looking at the head of dat, everything looks right:
> head(dat)
t w val
1 1 12 10.5533
2 2 12 13.4467
3 3 12 10.5533
4 4 12 13.4467
5 5 12 10.5533
6 6 12 13.4467
So the lower (blue) line should only have the values 10.5533 and 13.4467, but it also takes other values. What is wrong in my code?
Thanks in advance for any help.
You really should be more careful before asserting that something is "wrong". The way you are creating dat, the rows are not ordered by dat$t, so head(...) is not displaying the extra values:
head(dat[order(dat$w,dat$t),],10)
# t w val
# 21 1 18 18.43530
# 61 1 18 18.36313
# 22 2 18 19.56470
# 62 2 18 17.63687
# 23 3 18 18.43530
# 63 3 18 18.36313
# 24 4 18 19.56470
# 64 4 18 17.63687
# 25 5 18 18.43530
# 65 5 18 18.36313
Note the row numbers.
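The underlying cause, as far as I can tell (this reading is mine, not part of the answer above): each element of the sapply list holds 40 values (two per time point), so val ends up with 80 entries while t and w only have 40, and data.frame() silently recycles t and w. A quick check:
nrow(dat)      # 80 rows, not the 40 (20 time points x N groups) you might expect
table(dat$w)   # each w group has 40 rows instead of 20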