Transposing rows of unequal length into one column - r

I have a dataset that looks like this:
MX000003035 LORETO 26.0170 111.3330 7.0 1938 2014
1941 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
where the data for each station begins with a description of the station - code, name, latitude, etc. The first column is the year, the second is the month of the year, the third is the number of days in that month, and the following values are the daily precipitation values for that month.
There are 860 stations in this single dataset. How do I convert this into the following format in R?
Station Code Name Lat Long Year Month Precip
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
MX000003035 LORETO 26.017 111.333 1941 1 0
.. and so on
EDIT: Here is the link to the dataset
https://www.dropbox.com/s/o0yp1pe4rze8amd/gdcn_SWUS.txt
And here are some snippets...
1940 10 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1940 11 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1940 12 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
...
2014 9 30-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
2014 10 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
2014 11 30-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
2014 12 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
MX000003068 CIUDAD CONSTITUCION 24.9500 -111.7000 48.0 1957 2014
1957 1 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
1957 2 28-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
1957 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 190 80 0 0 0 0 0 0 0 0 0 0 0 0
1957 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1957 5 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1957 6 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1957 7 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1957 8 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 50 0 0 0 0 50 0 50 0 0 0 0 0 5 0 0 0
...
2014 9 30-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
2014 10 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
2014 11 30-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
2014 12 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
USC00040983 BORREGO DESERT PARK 33.2314 -116.4144 245.4 1942 2014
1942 1 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
1942 2 28-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
1942 3 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
1942 4 30-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
1942 5 31-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
1942 6 30-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999

Bring the data into R with readLines:
dat <- readLines( filename )
Then extract the lines that contain the station names and lat/long:
stations <- dat[ grep( "[[:alpha:]]{2}", dat) ]
Identify the row numbers of those station header lines:
breaks <- grep( "[[:alpha:]]{2}", dat)
breaks
#[1] 1 6 11
Make sequence of breaks:
breaks <- c(breaks, length(dat)+1 )
Then pull in the data between breaks and let R's recycling rules duplicate the station data:
newdf <- lapply( seq_along(breaks[-1]),
                 function(idx){
                   data.frame( stations[idx],
                               read.table(text=dat[(breaks[idx]+1):(breaks[idx+1]-1)], fill=TRUE))})
Then row-bind the pieces back together:
newdf2 <- do.call(rbind, newdf)
Test data:
dat <- readLines( textConnection("MX000003035 LORETO 26.0170 111.3330 7.0 1938 2014
1941 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
MX000003036 Laredo 27.0170 112.3330 7.0 1938 2014
1941 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
MX000003037 Another 28.0170 113.3330 7.0 1938 2014
1941 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1941 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0") )
Output (not quite done, but if you pass stations to read.table first and then do the lapply(cbind(..)) operation it should work fine):
## stations <- read.table(text=stations)
# Remove column 7 and add desired row names
> newdf ### the unfinished version
stations.idx. V1
1 MX000003035 LORETO 26.0170 111.3330 7.0 1938 2014 1941
2 MX000003035 LORETO 26.0170 111.3330 7.0 1938 2014 1941
3 MX000003035 LORETO 26.0170 111.3330 7.0 1938 2014 1941
4 MX000003035 LORETO 26.0170 111.3330 7.0 1938 2014 1941
5 MX000003036 Laredo 27.0170 112.3330 7.0 1938 2014 1941
6 MX000003036 Laredo 27.0170 112.3330 7.0 1938 2014 1941
7 MX000003036 Laredo 27.0170 112.3330 7.0 1938 2014 1941
8 MX000003036 Laredo 27.0170 112.3330 7.0 1938 2014 1941
9 MX000003037 Another 28.0170 113.3330 7.0 1938 2014 1941
10 MX000003037 Another 28.0170 113.3330 7.0 1938 2014 1941
11 MX000003037 Another 28.0170 113.3330 7.0 1938 2014 1941
12 MX000003037 Another 28.0170 113.3330 7.0 1938 2014 1941
V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V15 V16 V17 V18 V19 V20 V21 V22 V23
1 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 1 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 2 28 0 0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11 3 31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12 4 30 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
V24 V25 V26 V27 V28 V29 V30 V31 V32 V33 V34
1 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 NA NA NA
3 0 0 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0 0 NA
5 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 NA NA NA
7 0 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 NA
9 0 0 0 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 NA NA NA
11 0 0 0 0 0 0 0 0 0 0 0
12 0 0 0 0 0 0 0 0 0 0 NA
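To get all the way to the long format shown in the question (one Precip value per row), here is a minimal follow-up sketch with base reshape(). It assumes newdf2 as built above (station header text in the first column; V1 = year, V2 = month, V4:V34 = daily values) and single-word station names so read.table can split the header; multi-word names such as CIUDAD CONSTITUCION would need a different split:
info <- read.table(text = as.character(newdf2[, 1]),
                   col.names = c("Code","Name","Lat","Long","Elev","From","To"))
wide <- cbind(info[c("Code","Name","Lat","Long")],
              Year = newdf2$V1, Month = newdf2$V2,
              newdf2[paste0("V", 4:34)])
long <- reshape(wide, direction = "long", varying = paste0("V", 4:34),
                v.names = "Precip", timevar = "Day")
long <- long[!is.na(long$Precip), ]      # drop the NA padding from short months
long$Precip[long$Precip == -9999] <- NA  # the file codes missing days as -9999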


Find common names across several years

I have data with two columns: the first column is years from 1980:2020, and the second column contains characters or names within each year. I am interested in producing a two-column table with the number of times a specific name was present in one column, and the years in which that name appeared in the other column.
Any idea how to do that?
Thank you in advance for your time and help
Do you mean aggregate like below?
aggregate(year ~ ., df, toString)
such that
> aggregate(year ~ ., df, toString)
name year
1 A 1983, 1990, 2012
2 B 1984, 2007
3 C 2014
4 D 1981
5 E 2001, 2005, 2006
6 F 2015, 2018
7 G 1982, 1997
8 I 1998, 2002
9 J 1993, 1996, 2008, 2016, 2017
10 K 1986
11 L 2010
12 N 1987, 1995, 2004
13 O 1999, 2011, 2019
14 R 1988
15 S 1989
16 T 2013, 2020
17 U 1991, 1992, 2000
18 V 1994
19 W 1985
20 Y 1980, 2003, 2009
or table
> table(df)
name
year A B C D E F G I J K L N O R S T U V W Y
1980 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
1981 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1982 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
1983 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1984 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1985 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
1986 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
1987 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
1988 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
1989 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
1990 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1991 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
1992 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
1993 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
1994 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
1995 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
1996 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
1997 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
1998 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
1999 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
2001 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2002 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
2003 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
2004 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
2005 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2006 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2007 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2008 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
2009 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
2010 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
2011 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2012 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2013 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
2014 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2015 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2016 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
2017 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
2018 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2019 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2020 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
Data
set.seed(1)
df <- data.frame(
  year = 1980:2020,
  name = sample(LETTERS, 41, replace = TRUE)
)
I like @ThomasIsCoding's solution a lot, but here are solutions using dplyr and data.table.
Example data:
dat <- data.frame(
  year = sample(1990:1993, 10, replace=TRUE),
  name = sample(letters[1:4], 10, replace=TRUE))
tidyverse & dplyr:
library(dplyr)
dat %>%
  group_by(name) %>%
  summarize(n = n(), years = toString(sort(unique(year))))
#> # A tibble: 4 x 3
#> name n years
#> <chr> <int> <chr>
#> 1 a 1 1993
#> 2 b 2 1992, 1993
#> 3 c 2 1990
#> 4 d 5 1992, 1993
data.table solution:
library(data.table)
dt = data.table(dat)
dt[, .(.N, years=toString(sort(unique(year)))), by="name"]
#> name N years
#> 1: b 2 1992, 1993
#> 2: d 5 1992, 1993
#> 3: c 2 1990
#> 4: a 1 1993
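For completeness, a base-R sketch (using the same example dat) that returns both the count and the years in one table by merging two aggregate() calls; the suffixes argument just disambiguates the two year columns:
merge(aggregate(year ~ name, dat, length),
      aggregate(year ~ name, dat, toString),
      by = "name", suffixes = c(".count", "s"))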

Estimating a transition matrix with a low observation count

I am building a Markov model with a relatively low count of observations for a given number of states.
Are there methods other than the cohort method to estimate the real transition probabilities, especially methods that ensure the probabilities decrease with increasing distance from the current state? The pair (11,14) does not behave in that manner, and the underlying model wouldn't support this.
2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
2 4 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 0 1 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 0 0 2 10 8 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 9 53 13 2 0 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 17 42 17 0 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 21 71 21 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 23 102 21 3 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 23 57 33 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 1 34 142 28 1 3 0 0 0 0 0
12 0 0 0 0 0 0 0 0 1 28 127 27 0 0 0 0 0 0
13 0 0 0 0 0 0 0 0 0 0 28 134 27 0 0 0 0 0
14 0 0 0 0 0 0 0 0 0 0 0 27 93 20 2 0 0 0
15 0 0 0 0 0 0 0 0 0 0 0 0 23 133 19 0 0 0
16 0 0 0 0 0 0 0 0 0 0 0 0 0 22 114 20 0 0
17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 21 192 19 0
18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 21 263 21
19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 24 827
Thanks
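For reference, the cohort estimate mentioned above is simply each row of the count matrix divided by its row sum (the maximum-likelihood estimate). A minimal sketch, assuming the counts above are stored in a matrix counts with the state labels as dimnames:
P_hat <- counts / rowSums(counts)  # row-normalize the observed transition counts
round(P_hat["11", ], 3)            # row 11 shows the bump at state 14 noted above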

Select n rows after specific number

I work with a data.frame like this:
Country Date balance_of_payment business_confidence_indicator consumer_confidence_indicator CPI Crisis_IMF
1 Australia 1980-01-01 -0.87 100.215 99.780 25.4 0
2 Australia 1980-04-01 -1.62 100.061 99.746 26.2 0
3 Australia 1980-07-01 -3.70 100.599 100.049 26.6 0
4 Australia 1980-10-01 -3.13 100.597 100.735 27.2 0
5 Australia 1981-01-01 -2.73 101.149 101.016 27.8 0
6 Australia 1981-04-01 -4.11 100.936 100.150 28.4 0
I want to create summary statistics with describe(dataset) from the Hmisc package.
I need to differentiate between the timespan n quarters before Crisis_IMF is 1, the time in which Crisis_IMF is 1, and the timespan n quarters after Crisis_IMF is 1.
To select the time in which Crisis_IMF is 1, I did describe(dataset[dataset$Crisis_IMF==1,"balance_of_payment"]).
But I do not know how to run the command over the timespan of n quarters (e.g. 8) after the event.
Edit:
dataset$Crisis_IMF
[1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[60] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[119] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[178] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[237] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[296] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[355] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[414] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[473] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[532] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[591] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[650] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[709] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
[768] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[827] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[886] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[945] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1004] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1063] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1122] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1181] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1240] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1299] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1358] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1417] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1476] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1
[1535] 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
[1594] 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1653] 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1712] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1771] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0
[1830] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1889] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[1948] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2007] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2066] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[2125] 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1
[2184] 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2243] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2302] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2361] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[2420] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2479] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2538] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2597] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2656] 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2715] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2774] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2833] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2892] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[2951] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3010] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3069] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3128] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3187] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3246] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3305] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3364] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3423] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3482] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3541] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3600] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3659] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3718] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3777] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3836] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3895] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[3954] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
[4013] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[4072] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[4131] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[4190] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[4249] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[4308] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Edit 2: further information on the dataset:
Country Date balance_of_payment Crisis_IMF
1 Australia 1980-01-01 -0.87 0
2 Australia 1980-04-01 -1.62 0
3 Australia 1980-07-01 -3.70 0
4 Australia 1980-10-01 -3.13 0
5 Australia 1981-01-01 -2.73 0
6 Australia 1981-04-01 -4.11 0
7 Australia 1981-07-01 -3.98 0
8 Australia 1981-10-01 -5.27 0
9 Australia 1982-01-01 -5.31 0
10 Australia 1982-04-01 -4.67 0
11 Australia 1982-07-01 -3.30 0
12 Australia 1982-10-01 -3.24 0
13 Australia 1983-01-01 -3.45 0
14 Australia 1983-04-01 -2.86 0
15 Australia 1983-07-01 -3.58 0
...
137 Australia 2014-01-01 -2.18 0
138 Australia 2014-04-01 -3.44 0
139 Australia 2014-07-01 -3.04 0
140 Australia 2014-10-01 -2.39 0
141 Austria 1980-01-01 -3.97 0
142 Austria 1980-04-01 -3.89 0
143 Austria 1980-07-01 -1.84 0
144 Austria 1980-10-01 -1.60 0
145 Austria 1981-01-01 -2.74 0
146 Austria 1981-04-01 -2.88 0
147 Austria 1981-07-01 -2.83 0
148 Austria 1981-10-01 -2.06 0
149 Austria 1982-01-01 -0.63 0
150 Austria 1982-04-01 0.61 0
151 Austria 1982-07-01 2.42 0
152 Austria 1982-10-01 2.70 0
There can be more than one crisis period per country; e.g., in Australia there are crises from 1990-01-01 to 1991-04-01 and from 2002-01-01 to 2005-01-01. I want to create 3 different describe commands, which show the behaviour of the variable in the above-mentioned states.
You haven't provided your full data, so I have to guess that your Crisis_IMF column has an unbroken sequence of zeroes (before the crisis), followed by an unbroken sequence of ones (during which the IMF crisis was considered to be in effect), finally followed by an unbroken sequence of zeroes (after the crisis).
Below I've synthesized my own data for testing. I only synthesized columns Crisis_IMF and balance_of_payment, because those appear to be the only columns relevant to your problem. I used 30 rows, with the first 10 before, the next 10 during, and the last 10 after the crisis. I used a noisy parabolic arc for the balance_of_payment, but that choice was arbitrary.
library('Hmisc');
set.seed(1);
N <- 30;
df <- data.frame(balance_of_payment=-5+2*seq(-1.5,1.5,len=N)^2+rnorm(N,0,0.2), Crisis_IMF=c(rep(0,N/3),rep(1,N/3),rep(0,N/3)) );
df;
## balance_of_payment Crisis_IMF
## 1 -0.6252908 0
## 2 -1.0625579 0
## 3 -1.8228927 0
## 4 -1.8503850 0
## 5 -2.5744076 0
## 6 -3.2324647 0
## 7 -3.3561408 0
## 8 -3.6484112 0
## 9 -3.9805631 0
## 10 -4.4136342 0
## 11 -4.2642312 1
## 12 -4.6598435 1
## 13 -4.9904788 1
## 14 -5.3947830 1
## 15 -4.7696630 1
## 16 -5.0036359 1
## 17 -4.9550811 1
## 18 -4.6774634 1
## 19 -4.5735679 1
## 20 -4.4478071 1
## 21 -4.1687610 0
## 22 -3.9392921 0
## 23 -3.7811631 0
## 24 -3.8514970 0
## 25 -2.9444058 0
## 26 -2.6515349 0
## 27 -2.2006002 0
## 28 -1.9499174 0
## 29 -1.1949166 0
## 30 -0.4164117 0
crisisRange <- range(which(df$Crisis_IMF==1));
crisisRange;
## [1] 11 20
df$Off_Crisis <- c((1-crisisRange[1]):-1,rep(0,diff(crisisRange)+1),1:(nrow(df)-crisisRange[2]));
df;
## balance_of_payment Crisis_IMF Off_Crisis
## 1 -0.6252908 0 -10
## 2 -1.0625579 0 -9
## 3 -1.8228927 0 -8
## 4 -1.8503850 0 -7
## 5 -2.5744076 0 -6
## 6 -3.2324647 0 -5
## 7 -3.3561408 0 -4
## 8 -3.6484112 0 -3
## 9 -3.9805631 0 -2
## 10 -4.4136342 0 -1
## 11 -4.2642312 1 0
## 12 -4.6598435 1 0
## 13 -4.9904788 1 0
## 14 -5.3947830 1 0
## 15 -4.7696630 1 0
## 16 -5.0036359 1 0
## 17 -4.9550811 1 0
## 18 -4.6774634 1 0
## 19 -4.5735679 1 0
## 20 -4.4478071 1 0
## 21 -4.1687610 0 1
## 22 -3.9392921 0 2
## 23 -3.7811631 0 3
## 24 -3.8514970 0 4
## 25 -2.9444058 0 5
## 26 -2.6515349 0 6
## 27 -2.2006002 0 7
## 28 -1.9499174 0 8
## 29 -1.1949166 0 9
## 30 -0.4164117 0 10
n <- 8;
describe(df[df$Off_Crisis>=-n&df$Off_Crisis<=-1,'balance_of_payment']);
## df[df$Off_Crisis >= -n & df$Off_Crisis <= -1, "balance_of_payment"]
## n missing unique Info Mean
## 8 0 8 1 -3.11
##
## -4.41363415781177 (1, 12%), -3.98056311135777 (1, 12%), -3.64841115885525 (1, 12%), -3.35614082447269 (1, 12%), -3.23246466374394 (1, 12%), -2.57440760140387 (1, 12%), -1.85038498107066 (1, 12%), -1.82289266659616 (1, 12%)
describe(df[df$Off_Crisis==0,'balance_of_payment']);
## df[df$Off_Crisis == 0, "balance_of_payment"]
## n missing unique Info Mean .05 .10 .25 .50 .75 .90 .95
## 10 0 10 1 -4.774 -5.219 -5.043 -4.982 -4.724 -4.595 -4.429 -4.347
##
## -5.39478302143074 (1, 10%), -5.00363594891363 (1, 10%), -4.99047879387293 (1, 10%), -4.95508109661503 (1, 10%), -4.76966304348196 (1, 10%), -4.67746343562751 (1, 10%), -4.65984348113626 (1, 10%), -4.57356788939893 (1, 10%), -4.44780713171369 (1, 10%), -4.26423116226702 (1, 10%)
describe(df[df$Off_Crisis>=1&df$Off_Crisis<=n,'balance_of_payment']);
## df[df$Off_Crisis >= 1 & df$Off_Crisis <= n, "balance_of_payment"]
## n missing unique Info Mean
## 8 0 8 1 -3.186
##
## -4.16876100605885 (1, 12%), -3.93929212154225 (1, 12%), -3.85149697413106 (1, 12%), -3.78116310320806 (1, 12%), -2.94440583734139 (1, 12%), -2.65153490367274 (1, 12%), -2.20060024283928 (1, 12%), -1.949917420894 (1, 12%)
The solution works by first computing the range of indexes during which the crisis was in effect as crisisRange. Then it appends to the data.frame a new column Off_Crisis which captures how many quarters offset from the crisis the row is, using negative numbers for before and positive numbers for after, and assuming each row represents exactly one quarter.
The describe() calls can then be made by subsetting on the Off_Crisis column, getting just the quarters offset from the crisis that you want for each call.
Edit: Whew! That was tough. Pretty sure I got it though:
library('Hmisc');
set.seed(1);
N <- 60;
df <- data.frame(balance_of_payment=rep(-5+2*seq(-1.5,1.5,len=N/2)^2,2)+rnorm(N,0,0.2), Crisis_IMF=c(rep(0,N/6),rep(1,N/6),rep(0,N/3),rep(1,N/6),rep(0,N/6)) );
df;
## balance_of_payment Crisis_IMF
## 1 -0.6252908 0
## 2 -1.0625579 0
## 3 -1.8228927 0
## 4 -1.8503850 0
## 5 -2.5744076 0
## 6 -3.2324647 0
## 7 -3.3561408 0
## 8 -3.6484112 0
## 9 -3.9805631 0
## 10 -4.4136342 0
## 11 -4.2642312 1
## 12 -4.6598435 1
## 13 -4.9904788 1
## 14 -5.3947830 1
## 15 -4.7696630 1
## 16 -5.0036359 1
## 17 -4.9550811 1
## 18 -4.6774634 1
## 19 -4.5735679 1
## 20 -4.4478071 1
## 21 -4.1687610 0
## 22 -3.9392921 0
## 23 -3.7811631 0
## 24 -3.8514970 0
## 25 -2.9444058 0
## 26 -2.6515349 0
## 27 -2.2006002 0
## 28 -1.9499174 0
## 29 -1.1949166 0
## 30 -0.4164117 0
## 31 -0.2282641 0
## 32 -1.1198441 0
## 33 -1.5782326 0
## 34 -2.1802021 0
## 35 -2.9157211 0
## 36 -3.1513699 0
## 37 -3.5324846 0
## 38 -3.8079388 0
## 39 -3.8757143 0
## 40 -4.1999213 0
## 41 -4.5994921 1
## 42 -4.7884845 1
## 43 -4.7268380 1
## 44 -4.8405104 1
## 45 -5.1324004 1
## 46 -5.1361483 1
## 47 -4.8789267 1
## 48 -4.7125241 1
## 49 -4.7602814 1
## 50 -4.3903659 1
## 51 -4.2729353 0
## 52 -4.2181247 0
## 53 -3.7278522 0
## 54 -3.6794993 0
## 55 -2.7817662 0
## 56 -2.2442292 0
## 57 -2.2428854 0
## 58 -1.8645939 0
## 59 -0.9853426 0
## 60 -0.5270109 0
df$Off_Crisis <- ifelse(df$Crisis_IMF==1, 0, with(rle(df$Crisis_IMF), {
  mids <- lengths[-c(1,length(lengths))];
  c(-lengths[1]:-1,
    sequence(mids) - rep(rbind(0,mids+1), rbind(ceiling(mids/2),floor(mids/2))),
    1:lengths[length(lengths)]);
}));
df;
## balance_of_payment Crisis_IMF Off_Crisis
## 1 -0.6252908 0 -10
## 2 -1.0625579 0 -9
## 3 -1.8228927 0 -8
## 4 -1.8503850 0 -7
## 5 -2.5744076 0 -6
## 6 -3.2324647 0 -5
## 7 -3.3561408 0 -4
## 8 -3.6484112 0 -3
## 9 -3.9805631 0 -2
## 10 -4.4136342 0 -1
## 11 -4.2642312 1 0
## 12 -4.6598435 1 0
## 13 -4.9904788 1 0
## 14 -5.3947830 1 0
## 15 -4.7696630 1 0
## 16 -5.0036359 1 0
## 17 -4.9550811 1 0
## 18 -4.6774634 1 0
## 19 -4.5735679 1 0
## 20 -4.4478071 1 0
## 21 -4.1687610 0 1
## 22 -3.9392921 0 2
## 23 -3.7811631 0 3
## 24 -3.8514970 0 4
## 25 -2.9444058 0 5
## 26 -2.6515349 0 6
## 27 -2.2006002 0 7
## 28 -1.9499174 0 8
## 29 -1.1949166 0 9
## 30 -0.4164117 0 10
## 31 -0.2282641 0 -10
## 32 -1.1198441 0 -9
## 33 -1.5782326 0 -8
## 34 -2.1802021 0 -7
## 35 -2.9157211 0 -6
## 36 -3.1513699 0 -5
## 37 -3.5324846 0 -4
## 38 -3.8079388 0 -3
## 39 -3.8757143 0 -2
## 40 -4.1999213 0 -1
## 41 -4.5994921 1 0
## 42 -4.7884845 1 0
## 43 -4.7268380 1 0
## 44 -4.8405104 1 0
## 45 -5.1324004 1 0
## 46 -5.1361483 1 0
## 47 -4.8789267 1 0
## 48 -4.7125241 1 0
## 49 -4.7602814 1 0
## 50 -4.3903659 1 0
## 51 -4.2729353 0 1
## 52 -4.2181247 0 2
## 53 -3.7278522 0 3
## 54 -3.6794993 0 4
## 55 -2.7817662 0 5
## 56 -2.2442292 0 6
## 57 -2.2428854 0 7
## 58 -1.8645939 0 8
## 59 -0.9853426 0 9
## 60 -0.5270109 0 10
n <- 8;
describe(df[df$Off_Crisis>=-n&df$Off_Crisis<=-1,'balance_of_payment']);
## df[df$Off_Crisis >= -n & df$Off_Crisis <= -1, "balance_of_payment"]
## n missing unique Info Mean .05 .10 .25 .50 .75 .90 .95
## 16 0 16 1 -3.133 -4.253 -4.090 -3.825 -3.294 -2.476 -1.837 -1.762
##
## -4.41363415781177 (1, 6%), -4.19992133068899 (1, 6%), -3.98056311135777 (1, 6%), -3.87571430729169 (1, 6%), -3.80793877922333 (1, 6%), -3.64841115885525 (1, 6%)
## -3.53248462570045 (1, 6%), -3.35614082447269 (1, 6%), -3.23246466374394 (1, 6%), -3.15136989958027 (1, 6%), -2.91572106713267 (1, 6%), -2.57440760140387 (1, 6%)
## -2.1802021496148 (1, 6%), -1.85038498107066 (1, 6%), -1.82289266659616 (1, 6%), -1.57823262180228 (1, 6%)
describe(df[df$Off_Crisis==0,'balance_of_payment']);
## df[df$Off_Crisis == 0, "balance_of_payment"]
## n missing unique Info Mean .05 .10 .25 .50 .75 .90 .95
## 20 0 20 1 -4.785 -5.149 -5.133 -4.964 -4.765 -4.645 -4.442 -4.384
##
## lowest : -5.395 -5.136 -5.132 -5.004 -4.990, highest: -4.599 -4.574 -4.448 -4.390 -4.264
describe(df[df$Off_Crisis>=1&df$Off_Crisis<=n,'balance_of_payment']);
## df[df$Off_Crisis >= 1 & df$Off_Crisis <= n, "balance_of_payment"]
## n missing unique Info Mean .05 .10 .25 .50 .75 .90 .95
## 16 0 16 1 -3.157 -4.232 -4.193 -3.873 -3.312 -2.244 -2.075 -1.929
##
## -4.27293530430708 (1, 6%), -4.21812466033862 (1, 6%), -4.16876100605885 (1, 6%), -3.93929212154225 (1, 6%), -3.85149697413106 (1, 6%), -3.78116310320806 (1, 6%)
## -3.72785216159621 (1, 6%), -3.67949925417454 (1, 6%), -2.94440583734139 (1, 6%), -2.78176624658013 (1, 6%), -2.65153490367274 (1, 6%), -2.24422917606577 (1, 6%)
## -2.24288543679152 (1, 6%), -2.20060024283928 (1, 6%), -1.949917420894 (1, 6%), -1.86459386937746 (1, 6%)
For this demo I synthesized five periods: 10 rows of non-crisis, 10 rows of crisis (the first), 20 rows of non-crisis, 10 rows of crisis (the second), and 10 rows of non-crisis. The algorithm is the same, namely to compute an Off_Crisis column (which was much more difficult this time!) and then use it to subset the data.frame for each describe() call. Only now, data points from different crises will be combined in the subsets.
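If you instead need to keep the two crises separate rather than combined, here is a small sketch (assuming the same df as above) that numbers the crisis episodes using rle(); you could then subset on Crisis_Episode before calling describe():
r <- rle(df$Crisis_IMF)
df$Crisis_Episode <- rep(cumsum(r$values)*r$values, r$lengths)  # 0 outside crises; 1, 2, ... inside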

3D array value assignment ruins the structure of the array

Here is how to reproduce my problem. I want to create a 3D array
> g=array(0,dim=c(3,31,31))
> dim(g)
[1] 3 31 31
> dim(g[1,,])
[1] 31 31
This is x with dimension 31 by 31
> dim(x)
[1] 31 31
> x
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
1 NA 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0
2 0 NA 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
3 2 1 NA 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
4 0 0 0 NA 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
5 0 0 0 0 NA 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 NA 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0
7 0 0 0 0 0 1 NA 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 NA 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 NA 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 0 1 1 0 0 0 1 0 0 NA 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0
11 0 0 0 0 0 0 0 0 2 0 NA 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
12 0 0 0 0 0 0 1 1 0 0 0 NA 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
13 0 0 0 0 0 0 0 0 0 0 0 0 NA 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
14 0 0 0 0 0 0 0 0 0 0 0 0 0 NA 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
15 0 0 1 1 1 0 0 0 0 0 0 0 0 1 NA 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1
16 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 NA 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
17 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 NA 0 0 0 0 0 0 1 0 0 0 0 0 0 0
18 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 NA 0 1 1 0 0 0 0 0 0 0 0 0 0
19 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 NA 0 0 0 0 0 0 0 0 0 0 0 0
20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 NA 0 0 0 0 0 0 0 0 0 0 0
21 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 NA 0 0 0 0 0 0 0 0 0 0
22 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 NA 1 0 0 0 0 0 0 0 0
23 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 NA 0 0 0 0 0 0 0 0
24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 NA 0 0 1 0 0 0 0
25 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 NA 0 0 0 0 0 0
26 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 NA 0 0 0 0 0
27 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 0 NA 0 1 0 0
28 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 NA 0 0 0
29 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 NA 0 0
30 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 NA 0
31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 NA
When I try to assign x to the first 'section' of g using
> g[1,,] = x
The array structure of g is totally changed, as now it becomes:
> dim(g)
NULL
> head(g)
[[1]]
[1] NA 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0
[[2]]
[1] 0
[[3]]
[1] 0
[[4]]
[1] 0 NA 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
[[5]]
[1] 0
[[6]]
[1] 0
This is totally different from what I expected. I am just trying to assign a matrix to g[1,,], and dim(g) should still be 3 by 31 by 31. Am I wrong? Where did I go wrong?
Thanks to Pascal's comments below I've amended my answer, though I've kept the dimensionality changed to 31x31x3 for perhaps easier understanding. The problem is the way the data are converted from your data.frame object before being stored in your array. I think that by converting first to a matrix you should get what you're looking for:
g <- array(0,dim=c(31,31,3))
m <- matrix(1, 31, 31)
x <- as.data.frame(m)
## Storing x as it is will result in g becoming a list...
#g[,,1] <- x
## Converting the data.frame to a matrix will result in
## g remaining an array:
g[,,1] <- as.matrix(x)
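A quick check, using the objects above, that the structure survives the assignment:
dim(g)                # [1] 31 31  3
identical(g[,,1], m)  # TRUE: the values round-tripped through the data.frame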

Why can't I use aggregate with cbind on a range of columns in a data.frame?

20 Lines of the data I'm working on:
Zv9_NA110 6176 7276 5'to3'IntronExon 0 + 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA110 10126 11226 5'to3'IntronExon 0 + 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 9 9 15 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA110 11219 12319 5'to3'ExonIntron 0 + 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA110 14887 15987 5'to3'IntronExon 0 + 1100 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Zv9_NA110 18923 20023 5'to3'IntronExon 0 + 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA110 21069 22169 5'to3'ExonIntron 0 + 1100 0 135 115 65 54 45 36 27 16 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA113 1615 2715 5'to3'IntronExon 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA113 2335 3435 5'to3'ExonIntron 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA113 5398 6498 5'to3'IntronExon 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA113 7173 8273 5'to3'ExonIntron 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA118 11674 12774 5'to3'IntronExon 0 + 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA118 12711 13811 5'to3'ExonIntron 0 + 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA123 38151 39251 5'to3'ExonIntron 0 - 1100 0 1061 958 844 796 695 600 464 346 265 210 150 133 94 81 72 46 18 4 0 0 0 0 0 0 0 0 0 7 9 9 9 11 21 35 43 58 91 108 180 268 406 547 712 833 882 960 1094 1172 1245 1331 1432 1510 1604 1711 1810 1830 1837 1823 1781 1690 1638 1560 1489 1257 854 731 631 589 551 497 439 404 369 301 231 168 123 76 58 50 42 28 20 11 9 9 24 27 27 27 27 27 25 18 18 18 18 18 18 18 18 18 18 18 18 18 14 5 0 0
Zv9_NA124 2578 3678 5'to3'ExonIntron 0 + 1100 0 423 407 401 377 357 345 324 304 249 185 111 54 30 12 0 0 0 0 0 0 0 0 0 0 0 0 0 1 9 9 9 9 14 18 25 27 27 27 27 27 27 27 27 27 27 27 26 18 18 18 18 18 18 16 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA129 4939 6039 5'to3'IntronExon 0 + 1100 226 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 9 9 9 9 9 9 9 9 9 9 9 9 14 34 45 60 97 128 175 293 395 524 621 764 894 1036 1164 1334 1469 1639 1801 1885 1983
Zv9_NA132 12589 13689 5'to3'ExonIntron 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA132 13634 14734 5'to3'IntronExon 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA132 14481 15581 5'to3'ExonIntron 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 9 9 9 9 9 9 9 9 9 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA132 19534 20634 5'to3'IntronExon 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Zv9_NA132 28708 29808 5'to3'ExonIntron 0 - 1100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 9 15 18 24 27 42 46 73 112 142 157 162 162 162 162 162 162 162 162 159 153 153 153 153 153 150 144 132 112 76 52 30 25 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I get this into R as follows:
> dat <- read.table("dat.dat",header=F)
I need to get the averages for columns 9 through 118, grouped by column 4.
This works:
> all_means <- aggregate(cbind(V9,V10,V11)~V4,data=dat,FUN=mean)
V4 V9 V10 V11
1 5'to3'ExonIntron 0 0.00 0
2 5'to3'IntronExon 0 0.75 1
But there's no way I'm typing this out to V118.
I've tried this:
> aggregate(cbind(9:118)~V4,data=blah,FUN=mean)
But I get this error:
Error in model.frame.default(formula = cbind(9:118) ~ V4, data = blah) :
variable lengths differ (found for 'V4')
Is there something dumb I'm missing?
You have a number of options.
Create a formula using . and pass a subset of the data:
aggregate( . ~ V4, data = dat[,c(4,9:118)], FUN = mean)
You could also create the vector of column names using paste
nn <- paste0('V', 9:118)
and refer by column name
aggregate( . ~ V4, data = dat[,c('V4',nn)], FUN = mean)
There isn't much point using cbind here, given that the formula approach works, but for example:
aggregate( do.call(cbind,lapply(nn, as.name)) ~ V4, data = dat, FUN = mean)
But this is messy, as it doesn't name the columns nicely (and is hard to follow).
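Another base-R sketch under the same assumptions (numeric columns V9 to V118, grouping variable in V4), which avoids the formula interface entirely by splitting the rows and taking column means:
t(sapply(split(dat[, 9:118], dat$V4), colMeans))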
If speed is an issue in general (not necessary for this operation) and you want to use the data.table package, this is done as follows:
Safer solution
Thanks to mnel's comment, I would use that:
library(data.table)
dat <- as.data.table(dat)
dat[,lapply(.SD,mean),by="V4",.SDcols=paste0("V", 9:118)]
Old solution
dat[,lapply(.SD,mean),by="V4",.SDcols=9:118]
You can use
## S3 method for class 'data.frame'
aggregate(x, by, FUN, ..., simplify = TRUE)
With your data, assuming it is in the data frame DF:
DF <- read.table(text = txt, header = FALSE, stringsAsFactors = FALSE)
result <- aggregate(DF[, 9:118], by = list(DF[, 4]), FUN = mean)
# Using pander to print result table nicely. It's not needed for aggregation :)
require(pander)
pandoc.table(result)
##
## ----------------------------------------------------
## Group.1 V9 V10 V11 V12 V13 V14
## ---------------- ----- ----- ----- ----- ----- -----
## 5'to3'ExonIntron 161.9 148 131 122.7 109.7 98.1
##
## 5'to3'IntronExon 0.0 0 0 0.0 0.0 0.0
## ----------------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V15 V16 V17 V18 V19 V20 V21 V22
## ----- ----- ----- ----- ----- ----- ----- -----
## 81.5 66.6 52.3 39.5 26.1 18.7 12.4 9.3
##
## 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V23 V24 V25 V26 V27 V28 V29 V30
## ----- ----- ----- ----- ----- ----- ----- -----
## 7.2 4.6 1.8 0.4 0 0 0 0.5
##
## 0.0 0.0 0.0 0.0 0 0 0 0.0
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V31 V32 V33 V34 V35 V36 V37 V38
## ----- ----- ----- ----- ----- ----- ----- -----
## 0.9 1.5 1.8 2.4 2.7 5 6.4 9.1
##
## 0.0 0.0 0.0 0.0 0.0 0 0.0 0.0
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V39 V40 V41 V42 V43 V44 V45 V46
## ----- ----- ----- ----- ----- ----- ----- -----
## 13 16.2 19.2 21.5 23 24.7 28 29.7
##
## 0 0.0 0.0 0.0 0 0.0 0 0.0
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V47 V48 V49 V50 V51 V52 V53 V54
## ----- ----- ----- ----- ----- ----- ----- -----
## 36.9 45.7 59.5 73.3 89.2 101.3 106.2 114
##
## 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V55 V56 V57 V58 V59 V60 V61 V62
## ----- ----- ----- ----- ----- ----- ----- -----
## 127.3 134 140.7 148.1 156.2 160.4 167.4 175.7
##
## 0.0 0 0.0 0.0 0.0 0.0 0.0 0.0
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V63 V64 V65 V66 V67 V68 V69 V70
## ----- ----- ----- ----- ----- ----- ----- -----
## 183.9 183.1 183.7 182.3 178.1 169 163.8 156.7
##
## 0.0 0.0 0.0 0.0 0.0 0 0.0 0.0
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V71 V72 V73 V74 V75 V76 V77 V78
## ----- ----- ----- ----- ----- ----- ----- -----
## 149.8 126.6 86.3 74.0 64.0 59.8 56.0 50.6
##
## 0.7 0.9 0.9 1.5 1.8 1.8 1.8 1.8
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V79 V80 V81 V82 V83 V84 V85 V86
## ----- ----- ----- ----- ----- ----- ----- -----
## 45.6 42.2 38.7 31.9 24.9 18.6 14.1 9.4
##
## 1.8 1.8 1.8 1.8 1.8 1.8 2.2 2.7
## -----------------------------------------------
##
## Table: Table continues below
##
##
## -----------------------------------------------
## V87 V88 V89 V90 V91 V92 V93 V94
## ----- ----- ----- ----- ----- ----- ----- -----
## 7.6 6.2 5.1 3.7 2.9 2.5 2.7 2.7
##
## 2.7 2.7 2.7 2.7 2.7 2.7 2.2 0.9
## -----------------------------------------------
##
## Table: Table continues below
##
##
## --------------------------------------------------
## V95 V96 V97 V98 V99 V100 V101 V102
## ----- ----- ----- ----- ----- ------ ------ ------
## 4.2 4.5 4.5 4.5 4.5 4.5 4.3 2.5
##
## 0.9 0.9 0.9 1.4 4.1 5.4 6.9 10.6
## --------------------------------------------------
##
## Table: Table continues below
##
##
## -------------------------------------------------------
## V103 V104 V105 V106 V107 V108 V109 V110
## ------ ------ ------ ------ ------ ------ ------ ------
## 1.8 1.8 1.8 1.8 1.8 1.8 1.8 1.8
##
## 13.7 18.4 30.2 40.4 53.3 63.0 77.3 90.3
## -------------------------------------------------------
##
## Table: Table continues below
##
##
## -------------------------------------------------------
## V111 V112 V113 V114 V115 V116 V117 V118
## ------ ------ ------ ------ ------ ------ ------ ------
## 1.8 1.8 1.8 1.8 1.4 0.5 0.0 0.0
##
## 104.5 117.3 134.3 147.8 164.8 181.0 189.4 199.2
## -------------------------------------------------------
##
