How to solve this error for pollution rose? - R

I have an issue in R where I am trying to create a pollution rose plot. I am fairly sure the data and the code are correct, but I keep getting an error message and I cannot figure out what it means. The error message is:
Error in `[[<-.data.frame`(`*tmp*`, vars[i], value = numeric(0)) :
replacement has 0 rows, data has 37.
My code is:
pollutionRose(pollution_rose, pollutant = "PM10", header = TRUE,
              cols = c("darkblue", "green4", "yellow2", "red", "red4"),
              key.position = "bottom", max.freq = 50)
and here is my data:
HH:MM:SS WD1 WS1 PM10 PM2.5 PM1
10:10:00 AM 0 0 0 0 0
10:20:00 AM 0 0 0 0 0
10:29:00 AM 254 0.4 0 0 0
10:30:00 AM 109 0.5 0 0 0
10:40:00 AM 21 1.9 0 0 0
10:50:00 AM 148 1.2 0 0 0
10:54:00 AM 222 1.1 0 0 0
10:55:00 AM 61 1 0 0 0
11:00:00 AM 109 0.6 19 4.3 1.8
11:10:00 AM 354 0.7 20.4 4.1 1.7
11:20:00 AM 5 2.6 8.3 3.8 1.6
11:29:00 AM 60 2.6 7.9 3.8 1.5
11:30:00 AM 97 1.5 18.6 3.8 1.5
11:40:00 AM 42 0.8 15.6 3.8 1.5
11:50:00 AM 52 0 10.5 4.3 1.6
12:00:00 PM 60 0.9 11.7 3.9 1.5
12:10:00 PM 74 1 9.6 4.1 1.4
12:20:00 PM 338 1.7 0 0 0
12:30:00 PM 285 4.4 0 0 0
12:40:00 PM 296 3.6 0 0 0
12:50:00 PM 241 3.3 0 0 0
1:00:00 PM 274 1.2 0 0 0
1:10:00 PM 287 1.3 15.8 4.4 1.6
1:20:00 PM 317 3 13.1 4.6 1.7
1:30:00 PM 309 2.6 10.5 3.5 1.4
1:31:00 PM 244 3.5 14.8 4.2 1.5
1:40:00 PM 251 0.9 12.8 4.1 1.5
1:50:00 PM 282 1.1 12.9 4.8 1.8
2:00:00 PM 254 2.5 9.6 4.9 1.7
2:10:00 PM 245 2.3 10.9 4.6 1.6
2:20:00 PM 207 2.1 0 0 0
2:30:00 PM 30 0 0 0 0
2:37:00 PM 62 0.7 12.9 4.3 1.6
2:40:00 PM 80 1.8 10.1 3.6 1.5
2:40:00 PM 0 0 10.1 3.6 1.5
2:50:00 PM 0 0 10 4.3 1.5
3:00:00 PM 0 0 0 0 0
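
A minimal sketch of one likely fix, on the assumption that the error comes from openair not finding the wind columns it expects: pollutionRose() looks for columns named ws and wd by default, and header= is a file-import argument rather than a plotting one, so the wind columns here either need to be renamed or pointed to explicitly. The file name below is a placeholder.
library(openair)

# read the file first; header handling belongs to the import step
pollution_rose <- read.csv("pollution_rose.csv", header = TRUE)

# either rename the wind columns to openair's defaults ...
names(pollution_rose)[names(pollution_rose) == "WS1"] <- "ws"
names(pollution_rose)[names(pollution_rose) == "WD1"] <- "wd"

pollutionRose(pollution_rose, pollutant = "PM10",
              cols = c("darkblue", "green4", "yellow2", "red", "red4"),
              key.position = "bottom", max.freq = 50)

# ... or keep the original names and tell pollutionRose() where to look:
# pollutionRose(pollution_rose, pollutant = "PM10", ws = "WS1", wd = "WD1",
#               cols = c("darkblue", "green4", "yellow2", "red", "red4"),
#               key.position = "bottom", max.freq = 50)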

Related

Pivoting and Distributing values based on Duration

I have a small dataset weekly_data of projects we're working on, with the anticipated hours per week and the duration in weeks for each of two milestones, labeled CD and CA:
# A tibble: 17 x 5
dsk_proj_number hrs_per_week_cd cd_dur_weeks hrs_per_week_ca ca_dur_weeks
<fct> <dbl> <dbl> <dbl> <dbl>
1 17061 0 0 2.43 28
2 18009 0 0 1.83 12
3 18029 0 0 2.83 24
4 19029 1.5 16 2.43 28
5 19050 0 0 2.8 20
6 20012 0 0 3.4 20
7 21016 3 8 2.43 28
8 21022 0 0 4.25 16
9 21050 0 0 3.4 20
10 21061a 17.5 24 15.8 52
11 21061b 1.5 4 7.5 8
12 21061c 7.67 12 5 12
13 21061d 0 0 0 0
14 21061e 8 1 3 1
15 21094 0 0 3 8
16 22027 0 0 0.75 8
17 22068 2.92 12 2.38 8
I want to get this into a format where, based on the cd_dur_weeks and ca_dur_weeks durations, I have the estimated number of hours per week for every week, like this:
> sched %>% head(15)
# A tibble: 15 x 17
`18009` `22068` `17061` `21050` `19029` `21016` `21022` `19050` `18029` `22027` `20012` `21094` `21061a` `21061b` `21061c` `21061d` `21061e`
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 1.5 7.67 0 8
2 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 1.5 7.67 0 3
3 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 1.5 7.67 0 0
4 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 1.5 7.67 0 0
5 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 7.5 7.67 0 0
6 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 7.5 7.67 0 0
7 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 7.5 7.67 0 0
8 1.83 2.92 2.43 3.4 1.5 3 4.25 2.8 2.83 0.75 3.4 3 17.5 7.5 7.67 0 0
9 1.83 2.92 2.43 3.4 1.5 2.43 4.25 2.8 2.83 0 3.4 0 17.5 7.5 7.67 0 0
10 1.83 2.92 2.43 3.4 1.5 2.43 4.25 2.8 2.83 0 3.4 0 17.5 7.5 7.67 0 0
11 1.83 2.92 2.43 3.4 1.5 2.43 4.25 2.8 2.83 0 3.4 0 17.5 7.5 7.67 0 0
12 1.83 2.92 2.43 3.4 1.5 2.43 4.25 2.8 2.83 0 3.4 0 17.5 7.5 7.67 0 0
13 0 2.38 2.43 3.4 1.5 2.43 4.25 2.8 2.83 0 3.4 0 17.5 0 5 0 0
14 0 2.38 2.43 3.4 1.5 2.43 4.25 2.8 2.83 0 3.4 0 17.5 0 5 0 0
15 0 2.38 2.43 3.4 1.5 2.43 4.25 2.8 2.83 0 3.4 0 17.5 0 5 0 0
I was able to use pivot_wider() to make the project numbers the variable names and each row an individual week, but I was forced to use for() loops and if() statements. It seems like there should be an easier way to get this done.
Here's the code I used:
sched <- data.frame(dsk_proj_number = rezvan$dsk_proj_number)
sched$weeks <- NA
sched <- sched %>% pivot_wider(names_from = dsk_proj_number, values_from = weeks)

for (proj_num in weekly_data$dsk_proj_number) {
  duration_cd <- weekly_data[which(weekly_data$dsk_proj_number == proj_num), "cd_dur_weeks"] %>% as.numeric
  duration_ca <- weekly_data[which(weekly_data$dsk_proj_number == proj_num), "ca_dur_weeks"] %>% as.numeric
  if (duration_cd > 0) {
    sched[1:duration_cd, proj_num] <- weekly_data[which(weekly_data$dsk_proj_number == proj_num), "hrs_per_week_cd"]
  }
  if (duration_ca > 0) {
    sched[duration_cd + 1:duration_ca, proj_num] <- weekly_data[which(weekly_data$dsk_proj_number == proj_num), "hrs_per_week_ca"]
  }
}

sched <- sched %>% mutate_all(coalesce, 0)
You can use rep() to repeat elements a certain number of times, and then use c() to concatenate them into a long sequence. I use rowwise from dplyr to conveniently do this row-by-row.
Then you can unnest the lists of vectors.
library(tidyverse)

sched <- weekly_data %>%
  mutate(max_weeks = max(cd_dur_weeks + ca_dur_weeks)) %>%
  rowwise() %>%
  mutate(week = list(c(rep(hrs_per_week_cd, cd_dur_weeks),
                       rep(hrs_per_week_ca, ca_dur_weeks),
                       rep(0, max_weeks - cd_dur_weeks - ca_dur_weeks)))) %>%
  ungroup() %>%
  select(dsk_proj_number, week) %>%
  pivot_wider(names_from = "dsk_proj_number", values_from = week) %>%
  unnest(everything())
df %>%
  select(1:3) %>%
  slice(rep(1:nrow(.), cd_dur_weeks)) %>%
  select(-3) %>%
  mutate(milestone = 1) %>%
  rename(hrs_per_week = hrs_per_week_cd) -> df1

df %>%
  select(c(1, 4, 5)) %>%
  slice(rep(1:nrow(.), ca_dur_weeks)) %>%
  select(-3) %>%
  mutate(milestone = 2) %>%
  rename(hrs_per_week = hrs_per_week_ca) -> df2

rbind(df1, df2) %>%
  arrange(dsk_proj_number, milestone) %>%
  group_by(dsk_proj_number) %>%
  mutate(week = seq_along(dsk_proj_number)) %>%
  pivot_wider(id_cols = week, names_from = dsk_proj_number, values_from = hrs_per_week) %>%
  replace(is.na(.), 0)

How do I extract this portion of a table from a text file in R using grep?

I have a file, "prf003.out",
150 lines of blah....~tables that report other things in this text file deleted.....
Aboveground Live Belowground Forest Total Total Carbon
----------------- ----------------- Stand ------------------------- Stand Removed Released
YEAR Total Merch Live Dead Dead DDW Floor Shb/Hrb Carbon Carbon from Fire
--------------------------------------------------------------------------------------------------------------
2000 15.6 15.6 6.0 0.5 0.0 4.5 2.6 0.0 29.1 0.0 0.0
2001 15.6 15.6 6.0 0.4 0.0 4.2 2.5 0.0 28.7 0.0 0.0
2002 15.6 15.6 6.0 0.4 0.0 3.9 2.5 0.0 28.4 0.0 0.0
2003 15.6 15.6 6.0 0.4 0.0 3.7 2.5 0.0 28.1 0.0 0.0
2004 15.6 15.6 6.0 0.4 0.0 3.5 2.5 0.0 27.9 0.0 0.0
2005 16.6 16.6 6.0 1.0 1.3 3.6 2.5 0.0 30.9 0.0 0.0
2006 16.6 16.6 6.0 0.9 0.8 3.8 2.4 0.0 30.6 0.0 0.0
2007 16.6 16.6 6.0 0.9 0.6 3.8 2.4 0.0 30.3 0.0 0.0
2008 16.6 16.6 6.0 0.9 0.4 3.7 2.4 0.0 30.0 0.0 0.0
2009 16.6 16.6 6.0 0.8 0.2 3.7 2.4 0.0 29.8 0.0 0.0
2010 18.1 18.1 6.3 1.2 1.0 3.8 2.4 0.0 32.8 0.0 0.0
2011 18.1 18.1 6.3 1.1 0.6 4.0 2.4 0.0 32.5 0.0 0.0
2012 18.1 18.1 6.3 1.1 0.4 3.9 2.4 0.0 32.2 0.0 0.0
2013 18.1 18.1 6.3 1.0 0.3 3.9 2.4 0.0 31.9 0.0 0.0
2014 18.1 18.1 6.3 1.0 0.2 3.8 2.4 0.0 31.7 0.0 0.0
2015 19.1 19.1 6.5 1.4 1.1 3.9 2.4 0.0 34.3 0.0 0.0
2016 19.1 19.1 6.5 1.3 0.7 4.1 2.4 0.0 34.0 0.0 0.0
2017 19.1 19.1 6.5 1.3 0.5 4.0 2.4 0.0 33.8 0.0 0.0
2018 19.1 19.1 6.5 1.2 0.3 4.0 2.4 0.0 33.5 0.0 0.0
2019 19.1 19.1 6.5 1.2 0.2 3.9 2.4 0.0 33.2 0.0 0.0
2020 19.0 19.0 6.3 1.9 1.8 4.2 2.4 0.0 35.6 0.0 0.0
2021 19.0 19.0 6.3 1.8 1.3 4.5 2.4 0.0 35.3 0.0 0.0
2022 19.0 19.0 6.3 1.7 1.0 4.6 2.4 0.0 35.0 0.0 0.0
2023 19.0 19.0 6.3 1.6 0.7 4.6 2.4 0.0 34.7 0.0 0.0
2024 19.0 19.0 6.3 1.6 0.5 4.6 2.4 0.0 34.4 0.0 0.0
2025 19.0 19.0 6.3 2.2 2.0 4.9 2.4 0.0 36.7 0.0 0.0
2026 19.0 19.0 6.3 2.1 1.3 5.3 2.4 0.0 36.4 0.0 0.0
2027 19.0 19.0 6.3 2.0 1.0 5.4 2.4 0.0 36.0 0.0 0.0
2028 19.0 19.0 6.3 1.9 0.7 5.4 2.4 0.0 35.7 0.0 0.0
2029 19.0 19.0 6.3 1.9 0.5 5.4 2.4 0.0 35.4 0.0 0.0
2030 19.4 19.4 6.5 2.2 1.4 5.6 2.4 0.0 37.5 0.0 0.0
2031 19.4 19.4 6.5 2.1 0.8 5.9 2.4 0.0 37.2 0.0 0.0
2032 19.4 19.4 6.5 2.0 0.6 5.9 2.4 0.0 36.8 0.0 0.0
2033 19.4 19.4 6.5 1.9 0.4 5.8 2.4 0.0 36.5 0.0 0.0
2034 19.4 19.4 6.5 1.9 0.3 5.7 2.4 0.0 36.1 0.0 0.0
2035 18.6 18.6 6.3 2.6 2.1 6.0 2.4 0.0 38.0 0.0 0.0
2036 18.6 18.6 6.3 2.5 1.5 6.4 2.4 0.0 37.6 0.0 0.0
2037 18.6 18.6 6.3 2.4 1.1 6.4 2.4 0.0 37.2 0.0 0.0
2038 18.6 18.6 6.3 2.3 0.8 6.5 2.4 0.0 36.9 0.0 0.0
2039 18.6 18.6 6.3 2.2 0.6 6.5 2.4 0.0 36.5 0.0 0.0
2040 19.4 19.4 6.7 2.3 1.0 6.6 2.4 0.0 38.3 0.0 0.0
2041 19.4 19.4 6.7 2.2 0.6 6.6 2.4 0.0 38.0 0.0 0.0
2042 19.4 19.4 6.7 2.1 0.5 6.5 2.4 0.0 37.6 0.0 0.0
2043 19.4 19.4 6.7 2.0 0.4 6.4 2.4 0.0 37.3 0.0 0.0
2044 19.4 19.4 6.7 2.0 0.3 6.3 2.4 0.0 36.9 0.0 0.0
2045 17.9 17.9 6.3 2.8 2.5 6.6 2.4 0.0 38.5 0.0 0.0
2046 17.9 17.9 6.3 2.7 1.8 7.0 2.4 0.0 38.1 0.0 0.0
2047 17.9 17.9 6.3 2.6 1.4 7.1 2.4 0.0 37.7 0.0 0.0
2048 17.9 17.9 6.3 2.5 1.0 7.2 2.4 0.0 37.3 0.0 0.0
2049 17.9 17.9 6.3 2.4 0.7 7.2 2.4 0.0 36.9 0.0 0.0
blah.....a few more tables
from which I am trying to extract this particular table. As you can see, the "blah" at the top stands in for a whole bunch of other tables generated in this .txt file, and a number of other tables follow this one in the same file.
What I am trying to do is similar to this question, but now I am stuck: Extracting Data from Text Files
Here is what I did:
library(stringr)

data <- readLines("prf003.out")
data

# value = TRUE would return the matching text itself; here we want the line index
cline <- grep("YEAR Total Merch Live Dead Dead DDW Floor Shb/Hrb Carbon Carbon from Fire", data, value = FALSE)
cline

# don't use str_extract, use str_extract_all
numstr <- sapply(str_extract_all(data[cline + 1:51], "[0-9]"), as.numeric)
numstr
However, the output I get is wonky and doesn't format my data the way I want (i.e. it doesn't just give me a copy of the original table so I can process it in R):
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20] [,21] [,22] [,23] [,24] [,25] [,26] [,27] [,28] [,29] [,30] [,31] [,32] [,33] [,34] [,35] [,36] [,37] [,38] [,39]
[1,] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
[2,] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[3,] 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3
[4,] 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8
[5,] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[6,] 5 5 5 5 5 6 6 6 6 6 8 8 8 8 8 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 8 8 8 8
[7,] 6 6 6 6 6 6 6 6 6 6 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 4 4 4 4 4 6 6 6 6
[8,] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[9,] 5 5 5 5 5 6 6 6 6 6 8 8 8 8 8 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 8 8 8 8
[10,] 6 6 6 6 6 6 6 6 6 6 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 4 4 4 4 4 6 6 6 6
[11,] 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
[12,] 0 0 0 0 0 0 0 0 0 0 3 3 3 3 3 5 5 5 5 5 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 3 3 3 3
[13,] 0 0 0 0 0 1 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 1 1 2 2 2 1 1 2 2 2 2
[14,] 5 4 4 4 4 0 9 9 9 8 2 1 1 0 0 4 3 3 2 2 9 8 7 6 6 2 1 0 9 9 2 1 0 9 9 6 5 4 3
[15,] 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 0 0 2 1 1 0 0 1 0 0 0 0 2 1 1 0
[16,] 0 0 0 0 0 3 8 6 4 2 0 6 4 3 2 1 7 5 3 2 8 3 0 7 5 0 3 0 7 5 4 8 6 4 3 1 5 1 8
[17,] 4 4 3 3 3 3 3 3 3 3 3 4 3 3 3 3 4 4 4 3 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 6 6 6 6
[18,] 5 2 9 7 5 6 8 8 7 7 8 0 9 9 8 9 1 0 0 9 2 5 6 6 6 9 3 4 4 4 6 9 9 8 7 0 4 4 5
[19,] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
[20,] 6 5 5 5 5 5 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
[21,] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[22,] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[23,] 2 2 2 2 2 3 3 3 3 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
[24,] 9 8 8 8 7 0 0 0 0 9 2 2 2 1 1 4 4 3 3 3 5 5 5 4 4 6 6 6 5 5 7 7 6 6 6 8 7 7 6
[25,] 1 7 4 1 9 9 6 3 0 8 8 5 2 9 7 3 0 8 5 2 6 3 0 7 4 7 4 0 7 4 5 2 8 5 1 0 6 2 9
[26,] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[27,] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[28,] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[29,] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
As you can see, it puts each digit of each number into its own cell. I just want the original table.
What about something like this?
# figure out where the headers are & where the data starts
dataHeader1 <- which(grepl("Aboveground", txtFile))
dataHeader2 <- dataHeader1 + 2
dataStart <- dataHeader2 + 2
# extract the data
txtDat <- txtFile[dataStart:length(txtFile)]
txtDat <- do.call(rbind, strsplit(txtDat, split = "\\s{1,}", perl = TRUE))
class(txtDat) <- "numeric"
txtDat
# returns
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
[1,] 2000 15.6 15.6 6.0 0.5 0.0 4.5 2.6 0 29.1 0 0
[2,] 2001 15.6 15.6 6.0 0.4 0.0 4.2 2.5 0 28.7 0 0
[3,] 2002 15.6 15.6 6.0 0.4 0.0 3.9 2.5 0 28.4 0 0
[4,] 2003 15.6 15.6 6.0 0.4 0.0 3.7 2.5 0 28.1 0 0
[5,] 2004 15.6 15.6 6.0 0.4 0.0 3.5 2.5 0 27.9 0 0
[6,] 2005 16.6 16.6 6.0 1.0 1.3 3.6 2.5 0 30.9 0 0
....
Note that one can sharpen the regex used to determine where the data starts, e.g.
dataHeader1 <- which(grepl("(?=.*Aboveground)(?=.*Carbon)", txtFile, perl = TRUE))
# this can be pursued arbitrarily
I read the data via txtFile <- readLines("Path/To/test.txt"), and the raw data itself looks like this:
[1] "asdsalkjdaskldas+"
[2] "jsafhnjadfnhdjkasfafdajfbnjasbfjads.kbnjdasnfadsnf"
[3] "45453342542542kj ijholijfkqaef45435314"
[4] ""
[5] "dasfjasikedfnha4454 "
[6] "a"
[7] "a"
[8] "fdgfd"
[9] "\t\t6546346343"
[10] ""
[11] ""
[12] " Aboveground Live Belowground Forest Total Total Carbon"
[13] " ----------------- ----------------- Stand ------------------------- Stand Removed Released"
[14] "YEAR Total Merch Live Dead Dead DDW Floor Shb/Hrb Carbon Carbon from Fire"
[15] "--------------------------------------------------------------------------------------------------------------"
[16] "2000 15.6 15.6 6.0 0.5 0.0 4.5 2.6 0.0 29.1 0.0 0.0"
...
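
As a follow-up, a small sketch (reusing the txtFile and dataStart variables above, and assuming more tables follow this one in the file, as the question says) of how one might also detect where the table ends and attach column names; the names below are condensed from the two header rows and are only illustrative:
# keep only the lines from dataStart until the first line that no longer
# starts with a 4-digit year, so later tables in the file are not picked up
tableLines <- txtFile[dataStart:length(txtFile)]
lastRow <- which(!grepl("^\\s*\\d{4}\\s", tableLines))[1] - 1
if (!is.na(lastRow)) tableLines <- tableLines[seq_len(lastRow)]

txtDat <- do.call(rbind, strsplit(trimws(tableLines), split = "\\s+"))
class(txtDat) <- "numeric"

# illustrative column names condensed from the header rows
carbon <- as.data.frame(txtDat)
names(carbon) <- c("year", "aboveground_total", "aboveground_merch",
                   "belowground_live", "belowground_dead", "standing_dead",
                   "ddw", "forest_floor", "shb_hrb", "total_stand_carbon",
                   "carbon_removed", "carbon_released_fire")
head(carbon)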

Import data in R (read.table)

I have the following file, which I want to import.
MONTHLY CLIMATOLOGICAL SUMMARY for MAR. 2014
NAME: larissa CITY: STATE:
ELEV: 82 m LAT: 39° 37' 39" N LONG: 22° 23' 55" E
TEMPERATURE (°C), RAIN (mm), WIND SPEED (km/hr)
HEAT COOL AVG
MEAN DEG DEG WIND DOM
DAY TEMP HIGH TIME LOW TIME DAYS DAYS RAIN SPEED HIGH TIME DIR
------------------------------------------------------------------------------------
1 9.7 11.3 15:50 7.6 7:20 8.6 0.0 5.4 1.3 12.9 20:10 NNE
2 11.8 16.9 14:50 9.8 00:00 6.5 0.0 4.2 2.7 24.1 13:30 NE
3 9.3 11.5 14:20 7.9 00:00 9.0 0.0 6.0 0.8 9.7 3:00 N
4 10.7 17.0 15:10 4.7 6:30 7.7 0.0 1.6 0.6 16.1 18:50 SW
5 11.1 18.5 14:40 6.0 7:30 7.3 0.0 0.2 1.1 16.1 18:50 SSW
6 10.9 16.9 13:50 5.1 6:30 7.4 0.0 0.0 1.1 16.1 16:20 ENE
7 11.3 13.8 14:20 10.1 9:00 7.1 0.0 7.0 3.9 25.7 4:20 NNE
8 12.1 16.6 14:00 9.4 8:00 6.2 0.0 2.8 1.8 22.5 22:40 ENE
9 9.0 10.4 13:10 7.6 00:00 9.3 0.0 0.4 1.8 27.4 10:40 NNE
10 7.9 10.1 13:50 6.6 23:50 10.4 0.0 1.0 4.0 24.1 20:20 NE
11 7.8 10.1 14:20 5.4 5:30 10.6 0.0 0.8 1.1 16.1 11:00 N
12 11.3 18.7 15:30 6.8 7:10 7.0 0.0 0.0 1.3 20.9 14:20 SW
13 11.3 19.1 16:00 4.5 7:40 7.1 0.1 0.0 0.6 12.9 13:10 WSW
14 11.7 20.1 15:40 5.1 6:30 6.8 0.2 0.0 0.6 11.3 15:00 WNW
15 12.6 21.1 15:40 5.2 7:10 6.1 0.3 0.0 0.5 9.7 14:10 SSW
16 14.6 22.3 15:40 8.3 7:10 4.4 0.7 0.0 1.1 11.3 10:40 ENE
17 15.0 24.3 15:10 7.1 6:10 4.6 1.3 0.0 1.0 12.9 7:10 ENE
18 16.0 26.9 15:40 7.2 6:40 4.2 1.9 0.0 0.6 11.3 15:00 SSE
19 17.7 28.4 15:10 8.2 6:50 3.3 2.7 0.0 1.8 24.1 23:40 SW
20 16.6 22.5 16:00 11.1 00:00 2.6 0.8 0.0 2.7 24.1 7:50 N
21 13.8 21.9 16:30 6.7 6:20 5.0 0.6 0.0 0.8 16.1 14:50 ENE
22 14.3 24.1 15:40 5.8 5:40 4.9 0.9 0.0 0.5 9.7 13:50 SW
23 16.4 25.7 16:00 9.8 7:40 3.5 1.6 0.0 0.5 9.7 13:30 ESE
24 16.3 24.9 14:50 10.2 6:10 3.2 1.1 0.0 2.4 29.0 16:10 SSW
25 14.1 21.0 15:40 9.2 6:40 4.5 0.3 0.0 3.9 32.2 14:50 SW
26 12.9 19.0 16:20 9.6 6:10 5.4 0.0 1.6 1.0 12.9 12:50 N
27 14.3 19.2 13:50 11.3 2:30 4.1 0.1 0.2 3.2 33.8 14:20 ENE
28 13.1 19.0 15:40 7.4 6:30 5.3 0.0 0.4 1.4 17.7 15:50 SW
29 14.7 21.2 15:10 10.8 5:40 3.9 0.3 0.2 1.3 19.3 11:30 ENE
30 12.6 17.2 15:30 9.2 00:00 5.4 0.0 0.0 2.6 25.7 4:00 ENE
31 13.1 23.0 17:00 5.2 7:30 6.0 0.7 0.0 0.5 8.0 14:50 SW
-------------------------------------------------------------------------------------
12.7 28.4 19 4.5 13 187.4 13.5 31.8 1.6 33.8 27 ENE
Max >= 32.0: 0
Max <= 0.0: 0
Min <= 0.0: 0
Min <= -18.0: 0
Max Rain: 7.01 ON 07/03/14
Days of Rain: 14 (> .2 mm) 5 (> 2 mm) 0 (> 20 mm)
Heat Base: 18.3 Cool Base: 18.3 Method: Integration
When I simply try to use read.table with header=T, dec=".", sep="" as additional arguments, I get this error:
Error in read.table("C:\\blablabla\\file.txt)
more columns than column names
Execution halted
I think the file is not \t-separated but rather whitespace-separated (sep = ""). I also think this might be caused by the extra text before the table. Would read.csv make a difference?
Any suggestions? Thanks in advance.
The problem is that there is some additional information listed above the column names. Simply skipping it will solve your issue. For this, you can use the skip parameter of read.csv:
dat <- read.csv("/path/to/file.csv", header = TRUE, dec = ".", sep = "", skip = 9)
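
For completeness, a sketch of how read.table could pull out just the daily rows while also avoiding the dashed separator and the summary lines at the bottom of the file; the skip and nrows values and the column names are assumptions based on the excerpt shown, not part of the original answer:
# skip everything up to and including the dashed line, read exactly the
# 31 daily rows, and supply column names by hand because the original
# header is spread over several lines
cols <- c("day", "mean_temp", "high", "high_time", "low", "low_time",
          "heat_deg_days", "cool_deg_days", "rain", "avg_wind_speed",
          "high_wind", "high_wind_time", "dom_dir")
dat <- read.table("C:/blablabla/file.txt", skip = 8, nrows = 31,
                  header = FALSE, col.names = cols, stringsAsFactors = FALSE)
head(dat)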

How to Separate Data Column in R

I have data structured as 4464 rows and 1 column.
The data should be 4464 rows and 8 columns.
The data description says:
Each file has a two line header, followed by data. The columns are: julian day, ten minute interval marker, temperature, pressure, wind speed, and wind direction. The ten minute interval marker is a number between 1 and 144 representing time.
The data comes from here.
There are a total of 12 data files like this, and my goal is to put them all in one 3D array.
But I am stuck on getting the data into the right format.
Example of the data:
Jan 13 Station : 8900 Harry
Lat : 83.00S Long : 121.40W Elev : 957 M
1 1 -7.8 879.0 5.6 360.0 444.0 9.1
1 2 -7.9 879.1 4.6 360.0 444.0 9.1
1 3 -7.6 879.2 4.1 345.0 444.0 9.1
1 4 -7.6 879.3 4.1 339.0 444.0 9.1
1 5 -7.6 879.4 4.8 340.0 444.0 9.1
1 6 -7.9 879.4 3.6 340.0 444.0 9.1
1 7 -8.0 879.3 4.6 340.0 444.0 9.1
1 8 -8.0 879.4 4.1 340.0 444.0 9.1
1 9 -8.2 879.4 5.8 338.0 444.0 9.1
1 10 -8.4 879.5 4.6 339.0 444.0 9.1
I tried and researched a few things, but I don't know what the best way is.
My code was (I could not get it to work with data.frame):
setwd("/Users/Gizmo/Documents/Henry")
dir()
h13 <- dir()
henry <- read.csv(h13[1], skip = 2, header = FALSE)
colnames(c("J-Day", "MinInter", "Temp", "Pressure", "WindSpeed", "WindDir", "Ext1", "Ext2"))
I looked at other questions and guides, and data.frame seems like the best way, but I could not get the code to work (I ended up with data dimensions of NULL).
Please give me advice on this. Thank you.
Your problem seems to be using read.csv instead of read.table:
henry <- read.table(text='Jan 13 Station : 8900 Harry
Lat : 83.00S Long : 121.40W Elev : 957 M
1 1 -7.8 879.0 5.6 360.0 444.0 9.1
1 2 -7.9 879.1 4.6 360.0 444.0 9.1
1 3 -7.6 879.2 4.1 345.0 444.0 9.1
1 4 -7.6 879.3 4.1 339.0 444.0 9.1
1 5 -7.6 879.4 4.8 340.0 444.0 9.1
1 6 -7.9 879.4 3.6 340.0 444.0 9.1
1 7 -8.0 879.3 4.6 340.0 444.0 9.1
1 8 -8.0 879.4 4.1 340.0 444.0 9.1
1 9 -8.2 879.4 5.8 338.0 444.0 9.1
1 10 -8.4 879.5 4.6 339.0 444.0 9.1', header=FALSE, skip=2)
names(henry) <- c("J-Day","MinInter","Temp","Pressure","WindSpeed","WindDir","Ext1","Ext2")
Result:
> henry
J-Day MinInter Temp Pressure WindSpeed WindDir Ext1 Ext2
1 1 1 -7.8 879.0 5.6 360 444 9.1
2 1 2 -7.9 879.1 4.6 360 444 9.1
3 1 3 -7.6 879.2 4.1 345 444 9.1
4 1 4 -7.6 879.3 4.1 339 444 9.1
5 1 5 -7.6 879.4 4.8 340 444 9.1
6 1 6 -7.9 879.4 3.6 340 444 9.1
7 1 7 -8.0 879.3 4.6 340 444 9.1
8 1 8 -8.0 879.4 4.1 340 444 9.1
9 1 9 -8.2 879.4 5.8 338 444 9.1
10 1 10 -8.4 879.5 4.6 339 444 9.1
> str(henry)
'data.frame': 10 obs. of 8 variables:
$ J-Day : int 1 1 1 1 1 1 1 1 1 1
$ MinInter : int 1 2 3 4 5 6 7 8 9 10
$ Temp : num -7.8 -7.9 -7.6 -7.6 -7.6 -7.9 -8 -8 -8.2 -8.4
$ Pressure : num 879 879 879 879 879 ...
$ WindSpeed: num 5.6 4.6 4.1 4.1 4.8 3.6 4.6 4.1 5.8 4.6
$ WindDir : num 360 360 345 339 340 340 340 340 338 339
$ Ext1 : num 444 444 444 444 444 444 444 444 444 444
$ Ext2 : num 9.1 9.1 9.1 9.1 9.1 9.1 9.1 9.1 9.1 9.1
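
For the real files (rather than text pasted into read.table) and the stated goal of stacking all 12 months into one 3D array, a rough sketch along the same lines; the directory, the file listing, and the assumption that every file has the same number of rows (4464) are mine:
# read every monthly file with the same two-line skip and column names
col_names <- c("J-Day", "MinInter", "Temp", "Pressure",
               "WindSpeed", "WindDir", "Ext1", "Ext2")
files <- dir("/Users/Gizmo/Documents/Henry", full.names = TRUE)

monthly <- lapply(files, function(f) {
  read.table(f, skip = 2, header = FALSE, col.names = col_names)
})

# stack into a rows x columns x month array; this only works if every
# file really does have the same number of rows
henry_array <- array(
  unlist(lapply(monthly, as.matrix)),
  dim = c(nrow(monthly[[1]]), length(col_names), length(monthly)),
  dimnames = list(NULL, col_names, basename(files))
)
dim(henry_array)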

When applying the BMA package in R: Error in terms.formula(formula, special, data = data) : '.' in formula and no 'data' argument

I am using the BMA package in R (bic.surv) to estimate a Cox proportional hazards model from a large set of variables (100 base variables and about 60 lags for each of them). When I try the first round of testing with the following code, it works:
x1 <- x[, c("comprisk", "compriskL1", "compriskL2", "compriskL3", "compriskL4",
            "econrisk", "econrisk_1", "econrisk_2", "econrisk_3", "econrisk_4", "econrisk_5",
            "finrisk", "finrisk_1", "finrisk_2", "finrisk_3", "finrisk_4", "finrisk_5",
            "polrisk", "polrisk_1", "polrisk_2", "polrisk_3", "polrisk_4", "polrisk_5",
            "polrisk_6", "polrisk_7", "polrisk_8", "polrisk_9", "polrisk_10",
            "polrisk_11", "polrisk_12")]
surv.t <- x$crisis1
cens <- x$cen1
test.bic.surv <- bic.surv(x1, surv.t, cens, factor.type = FALSE, strict = FALSE, nbest = 2000)
However, whenever I try to add any more independent variables to x1, such as "comprisk5L" or "econriskL1", the call
test.bic.surv <- bic.surv(x1, surv.t, cens, factor.type = FALSE, strict = FALSE, nbest = 2000)
gives me this error:
"Error in terms.formula(formula, special, data = data) : '.' in formula and no 'data' argument".
I have searched the web for several days but couldn't figure out where the problem was. Can anyone please tell me what to do? Thank you in advance!
Here is what the sample data looks like:
crisis1 cen1 comprisk econrisk econrisk_1 econrisk_2 econrisk_3 econrisk_4
1 0 1 57.0 25.5 3.3 6.7 4.0 6.7
2 0 1 57.0 25.5 3.3 6.7 4.0 6.7
3 0 1 57.0 25.5 3.3 6.7 4.0 6.7
4 0 1 58.5 26.5 3.8 7.5 4.0 7.5
5 0 1 58.5 27.0 3.8 7.5 4.0 7.5
6 0 1 58.5 26.0 3.8 7.5 4.0 7.5
7 0 1 59.0 26.5 3.8 7.5 4.0 7.5
8 0 1 59.0 26.5 3.8 7.5 4.0 7.5
9 0 1 59.0 27.0 3.8 7.5 4.0 7.5
10 0 1 59.0 26.5 3.8 7.5 4.0 7.5
11 0 1 59.0 26.5 3.8 7.5 4.0 7.5
12 0 1 59.0 27.0 3.8 7.5 4.0 7.5
13 0 1 59.0 27.0 3.8 7.5 4.0 7.5
14 0 1 57.5 27.0 3.8 7.5 4.0 7.5
15 0 1 57.5 27.5 3.8 7.5 4.0 7.5
16 0 1 57.0 27.5 3.3 6.7 4.0 6.7
17 0 1 57.0 27.5 3.3 6.7 4.0 6.7
18 0 1 57.0 27.5 3.3 6.7 4.0 6.7
19 0 1 56.0 27.0 3.3 6.7 4.0 6.7
20 0 1 56.5 28.5 2.9 5.8 4.0 5.8
21 0 1 55.5 26.5 2.9 5.8 4.0 5.8
22 0 1 55.0 26.0 2.9 5.8 4.0 5.8
23 0 1 55.0 26.0 2.9 5.8 4.0 5.8
24 0 1 55.0 26.0 2.9 5.8 4.0 5.8
25 0 1 55.0 26.0 2.9 5.8 4.0 5.8
26 0 1 54.5 25.5 2.9 5.8 6.5 5.8
27 0 1 54.0 25.5 2.9 5.8 6.5 5.8
28 0 1 53.5 25.5 2.5 5.0 6.5 5.0
29 0 1 53.5 25.5 2.5 5.0 6.5 5.0
30 0 1 54.0 26.5 2.5 5.0 6.5 5.0
31 0 1 54.0 26.5 2.5 5.0 6.5 5.0
32 0 1 54.0 26.5 2.5 5.0 6.5 5.0
33 0 1 56.0 26.5 2.5 5.0 6.5 5.0
34 0 1 56.0 27.0 2.5 5.0 6.5 5.0
35 0 1 57.0 27.0 2.5 5.0 6.5 5.0
36 0 1 58.0 27.0 2.9 5.8 6.5 5.8
37 1 1 59.0 28.5 2.9 5.8 6.5 5.8
38 1 1 60.0 29.5 2.9 5.8 6.5 5.8
39 1 1 59.5 29.5 2.9 5.8 6.5 5.8
40 1 1 60.0 29.5 2.9 5.8 6.5 5.8
41 1 1 59.5 29.5 2.9 5.8 6.5 5.8
42 1 1 59.0 28.0 2.9 5.8 6.5 5.8
43 1 1 59.5 28.0 2.9 5.8 6.5 5.8
44 1 1 59.5 28.0 2.9 5.8 6.5 5.8
45 1 1 59.5 28.5 2.9 5.8 6.5 5.8
46 1 1 56.0 28.0 2.9 5.8 6.5 5.8
47 1 1 54.0 28.0 2.5 5.0 6.5 5.0
48 1 1 53.0 24.5 2.1 4.2 6.5 4.2
49 1 1 53.0 25.0 2.1 4.2 6.5 4.2
50 1 1 54.0 26.0 2.1 4.2 6.5 4.2
51 1 1 54.5 26.0 2.1 4.2 6.5 4.2
52 1 1 54.5 25.5 2.1 4.2 6.5 4.2
53 1 1 54.0 24.0 2.1 4.2 6.0 4.2
54 1 1 54.0 24.0 2.1 4.2 6.0 4.2
55 1 1 55.0 24.0 2.1 4.2 6.0 4.2
56 1 1 55.0 24.0 2.1 4.2 6.0 4.2
57 1 1 55.0 24.0 2.1 4.2 6.0 4.2
58 1 1 55.0 24.5 2.1 4.2 6.0 4.2
59 1 1 55.0 24.5 2.1 4.2 6.0 4.2
60 1 1 55.0 25.0 2.1 4.2 6.0 4.2
61 1 1 55.0 23.5 2.1 4.2 6.0 4.2
62 1 1 55.0 24.0 2.1 4.2 6.0 4.2
63 1 1 55.0 23.5 2.1 4.2 6.5 4.2
64 1 1 55.0 23.5 1.7 3.3 6.5 3.3
65 1 1 55.0 22.5 1.7 3.3 6.5 3.3
66 1 1 56.0 25.5 1.3 2.5 6.5 2.5
67 1 1 56.0 25.5 1.3 2.5 6.5 2.5
68 1 1 56.5 25.0 1.3 2.5 6.5 2.5
69 1 1 58.5 29.5 1.3 2.5 6.5 2.5
70 1 1 58.5 28.5 1.3 2.5 6.5 2.5
71 1 1 58.5 28.5 1.3 2.5 6.5 2.5
72 1 1 59.5 29.5 1.3 2.5 6.5 2.5
73 1 1 61.5 33.0 1.3 2.5 6.0 2.5
74 1 1 61.0 33.0 1.3 2.5 6.0 2.5
75 1 1 61.5 32.0 1.7 3.3 6.0 3.3
76 1 1 59.5 32.0 1.7 3.3 6.0 3.3
77 1 1 60.0 32.5 1.7 3.3 6.0 3.3
78 1 1 57.5 32.5 2.1 4.2 6.0 4.2
79 1 1 58.0 33.0 2.1 4.2 6.0 4.2
80 1 1 58.5 32.5 2.1 4.2 6.0 4.2
81 1 1 57.5 31.5 2.1 4.2 5.0 4.2
82 1 1 57.5 31.5 2.1 4.2 5.0 4.2
83 1 1 59.0 31.5 2.5 5.0 5.0 5.0
84 1 1 58.5 30.5 2.5 5.0 4.0 5.0
85 0 1 55.5 27.5 2.5 5.0 3.5 5.0
86 0 1 54.0 27.5 2.5 5.0 3.5 5.0
87 0 1 53.5 27.0 2.5 5.0 3.5 5.0
88 0 1 53.0 27.0 2.5 5.0 3.5 5.0
89 0 1 53.0 27.5 2.1 4.2 3.5 4.2
90 0 1 52.5 27.0 2.1 4.2 3.5 4.2
91 0 1 50.5 27.5 2.1 4.2 3.5 4.2
92 0 1 51.5 27.5 2.1 4.2 3.5 4.2
93 0 1 51.5 27.0 2.5 5.0 3.5 5.0
94 0 1 52.0 27.0 2.5 5.0 3.5 5.0
95 0 1 52.0 27.0 2.5 5.0 3.5 5.0
96 0 1 52.0 28.0 2.5 5.0 3.5 5.0
97 0 1 52.5 28.5 2.5 5.0 3.5 5.0
98 0 1 54.0 28.5 2.5 5.0 3.5 5.0
99 0 1 54.0 29.0 2.5 5.0 4.0 5.0
100 0 1 53.0 28.0 2.5 5.0 4.0 5.0
101 0 1 52.5 28.0 2.1 4.2 3.5 4.2
102 0 1 52.5 28.0 2.1 4.2 3.5 4.2
103 0 1 53.0 28.0 2.1 4.2 3.5 4.2
104 0 1 53.0 28.0 2.1 4.2 3.5 4.2
105 0 1 52.5 26.0 2.1 4.2 4.0 4.2
106 0 1 54.0 26.5 2.1 4.2 4.0 4.2
107 0 1 53.5 26.5 2.1 4.2 4.0 4.2
108 0 1 53.5 26.5 2.1 4.2 4.0 4.2
109 1 1 56.0 29.5 2.1 4.2 5.0 4.2
110 1 1 53.5 27.0 2.1 4.2 4.0 4.2
111 1 1 53.5 27.0 2.1 4.2 4.0 4.2
112 1 1 53.5 26.5 2.1 4.2 5.0 4.2
113 1 1 54.0 26.5 2.1 4.2 5.0 4.2
114 1 1 52.5 24.0 2.1 4.2 4.0 4.2
115 1 1 53.0 24.5 2.1 4.2 5.0 4.2
116 1 1 54.0 26.0 2.1 4.2 4.0 4.2
117 1 1 54.0 26.0 2.1 4.2 4.0 4.2
118 1 1 54.5 26.0 2.1 4.2 4.0 4.2
119 1 1 52.5 24.5 2.1 4.2 3.5 4.2
120 1 1 52.5 24.5 2.1 4.2 3.5 4.2
121 1 1 54.0 27.5 2.1 4.2 4.0 4.2
122 1 1 54.0 27.5 2.1 4.2 4.0 4.2
123 1 1 53.0 28.5 2.1 4.2 4.0 4.2
124 1 1 53.0 28.5 2.1 4.2 4.0 4.2
125 1 1 52.5 28.0 2.1 4.2 4.0 4.2
126 1 1 52.5 27.5 2.1 4.2 4.0 4.2
127 1 1 53.0 28.0 2.1 4.2 4.5 4.2
128 1 1 53.5 28.0 2.5 5.0 4.5 5.0
129 1 1 54.5 28.0 2.5 5.0 4.5 5.0
130 1 1 54.0 26.5 2.5 5.0 3.5 5.0
131 1 1 53.5 26.0 2.5 5.0 3.5 5.0
132 1 1 54.5 26.5 2.5 5.0 3.5 5.0
133 0 1 55.5 28.0 2.5 5.0 3.5 5.0
134 0 1 56.0 28.0 2.5 5.0 3.5 5.0
135 0 1 56.0 28.0 2.5 5.0 3.5 5.0
136 0 1 54.5 27.5 2.5 5.8 3.5 5.8
137 0 1 56.0 24.5 2.9 5.8 5.0 5.8
138 0 1 58.5 29.0 2.9 5.8 5.0 5.8
139 0 1 57.5 28.5 2.9 5.8 5.0 5.8
140 0 1 57.0 28.5 2.9 5.8 5.0 5.8
141 0 1 57.0 28.5 2.9 5.8 5.0 5.8
142 0 1 58.0 28.5 2.9 5.8 5.0 5.8
143 0 1 58.0 29.5 2.9 5.8 5.0 5.8
144 0 1 59.0 29.5 2.9 5.8 5.0 5.8
145 0 1 59.0 31.0 2.9 5.8 5.5 5.8
146 0 1 59.0 31.0 2.9 5.8 5.5 5.8
147 0 1 58.5 31.0 2.9 5.8 5.5 5.8
148 0 1 58.5 31.0 2.9 5.8 5.5 5.8
149 0 1 58.5 32.0 2.5 5.0 5.5 5.0
150 0 1 58.0 32.0 2.5 5.0 5.5 5.0
151 0 1 56.8 32.5 2.5 5.0 5.5 5.0
152 0 1 58.3 31.5 3.8 7.5 5.5 7.5
153 0 1 59.0 37.0 0.5 8.5 5.5 9.5
154 0 1 59.2 37.5 1.0 8.5 5.5 9.5
155 0 1 61.0 39.5 0.5 9.0 8.0 9.0
156 0 1 60.5 39.5 0.5 9.0 8.0 9.0
157 0 1 60.0 39.5 0.5 9.0 8.0 9.0
158 0 1 59.2 39.0 0.5 8.5 8.0 9.0
159 0 1 59.5 39.5 0.5 8.5 8.5 9.0
160 0 1 59.5 39.5 0.5 8.5 8.5 9.0
161 0 1 59.5 39.5 0.5 8.5 8.5 9.0
162 0 1 59.2 39.0 0.5 8.0 8.5 9.0
163 0 1 58.7 39.0 0.5 8.0 8.5 9.0
164 0 1 58.5 38.5 0.5 7.5 8.5 9.0
165 0 1 58.0 35.0 1.0 4.0 8.5 8.0
166 0 1 57.0 35.0 1.0 4.0 8.5 8.0
167 0 1 56.2 33.5 0.5 4.0 7.5 8.0
168 0 1 56.5 34.0 1.0 4.0 7.5 8.0
169 0 1 54.7 33.5 1.0 8.5 7.5 6.0
170 0 1 52.7 30.5 1.0 6.0 7.5 6.0
171 0 1 52.7 30.5 1.0 6.0 7.5 6.0
172 0 1 54.0 33.0 1.0 8.5 7.5 6.0
173 0 1 52.1 32.7 0.2 8.5 8.0 6.0
174 0 1 50.8 32.2 0.2 8.0 8.0 6.0
175 0 1 52.1 32.2 0.2 8.0 8.0 6.0
176 0 1 51.9 32.2 0.2 8.0 8.0 6.0
177 0 1 51.7 31.5 1.0 7.0 7.5 6.0
178 0 1 51.5 31.5 1.0 7.0 7.5 6.0
179 0 1 52.7 31.5 1.0 7.0 7.5 6.0
180 0 1 52.5 31.5 1.0 7.0 7.5 6.0
181 0 1 54.5 33.5 1.0 8.5 8.5 3.5
182 0 1 55.5 33.5 1.0 8.5 8.5 3.5
183 0 1 56.7 35.0 1.0 9.0 8.5 3.5
184 0 1 56.2 35.0 1.0 9.0 8.5 3.5
185 0 1 55.5 35.0 1.0 9.0 8.5 3.5
186 0 1 56.2 35.0 1.0 9.0 8.5 3.5
187 0 1 56.7 35.0 1.0 9.0 8.5 3.5
188 0 1 56.0 34.0 1.0 9.0 7.5 3.5
189 0 1 55.0 34.0 1.0 9.0 7.5 3.5
190 0 1 55.5 34.0 1.0 9.0 7.5 3.5
191 0 1 55.2 34.0 1.0 9.0 7.5 3.5
192 0 1 59.0 37.0 1.0 9.0 8.5 3.5
193 0 1 62.2 42.0 1.0 9.5 8.0 8.5
194 0 1 61.8 42.0 1.0 9.5 8.0 8.5
195 0 1 60.2 41.0 1.0 9.5 8.0 8.5
196 0 1 63.7 41.0 1.0 9.5 8.0 8.5
197 0 1 60.2 37.0 1.0 8.5 8.0 8.5
198 0 1 64.2 42.0 1.0 9.5 9.0 8.5
199 0 1 63.0 40.0 1.0 8.5 8.0 8.5
200 0 1 61.5 38.5 1.0 8.5 8.0 8.5
201 0 1 61.7 38.5 1.0 8.5 8.0 8.5
202 0 1 62.0 38.5 1.0 8.5 8.0 8.5
203 0 1 62.0 38.5 1.0 8.5 8.0 8.5
204 0 1 62.2 38.5 1.0 8.5 8.0 8.5
205 0 1 61.5 38.5 1.0 8.5 8.0 8.5
206 0 1 61.2 38.0 1.0 8.5 8.0 8.5
207 0 1 60.5 38.0 1.0 8.5 8.0 8.5
208 0 1 61.0 38.0 1.0 8.5 8.0 8.5
209 0 1 61.5 38.0 1.0 8.5 8.0 8.5
210 0 1 61.7 38.0 1.0 8.5 8.0 8.5
211 0 1 62.0 38.0 1.0 8.5 8.0 8.5
212 0 1 61.7 38.0 1.0 8.5 8.0 8.5
213 0 1 61.5 38.0 1.0 8.5 8.0 8.5
214 0 1 61.2 38.0 1.0 8.5 8.0 8.5
215 0 1 63.7 40.5 1.0 8.0 9.0 8.5
216 0 1 63.7 40.5 1.0 8.0 9.0 8.5
217 0 1 63.7 40.5 1.0 8.0 9.0 8.5
218 0 1 65.7 43.5 1.0 9.5 8.5 9.5
219 0 1 65.5 43.5 1.0 9.5 8.5 9.5
220 0 1 65.5 43.5 1.0 9.5 8.5 9.5
221 0 1 65.0 43.5 1.0 9.5 8.5 9.5
222 0 1 65.0 43.5 1.0 9.5 8.5 9.5
223 0 1 65.0 43.5 1.0 9.5 8.5 9.5
224 0 1 66.2 43.5 1.0 10.0 9.5 8.0
225 0 1 66.2 43.5 1.0 10.0 9.5 8.0
226 0 1 66.2 43.5 1.0 10.0 9.5 8.0
227 0 1 66.0 44.0 1.0 10.0 9.5 8.5
228 0 1 65.7 44.0 1.0 10.0 9.5 8.5
229 0 1 65.5 43.5 1.0 9.5 9.5 8.5
230 0 1 65.5 43.0 1.0 10.0 9.0 8.5
231 0 1 65.5 43.0 1.0 10.0 9.0 8.5
232 0 1 68.2 43.0 1.0 10.0 9.0 8.5
233 0 1 71.5 44.5 1.0 10.0 9.0 9.5
234 0 1 71.7 44.5 1.0 10.0 9.0 9.5
235 0 1 73.2 44.5 1.0 10.0 9.0 9.5
236 0 1 74.7 44.5 1.0 10.0 9.0 9.5
237 0 1 74.7 44.5 1.0 10.0 9.0 9.5
238 0 1 74.7 44.5 1.0 10.0 9.0 9.5
239 0 1 75.5 45.0 1.0 10.0 9.0 10.0
240 0 1 75.5 45.0 1.0 10.0 9.0 10.0
241 0 1 76.0 45.0 1.0 10.0 9.0 10.0
242 0 1 76.7 44.5 1.0 10.0 8.5 10.0
243 0 1 76.7 44.5 1.0 10.0 8.5 10.0
244 0 1 76.7 44.5 1.0 10.0 8.5 10.0
245 0 1 78.0 44.5 1.0 10.0 8.5 10.0
246 0 1 78.0 44.5 1.0 10.0 8.5 10.0
247 0 1 77.0 44.5 1.0 10.0 8.5 10.0
248 0 1 77.2 44.5 1.0 10.0 8.5 10.0
249 0 1 77.2 44.5 1.0 10.0 8.5 10.0
250 0 1 77.7 44.5 1.0 10.0 8.5 10.0
Here is your answer:
test.bic.surv <- bic.surv(
  x[, 3:ncol(x)],
  x$crisis1, x$cen1,
  factor.type = FALSE, strict = FALSE, nbest = 2000, maxCol = 50
)
You have to provide the maxCol parameter. The default is 30, which is probably not enough for your needs.
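
As a side note, a sketch of how the larger candidate set could be selected by name pattern instead of listing every column by hand, assuming the lagged variables share the base-variable prefixes; the pattern and the maxCol choice here are illustrative only:
# pick every column whose name starts with one of the base risk prefixes
risk_cols <- grep("^(comprisk|econrisk|finrisk|polrisk)", names(x), value = TRUE)
x1 <- x[, risk_cols]

# maxCol must be large enough to accommodate all candidate variables
test.bic.surv <- bic.surv(x1, x$crisis1, x$cen1,
                          factor.type = FALSE, strict = FALSE, nbest = 2000,
                          maxCol = length(risk_cols) + 1)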
