Data table from raw text file: number of columns varies

I have a raw text file in the following format:
RELEASE VERSION: 20150514 (May 14, 2015)
======================================================================== VERSION
STUDY VARIABLE: Version Number Of Release
QUESTION:
--------- Version of Cumulative Data File
NOTES:
------ This variable appears in the data as:
ANES_cdf_VERSION:YYYY-mmm-DD where mmm is standard 3-character month abbreviation (Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec).
TYPE:
----- Character-1
======================================================================== VCF0004
STUDY VARIABLE: Year of Study
QUESTION:
--------- Year of study (4-digit)
TYPE:
----- Numeric Dec 0-1
===================================================================== VCF0006
...
and so on
Observations are delimited by rows of "=" characters, and each observation contains some subset of the variables (not all of them may be present).
I am trying to create a data table out of it.
I created a vector of observations in which the columns of each observation are separated by '|'. Then I use fread to make a data table:
dt <- fread(paste(rawObs, collapse = '\n'), sep = '|', header = FALSE, fill = TRUE)
However, this is not really a solution: fill = TRUE only pads missing columns at the end of an observation, not in between.
For the example above, the result should be:
id | study_var | question | notes | type
version | s1 | q1 | notes1 | character-1
VCF0004 | s2 | q2 | NA | numeric
But R creates it as
id | study_var | question | notes | type
version | s1 | q1 | notes1 | character-1
VCF0004 | s2 | q2 | numeric | NA
The type of the second observation is shifted leftward. As a workaround, I was thinking of determining the missing columns within each observation and inserting NAs explicitly into the input, based on the maximum number of variables found, but that might be slow for large files.
Thanks for the help. Any comments are appreciated.
Here is all code:
library(magrittr)
library(data.table)
path <- 'Downloads/anes_timeseries_cdf_codebook_var.txt'
raw_data <- readLines(path)
head(raw_data)
#remove empty lines
raw_data <- raw_data[raw_data != ""]
#remove header
raw_data <- raw_data[-c(1,2)]
data_entries_index <- grep('^=+', raw_data)+1
#add end position of the last observation
data_entries_index <- c(data_entries_index, length(raw_data))
#opening file shows editor couldn't read two characters - we can ignore it though
data_entries_index
parseRawObservation <- function(singleRawObs, VariableIndex) {
  count <- length(VariableIndex) - 1
  for (i in 1:count) {
    start <- VariableIndex[i] + 2
    end <- VariableIndex[i + 1] - 1
    varValue <- paste(singleRawObs[start:end], collapse = ' ')
    if (i == 1)
      obsSpaced <- varValue
    else
      obsSpaced <- paste(obsSpaced, varValue, sep = '|')
  }
  obsSpaced
}
#create a vector of raw observations
numObs <- length(data_entries_index)
count <- numObs - 1
rawObs <- vector()
for (i in 1:count) {
  start <- data_entries_index[i]
  end <- data_entries_index[i + 1] - 2
  singleRawObs <- raw_data[start:end]
  VariableIndex <- grep("^-+", singleRawObs) - 1
  #add end of the last variable index
  VariableIndex <- c(VariableIndex, length(singleRawObs) + 1)
  rawObs[i] <- parseRawObservation(singleRawObs, VariableIndex)
  #add first two columns separately as they do not have dashes at the next line
  rawObs[i] <- paste(singleRawObs[1], singleRawObs[2], rawObs[i], sep = '|')
}
#determine max number of fields ('|' must be matched literally, hence fixed = TRUE)
numOfCol <- max(sapply(rawObs, FUN = function(x) length(strsplit(x, '|', fixed = TRUE)[[1]])))
which.max(sapply(rawObs, FUN = function(x) length(strsplit(x, '|', fixed = TRUE)[[1]])))
dt <- fread(paste(rawObs, collapse = '\n'), sep = '|', header = FALSE)
dt <- fread(paste(rawObs[1:2], collapse = '\n'),sep = '|',header = F, fill = T)
rawObs[653]

There is a handy alternative for reading files like this one: read.dcf().
read.dcf() reads files in Debian Control Format (DCF), which consists of regular lines of the form tag: value. Records are separated by one or more empty lines.
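For a quick feel of what read.dcf() expects, here is a toy in-memory example (the tags are made up and unrelated to the codebook):

```r
# two records separated by a blank line, each a set of "tag: value" lines
txt <- "id: VERSION\nTYPE: Character-1\n\nid: VCF0004\nTYPE: Numeric Dec 0-1"
m <- read.dcf(textConnection(txt))
# read.dcf() returns a character matrix with one column per tag
# and one row per record; absent tags would come back as NA
m
```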
However, the input file needs to be modified to conform with the DCF format (plus some additional modifications to meet the OP's expected result):
- Empty rows need to be removed, as they would be mistaken for record separators.
- The streaks of equals signs = used as record separators need to be replaced by empty lines plus the missing id: tag.
- The streaks of dashes should be removed.
- The first row containing RELEASE VERSION: should be removed to be in line with the OP's expectations.
The code below assumes that the raw text file is named "raw.txt".
library(data.table)
library(magrittr)
# read raw file, skip first row
raw <- fread("raw.txt", sep = "\n", header = FALSE, skip = 1L)
# replace streaks of "=" and "-"
raw[, V1 := V1 %>%
      stringr::str_replace("[=]+", "\n\nid:") %>%
      stringr::str_replace(": [-]+", ": ")][]
# now read the modified data using DCF format skipping empty rows
dt <- as.data.table(read.dcf(textConnection(raw[V1 != "", V1])))
dt
id STUDY VARIABLE QUESTION
1: VERSION Version Number Of Release Version of Cumulative Data File
2: VCF0004 Year of Study Year of study (4-digit)
3: VCF0006 NA NA
NOTES
1: This variable appears in the data as: ANES_cdf_VERSION:YYYY-mmm-DD [...]
2: NA
3: NA
TYPE
1: Character-1
2: Numeric Dec 0-1
3: NA
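The column shifting can also be avoided without converting to DCF: if each observation is parsed into a named list, data.table::rbindlist(fill = TRUE) aligns columns by name rather than by position, so a missing NOTES field simply becomes NA. A minimal sketch on toy observation lines (parse_obs_named and the toy data are illustrative, not the actual codebook parser):

```r
library(data.table)

# Parse one observation's lines into a named list so that columns
# can later be matched by name instead of by position.
parse_obs_named <- function(obs_lines) {
  hdr <- grep("^[A-Z ]+:", obs_lines)  # field header lines like "TYPE:"
  vals <- lapply(seq_along(hdr), function(i) {
    end  <- if (i < length(hdr)) hdr[i + 1L] - 1L else length(obs_lines)
    rest <- if (end > hdr[i]) obs_lines[(hdr[i] + 1L):end] else character(0)
    # inline remainder after the colon plus continuation lines,
    # with the dashed underline prefixes stripped
    txt <- c(sub("^[A-Z ]+:\\s*", "", obs_lines[hdr[i]]),
             sub("^-+\\s*", "", rest))
    trimws(paste(txt[nzchar(txt)], collapse = " "))
  })
  names(vals) <- sub(":.*$", "", obs_lines[hdr])
  vals
}

obs1 <- c("STUDY VARIABLE: s1", "QUESTION:", "--------- q1",
          "NOTES:", "------ notes1", "TYPE:", "----- Character-1")
obs2 <- c("STUDY VARIABLE: s2", "QUESTION:", "--------- q2",
          "TYPE:", "----- Numeric")  # NOTES is missing here

# columns are matched by name, so TYPE stays in the TYPE column
# and NOTES becomes NA for the second row
dt <- rbindlist(lapply(list(obs1, obs2), parse_obs_named), fill = TRUE)
```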

Related

How to plot multiple, separate graphs in R

I have a dataset of over 300K rows spanning more than 20 years. I'm trying to create a Load Duration Curve for every year, i.e. the number of MW used in every hour of the year (8760 hours per year, or 8784 in a leap year). Currently I make a new dataframe by filtering by year, reordering in descending order of MW used (descending for the curve), and then creating another column that matches the row order so I can use it as a placeholder for the x-axis. This seems pretty inefficient and could be difficult to update if needed (see the playground code for what I've been doing). I also don't want to use facet_wrap() because the resulting graphs are too small for what is needed.
Dummy_file:
Where hrxhr is the running total of hours in a given year.
YEAR  MONTH  DAY  HOUR OF DAY  MW    Month_num  Date        Date1  hrxhr
2023  Dec    31   22           2416  12         2023-12-31  365    8758
2023  Dec    31   23           2412  12         2023-12-31  365    8759
2023  Dec    31   24           2400  12         2023-12-31  365    8760
2024  Jan    01   1            2271  12         2024-01-01  1      1
2023  Jan    01   2            2264  12         2024-01-01  1      2
### ------------ Load in source ------------ ###
library(readr)        # read_csv()
library(plotly)       # plot_ly()
library(htmlwidgets)  # saveWidget()
dummy_file <- 'Dummydata.csv'
forecast_df <- read_csv(dummy_file)
### ---- Order df by MW (load) and YEAR ---- ###
ordered_df <- forecast_df[order(forecast_df$MW, decreasing = TRUE), ]
ordered_df <- ordered_df[order(ordered_df$YEAR, decreasing = FALSE), ]
### -------------- Playground -------------- ###
## Create a dataframe for the forecast for calendar year 2023
cy23_df <- ordered_df[ordered_df$YEAR == 2023,]
## Add placeholder column for graphing purposes (add order number)
cy23_df$placeholder <- row.names(cy23_df)
## Check df structure and change columns as needed
str(cy23_df)
# Change placeholder column from character to numeric for graphing purposes
cy23_df$placeholder <- as.numeric(cy23_df$placeholder)
# Check if changed correctly
class(cy23_df$placeholder) #YES
## Load duration curve - Interactive
LF_cy23_LDC <- plot_ly(cy23_df,
                       x = ~placeholder,
                       y = ~MW,
                       type = 'scatter',
                       mode = 'lines',
                       hoverinfo = 'text',
                       text = paste("Megawatts: ", cy23_df$MW,
                                    "Date: ", cy23_df$MONTH, cy23_df$DAY,
                                    "Hour: ", cy23_df$hrxhr)) %>%
  layout(title = 'CY2023 Load Forecast - LDC')
# "Hour: ", orderby_MW$yrhour))
saveWidget(LF_cy23_LDC, "cy23_LDC.html")
Current Output for CY2023:
The y-axis is megawatts used (MW) and the x-axis is the placeholder column. I then just repeat the playground code for the rest of the years, changing 2023 to 2024, then 2025, etc.
Sorry if this is a long post, tmi, or not enough information. I'm fairly new to R and this community. Many thanks for your help!
Simply generalize your playground process in a user-defined method, then iterate through years with lapply.
# USER DEFINED METHOD TO RUN A SINGLE YEAR
build_year_plot <- function(year) {
  ### -------------- Playground -------------- ###
  ## Create a dataframe for the forecast for calendar year
  cy_df <- ordered_df[ordered_df$YEAR == year, ]
  ## Add placeholder column for graphing purposes (add order number)
  cy_df$placeholder <- row.names(cy_df)
  ## Check df structure and change columns as needed
  str(cy_df)
  # Change placeholder column from character to numeric for graphing purposes
  cy_df$placeholder <- as.numeric(cy_df$placeholder)
  # Check if changed correctly
  class(cy_df$placeholder) #YES
  ## Load duration curve - Interactive
  LF_cy_LDC <- plot_ly(
    cy_df, x = ~placeholder, y = ~MW, type = 'scatter',
    mode = 'lines', hoverinfo = 'text',
    text = paste(
      "Megawatts: ", cy_df$MW,
      "Date: ", cy_df$MONTH, cy_df$DAY,
      "Hour: ", cy_df$hrxhr
    )
  ) |> layout( # USING BASE R 4.1.0+ PIPE
    title = paste0('CY', year, ' Load Forecast - LDC')
  )
  saveWidget(LF_cy_LDC, paste0("cy", year - 2000, "_LDC.html"))
  return(LF_cy_LDC)
}
# CALLER TO RUN THROUGH SEVERAL YEARS
LF_cy_plots <- lapply(2023:2025, build_year_plot)
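The per-year transformation itself (sort MW descending, attach a 1..n rank for the x-axis) can also be sketched in base R with split + lapply, using a fresh sequence rather than row names; the toy data below is made up for illustration:

```r
# toy hourly data for two years (illustrative values only)
df <- data.frame(YEAR = rep(c(2023, 2024), each = 4),
                 MW   = c(5, 9, 7, 3, 8, 2, 6, 4))

# per year: sort MW descending and attach a clean 1..n rank for the x-axis
ldc <- do.call(rbind, lapply(split(df, df$YEAR), function(d) {
  d <- d[order(d$MW, decreasing = TRUE), ]
  d$placeholder <- seq_len(nrow(d))  # fresh sequence, no reliance on row names
  d
}))
```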
Consider even by (an object-oriented wrapper to tapply, roughly equivalent to split + lapply) to avoid the year indexing. Notice the input parameter changes below and the variables used in the title and filename:
# USER DEFINED METHOD TO RUN A SINGLE DATA FRAME
build_year_plot <- function(cy_df) {
  ### -------------- Playground -------------- ###
  ## Add placeholder column for graphing purposes (add order number)
  cy_df$placeholder <- row.names(cy_df)
  ...SAME AS ABOVE...
  ) %>% layout(
    title = paste0('CY', cy_df$YEAR[1], ' Load Forecast - LDC')
  )
  saveWidget(LF_cy_LDC, paste0("cy", cy_df$YEAR[1] - 2000, "_LDC.html"))
  return(LF_cy_LDC)
}
# CALLER TO RUN THROUGH SEVERAL YEARS
LF_cy_plots <- by(ordered_df, ordered_df$YEAR, build_year_plot)
The counterparts in the tidyverse would be purrr::map:
# METHOD RECEIVES YEAR (lapply counterpart)
LF_cy_plots <- purrr::map(2023:2025, build_year_plot)
# METHOD RECEIVES DATA FRAME (by counterpart)
LF_cy_plots <- ordered_df %>%
  split(.$YEAR) %>%
  purrr::map(build_year_plot)

Dealing with character variables containing semicolons in CSV files

I have a semicolon-separated file in which one of the character variables itself contains semicolons. The readr::read_csv2 function splits the contents of those variables into extra columns, messing up the formatting of the file.
For example, when using read_csv2 to open the file below, Bill's age column will show jogging, not 41.
File:
name;hobbies;age
Jon;cooking;38
Bill;karate;jogging;41
Maria;fishing;32
Considering that the original file doesn't contain quotes around the character type variables, how can I import the file so that karate and jogging belong in the hobbies column?
read.csv()
You can use the read.csv() function, but there will be some warning messages (or use suppressWarnings() to wrap the read.csv() call). If you wish to avoid warning messages, use the scan() method in the next section.
library(dplyr)
read.csv("./path/to/your/file.csv", sep = ";",
         col.names = c("name", "hobbies", "age", "X4")) %>%
  mutate(hobbies = ifelse(is.na(X4), hobbies, paste0(hobbies, ";", age)),
         age = ifelse(is.na(X4), age, X4)) %>%
  select(-X4)
scan() file
You can first scan() the CSV file into a character vector, then split the strings on the pattern ; and turn the result into a dataframe. After that, do some mutate() calls to identify your target column and remove unnecessary columns. Finally, use the first row as the column names.
library(tidyverse)
library(janitor)
semicolon_file <- scan(file = "./path/to/your/file.csv", character())
semicolon_df <- data.frame(str_split(semicolon_file, ";", simplify = T))
semicolon_df %>%
  mutate(X4 = na_if(X4, ""),
         X2 = ifelse(is.na(X4), X2, paste0(X2, ";", X3)),
         X3 = ifelse(is.na(X4), X3, X4)) %>%
  select(-X4) %>%
  janitor::row_to_names(row_number = 1)
Output
name hobbies age
2 Jon cooking 38
3 Bill karate;jogging 41
4 Maria fishing 32
Assuming that you have the columns name and age with a single entry per observation, and hobbies with possibly multiple entries, the following approach works:
read in the file line by line instead of treating it as a table:
tmp <- readLines(con <- file("table.csv"))
close(con)
Find the position of the separator in every row. The entry before the first separator is the name; the entry after the last is the age:
separator_pos <- gregexpr(";", tmp)
name <- character(length(tmp) - 1)
age <- integer(length(tmp) - 1)
hobbies <- vector("list", length = length(tmp) - 1)
fill the three elements using a for loop:
# the first line contains the colnames
for (line in 2:length(tmp)) {
  # from the beginning of the row to the first ";"
  name[line - 1] <- strtrim(tmp[line], separator_pos[[line]][1] - 1)
  # between the first ";" and the last ";".
  # Every ";" is a different element of the list
  hobbies[line - 1] <- strsplit(substr(tmp[line], separator_pos[[line]][1] + 1,
                                       separator_pos[[line]][length(separator_pos[[line]])] - 1), ";")
  # after the last ";", must be an integer
  age[line - 1] <- as.integer(substr(tmp[line], separator_pos[[line]][length(separator_pos[[line]])] + 1,
                                     nchar(tmp[line])))
}
Create a separate matrix to hold the hobbies and fill it rowwise:
hobbies_matrix <- matrix(NA_character_, nrow = length(hobbies), ncol = max(lengths(hobbies)))
for (line in 1:length(hobbies))
  hobbies_matrix[line, 1:length(hobbies[[line]])] <- hobbies[[line]]
Add all variable to a data.frame:
df <- data.frame(name = name, hobbies = hobbies_matrix, age = age)
> df
name hobbies.1 hobbies.2 age
1 Jon cooking <NA> 38
2 Bill karate jogging 41
3 Maria fishing <NA> 32
You could also do:
read.csv(text=gsub('(^[^;]+);|;([^;]+$)', '\\1,\\2', readLines('file.csv')))
name hobbies age
1 Jon cooking 38
2 Bill karate;jogging 41
3 Maria fishing 32
Ideally you'd ask whoever generated the file to do it properly next time :) but of course this is not always possible.
Easiest way is probably to read the lines from the file into a character vector, then clean up and make a data frame by string matching.
library(readr)
library(dplyr)
library(stringr)
# skip header, add it later
dataset <- read_lines("your_file.csv", skip = 1)
dataset_df <- data.frame(name = str_match(dataset, "^(.*?);")[, 2],
                         hobbies = str_match(dataset, ";(.*?);\\d")[, 2],
                         age = as.numeric(str_match(dataset, ";(\\d+)$")[, 2]))
Result:
name hobbies age
1 Jon cooking 38
2 Bill karate;jogging 41
3 Maria fishing 32
Using the file created in the Note at the end
1) read.pattern can read this by specifying the pattern as a regular expression with the portions within parentheses representing the fields.
library(gsubfn)
read.pattern("hobbies.csv", pattern = '^(.*?);(.*);(.*)$', header = TRUE)
## name hobbies age
## 1 Jon cooking 38
## 2 Bill karate;jogging 41
## 3 Maria fishing 32
2) Base R Using base R we can read in the lines, put quotes around the middle field and then read it in normally.
L <- "hobbies.csv" |>
readLines() |>
sub(pattern = ';(.*);', replacement = ';"\\1";')
read.csv2(text = L)
## name hobbies age
## 1 Jon cooking 38
## 2 Bill karate;jogging 41
## 3 Maria fishing 32
Note
Lines <- "name;hobbies;age
Jon;cooking;38
Bill;karate;jogging;41
Maria;fishing;32
"
cat(Lines, file = "hobbies.csv")

R: Spread long single column dataframe into two columns, spliting by alpha and numeric, ignoring punctuation

I have a large dataset that contains keywords, followed eventually by a value. I have managed to read the data in from a pdf format, and am left with data that looks like the following:
myData <- c("adjuster", "7", "hours", "rate", "oct 2 - 16," , "19", "hours", "rate", "_NA_NA_NA_NA_", "total", "gross", "pay", "6500", "_NA_NA_NA_table", "NA_copy", "of", "9.16.19 to 9.30.19.xlsx_NA")
myDataDF <- as.data.frame(myData)
My goal is to 'spread' that single column of character data into two columns: one for the alpha values, the second for the numeric values that follow below. I would like to bring over the punctuation, but ignore it as a means of separating keywords from values, since some of the numeric values contain punctuation. I would like to collapse the keywords (with a space) until a numeric value is found, which is then placed in the values column.
I have tried a number of things with this data in different formats (long strings and string splitting), but this format seems the most conducive and clean to get me to the end goal (having data to actually analyze and perform calculations). I just don't know how to express "keep collapsing until you hit a number" in R.
Ultimately, it would be nice if looked as such:
+==========================================+============================+
| keyword | value |
+==========================================+============================+
| adjuster | 7 |
+------------------------------------------+----------------------------+
| hours rate oct 2 - 16 | 19 |
+------------------------------------------+----------------------------+
| hours rate _NA_NA_NA_NA_ total gross pay | 6500 |
+------------------------------------------+----------------------------+
| _NA_NA_NA_table NA_copy of | 9.16.19 to 9.30.19.xlsx_NA |
+------------------------------------------+----------------------------+
The last row's pattern is not very clear. Based on the data, we can create a grouping column by detecting values that are entirely numeric or contain 'xlsx' in the 'myData' column, then summarise by pasting all values except the last into the keyword column and taking the last value as the second column.
library(dplyr)
library(stringr)
myDataDF %>%
  group_by(grp = lag(cumsum(str_detect(myData, '^\\d+$|xlsx')),
                     default = 0)) %>%
  summarise(keyword = str_c(myData[-n()], collapse = ' '),
            value = last(myData), .groups = 'drop') %>%
  select(-grp)
-output
# A tibble: 4 x 2
# keyword value
# <chr> <chr>
#1 adjuster 7
#2 hours rate oct 2 - 16, 19
#3 hours rate _NA_NA_NA_NA_ total gross pay 6500
#4 _NA_NA_NA_table NA_copy of 9.16.19 to 9.30.19.xlsx_NA
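The same lag-of-cumsum grouping idea can be sketched in base R without dplyr, shown here on a shortened version of the question's vector:

```r
myData <- c("adjuster", "7", "hours", "rate", "oct 2 - 16,", "19")

is_value <- grepl("^\\d+$|xlsx", myData)     # rows that close a group
grp <- c(0, head(cumsum(is_value), -1))      # lagged cumulative sum
# collapse the keyword rows within each group, pair with the closing value
keyword <- tapply(myData[!is_value], grp[!is_value], paste, collapse = " ")
res <- data.frame(keyword = as.vector(keyword), value = myData[is_value])
```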

Looping row numbers from one dataframe to create new data using logical operations in R

I would like to extract a dataframe that shows how many years it takes for the NInd variable (dataset p1) to recover after the culling events recorded in dataframe e1.
I have the following datasets (mine are much bigger, but just to give you something to play with):
# Dataset 1
Batch <- c(2,2,2,2,2,2,2,2,2,2)
Rep <- c(0,0,0,0,0,0,0,0,0,0)
Year <- c(0,0,1,1,2,2,3,3,4,4)
RepSeason <- c(0,0,0,0,0,0,0,0,0,0)
PatchID <- c(17,25,19,16,21,24,23,20,18,33)
Species <- c(0,0,0,0,0,0,0,0,0,0)
Selected <- c(1,1,1,1,1,1,1,1,1,1)
Nculled <- c(811,4068,1755,449,1195,1711,619,4332,457,5883)
e1 <- data.frame(Batch,Rep,Year,RepSeason,PatchID,Species,Selected,Nculled)
# Dataset 2
Batch <- c(2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2)
Rep <- c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
Year <- c(0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2)
RepSeason <- c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
PatchID <- c(17,25,19,16,21,24,23,20,18,33,17,25,19,16,21,24,23,20,18,33,17,25,19,16,21,24,23,20,18,33)
Ncells <- c(6,5,6,4,4,5,6,5,5,5,6,5,6,4,4,5,6,7,3,5,4,4,3,3,4,4,5,5,6,4)
Species <- c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)
NInd <- c(656,656,262,350,175,218,919,218,984,875,700,190,93,127,52,54,292,12,43,68,308,1000,98,29,656,656,262,350,175,300)
p1 <- data.frame(Batch, Rep, Year, RepSeason, PatchID, Ncells, Species, NInd)
The dataset called e1 shows only those years where some culling happened to the population in a specific PatchID.
I have created the following script that basically uses each row from e1 to create a Recovery number. Maybe there is an easier way to get to the end, but this is the one I managed to get working...
When you run this, you are working on ONE row of e1: we focus on the first PatchID encountered, do some calculations to match it up with p1, and finally get a number named Recovery.
Now, the thing is my dataframe has 50,000 rows, so doing this over and over looks quite tedious. That's where I thought a loop may be useful, but I have tried and had no luck making it work at all...
library(dplyr)
# here is where I would like the loop
e2 <- e1[1, ] # trial for one row only; the idea is a loop that keeps doing what comes next for each row
e3 <- e2 %>%
  select(1, 2, 4, 5)
p2 <- p1[, c(1, 2, 4, 5, 3, 6, 7, 8)] # re-order
row2 <- which(apply(p2, 1, function(x) return(all(x == e3))))
p3 <- p1 %>%
  slice(row2) # all years with that particular patch in that particular Batch
# How many times was this patch culled during this replicate?
e4 <- e2[, c(1, 2, 4, 5, 3, 6, 7, 8)]
e4 <- e4 %>%
  select(1, 2, 3, 4)
c_batch <- e1[, c(1, 2, 4, 5, 3, 6, 7, 8)]
row <- which(apply(c_batch, 1, function(x) return(all(x == e4))))
c4 <- c_batch %>%
  slice(row)
# Number of years to recover to 95% of the pre-cull population
c5 <- c4[1, ] # extract the first time it was culled
c5 <- c5 %>%
  select(1:5)
row3 <- which(apply(p2, 1, function(x) return(all(x == c5))))
Before <- p2 %>%
  slice(row3)
NInd <- Before[, 8] # number of individuals before culling
Year2 <- Before[, 5] # year when the first culling happened (the number actually corresponds to individuals before culling, as the Pop file is written during reproduction, while the Cull file is written after!)
Percent <- (95 * NInd) / 100 # the 95% recovery we want corresponds to 95% of NInd BEFORE the cull happened (Year2)
After <- p3 %>%
  filter(NInd >= Percent & Year > Year2) # rows that match number of individuals and Year
After2 <- After[1, ] # we just want the first year where the recovery was successfully achieved
Recovery <- After2$Year - Before$Year
# no. of years to reach 95% of the population immediately before the cull
I reckon that the end would have to change somehow to to tell R that we are creating a dataframe with the Recovery, something like:
Batch <- c(1,1,2,2)
Rep <- c(0,0,0,0)
PatchID <- c(17,25,30,12)
Recovery <- c(1,2,1,5)
Final <- data.frame(Batch, Rep, PatchID, Recovery)
Would that be possible? OR this is just too mess-up and I may should try a different way?
Does the following solve the problem correctly?
I have first added a unique ID to your data.frames to allow matching of the cull and population files (this saves most of your complicated look-up code):
# Add a unique ID for the patch/replicate etc. (as done in the example code)
e1$RepID = paste(e1$Batch, e1$Rep, e1$RepSeason, e1$PatchID, sep = ":")
p1$RepID = paste(p1$Batch, p1$Rep, p1$RepSeason, p1$PatchID, sep = ":")
If you want a quick overview of the number of times each patch was culled, the new RepID makes this easy:
# How many times was each patch culled?
table(p1$RepID)
Then you want a loop to check the recovery time after each cull.
My solutions uses an sapply loop (which also retains the RepIDs so you can match to other metadata later):
sapply(unique(e1$RepID), function(rep_id) {
  all_cull_events = e1[e1$RepID == rep_id, , drop = F]
  first_year = order(all_cull_events$Year)[1] # The first cull year (assuming data might not be in temporal order)
  first_cull_event = all_cull_events[first_year, ] # The row corresponding to the first cull event
  population_counts = p1[p1$RepID == first_cull_event$RepID, ] # The population counts for this plot/replicate
  population_counts = population_counts[order(population_counts$Year), ] # Order by year (assuming data might not be in temporal order)
  pop_at_first_cull_event = population_counts[population_counts$Year == first_cull_event$Year, "NInd"]
  population_counts_after_cull = population_counts[population_counts$Year > first_cull_event$Year, , drop = F]
  years_to_recovery = which(population_counts_after_cull$NInd >= (pop_at_first_cull_event * .95))[1] # First year to pass 95% threshold
  return(years_to_recovery)
})
2:0:0:17 2:0:0:25 2:0:0:19 2:0:0:16 2:0:0:21 2:0:0:24 2:0:0:23 2:0:0:20 2:0:0:18 2:0:0:33
1 2 1 NA NA NA NA NA NA NA
(The output contains some NAs because the first cull year was outside the range of population counts in the data you gave us)
Please check this against your expected output though. There were some aspects of the question and example code that were not clear (see comments).

Difficulty combining lists, characters, and numbers into data frame

I'm lost on how to combine my data into a usable data frame. I have a list of lists of character and number vectors. Here is a working example of my code so far:
remove(list=ls())
# Headers for each of my column names
headers <- c("name","p","c","prophylaxis","control","inclusion","exclusion","conversion excluded","infection criteria","age criteria","mean age","age sd")
#_name = author and year
#_p = no. in experimental arm.
#_c = no. in control arm
#_abx = antibiotic used
#_con = control used
#_inc = inclusion criteria
#_exc = exclusion criteria
#_coexc = was conversion to open excluded?
#_infxn = infection criteria
#_agecrit = age criteria
#_agemean = mean age of study
#_agesd = sd age of study
# Passos 2016
passos_name <- c("Passos","2016")
passos_p <- 50
passos_c <- 50
passos_abx <- "cefazolin 1g at induction"
passos_con <- "none"
passos_inc <- c("elective LC","symptomatic cholelithiasis","low risk")
passos_exc <- c("renal impairment","hepatic impairment","immunosuppression","regular steroid use","antibiotics within 48H","acute cholecystitis","choledocolithiasis")
passos_coexc <- TRUE
passos_infxn <- c("temperature >37.8C","tachycardia","asthenia","local pain","local purulence")
passos_agecrit <- NULL
passos_agemean <- 48
passos_agesd <- 13.63
passos <- list(passos_name,passos_p,passos_c,passos_abx,passos_con,passos_inc,passos_exc,passos_coexc,passos_infxn,passos_agecrit,passos_agemean,passos_agesd)
names(passos) <- headers
# Darzi 2016
darzi_name <- c("Darzi","2016")
darzi_p <- 182
darzi_c <- 247
darzi_abx <- c("cefazolin 1g 30min prior to induction","cefazolin 1g 6H after induction","cefazolin 1g 12H after induction")
darzi_con <- "NaCl"
darzi_inc <- c("elective LC","first time abdominal surgery")
darzi_exc <- c("antibiotics within 7 days","immunosuppression","acute cholecystitis","choledocolithiasis","cholangitis","obstructive jaundice",
"pancreatitis","previous biliary tract surgery","previous ERCP","DM","massive intraoperative bleeding","antibiotic allergy","major thalassemia",
"empyema")
darzi_coexc <- TRUE
darzi_infxn <- c("temperature >38C","local purulence","intra-abdominal collection")
darzi_agecrit <- c(">18", "<75")
darzi_agemean <- 43.75
darzi_agesd <- 13.30
darzi <- list(darzi_name,darzi_p,darzi_c,darzi_abx,darzi_con,darzi_inc,darzi_exc,darzi_coexc,darzi_infxn,darzi_agecrit,darzi_agemean,darzi_agesd)
names(darzi) <- headers
# Matsui 2014
matsui_name <- c("Matsui","2014")
matsui_p <- 504
matsui_c <- 505
matsui_abx <- c("cefazolin 1g at induction","cefazolin 1g 12H after induction","cefazolin 1g 24H after induction")
matsui_con <- "none"
matsui_inc <- "elective LC"
matsui_exc <- c("emergent","concurrent surgery","regular insulin use","regular steroid use","antibiotic allergy","HD","antibiotics within 7 days","hepatic impairment","chemotherapy")
matsui_coexc <- FALSE
matsui_infxn <- c("local purulence","intra-abdominal collection","distant infection","temperature >38C")
matsui_agecrit <- ">18"
matsui_agemean <- NULL
matsui_agesd <- NULL
matsui <- list(matsui_name,matsui_p,matsui_c,matsui_abx,matsui_con,matsui_inc,matsui_exc,matsui_coexc,matsui_infxn,matsui_agecrit,matsui_agemean,matsui_agesd)
names(matsui) <- headers
# Find unique exclusion criteria in order to create the list of all possible levels
exc <- ls()[grepl("_exc",ls())]
exclist <- sapply(exc,get)
exc.levels <- unique(unlist(exclist,use.names = F))
# Find unique inclusion criteria in order to create the list of all possible levels
inc <- ls()[grepl("_inc",ls())]
inclist <- sapply(inc,get)
inc.levels <- unique(unlist(inclist,use.names = F))
# Find unique antibiotics in order to create the list of all possible levels
abx <- ls()[grepl("_abx",ls())]
abxlist <- sapply(abx,get)
abx.levels <- unique(unlist(abxlist,use.names = F))
# Find unique controls in order to create the list of all possible levels
con <- ls()[grepl("_con",ls())]
conlist <- sapply(con,get)
con.levels <- unique(unlist(conlist,use.names = F))
# Find unique age criteria in order to create the list of all possible levels
agecrit <- ls()[grepl("_agecrit",ls())]
agecritlist <- sapply(agecrit,get)
agecrit.levels <- unique(unlist(agecritlist,use.names = F))
I have been struggling to:
1) Turn each of the _exc, _inc, _abx, _con, _agecrit lists into factors using the levels generated at the end of the code block. I have been trying a for loop such as:
for (x in exc) {
  as.name(x) <- factor(get(x), levels = exc.levels)
}
This only creates a variable, x, that stores the last parsed list as a factor.
2) Combine all of my data into a data frame formatted as such:
name, p, c, prophylaxis, control, inclusion, exclusion, conversion excluded, infection criteria, age criteria, mean age, age sd
"Passos 2016", 50, 50, "cefazolin 1g at induction", "none", ["elective LC","symptomatic cholelithiasis","low risk"], ["renal impairment","hepatic impairment","immunosuppression","regular steroid use","antibiotics within 48H","acute cholecystitis","choledocolithiasis"], TRUE, ["temperature >37.8C","tachycardia","asthenia","local pain","local purulence"], NULL, 48, 13.63
...
# [] = factors
# columns correspond to each studies variables (i.e. passos_name, passos_p, passos_c, etc..)
# rows correspond to each study (i.e., passos, darzi, matsui)
I have tried various solutions on StackOverflow, but have not found any that work; for example:
studies <- list(passos,darzi,matsui,ruangsin,turk,naqvi,hassan,sharma,uludag,yildiz,kuthe,koc,maha,tocchi,higgins,mahmoud,kumar)
library(data.table)
rbindlist(lapply(studies,as.data.frame.list))
I suspect my data may not be exactly amenable to a data frame? Primarily because of trying to store a list of factors in a column. Is that allowed? If not, how is this type of data normally stored? My goal is to be able to meaningfully compare these various criterion across studies.
This is too long for a comment, so I turn it into an "answer":
To start with, have a look at what happens here:
data.frame(name = "Passos, 2016", p = 50)
name p
1 Passos, 2016 50
data.frame(name = c("Passos", "2016"), p = 50)
name p
1 Passos 50
2 2016 50
In the first one, we created a dataframe with the column "name" which contained one entry "Passos, 2016", i.e. one character containing both pieces of information, and the column "p". All fine. Now, in the second version, I specified the column "name" as you did above, using c(Passos, 2016). This is a two-element vector, and hence we get two rows in the dataframe: one with name Passos, one with name 2016, and the column p gets recycled.
Clearly, the latter is probably not what you intended. But it works anyway because R just recycles the shorter vector. Now, what do you think happens if I add a vector that contains three elements?
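You get an error, because recycling only works when the shorter length divides the longer one. A quick check (toy values):

```r
# length 2 does not divide length 3, so data.frame() refuses to recycle
res <- try(data.frame(name = c("Passos", "2016", "extra"), p = c(50, 60)),
           silent = TRUE)
# res is a try-error: "arguments imply differing number of rows: 3, 2"
inherits(res, "try-error")
```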
And this highlights the main issue with what you are doing: you are trying to build a dataframe from many vectors of different lengths. In some cases that is fine, if you want the shorter vector to be repeated (in R parlance, "recycled"), but it does not look like something you want to do here.
So, my recommendation would be this: try to imagine a matrix and make sure you understand what each element (row and column) is supposed to be. Then specify your data accordingly. If in doubt, look up "tidy data".
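As for whether a list of values can live in a column at all: yes, via list-columns. Base R's data.frame() accepts them when wrapped in I() (tibbles support them directly). A minimal sketch built from two of the studies above:

```r
# one row per study; multi-valued fields become list-columns via I()
studies_df <- data.frame(
  name = c("Passos 2016", "Darzi 2016"),
  p    = c(50, 182),
  c    = c(50, 247),
  inclusion = I(list(
    c("elective LC", "symptomatic cholelithiasis", "low risk"),
    c("elective LC", "first time abdominal surgery")
  ))
)
# each inclusion cell still holds a full character vector
lengths(studies_df$inclusion)
```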