Using the pdf_data function from the pdftools package efficiently - r

The end goal is to use the pdftools package to efficiently move through a thousand pages of PDF documents and consistently, safely, produce a usable dataframe/tibble. I have attempted the tabulizer package and the pdf_text() function, but the results were inconsistent, so I started working through the pdf_data() function, which I prefer.
For those unfamiliar with the pdf_data() function, it converts a PDF page into a coordinate grid, with the 0,0 coordinate in the upper-left corner of the page. By arranging the x,y coordinates and then pivoting the document into a wide format, all of the information is displayed as it would appear on the page, only with NAs for whitespace.
Here is a simple example using the familiar mtcars dataset.
library(pdftools)
library(tidyverse)
library(janitor)
pdf_file <- "https://github.com/ropensci/tabulizer/raw/master/inst/examples/data.pdf"
mtcars_pdf_df <- pdf_data(pdf_file)[[1]]
mtcars_pdf_df %>%
  arrange(x, y) %>%
  pivot_wider(id_cols = y, names_from = x, values_from = text) %>%
  unite(col = Car_type, `154`:`215`, sep = " ", remove = TRUE, na.rm = TRUE) %>%
  arrange(y) %>%
  rename("Page Number" = `303`) %>%
  unite(col = mpg, `253`:`254`, sep = "", remove = TRUE, na.rm = TRUE) %>%
  unite(col = cyl, `283`:`291`, sep = "", remove = TRUE, na.rm = TRUE) %>%
  unite(col = disp, `308`:`313`, sep = "", remove = TRUE, na.rm = TRUE)
It would be nice to not use a dozen or so unite functions in order to rename the various columns. I used the janitor package row_to_names() function at one point to convert row 1 to column names, which worked well but maybe someone has a better thought?
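For reference, the row_to_names() step I mention looks roughly like this (a sketch only; wide_df is just a placeholder for the pivoted tibble above):
library(janitor)
# promote the first row of the pivoted table to column names, then tidy the names
wide_df %>%
  row_to_names(row_number = 1) %>%
  clean_names()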
The central problem: removing the NAs from the dataset, either by uniting multiple columns or by shifting columns over so that NAs are filled by adjacent columns.
I'm trying to make this efficient, possibly using the purrr package? Any help with making this process more efficient would be very much appreciated.
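For the thousand-page case I was picturing something along these lines (a sketch only; clean_page() is a hypothetical wrapper around the arrange/pivot/unite steps above):
library(purrr)
# pdf_data() returns one tibble of word coordinates per page;
# map over the pages, clean each one, and bind the results together
all_pages <- pdf_data(pdf_file)
result <- map_dfr(all_pages, clean_page)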
The only information I had on the pdf_data() function going into this is from here...
https://ropensci.org/technotes/2018/12/14/pdftools-20/
Any additional resources would also be greatly appreciated (apart from the pdftools package help documentation/literature).
Thanks everyone! I hope this helps others use pdf_data() too :)

Here is one approach that could perhaps be generalised if you know the PDF is a reasonably neat table...
library(pdftools)
library(tidyverse)
pdf_file <- "https://github.com/ropensci/tabulizer/raw/master/inst/examples/data.pdf"
df <- pdf_data(pdf_file)[[1]]
df <- df %>%
  mutate(x = round(x/3),  # reduce resolution to minimise inconsistent coordinates
         y = round(y/3)) %>%
  arrange(y, x) %>%  # sort in reading order
  mutate(group = cumsum(!lag(space, default = 0))) %>%  # identify text with spaces and paste
  group_by(group) %>%
  summarise(x = first(x),
            y = first(y),
            text = paste(text, collapse = " ")) %>%
  group_by(y) %>%
  mutate(colno = row_number()) %>%  # add column numbers for table data
  ungroup() %>%
  select(text, colno, y) %>%
  pivot_wider(names_from = colno, values_from = text) %>%  # pivot into table format
  select(-y) %>%
  set_names(c("car", .[1, -ncol(.)])) %>%  # shift names from first row
  slice(-1, -nrow(.)) %>%  # remove names row and page number row
  mutate_at(-1, as.numeric)
df
# A tibble: 32 x 12
car mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Mazda RX4 21 6 160 110 3.9 2.62 16.5 0 1 4 4
2 Mazda RX4 Wag 21 6 160 110 3.9 2.88 17.0 0 1 4 4
3 Datsun 710 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1
4 Hornet 4 Drive 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1
5 Hornet Sportabout 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2
6 Valiant 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1
7 Duster 360 14.3 8 360 245 3.21 3.57 15.8 0 0 3 4
8 Merc 240D 24.4 4 147. 62 3.69 3.19 20 1 0 4 2
9 Merc 230 22.8 4 141. 95 3.92 3.15 22.9 1 0 4 2
10 Merc 280 19.2 6 168. 123 3.92 3.44 18.3 1 0 4 4
# ... with 22 more rows

I'll present a partial solution here, but please allow me to give you some background information first.
I am currently writing a pdf text / table extraction package from scratch in C++ with R bindings, which has required many months and many thousands of lines of code. I started writing it pretty much to do what you are looking to do: reliably extract tabular data from pdfs. I have got it to the point where it can quickly and reliably extract the text from a pdf document, with the associated positions and font of each text element (similar to pdftools).
I assumed that the technical part of reading the xrefs, handling encryption, writing a deflate decompressor, parsing the dictionaries, tokenizing and reading the page description programs would be the real challenges, and that figuring out a general algorithm to extract tabular data was just a detail I would figure out at the end.
Let me tell you, I'm stuck. I can assure you there is no simple, generalizable parsing function that you can write in a few lines of R to reliably extract tabular data from a pdf.
You have three options, as far as I can tell:
Stick to documents where you know the exact layout
Write a function with filter parameters that you can twiddle and check the results
Use a very complex / AI solution to get very good (though never perfect) reliability
For the pdf example you provided, something like the following works fairly well. It falls into the "twiddling parameters" category, and works by cutting the text into columns and rows based on the density function of the x and y co-ordinates of the text elements.
It could be refined a great deal to generalize it, but that would add a lot of complexity and would have to be tested on lots of documents
library(magrittr)  # for the %<>% assignment pipe used below

tabulize <- function(pdf_df, filter = 0.01)
{
  xd <- density(pdf_df$x, filter)
  yd <- density(pdf_df$y, filter)
  pdf_df$col <- as.numeric(cut(pdf_df$x, c(xd$x[xd$y > .5] - 2, max(xd$x) + 3)))
  pdf_df$row <- as.numeric(cut(pdf_df$y, c(yd$x[yd$y > .5] - 2, max(yd$x) + 3)))
  pdf_df %<>% group_by(row, col) %>% summarise(label = paste(text, collapse = " "))
  res <- matrix(rep("", max(pdf_df$col) * max(pdf_df$row)), nrow = max(pdf_df$row))
  for(i in 1:nrow(pdf_df)) res[pdf_df$row[i], pdf_df$col[i]] <- pdf_df$label[i]
  res <- res[which(apply(res, 1, paste, collapse = "") != ""), ]
  res <- res[, which(apply(res, 2, paste, collapse = "") != "")]
  as.data.frame(res[-1, ])
}
which gives the following result:
tabulize(mtcars_pdf_df)
#> V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12
#> 1 Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
#> 2 Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
#> 3 Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
#> 4 Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
#> 5 Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
#> 6 Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
#> 7 Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
#> 8 Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
#> 9 Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
#> 10 Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
#> 11 Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
#> 12 Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
#> 13 Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
#> 14 Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
#> 15 Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
#> 16 Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
#> 17 Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
#> 18 Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
#> 19 Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
#> 20 Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
#> 21 Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
#> 22 Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
#> 23 AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
#> 24 Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
#> 25 Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
#> 26 Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1
#> 27 Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
#> 28 Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2
#> 29 Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4
#> 30 Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6
#> 31 Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
#> 32 Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2

Related

How to randomly sample multiple consecutive rows of a dataframe in R?

I've a dataframe with 100 rows and 20 columns and want to randomly sample 5 times 10 consecutive rows, e.g. 10:19, 25:34, etc. With: sample_n( df, 5 ) I'm able to extract 5 unique, randomly sampled rows, but don't know how to sample consecutive rows. Any help? Thanks!
df <- mtcars
df$row_nm <- seq(nrow(df))
set.seed(7)
sample_seq <- function(n, N) {
  i <- sample(seq(N), size = 1)
  ifelse(
    test = i + (seq(n) - 1) <= N,
    yes = i + (seq(n) - 1),
    no = i + (seq(n) - 1) - N
  )
}
replica <- replicate(n = 5, sample_seq(n = 10, N = nrow(df)))
# result
lapply(seq(ncol(replica)), function(x) df[replica[, x], ])
#> [[1]]
#> mpg cyl disp hp drat wt qsec vs am gear carb row_nm
#> Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4 10
#> Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4 11
#> Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 12
#> Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 13
#> Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 14
#> Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4 15
#> Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4 16
#> Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4 17
#> Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1 18
#> Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2 19
#>
#> [[2]]
#> mpg cyl disp hp drat wt qsec vs am gear carb row_nm
#> Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2 19
#> Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1 20
#> Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1 21
#> Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2 22
#> AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2 23
#> Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4 24
#> Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2 25
#> Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1 26
#> Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2 27
#> Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2 28
#>
#> [[3]]
#> mpg cyl disp hp drat wt qsec vs am gear carb row_nm
#> Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8 31
#> Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2 32
#> Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4 1
#> Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4 2
#> Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 3
#> Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1 4
#> Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2 5
#> Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 6
#> Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4 7
#> Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2 8
#>
#> [[4]]
#> mpg cyl disp hp drat wt qsec vs am gear carb row_nm
#> Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2 28
#> Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4 29
#> Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6 30
#> Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8 31
#> Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2 32
#> Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4 1
#> Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4 2
#> Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 3
#> Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1 4
#> Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2 5
#>
#> [[5]]
#> mpg cyl disp hp drat wt qsec vs am gear carb row_nm
#> Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4 7
#> Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2 8
#> Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2 9
#> Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4 10
#> Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4 11
#> Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 12
#> Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 13
#> Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 14
#> Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4 15
#> Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4 16
Created on 2022-01-24 by the reprex package (v2.0.1)
You could do something like:
#sample data
library(data.table)
df <- data.table(value = 1:100000)
#function which samples consecutive rows (x = dataframe, rows = nr of consecutive rows, nr = number of times you want to sample consecutive rows)
sample_fun <- function(x, rows, nr){
  #maximum starting row which can be sampled
  numbers <- 1:(nrow(x) - rows)
  #randomly sample nr starting rows
  sampled.numbers <- sample(numbers, nr)
  #convert to a vector of row indices (rows consecutive values per start)
  sampled.rows <- lapply(sampled.numbers, function(x){seq(x, (x + rows - 1), 1)})
  sampled.rows <- do.call(c, sampled.rows)
  #subset and return
  result <- x[sampled.rows, ]
  return(result)
}
sample_fun(x = df, rows = 5, nr = 2)
You don't mention if this can include replacement (i.e. if you sample 10:19, can you then also sample 15:24?). You also don't mention if you can sample anything above row 91, which would mean a sample of 10 gets cut off (i.e. 98, 99, 100 would only be 3 consecutive rows unless you want that to loop back to row 1). Assuming you can sample any value with replacement, the solution can be done in one line:
sapply(sample(1:100,5),function(x){seq(x,x+9)})
This applies the sequence function to each of 5 individually sampled numbers. The output will be a matrix, where each column is a sample of 10 consecutive rows, but as noted, these will potentially overlap, or go above 100.
If you want a solution where the rows will not overlap at all, avoiding values over 100 without making values above 91 less likely to be sampled, this actually gets kind of tricky, but I think the code below should work. You can't just sample the start from 1:91 without affecting the probabilities of your random sample, because then a value like 100 can only be included one way (the sampled start value has to be 91), whereas interior values can be reached from 10 different start values. This solution makes it so all rows are equally likely to be sampled.
Rows <- c(1:100, 1:100)
SampleRows <- matrix(0, nrow = 10, ncol = 5)
for(i in 1:ncol(SampleRows)){
  SampledValue <- sample(Rows, 1)
  RowsIndex <- min(which(Rows == SampledValue))
  Sequence <- Rows[RowsIndex:(RowsIndex + 9)]
  SampleRows[, i] <- Sequence
  Rows <- Rows[!(Rows %in% Sequence)]
}
This approach creates a vector that runs through 1:100, repeated twice (the variable Rows); you'll see why this is important in a bit. For each of 5 iterations (corresponding to 5 samples), we take a sampled value from Rows, which will be a number in 1:100, then figure out where that number first occurs in Rows and take the 9 values that follow it. In the first sample this will always be 10 consecutive numbers (e.g. 20:29). We then remove those sampled values from Rows. If the next sample happens to be a value that would lead to overlap (like 18), it instead takes (18, 19, 30, 31, 32, 33, 34, ...) since 20:29 have been removed. We need 1:100 twice in Rows so that if we sample a value like 99, the sequence wraps around from 100 back to 1.
If you want your output in a vector, throw this in at the end:
sort(as.vector(SampleRows))
Let me know if this works for the needs of your problem.

R data.table: Difference between nested regression results

I am comparing two alternative strategies to estimate linear regression models on subsets of data using the data.table package for R. The two strategies produce the same coefficients, so they appear equivalent. This appearance is deceiving. My question is:
Why is the data stored inside the lm models different?
library(data.table)
dat = data.table(mtcars)
# strategy 1
mod1 = dat[, .(models = .(lm(hp ~ mpg, data = .SD))), by = vs]
# strategy 2
mod2 = dat[, .(data = .(.SD)), by = vs][
, models := lapply(data, function(x) lm(hp ~ mpg, x))]
At first glance, the two approaches seem to produce identical results:
# strategy 1
coef(mod1$models[[1]])
#> (Intercept) mpg
#> 357.97866 -10.12576
# strategy 2
coef(mod2$models[[1]])
#> (Intercept) mpg
#> 357.97866 -10.12576
However, if I try to extract data from the (expanded) model.frame, I get different results:
# strategy 1
expanded_frame1 = expand.model.frame(mod1$models[[1]], "am")
table(expanded_frame1$am)
#>
#> 0 1
#> 7 11
# strategy 2
expanded_frame2 = expand.model.frame(mod2$models[[1]], "am")
table(expanded_frame2$am)
#>
#> 0 1
#> 12 6
This is a trivial minimal working example. My real use-case is that I obtained radically different results when applying sandwich::vcovCL to compute clustered standard errors for my models.
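For context, the clustered-standard-error call looks roughly like this (a sketch; the clustering variable am is just an illustration, not my real clustering variable):
library(sandwich)
# vcovCL() looks up the clustering variable via expand.model.frame(),
# which is exactly where the two strategies diverge
vcovCL(mod1$models[[1]], cluster = ~ am)
vcovCL(mod2$models[[1]], cluster = ~ am)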
Edit:
I'm accepting the answer by @TimTeaFan (excellent detective work!) but adding a bit of useful info here for future readers.
As @achim-zeileis pointed out elsewhere, we can replicate a similar behavior in the global environment:
d <- subset(mtcars, subset = vs == 0)
m0 <- lm(hp ~ mpg, data = d)
d <- mtcars[0, ]
expand.model.frame(m0, "am")
[1] hp mpg am
<0 rows> (or 0-length row.names)
This does not appear to be a data.table-specific issue. And in general, we have to be careful when re-evaluating the data from a model.
I don't have a complete answer, but I was able to pinpoint the problem to some extent.
When we compare the output of the two models, we can see that the result is equal except for the calls, which are different (which makes sense, since they actually are different):
# compare models
purrr::map2(mod1$models[[1]], mod2$models[[1]], all.equal)
#> $coefficients
#> [1] TRUE
#>
#> $residuals
#> [1] TRUE
#>
#> $effects
#> [1] TRUE
#>
#> $rank
#> [1] TRUE
#>
#> $fitted.values
#> [1] TRUE
#>
#> $assign
#> [1] TRUE
#>
#> $qr
#> [1] TRUE
#>
#> $df.residual
#> [1] TRUE
#>
#> $xlevels
#> [1] TRUE
#>
#> $call
#> [1] "target, current do not match when deparsed"
#>
#> $terms
#> [1] TRUE
#>
#> $model
#> [1] TRUE
So it seems that the initial call works correctly with both approaches; the problem arises once we try to access the underlying data.
If we have a look at how expand.model.frame gets its data, we can see that it calls eval(model$call$data, envir), where envir is defined as environment(formula(model)), the environment associated with the formula of the lm object.
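Roughly, this is the lookup that happens (a sketch of the mechanism described above, not the exact source of expand.model.frame):
m <- lm(hp ~ mpg, data = mtcars)
envir <- environment(formula(m))  # environment attached to the model formula
dat <- eval(m$call$data, envir)   # re-evaluate the original `data` argument there
identical(dat, mtcars)
# TRUE in this simple case; under strategy 1 the re-evaluated data is not what we expect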
If we have a look at the data in the associated environment of each model and compare it with the data we expect it to hold, we can see that the second approach yields the data we expect, while the first approach using .SD in the call yields some different data.
It is still not clear to me why and what exactly is happening, but we now know the problem lies in the call using .SD. I first thought it might be caused by naming a data.table .SD, but after playing around with models where the data is a data.table called .SD, this does not seem to be the issue.
# data of model 2 (identical to subsetted mtcars)
environment(formula(mod2$models[[1]]))$x[order(mpg),]
#> mpg cyl disp hp drat wt qsec am gear carb
#> 1: 10.4 8 472.0 205 2.93 5.250 17.98 0 3 4
#> 2: 10.4 8 460.0 215 3.00 5.424 17.82 0 3 4
#> 3: 13.3 8 350.0 245 3.73 3.840 15.41 0 3 4
#> 4: 14.3 8 360.0 245 3.21 3.570 15.84 0 3 4
#> 5: 14.7 8 440.0 230 3.23 5.345 17.42 0 3 4
#> 6: 15.0 8 301.0 335 3.54 3.570 14.60 1 5 8
#> 7: 15.2 8 275.8 180 3.07 3.780 18.00 0 3 3
#> 8: 15.2 8 304.0 150 3.15 3.435 17.30 0 3 2
#> 9: 15.5 8 318.0 150 2.76 3.520 16.87 0 3 2
#> 10: 15.8 8 351.0 264 4.22 3.170 14.50 1 5 4
#> 11: 16.4 8 275.8 180 3.07 4.070 17.40 0 3 3
#> 12: 17.3 8 275.8 180 3.07 3.730 17.60 0 3 3
#> 13: 18.7 8 360.0 175 3.15 3.440 17.02 0 3 2
#> 14: 19.2 8 400.0 175 3.08 3.845 17.05 0 3 2
#> 15: 19.7 6 145.0 175 3.62 2.770 15.50 1 5 6
#> 16: 21.0 6 160.0 110 3.90 2.620 16.46 1 4 4
#> 17: 21.0 6 160.0 110 3.90 2.875 17.02 1 4 4
#> 18: 26.0 4 120.3 91 4.43 2.140 16.70 1 5 2
# subset and order mtcars data
mtcars_vs0 <- subset(mtcars, vs == 0)
mtcars_vs0[order(mtcars_vs0$mpg), ]
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
#> Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
#> Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
#> Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
#> Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
#> Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
#> Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
#> AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
#> Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
#> Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4
#> Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
#> Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
#> Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
#> Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
#> Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6
#> Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
#> Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
#> Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
# data of model 1 (not identical to mtcars)
environment(formula(mod1$models[[1]]))$.SD[order(mpg),]
#> mpg cyl disp hp drat wt qsec am gear carb
#> 1: 15.0 8 301.0 335 3.54 3.570 14.60 1 5 8
#> 2: 15.8 8 351.0 264 4.22 3.170 14.50 1 5 4
#> 3: 17.8 6 167.6 123 3.92 3.440 18.90 0 4 4
#> 4: 18.1 6 225.0 105 2.76 3.460 20.22 0 3 1
#> 5: 19.2 6 167.6 123 3.92 3.440 18.30 0 4 4
#> 6: 19.7 6 145.0 175 3.62 2.770 15.50 1 5 6
#> 7: 21.4 6 258.0 110 3.08 3.215 19.44 0 3 1
#> 8: 21.4 4 121.0 109 4.11 2.780 18.60 1 4 2
#> 9: 21.5 4 120.1 97 3.70 2.465 20.01 0 3 1
#> 10: 22.8 4 108.0 93 3.85 2.320 18.61 1 4 1
#> 11: 22.8 4 140.8 95 3.92 3.150 22.90 0 4 2
#> 12: 24.4 4 146.7 62 3.69 3.190 20.00 0 4 2
#> 13: 26.0 4 120.3 91 4.43 2.140 16.70 1 5 2
#> 14: 27.3 4 79.0 66 4.08 1.935 18.90 1 4 1
#> 15: 30.4 4 75.7 52 4.93 1.615 18.52 1 4 2
#> 16: 30.4 4 95.1 113 3.77 1.513 16.90 1 5 2
#> 17: 32.4 4 78.7 66 4.08 2.200 19.47 1 4 1
#> 18: 33.9 4 71.1 65 4.22 1.835 19.90 1 4 1
Add on
I tried digging a little deeper to see what's going on. First I called debug(as.formula) and then looked at the following objects in each iteration:
object
ls(environment(object))
We can see that in "strategy 2" each formula is associated with a different environment, and when looking at the environment we see it contains one object x, which when inspected (environment(object)$x) contains the expected mtcars data.
In "strategy 1" however, we can observe that each call to as.formula associates the same environment with the formula being created. Further, when inspecting the environment we can see that it is populated with the single vectors of the subsetted mtcars data (e.g. am, carb, cyl etc.) as well as some functions (e.g. .POSIXt, Cfastmean, strptime etc.). This is probably where things go awry. I would suspect that when associating the same environment with two different formulas (models), the first models underlying data gets "updated" when the second model is calculated. This should also be the reason why the model output itself is correct. To the time the first model is being calculated, the data is still correct. It is overwritten by the second model, which therefore is correct, too. But when accessing the underlying data afterwards things get messy.
Side note
I was curious if we can observe similar problems and differences in the tidyverse when using expand.model.frame and the answer is "yes". Here, the new rowwise notation throws an error, while the group_map as well as the map approach work:
# dplyr approaches:
# group_map: works
mod3 <- mtcars %>%
group_by(vs) %>%
group_map(~ lm(hp ~ mpg, data = .x))
expand.model.frame(mod3[[1]], "am")
# mutate / rowwise: does not work
mod4 <- mtcars %>%
nest_by(vs) %>%
mutate(models = list(lm(hp ~ mpg, data = data)))
expand.model.frame(mod4$models[[1]], "am")
# mutate / map: works
mod5 <- mtcars %>%
tidyr::nest(data = !vs) %>%
mutate(models = purrr::map(data, ~ lm(hp ~ mpg, data = .x)))
expand.model.frame(mod5$models[[1]], "am")

Replacing values in R dataframes based on conditional

I'm having trouble replacing values in a column of an R dataframe based upon conditions related to other data variables.
I've created a new dataframe called VAED1 based on a left join between the original dataframe VAED (which has over 20 variables) and another dataframe called new_map (which has only 3 variables, one of which is called Category).
Here is the code I wrote, which works fine:
#join the left side table (VAED) with the right side table (new_map) with the left join function
VAED1 <- VAED %>%
  left_join(new_map, by = c("ID1" = "ID2"), suffix = c("_VAED", "_MAP"))
I then added three extra columns (nnate, NICU, enone) to the dataframe VAED1 using the mutate function, creating a new dataframe VAED2:
VAED2 <- VAED1 %>%
  mutate(nnate = if_else((substr(W25VIC, 1, 1) == "P") & (CARE != "U") & (AGE < 1), "Y", "N")) %>%
  mutate(NICU = if_else((nnate == "Y") & (ICUH > 0), "Y", "N")) %>%
  mutate(enone = if_else(EMNL == "E", "Emerg", "Non-emerg"))
Everything works fine to this point.
Finally, I wanted to replace the values in one column called Category (this was a character variable in the original joined dataset new_map) based upon certain conditions of other variables in the dataframe. So I only want to change values in the Category column when the W25VIC and CARE variables equal certain values; otherwise, leave the original value.
I used the code:
Category <- if_else((W25VIC == "R03A") & (SAMEDAY == "Y"), "08 Other multiday", Category)
This always shows an error: objects 'W25VIC' and 'SAMEDAY' not found. It seems straightforward, but the last line of code doesn't work no matter what I do. I checked the dataframe with head() to make sure the data columns are there at each step. They exist, but the code doesn't seem to recognise them.
Grateful for any ideas on what I am doing wrong.
I also tried the command
Category[(W25VIC == "R03A") & (SAMEDAY == "Y")] <- "08 Other multiday"
Still same error message.
I think it is worth reading up on how the magrittr pipe works. The pipe takes the object on the left-hand side of an expression and passes it as the first argument to the function on the right.
So x %>% f() becomes f(x) and x %>% f(y) becomes f(x, y). In your last statement
Category <- if_else((W25VIC == "R03A") & (SAMEDAY == "Y"), "08 Other multiday", Category)
the x is missing, as is a function telling R what to do with the result of the if_else() evaluation: W25VIC, SAMEDAY and Category are columns of your dataframe, not standalone objects, so they need to be referenced inside a function such as mutate() that operates on that dataframe. Here is an example of how to use the pipe operator together with an if_else statement to generate a new column:
library(tidyverse)
data <- mtcars
new_data <- data %>% mutate( evaluation = if_else(hp > 150, "awesome", "lame"))
head(new_data, 20)
#> mpg cyl disp hp drat wt qsec vs am gear carb evaluation
#> 1 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4 lame
#> 2 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4 lame
#> 3 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 lame
#> 4 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1 lame
#> 5 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2 awesome
#> 6 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 lame
#> 7 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4 awesome
#> 8 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2 lame
#> 9 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2 lame
#> 10 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4 lame
#> 11 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4 lame
#> 12 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 awesome
#> 13 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 awesome
#> 14 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 awesome
#> 15 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4 awesome
#> 16 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4 awesome
#> 17 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4 awesome
#> 18 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1 lame
#> 19 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2 lame
#> 20 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1 lame
Created on 2021-01-07 by the reprex package (v0.3.0)
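Applied to the data in the question, the same pattern would look something like this (a sketch only, assuming Category, W25VIC and SAMEDAY are all columns of VAED2):
VAED2 <- VAED2 %>%
  mutate(Category = if_else((W25VIC == "R03A") & (SAMEDAY == "Y"),
                            "08 Other multiday", Category))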

How to search for string within variable labels of a data frame, and return a vector with all these variables in R

I have a data frame with a large amount of math and science related items, and I want all math related variables removed.
The variable names have no consistent naming for either math or science, so it's hard to search and select based on variable name. However, the variable labels are descriptive of what the variable represents. I essentially want all variables with labels that contain the word "math" removed. I tried the following code:
library(dplyr)
library(Hmisc)
# Sample data frame:
M <- c(1, 2)
S <- c(3, 4)
old_df <- data.frame(M, S)
label(old_df$M) <- "My Mathematics Variable"
label(old_df$S) <- "My Science Variable"
#dplyr syntax:
new_df <- old_df %>% select( -contains(hmisc::label(.) == "MATH" ) )
using the Hmisc::label() function to retrieve a vector of labels.
Sample output of the label() function:
> label(old_df)
M S
"My Mathematics Variable" "My Science Variable"
> str(label(old_df))
Named chr [1:2] "My Mathematics Variable" "My Science Variable"
- attr(*, "names")= chr [1:2] "M" "S"
I need a way to search through the label items and find the string "math" within them. I tried coercing to a matrix and a data frame, but I still can't figure out how to search and retrieve the variable names. Any suggestions that will get this to work are welcome.
You mean something like this? (UPDATED to more closely map grepl to your example.)
library(Hmisc)
library(dplyr)
Hmisc::label(mtcars$mpg) <- "Miles per Gallon" # grepl WILL catch this
Hmisc::label(mtcars$hp) <- "Not here" # nope
Hmisc::label(mtcars$qsec) <- "MILES all caps here" # nope unless you add ignore.case = TRUE
Hmisc::label(mtcars$drat) <- "later in the label Miles is here" # yepp
mtcars %>% select_if(.predicate = !(grepl("Miles", Hmisc::label(.))))
#> cyl disp hp wt qsec vs am gear carb
#> Mazda RX4 6 160.0 110 2.620 16.46 0 1 4 4
#> Mazda RX4 Wag 6 160.0 110 2.875 17.02 0 1 4 4
#> Datsun 710 4 108.0 93 2.320 18.61 1 1 4 1
#> Hornet 4 Drive 6 258.0 110 3.215 19.44 1 0 3 1
#> Hornet Sportabout 8 360.0 175 3.440 17.02 0 0 3 2
#> Valiant 6 225.0 105 3.460 20.22 1 0 3 1
#> Duster 360 8 360.0 245 3.570 15.84 0 0 3 4
#> Merc 240D 4 146.7 62 3.190 20.00 1 0 4 2
#> Merc 230 4 140.8 95 3.150 22.90 1 0 4 2
#> Merc 280 6 167.6 123 3.440 18.30 1 0 4 4
#> Merc 280C 6 167.6 123 3.440 18.90 1 0 4 4
#> Merc 450SE 8 275.8 180 4.070 17.40 0 0 3 3
#> Merc 450SL 8 275.8 180 3.730 17.60 0 0 3 3
#> Merc 450SLC 8 275.8 180 3.780 18.00 0 0 3 3
#> Cadillac Fleetwood 8 472.0 205 5.250 17.98 0 0 3 4
#> Lincoln Continental 8 460.0 215 5.424 17.82 0 0 3 4
#> Chrysler Imperial 8 440.0 230 5.345 17.42 0 0 3 4
#> Fiat 128 4 78.7 66 2.200 19.47 1 1 4 1
#> Honda Civic 4 75.7 52 1.615 18.52 1 1 4 2
#> Toyota Corolla 4 71.1 65 1.835 19.90 1 1 4 1
#> Toyota Corona 4 120.1 97 2.465 20.01 1 0 3 1
#> Dodge Challenger 8 318.0 150 3.520 16.87 0 0 3 2
#> AMC Javelin 8 304.0 150 3.435 17.30 0 0 3 2
#> Camaro Z28 8 350.0 245 3.840 15.41 0 0 3 4
#> Pontiac Firebird 8 400.0 175 3.845 17.05 0 0 3 2
#> Fiat X1-9 4 79.0 66 1.935 18.90 1 1 4 1
#> Porsche 914-2 4 120.3 91 2.140 16.70 0 1 5 2
#> Lotus Europa 4 95.1 113 1.513 16.90 1 1 5 2
#> Ford Pantera L 8 351.0 264 3.170 14.50 0 1 5 4
#> Ferrari Dino 6 145.0 175 2.770 15.50 0 1 5 6
#> Maserati Bora 8 301.0 335 3.570 14.60 0 1 5 8
#> Volvo 142E 4 121.0 109 2.780 18.60 1 1 4 2
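Applied to the small example data frame from the question (a sketch; assumes old_df and its labels as defined above), adding ignore.case = TRUE makes the pattern "math" catch "Mathematics":
old_df %>%
  select_if(.predicate = !grepl("math", Hmisc::label(.), ignore.case = TRUE))
# keeps only S, the science variable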

Randomly sampling rows from particular months in a data set

I was given this task in R:
"Randomly select 10 trading days from each of the following months: January 2019 to June 2019 (6 months total)".
I have a CSV file of a company's stock trading history from the last 5 years (dates, opening price, closing price, changes, etc.) that I imported into R using this code (reading the file; setting date format; extracting all 6 relevant months):
SHAPIRENG5YEARS <- read.csv(file="C:\\Users\\Ron\\OneDrive\\5year.csv", header=TRUE, sep=",") #Choosing Shapir Engineering stock (last 5 years)
SHAPIRENG5YEARS$Date = as.Date(as.character(SHAPIRENG5YEARS$Date), format = "%d/%m/%Y")
January19=SHAPIRENG5YEARS[(SHAPIRENG5YEARS$Date > "2019-01-01" & SHAPIRENG5YEARS$Date < " 2019-01-31" ) ,]
February19=SHAPIRENG5YEARS[(SHAPIRENG5YEARS$Date > "2019-02-03" & SHAPIRENG5YEARS$Date < " 2019-02-28" ) ,]
March19=SHAPIRENG5YEARS[(SHAPIRENG5YEARS$Date > "2019-03-09" & SHAPIRENG5YEARS$Date < " 2019-03-31" ) ,]
April19=SHAPIRENG5YEARS[(SHAPIRENG5YEARS$Date > "2019-04-01" & SHAPIRENG5YEARS$Date < " 2019-04-30" ) ,]
May19=SHAPIRENG5YEARS[(SHAPIRENG5YEARS$Date > "2019-05-01" & SHAPIRENG5YEARS$Date < " 2019-05-30" ) ,]
June19=SHAPIRENG5YEARS[(SHAPIRENG5YEARS$Date > "2019-06-02" & SHAPIRENG5YEARS$Date < " 2019-06-30" ) ,]
Now I don't know what I should do. I can sample one month using
January19sample <-January19[sample(nrow(January19), 10), ]
but I want to avoid doing this six times (once for each month).
Ideally I'd like to sample all 10*6=60 values from the original big data frame.
Edit: I'm still struggling. I tried this (it is not good because I'm getting a list of 6 lists, each with a length of 18, and not 10 random picks):
SamplesOfMonths=list(c(January19),c(February19),c(March19),c(April19),c(May19),c(June19))
TopSamples=c(1:10)
LowSamples=c(1:10)
for (i in 1:6)
{
Changer=unlist(SamplesOfMonths[i])
TopSamples[i]=sample(Changer, 10)[2]
LowSamples[i]=sample(Changer, 10)[1]
print(sample(Changer, 10))
}
You can use the sample() function and bracketed [ subsetting.
mtcars[sample(1:nrow(mtcars), size = 10, replace = FALSE),]
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
#> Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
#> Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2
#> Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
#> Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
#> Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
#> Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
#> Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
#> Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
#> Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
Broken down, step by step:
rows_in_data <- nrow(mtcars)
rows_in_data
#> [1] 32
# Sample from 1 to number of rows, selecting some using `size = ` argument
index_of_random_rows <- sample(1:rows_in_data, size = 10, replace = FALSE)
#use bracketed subsetting data[rows, columns]
mtcars[index_of_random_rows, ]
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6
#> Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
#> Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
#> Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
#> Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
#> Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4
#> Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
#> Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
#> Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
#> Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
Created on 2020-01-13 by the reprex package (v0.3.0)
Using map:
#custom function
my_samples <- function(df = mtcars, num_rows = 10){
sample(1:nrow(df), size = num_rows)
}
purrr::map(list(mtcars, iris), my_samples)
#> [[1]]
#> [1] 16 1 4 22 30 2 21 14 23 10
#>
#> [[2]]
#> [1] 73 31 112 1 43 91 87 23 19 16
Created on 2020-01-13 by the reprex package (v0.3.0)
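Applied to the question's data, the same map idea could look roughly like this (a sketch; it assumes the January19 ... June19 data frames created in the question):
library(purrr)
library(dplyr)
months <- list(January19, February19, March19, April19, May19, June19)
samples <- map(months, ~ .x[sample(nrow(.x), 10), ])  # 10 random rows per month
all_60 <- bind_rows(samples)                          # one data frame with 60 rows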
