Here is the data
http://drdpat.bih.nic.in/Downloads/Rice-Varieties-1996-2012.pdf
It is a PDF. If you open it, you will see on page 2 a table that I need to extract and store in a data frame. I followed this link to do this:
https://ropensci.org/blog/2016/03/01/pdftools-and-jeroen
library(pdftools)
text <- pdf_text("data.pdf")
dat <- text[2] # this reads the second page
After this, no matter what I try, it does not convert into a tabular format. I tried this:
dat1 <- matrix(dat, byrow = TRUE, nrow = 12, ncol = 8) # it didn't work
I also tried the scan function:
dat.s <- scan(dat, what = "character", sep = " ", skip = 2) # no use
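For reference, pdf_text() returns each page as one long string, which I suspect is why both attempts fail; a minimal sketch of a first step (not a full solution) would be to split the page into lines, then on runs of spaces:
lines <- strsplit(dat, "\n")[[1]]            # one element per printed line
fields <- strsplit(trimws(lines), "\\s{2,}") # split on 2+ spaces to approximate columns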
Can anyone help me with this? Also, I am only looking to achieve this in R.
Thanks
The structure of the tables in the PDF is a bit messed up: some columns overlap with each other and the tabulizer algorithm does not extract them correctly.
I was only able to extract the first 6 columns from page 2; the last 2 columns ("Salient Features", "Recommended for cultivation") remain problematic...
library(tabulizer)
library(dplyr)
out1 <- extract_tables("Rice-Varieties-1996-2012.pdf", pages=2)[[1]]
## With a moderate amount of hacking,
## the following columns are correctly extracted:
## 1. Sl. No.
## 4. Year of Notification
## 5. Duration (in days)
## 6. Eco-System
sel <- gsub(" ","",out1[ ,c(1,4,5,6)])
## To extract Parentage column, you can use the `area` parameter:
## I figured out the values by trial and error
out2 <- extract_tables("Rice-Varieties-1996-2012.pdf", guess=FALSE,
                       pages=2,
                       area=list(c(80,120,2000,420)))[[1]]
sel <- cbind(sel,out2[1:nrow(sel),1])
## The header is contained in the first 3 rows of `sel`
## which can be aggregated by `paste0`
print(sel)
head <- aggregate(sel[1:3, ], by=list(rep(1,3)), paste0, collapse="") %>%
  select(-Group.1)
## The body is a bit harder, because each record might be split across
## a variable number of rows, depending on the entries.
## I have used non-empty records for column 1 (Sl.No.)
## to identify the breakpoints where to split sel into row blocks
## pertaining to the same record.
body <- sel[-(1:3), ]
brks <- body[ ,1]!=""
ibrk <- c((1:nrow(body))[brks], nrow(body)+1)
ll <- unlist(sapply(1:(length(ibrk)-1), function(k) rep(ibrk[k],ibrk[k+1]-ibrk[k])))
stopifnot(length(ll)==nrow(body))
body <- data.frame(body, stringsAsFactors=FALSE)
colnames(body) <- head
tab <- aggregate(body, by=list(ll), paste0, collapse="") %>%
  select(-Group.1)
print(tab)
## Using the same trick as above with brks and ibrk,
## one is able to extract column "Name of variety"
## (again, I found the values of area by trial and error).
out3 <- extract_tables("Rice-Varieties-1996-2012.pdf", guess=FALSE,
                       pages=2,
                       area=list(c(80,20,2000,130)))[[1]]
sel3 <- gsub(" ","",out3)
head3 <- aggregate(sel3[1:2, ], by=list(rep(1,2)), paste0, collapse="") %>%
  select(-Group.1)
body3 <- sel3[-(1:2), ]
brks3 <- body3[ ,1]!=""
ibrk3 <- c((1:nrow(body3))[brks3], nrow(body3)+1)
ll3 <- unlist(sapply(1:(length(ibrk3)-1), function(k) rep(ibrk3[k],ibrk3[k+1]-ibrk3[k])))
stopifnot(length(ll3)==nrow(body3))
body3 <- data.frame(body3, stringsAsFactors=FALSE)
colnames(body3) <- head3
tab3 <- aggregate(body3, by=list(ll3), paste0, collapse="") %>%
  select(-Group.1)
print(tab3)
## I have not managed to find a value of `area` which correctly splits
## the last two columns *and* allows to identify the rows in each record...
tab <- tab %>% left_join(tab3)
Related
I am scraping a webpage using the rvest library; my interest is to extract all the data from the tables present in the webpage.
library(rvest)
library(tidyr)
url <- ''
# Parsing the HTML Code from Website
hdb_webpage <- read_html(url)
## Grabbing Page Info - Table Input 1
dat_1 <- hdb_webpage %>%
html_table(header=FALSE) %>%
.[[2]] %>%
as.data.frame()
# Transposing
dat_1 <- as.data.frame(t(dat_1$X3))
# Changing colnames
colnames(dat_1) <- c("Name", "Address", "Category", "TradeType", "Contact")
I continue to do the same manually for the rest of the data frames present in the list. There are actually 18 data frames in the list, with varying variables and observations, which leads to spending much time cleaning the data.
Alternatively, for scraping the whole table, I use the following code:
tbls_ls <- hdb_webpage %>%
html_nodes("table") %>%
html_table(header = FALSE) %>%
.[2:18]
df <- data.frame(matrix(unlist(tbls_ls), nrow = 279, byrow = TRUE), stringsAsFactors = FALSE)
df <- unique(df)
This code extracts all the information from the tables into a list; I then use unlist to convert it into a data frame and apply unique to keep the relevant data.
Is there a way to extract all the data from the tables without going through them one by one?
When you look at the raw list rw.list read in from html_table(), there are three if-cases to be handled differently.
library(rvest)
path <- 'https://services2.hdb.gov.sg/webapp/AA16RMSBusinessDirectory/AA16SLevelmap?SearchOption=1&BLK=166&STREET=WOODLANDS+STREET+13++++++++++++++++++++++++++++++++++++++++++++++++++%EF%BF%BD&pcode=730166&STREETLIST=--&MAIN_TRADE_CODE=0000Please+Select+Category%24&Forward=&FROMHOME=true&Slvl=1&SEARCHPANEL=1&MAIN_TRADE_DESC'
# Parsing the HTML Code from Website
rw <- read_html(path)
rw.list <- html_table(rw)[-1]
names(rw.list) <- lapply(rw.list, function(x) # attribute clean names
unique(gsub("\\n|\\r|\\t|\\s+(More Information)?", "", x[1, ])))
l <- lapply(rw.list, function(x) t(x[-(1:2), ]))
l1 <- lapply(1:length(l), function(x) {
  d <- as.data.frame(l[[x]], stringsAsFactors=FALSE)
  names(d) <- d[1, ]
  if (length(d) == 10 | length(d) == 6)
    out <- matrix(unlist(d[3, grep("Category|Trade|(Tel No)", names(d))]),
                  ncol=2,
                  dimnames=list(NULL, d[1, 1:2]))
  else if (length(d) == 8)
    out <- matrix(unlist(t(d[3, grep("Category|Trade|(Tel No)", names(d))])),
                  ncol=3, byrow=TRUE, dimnames=list(NULL, d[1, 1:3]))
  else
    out <- d[3, ]
  return(cbind(id=names(l)[x], out))
})
The cleaned list can then be merged with Reduce():
result <- Reduce(function(...) merge(..., all=TRUE), l1)
Result
head(result, 3)
# id Category Trade Tel No
# 1 1.GREENEMERALDAQUARIA Pets Aquarium Fish (freshwater/marine) And Accessories 68160208
# 2 2.SEEMRALICIOUS Beauty Beauty Salon 66357994
# 3 3.MORRISONOPTICALPTELTD Shopping Optical Goods & Eyewear 63666300
I'm struggling with the following issue: I have many data frames with different names (for instance Beverage, Construction, Electronic, etc., each of dimension 540x1000). I need to clean each of them, then calculate and save each as a zoo object and an R data file. The cleaning is the same for all of them: deleting the empty columns and the columns with some specific names.
For example:
Beverages <- Beverages[, colSums(is.na(Beverages)) < nrow(Beverages)] # removing empty columns
Beverages_OK <- Beverages %>% select(-starts_with("X.ERROR")) # dropping X.ERROR column
Beverages_OK[, 1] <- NULL # dropping the first column
Beverages_OK <- cbind(data[1], Beverages_OK) # adding a date column
Beverages_zoo <- read.zoo(Beverages_OK, header = FALSE, format = "%Y-%m-%d")
save(Beverages_OK, file = "StatisticsInRFormat/Beverages.RData")
I tried to use the lapply function like this:
list <- ls() # the list of all the data frames
lapply(list, function(X) {
  temp <- X
  temp <- temp[, colSums(is.na(temp)) < nrow(temp)] # removing empty columns
  temp <- temp %>% select(-starts_with("X.ERROR")) # dropping X.ERROR column
  temp[, 1] <- NULL
  temp <- cbind(data[1], temp)
  X_zoo <- read.zoo(X, header = FALSE, format = "%Y-%m-%d") # I don't know how to give this the same name as X
  save(X, file = "StatisticsInRFormat/X.RData")
})
but it doesn't work. Is there any way to do such a job? Is there an R package that facilitates it?
Thanks a lot.
If you are sure that you have only the needed data frames in the environment, this should get you started:
df1 <- mtcars
df2 <- mtcars
df3 <- mtcars
list <- ls()
lapply(list, function(x) {
  tmp <- get(x)
})
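From there, a sketch of the full loop might look like the following (assumptions: data[1] holds the date column, as in your example; get(), assign() and save(list = x, ...) are used to keep each frame's original name):
library(dplyr)
library(zoo)

lapply(list, function(x) {
  tmp <- get(x)
  tmp <- tmp[, colSums(is.na(tmp)) < nrow(tmp)]  # removing empty columns
  tmp <- tmp %>% select(-starts_with("X.ERROR")) # dropping X.ERROR columns
  tmp[, 1] <- NULL                               # dropping the first column
  tmp <- cbind(data[1], tmp)                     # adding the date column (your `data` object)
  tmp_zoo <- read.zoo(tmp, header = FALSE, format = "%Y-%m-%d")
  assign(paste0(x, "_zoo"), tmp_zoo, envir = .GlobalEnv) # e.g. Beverages_zoo
  assign(x, tmp)                                 # re-bind the cleaned frame to its name
  save(list = x, file = paste0("StatisticsInRFormat/", x, ".RData"))
})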
I have a table, called table_wo_nas, with multiple columns, one of which is titled ID. For each value of ID there are many rows. I want to write a function that, for input x, will output a data frame containing the number of rows for each ID, with column headers ID and nobs respectively, as below for x <- c(2,4,8).
## id nobs
## 1 2 1041
## 2 4 474
## 3 8 192
This is what I have. It works when x is a single value (e.g. 3), but not when it contains multiple values, for example 1:10 or c(2,5,7). I receive the warning "In ID[counter] <- x : number of items to replace is not a multiple of replacement length". I've just started learning R and have been struggling with this for a week; I have searched manuals, this site, Google, everything. Can someone help, please?
counter <- 1
ID <- vector("numeric")   ## contains x
nobs <- vector("numeric") ## contains nrow
for (i in x) {
  r <- subset(table_wo_nas, ID %in% x) ## create subset for rows of ID=x
  ID[counter] <- x                     ## add x to ID
  nobs[counter] <- nrow(r)             ## add nrow to nobs
  counter <- counter + 1
}
result <- data.frame(ID, nobs) ## create data frame
In base R there is no need for the loop at all. (Incidentally, the warning comes from `ID[counter] <- x`, which tries to put the whole vector `x` into a single element; `i` was meant there.)
# To make a named vector, either:
tmp <- sapply(split(table_wo_nas, table_wo_nas$ID), nrow)
# OR just:
tmp <- table(table_wo_nas$ID)
# AND
# arrange into data.frame
nobs_df <- data.frame(ID = names(tmp), nobs = as.numeric(tmp))
Alternatively, coerce the table into a data.frame directly, and rename:
nobs_df <- data.frame(table(table_wo_nas$ID))
names(nobs_df) <- c('ID', 'nobs')
If you only want certain IDs, subset:
nobs_df[nobs_df$ID %in% c(2, 4, 8), ]
There are many, many more options; these are just a few.
With dplyr,
library(dplyr)
table_wo_nas %>% group_by(ID) %>% summarise(nobs = n())
If you only want certain IDs, add on a filter:
table_wo_nas %>% group_by(ID) %>% summarise(nobs = n()) %>% filter(ID %in% c(2, 4, 8))
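As a small design note: filter() here runs after every ID has been counted; if the table is large, you could filter first (a sketch, assuming dplyr >= 0.8 for the `name` argument of count()):
table_wo_nas %>%
  filter(ID %in% c(2, 4, 8)) %>% # subset first, then count
  count(ID, name = "nobs")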
Seems pretty straightforward if you just use table again:
tbl <- table( table_wo_nas[ , 'ID'] )
data.frame(IDs = names(tbl), nobs = as.numeric(tbl))
Could also get a quick answer although with different column names using:
as.data.frame(table( table_wo_nas[ , 'ID'] ))
Try this.
x <- c(2, 4, 8)
# df is your data frame table_wo_nas
count_of <- function(x) {
  count_of_id <- numeric(length(x))
  for (i in 1:length(x)) {
    count_of_id[i] <- length(which(df$id == x[i])) # find the n of rows for each value of x
  }
  df_1 <- cbind(id = x, count_of_id)
  return(df_1)
}
Update: the code below seems to work.
I'm not entirely sure how to phrase this question, so I apologise if it is worded badly. I tried searching for "combine different elements of a list using apply", but that didn't help.
Anyways, as the result of scraping a website, I have two vectors giving identifying information and a list that contains a number of different tables. A simplified version looks something like this:
respondents <- c("A", "B")
questions <- c("question1", "question2")
df1 <- data.frame(
option = c("yes", "no"),
percentage = c(70, 30), stringsAsFactors = FALSE)
df2 <- data.frame(
option= c("today", "yesterday"),
percentage =c(30, 70), stringsAsFactors = FALSE)
df3 <- data.frame(
option = c("yes", "no"),
percentage = c(60, 40), stringsAsFactors = FALSE)
df4 <- data.frame(
option= c("today", "yesterday"),
percentage =c(20, 80), stringsAsFactors = FALSE)
lst <- list(df1, df2, df3, df4)
The first two tables are questions and responses from the first participant, and the second two tables are from the second participant. What I would like to do is create two tables that contain the answers to the questions for both participants. So I would like something that looks like this:
question1 <- data.frame(
option = c("yes", "no"),
A = c(70, 30),
B = c(60, 40), stringsAsFactors = FALSE)
question2 <- data.frame(
option = c("today", "yesterday"),
A = c(30, 70),
B = c(20, 80), stringsAsFactors = FALSE)
In my case, I have 122 responses from 51 participants, ordered so that tables 1-122 are from the first participant, the next 122 tables are from the second participant, and so on. Ultimately, then, I would like to have 122 tables (one table per question), with each table containing 51 columns that correspond to the participants. I am more or less at a loss as to how to do this, so I would appreciate any suggestions.
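For the simplified example above, one sketch (assuming the tables always repeat in the same question order for each participant) is to split lst into one group per question and merge each group on option:
## tables 1 and 3 belong to question1, tables 2 and 4 to question2
by_question <- split(lst, rep(seq_along(questions), times = length(respondents)))

## merge each group on "option" and label the percentage columns by respondent
question_tables <- lapply(by_question, function(tabs) {
  out <- Reduce(function(x, y) merge(x, y, by = "option"), tabs)
  names(out)[-1] <- respondents
  out
})
names(question_tables) <- questions
For the real data, the same index would be rep(seq_len(122), times = 51), giving 122 tables with 51 respondent columns each.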
This should now work:
library("RCurl")
library("XML")
# Get the data
## Create URL address
mainURL <- 'http://www4.uwm.edu/FLL/linguistics/dialect/staticmaps/'
stateURL <- 'states.html'
url <- paste0(mainURL, stateURL)
## Download URL
tmp <- getURL(url)
## Parse
tmp <- htmlTreeParse(tmp, useInternalNodes = TRUE)
## Extract page addresses and save to subURL
subURL <- unlist(xpathSApply(tmp, '//a[@href]', xmlAttrs))
## Remove pages that aren't states' names
subURL <- subURL[-(1:4)]
## Show first four states
head(subURL, 4)
# Get questions
## Select first state
suburl <- subURL[1]
## Paste it at the end of the main URL
url <- paste0(mainURL, suburl)
## Download URL
tmp <- getURL(url)
## Read data from html
tb <- readHTMLTable(tmp, stringsAsFactors = FALSE)
## Extract the question text from the first column of the first table
Questions <- tb[[1]][, 1]
## Remove empty strings
Questions <- Questions[Questions != '']
# Create objects to populate later
stateNames <- rep('', length(subURL))
## Populate stateNames
### Remove state_ from stateNames
stateNames <- gsub('state_','',subURL)
### Remove .html from stateNames
stateNames <- gsub('.html','',stateNames)
# Remove pictures in the data representing IPA symbols with their names (e.g., names of the pictures)
## Get url
url <- paste0(mainURL, subURL)
tmp <- getURL(url)
## Replace .gif with _
tmp <- gsub(".gif>", '_', tmp)
## Replace "<img\\s+src=./images/" with _
tmp <- gsub("<img\\s+src=./images/", '_', tmp)
# Read in data
tb <- readHTMLTable(tmp, stringsAsFactors = FALSE)
## Subset 2nd and 4th columns and apply to every item on list
tb <- lapply(tb, function(x) x[,c(2,4)])
## Remove quotation marks, percent sign and convert to number; apply to every item
tb <- lapply(tb, function(x) {
  x[, 2] <- gsub('\\(', '', x[, 2])
  x[, 2] <- gsub('%\\)', '', x[, 2])
  x[, 2] <- as.numeric(x[, 2])
  x
})
## Assign column names to all dataframes
tb <- lapply(tb, setNames, nm = c("option", "percentage"))
## Remove unneeded dataframes in the list
tb1 <- tb[-seq(1, length(tb), by=123)]
## Function to merge a list of datasets on "option"
f1 <- function(list1) {
  Reduce(function(...) merge(..., by = 'option', all = TRUE), list1)
}
## Merge every 122nd table together (one merged dataset per question)
res1 <- lapply(1:122, function(i) f1(tb1[seq(i, length(tb1), by = 122)]))
## Create names for the states
stateNames2 <- c("option", stateNames)
# Rename columns in the new dataframes
res2 <- lapply(res1, setNames , nm = stateNames2)
# Test to see whether it works
test <- res2[[122]]
Thanks to akrun (see comments), I got this to work. The full code is here:
library("RCurl")
library("XML")
# Get the data
## Create URL address
mainURL <- 'http://www4.uwm.edu/FLL/linguistics/dialect/staticmaps/'
stateURL <- 'states.html'
url <- paste0(mainURL, stateURL)
url
## Download URL
tmp <- getURL(url)
## Parse
tmp <- htmlTreeParse(tmp, useInternalNodes = TRUE)
## Extract page addresses and save to subURL
subURL <- unlist(xpathSApply(tmp, '//a[@href]', xmlAttrs))
## Remove pages that aren't states' names
subURL <- subURL[-(1:4)]
## Show first four states
head(subURL, 4)
# Get questions
## Select first state
suburl <- subURL[1]
## Paste it at the end of the main URL
url <- paste0(mainURL, suburl)
## Download URL
tmp <- getURL(url)
## Read data from html
tb <- readHTMLTable(tmp, stringsAsFactors = FALSE)
## Extract the question text from the first column of the first table
Questions <- tb[[1]][, 1]
##Remove empty strings
Questions <- Questions[Questions!= '']
# Create objects to populate later
survey <- vector(length(subURL), mode = "list")
i <- 1
stateNames <- rep('', length(subURL))
## Populate stateNames
### Remove state_ from stateNames
stateNames <- gsub('state_','',subURL)
### Remove .html from stateNames
stateNames <- gsub('.html','',stateNames)
# Remove pictures in the data representing IPA symbols with their names (e.g., names of the pictures)
## Get url
url <- paste0(mainURL, subURL)
tmp <- getURL(url)
## Replace .gif with _
tmp <- gsub(".gif>", '_', tmp)
## Replace "<img\\s+src=./images/" with _
tmp <- gsub("<img\\s+src=./images/", '_', tmp)
# Read in data
tb <- readHTMLTable(tmp, stringsAsFactors = FALSE)
#tb <- tb[-1]
## Subset 2nd and 4th columns and apply to every item on list
tb <- lapply(tb, function(x) x[,c(2,4)])
## Remove quotation marks, percent sign and convert to number; apply to every item
tb <- lapply(tb, function(x) {
  x[, 2] <- gsub('\\(', '', x[, 2])
  x[, 2] <- gsub('%\\)', '', x[, 2])
  x[, 2] <- as.numeric(x[, 2])
  x
})
## Assign column names to all dataframes
tb <- lapply(tb, setNames, nm = c("option", "percentage"))
## Remove unneeded dataframes in list
tb1 <- tb[-seq(1, length(tb), by=123)]
## Function to merge a list of datasets on "option"
f1 <- function(list1) {
  Reduce(function(...) merge(..., by = 'option', all = TRUE), list1)
}
## Merge every 122nd table together (one merged dataset per question)
res1 <- lapply(1:122, function(i) f1(tb1[seq(i, length(tb1), by = 122)]))
## Create names for the states
stateNames2 <- c("Options", stateNames)
# Rename columns in the new dataframes
res2 <- lapply(res1, setNames , nm = stateNames2)
# Test to see whether it works
test <- res2[[1]]
I have two data frames. First one looks like
dat <- data.frame(matrix(nrow=2,ncol=3))
names(dat) <- c("Locus", "Pos", "NVAR")
dat[1,] <- c("ACTC1-001_1", "chr15:35087734..35087734", "1" )
dat[2,] <- c("ACTC1-001_2 ", "chr15:35086890..35086919", "2")
where chr15:35086890..35086919 indicates all the numbers within this range.
The second looks like:
dat2 <- data.frame(matrix(nrow=2,ncol=3))
names(dat2) <- c("VAR","REF.ALT"," FUNC")
dat2[1,] <- c("chr1:116242719", "T/A", "intergenic" )
dat2[2,] <- c("chr1:116242855", "A/G", "intergenic")
I want to merge these by the values in dat$Pos and dat2$VAR. If the single number in a cell of dat2$VAR is contained within the range of a cell in dat$Pos, I want to merge those rows. If this occurs more than once (dat2$VAR falls within more than one range in dat$Pos), I want it merged each time. What's the easiest way to do this?
Here is a solution. It is quite short, but not particularly efficient, so I would not recommend it for large data. However, you seemed to indicate your data is not that large, so give it a try and let me know:
library(plyr)
exploded.dat <- adply(dat, 1, function(x) {
  parts <- strsplit(x$Pos, ":")[[1]]
  chr <- parts[1]
  range <- strsplit(parts[2], "..", fixed = TRUE)[[1]]
  start <- as.numeric(range[1])
  end <- as.numeric(range[2])
  data.frame(VAR = paste(chr, seq(from = start, to = end), sep = ":"), x)
})
merge(dat2, exploded.dat, by = "VAR")
If it is too slow or uses too much memory for your needs, you'll have to implement something a bit more complex and this other question looks like a good starting point: Merge by Range in R - Applying Loops.
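As one such alternative (a sketch of my own, not from the linked question): data.table's foverlaps() is built for exactly this kind of range join.
library(data.table)

## interval table from dat$Pos, e.g. "chr15:35086890..35086919"
d1 <- as.data.table(dat)
d1[, chr := sub(":.*", "", Pos)]
d1[, c("start", "end") := lapply(tstrsplit(sub(".*:", "", Pos), "..", fixed = TRUE), as.numeric)]

## point table from dat2$VAR, e.g. "chr1:116242719"
d2 <- as.data.table(dat2)
d2[, chr := sub(":.*", "", VAR)]
d2[, start := as.numeric(sub(".*:", "", VAR))]
d2[, end := start]

## match on chromosome first, then position-within-range;
## a point falling in several ranges is reported once per range
setkey(d1, chr, start, end)
foverlaps(d2, d1, type = "within")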
Please try this out and let us know how it works. Without a larger data set it is a bit hard to troubleshoot. If for whatever reason it does not work, please share a few more rows from your data tables (specifically ones that would match).
SPLIT THE DATA
range.strings <- do.call(rbind, strsplit(dat$Pos, ":"))[, 2]
range.strings <- do.call(rbind, strsplit(range.strings, "\\.\\."))
mins <- as.numeric(range.strings[,1])
maxs <- as.numeric(range.strings[,2])
d2.vars <- as.numeric(do.call(rbind, strsplit(dat2$VAR, ":"))[,2])
names(d2.vars) <- seq(d2.vars)
FIND THE MATCHES
# row number is the row in dat
# col number is the row in dat2
matches <- sapply(d2.vars, function(v) mins <= v & v <= maxs)
MERGE
# create a column in dat to merge-by
dat <- cbind(dat, VAR=NA)
# use the VAR in dat2 as the merge id
# (if a row of dat matches several rows of dat2, only the last match is kept)
for (i in seq(ncol(matches))) {
  dat$VAR[matches[, i]] <- dat2$VAR[i]
}
merge(dat, dat2)