Webscraping Tables From Wikipedia in R

I was wondering if anyone had useful ideas or code for web scraping tables from Wikipedia.
Specifically, I'm interested in the Presidential election results table in the "Results by county" section on Wikipedia.
An example table can be found using the following link and scrolling down to the "Results by county" section: https://en.wikipedia.org/wiki/1948_United_States_presidential_election_in_Texas
I've tried some solutions from the following StackOverflow post: Importing wikipedia tables in R
However, they don't appear to be applicable to the type of table I want to scrape from Wikipedia.
Any advice, solutions, or code would be greatly appreciated. Thank you!

Making use of the rvest package, you could get the table by first selecting the element containing the desired table via html_element("table.wikitable.sortable") and then extracting it via html_table(), like so:
library(rvest)

url <- "https://en.wikipedia.org/wiki/1948_United_States_presidential_election_in_Texas"
html <- read_html(url)

# Select the results table by its CSS classes, then parse it into a tibble
county_table <- html %>%
  html_element("table.wikitable.sortable") %>%
  html_table()

head(county_table)
#> # A tibble: 6 x 14
#> County `Harry S. Truman… `Harry S. Truman… `Thomas E. Dewey… `Thomas E. Dewe…
#> <chr> <chr> <chr> <chr> <chr>
#> 1 County # % # %
#> 2 Anders… 3,242 62.37% 1,199 23.07%
#> 3 Andrews 816 85.27% 101 10.55%
#> 4 Angeli… 4,377 69.05% 1,000 15.78%
#> 5 Aransas 418 61.02% 235 34.31%
#> 6 Archer 1,599 86.20% 191 10.30%
#> # … with 9 more variables: Strom ThurmondStates’ Rights Democratic <chr>,
#> # Strom ThurmondStates’ Rights Democratic.1 <chr>,
#> # Henry A. WallaceProgressive <chr>, Henry A. WallaceProgressive.1 <chr>,
#> # Various candidatesOther parties <chr>,
#> # Various candidatesOther parties.1 <chr>, Margin <chr>, Margin.1 <chr>,
#> # Total votes cast[11] <chr>
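Note that every parsed column is character: the first body row repeats the "#"/"%" sub-headers, and the numbers carry thousands separators and percent signs. A minimal cleanup sketch, assuming the structure printed above (this step is an addition, not part of the original answer):
library(dplyr)
library(readr)

# Drop the repeated sub-header row, then parse "3,242" -> 3242 and
# "62.37%" -> 62.37; parse_number() strips commas and percent signs
county_clean <- county_table %>%
  slice(-1) %>%
  mutate(across(-County, parse_number))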

Related

Scrape webpage that requires button click

I am trying to scrape data from the link below. I need to click and download a csv file available via the csv button on the webpage.
library(netstat)
library(RSelenium)
url <- "https://gtr.ukri.org/search/project?term=%22climate+change%22+OR+%22climate+crisis%22&fetchSize=25&selectedSortableField=&selectedSortOrder=&fields=pro.gr%2Cpro.t%2Cpro.a%2Cpro.orcidId%2Cper.fn%2Cper.on%2Cper.sn%2Cper.fnsn%2Cper.orcidId%2Cper.org.n%2Cper.pro.t%2Cper.pro.abs%2Cpub.t%2Cpub.a%2Cpub.orcidId%2Corg.n%2Corg.orcidId%2Cacp.t%2Cacp.d%2Cacp.i%2Cacp.oid%2Ckf.d%2Ckf.oid%2Cis.t%2Cis.d%2Cis.oid%2Ccol.i%2Ccol.d%2Ccol.c%2Ccol.dept%2Ccol.org%2Ccol.pc%2Ccol.pic%2Ccol.oid%2Cip.t%2Cip.d%2Cip.i%2Cip.oid%2Cpol.i%2Cpol.gt%2Cpol.in%2Cpol.oid%2Cprod.t%2Cprod.d%2Cprod.i%2Cprod.oid%2Crtp.t%2Crtp.d%2Crtp.i%2Crtp.oid%2Crdm.t%2Crdm.d%2Crdm.i%2Crdm.oid%2Cstp.t%2Cstp.d%2Cstp.i%2Cstp.oid%2Cso.t%2Cso.d%2Cso.cn%2Cso.i%2Cso.oid%2Cff.t%2Cff.d%2Cff.c%2Cff.org%2Cff.dept%2Cff.oid%2Cdis.t%2Cdis.d%2Cdis.i%2Cdis.oid%2Ccpro.rtpc%2Ccpro.rcpgm%2Ccpro.hlt&type=#/csvConfirm"
I am struggling to implement that using Selenium. Here is the code I have so far.
rD <- rsDriver(port = free_port(), browser = "chrome",
               chromever = "106.0.5249.21", check = TRUE, verbose = TRUE)
remDr <- rD$client
remDr$navigate(url)
# Note: CSS class selectors need leading dots and no spaces between classes
webElem <- remDr$findElement(using = "css", ".content.gtr-body.d-flex.flex-column.ng-scope")
webElem$clickElement()
You can often just record the network log and see what request is sent when you hit the download button. In Chrome, right-click, choose Inspect, then open the Network tab. In this case there is only one request sent:
Right-click the request and "Copy as cURL" to see the whole request, or just "Copy URL", since the cookies and headers are not necessary here. I wrote a quick function around the task of querying the site:
dl_ukri <- function(query,
                    destfile = paste0(query, ".csv"),
                    quiet_download = FALSE) {
  url <- paste0(
    "https://gtr.ukri.org/search/project/csv?term=",
    urltools::url_encode(query),
    "&selectedFacets=&fields=acp.d,is.t,prod.t,pol.oid,acp.oid,rtp.t,pol.in,prod.i,per.pro.abs,acp.i,col.org,acp.t,is.d,is.oid,cpro.rtpc,prod.d,stp.oid,rtp.i,rdm.oid,rtp.d,col.dept,ff.d,ff.c,col.pc,pub.t,kf.d,dis.t,col.oid,pro.t,per.sn,org.orcidId,per.on,ff.dept,rdm.t,org.n,dis.d,prod.oid,so.cn,dis.i,pro.a,pub.orcidId,pol.gt,rdm.i,rdm.d,so.oid,per.fnsn,per.org.n,per.pro.t,pro.orcidId,pub.a,col.d,per.orcidId,col.c,ip.i,pro.gr,pol.i,so.t,per.fn,col.i,ip.t,ff.oid,stp.i,so.i,cpro.rcpgm,cpro.hlt,col.pic,so.d,ff.t,ip.d,dis.oid,ip.oid,stp.d,rtp.oid,ff.org,kf.oid,stp.t&type=&selectedSortableField=score&selectedSortOrder=DESC"
  )
  curl::curl_download(url, destfile, quiet = quiet_download)
}
Testing this with your original search:
dl_ukri('"climate change" OR "climate crisis"', destfile = "test.csv")
readr::read_csv("test.csv")
#> Rows: 5894 Columns: 25
#> ── Column specification ────────────────────────────────────────────────────────
#> Delimiter: ","
#> chr (23): FundingOrgName, ProjectReference, LeadROName, Department, ProjectC...
#> dbl (2): AwardPounds, ExpenditurePounds
#>
#> ℹ Use `spec()` to retrieve the full column specification for this data.
#> ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
#> # A tibble: 5,894 × 25
#> FundingOrgN…¹ Proje…² LeadR…³ Depar…⁴ Proje…⁵ PISur…⁶ PIFir…⁷ PIOth…⁸ PI OR…⁹
#> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
#> 1 ESRC ES/W00… Univer… School… Fellow… Thew Harriet Christ… http:/…
#> 2 AHRC AH/W00… Univer… Arts L… Resear… Scott Peter Manley <NA>
#> 3 AHRC 2609218 Queen … Drama Studen… <NA> <NA> <NA> <NA>
#> 4 UKRI MR/V02… Univer… Politi… Fellow… Spaiser Viktor… <NA> http:/…
#> 5 MRC MC_PC_… Univer… <NA> Intram… Alessi Dario Renato <NA>
#> 6 AHRC 1948811 Royal … School… Studen… <NA> <NA> <NA> <NA>
#> 7 EPSRC 2688399 Brunel… Chemic… Studen… <NA> <NA> <NA> <NA>
#> 8 ESRC ES/T01… Univer… Social… Resear… Walker Cather… Louise http:/…
#> 9 AHRC AH/X00… Queen … Drama Resear… Herita… Paul <NA> http:/…
#> 10 ESRC 2272756 Univer… Sch of… Studen… <NA> <NA> <NA> <NA>
#> # … with 5,884 more rows, 16 more variables: StudentSurname <chr>,
#> # StudentFirstName <chr>, StudentOtherNames <chr>, `Student ORCID iD` <chr>,
#> # Title <chr>, StartDate <chr>, EndDate <chr>, AwardPounds <dbl>,
#> # ExpenditurePounds <dbl>, Region <chr>, Status <chr>, GTRProjectUrl <chr>,
#> # ProjectId <chr>, FundingOrgId <chr>, LeadROId <chr>, PIId <chr>, and
#> # abbreviated variable names ¹​FundingOrgName, ²​ProjectReference, ³​LeadROName,
#> # ⁴​Department, ⁵​ProjectCategory, ⁶​PISurname, ⁷​PIFirstName, ⁸​PIOtherNames, …
Created on 2022-10-17 with reprex v2.0.2
Voilà. I also played around with the fetchSize=25 parameter that appears in the original URL, but it does not seem to do anything, so I just omitted it.

scraping data table from clinicaltrials.gov with rvest

I'd like to scrape this data table when I put in search terms on clinicaltrials.gov. Specifically, I'd like to scrape the table you see on this page: https://clinicaltrials.gov/ct2/results?term=nivolumab+AND+Overall+Survival.
I've tried this code, but I don't think I got the right css selector:
# create custom url
ctgov_url <- "https://clinicaltrials.gov/ct2/results?term=nivolumab+AND+Overall+Survival"

# read HTML page
ct_page <- rvest::read_html(ctgov_url)

# extract the results table
ct_page %>%
  # find elements that match a css selector
  rvest::html_element("t") %>%
  # parse the matched element as a table
  rvest::html_table()
You don't need rvest here at all. The page provides a download button to get a csv of the search results. This has a basic URL-encoded GET syntax, which allows you to create a simple little API:
get_clin_trials_data <- function(terms, n = 1000) {
  terms <- URLencode(paste(terms, collapse = " AND "))
  df <- read.csv(paste0(
    "https://clinicaltrials.gov/ct2/results/download_fields",
    "?down_count=", n, "&down_flds=shown&down_fmt=csv",
    "&term=", terms, "&flds=a&flds=b&flds=y"
  ))
  dplyr::as_tibble(df)
}
This allows you to pass in a vector of search terms and a maximum number of results to return. No need for complex parsing as would be required with web scraping.
get_clin_trials_data(c("nivolumab", "Overall Survival"), n = 10)
#> # A tibble: 10 x 8
#> Rank Title Status Study.Results Conditions Interventions Locations URL
#> <int> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
#> 1 1 A Study ~ Compl~ No Results A~ Hepatocel~ "" "Bristol~ http~
#> 2 2 Nivoluma~ Activ~ No Results A~ Glioblast~ "Drug: Nivol~ "Duke Un~ http~
#> 3 3 Nivoluma~ Unkno~ No Results A~ Melanoma "Biological:~ "CHU d'A~ http~
#> 4 4 Study of~ Compl~ Has Results Advanced ~ "Biological:~ "Highlan~ http~
#> 5 5 A Study ~ Unkno~ No Results A~ Brain Met~ "Drug: Fotem~ "Medical~ http~
#> 6 6 Trial of~ Compl~ Has Results Squamous ~ "Drug: Nivol~ "Stanfor~ http~
#> 7 7 Nivoluma~ Compl~ No Results A~ MGMT-unme~ "Drug: Nivol~ "New Yor~ http~
#> 8 8 Study of~ Compl~ Has Results Squamous ~ "Biological:~ "Mayo Cl~ http~
#> 9 9 Study of~ Compl~ Has Results Non-Squam~ "Biological:~ "Mayo Cl~ http~
#> 10 10 An Open-~ Unkno~ No Results A~ Squamous-~ "Drug: Nivol~ "IRCCS -~ http~
Created on 2022-06-21 by the reprex package (v2.0.1)

Webscraping Table

I am trying to webscrape the table from the following page (https://www.coya.com/bike/fahrrad-index-2019), namely the bike-index values for 50 German cities (if you click "Alle Ergebnisse +", you'll see all 50 cities).
I especially need some columns ("Bewertung spezielle Radwege & Qualität der Radwege", "Investitionen & Qualität der Infrastruktur", "Bewertung der Infrastruktur", "Fahrradsharing-Score", "Autofreier Tag", "Critical-Mass-Fahrradaktionen", "Event-Score").
This is what I tried:
library(rvest)

num_link <- "https://www.coya.com/bike/fahrrad-index-2019"
num_page <- read_html(num_link)

xyc <- num_page %>% html_nodes("._1200:nth-child(2)") %>% html_text()
I tried SelectorGadget, but unfortunately I get all the values of the table in one long string (str_split is challenging because the commas inside numbers get mixed up with the commas between numbers):
"[1] "Ergebnisse für DeutschlandKriminalitätInfrastrukturFahrrad-SharingEvents#StadtLandSizeTotal Score1OldenburgDeutschlandK57,90,4271,94588,3594,4684,5227,153,0590,3454,1836,4515,0525,75N31,5216,2669,122MünsterDeutschlandK58,740,3910,53445,5883,0488,4328,1551,2388,0453,0535,522630,76N23,8412,4265,933Freiburg i. Breisg.DeutschlandK59,350,"
Could someone help me scrape the table, ideally extracting only the specific columns listed above? Very thankful for any help or tip.
Thank you in advance.
(I am a newbie, please be gentle.)
Here is one way to solve the puzzle. The column headers use a lot of icons, so I just leave the column names empty. You can create a vector of names and assign them manually using
names(table_content) <- names_vector
Here is the code
library(rvest)
#> Loading required package: xml2
library(dplyr, warn.conflicts = FALSE)
library(purrr)
# Here we just reuse your code
num_link <- "https://www.coya.com/bike/fahrrad-index-2019"
num_page <- read_html(num_link)

# Start from your selector but go further down the tree
table_content <- num_page %>% html_nodes("._1200:nth-child(2)") %>%
  # Extract the node that contains the table
  html_nodes(css = ".w-dyn-list") %>%
  # Extract the nodes corresponding to each row
  html_nodes(css = ".bike-collection-item") %>%
  # Then map a function that takes each row, converts it to a one-row
  # table, and bind the results together into one table
  map_dfr(function(x) {
    # suppress the message caused by no column names being fed into map_dfc
    suppressMessages(
      x %>% html_nodes(".td") %>%
        map_dfc(function(x) { x %>% html_text() })
    )
  })
Here is the extracted content
#> # A tibble: 70 x 21
#> ...1 ...2 ...3 ...4 ...5 ...6 ...7 ...8 ...9 ...10 ...11 ...12 ...13
#> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
#> 1 1 Olde… Deut… K 57,9 0,427 1,94 588,… 94,46 84,52 27,1 53,05 90,34
#> 2 2 Müns… Deut… K 58,74 0,391 0,53 445,… 83,04 88,43 28,15 51,23 88,04
#> 3 3 Frei… Deut… K 59,35 0,34 2,27 962,… 88,87 77,52 32,57 48,11 93,49
#> 4 4 Bamb… Deut… K 55,59 0,302 0 456,… 89,04 92,66 30,29 47,74 93,75
#> 5 5 Gött… Deut… K 62,66 0,28 3,07 379,… 92,8 80,99 23,03 48,07 89,18
#> 6 6 Heid… Deut… K 63,14 0,22 1,21 394,… 90,39 88,33 29,02 47,88 94,21
#> 7 7 Karl… Deut… K 57,39 0,25 4,23 725,… 90,35 71,62 18,75 46,33 93,93
#> 8 8 Brau… Deut… K 67,36 0,21 0 522,… 85,89 90,97 20,55 49,2 89,78
#> 9 9 Kons… Deut… K 62,77 0,22 4,6 121,… 93,62 76,98 23 48,49 94,09
#> 10 10 Brem… Deut… M 58,86 0,21 1,38 334,… 87,34 87,15 18,64 59,78 94,64
#> # … with 60 more rows, and 8 more variables: ...14 <chr>, ...15 <chr>,
#> # ...16 <chr>, ...17 <chr>, ...18 <chr>, ...19 <chr>, ...20 <chr>,
#> # ...21 <chr>
Created on 2021-04-08 by the reprex package (v1.0.0)
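Building on that, a hedged sketch for naming and parsing: the first four columns can be read off the page header ("#", "Stadt", "Land", "Size"), while the remaining score columns have to be matched against the page by hand. The German decimal commas also need a locale-aware parser.
library(dplyr)
library(readr)

# The first four names follow from the page header; the score columns
# (...5 onwards) must be matched to the page manually -- assumptions here
names(table_content)[1:4] <- c("rang", "stadt", "land", "size")

# Values like "57,9" use German decimal commas, so parse with a locale
table_content <- table_content %>%
  mutate(across(-(1:4), ~ parse_number(.x, locale = locale(decimal_mark = ","))))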
You probably want to do one table at a time, and then one column at a time, so you can then build a data frame. Try this, for example:
col1 <- num_page %>%
  html_nodes(".w-dyn-item :nth-child(2) div") %>%
  html_text()
SelectorGadget is nifty, but I usually need to experiment a lot to get the right selectors.
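Extending that per-column idea, you could collect a few columns and bind them into a data frame; the :nth-child() positions below are guesses you would need to verify with SelectorGadget:
# Positions are assumptions -- check each :nth-child() index on the page
col2 <- num_page %>%
  html_nodes(".w-dyn-item :nth-child(3) div") %>%
  html_text()

df <- data.frame(col1, col2)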

How to parse non xml as xml? [closed]

http://api.bart.gov/api/stn.aspx?cmd=stns&key=MW9S-E7SL-26DU-VV8V
How do I read this in as an XML document? I'm trying to parse it in R.
You can use xml2 to read and parse:
library(xml2)
library(tidyverse)
xml <- read_xml('https://api.bart.gov/api/stn.aspx?cmd=stns&key=MW9S-E7SL-26DU-VV8V')

bart <- xml %>%
  xml_find_all('//station') %>% # select all station nodes
  map_df(as_list) %>%           # coerce each node to a list, collect into a data frame
  unnest()                      # unnest the list columns of the data frame
bart
#> # A tibble: 46 × 9
#> name abbr gtfs_latitude gtfs_longitude
#> <chr> <chr> <chr> <chr>
#> 1 12th St. Oakland City Center 12TH 37.803768 -122.271450
#> 2 16th St. Mission 16TH 37.765062 -122.419694
#> 3 19th St. Oakland 19TH 37.808350 -122.268602
#> 4 24th St. Mission 24TH 37.752470 -122.418143
#> 5 Ashby ASHB 37.852803 -122.270062
#> 6 Balboa Park BALB 37.721585 -122.447506
#> 7 Bay Fair BAYF 37.696924 -122.126514
#> 8 Castro Valley CAST 37.690746 -122.075602
#> 9 Civic Center/UN Plaza CIVC 37.779732 -122.414123
#> 10 Coliseum COLS 37.753661 -122.196869
#> # ... with 36 more rows, and 5 more variables: address <chr>, city <chr>,
#> # county <chr>, state <chr>, zipcode <chr>
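On current tidyr versions a bare unnest() warns and asks for explicit columns; a hedged modernization of the same pipeline (an assumption on my part, not part of the original answer):
# With tidyr >= 1.0, name the columns to unnest explicitly
bart <- xml %>%
  xml_find_all('//station') %>%
  map_df(as_list) %>%
  unnest(cols = everything())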
Using library rvest: the basic idea is to find the nodes of interest with XPath selectors (xml_nodes), then grab the values with xml_text().
library(rvest)
doc <- read_xml("http://api.bart.gov/api/stn.aspx?cmd=stns&key=MW9S-E7SL-26DU-VV8V")
names <- doc %>%
  xml_nodes(xpath = "/root/stations/station/name") %>%
  xml_text()
names[1:5]
# [1] "12th St. Oakland City Center" "16th St. Mission" "19th St. Oakland" "24th St. Mission"
# [5] "Ashby"
I had some problems using the URL within read_html directly, so I used readLines first. After that, it's a matter of finding all the nodesets with <station>, transforming them into a list, and feeding that into data.table::rbindlist. The idea of using rbindlist came from here
library(xml2)
library(data.table)
nodesets <- read_html(readLines("http://api.bart.gov/api/stn.aspx?cmd=stns&key=MW9S-E7SL-26DU-VV8V")) %>%
  xml_find_all(".//station")
data.table::rbindlist(as_list(nodesets))
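Note that as_list() wraps each text value in a list, so rbindlist() may return list columns; one possible way to flatten them (an assumption about the structure, not part of the original answer):
stations <- data.table::rbindlist(as_list(nodesets))
# Each cell is a length-one list; unlist per column to get plain strings
stations[, lapply(.SD, unlist)]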

r rvest webscraping hltv

Yes, that's just another "how-to-scrape" question. Sorry for that, but I've read the previous answers and the manual for rvest as well.
I'm doing web scraping for my homework (so I do not plan to use the data for any commercial purpose). The idea is to show that the average skill of a team affects individual skill. I'm trying to use CS:GO data from HLTV.org for it.
The information is available at http://www.hltv.org/?pageid=173&playerid=9216
I need two tables: Key stats (data only) and Teammates (data and URLs). I tried to use CSS selectors generated by SelectorGadget and I also tried to analyze the source code of the webpage. I've failed. I'm doing the following:
library(rvest)
library(dplyr)
url <- 'http://www.hltv.org/?pageid=173&playerid=9216'
info <- html_session(url) %>% read_html()
info %>% html_node('.covSmallHeadline') %>% html_text()
Can you please tell me what the right CSS selector is?
If you look at the source, those tables aren't HTML tables, but just piles of divs with inconsistent nesting and inline CSS for alignment. It's therefore easiest to grab all the text and fix the strings afterwards, since each value is either fully numeric or not numeric at all.
library(rvest)
library(tidyverse)
h <- 'http://www.hltv.org/?pageid=173&playerid=9216' %>% read_html()

h %>% html_nodes('.covGroupBoxContent') %>% .[-1] %>%
  html_text(trim = TRUE) %>%
  strsplit('\\s*\\n\\s*') %>%
  setNames(map_chr(., ~.x[1])) %>%
  map(~.x[-1]) %>%
  map(~data_frame(variable = gsub('[.0-9]+', '', .x),
                  value = parse_number(.x)))
#> $`Key stats`
#> # A tibble: 9 × 2
#> variable value
#> <chr> <dbl>
#> 1 Total kills 9199.00
#> 2 Headshot %% 46.00
#> 3 Total deaths 6910.00
#> 4 K/D Ratio 1.33
#> 5 Maps played 438.00
#> 6 Rounds played 11242.00
#> 7 Average kills per round 0.82
#> 8 Average deaths per round 0.61
#> 9 Rating (?) 1.21
#>
#> $TeammatesRating
#> # A tibble: 4 × 2
#> variable value
#> <chr> <dbl>
#> 1 Gabriel 'FalleN' Toledo 1.11
#> 2 Fernando 'fer' Alvarenga 1.11
#> 3 Joao 'felps' Vasconcellos 1.09
#> 4 Epitacio 'TACO' de Melo 0.98
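The question also asked for the teammate URLs, which html_text() discards. Something along these lines could pull the hrefs, though the selector is an assumption and needs checking against the page:
# Assumed selector -- inspect the Teammates box to confirm it
teammate_urls <- h %>%
  html_nodes('.covGroupBoxContent a') %>%
  html_attr('href')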
