Click button using R + httr

I'm trying to scrape randomly generated names from a website.
library(httr)
library(rvest)
url <- "https://letsmakeagame.net//tools/PlanetNameGenerator/"
mywebsite <- read_html(url) %>%
html_nodes(xpath="//div[contains(@id,'title')]")
However, that does not work. I'm assuming I have to «click» the «generate» button before extracting the content. Is there a simple way (without RSelenium) to achieve that?
Something similar to:
POST(url,
body = list("EntryPoint.generate()" = T),
encode = "form") -> res
res_t <- content(res, as="text")
Thanks!

rvest isn't much help here: the planet names are not requested from a remote service but generated locally with JavaScript; that's what the EntryPoint.generate() call does. A relatively simple way is to use chromote, though its session/process closing seems a bit messy at the moment:
library(chromote)
b <- ChromoteSession$new()
{
b$Page$navigate("https://letsmakeagame.net/tools/PlanetNameGenerator")
b$Page$loadEventFired()
}
# call EntryPoint.generate(), read the result from the <p id="title"></p> element,
# replicate 10x
replicate(10, b$Runtime$evaluate('EntryPoint.generate();document.getElementById("title").innerText')$result$value)
#> [1] "Torade" "Ukiri" "Giconerth" "Dunia" "Brihoria"
#> [6] "Tiulaliv" "Giahiri" "Zuthewei 4A" "Elov" "Brachomia"
b$close()
#> [1] TRUE
b$parent$close()
#> Error in self$send_command(msg, callback = callback_, error = error_, : Chromote object is closed.
b$parent$get_browser()$close()
#> [1] TRUE
Created on 2023-01-25 with reprex v2.0.2

Related

rvest object cloned with rlang::duplicate is not properly cloned

rvest doesn't seem to offer any way to extract text from the parent object only (ignoring children). One workaround uses xml_remove(), but that mutates the original object (and the document it came from).
I looked to rlang::duplicate(), which promises that "modifying the copy leaves the original object intact", but the clone does not appear to be truly independent. For example:
require(rvest)
h = '<ul>
<li id="target">
text to extract
<ul><li>text to ignore</li><li>this too</li></ul>
</li>
</ul>'
doc = xml2::read_html(h)
x = html_node(doc, '#target')
html_text(x)
#> [1] "\ntext to extract\ntext to ignorethis too"
Now clone x, remove its children, and extract the text:
x2 = rlang::duplicate(x, shallow = FALSE)
children = html_children(x2)
xml2::xml_remove(children)
html_text(x2)
#> [1] "\ntext to extract\n"
That works as intended, however x has also been mutated:
html_text(x)
#> [1] "\ntext to extract\n"
Any suggestions why and how to workaround this? I do not want to start re-attaching children..
First of all, let me say that I think you can solve the issue without copying the data. I'm not an expert in XPath, but I think you can use it to select only direct text descendants, ignoring text nested in other XML nodes. I.e. the following seems to do the trick without any copy (x defined as in your question):
html_text(html_elements(x, xpath = "text()"))
# [1] "\ntext to extract\n"
That being said, I also have an answer to the question on how to make a deep copy:
The problem is that rlang::duplicate() can only copy R data structures. However, rvest builds on xml2, and xml2 builds on the C library libxml2.
When you create the xml_node object in R, the corresponding data structure is created in libxml2. On the R side, there is basically just a pointer to the libxml2 object. So rlang::duplicate() will only create a copy of that pointer, but not of the underlying data. It cannot do so, because it has no access to it as it is in a different library (that rlang doesn't know of).
The easiest way to create a copy of the underlying data seems to be to serialize and deserialize the XML. I suspect this is not very efficient, though.
Example:
Read in the original data:
require(rvest)
h <- '<ul>
<li id="target">
text to extract
<ul><li>text to ignore</li><li>this too</li></ul>
</li>
</ul>'
doc <- xml2::read_html(h)
x <- html_node(doc, '#target')
Create two copies - one with rlang::duplicate() and one with xml2::xml_unserialize():
x1 <- rlang::duplicate(x, shallow = FALSE)
x2 <- xml2::xml_unserialize(xml2::xml_serialize(x, NULL))
Check that x and x1 are in fact identical, while x2 is a true copy (the memory locations you get will of course be different from the ones shown here):
x$doc
# <pointer: 0x0000023911334ea0>
x1$doc
# <pointer: 0x0000023911334ea0>
# --> same as x
x2$doc
# <pointer: 0x00000239113377d0>
# --> different to x
Test that everything works as intended:
children <- html_children(x2)
xml2::xml_remove(children)
html_text(x2)
# [1] "\n text to extract\n "
html_text(x)
# [1] "\n text to extract\n text to ignorethis too"
Another potential solution (maybe a more general approach) is to use the html_children() function to obtain the text of all the child nodes and then remove that from the full text.
require(rvest)
h = '<ul>
<li id="target">
text to extract
<ul><li>text to ignore</li><li>this too</li></ul>
</li>
</ul>'
doc = xml2::read_html(h)
x = html_node(doc, '#target')
fulltext <- html_text(x)
# [1] "\ntext to extract\ntext to ignorethis too"
# find the text in the child nodes
childtext <- html_children(x) %>% html_text()
# "text to ignorethis too"
# replace the child node text with an empty string
gsub(childtext, "", fulltext) %>% trimws()
#"text to extract"
#alternative using the text from the first child node
firstchild <- xml2::xml_child(x, search = 1) %>% xml2::xml_text()
gsub(paste0(firstchild, ".*"), "", fulltext)
Of course, if there are additional newline ("\n") or other formatting characters, the gsub() may break.
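One way to make the substitution a bit more robust is to treat the child text as a literal string rather than as a regular expression, e.g.:
# fixed = TRUE avoids regex surprises if the child text happens to contain
# metacharacters such as ".", "(" or "+"
gsub(childtext, "", fulltext, fixed = TRUE) %>% trimws()
# "text to extract"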

Ignore error when importing JSON files in R

I have this for loop that downloads a JSON file from a Solr search server. It loops over a vector that contains keywords (100, in this case):
library(jsonlite)
for (i in 1:100) {
docs <- fromJSON(paste("http://myurl.com/solr/select?df=topic&fq=",keywords[i],"&indent=on&q=*:*&rows=1&wt=json",sep=""))
numFound <- docs$response$numFound
print(numFound)
}
It works fine, until it reaches a certain keyword that is not found on the solr, and returns this error:
Error in open.connection(con, "rb") : HTTP error 400.
And then the loop stops.
Is there a way to ignore the error and proceed the loop?
I've read something using tryCatch but still couldn't figure it out.
Simpler than tryCatch, you can use the function try() inside your keyword loop. This will attempt to load the URL; if an error is encountered, it will print the error but continue to the next keyword.
library(jsonlite)
for (i in 1:100) {
try({
docs <- fromJSON(paste("http://myurl.com/solr/select?df=topic&fq=",keywords[i],"&indent=on&q=*:*&rows=1&wt=json",sep=""))
numFound <- docs$response$numFound
print(numFound)
})
}
If you also don't want to have the errors printed, specify silent = TRUE:
library(jsonlite)
for (i in 1:100) {
try({
docs <- fromJSON(paste("http://myurl.com/solr/select?df=topic&fq=",keywords[i],"&indent=on&q=*:*&rows=1&wt=json",sep=""))
numFound <- docs$response$numFound
print(numFound)
}, silent = TRUE)
}
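For completeness, since the question mentions tryCatch: an equivalent version handles the error explicitly and, in this sketch, records NA for keywords that aren't found:
library(jsonlite)
for (i in 1:100) {
  numFound <- tryCatch({
    docs <- fromJSON(paste("http://myurl.com/solr/select?df=topic&fq=",keywords[i],"&indent=on&q=*:*&rows=1&wt=json",sep=""))
    docs$response$numFound
  }, error = function(e) NA) # on HTTP error 400 (or any other error), return NA instead of stopping
  print(numFound)
}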
I'm partial to purrr's safely for this kind of task, which works well in purrr's map functions. You can test it by getting JSONs from GitHub's API:
keywords <- c("hadley", "gershomtripp", "lsjdflkaj")
url <- "https://api.github.com/users/{.}/repos"
Now we can get the JSONs and extract the repo IDs
library(jsonlite)
library(purrr)
library(glue)
json_list <- map(keywords, safely(~ fromJSON(glue(url)) %>% .$id))
This will return a list of elements containing result and error. If there was an error it will be saved in error; otherwise the results will be saved in result.
[[1]]
[[1]]$result
[1] 40423928 40544418 14984909 12241750 5154874 9324319 20228011 82348 888200 3116998
[11] 8296284 137344416 133734429 2788278 28724058 9470424 116708612 34325557 41144 41157
[21] 78543290 66588778 35225488 14507273 15718805 18562209 12522 115742443 119107571 201908
[[1]]$error
NULL
[[2]]
[[2]]$result
[1] 150995700 141743224 127107806 130802586 185857872 131488780 148619375 165221804 135417803 127116088
[11] 181662388 173351888 127131146 136896011
[[2]]$error
NULL
[[3]]
[[3]]$result
NULL
[[3]]$error
<simpleError in open.connection(con, "rb"): HTTP error 404.>
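To work with that structure, purrr::transpose() flips the list of result/error pairs into a pair of lists, which makes it easy to keep only the successful calls (a short follow-up sketch):
flipped <- purrr::transpose(json_list)
ok <- purrr::map_lgl(flipped$error, is.null)  # TRUE where the request succeeded
repo_ids <- flipped$result[ok]                # results of the successful calls
failed_keywords <- keywords[!ok]              # keywords that errored (e.g. HTTP 404)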

Webscraping with R a continuous page with "view more"

I'm new to R and need to scrape the titles and the dates on the posts on this website https://www.healthnewsreview.org/news-release-reviews/
Using rvest I was able to write the basic code to get the info:
url <- 'https://www.healthnewsreview.org/?post_type=news-release-review&s='
webpage <- read_html(url)
date_data_html <- html_nodes(webpage,'span.date')
date_data <- html_text(date_data_html)
head(date_data)
webpage <- read_html(url)
title_data_html <- html_nodes(webpage,'h2')
title_data <- html_text(title_data_html)
head(title_data)
But since the website only displays 10 items at first, and you then have to click "view more", I don't know how to scrape the whole site. Thank you!
Introducing third-party dependencies should be done as a last resort. RSelenium (which r2evans originally posited as the only solution) is not necessary the vast majority of the time, including now. (It is necessary for gosh-awful sites built on horrible tech like SharePoint, since maintaining state without a browser context for those is more pain than it's worth.)
If we start with the main page:
library(rvest)
pg <- read_html("https://www.healthnewsreview.org/news-release-reviews/")
We can get the first set of links (10 of them):
pg %>%
html_nodes("div.item-content") %>%
html_attr("onclick") %>%
gsub("^window.location.href='|'$", "", .)
## [1] "https://www.healthnewsreview.org/news-release-review/more-unwarranted-hype-over-the-unique-benefits-of-proton-therapy-this-time-in-combo-with-thermal-therapy/"
## [2] "https://www.healthnewsreview.org/news-release-review/caveats-and-outside-expert-balance-speculative-claim-that-anti-inflammatory-diet-might-benefit-bipolar-disorder-patients/"
## [3] "https://www.healthnewsreview.org/news-release-review/plug-for-study-of-midwifery-for-low-income-women-is-fuzzy-on-benefits-costs/"
## [4] "https://www.healthnewsreview.org/news-release-review/tiny-safety-trial-prematurely-touts-clinical-benefit-of-cancer-vaccine-for-her2-positive-cancers/"
## [5] "https://www.healthnewsreview.org/news-release-review/claim-that-milk-protein-alleviates-chemotherapy-side-effects-based-on-study-of-just-12-people/"
## [6] "https://www.healthnewsreview.org/news-release-review/observational-study-cant-prove-surgery-better-than-more-conservative-prostate-cancer-treatment/"
## [7] "https://www.healthnewsreview.org/news-release-review/recap-of-mental-imagery-for-weight-loss-study-requires-that-readers-fill-in-the-blanks/"
## [8] "https://www.healthnewsreview.org/news-release-review/bmjs-attempt-to-hook-readers-on-benefits-of-golf-slices-way-out-of-bounds/"
## [9] "https://www.healthnewsreview.org/news-release-review/time-to-test-all-infants-gut-microbiomes-or-is-this-a-product-in-search-of-a-condition/"
## [10] "https://www.healthnewsreview.org/news-release-review/zika-vaccine-for-brain-cancer-pr-release-headline-omits-crucial-words-in-mice/"
I guess you want to scrape the content of those ^^ so have at it.
But, there's that pesky "View more" button.
When you click on it, it issues a POST request to /wp-admin/admin-ajax.php (you can see this in your browser's developer tools).
With curlconverter we can convert that request into a callable httr function, and then wrap that call in another function with a pagination parameter:
view_more <- function(current_offset=10) {
httr::POST(
url = "https://www.healthnewsreview.org/wp-admin/admin-ajax.php",
httr::add_headers(
`X-Requested-With` = "XMLHttpRequest"
),
body = list(
action = "viewMore",
current_offset = as.character(as.integer(current_offset)),
page_id = "22332",
btn = "btn btn-gray",
active_filter = "latest"
),
encode = "form"
) -> res
list(
links = httr::content(res) %>%
html_nodes("div.item-content") %>%
html_attr("onclick") %>%
gsub("^window.location.href='|'$", "", .),
next_offset = current_offset + 4
)
}
Now, we can run it (the offset defaults to 10, which is what the first "View more" click sends):
x <- view_more()
str(x)
## List of 2
## $ links : chr [1:4] "https://www.healthnewsreview.org/news-release-review/university-pr-misleads-with-claim-that-preliminary-blood-t"| __truncated__ "https://www.healthnewsreview.org/news-release-review/observational-study-on-testosterone-replacement-therapy-fo"| __truncated__ "https://www.healthnewsreview.org/news-release-review/recap-of-lung-cancer-screening-test-relies-on-hyperbole-co"| __truncated__ "https://www.healthnewsreview.org/news-release-review/ties-to-drugmaker-left-out-of-postpartum-depression-drug-study-recap/"
## $ next_offset: num 14
We can pass that new offset to another call:
y <- view_more(x$next_offset)
str(y)
## List of 2
## $ links : chr [1:4] "https://www.healthnewsreview.org/news-release-review/sweeping-claims-based-on-a-single-case-study-of-advanced-c"| __truncated__ "https://www.healthnewsreview.org/news-release-review/false-claims-of-benefit-weaken-news-release-on-experimenta"| __truncated__ "https://www.healthnewsreview.org/news-release-review/contrary-to-claims-heart-scans-dont-save-lives-but-subsequ"| __truncated__ "https://www.healthnewsreview.org/news-release-review/breastfeeding-for-stroke-prevention-kudos-to-heart-associa"| __truncated__
## $ next_offset: num 18
You can do the hard part of scraping the initial article count (it's on the main page) and doing the math to put that in a loop and stop efficiently.
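A sketch of what that loop could look like, assuming the ten links scraped above are stored in all_links and that n_total holds the article count you pulled from the main page:
offset <- 10
while (length(all_links) < n_total) {
  batch <- view_more(offset)
  if (length(batch$links) == 0) break  # stop if the endpoint returns nothing more
  all_links <- c(all_links, batch$links)
  offset <- batch$next_offset
  Sys.sleep(5)                         # be polite between requests
}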
NOTE: If you are doing this scraping to archive the complete site (whether for them or independently), since it's dying at the end of the year, you should say so in a comment, as I have better suggestions for that use case than hand-coding in any programming language. There are free, industrial "site preservation" frameworks designed to preserve these types of dying resources. If you just need the article content, then an iterator and a custom scraper is likely a fine choice.
NOTE also that the pagination increment of 4 is what the site does when you literally press the button, so this just mimics that functionality.

Rvest unexpectedly stopped working - Scraping tables

After having used the following script for a while, it suddenly stopped working. I constructed a simple function that finds a table - based on its xpath - within a web page.
library(rvest)
url <- c('http://finanzalocale.interno.it/apps/floc.php/certificati/index/codice_ente/1010020010/cod/4/anno/1999/md/0/cod_modello/CCOU/tipo_modello/U/cod_quadro/08'
find_table <- function(x){read_html(x) %>%
html_nodes(xpath = '//*[@id="center"]/table[2]') %>%
html_table() %>%
as.data.frame()}
table <- find_table(url)
I also tried to use httr::GET before read_html, passing the following argument:
query = list(r_date = "2017-12-22")
but nothing changed. Any ideas?
Well, that code doesn't work since you missed a ) in the url <- line.
We'll add in httr:
library(httr)
library(rvest)
url is the name of a base function, and using base function names as variables can make problems in code hard to debug. Unless you write perfect code, it's a good idea not to reuse those names.
URL <- c('http://finanzalocale.interno.it/apps/floc.php/certificati/index/codice_ente/1010020010/cod/4/anno/1999/md/0/cod_modello/CCOU/tipo_modello/U/cod_quadro/08')
I don't know if you know the "rules" about web scraping, but if you're making repeated requests to this site, then a "crawl delay" should be used. They don't have one set in their robots.txt, so 5 seconds is the accepted alternative. I point this out because you may be getting rate-limited.
find_table <- function(x, crawl_delay=5) {
Sys.sleep(crawl_delay) # you can put this in a loop vs here if you aren't often doing repeat gets
# switch to httr::GET so you can get web server interaction info.
# since you're scraping, it's expected that you use a custom user agent
# that also supplies contact info.
res <- GET(x, user_agent("My scraper"))
# check to see if there's a non HTTP 200 response which there may be
# if you're getting rate-limited
stop_for_status(res)
# now, try to do the parsing. It looks like you're trying to target a
# single table, so I switched it from `html_nodes()` to `html_node()`:
# the former returns a node set (a list) and the downstream steps
# expect a single table.
content(res, "parsed") %>%
html_node(xpath = '//*[@id="center"]/table[2]') %>%
html_table() %>%
as.data.frame()
}
table is a base function name, too (see above), so use a different name:
result <- find_table(URL)
Worked fine for me:
str(result)
## 'data.frame': 11 obs. of 5 variables:
## $ ENTI EROGATORI : chr "Cassa DD.PP." "Istituti di previdenza amministrati dal Tesoro" "Istituto per il credito sportivo" "Aziende di credito" ...
## $ : logi NA NA NA NA NA NA ...
## $ ACCENSIONE ACCERTAMENTI : chr "4.638.500,83" "0,00" "0,00" "953.898,47" ...
## $ ACCENSIONE RISCOSSIONI C|COMP. + RESIDUI: chr "2.177.330,12" "0,00" "129.114,22" "848.935,84" ...
## $ RIMBORSO IMPEGNI : chr "438.696,57" "975,07" "45.584,55" "182.897,01" ...

List and description of all packages in CRAN from within R

I can get a list of all the available packages with the function:
ap <- available.packages()
But how can I also get a description of these packages from within R, so I can have a data.frame with two columns: package and description?
Edit of an almost ten-year-old accepted answer. What you likely want is not to scrape (unless you want to practice scraping) but to use an existing interface: tools::CRAN_package_db(). Example:
> db <- tools::CRAN_package_db()[, c("Package", "Description")]
> dim(db)
[1] 18978 2
>
The function (currently) returns 66 columns, of which the two of interest here are a subset.
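The same interface also exposes the "Title" column recommended in the original answer below, so no scraping is needed for that either:
db2 <- tools::CRAN_package_db()[, c("Package", "Title")]
head(db2)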
I actually think you want "Package" and "Title", as the "Description" can run to several lines. So here is the former; just put "Description" in the final subset if you really want it:
R> ## from http://developer.r-project.org/CRAN/Scripts/depends.R and adapted
R>
R> require("tools")
R>
R> getPackagesWithTitle <- function() {
+ contrib.url(getOption("repos")["CRAN"], "source")
+ description <- sprintf("%s/web/packages/packages.rds",
+ getOption("repos")["CRAN"])
+ con <- if(substring(description, 1L, 7L) == "file://") {
+ file(description, "rb")
+ } else {
+ url(description, "rb")
+ }
+ on.exit(close(con))
+ db <- readRDS(gzcon(con))
+ rownames(db) <- NULL
+
+ db[, c("Package", "Title")]
+ }
R>
R>
R> head(getPackagesWithTitle()) # I shortened one Title here...
Package Title
[1,] "abc" "Tools for Approximate Bayesian Computation (ABC)"
[2,] "abcdeFBA" "ABCDE_FBA: A-Biologist-Can-Do-Everything of Flux ..."
[3,] "abd" "The Analysis of Biological Data"
[4,] "abind" "Combine multi-dimensional arrays"
[5,] "abn" "Data Modelling with Additive Bayesian Networks"
[6,] "AcceptanceSampling" "Creation and evaluation of Acceptance Sampling Plans"
R>
Dirk has provided a terrific answer, and after finishing my solution and then seeing his I debated for some time whether to post mine for fear of looking silly. But I decided to post it anyway, for two reasons:
it is informative to beginning scrapers like myself
it took me a while to do and so why not :)
I approached this thinking I'd need to do some web scraping, and chose crantastic as the site to scrape from. First I'll provide the code, and then two scraping resources that have been very helpful to me as I learn:
library(RCurl)
library(XML)
URL <- "http://cran.r-project.org/web/checks/check_summary.html#summary_by_package"
packs <- na.omit(XML::readHTMLTable(doc = URL, which = 2, header = T,
strip.white = T, as.is = FALSE, sep = ",", na.strings = c("999",
"NA", " "))[, 1])
Trim <- function(x) {
gsub("^\\s+|\\s+$", "", x)
}
packs <- unique(Trim(packs))
u1 <- "http://crantastic.org/packages/"
len.samps <- 10 #for demo purpose; use:
#len.samps <- length(packs) # for all of them
URL2 <- paste0(u1, packs[seq_len(len.samps)])
scraper <- function(urls){ #function to grab description
doc <- htmlTreeParse(urls, useInternalNodes=TRUE)
nodes <- getNodeSet(doc, "//p")[[3]]
return(nodes)
}
info <- sapply(seq_along(URL2), function(i) try(scraper(URL2[i]), TRUE))
info2 <- sapply(info, function(x) { #replace errors with NA
if(class(x)[1] != "XMLInternalElementNode"){
NA
} else {
Trim(gsub("\\s+", " ", xmlValue(x)))
}
}
)
pack_n_desc <- data.frame(package=packs[seq_len(len.samps)],
description=info2) #make a dataframe of it all
Resources:
talkstats.com thread on web scraping (great beginner examples)
w3schools.com site on HTML (very helpful)
I wanted to try to do this using an HTML scraper (rvest) as an exercise, since available.packages() in the OP doesn't contain the package Descriptions.
library('rvest')
url <- 'https://cloud.r-project.org/web/packages/available_packages_by_name.html'
webpage <- read_html(url)
data_html <- html_nodes(webpage,'tr td')
length(data_html)
P1 <- html_nodes(webpage,'td:nth-child(1)') %>% html_text(trim=TRUE) # XML: The Package Name
P2 <- html_nodes(webpage,'td:nth-child(2)') %>% html_text(trim=TRUE) # XML: The Description
P1 <- P1[lengths(P1) > 0 & P1 != ""] # Remove NULL and empty ("") items
length(P1); length(P2);
mdf <- data.frame(P1, P2, row.names=NULL)
colnames(mdf) <- c("PackageName", "Description")
# This is the problem! It lists large sets column-by-column,
# instead of row-by-row. Try with the full list to see what happens.
print(mdf, right=FALSE, row.names=FALSE)
# PackageName Description
# A3 Accurate, Adaptable, and Accessible Error Metrics for Predictive\nModels
# abbyyR Access to Abbyy Optical Character Recognition (OCR) API
# abc Tools for Approximate Bayesian Computation (ABC)
# abc.data Data Only: Tools for Approximate Bayesian Computation (ABC)
# ABC.RAP Array Based CpG Region Analysis Pipeline
# ABCanalysis Computed ABC Analysis
# For small sets we can use either:
# mdf[1:6,] #or# head(mdf, 6)
However, although this works quite well for a small subset of the list, I ran into a display problem with the full list, where the data would be shown either column-by-column or unaligned. It would have been great to have this paged and properly formatted in a new window somehow. I tried using page(), but I couldn't get it to work very well.
EDIT:
The recommended method is not the above, but rather Dirk's suggestion (from the comments):
db <- tools::CRAN_package_db()
colnames(db)
mdf <- data.frame(db[, "Package"], db[, "Description"]) # select columns by name rather than position
colnames(mdf) <- c("Package", "Description")
print(mdf, right=FALSE, row.names=FALSE)
However, this still suffers from the display problem mentioned...
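One way to work around that (a sketch, not the only option) is to flatten and truncate the Description text before printing, or open the data frame in the interactive viewer:
mdf$Description <- gsub("\\s+", " ", mdf$Description) # collapse newlines and repeated spaces
mdf$Description <- strtrim(mdf$Description, 80)       # truncate so rows fit the console width
print(head(mdf, 20), right = FALSE, row.names = FALSE)
# or, interactively:
# View(mdf)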
