Simple rvest code returning empty character - r

I am trying to scrape the href from the 'Printer-Friendly Minutes' link on this website using SelectorGadget. This usually works, but this time I'm just getting an empty character in place of the href I'm trying to grab.
Here's the code:
url <- "http://www.richmond.ca/cityhall/council/agendas/council/2021/012521_minutes.htm"
try <- url %>% read_html %>% html_nodes(".first-child a") %>% html_attr("href")
Anyone know what might be going wrong?

As PFM is used as the abbreviation for the minutes, you can target the href by that substring:
library(rvest)
library(magrittr)
url <- "http://www.richmond.ca/cityhall/council/agendas/council/2021/012521_minutes.htm"
read_html(url) %>%
  html_element('[href*=PFM]') %>%
  html_attr('href')
You could also use its adjacent sibling relationship to the preceding img tag, which can be nicely targeted by its alt attribute value:
read_html(url) %>%
  html_element('[alt="PDF Document"] + a') %>%
  html_attr('href')

I think you have just not selected the node correctly. It's really helpful to learn XPath, which allows precise node navigation in HTML:
library(rvest)
domain <- "http://www.richmond.ca"
url <- paste0(domain, "/cityhall/council/agendas/council/2021/012521_minutes.htm")
pdf_url <- url %>%
  read_html() %>%
  html_nodes(xpath = "//a[@title='PFM_CNCL_012521']") %>%
  html_attr("href") %>%
  paste0(domain, .)
pdf_url
#> [1] "http://www.richmond.ca/__shared/assets/PFM_CNCL_01252157630.pdf"
We can see this is a valid link by GETting the result:
httr::GET(pdf_url)
#> Response [https://www.richmond.ca/__shared/assets/PFM_CNCL_01252157630.pdf]
#> Date: 2021-10-18 20:35
#> Status: 200
#> Content-Type: application/pdf
#> Size: 694 kB
#> <BINARY BODY>
Created on 2021-10-18 by the reprex package (v2.0.0)
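If you also want the file itself on disk, a minimal sketch using base R (the local filename is an arbitrary choice):
# mode = "wb" keeps the binary PDF intact on Windows
download.file(pdf_url, "PFM_CNCL_012521.pdf", mode = "wb")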

Related

not able to scrape review score from Airbnb using rvest

library(rvest)
library(dplyr)
page = "https://www.airbnb.ae/rooms/585742764031233504?preview_for_ml=true&source_impression_id=p3_1660929108_esIxWS5HCyk890Im"
### for average Review score
page %>% read_html() %>% html_nodes("._17p6nbba") %>% html_text2()
### for review count
page %>% read_html() %>% html_nodes("span._s65ijh7") %>% html_text2()
Both are returning "character(0)"
You can get this in JSON format from a script tag, without needing Selenium:
library(rvest)
library(jsonlite)
json1 <- read_html("https://www.airbnb.ae/rooms/585742764031233504?preview_for_ml=true&source_impression_id=p3_1660929108_esIxWS5HCyk890Im") %>%
  html_element(xpath = "/html/body//script[@id='data-deferred-state']") %>%
  html_text() %>%
  fromJSON()
json1$niobeMinimalClientData[[1]][[2]]$data$presentation$stayProductDetailPage$sections$metadata$loggingContext$eventDataLogging
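If the nesting changes, a quick usage sketch for exploring the parsed structure before hard-coding a path like the one above:
str(json1$niobeMinimalClientData, max.level = 3, list.len = 5)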
The trick I learned for these instances is to download the raw HTML with read_html(url), write it to disk with xml2::write_html(), open the file in Chrome, inspect, Ctrl+F for the search term (such as 4.50), find that element, and then parse the JSON.
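A minimal sketch of that workflow (the snapshot filename is an arbitrary choice):
library(rvest)
library(xml2)
# snapshot the raw HTML exactly as rvest sees it
page <- read_html("https://www.airbnb.ae/rooms/585742764031233504?preview_for_ml=true&source_impression_id=p3_1660929108_esIxWS5HCyk890Im")
write_html(page, "page_snapshot.html")
# open the snapshot in a browser and Ctrl+F for the rendered value (e.g. 4.50)
browseURL("page_snapshot.html")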

Scraping links from a web page at a specific position

I am trying to collect a number of links from a website.
For example, I have the following page, and my idea was to collect the link where it says 'leer más', which is where I got the XPath from.
library(httr)
library(rvest)
url <- "https://www.fotocasa.es/es/alquiler/viviendas/madrid-capital/todas-las-zonas/l/181"
# desktop_agents is a previously defined vector of user-agent strings
x <- GET(url, add_headers('user-agent' = desktop_agents[sample(1:10, 1)]))
x %>%
  read_html() %>%
  html_nodes(xpath = '//*[@id="App"]/div[2]/div[1]/main/div/div[3]/section/article[1]/div/a/p/span[2]')
This gives me the following but not the link:
{xml_nodeset (1)}
[1] <span class="re-CardDescription-link">Leer más</span>
Additionally, I thought about collecting all links:
x %>%
read_html() %>%
html_nodes("a") %>%
html_attr("href")
This gives me a lot of links but not the links to the individual webpages I want.
I would like to have a list of links such as:
https://www.fotocasa.es/es/alquiler/vivienda/madrid-capital/aire-acondicionado-calefaccion-terraza-trastero-ascensor-amueblado-internet/162262978/d
https://www.fotocasa.es/es/alquiler/vivienda/madrid-capital/aire-acondicionado-calefaccion-trastero-ascensor-amueblado/159750574/d
https://www.fotocasa.es/es/alquiler/vivienda/madrid-capital/aire-acondicionado-calefaccion-jardin-zona-comunitaria-ascensor-patio-amueblado-parking-television-internet-piscina/162259162/d
Those links are stored inside a JavaScript object within a script tag. You can regex out the string defining that object, do some unescapes so that jsonlite can parse it, then apply a custom function to the resulting JSON object to extract just the URLs of interest.
library(rvest)
library(jsonlite)
library(magrittr)
library(stringr)
library(purrr)
link <- 'https://www.fotocasa.es/es/alquiler/viviendas/madrid-capital/todas-las-zonas/l/181'
p <- read_html(link) %>% html_text()
s <- str_match(p, 'window\\.__INITIAL_PROPS__ = JSON\\.parse\\("(.*)".*?;')[,2]
data <- jsonlite::parse_json(gsub('\\\\\\"', '\\\"', gsub('\\\\"', '"', s)))
links <- purrr::map(data$initialSearch$result$realEstates, ~ .x$detail$`es-ES` %>% url_absolute(link))
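Note that purrr::map() returns a list; a quick usage sketch to flatten it into a character vector:
urls <- unlist(links)
head(urls)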

character(0) after scraping webpage in read_html

I'm trying to scrape the number "1,335,000" (shown at the bottom of the page). I wrote the following code in R.
t2 <- read_html("https://fortune.com/company/amazon-com/fortune500/")
employee_number <- t2 %>%
  rvest::html_nodes('body') %>%
  xml2::xml_find_all("//*[contains(@class, 'info__value--2AHH7')]") %>%
  rvest::html_text()
However, when I call "employee_number", it gives me "character(0)". Can anyone help me figure out why?
As Dave2e pointed out, the page uses JavaScript, so rvest alone can't render it.
url <- "https://fortune.com/company/amazon-com/fortune500/"
# launch browser
library(RSelenium)
driver <- rsDriver(browser = "firefox")
remDr <- driver[["client"]]
remDr$navigate(url)
remDr$getPageSource()[[1]] %>%
  read_html() %>%
  html_nodes(xpath = '//*[@id="content"]/div[5]/div[1]/div[1]/div[12]/div[2]') %>%
  html_text()
[1] "1,335,000"
Data is loaded dynamically from a script tag, so there is no need for the expense of a browser. You could either extract the entire JavaScript object within the script, pass it to jsonlite to handle as JSON, and then extract what you want, or, if you are just after the employee count, regex it out of the response text.
library(rvest)
library(stringr)
library(magrittr)
library(jsonlite)
page <- read_html('https://fortune.com/company/amazon-com/fortune500/')
data <- page %>% html_element('#preload') %>% html_text() %>%
  stringr::str_match("PRELOADED_STATE__ = (.*);") %>% .[, 2] %>% jsonlite::parse_json()
print(data$components$page$`/company/amazon-com/fortune500/`[[6]]$children[[4]]$children[[3]]$config$employees)
# shorter version
print(page %>% html_text() %>% stringr::str_match('"employees":"(\\d+)?"') %>% .[, 2] %>% as.integer() %>% format(big.mark = ","))

How to select "href" of a web page of a specific "target"?

<a class="image teaser-image ng-star-inserted" target="_self" href="/politik/inland/neuwahlen-2022-welche-szenarien-jetzt-realistisch-sind/401773131">
I just want to extract the "href" (for example, from the HTML tag above) in order to concatenate it with the domain name of this website, "https://kurier.at", and scrape all articles on the home page.
I tried the following code:
library(rvest)
library(lubridate)
kurier_wbpg <- read_html("https://kurier.at")
# I just want the "a" tags which have the attribute target="_self"
articleLinks <- kurier_wbpg %>%
  html_elements("a") %>%
  html_elements(css = "tag[attribute=_self]") %>%
  html_attr("href") %>%
  paste("https://kurier.at", ., sep = "")
When I execute the code up to the html_attr("href") part of the block above, the result I get is
character(0)
I think something is wrong with how I'm selecting the HTML element. Can anyone help with this?
You need to narrow down your CSS to the second teaser block image, which you can do by using the naming conventions of the classes. You can use url_absolute() to add the domain.
library(rvest)
library(magrittr)
url <- 'https://kurier.at/'
result <- read_html(url) %>%
  html_element('.teasers-2 .image') %>%
  html_attr('href') %>%
  url_absolute(url)
Same principle to get all teasers:
results <- read_html(url) %>%
  html_elements('.teaser .image') %>%
  html_attr('href') %>%
  url_absolute(url)
Not sure if you want the bottom block of 5 included. If so, you can again use classes:
articles <- read_html(url) %>%
  html_elements('.teaser-title') %>%
  html_attr('href') %>%
  url_absolute(url)
It works with XPath:
library(rvest)
kurier_wbpg <- read_html("https://kurier.at")
articleLinks <- kurier_wbpg %>%
  html_elements("a") %>%
  html_elements(xpath = '//*[@target="_self"]') %>%
  html_attr('href') %>%
  paste0("https://kurier.at", .)
articleLinks
# [1] "https://kurier.at/plus"
# [2] "https://kurier.at/coronavirus"
# [3] "https://kurier.at/politik"
# [4] "https://kurier.at/politik/inland"
# [5] "https://kurier.at/politik/ausland"
#...
#...

rvest how to get last page number

Trying to get the last page number:
library(rvest)
url <- "https://www.immobilienscout24.de/Suche/de/wohnung-kaufen"
page <- read_html(url)
last_page_number <- page %>%
  html_nodes("#pageSelection > select > option") %>%
  html_text() %>%
  length()
The result is empty for some reason.
I can access the pages via this URL; for example, to get page #3:
https://www.immobilienscout24.de/Suche/de/wohnung-kaufen?pagenumber=3
You are headed in the right direction, but I think you have the wrong CSS selectors. Try:
library(rvest)
url <- 'https://www.immobilienscout24.de/Suche/de/wohnung-kaufen'
url %>%
  read_html() %>%
  html_nodes('div.select-container select option') %>%
  html_text() %>%
  tail(1L)
#[1] "1650"
An alternative:
url %>%
  read_html() %>%
  html_nodes('div.select-container select option') %>%
  magrittr::extract2(length(.)) %>%
  html_text()
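With the last page number in hand, a minimal sketch (assuming the pagenumber query parameter shown in the question works for every page) to build all the results-page URLs:
last_page <- url %>%
  read_html() %>%
  html_nodes('div.select-container select option') %>%
  html_text() %>%
  tail(1L) %>%
  as.integer()
# append ?pagenumber=1 .. ?pagenumber=last_page to the base search URL
page_urls <- paste0(url, "?pagenumber=", seq_len(last_page))
head(page_urls)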
