Extract href attribute using RSelenium in R

I am trying to get the store addresses of Apple Stores for multiple countries using RSelenium.
library(RSelenium)
library(tidyverse)
library(netstat)
# start the server
rs_driver_object <- rsDriver(browser = "chrome",
                             chromever = "100.0.4896.60",
                             verbose = F,
                             port = free_port())
# create a client object
remDr <- rs_driver_object$client
# maximise window size
remDr$maxWindowSize()
# navigate to the website
remDr$navigate("https://www.apple.com/uk/retail/storelist/")
# click on search bar
search_box <- remDr$findElement(using = "id", "dropdown")
country_name <- "United States" # for a single country. I can loop over multiple countries
# in the search box, pass on the country name and hit enter
search_box$sendKeysToElement(list(country_name, key = "enter"))
search_box$clickElement() # I am not sure if I need to click, but I am doing it anyway
The page now shows me the location of each store. Each store has a hyperlink that takes me to the store's page, where the full address that I want to extract is listed.
However, I am stuck on how to click on the individual store addresses in this last step.
I thought I would get the href for all the stores on this page:
store_address <- remDr$findElement(using = 'class', 'store-address')
store_address$getElementAttribute('href')
But it returns an empty list. How do I proceed from here?

After obtaining the page with the list of stores, we can do:
link <- remDr$getPageSource()[[1]] %>%
  read_html() %>%
  html_nodes('.state') %>%
  html_nodes('a') %>%
  html_attr('href') %>%
  paste0('https://www.apple.com', .)
[1] "https://www.apple.com/retail/thesummit/" "https://www.apple.com/retail/bridgestreet/"
[3] "https://www.apple.com/retail/anchorage5thavenuemall/" "https://www.apple.com/retail/chandlerfashioncenter/"
[5] "https://www.apple.com/retail/santanvillage/" "https://www.apple.com/retail/arrowhead/"

Related

rselenium clicking on dropdown element to collect data

I am trying to collect some area names from a website, and in order to do so I want to click the drop-down boxes to expand the downward-pointing arrows.
That is, on the following page, if I click on the "distritos" drop-down I can see further drop-down options:
https://www.fotocasa.es/es/comprar/viviendas/barcelona-capital/todas-las-zonas/l
For Ciutat Vella, I see I have 4 additional items: Barri Gòtic, El Raval, La Barceloneta and Sant Pare, Sta...
I would like to collect these names as well. I have the following code so far:
library(RSelenium)
library(rvest)
library(tidyverse)
# 1.a) Open URL, click on provincias
rD <- rsDriver(browser="firefox", port=4536L)
remDr <- rD[["client"]]
url2 = "https://www.fotocasa.es/es/comprar/viviendas/barcelona-capital/todas-las-zonas/l"
remDr$navigate(url2)
remDr$maxWindowSize()
# accept cookies
remDr$findElement(using = "xpath",'/html/body/div[1]/div[4]/div/div/div/footer/div/button[2]')$clickElement()
#click on Distrito
remDr$findElement(using = "xpath", '/html/body/div[1]/div[2]/div[1]/div[3]/div/div[1]/div')$clickElement()
html_distrito_full_page = remDr$getPageSource()[[1]] %>%
  read_html()
Distritos_Names = html_distrito_full_page %>%
  html_nodes('.re-GeographicSearchNext-checkboxItem') %>%
  html_nodes('.re-GeographicSearchNext-checkboxItem-literal') %>%
  html_text()
Distritos_Names
Which gives:
[1] "Ciutat Vella" "Eixample" "Gràcia" "Horta - Guinardó" "Les Corts" "Nou Barris" "Sant Andreu" "Sant Martí"
[9] "Sants - Montjuïc" "Sarrià - Sant Gervasi"
However, this is missing the names of the regions inside the drop-down boxes.
How can I collect these drop-down entries as well? That is: use RSelenium to navigate to the page, expand all the downward-facing arrows, and then use rvest to scrape the whole page once they have been expanded.
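For reference, the brute-force RSelenium route described above would look roughly like the sketch below; the arrow selector is a guess (it is not taken from the real page) and needs to be replaced with the actual class from the page markup:
# click every expand arrow (the selector is hypothetical), then re-read the page
arrows <- remDr$findElements(using = "css selector", ".re-GeographicSearchNext-arrow")
for (a in arrows) {
  a$clickElement()
  Sys.sleep(0.5)  # give the list time to expand
}
html_full <- remDr$getPageSource()[[1]] %>% read_html()
all_names <- html_full %>%
  html_nodes('.re-GeographicSearchNext-checkboxItem-literal') %>%
  html_text()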
You could just use rvest to get the mappings by extracting the JavaScript variable housing the mappings + some other data. Use jsonlite to deserialize the extracted string into a JSON object, then apply a custom function to extract the actual mappings for each dropdown. Wrap that function in a map_dfr() call to get a final combined dataframe of all dropdown mappings.
TODO: Review the JSON to see if the magic number 4 can be removed and the correct item to retrieve from the parent list determined dynamically.
library(tidyverse)
library(rvest)
library(jsonlite)
extract_data <- function(x) {
  tibble(
    location = x$literal,
    sub_location = map(x$subLocations, "literal", pluck)
  )
}
p <- read_html("https://www.fotocasa.es/es/comprar/viviendas/barcelona-capital/todas-las-zonas/l") %>%
  html_text()
s <- str_match(p, 'window\\.__INITIAL_PROPS__ = JSON\\.parse\\("(.*)"')[, 2]
data <- jsonlite::parse_json(gsub('\\\\\\"', '\\\"', gsub('\\\\"', '"', s)))
location_data <- data$initialSearch$result$geographicSearch[4]
df <- map_dfr(location_data, extract_data)

Webscraping in R where input is required

I have used the rvest package in R to scrape unique URLs before.
However, I am now stuck with a particular website. The URL stays static, and I need to select a few dropdowns and then scrape the resulting table that appears.
It would be helpful if someone could guide me on what direction to take with websites like these. Is R even capable of doing this?
Edit: I have done some research, and it seems RSelenium can handle such tasks. Unfortunately, I have no exposure to it. Can someone recommend an example, blog, or other online material on using Selenium specifically for clicking and scraping, suitable for a complete beginner?
I have made a blog post about an RSelenium example:
https://guillaumepressiat.github.io/blog/2021/04/RSelenium-paginated-tables
This website contains a lot of material about Selenium; you will have to map it onto the RSelenium API package (the verbs are almost the same in all languages: findElement, etc.): https://www.guru99.com/selenium-tutorial.html
But as an example based on your question, maybe something like this to begin with:
# https://stackoverflow.com/q/67021563/10527496
# java -jar selenium-server-standalone-3.9.1.jar
library(RSelenium)
library(tidyverse)
library(rvest)
library(httr)
remDr <- remoteDriver(
  remoteServerAddr = "localhost",
  port = 4444L, # change port according to terminal
  browserName = "firefox"
)
remDr$open()
# remDr$getStatus()
url <- "https://fcainfoweb.nic.in/reports/Report_Menu_Web.aspx"
remDr$navigate(url)
Sys.sleep(5)
# first : radio buttons
u1 <- remDr$findElement(using = "id", value = 'ctl00_MainContent_Rbl_Rpt_type_0')
u2 <- remDr$findElement(using = "id", value = 'ctl00_MainContent_Rbl_Rpt_type_1')
u3 <- remDr$findElement(using = "id", value = 'ctl00_MainContent_Rbl_Rpt_type_2')
u4 <- remDr$findElement(using = "id", value = 'ctl00_MainContent_Rbl_Rpt_type_3')
dynam <- remDr$mouseMoveToLocation(webElement = u1)
u1$click()
Sys.sleep(5)
# second : Select input
s1 <- remDr$findElement(using = "id", value = 'ctl00_MainContent_Ddl_Rpt_Option0')
# get available choices
s_choices <- read_html(s1$getElementAttribute('innerHTML')[[1]]) %>%
  html_nodes('option') %>%
  html_attrs() %>%
  unlist() %>%
  .[3:length(.)] %>%
  as.vector()
dynam <- remDr$mouseMoveToLocation(webElement = s1)
s1$click()
s1$sendKeysToElement(sendKeys = list(s_choices[1], key = "enter"))
# s_choices[1] is "Daily Prices"
Sys.sleep(5)
# get date choices
s_date_choices <- remDr$findElement(using = "id", value = "ctl00_MainContent_Txt_FrmDate")
dynam <- remDr$mouseMoveToLocation(webElement = s_date_choices)
s_date_choices$click()
s_date_choices$sendKeysToElement(sendKeys = list('01/01/2021', key = "enter"))
Sys.sleep(5)
s_table <- remDr$findElement(using = "id", value = "Panel1")
# get first tables as an example
results_1 <- read_html(s_table$getElementAttribute('innerHTML')[[1]]) %>%
  html_table(fill = TRUE) %>%
  .[2:length(.)]
We get a list of three tables as a result.
Making a function from this code and looping over a date vector should be possible after that, I think (you will probably have to reload a fresh start page at the base URL for each date).
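As a rough illustration of that idea, here is a minimal sketch of such a loop; the helper name, the waits, and the fixed "Daily Prices" choice are assumptions based on the code above, and there is no error handling:
# hypothetical helper: scrape the tables for one date, reusing the element IDs from above
scrape_one_date <- function(remDr, date_string) {
  remDr$navigate("https://fcainfoweb.nic.in/reports/Report_Menu_Web.aspx")
  Sys.sleep(5)
  # radio button: first report type, as above
  u1 <- remDr$findElement(using = "id", value = 'ctl00_MainContent_Rbl_Rpt_type_0')
  remDr$mouseMoveToLocation(webElement = u1)
  u1$click()
  Sys.sleep(5)
  # select input: choose "Daily Prices"
  s1 <- remDr$findElement(using = "id", value = 'ctl00_MainContent_Ddl_Rpt_Option0')
  remDr$mouseMoveToLocation(webElement = s1)
  s1$click()
  s1$sendKeysToElement(sendKeys = list("Daily Prices", key = "enter"))
  Sys.sleep(5)
  # date input, same format as used above
  date_box <- remDr$findElement(using = "id", value = "ctl00_MainContent_Txt_FrmDate")
  remDr$mouseMoveToLocation(webElement = date_box)
  date_box$click()
  date_box$sendKeysToElement(sendKeys = list(date_string, key = "enter"))
  Sys.sleep(5)
  s_table <- remDr$findElement(using = "id", value = "Panel1")
  read_html(s_table$getElementAttribute('innerHTML')[[1]]) %>%
    html_table(fill = TRUE) %>%
    .[2:length(.)]
}
# loop over a small date vector (same date format as above)
dates <- c('01/01/2021', '02/01/2021')
all_results <- lapply(dates, function(d) scrape_one_date(remDr, d))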

How to scrape hrefs embedded in a dropdown list of a web table using RSelenium in R

I'm trying to scrape the links to all minutes and agendas provided on this website: https://www.charleston-sc.gov/AgendaCenter/
I've managed to scrape the section IDs associated with each category (and the years for each category) to loop through the contents within each category-year (please see below). But I don't know how to scrape the hrefs that live inside the contents. Especially because the links to the agendas live inside the drop-down menu under 'Download', it seems like I need to go through extra clicks to scrape the hrefs.
How do I scrape the minutes and agendas (inside the download drop-down) for each table I select? Ideally, I would like a table with the date, the title of the agenda, links to the minutes, and links to the agenda.
I'm using RSelenium for this. Please see the code I have so far below, which allows me to click through each category and year, but not much else. Please help!
rm(list = ls())
library(RSelenium)
library(tidyverse)
library(httr)
library(XML)
library(stringr)
library(RCurl)
t <- readLines('https://www.charleston-sc.gov/AgendaCenter/', encoding = 'UTF-8')
co <- str_match(t, 'aria-label="(.*?)"[ ]href="java')[,2]
yr <- str_match(t, 'id="(.*?)" aria-label')[,2]
df <- data.frame(cbind(co, yr)) %>%
  mutate_all(as.character) %>%
  filter_all(any_vars(!is.na(.))) %>%
  mutate(id = ifelse(grepl('^a0', yr), gsub('a0', '', yr), NA)) %>%
  tidyr::fill(c(co, id), .direction = 'down') %>%
  drop_na(co)
remDr <- remoteDriver(port=4445L, browserName = "chrome")
remDr$open()
remDr$navigate('https://www.charleston-sc.gov/AgendaCenter/')
remDr$screenshot(display = T)
for (j in unique(df$id)){
  remDr$findElement(using = 'xpath',
                    value = paste0('//*[@id="cat', j, '"]/h2'))$clickElement()
  for (k in unique(df[which(df$id == j), 'yr'])){
    remDr$findElement(using = 'xpath',
                      value = paste0('//*[@id="', k, '"]'))$clickElement()
    # NEED TO SCRAPE THE HREF ASSOCIATED WITH MINUTES AND AGENDA DOWNLOAD HERE #
  }
}
Maybe you don't really need to click through all the elements? You can use the fact that all downloadable links have ViewFile in their href:
t <- readLines('https://www.charleston-sc.gov/AgendaCenter/', encoding = 'UTF-8')
viewfile <- str_extract_all(t, '.*ViewFile.*', simplify = T)
viewfile <- viewfile[viewfile!='']
library(data.table) # I use data.table because it's more convenient - but can be done without too
dt.viewfile <- data.table(origStr=viewfile)
# list the elements and patterns we will be looking for:
searchfor <- list(
  Title = 'name=[^ ]+ title=\"(.+)\" href',
  Date  = '<strong>(.+)</strong>',
  href  = 'href=\"([^\"]+)\"',
  label = 'aria-label=\"([^\"]+)\"'
)
for (this.i in names(searchfor)){
  this.full <- paste0('.*', searchfor[[this.i]], '.*')
  dt.viewfile[grepl(this.full, origStr), (this.i) := gsub(this.full, '\\1', origStr)]
}
# Clean records:
dt.viewfile[, `:=`(Title = na.omit(Title), Date = na.omit(Date), label = na.omit(label)),
            by = href]
dt.viewfile[, Date := gsub('<abbr title=".*">(.*)</abbr>', '\\1', Date)]
dt.viewfile <- unique(dt.viewfile[, .(Title, Date, href, label)]) # 690 records
What you have as the result is a table with the links to all downloadable files. You can now download them using any tool you like, for example using download.file() or GET():
dt.viewfile[, full.url:=paste0('https://www.charleston-sc.gov', href)]
dt.viewfile[, filename:=fs::path_sanitize(paste0(Title, ' - ', Date), replacement = '_')]
for (i in seq_len(nrow(dt.viewfile[1:10,]))){ # remove `1:10` limitation to process all records
  url <- dt.viewfile[i, full.url]
  destfile <- dt.viewfile[i, filename]
  cat('\nDownloading', url, ' to ', destfile)
  fil <- GET(url, write_disk(destfile))
  # our destination file doesn't have extension, we need to get it from the server:
  serverFilename <- gsub("inline;filename=(.*)", '\\1', headers(fil)$`content-disposition`)
  serverExtension <- tools::file_ext(serverFilename)
  # Adding the extension to the file we just saved
  file.rename(destfile, paste0(destfile, '.', serverExtension))
}
Now the only problem we have is that the original webpage was only showing records for the last 3 years. But instead of clicking View More through RSelenium, we can simply load the page with earlier dates, something like this:
t <- readLines('https://www.charleston-sc.gov/AgendaCenter/Search/?term=&CIDs=all&startDate=10/14/2014&endDate=10/14/2017', encoding = 'UTF-8')
then repeat the rest of the code as necessary.
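As a small illustration, the search URL above can be parameterised in a helper so that different date ranges can be pulled in one call; the function name is made up, and it assumes the same ViewFile pattern appears in the search results page:
library(stringr)
# hypothetical helper: pull the raw ViewFile lines for a given date range
get_viewfile_lines <- function(startDate, endDate) {
  url <- paste0('https://www.charleston-sc.gov/AgendaCenter/Search/?term=&CIDs=all',
                '&startDate=', startDate, '&endDate=', endDate)
  t <- readLines(url, encoding = 'UTF-8')
  viewfile <- str_extract_all(t, '.*ViewFile.*', simplify = TRUE)
  viewfile[viewfile != '']
}
# e.g. the earlier records mentioned above; feed the result into dt.viewfile as before
old_viewfile <- get_viewfile_lines('10/14/2014', '10/14/2017')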

How to perform web scraping to get all the reviews of an app in Google Play?

I intend to get all the reviews that users leave on Google Play about an app. I have this code, which was suggested in the question Web scraping in R through Google playstore. But the problem is that it only gets the first 40 reviews. Is there a way to get all the comments on the app?
#Loading the rvest package
library(rvest)
library(magrittr) # for the '%>%' pipe symbols
library(RSelenium) # to get the loaded html of the page
#Specifying the url for desired website to be scraped
url <- 'https://play.google.com/store/apps/details?id=com.phonegap.rxpal&hl=en_IN&showAllReviews=true'
# starting local RSelenium (this is the only way to start RSelenium that is working for me atm)
selCommand <- wdman::selenium(jvmargs = c("-Dwebdriver.chrome.verboseLogging=true"), retcommand = TRUE)
shell(selCommand, wait = FALSE, minimized = TRUE)
remDr <- remoteDriver(port = 4567L, browserName = "firefox")
remDr$open()
# go to website
remDr$navigate(url)
# get page source and save it as an html object with rvest
html_obj <- remDr$getPageSource(header = TRUE)[[1]] %>% read_html()
# 1) name field (assuming that with 'name' you refer to the name of the reviewer)
names <- html_obj %>% html_nodes(".kx8XBd .X43Kjb") %>% html_text()
# 2) how many stars they gave
stars <- html_obj %>% html_nodes(".kx8XBd .nt2C1d [role='img']") %>%
  html_attr("aria-label")
# 3) the review they wrote
reviews <- html_obj %>% html_nodes(".UD7Dzf") %>% html_text()
# create the df with all the info
review_data <- data.frame(names = names, stars = stars, reviews = reviews,
                          stringsAsFactors = F)
You can get all the reviews from the Google Play web store.
If you scroll through the reviews, you can see an XHR request being sent to:
https://play.google.com/_/PlayStoreUi/data/batchexecute
With form-data:
f.req: [[["rYsCDe","[[\"com.playrix.homescapes\",7]]",null,"55"]]]
at: AK6RGVZ3iNlrXreguWd7VvQCzkyn:1572317616250
And params of:
rpcids=rYsCDe
f.sid=-3951426241423402754
bl=boq_playuiserver_20191023.08_p0
hl=en
authuser=0
soc-app=121
soc-platform=1
soc-device=1
_reqid=839222
rt=c
After playing around with different parameters, I found out many are optional, and the request can be simplified as:
form-data:
f.req: [[["UsvDTd","[null,null,[2, $sort,[$review_size,null,$page_token]],[$package_name,7]]",null,"generic"]]]
params:
hl=$review_language
The response is cryptic, but it's essentially JSON data with the keys stripped, similar to protobuf. I wrote a parser for the response that translates it into a regular dict object:
https://gist.github.com/xlrtx/af655f05700eb76bb29aec876493ed90
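For reference, a minimal sketch of issuing that request from R with httr, filling the f.req template above with example values; the sort code, page size, empty page token and package name are all placeholders, in practice additional parameters or cookies may be required, and the response still needs the custom parsing the gist above implements (in Python):
library(httr)
# fill the f.req template above with example values (all of these are placeholders)
package_name <- "com.playrix.homescapes"   # example app id from the form-data above
f_req <- sprintf(
  '[[["UsvDTd","[null,null,[2,%d,[%d,null,%s]],[\\"%s\\",7]]",null,"generic"]]]',
  1,      # $sort        (assumed value; the meaning of the sort codes is not documented here)
  40,     # $review_size
  "null", # $page_token  (null for the first page)
  package_name
)
resp <- POST(
  "https://play.google.com/_/PlayStoreUi/data/batchexecute",
  query = list(hl = "en"),          # $review_language
  body = list("f.req" = f_req),
  encode = "form"
)
raw_txt <- content(resp, as = "text", encoding = "UTF-8")
# raw_txt is the cryptic, key-stripped JSON described above; turning it into a usable
# structure needs a parser like the one in the linked gist (written in Python).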

Google Play web scraping: How to identify response to app reviews in R?

I am scraping the reviews of a Google Play application in R, but I cannot identify the lack of a response to a review.
Let me explain. I intend to set up a database with two columns: one with the text of the review and another with the app's response to that review. The last column should have empty values when there is no response. However, I only get the responses that exist, and I cannot identify the absence of a response. How can this be done?
INPUT and OUTPUT (what I want to be returned) were shown as screenshots in the original post.
How can I get this, i.e. identify the absence of a response?
FULL CODE
#Loading the rvest package
library(rvest)
library(magrittr) # for the '%>%' pipe symbols
library(RSelenium) # to get the loaded html of the page
url <- 'https://play.google.com/store/apps/details?id=com.gospace.parenteral&showAllReviews=true'
# starting local RSelenium (this is the only way to start RSelenium that is working for me atm)
selCommand <- wdman::selenium(jvmargs = c("-Dwebdriver.chrome.verboseLogging=true"), retcommand = TRUE)
shell(selCommand, wait = FALSE, minimized = TRUE)
remDr <- remoteDriver(port = 4567L, browserName = "firefox")
remDr$open()
# go to website
remDr$navigate(url)
# get page source and save it as an html object with rvest
html_obj <- remDr$getPageSource(header = TRUE)[[1]] %>% read_html()
#1 column
reviews <- html_obj %>% html_nodes(".UD7Dzf") %>% html_text()
#2 column
reply <- html_obj %>% html_nodes('.LVQB0b') %>% html_text()
# create the df with all the info
review_data <- data.frame(reviews = reviews, reply = reply, stringsAsFactors = F)
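The usual rvest trick for this is to iterate over one container node per review and call html_node() (singular) inside each container: a missing reply then comes back as NA instead of silently disappearing. A minimal sketch; the '.review-container' selector is purely hypothetical, the two inner selectors are the ones from the code above, and all of them need checking against the current page markup:
# each review is assumed to live in its own container node; '.review-container'
# is a hypothetical selector, replace it with the real one from the page source
review_nodes <- html_obj %>% html_nodes(".review-container")
review_data <- data.frame(
  reviews = review_nodes %>% html_node(".UD7Dzf") %>% html_text(),
  # html_node() returns a missing node where there is no reply,
  # so html_text() yields NA for reviews without an answer
  reply = review_nodes %>% html_node(".LVQB0b") %>% html_text(),
  stringsAsFactors = FALSE
)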
