I'm trying to automate browsing on a site with RSelenium in order to retrieve the latest planned release dates. My problem is that an age check pops up when I visit the URL. The age-check page consists of two buttons, which I haven't succeeded in clicking through RSelenium. The code I have used so far is appended below; what is the solution to this problem?
# Variable and URL
s4 <- "https://www.systembolaget.se"
#Start Server
rd <- rsDriver()
remDr <- rd[["client"]]
#Load Page
remDr$navigate(s4)
webE <- remDr$findElements("class name", "action")
webE$isElementEnabled()
webE$clickElement()
You need to target the selector more accurately:
# Variable and URL
s4 <- "https://www.systembolaget.se"
#Start Server
rd <- rsDriver()
remDr <- rd[["client"]]
#Load Page
remDr$navigate(s4)
webE <- remDr$findElement("css", "#modal-agecheck .action.primary")
webE$clickElement()
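If the modal is injected after the page loads, the button may not exist yet when findElement runs. A minimal polling sketch (the selector is taken from above; the retry count and wait are arbitrary assumptions):
# Poll for the age-check button instead of clicking immediately
# (sketch; retry count and wait are arbitrary)
webE <- NULL
for (i in 1:10) {
  webE <- tryCatch(
    remDr$findElement("css", "#modal-agecheck .action.primary"),
    error = function(e) NULL)
  if (!is.null(webE)) break
  Sys.sleep(1)
}
if (!is.null(webE)) webE$clickElement()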
I am trying to scrape a page to get the move list of a game of chess, which is located in the menu on the right, under the "Moves" tab.
library(RSelenium)
url <- "https://play.xiangqi.com/game/oX00ly"
rD <- RSelenium::rsDriver(browser = "firefox", check = F)
remDr <- rD$client
remDr$navigate(url = url)
When manually clicking the Moves tab in the browser, I can get the desired text via
webElem <- remDr$findElement("css selector", ".Wrapper__MovesTabWrapper-sc-13rqht3-2")
webElem$getElementText()[[1]]
which (correctly) returns
[1] "1\np3+1\nP3+1\n2\ne3+5\nH2+3\n3\nh8+7\nH8+7\n4\nh2+3\nR1+1\n5\nc8=9\nH3+2\n6\nc2+1\nE7+5\n7\nh3+4\nA6+5\n8\nh4+3\nR9=6\n9\nr1=3\nR6+6\n10\nc2+2\nH2+3\n11\nr9=8\nC2=3\n12\nr8+3\nR1=4\n13\nc2-1\nR6=8\n14\nr8+4\nH3+1\n15\ne7+9\nC3+5\n16\ne9-7\nR4+3\n17\nc2=1\nR8=9\n18\nh3-4\nR4=6\n19\nc1=2\nR9-1\n20\nr3=2\nC8+7\n21\ne5-3\nR9=8\n22\nh4-3\nR8+2\n23\nh3-2\nR8+2\n24\ne7+5\nH7+8\n25\nr8-5\nC3+1\n26\nr8+2\nH8+7\n27\np9+1\nH7+5\n28\na6+5\nH5+7\n29\nk5=6\nR6=4\n30\na5+6\nR4+3"
Problem
When trying to click the button through RSelenium, by using
webElem <- remDr$findElement("css selector", "#moves-tab")
webElem <- webElem$clickElement() # or webElem$click()
Nothing seems to happen, and I'm at a loss as to how to proceed with troubleshooting.
Question
How can I switch to the Moves tab by simulating a click (active event listener)?
Bonus pts: is this possible using the rvest package?
Sometimes being too trigger-happy is a problem.
Adding
webElem <- webElem$clickElement()
Sys.sleep(2)
solved the problem.
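From there, the text returned by getElementText can be split into a move table with base R. A sketch based on the output shown above (note that the click above overwrote webElem, so the wrapper element is found again first; it assumes the string is complete number/red/black triplets):
# Re-find the moves wrapper after clicking, then parse its text
# (sketch; assumes complete number/red/black triplets as shown above)
webElem <- remDr$findElement("css selector", ".Wrapper__MovesTabWrapper-sc-13rqht3-2")
parts <- strsplit(webElem$getElementText()[[1]], "\n")[[1]]
moves <- as.data.frame(matrix(parts, ncol = 3, byrow = TRUE),
                       stringsAsFactors = FALSE)
names(moves) <- c("move", "red", "black")
head(moves)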
EDIT: From the comments I received so far, I managed to use RSelenium to access the PDF files I am looking for, using the following code:
library(RSelenium)
driver <- rsDriver(browser = "firefox")
remote_driver <- driver[["client"]]
remote_driver$navigate("https://www.rad.cvm.gov.br/enetconsulta/frmGerenciaPaginaFRE.aspx?CodigoTipoInstituicao=1&NumeroSequencialDocumento=62398")
# It needs some time to load the page
option <- remote_driver$findElement(using = 'xpath', "//select[@id='cmbGrupo']/option[@value='PDF|412']")
option$clickElement()
Now, I need R to click the download button, but I could not manage to do so. I tried:
button <- remote_driver$findElement(using = "xpath", "//*[@id='download']")
button$clickElement()
But I get the following error:
Selenium message:Unable to locate element: //*[@id="download"]
For documentation on this error, please visit: https://www.seleniumhq.org/exceptions/no_such_element.html
Build info: version: '4.0.0-alpha-2', revision: 'f148142cf8', time: '2019-07-01T21:30:10'
Error: Summary: NoSuchElement
Detail: An element could not be located on the page using the given search parameters.
class: org.openqa.selenium.NoSuchElementException
Further Details: run errorDetails method
Can someone tell me what is wrong here?
Thanks!
Original question:
I have several webpages from which I need to download embedded PDF files and I am looking for a way to automate it with R. This is one of the webpages: https://www.rad.cvm.gov.br/enetconsulta/frmGerenciaPaginaFRE.aspx?CodigoTipoInstituicao=1&NumeroSequencialDocumento=62398
This is a webpage from CVM (Comissão de Valores Mobiliários, the Brazilian equivalent to the US Securities and Exchange Commission - SEC) to download Notes to Financial Statements (Notas Explicativas) from Brazilian companies.
I tried several options but the website seems to be built in a way that makes it difficult to extract the direct links.
I tried what is suggested here (Downloading all PDFs from URL), but html_nodes(".ms-vb2 a") %>% html_attr("href") yields an empty character vector.
Similarly, when I tried the approach here (https://www.samuelworkman.org/blog/scraping-up-bits-of-helpfulness/), html_attr("href") generates an empty vector.
I am not used to web-scraping code in R, so I cannot figure out what is happening.
I appreciate any help!
If someone is facing the same problem I did, I am posting the solution I used:
# set Firefox profile to download PDFs automatically
pdfprof <- makeFirefoxProfile(list(
  "pdfjs.disabled" = TRUE,
  "plugin.scan.plid.all" = FALSE,
  "plugin.scan.Acrobat" = "99.0",
  "browser.helperApps.neverAsk.saveToDisk" = 'application/pdf'))
driver <- rsDriver(browser = "firefox", extraCapabilities = pdfprof)
remote_driver <- driver[["client"]]
remote_driver$navigate("https://www.rad.cvm.gov.br/enetconsulta/frmGerenciaPaginaFRE.aspx?CodigoTipoInstituicao=1&NumeroSequencialDocumento=62398")
Sys.sleep(3) # It needs some time to load the page (set to 3 seconds)
option <- remote_driver$findElement(using = 'xpath', "//select[@id='cmbGrupo']/option[@value='PDF|412']") # select the option to open the PDF file
option$clickElement()
# Find iframes in the webpage
web.elem <- remote_driver$findElements(using = "css", "iframe") # get all iframes in the webpage
sapply(web.elem, function(x){x$getElementAttribute("id")}) # see their names
remote_driver$switchToFrame(web.elem[[1]]) # Move to the first iframe (Formularios Filho)
web.elem.2 <- remote_driver$findElements(using = "css", "iframe") # get all iframes in the webpage
sapply(web.elem.2, function(x){x$getElementAttribute("id")}) # see their names
# The pdf Viewer iframe is the only one inside Formularios Filho
remote_driver$switchToFrame(web.elem.2[[1]]) # Move to the first iframe (pdf Viewer)
Sys.sleep(3) # It needs some time to load the page (set to 3 seconds)
# Download the PDF file
button <- remote_driver$findElement(using = "xpath", "//*[@id='download']")
button$clickElement() # download
Sys.sleep(3) # Needs some time to finish the download before closing the window
remote_driver$close() # Close the window
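Since the original question mentions several webpages, the steps above can be wrapped in a function and looped over document numbers. A sketch, assuming the other pages have the same structure (the doc_numbers vector is a placeholder):
# Hypothetical wrapper around the steps above for several documents
# (the doc_numbers vector is a placeholder)
download_cvm_pdf <- function(remote_driver, doc_number) {
  url <- paste0("https://www.rad.cvm.gov.br/enetconsulta/frmGerenciaPaginaFRE.aspx",
                "?CodigoTipoInstituicao=1&NumeroSequencialDocumento=", doc_number)
  remote_driver$navigate(url) # navigating also resets the frame context
  Sys.sleep(3)
  option <- remote_driver$findElement(using = 'xpath',
    "//select[@id='cmbGrupo']/option[@value='PDF|412']")
  option$clickElement()
  web.elem <- remote_driver$findElements(using = "css", "iframe")
  remote_driver$switchToFrame(web.elem[[1]])
  web.elem.2 <- remote_driver$findElements(using = "css", "iframe")
  remote_driver$switchToFrame(web.elem.2[[1]])
  Sys.sleep(3)
  button <- remote_driver$findElement(using = "xpath", "//*[@id='download']")
  button$clickElement()
  Sys.sleep(3)
}
doc_numbers <- c(62398) # placeholder; add the other document numbers here
for (doc in doc_numbers) download_cvm_pdf(remote_driver, doc)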
For example, I want to scrape the data from this web page (The Space, Amenities, Prices... and reviews):
https://www.airbnb.com/rooms/9985824?guests=1&s=d2dNfFMd
I want to use the RSelenium package for this purpose.
This is my code:
url <- "https://www.airbnb.com/rooms/9985824?guests=1&s=d2dNfFMd"
library('RSelenium')
pJS <- phantom()
library('XML')
shell.exec(paste0("C:\\Users\\Daniil\\Desktop\\R-language,Python\\file.bat"))
Sys.sleep(10)
checkForServer()
startServer()
remDr <- remoteDriver(browserName="chrome", port=4444)
remDr$open(silent=T)
and then, with the help of SelectorGadget, I found what I think are the right elements for scraping:
var <- remDr$findElements('css selector','#details hr+ .row')
My question is: how can I extract this as text (character strings)?
Or maybe there is another approach with RSelenium for collecting the data.
Many thanks
I'm not sure what is in file.bat, but it appears you are primarily interested in collecting data about the amenities of the listing. I just used Firefox and skipped over the PhantomJS parts of your code:
url <- "https://www.airbnb.com/rooms/9985824?guests=1&s=d2dNfFMd"
library('RSelenium')
checkForServer()
startServer()
remDr <- remoteDriver(browserName="firefox", port=4444)
remDr$open(silent=T)
remDr$navigate(url)
var <- remDr$findElement('css selector','#details hr+ .row')
print(var$getElementText())
[[1]]
[1] "The Space\nAccommodates: 2\nBathrooms: 1.5\nBed type: Real Bed\nBedrooms: 1\nBeds: 1\nProperty type: Apartment\nRoom type: Private room\nHouse Rules"
From here you can parse the string or perform additional data collecting.
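For instance, the "Accommodates: 2" style lines in that string can be split into a named vector. A sketch based on the output shown above:
# Parse "key: value" lines from the listing text into a named vector
# (sketch based on the output shown above)
txt <- var$getElementText()[[1]]
lines <- strsplit(txt, "\n")[[1]]
kv <- lines[grepl(":", lines, fixed = TRUE)]
details <- setNames(trimws(sub("^[^:]+:", "", kv)),
                    trimws(sub(":.*$", "", kv)))
details["Bedrooms"]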
I have been trying everything I can find online to log in and set cookies and certificates, but I can't seem to get past the redirect to a login screen.
Here is what I am trying to do:
##################################################
library("RCurl")
library("XML")
loginURL <- "http://games.espn.go.com/ffl/signin"
dataURL <- "http://games.espn.go.com/ffl/clubhouse?leagueId=123456&teamId=8&seasonId=2014"
# ESPN Fantasy Football Login Screen
userID <- dQuote("myUsername")
pword <- dQuote("myPassword")
pushbutton <- dQuote("OK")
# concatenate the url and log in options
FFLsigninURL <- paste(loginURL,
                      "&username=", userID,
                      "&password=", pword,
                      "&submit=", pushbutton)
page <- getURL(loginURL , verbose = TRUE)
and this seems to lead me to a redirect for logging in. So, problem 1: the login is not working.
Part 2: once logged in, how can I proceed to the dataURL to scrape the tables? I tried the login parameters on the data page as well, but I still get redirected to a login screen.
I'm sure I am missing something simple; I'm just not seeing it.
It should be possible to follow the location etc. using RCurl; alternatively, you could use Selenium and drive a browser:
library(RSelenium)
loginURL <- "http://games.espn.go.com/ffl/signin"
user <- 'myUser'
pass <- 'myPass'
RSelenium::checkForServer()
RSelenium::startServer()
remDr <- remoteDriver()
remDr$open()
remDr$navigate(loginURL)
webElem <- remDr$findElement('name', 'username')
webElem$sendKeysToElement(list(user))
webElem <- remDr$findElement('name', 'password')
webElem$sendKeysToElement(list(pass))
remDr$findElement('name', 'submit')$clickElement()
dataURL <- "http://games.espn.go.com/ffl/clubhouse?leagueId=123456&teamId=8&seasonId=2014"
remDr$navigate(dataURL)
# YOU can get the page source for example
pageSrc <- remDr$getPageSource()[[1]]
# now operate on pageSrc using for example library(XML) etc
# readHTMLTable(pageSrc) # for example
remDr$close()
remDr$closeServer()
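Once you have pageSrc, the tables can be pulled out with the XML package, as hinted in the comments above. A sketch (which table holds the roster is an assumption to check on the real page):
# Parse all HTML tables from the page source
# (sketch; which table holds the roster is an assumption)
library(XML)
doc <- htmlParse(pageSrc, asText = TRUE)
tables <- readHTMLTable(doc, stringsAsFactors = FALSE)
length(tables) # how many tables were found
str(tables[[1]]) # inspect the first one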
As with the beginning of any problem before I post it on Stack Overflow, I think I have tried everything. This is a learning experience for me in working with JavaScript and XML, so I'm guessing my problem is there.
My question is: how do I get the results of clicking on the parcel-number links, which are JavaScript links? I've tried getting the XPath of the link and using the $click method, which followed my intuition, but this wasn't right, or at least is not working for me.
Firefox 26.0
R 3.0.2
require(relenium)
library(XML)
library(stringr)
initializing_parcel_number <- "00000000000"
firefox <- firefoxClass$new()
firefox$get("http://www.muni.org/pw/public.html")
inputElement <- firefox$findElementByXPath("/html/body/form[2]/table/tbody/tr[2]/td/table[1]/tbody/tr[3]/td[4]/input[1]")
inputElement$sendKeys(initializing_parcel_number)
inputElement$sendKeys(key = "ENTER")
##xpath to the first link. Or is it?
first_link <- "/html/body/table/tbody/tr[2]/td/table[5]/tbody/tr[2]/td[1]/a"
##How I'm trying to click the thing.
linkElement <- firefox$findElementByXPath("/html/body/table/tbody/tr[2]/td/table[5]/tbody/tr[2]/td[1]/a")
linkElement$click()
You can do this using RSelenium. See http://johndharrison.github.io/RSelenium/ . DISCLAIMER: I am the author of the RSelenium package. Basic vignettes on operation can be viewed at RSelenium basics and RSelenium: Testing Shiny apps.
If you are unsure which element is selected, you can use the highlightElement utility method in the webElement class; see the commented-out code.
The element click event won't work in this case. You need to simulate a click using JavaScript:
require(RSelenium)
# RSelenium::startServer() # if needed
initializing_parcel_number <- "00000000000"
remDr <- remoteDriver()
remDr$open()
remDr$navigate("http://www.muni.org/pw/public.html")
webElem <- remDr$findElement(using = "name", "PAR1")
# webElem$highlightElement() # to visually check which element is selected
webElem$sendKeysToElement(list(initializing_parcel_number, key = "enter"))
# get first link containing javascript:getParcel
webElem <- remDr$findElement(using = "css selector", '[href*="javascript:getParcel"]')
# webElem$highlightElement() # to visually check which element is selected
# send a webElement as an argument.
remDr$executeScript("arguments[0].click();", list(webElem))
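After the simulated click, the parcel details load into the page, so the usual pattern is to wait briefly and then read the source. A sketch (the wait length is arbitrary):
# Give the javascript:getParcel call time to run, then grab the result
# (sketch; the wait length is arbitrary)
Sys.sleep(2)
parcelSrc <- remDr$getPageSource()[[1]]
# parse parcelSrc with, for example, the XML package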