I am trying to scrape some information from a web page using R. The only problem (so far) is that when I inspect the HTML object that was returned, I see that the key DIV element (from which I want to return data) contains only a message saying that it is loading.
The code I am using is below.
How can I ensure that all elements on the web page have been rendered before harvesting the HTML?
library(xml2)
html <- xml2::read_html("https://www.holidayhouses.co.nz/")
lst_node <- xml_find_all(html, "//body/div[@class = 'MapView js-MapView']/h1")
lst_node
# returns <h1 class="LoadingMessage">Loading...</h1>
Thanks for any suggestions...
Both of the ideas below apply if you use the RSelenium package. It is not the fastest way to scrape, but it should do the job: the package lets you use R to interact with your web browser.
So once you have used RSelenium to navigate to the URL, you can:
If you are confident that the div will load within a certain amount of time, add a delay with Sys.sleep() before you save the div into lst_node.
Otherwise, keep re-reading the page until lst_node is no longer "Loading...", for example with a while/repeat loop (see the sketch below).
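A minimal sketch combining both ideas, assuming rsDriver() can start a local Selenium server and a Firefox instance (adjust the browser and port to your setup); the XPath is the one from the question:
library(RSelenium)
library(xml2)

rd  <- rsDriver(browser = "firefox")   # starts a local Selenium server and browser
rem <- rd$client
rem$navigate("https://www.holidayhouses.co.nz/")

# Re-read the rendered page until the loading heading is gone or has changed
repeat {
  html <- read_html(rem$getPageSource()[[1]])
  lst_node <- xml_find_all(html, "//body/div[@class = 'MapView js-MapView']/h1")
  if (length(lst_node) == 0 || xml_text(lst_node[1]) != "Loading...") break
  Sys.sleep(1)
}
lst_node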
My goal is to get the links to all Kaggle challenges together with their titles. I am using the rvest library but I am not getting far: the nodes come back empty once I am a few divs deep.
I am starting with the first challenge only; once that works I should be able to transfer it to every other entry.
The xpath of the first entry is:
/html/body/div[1]/div[2]/div/div/div[2]/div/div/div[2]/div[2]/div/div/div[2]/div/div/div[1]/a
My idea was to get the link via html_attr( , "href") once I am in the right tag.
My attempt:
library(rvest)
url = "https://www.kaggle.com/competitions"
kaggle_html = read_html(url)
kaggle_text = html_text(kaggle_html)
kaggle_node <- html_nodes(kaggle_html, xpath = "/html/body/div[1]/div[2]/div/div/div[2]/div/div/div[2]/div[2]/div/div/div[2]/div/div/div[1]/a")
html_attr(kaggle_node, "href")
I can't get past a certain div. The following snippet shows the last node I can access:
node <- html_nodes(kaggle_html, xpath="/html/body/div[1]/div[2]/div")
html_attrs(node)
Once I go one step further, with html_nodes(kaggle_html, xpath = "/html/body/div[1]/div[2]/div/div"), the node is empty.
I think the issue is that Kaggle uses a smart list that expands as I scroll down.
(I am aware that I could chain these calls with %>%. I am saving every intermediate object so that I can inspect each step and learn how it works.)
I solved the issue. I cannot access the full HTML of the site from R because the list is loaded by a script that expands the table (and thus the HTML) as the user scrolls.
I worked around it by expanding the table manually in the browser, saving the whole HTML page, and loading the local file (see the sketch below).
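A short sketch of that last step, assuming the page was saved as competitions.html (a hypothetical file name) in the working directory:
library(rvest)

# read_html() accepts a local file path just like a URL
kaggle_html <- read_html("competitions.html")
links <- kaggle_html %>%
  html_nodes("a") %>%   # narrow this selector down to the competition entries
  html_attr("href")
head(links)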
I am looking to scrape some data from a chemical database using R, mainly name, CAS Number, and molecular weight for now. However, I am having trouble getting rvest to extract the information I'm looking for. This is the code I have so far:
library(rvest)
library(magrittr)
# Read HTML code from website
# I am using this format because I ultimately hope to pull specific items from several different websites
webpage <- read_html(paste0("https://pubchem.ncbi.nlm.nih.gov/compound/", 1))
# Use CSS selectors to scrape the chemical name
chem_name_html <- webpage %>%
html_nodes(".short .breakword") %>%
html_text()
# Convert the data to text
chem_name_data <- html_text(chem_name_html)
However, chem_name_html comes back empty (and chem_name_data is character(0)). I am using SelectorGadget to get the CSS selector, but I noticed that SelectorGadget gives me a different node than the Inspector does in Google Chrome. I have tried both ".short .breakword" and ".summary-title short .breakword" in that line of code, but neither gives me what I am looking for.
I have recently run into the same issues using rvest to scrape PubChem. The problem is that the information on the page is rendered with JavaScript as you scroll down the page, so rvest only picks up minimal information from the page.
There are a few workarounds though. The simplest way to get the information that you need into R is using an R package called webchem.
If you are looking up name, CAS number, and molecular weight then you can do something like:
library(webchem)
chem_properties <- pc_prop(1, properties = c('IUPACName', 'MolecularWeight'))
The full list of compound properties that can be extracted through this API is in PubChem's PUG REST documentation. Unfortunately there isn't a property in this API for the CAS number, but webchem gives us another way to query it, using the Chemical Translation Service.
chem_cas <- cts_convert(query = '1', from = 'CID', to = 'CAS')
The second way to get information from the page, which is a bit more robust but not quite as easy to work with, is to grab the information from the JSON API.
library(jsonlite)
chem_json <-
read_json(paste0("https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/", "1", "/JSON/?response_type=save&response_basename=CID_", "1"))
That command returns a list of lists, and I had to write a fairly convoluted script to parse out the information I needed. If you are familiar with JSON, you can parse far more information from the page, but not quite everything: sections such as Literature, Patents, and Biomolecular Interactions and Pathways do not fully show up in the JSON.
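As a starting point, a hedged sketch of how to inspect that list; the Record/Section layout is PUG View's usual structure, but check str() on your own result before relying on it:
# Look at the nesting before writing a full parser
str(chem_json, max.level = 2)

# Record title and top-level section headings, assuming the usual PUG View layout
chem_json$Record$RecordTitle
vapply(chem_json$Record$Section, function(s) s$TOCHeading, character(1))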
The final and most comprehensive way to get all information from the page is to use something like Scrapy or PhantomJS to render the full html output of the PubChem page, then use rvest to scrape it like you originally intended. This is something that I'm still working on as it is my first time using web scrapers as well.
I'm still a beginner in this realm, but hopefully this helps you a bit.
I am trying to use the RSelenium package in R to scrape the following page: http://www.wbsec.gov.in/(S(njkinc55hbv2hw55xksxdv45))/DetailedResult/Detailed_gp.aspx. I am interested in all combinations of the drop-downs, but I keep getting the following error:
Error in queryRD(paste0(serverURL, "/session"), "POST", qdata = toJSON(serverOpts)) :
  Couldnt connect to host on http://localhost:4444/wd/hub.Please ensure a Selenium server is running.
The code I am running is:
library(RSelenium)
library(XML)
library(magrittr)
checkForServer()
startServer()
remDrv <- remoteDriver()
remDrv$open()
remDrv$navigate("http://www.wbsec.gov.in/(S(njkinc55hbv2hw55xksxdv45))/DetailedResult/Detailed_gp.aspx")
Any help would be appreciated.
Use an intermediary proxy such as Burp Suite to capture what's going on, and use the results in combination with rvest's html_session() and/or httr's POST().
In this case, you'd see your original URL contains the initial <select> menu and you'd also see that selecting one issues a POST to:
http://www.wbsec.gov.in/(S(njkinc55hbv2hw55xksxdv45))/DetailedResult/Detailed_gp.aspx
with a number of the hidden variables in the original form element as well as ddldistrict, ddlblock and ddlgp. The response contains the subsequent <select> menu options.
Use rvest to get the value attribute of each dropdown and make subsequent POSTs to the Detailed_gp.aspx URL until you've got all the combinations.
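A hedged sketch of the first iteration of that loop with httr and rvest; the [name*=...] selectors assume the control names are embedded in longer ASP.NET names, the district index is a placeholder, and the real request may need extra fields (such as __EVENTTARGET) that Burp Suite will reveal:
library(httr)
library(rvest)

url <- "http://www.wbsec.gov.in/(S(njkinc55hbv2hw55xksxdv45))/DetailedResult/Detailed_gp.aspx"

# Collect the hidden ASP.NET fields (__VIEWSTATE and friends) from the initial page
page    <- read_html(url)
hidden  <- html_nodes(page, "input[type='hidden']")
payload <- as.list(setNames(html_attr(hidden, "value"), html_attr(hidden, "name")))

# The value attributes offered by the district dropdown
districts <- html_attr(html_nodes(page, "select[name*='ddldistrict'] option"), "value")

# Post one district back; the response holds the block <select> to iterate over next
payload$ddldistrict <- districts[2]   # [1] is often a "select..." placeholder
resp   <- POST(url, body = payload, encode = "form")
blocks <- read_html(content(resp, "text")) %>%
  html_nodes("select[name*='ddlblock'] option") %>%
  html_attr("value")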
You'll probably get a Selenium answer, but this problem only requires posting to forms, which is something httr and rvest excel at.
You don't seem to have set up Selenium properly. Make sure you have the Selenium server downloaded and RSelenium loaded in R; the RSelenium documentation might be helpful.
Once Selenium is set up properly, all you have to do is find the CSS selectors (SelectorGadget is a great tool for this), send the required information to the dropdowns, scrape the website, and repeat, looping over all three dropdowns.
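A rough sketch of the dropdown step with RSelenium; the select-name pattern is borrowed from the other answer and the option index is arbitrary, so verify both against the live page first:
library(RSelenium)

rd  <- rsDriver(browser = "firefox")   # starts a local Selenium server and browser
rem <- rd$client
rem$navigate("http://www.wbsec.gov.in/(S(njkinc55hbv2hw55xksxdv45))/DetailedResult/Detailed_gp.aspx")

# Pick the second option of the district dropdown; repeat for block and gp
opt <- rem$findElement(using = "css selector",
                       "select[name*='ddldistrict'] option:nth-child(2)")
opt$clickElement()
Sys.sleep(2)                      # let the dependent dropdown refresh
page <- rem$getPageSource()[[1]]  # then parse with rvest/xml2 as usual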
I am trying to get data from the site https://bill.torrentpower.com/. I want to enter the city "Ahmedabad" and the service number "3031629" and extract the table that gives the bill details.
My code is simple:
library(RCurl)  # postForm() is from RCurl
a <- postForm("https://bill.torrentpower.com/billdetails.aspx",
"ctl00$cph1$drpCity" = 1,
"ctl00$cph1$txtServiceNo" = "3031629",
.opts = list(ssl.verifypeer = FALSE)
)
write(a,file="a.html")
When I open the file a.html, I do not see the table containing the bill details; all the other details are visible. My aim is to capture the tabular output as an R object.
The issue here is that the table is generated by JavaScript after the page has loaded, and hence you will not get the table's content from the raw HTML.
This is a common problem with scraping information that has lots of dynamic content.
A workaround is to simulate a web browser using RSelenium.
http://cran.r-project.org/web/packages/RSelenium/RSelenium.pdf
This will drive a web browser from your R session, and you can navigate web pages using its various methods (see the user manual for details).
Personally, I find the combination of RSelenium with PhantomJS the most useful, since I deal with a lot of JavaScript. Alternatively, if you find the R syntax a bit troublesome, you can use PhantomJS on its own: http://phantomjs.org/
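A hedged sketch of that approach for this page; the element ids (ctl00_cph1_drpCity, ctl00_cph1_txtServiceNo) are guesses derived from the form names in the question, and the submit-button selector is hypothetical, so check them against the real page:
library(RSelenium)
library(rvest)

rd  <- rsDriver(browser = "firefox")   # or "phantomjs" for a headless run
rem <- rd$client
rem$navigate("https://bill.torrentpower.com/billdetails.aspx")

# Choose the city, type the service number, and submit the form
rem$findElement("css selector", "#ctl00_cph1_drpCity option[value='1']")$clickElement()
rem$findElement("css selector", "#ctl00_cph1_txtServiceNo")$sendKeysToElement(list("3031629"))
rem$findElement("css selector", "input[type='submit']")$clickElement()

Sys.sleep(3)  # give the JavaScript time to render the bill table
bill <- read_html(rem$getPageSource()[[1]]) %>%
  html_node("table") %>%
  html_table()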
I am trying to screen scrape tennis results data (point by point data, not just final result) from this page using R.
http://www.scoreboard.com/au/match/wang-j-karlovic-i-2014/M1mWYtEF/#point-by-point;1
Using the regular R screen-scraping functions like readLines() and htmlTreeParse(), I am able to get the source HTML for the page, but it does not contain the results data.
Is it possible to scrape all the text from the page, as if I were on the page in my browser and selected all and then copied?
That data is loaded using AJAX from http://d.scoreboard.com/au/x/feed/d_mh_M1mWYtEF_en-au_1, so R will not be able to just load it for you. However, because both use the code M1mWYtEF, you can go directly to the page that has the data you want. Using Chrome's devtools, I was able to see that the page sends a header of X-Fsign: SW9D1eZo that will let you access that page (you get a 401 Unauthorized error otherwise).
Here is R code for getting the html that holds the data you want from your example page:
library(httr)
page_code <- "M1mWYtEF"
linked_page <- paste0("http://d.scoreboard.com/au/x/feed/d_mh_",
page_code, "_en-au_1")
GET(linked_page, add_headers("X-Fsign" = "SW9D1eZo"))
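To work with the response, store it and extract the text body; a small follow-up sketch (how to parse the feed beyond this point depends on its format):
resp <- GET(linked_page, add_headers("X-Fsign" = "SW9D1eZo"))
feed <- content(resp, as = "text", encoding = "UTF-8")
substr(feed, 1, 200)  # inspect the raw feed before deciding how to parse it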