Web scraping using rvest only partially works - r

I'm new to web scraping with rvest in R, and I'm trying to access the match names in the left column of this betting site using XPath. I know the names are under a span tag, but I can't access them with the following code:
html="https://www.supermatch.com.uy/live#/5009370062"
a=read_html(html)
a %>% html_nodes(xpath="//span") %>% html_text()
But I only get some of the text. I read that this may be because the website dynamically pulls data from databases using JavaScript and jQuery. Do you know how I can access these match names? Thanks in advance.

Some generic notes about basic scraping strategies
The following refers to Google Chrome and Chrome DevTools, but the same concepts apply to other browsers and their built-in developer tools. One thing to remember about rvest is that it can only handle the response delivered for that specific request, i.e. it never sees content that is fetched / transformed / generated by JavaScript running on the client side.
Loading the page and inspecting elements to extract an XPath or CSS selector for rvest seems to be the most common approach. However, the static content behind that URL and the rendered page (with the elements you see in the inspector) can be quite different. To take some guesswork out of the process, it's better to start by checking what content rvest would actually receive - open the page source and skim through it, or just search for a term you are interested in. At the time of writing Viettel is playing, but they are not listed anywhere in the source.
Meaning there's no reason to expect that rvest would be able to extract that data.
You could also disable JavaScript for that particular site in your browser and check if that particular piece of information is still there. If not, it's not there for rvest either.
If you want to go a step further and/or suspect that rvest receives something different from what your browser session gets (the target site checks request headers and delivers an anti-scraping notice when it doesn't like the user agent, for example), you can always check the actual content rvest was able to retrieve: read_html(some_url) %>% as.character() dumps the whole response, read_html(some_url) %>% xml2::html_structure() prints the formatted structure of the page, and read_html(some_url) %>% xml2::write_html("temp.html") saves the page content so you can inspect it in an editor or browser.
Coming back to Supermatch & DevTools. The data in that left pane must be coming from somewhere. What usually works is a search in the Network pane - open it, clear the current content, refresh the page and make sure it is fully loaded, then run a search (for "Viettel", for example).
From there you'll have the URL. There are some IDs in that request (https://www.supermatch.com.uy/live_recargar_menu/32512079?_=1656070333214), and it's wise to assume those values might be tied to the current session or just short-lived. So it's sometimes worth trying what happens if we simply clean it up a bit, i.e. remove 32512079?_=1656070333214. In this case it happens to work.
While here it's just a fragment of HTML and it makes sense to parse it with rvest, in most cases you'll end up landing on JSON and the process turns into working with APIs. When that happens it's time to switch from rvest to something more appropriate for JSON - jsonlite plus httr, for example.
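As a rough sketch of that last step, assuming the trimmed endpoint keeps responding with the same HTML fragment and that the match names sit in span elements as in the question (both assumptions may stop holding at any time):

library(rvest)

# Endpoint found via the Network pane, with the session-specific parts removed
menu_url <- "https://www.supermatch.com.uy/live_recargar_menu/"

menu_url %>%
  read_html() %>%                    # the response is a plain HTML fragment
  html_nodes(xpath = "//span") %>%   # selector carried over from the question
  html_text(trim = TRUE)

If an endpoint returns JSON instead, jsonlite::fromJSON() on that URL (possibly via httr::GET() first) is usually the more convenient route.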
Sometimes plain rvest is not enough and you either want or need to work with the page as it would have been rendered in your JavaScript-enabled browser. For this there's RSelenium.
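A minimal RSelenium sketch, assuming a driver can be started locally (the port and the fixed wait below are arbitrary choices, not anything the site requires):

library(RSelenium)
library(rvest)

# Start a Selenium-controlled browser (drivers are downloaded on first run)
driver <- rsDriver(browser = "firefox", port = 4555L)
remote <- driver$client

remote$navigate("https://www.supermatch.com.uy/live")
Sys.sleep(5)                         # crude wait for the JavaScript to finish

# Hand the fully rendered page over to rvest
rendered <- read_html(remote$getPageSource()[[1]])
rendered %>% html_nodes(xpath = "//span") %>% html_text()

remote$close()
driver$server$stop()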

Related

R selenium webdriver not loading element even after wait and scroll down

I'm trying to design a scraper for a page in R using the Selenium WebDriver package, and the part of the page I want to scrape is not loading, no matter how long I wait for it to. It may have to do with JavaScript, which I admittedly know nothing about.
I've tried forcing it to scroll down to load the element (in this case a table) but to no avail.
It loads fine in normal browsers.
This is far from the first site where this has happened, so I thought I'd pop my Stack Overflow cherry and ask the experts.
Sorry I have no reprex as I just don't know where the issue is coming from!
The link to the page is
https://jdih.kemenkeu.go.id/#/home
[Image: what Selenium says it sees - the yellow highlighted area is where the table should load.]
[Image: how the page is supposed to display, shown in Firefox.]
Thanks for reading!
(18 months later and I can answer my own question!)
The issue was that the page is loading content dynamically using an API request.
When scraping using direct GET requests of a URL to extract the page contents, this initial request alone may not load the desired content.
In this case, I found the exact issue by reloading the page with the developer interface open (F12) with the 'Network' (or similar) tab open.
This then shows all the requests made when the browser loads the page.
One of these will be a request for the desired data - in this case by filtering on XHR requests only, I was able to identify one which loaded content through an internal API.
Right-click the request, open it in a new tab, and voilà: you have a URL you can use in the usual way with this scraping method, and it will return the page content required.
Sometimes the URL alone is not enough and you will need the correct request headers sent with the request. These can be seen in the request data in the browser's developer interface, as mentioned above. Right-click the request and select 'Copy headers' or similar to get them.
In this case, i.e. when using R, the httr package can be used to send GET requests with specific headers, like so:
library(magrittr)   # provides the %>% pipe

headers <- c(
  "Host" = "website.com"
  # [other headers here]
)

page <- httr::GET(url = "https://www.website.com/data",
                  httr::add_headers(.headers = headers)) %>%
  httr::content()
When you have the page content, it is possible to parse the HTML or whatever else is required as usual.
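For example, continuing the sketch above (the URL and headers are still placeholders), an HTML response can go straight into rvest, while a JSON response is already converted for you:

library(rvest)   # also provides the %>% pipe

resp <- httr::GET(url = "https://www.website.com/data",
                  httr::add_headers(.headers = headers))

# For an HTML response, content() returns an xml_document rvest can work with
httr::content(resp) %>% html_nodes("table") %>% html_table()

# For a JSON response, content() already parses it into an R list,
# so str(httr::content(resp)) is a good first look at its structure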

How to know the direct link of a website table for HTTP GET

I am currently working on some automation thing to retrieve all currency rates in a specific bank website.
It was working before as the website provides the rates in HTML format when I use HTTP GET.
However, it seems they have changed how the website is built. The HTML no longer contains the rates; from my understanding, they are inside a table.
Is there a way to retrieve the table content from HTTP GET?
Can someone teach me how to access the table contents with a direct link, if possible?
Below is the webpage that I got problem with.
https://www.dbs.com.sg/personal/rates-online/foreign-currency-foreign-exchange.page
It seems they changed their website to fetch data via AJAX now. You can use your browser's developer tools and check the Network tab to see that additional data gets loaded, e.g.
https://www.dbs.com.sg/flplscsapi/personal/default.page?q=(path:=templatedata/MMContent/RatesSGFX/data/personal/en/fx_rates.xml)&max=10&start=0&format=json&includeDCRContent=true to get a JSON holding information about the display name of each currency, the image to be displayed, as well as a shortcut for the currency
https://www.dbs.com.sg/sg-rates-api/v1/api/sgrates/getSGFXRates to get a JSON which holds information about the currency rates.
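As a small sketch of the second call, assuming the endpoint is still publicly reachable (the shape of the returned JSON isn't documented, so inspect it first):

library(httr)
library(jsonlite)

rates_url <- "https://www.dbs.com.sg/sg-rates-api/v1/api/sgrates/getSGFXRates"

resp  <- GET(rates_url)
rates <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))

# Look at the structure before trying to pull specific fields
str(rates, max.level = 2)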

Getting the download url of a video from a site

I'm trying to build a web scraper which downloads videos from "fmovies.se".
I was not able to fully extract the video url given the webpage.
The webpage I'm considering is "https://fmovies.se/film/la-cage-doree.5283j".
Two queries are required to retrieve the video url.
The initial one is 'https://fmovies.se/ajax/episode/info?ts=1483027200&_=2399&id=9076jn&update=0'.
The query is composed of "ts", "_", "id" and "update" elements. Everything except the "_" part was mentioned in the HTML code of the webpage.
I couldn't figure out where the "_=2399" part was coming from.
Can anyone help me with this ?
Even if you figure out how those parameters are computed, they can change their algorithm at any moment, which this site specifically has done in the past, see this thread.
You need a long-lasting solution: a headless browser.
You can use a headless browser to simulate user interactions programmatically and intercept the XHR request that you are looking for (e.g. https://fmovies.se/ajax/episode/info?ts=1483027200&_=2399&id=9076jn&update=0).
One of the best headless browsers out there is Puppeteer and there's a lot of information on how to use it.

fill out search on website and screen scrape result in r

this is my first post, so if my question is too vague or not clear, please tell me so.
I'm trying to scrape a website with news articles for a research project, but the link to the modified search on that webpage won't work because the intranet authentication spits out an error.
So my idea was to fill out the search form and use the resulting link to scrape the website.
Since my boss likes to work with R, he would like me to write an R script to do so, but I have no idea how and haven't found anything that works.
You need two packages: RCurl and XML.
The RCurl package is used for internet browsing. It can submit HTML forms using GET or POST requests, so with it you can log in or fill out any form.
The output from the server will be in HTML. If you want to extract the links, you can use the XML package. It helps to get data out of XML/HTML format.
But before you start, you have to find out where the search form is on the webpage (and which arguments it expects). The Firefox browser can be useful here. You need two add-ons: Live HTTP Headers and Firebug. With those add-ons you can inspect the webpage much more easily.
I know this does not solve your problem completely, but I cannot say much more, since it depends on the particular situation and webpage structure. I believe the tools I have mentioned are quite enough to achieve what you want.
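For illustration, a rough sketch of that workflow with a made-up form URL and made-up field names (substitute the ones you find with the browser tools above):

library(RCurl)
library(XML)

# Submit the search form; the URL and field names below are placeholders
result_html <- postForm("https://news-site.example/search",
                        query = "some search term",
                        from  = "2024-01-01")

# Parse the returned HTML and pull out the article links
doc   <- htmlParse(result_html, asText = TRUE)
links <- xpathSApply(doc, "//a/@href")
head(links)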
Best regards.

Webscraping a tricky asp.net page

The overall goal is to perform a search on the following webpage http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx with a container value of CMAU1173561. I have tried two approaches: the PHP extension cURL and Python's mechanize. The PHP approach involves performing a POST submission using the input fields found on the page (NOTE: these are really ugly on the ASP.NET page). The returned page does not contain any of the search results. The second approach involves using Python's mechanize module. In this approach I load the page, select the form, then change the text field ctl00$ContentPlaceBody$TextSearch to the container value. When I load the response, again no search results.
I am at a dead end. Any help would be appreciated because, as it stands, my next step is to become an ASP.NET expert, which I would prefer not to.
The source of that page is pretty scary (giant viewstate, tables all over the place, inline CSS, styles that look like they were copied from Word).
Regardless...an ASP.Net form still passes the same raw data to the server as any other form (though it is abstracted to the developer).
It's very possible that you are missing the cookies which go along with the request. If the search page (or any piece of the site) uses session state, the ASP.Net session cookie must be included in the request. You will be able to tell it from its name (contains "asp.net" and "session").
I assume that you have used a tool like Firebug or Chrome to view the complete outgoing request when the page is submitted. From my quick test, it looks like the request may be performed with a GET, not a POST. I submitted a form, looked at the request, and pasted the URL into a new browser window.
Example: http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx?ContNum=CMAU1173561&T=57201202648
This may be all you need to do.
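In R terms (not what the original PHP/Python attempts used, just an equivalent sketch with httr), that boils down to a plain GET with query parameters. The trailing T value may be time- or session-specific, so it may need refreshing; if the site insists on its session cookie, that can be passed along too:

library(httr)

resp <- GET("http://www.cma-cgm.com/eBusiness/Tracking/Default.aspx",
            query = list(ContNum = "CMAU1173561", T = "57201202648"))
# If a session cookie is required, copy its value from the browser and add
# set_cookies(ASP.NET_SessionId = "value-from-browser") to the call above.

results <- content(resp)   # parsed HTML of the results page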
