Scraping JSON link is not working using fromJSON(url) - r

For web scraping I normally use the jsonlite::fromJSON(url) command, which usually does the job for me. However, this time the JSON is wrapped inside other text, basically like this:
jQuery([
JSON stuff that I am more used to
]);
How do I get around this easily?
The actual data looks like this when I call the address (I have reformatted it to be prettier):
jQuery(
[
{"Date":"2019-05-31T00:00:00+02:00","FromTime":"2019-05-31T00:00:00+02:00","ToTime":"2019-05-31T00:15:00+02:00","Value":3315.9120000000003,"Value2":2584.244,"Value3":731.668},
{"Date":"2019-05-31T00:00:00+02:00","FromTime":"2019-05-31T00:15:00+02:00","ToTime":"2019-05-31T00:30:00+02:00","Value":3386.238,"Value2":2655.814,"Value3":730.424}
]
);
The error message I get when I try to make the function parse it is:
Error in parse_con(txt, bigint_as_char) :
lexical error: invalid char in json text.
jQuery([{"Date":"2019-05-29T00:
(right here) ------^
The end goal is just to have a data frame to continue working with.

You can substr() out what you want from the rvest return. It looks like the jQuery return will always have the same start and end syntax:
library(rvest)
library(jsonlite)

url <- 'https://ws.50hertz.com/web02/api/PhotovoltaicActual/ListRecords?filterDateTime=2019-05-30T22:23:14.716Z&callback=jQuery&_=1559254994256'
r <- read_html(url) %>%
  html_node("p") %>%
  html_text()
# drop the leading "jQuery(" (7 characters) and the trailing ");"
x <- jsonlite::fromJSON(substr(r, 8, nchar(r) - 2))
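If the wrapper varies in length, an alternative (a sketch, assuming the payload is always wrapped in jQuery( ... ); ) is to strip it with regular expressions before parsing:
# remove the leading "jQuery(" and the trailing ");" around the JSON payload
txt <- gsub("^jQuery\\(", "", r)
txt <- gsub("\\);?\\s*$", "", txt)
x <- jsonlite::fromJSON(txt)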

Related

Using R to mimic “clicking” a download file button on a webpage

There are two parts to my question, as I explored two methods in this exercise; however, I succeeded with neither. It would be greatly appreciated if someone could help me out.
[PART 1:]
I am attempting to scrape data from a webpage on the Singapore Stock Exchange, https://www2.sgx.com/derivatives/negotiated-large-trade, which contains data stored in a table. I have some basic knowledge of scraping data using rvest. However, using the Inspector in Chrome, the HTML hierarchy is much more complex than I expected. I can see that the data I want is hidden under <div class="table-container">, and here's what I've tried:
library(rvest)
library(httr)
library(XML)
SGXurl <- "https://www2.sgx.com/derivatives/negotiated-large-trade"
SGXdata <- read_html(SGXurl)  # read_html() has no stringsAsFactors argument
html_nodes(SGXdata, ".table-container")
However, nothing is picked up by the code, and I doubt I'm using it correctly.
[PART 2:]
I realized that there's a small "download" button on the page which downloads exactly the data file I want in .csv format. So I was thinking of writing some code to mimic the download button, and I found this question, Using R to "click" a download file button on a webpage, but I'm unable to get it to work even with some modifications to that code.
There are a few filters on the webpage; mostly I'm interested in downloading data for a particular business day while leaving the other filters blank, so I've tried writing the following function:
library(httr)
library(rvest)
library(purrr)
library(dplyr)
crawlSGXdata <- function(date){
  POST("https://www2.sgx.com/derivatives/negotiated-large-trade",
       body = NULL,
       encode = "form",
       write_disk("SGXdata.csv", overwrite = TRUE)) -> resfile
  res <- read.csv("SGXdata.csv")
  return(res)
}
I intended to put the function input "date" into the body argument, but I wasn't able to figure out how to do that, so I started off with body = NULL on the assumption that it does no filtering. However, the result is still unsatisfactory. The downloaded file is basically empty apart from the following error:
Request Rejected
The requested URL was rejected. Please consult with your administrator.
Your support ID is: 16783946804070790400
The content is loaded dynamically from an API call that returns JSON. You can find this in the Network tab of your browser's dev tools.
The following returns that content. I find the total number of pages of results and loop, combining the data frame returned from each call into one final data frame containing all results.
library(jsonlite)

url <- 'https://api.sgx.com/negotiatedlargetrades/v1.0?order=asc&orderby=contractcode&category=futures&businessdatestart=20190708&businessdateend=20190708&pagestart=0&pageSize=250'
r <- jsonlite::fromJSON(url)
num_pages <- r$meta$totalPages
df <- r$data
url2 <- 'https://api.sgx.com/negotiatedlargetrades/v1.0?order=asc&orderby=contractcode&category=futures&businessdatestart=20190708&businessdateend=20190708&pagestart=placeholder&pageSize=250'

if(num_pages > 1){
  # pages are zero-indexed and page 0 was fetched above, so loop from 1 to num_pages - 1
  for(i in seq(1, num_pages - 1)){
    newUrl <- gsub("placeholder", i, url2)
    newdf <- jsonlite::fromJSON(newUrl)$data
    df <- rbind(df, newdf)
  }
}
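To get the date-parameterized function asked for in the question, the business date can be substituted into the query string. A minimal sketch, assuming the API accepts any date in yyyymmdd form:
library(jsonlite)

crawlSGXdata <- function(date){  # date as "yyyymmdd", e.g. "20190708"
  base <- paste0("https://api.sgx.com/negotiatedlargetrades/v1.0?order=asc&orderby=contractcode&category=futures&businessdatestart=", date, "&businessdateend=", date, "&pagestart=%d&pageSize=250")
  r <- jsonlite::fromJSON(sprintf(base, 0))
  df <- r$data
  num_pages <- r$meta$totalPages
  if(num_pages > 1){
    for(i in seq(1, num_pages - 1)){
      df <- rbind(df, jsonlite::fromJSON(sprintf(base, i))$data)
    }
  }
  df
}

df <- crawlSGXdata("20190708")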

web scraping a table with R

I am trying to web scrape a table from the PitchBook website.
Plain HTML does not work because PitchBook uses JavaScript rather than static HTML to load the data, so I need to execute the JS in order to extract the info from the JSON file.
This is my code:
library(httr)
library(jsonlite)
library(magrittr)
json = get("https://my.pitchbook.com/old/homeContent.64ea0536fd321cc1dd3b.js") %>%
  content(as = 'text') %>%
  fromJSON()
I get this error:
Error in get("https://my.pitchbook.com/old/homeContent.64ea0536fd321cc1dd3b.js") :
  object 'https://my.pitchbook.com/old/homeContent.64ea0536fd321cc1dd3b.js' not found
Whatever data I try to load, it returns the same error.
Would appreciate your help :)
Thank you :)
You have called base::get rather than httr::GET. So it should be:
library(httr)
library(jsonlite)
library(magrittr)
json <- GET("https://my.pitchbook.com/old/homeContent.64ea0536fd321cc1dd3b.js") %>%
  content("text") %>%
  fromJSON()
But I'm not entirely sure that your URL returns valid JSON. Parsing it as-is will give:
lexical error: invalid char in json text.
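One way to check before parsing (a sketch; jsonlite::validate() tests whether a string is syntactically valid JSON):
library(httr)
library(jsonlite)
library(magrittr)

txt <- GET("https://my.pitchbook.com/old/homeContent.64ea0536fd321cc1dd3b.js") %>%
  content("text")
if (jsonlite::validate(txt)) {
  json <- fromJSON(txt)
} else {
  # probably a JavaScript file rather than JSON; inspect the start of it
  substr(txt, 1, 100)
}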

R Web scrape - Error

Okay, so I am stuck on what seems like it would be a simple web scrape. My goal is to scrape Morningstar.com to retrieve a fund name based on the entered URL. Here is an example of my code:
library(rvest)
url <- "http://www.morningstar.com/funds/xnas/fbalx/quote.html"
url %>%
  read_html() %>%
  html_node('r_title')
I would expect it to return the name Fidelity Balanced Fund, but instead I get the following error: {xml_missing}
Suggestions?
Aaron
edit:
I also tried scraping via an XHR request, but I think my issue is not knowing which CSS selector or XPath to use to find the appropriate data.
XHR code:
get.morningstar.Table1 <- function(Symbol.i, htmlnode){
  res <- GET(url = "http://quotes.morningstar.com/fundq/c-header",
             query = list(
               t = Symbol.i,
               region = "usa",
               culture = "en-US",
               version = "RET",
               test = "QuoteiFrame"
             ))
  # assign via tryCatch so the NA from the error handler actually reaches x
  x <- tryCatch(
    content(res) %>%
      html_nodes(htmlnode) %>%
      html_text() %>%
      trimws(),
    error = function(e) NA
  )
  return(x)
} # the HTML node in this case is a vkey
Still, the question is the same: am I using the correct CSS selector/XPath? The XHR code works great for requests that have a clear CSS selector.
OK, so it looks like the page dynamically loads the section you are targeting, so it doesn't actually get pulled in by read_html(). Interestingly, this part of the page also doesn't load using an RSelenium headless browser.
I was able to get this to work by scraping the page title (which is actually hidden on the page) and doing some regex to get rid of the junk:
library(rvest)

url <- 'http://www.morningstar.com/funds/xnas/fbalx/quote.html'
page <- read_html(url)
title <- page %>%
  html_node('title') %>%
  html_text()

symbol <- 'FBALX'
# keep only the text between the two occurrences of the symbol in the title
regex <- paste0(symbol, " (.*) ", symbol, ".*")
cleanTitle <- gsub(regex, '\\1', title)
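Assuming the page title looks something like "FBALX Fidelity Balanced Fund FBALX Quote Price News", cleanTitle should come out as "Fidelity Balanced Fund".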
As a side note, and for your future use, your first call to html_node() should include a "." before the class name you are targeting:
mypage %>%
html_node('.myClass')
Again, this doesn't help in this specific case, since the page is failing to load the section we are trying to scrape.
A final note: other sites contain the same info and are easier to scrape (like Yahoo Finance).
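For instance, a sketch against Yahoo Finance, assuming its quote pages live at finance.yahoo.com/quote/<symbol> and put the fund name in the page title (the exact title format may change):
library(rvest)

symbol <- 'FBALX'
url <- paste0('https://finance.yahoo.com/quote/', symbol)
title <- read_html(url) %>%
  html_node('title') %>%
  html_text()
# e.g. "Fidelity Balanced (FBALX) ..." - drop everything from the "(" onward
fundName <- trimws(sub("\\(.*$", "", title))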

web scrape with rvest

I'm trying to grab a table of data using read_html from the r package rvest.
I've tried the below code:
library(rvest)
raw <- read_html("https://demanda.ree.es/movil/peninsula/demanda/tablas/2016-01-02/2")
I don't believe the above pulled the data from the table, since I see 'raw' is a list of 2:
'node:<externalptr>' and 'doc:<externalptr>'
I've tried grabbing the xpath too:
html_nodes(raw, xpath = '//*[(@id = "tabla_generacion")]//*[contains(concat( " ", @class, " " ), concat( " ", "ng-scope", " " ))]')
Any advice on what to try next?
Thanks.
This website is using Angular to make a call to get the data. You can just use that call to get the raw JSON. The response is not pure JSON, so you can't simply run fromJSON(url); you have to download the data and strip out the non-JSON wrapper before you parse it.
library(jsonlite)
library(httr)
url <- "https://demanda.ree.es/WSvisionaMovilesPeninsulaRest/resources/demandaGeneracionPeninsula?callback=angular.callbacks._2&curva=DEMANDA&fecha=2016-01-02"
a <- GET(url)
a <- content(a, as="text")
# get rid of the non-JSON stuff...
a <- gsub("^angular.callbacks._2\\(", "", a)
a <- gsub("\\);$", "", a)
df <- fromJSON(a, simplifyDataFrame = TRUE)
I found this by pressing F12 in Chrome and looking at the "Sources" tab. The data to fill the table had to come from somewhere, so it's just a matter of figuring out where. I was unable to use rvest to scrape the table; I'm not sure the call that fetches the data is executed when the page is loaded in R the way it is in Chrome, so there may simply be no data in the HTML for rvest to scrape.
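If you need other days, the date is just the fecha parameter in that URL. A small sketch, assuming the endpoint and callback name stay the same for other dates:
get_demanda <- function(fecha){  # fecha as "yyyy-mm-dd"
  url <- paste0("https://demanda.ree.es/WSvisionaMovilesPeninsulaRest/resources/demandaGeneracionPeninsula?callback=angular.callbacks._2&curva=DEMANDA&fecha=", fecha)
  a <- content(GET(url), as = "text")
  a <- gsub("^angular.callbacks._2\\(", "", a)
  a <- gsub("\\);$", "", a)
  fromJSON(a, simplifyDataFrame = TRUE)
}

df <- get_demanda("2016-01-02")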

How can I scrape this data?

I want to scrape the statistics from this page:
url <- "http://www.pgatour.com/players/player.20098.stuart-appleby.html/statistics"
Specifically, I want to grab the data in the table that's underneath Stuart's headshot. It's headlined by "Stuart Appleby - 2015 STATS PGA TOUR"
I attempted to use rvest, in combination with SelectorGadget (http://selectorgadget.com/).
url_html <- url %>% html()
url_html %>%
  html_nodes(xpath = '//*[(@id = "playerStats")]//td')
'Should' get me the table without, for example, the row on top that says "Recap -- Rank -- Additional Stats"
url_html <- url %>% html()
url_html %>%
  html_nodes(xpath = '//*[(@id = "playerStats")] | //th//*[(@id = "playerStats")]//td')
'Should' get me the table with that "Recap -- Rank -- Add'l Stats" line.
Neither do.
Obvs I'm a complete newb when it comes to web scraping. When I click on 'view source' for that webpage, the data contained in the table isn't there.
In the source code, where I think the table should be starting, is this bit of code:
<script id="playerStatsTourTemplate" type="text/x-jquery-tmpl">
{{each(t, tour) tours}}
{{if pgatour.players.shouldProcessTour(tour.tourCodeLC)}}
<div class="statistics-head">
<h2 class="title">Stuart Appleby - <b>${year} STATS
.
.
.
So, it appears the table is stored somewhere (JSON? jQuery? JavaScript? Are those terms applicable here?) that isn't accessible to the html() function. Is there any way to use rvest to grab this data? Is there an rvest equivalent for grabbing data that is stored this way?
Thanks.
I'd probably use the GET request the page itself is making, pull the raw data from their API, and work on parsing that:
library("httr")
a <- GET("http://www.pgatour.com/data/players/20098/2014stat.json")
content(a)          # a list representation, basically the output of fromJSON()
as(a, "character")  # the raw JSON as a string
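To go from there to a data frame, one option is to parse the raw JSON and inspect its structure (a sketch; the exact layout of the stats inside the JSON may differ):
library(jsonlite)
stats <- fromJSON(as(a, "character"))
str(stats, max.level = 2)  # look here for the element holding the stats table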
Also check out this open-source project on GitHub that scrapes PGA data: https://github.com/zachwill/golf/blob/master/pga.py
