Web scraping a table in R: Fantasy Premier League

I am trying to extract the table from the website: https://fantasy.premierleague.com/statistics?fbclid=IwAR1vShDx0eEefTus-dcxA6anpurcmxz2p4fKHcq1uu9xLj54BYhdpF4pxvc
but it returns 0 elements. Can you please help me?
Thank you in advance.

This is best done using the Fantasy Premier League API. For more information on the API, see this link: https://www.reddit.com/r/FantasyPL/comments/c64rrx/fpl_api_url_has_been_changed/
Using an API in R will require you to do some research, but have a look at the fromJSON() function in the jsonlite package.
This should be enough to get you started, let me know if you have more questions.
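For example, something along these lines should get you the player table with jsonlite; the bootstrap-static endpoint and the column names below are assumptions on my part, so check the thread above for the current URL:

library(jsonlite)

# The 'bootstrap-static' endpoint returns players, teams and gameweek data
fpl <- fromJSON("https://fantasy.premierleague.com/api/bootstrap-static/")

# 'elements' holds one row per player; 'teams' maps the numeric team ids
players <- fpl$elements
head(players[, c("web_name", "team", "now_cost", "total_points")])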

Related

Web scraping in R - How can I collect information on all products from a page instead of only the first product?

I have started to learn web scraping using R. My first project is to collect a list of all cooking books from Indigo and do some analysis.
But currently, I can only select the first book from the page. I use the rvest package and Google Chrome's SelectorGadget. I have watched YouTube videos and read other guides, but no one seems to have this issue; I'd be happy to get any ideas on listing all books from the page and across all available pages.
Code:
library(rvest)
library(tidyverse)

indigo_page <- read_html("https://www.chapters.indigo.ca/en-ca/books/top-tens/cookbooks/")

indigo_page %>%
  html_node(".product-list__product-title") %>%
  html_text()
Output:
[1] "The Comfortable Kitchen: 105 Laid-back, Healthy, And Wholesome Recipes"
Donjazz, I guess the first suggestion would be to use html_nodes(), rather than html_node(). This minor change seems to output all of the titles for you.
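With that change applied to the code above (plus trim = TRUE to tidy the whitespace), the call looks like this:

library(rvest)

indigo_page <- read_html("https://www.chapters.indigo.ca/en-ca/books/top-tens/cookbooks/")

# html_nodes() returns every matching element, so html_text() gives all titles
indigo_page %>%
  html_nodes(".product-list__product-title") %>%
  html_text(trim = TRUE)

In newer versions of rvest the same function is called html_elements().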

Looking for R functions to access the PubChem API to query the classification browser

I am trying to find ways to use R to access the classification browser in PubChem via its REST API and download bulk data at once. Can someone guide me on how to go about this?
Thanks in advance
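As a rough starting point, the general pattern of calling PubChem's PUG REST service from R with jsonlite looks like the sketch below; the classification browser has its own endpoints (see PubChem's programmatic-access documentation), but the request-and-parse pattern is the same. The example compound (CID 2244, aspirin) and the property list are only illustrative:

library(jsonlite)

# Query PubChem PUG REST for basic properties of CID 2244 (aspirin);
# the same fromJSON()-on-a-URL pattern applies to other PUG REST endpoints.
url <- paste0(
  "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/cid/2244/",
  "property/MolecularFormula,MolecularWeight/JSON"
)

res <- fromJSON(url)
res$PropertyTable$Properties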

rtweet to scrape a whole thread of a Twitter status?

I am trying to use R to scrape the whole conversation thread of a Twitter status. I am exploring rtweet, which is the latest package for extracting Twitter data in R, but I could not find a function for this. I wonder if anybody could help.
Thanks
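One rough approach, sketched below, is to look up the root status and then search for replies to its author, keeping only tweets that reply into the thread. The column names (status_id, reply_to_status_id, screen_name) match rtweet 0.x and may differ in newer versions, the status id is a placeholder, and the standard search endpoint only covers roughly the last week of tweets:

library(rtweet)

root_id <- "1234567890123456789"   # placeholder status id
root    <- lookup_statuses(root_id)

# recent tweets directed at the author of the root status
replies <- search_tweets(paste0("to:", root$screen_name), n = 1000)

# grow the thread: keep replies whose parent is already in the thread
thread_ids <- root_id
repeat {
  new_ids <- replies$status_id[replies$reply_to_status_id %in% thread_ids &
                                 !(replies$status_id %in% thread_ids)]
  if (length(new_ids) == 0) break
  thread_ids <- c(thread_ids, new_ids)
}

thread <- replies[replies$status_id %in% thread_ids, ]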

Rgexf dynamic node attributes

I'm trying to build a dynamic gexf graph file using the R library Rgexf. It is great so far, but I would like to add node attributes that change over time. As I understand it, the gexf format supports this, but I don't know how to add it using the R library.
Is this possible with rgexf?
If not which would be another way to do it?
I have some basic Python knowledge; would pygexf be a more powerful alternative?
As the author/developer of rgexf, I can tell you that the current version cannot handle spells or dynamic attributes. The next version will (I hope to have it out in the next two weeks).
For now you can try using Paul Girard's pygexf (gexf for Python), https://github.com/paulgirard/pygexf, which supports spells.
Here you will find a really good example of network dynamics with spells and dynamic attributes: http://gexf.net/format/dynamics.html
If you have another concern/suggestion about the library, please do not hesitate to send me an email at george [dot] vega [at] nodoschile.org. I would be very grateful.
Best wishes
George Vega

How to find what ISBNs are in use

I am trying to find a list of what ISBNs are in use. I guess I could scrape a website like Amazon but that would waste a lot of bandwidth. Is there a better (free) way?
Maybe you could use the remote API for isbndb.com.
Trying to keep an enormous ISBN list up to date yourself is a huge task, if you ask me.
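For a single lookup, something along these lines should work from R; the v2 endpoint (https://api2.isbndb.com/book/{isbn}) and the Authorization header are assumptions based on isbndb's current API, so check their documentation and terms first:

library(httr)
library(jsonlite)

# Look up one ISBN via isbndb's REST API (endpoint and header are assumptions)
isbndb_lookup <- function(isbn, api_key) {
  res <- GET(paste0("https://api2.isbndb.com/book/", isbn),
             add_headers(Authorization = api_key))
  stop_for_status(res)
  fromJSON(content(res, as = "text", encoding = "UTF-8"))
}

# book <- isbndb_lookup("9780131101630", Sys.getenv("ISBNDB_KEY"))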
Just for the record: note that if you actually want an ISBN for your publication, you need to go to the official agency in your country. In the US this is http://www.isbn.org/, but it varies by country; Australia, for example, has its own agency.
This might help: What is the most complete (free) ISBN API?
As the accepted answer states, there is also an API to search Amazon, but it's not actually supposed to be used in the way you wish.
I ended up using a partial list from http://my.linkbaton.com/isbn/
Yes, try isbndb.com
