How to use read_html with a character vector of URLs - r

I am using the rvest package, and below is the code:
library(rvest)
url <- 'https://www.zhihu.com/people/excited-vczh'
webpage <- read_html(url)
profile_data <- html_nodes(webpage, '.Profile-sideColumnItemLink')
profile_data_text <- html_text(profile_data)
This code reads and parses a single URL. What if I have a character vector storing multiple URLs? How should I feed these URLs into the code above?
For instance, urlist is a character vector storing 1000 URLs. How can I change my code to scrape the same content from every URL in urlist?

You could just use lapply to run through each URL to grab the text you need:
library(rvest)
urlist <- rep('https://www.zhihu.com/people/excited-vczh', 100)
profile_data_list <- lapply(urlist, function(x) {
  webpage <- read_html(x)
  profile_data <- html_nodes(webpage, '.Profile-sideColumnItemLink')
  html_text(profile_data)
})
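With something like 1000 URLs, some requests will inevitably fail or time out. A minimal sketch for keeping the loop going (the NA fallback is just one possible choice, not part of the original answer):
library(rvest)
profile_data_list <- lapply(urlist, function(x) {
  tryCatch({
    webpage <- read_html(x)
    html_text(html_nodes(webpage, '.Profile-sideColumnItemLink'))
  }, error = function(e) NA_character_)  # skip over URLs that fail to load
})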

Related

Simple solution to scrape and loop with rvest, storing the results of the for-loop in a variable

I need to collect the links from 3 pages, 50 links each (150 in total), using R with the rvest library. I used a for-loop to crawl through the pages. I know that it's a very basic question, which has been answered elsewhere:
R web scraping across multiple pages
Scrape and Loop with Rvest
I tried different versions of the following code. Most of them worked but returned only 50 links instead of 150.
library(rvest)
baseurl <- "https://www.ebay.co.uk/sch/i.html?_from=R40&_nkw=chain+and+sprocket&_sacat=0&_pgn="
n <- 1:3
nextpages <- paste0(baseurl, n)
for(i in nextpages){
  html <- read_html(nextpages)
  links <- html %>% html_nodes("a.vip") %>% html_attr("href")
}
The code is expected to return all 150 links, instead of just 50.
You're overwriting the links variable in every iteration, so you would only end up with the last 50 links.
Also, you're looping over the 'i' variable, but your read_html() call uses the nextpages variable, which is actually a vector of 3 URLs, so you should be getting an error.
Try this:
links <- c()
for(i in nextpages){
  html <- read_html(i)
  links <- c(links, html %>% html_nodes("a.vip") %>% html_attr("href"))
}
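After the loop you can sanity-check the result (assuming each page really does carry 50 links):
length(links)          # should be 150
length(unique(links))  # spot listings repeated across pages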
We can use map instead of a for loop.
library(rvest)
library(purrr)
map(nextpages, . %>% read_html %>%
      html_nodes("a.vip") %>%
      html_attr("href")) %>% flatten_chr()
#[1] "https://www.ebay.co.uk/itm/Genuine-Honda-Chain-and-sprocket-set-Honda-Cub-C50-C70-C90-Heavy-Duty/254287014069?hash=item3b34afe8b5:g:wjEAAOSwqaBdH69W"
#[2] "https://www.ebay.co.uk/itm/DID-Heavy-Duty-Drive-Chain-And-JT-Sprocket-Kit-For-Honda-MSX125-Grom-2013-2019/223130604262?hash=item33f39ed2e6:g:QmwAAOSwdrpcAQ4c"
#.....
#...

Rvest, html_nodes returns empty list and string, weird website

For this website, https://www.coinopsy.com/dead-coins/, I'm using R and the rvest package to scrape names, summaries, and that kind of info to build my own dataset. I've done this with other websites and it was really successful, but this one is odd.
I used SelectorGadget, which was useful in my previous jobs for figuring out the CSS node names, but here html_nodes and html_text return an empty character vector. I don't know if it's because the website is structured in a totally different format!
An example of the relevant HTML:
<td class="all sorting_1"><a class="coin_name" href="007coin">007Coin</a></td>
<a class="coin_name" href="007coin">007Coin</a>
library(rvest)
url <- "https://www.coinopsy.com/dead-coins/"
webpage <- read_html(url)
Item_html <- html_nodes(webpage,'.coin_name')
Item <- html_text(Item_html)
> Item
character(0)
Can someone help me out on this issue?
If you disable JavaScript in the browser you will see that that content is not loaded. If you then inspect the HTML you will see the data is stored in a script tag, presumably loaded into the table when JavaScript runs in the browser. JavaScript doesn't run with the method you are using. You can extract the JavaScript array of arrays from the response HTML and then parse it into a dataframe. I am new to R, so I am looking into how this can be done in this case; I will include a full example with Python at the end and will update if my research yields something. Otherwise, you can regex the contents out of the string returned in data.
library(rvest)
library(stringr)
library(magrittr)
url = 'https://www.coinopsy.com/dead-coins/'
r <- read_html(url) %>%
  html_node('body') %>%
  html_text() %>%
  toString()
data <- str_match_all(r,'var table_data = (.*?);')
data <- data[[1]][,2] # string representation of list of lists
#step to convert string to object
#step to convert object to dataframe
In Python there is the ast library, which makes the conversion easy; the result of the code below is the table you see on the page.
import requests
import re
import ast
import pandas as pd
r = requests.get('https://www.coinopsy.com/dead-coins/')
p = re.compile(r'var table_data = (.*?);') #p1 = re.compile(r'(\[".*?"\])')
data = p.findall(r.text)[0]
listings = ast.literal_eval(data)
df = pd.DataFrame(listings)
print(df)
Edit:
Currently I can't find a library which does the conversion I mentioned. Below is an ugly way of combining the pieces, and it feels inefficient. I would welcome suggestions on improvements (though that may be for Code Review later). I'm still looking at this, so I will update.
library(rvest)
library(stringr)
library(magrittr)
url = 'https://www.coinopsy.com/dead-coins/'
headers <- c("Column To Drop","Name","Summary","Project Start Date","Project End Date","Founder","urlId")
# https://www.coinopsy.com/dead-coins/bigone-token/ where bigone-token is urlId
r <- read_html(url) %>%
  html_node('body') %>%
  html_text() %>%
  toString()
data <- str_match_all(r,'var table_data = (.*?);')
data <- data[[1]][,2]
z <- substr(data, start = 2, stop = nchar(data)-1) %>% str_match_all(., "\\[(.*?)\\]")
z <- z[[1]][,2]
for(i in seq(1,length(z))){
  if(i==1){
    df <- rapply(as.list(strsplit(z[i], ",")[[1]][2:7]), function(x) trimws(sub("'(.*?)'", "\\1", x)))
  } else {
    df <- rbind(df, rapply(as.list(strsplit(z[i], ",")[[1]][2:7]), function(x) trimws(sub("'(.*?)'", "\\1", x))))
  }
}
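If the array literal happens to contain no embedded single quotes, one possible shortcut (just a sketch, not verified against this particular page) is to coerce it to JSON and let jsonlite do the parsing:
library(jsonlite)
json_text <- gsub("'", '"', data, fixed = TRUE)   # naive quote swap; breaks if any value contains a quote
listings <- fromJSON(json_text)                   # array of arrays -> character matrix
df <- as.data.frame(listings, stringsAsFactors = FALSE)
names(df) <- headers                              # assumes the same 7 columns as the headers vector above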
Maybe it will help someone; I had the same problem. The solution was that I had to specify the tag the selector applies to, followed by the ".". In your case you want to select a class named coin_name, but when passing that class to the html_nodes function you don't specify the tag, which is the same mistake I made. To solve it, you only have to specify the tag, which in your case is the "a" tag, like this:
Item_html <- html_nodes(webpage,'a.coin_name')
That way the html_nodes function will not return an empty result.
I know you already solved it, but I hope this can help someone else.

Scrape table in R using rvest

I'm unable to scrape the table in the link mentioned below. I've inspected the source code and noted that the table has the class name tablesaw-sortable.
I tested the method below on a Wikipedia page and it was able to extract the table. Is there any way to read this particular table?
url <- read_html("https://www.wunderground.com/history/airport/KNYC/2015/01/01/DailyHistory.html?HideSpecis=0")
weather_hourly <- url %>%
  html_nodes(xpath='//*[@class="tablesaw-sortable"]') %>%
  html_table()
Ok, something like this should get you pretty close to where you want to be.
library("httr")
URL <- "https://www.timeanddate.com/weather/usa/new-york/historic?month=8&year=2018"
temp <- tempfile(fileext = ".html")
GET(url = URL, user_agent("Mozilla/5.0"), write_disk(temp))  # fetch with a browser-like user agent and save the page to disk
library("XML")
df <- readHTMLTable(temp)
df <- df[[2]]
df
Create a small loop if you want to iterate through a bunch of URLs and import data from each.
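A minimal sketch of such a loop (the months chosen are only illustrative, and the table position may differ from page to page):
library(httr)
library(XML)
months <- 1:3
tables <- lapply(months, function(m) {
  u <- paste0("https://www.timeanddate.com/weather/usa/new-york/historic?month=", m, "&year=2018")
  tmp <- tempfile(fileext = ".html")
  GET(url = u, user_agent("Mozilla/5.0"), write_disk(tmp))
  readHTMLTable(tmp)[[2]]  # same table index as above; adjust if the page layout changes
})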

rvest only returns the headers

The code below returns only the column headers. I have tried several ways to do it, but with no luck.
library(rvest)
the <- read_html("https://www.timeshighereducation.com/world-university-rankings/2018/regional-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats")
rating <- the %>%
  html_nodes("table") %>%
  html_table()
rating
The issue is that the table is loaded after the page, via JavaScript. There are many ways to deal with this:
One of the simplest in this case is to use RSelenium as a webdriver and collect the results with:
library(RSelenium)
library(rvest)
url <- "https://www.timeshighereducation.com/world-university-rankings/2018/regional-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats"
rD <- rsDriver()                               # start a Selenium server and browser
remDr <- rD[["client"]]
remDr$navigate(url)                            # the browser executes the page's JavaScript
page <- read_html(remDr$getPageSource()[[1]])  # parse the rendered source with rvest
table <- page %>% html_nodes("table") %>% html_table()
table
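When you are done, it is worth closing the browser session and stopping the Selenium server:
remDr$close()
rD$server$stop()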
Another way is to interpret the JSON result of the website's underlying request; the corresponding URL is https://www.timeshighereducation.com/sites/default/files/the_data_rankings/asia_university_rankings_2018_limit0_c36ae779f4180136af6e4bf9e6fc1081.json.
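A minimal sketch of that approach (the structure of the JSON is an assumption here; inspect it first and adjust the element names accordingly):
library(jsonlite)
json_url <- "https://www.timeshighereducation.com/sites/default/files/the_data_rankings/asia_university_rankings_2018_limit0_c36ae779f4180136af6e4bf9e6fc1081.json"
res <- fromJSON(json_url)
str(res, max.level = 1)  # the ranking rows typically sit in an element such as res$data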
Hope this helps.
Gottavianoni

Extract the inner text of the anchor elements correctly using R

I am using R to scrape the link titles from this link: www.jamesaltucher.com/sitemap.xml
This is my code.
library(XML)
library(RCurl)
url.link <- 'http://www.jamesaltucher.com/sitemap.xml'
blog <- getURL(url.link)
blog <- htmlParse(blog, encoding = "UTF-8")
titles <- xpathSApply(blog, "//a", xmlValue) ## titles
My titles variable is an empty list.
Did I use the xpath incorrectly?
Yes. You should be looking for the loc element, not the a element.
titles <- xpathSApply(blog, "//loc", xmlValue)
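For comparison, an equivalent sketch with rvest (note that the sitemap's loc elements contain URLs, not page titles):
library(rvest)
locs <- read_html("http://www.jamesaltucher.com/sitemap.xml") %>%
  html_nodes("loc") %>%
  html_text()
head(locs)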
web_page <- readLines("http://vueloeyewear.com/shop/retro/black-cia/")
author_lines <- web_page[grep("strong", web_page)]
author_lines <- author_lines [7:15]
test <- gsub(", ","",toString(author_lines))
test <- gsub("","",test)
author_lines <- htmlParse(test)
xpathSApply (author_lines,"//p",xmlValue)
Look at this one: //loc refers to the actual tag.
