I am trying to scrape some data from yahoo finance. Usually I have no problem doing this. Today however, I have run into a problem trying to pull a certain container. What might be the reason this is giving me such a difficult time?
I have tried many combinations of xpaths; SelectorGadget for some reason cannot pick up the xpath. I have posted some attempts and the URL below.
The green area is what I am trying to bring into my console.
library(tidyverse)
library(rvest)
library(httr)
read_html("https://ca.finance.yahoo.com/quote/SPY/holdings?p=SPY") %>% html_nodes(xpath = '//*[#id="Col1-0-Holdings-Proxy"]/section/div[1]/div[1]')
{xml_nodeset (0)}
#When I search for all tables using the following function.
read_html("https://finance.yahoo.com/quote/xlk/holdings?p=xlk") %>% html_nodes("table") %>% .[1] %>% html_table(fill = T)
I get the table at the bottom of the page. Trying different numbers in the [] leads to errors.
What am I doing wrong? This seems like such an easy scrape. Thanks a bunch for your help.
Your data doesn't reside within an actual html table.
You could use the following css selectors for now, though a lot of the page looks dynamic and I suspect attributes and classes will change in the future. I tried to keep the selectors a little more generic to compensate, but you should definitely seek to make this even more generic if possible.
I use css selectors throughout for the flexibility and specificity gained. The [] denote attribute selectors, the . denotes a class selector, and *= is the "contains" operator, specifying that the attribute on the left-hand side has a value containing the string on the right-hand side, e.g. [class*=screenerBorderGray] means the class attribute contains the string screenerBorderGray.
The " " (descendant), ">" (child) and "+" (adjacent sibling) between selectors are called combinators; they specify relationships between nodes matched by consecutive parts of the selector sequence.
I generate a left column list of nodes and a right column list of nodes (ignoring the chart col in between). I then join these into a final dataframe.
R
library(rvest)
library(magrittr)
pg <- read_html('https://finance.yahoo.com/quote/xlk/holdings?p=xlk&guccounter=1')
# left-hand column: row labels
lhs <- pg %>%
html_nodes('[id*=Holdings] section > .Fl\\(start\\) [class*=screenerBorderGray] > span:nth-child(1)') %>%
html_text()
# right-hand column: row values
rhs <- pg %>%
html_nodes('[id*=Holdings] section > .Fl\\(start\\) [class*=screenerBorderGray] span + span:last-child') %>%
html_text()
df <- data.frame(lhs, rhs) %>% set_names(., c('Title', 'value'))
df <- df[-c(3), ] # drop the unwanted third row
rownames(df) <- NULL
print(df)
Py
import requests
import pandas as pd
from bs4 import BeautifulSoup as bs
r = requests.get('https://finance.yahoo.com/quote/xlk/holdings?p=xlk&guccounter=1')
soup = bs(r.content, 'lxml')
# left-hand column: row labels
lhs = [i.text.strip() for i in soup.select('[id*=Holdings] section > .Fl\(start\) .Bdbc\(\$screenerBorderGray\) > span:nth-child(1)')]
# right-hand column: row values
rhs = [i.text.strip() for i in soup.select('[id*=Holdings] section > .Fl\(start\) .Bdbc\(\$screenerBorderGray\) span + span:last-child')]
df = pd.DataFrame(zip(lhs, rhs), columns = ['Title', 'Value'])
df = df.drop([2]).reset_index(drop = True)  # drop the unwanted third row and re-number
print(df)
References:
Row re-numbering: @thelatemail
I would like to extract the following data from four nodes, all at the same level and sharing the same class name.
I was able to extract the first of the four nodes, Property Amenities, using the Google Chrome SelectorGadget extension to identify the nodes.
library(rvest)
page0_url <- read_html("https://www.tripadvisor.com/Hotel_Review-g1063979-d1447619-Reviews-Solana_del_Ter-Ripoll_Province_of_Girona_Catalonia.html")
result_amenities <- html_text(html_node(page0_url, "._1nAmDotd") %>% html_nodes("div"))
However, I cannot figure out how to write the code to extract the elements within the second object, named "Room Features". This is at the same node level and has the same class name as the one above. The same applies to the two objects that follow it, named "Room types" and "Good to know".
You need to query all of the nodes with the same class using the html_nodes() function, then parse each of those nodes individually.
For example:
library(rvest)
url<- "https://www.tripadvisor.com/Hotel_Review-g1063979-d1447619-Reviews-Solana_del_Ter-Ripoll_Province_of_Girona_Catalonia.html"
page0_url<-read_html(url)
result_amenities <- html_text(html_nodes(page0_url,"._1nAmDotd") %>% html_nodes("div") )
names <- html_nodes(page0_url,"div._1mJdgpMJ") %>% html_text()
groupNodes <- html_nodes(page0_url,"._1nAmDotd")
outputlist <-lapply(groupNodes, function(node){
results <- node %>% html_nodes("div") %>% html_text()
})
On the referenced page there is no corresponding "_1nAmDotd" node for the "Good to Know" section, which leads to an imbalance in the results.
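If, as it appears here, the only section without a matching "_1nAmDotd" node is the trailing "Good to know" one, a minimal sketch for attaching the section titles to the parsed groups could be:
# sketch, assuming the titles in names and the groups in outputlist line up
# positionally and only trailing sections (here "Good to know") are missing
outputlist <- setNames(outputlist, names[seq_along(outputlist)])
str(outputlist)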
Almost all the desirable data (including everything you requested) is available via the page manifest, within a script tag, as that is where it is loaded from. You can regex out that enormous blob of data, then write user-defined functions to extract the desired info.
I initially parse the regex-matched group into a json object, all_data. I then look through that list of lists to find strings only associated with the data of interest. For example, starRating is associated with the location data you are interested in. get_target_list returns that list, and I then extract from it what I want. You can see that location_info holds the data related to hotel amenities (including room amenities), the star rating (hotel class), languages spoken, etc.
E.g. location_info$hotelAmenities$languagesSpoken or location_info$hotelAmenities$highlightedAmenities$roomFeatures, and so on.
N.B. As currently written, it is intended that search_string is unique to the desired list within the list of lists initially held in the json object. I wasn't sure whether the names of the named lists would remain constant, so I chose to retrieve the right list dynamically.
R:
library(rvest)
library(jsonlite)
library(stringr)
library(magrittr)
is_target_list <- function(x, search_string) {
return(str_detect(x %>% toString(), search_string))
}
get_target_list <- function(data_list, search_string) {
mask <- lapply(data_list, is_target_list, search_string) %>% unlist()
return(subset(data_list, mask))
}
r <- read_html("https://www.tripadvisor.com/Hotel_Review-g1063979-d1447619-Reviews-Solana_del_Ter-Ripoll_Province_of_Girona_Catalonia.html") %>%
toString()
all_data <- gsub("pageManifest:", '"pageManifest":', stringr::str_match(r, "(\\{pageManifest:.*);\\(")[, 2]) %>%
jsonlite::parse_json()
data_list <- all_data$pageManifest$urqlCache
# target_info <- get_target_list(data_list, 'hotelAmenities')
location_info <- get_target_list(data_list, "starRating") %>%
unname() %>%
.[[1]] %>%
{
.$data$locations[[1]]$detail
}
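Once location_info is built, the fields mentioned above can be pulled out directly; this usage sketch relies on the field names quoted above still being present in the page manifest:
names(location_info)
location_info$hotelAmenities$languagesSpoken
location_info$hotelAmenities$highlightedAmenities$roomFeatures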
For this website, https://www.coinopsy.com/dead-coins/, I'm using R and the rvest package to scrape names, summaries, etc., that kind of info, to make my own form. I've done this with other websites and it was really successful, but this one is odd.
I used SelectorGadget, which was useful in my previous jobs for figuring out the CSS node names, but html_nodes and html_text return an empty character vector. I don't know if it's because the website is structured in a totally different format!
An example of the HTML:
<td class="all sorting_1"><a class="coin_name" href="007coin">007Coin</a></td>
<a class="coin_name" href="007coin">007Coin</a>
url <- "https://www.coinopsy.com/dead-coins/"
webpage <- read_html(url)
Item_html <- html_nodes(webpage,'.coin_name')
Item <- html_text(Item_html)
> Item
character(0)
Can someone help me out on this issue?
If you disable JavaScript in the browser you will see that that content is not loaded. If you then inspect the HTML you will see the data is stored in a script tag, presumably loaded into the table when JavaScript runs in the browser. JavaScript doesn't run with the method you are using. You can extract the JavaScript array of arrays from the response html and then parse it into a dataframe. I am new to R, so I am looking into how this can be done in this case; I will include a full example with Python at the end and will update if my research yields something. Otherwise, you can regex the contents out of the returned string held in data.
library(rvest)
library(stringr)
library(magrittr)
url = 'https://www.coinopsy.com/dead-coins/'
r <- read_html(url) %>%
html_node('body') %>%
html_text() %>%
toString()
data <- str_match_all(r,'var table_data = (.*?);')
data <- data[[1]][,2] # string representation of list of lists
#step to convert string to object
#step to convert object to dataframe
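One possible way to fill in those two placeholder steps, a sketch that assumes the JavaScript array only contains single-quoted strings with no embedded quotes or commas (not guaranteed for this page), is to rewrite it as JSON and let jsonlite do the parsing:
library(jsonlite)
# swap JS-style single quotes for double quotes; if the assumption above holds, this yields valid JSON
json_text <- gsub("'", '"', data)
listings <- jsonlite::fromJSON(json_text) # with equal-length rows this simplifies to a character matrix
df <- as.data.frame(listings, stringsAsFactors = FALSE)
head(df)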
In Python there is the ast library, which makes the conversion easy; the result of the code below is the table you see on the page.
import requests
import re
import ast
import pandas as pd
r = requests.get('https://www.coinopsy.com/dead-coins/')
p = re.compile(r'var table_data = (.*?);') #p1 = re.compile(r'(\[".*?"\])')
data = p.findall(r.text)[0]
listings = ast.literal_eval(data)
df = pd.DataFrame(listings)
print(df)
Edit:
Currently I can't find a library which does the conversion I mentioned. Below is an ugly way of combining that feels inefficient. I would welcome suggestions for improvement (though that may be one for Code Review later). I'm still looking at this, so I will update.
library(rvest)
library(stringr)
library(magrittr)
url = 'https://www.coinopsy.com/dead-coins/'
headers <- c("Column To Drop","Name","Summary","Project Start Date","Project End Date","Founder","urlId")
# https://www.coinopsy.com/dead-coins/bigone-token/ where bigone-token is urlId
r <- read_html(url) %>%
html_node('body') %>%
html_text() %>%
toString()
data <- str_match_all(r,'var table_data = (.*?);')
data <- data[[1]][,2]
# strip the outer [ ] then capture each inner [...] row
z <- substr(data, start = 2, stop = nchar(data)-1) %>% str_match_all(., "\\[(.*?)\\]")
z <- z[[1]][,2]
# split each row on commas, keep fields 2:7 and strip the surrounding quotes
for(i in seq(1,length(z))){
  if(i==1){
    df <- rapply(as.list(strsplit(z[i], ",")[[1]][2:7]), function(x) trimws(sub("'(.*?)'", "\\1", x)))
  }else{
    df <- rbind(df,rapply(as.list(strsplit(z[i], ",")[[1]][2:7]), function(x) trimws(sub("'(.*?)'", "\\1", x))))
  }
}
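To finish this off, the headers vector defined above (minus the dropped first column) can be attached as column names; this assumes the six captured fields keep the order given in that vector:
# attach column names; headers[1] corresponds to the field dropped in the loop above
df <- as.data.frame(df, stringsAsFactors = FALSE)
colnames(df) <- headers[2:7]
head(df)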
Maybe this will help someone; I had the same problem. The solution was that I had to specify the tag the selector targets, followed by the ".". In your case you want to target a class named coin_name; when specifying that class in the html_nodes function you didn't specify the tag, same as I did. To solve it, I only had to include the tag, which in your case is the "a" tag, so it would look like this:
Item_html <- html_nodes(webpage,'a.coin_name')
That way the html_nodes function does not return an empty result.
I know you already solved it, but I hope this helps someone else.
I'm trying to scrape tabulated data on previous US statewide election results, and I think ballotpedia.org is a good place to get this data from, as its URLs are in a consistent format for all states.
Here's the code I set up to test it:
library(dplyr)
library(rvest)
# STEP 1 - URL COMPONENTS TO SCRAPE FROM
senate_base_url <- "https://ballotpedia.org/United_States_Senate_elections_in_"
senate_state_urls <- gsub(" ", "_", state.name)
senate_year_urls <- c(",_2012", ",_2014", ",_2016")
# TEST
test_url <- paste0(senate_base_url, senate_state_urls[10], senate_year_urls[2])
This results in the following URL: https://ballotpedia.org/United_States_Senate_elections_in_Georgia,_2014
Using the 'selectorgadget' chrome plugin, I selected the table in question containing the election result, and tried parsing it into R as follows:
test_data <- read_html(test_url)
test_data <- test_data %>%
html_node(xpath = '//*[@id="collapsibleTable0"]') %>%
html_table()
However, I'm getting the following error:
Error in UseMethod("html_table") :
no applicable method for 'html_table' applied to an object of class "xml_missing"
Furthermore, the R object test_data yields a list with 2 empty elements.
Can anyone tell me what I'm doing wrong here? Is the html_table() function the wrong one? Using html_text() simply returns an NA character vector. Any help would be greatly appreciated, thanks very much :).
Your xpath statement doesn't match anything in the HTML that is served, so the html_node function returns a missing node.
Here is a solution using the html tags: "look for a table tag within a center tag".
library(rvest)
test_data <- read_html(test_url)
test_data <- test_data %>% html_nodes("center table") %>% html_table()
Or to retrieve the fully collapsed table use the html tag with class name:
# re-read the page so this works even after test_data was overwritten above
collapsedtable <- read_html(test_url) %>% html_nodes("table.collapsible") %>%
  html_table(fill = TRUE)
This works for me:
library(httr)
library(XML)
r <- httr::GET("https://ballotpedia.org/United_States_Senate_elections_in_Georgia,_2014")
XML::readHTMLTable(rawToChar(r$content))[[2]]
Using simple code to extract the links to my articles (one by one):
library(rvest)
url = ("http://www.time.mk/week/2016/22")
frontpage = read_html(url) %>%
html_nodes(".other_articles") %>%
html_attr("href") %>%
paste0()
print(frontpage)
mark = "http://www dot time dot mk/"
frontpagelinks = paste0(mark, frontpage)
final = list()
final = read_html(frontpagelinks[1]) %>%
html_nodes("h1 a") %>%
html_attr("href")%>%
paste0()
I used
a1onJune = str_extract_all(frontpage, ".*a1on.*")
to extract articles from the website a1on.mk, which worked like a charm, finding only the articles I needed.
After getting some help here on how to make my code more efficient, i.e. extract numerous links at once, via:
linksList <- lapply(frontpagelinks, function(i) {
read_html(i) %>%
html_nodes("h1 a") %>%
html_attr("href") %>%
paste0()
})
which extracts all of the links I need, the same stringr code oddly returns something like this:
"\"standard dot mk/germancite-ermenskiot-genocid/\", \"//plusinfo dot mk/vest/72702/turcija-ne-go-prifakja-zborot-genocid\", \"/a1on dot mk/wordpress/archives/618719\", \"sitel dot mk/na-povidok-nov-sudir-megju-turcija-i-germanija\",
Where as shown in bold I also extract the links to the website I need, but also a bunch of other noise that I definitely don't want there. I tried a variety of regex expressions, however I've not managed to define only those lines of code that contain a1on posts.
Given that the list I am attempting to clean up outputs the links separately, I am a bit baffled by the fact that when I use stringr it (as far as I can tell) randomly groups them into strings of multiple links:
[93] "http://telegraf dot mk /aktuelno/svet/ns-newsarticle-vo-znak-na-protest-turcija-go-povlece-svojot-ambasador-od-germanija.nspx"
[94] "http://tocka dot mk /1/197933/odnosite-pomegju-berlin-i-ankara-pred-totalen-kolaps-germanija-go-prizna-turskiot-genocid-nad-ermencite"
[95] "lokalno dot mk /merkel-vladata-na-germanija-e-podgotvena-da-pomogne-vo-dijalogot-megju-turcija-i-ermenija/"
Any thoughts as to how I can go about this? Perhaps something that is more general, given that I need to do the same type of cleaning for five different portals.
Thank you.
Starting from your code to extract the front-page links:
library(rvest)
url = ("http://www.time.mk/week/2016/22")
frontpage = read_html(url) %>%
html_nodes(".other_articles") %>%
html_attr("href") %>%
paste0()
print(frontpage)
mark = "http://www.time.mk/"
frontpagelinks = paste0(mark, frontpage)
# lapply returns a list of lists, so use unlist to flatten
linksList <- unlist( lapply(frontpagelinks, function(i) {
read_html(i) %>%
html_nodes("h1 a") %>%
html_attr("href") %>%
paste0()}))
# grab the lists of interest
a1onLinks <- linksList[grepl(".*a1on.*", linksList)]
# [1] "http://a1on.mk/wordpress/archives/621196" "http://a1on.mk/wordpress/archives/621038"
# [3] "http://a1on.mk/wordpress/archives/620576" "http://a1on.mk/wordpress/archives/620686"
# [5] "http://a1on.mk/wordpress/archives/620364" "http://a1on.mk/wordpress/archives/620399"
I am trying to parse a number of documents using the excellent xml2 R library. As an example, consider the following XML file:
pg <- read_xml("https://www.theyworkforyou.com/pwdata/scrapedxml/westminhall/westminster2001-01-24a.xml")
This contains a number of <speech> tags which are separated by, though not nested within, a number of <minor-heading> and <major-heading> tags. I would like to process this document into a resulting data.frame with the following structure:
major_heading_id speech_text
heading_id_1 text1
heading_id_1 text2
heading_id_2 text3
heading_id_2 text4
Unfortunately, because the tags are not nested, I cannot figure out how to do this! I have code that successfully recovers the relevant information (see below), but matching the speech tags to their respective major-headings is beyond me.
My intuition is that it would probably be best to split the XML document at the heading tags, and then process each as an individual document, but I couldn't find a function in the xml2 package that would let me do this!
Any help would be great.
Where I have got to so far:
speech_recs <- xml_find_all(pg, "//speech")
speech_text <- trimws(xml_text(speech_recs))
heading_recs <- xml_find_all(pg, "//major-heading")
major_heading_id <- xml_attr(heading_recs, "id")
You can do this as follows:
require(xml2)
require(tidyverse)
doc <- read_xml("https://www.theyworkforyou.com/pwdata/scrapedxml/westminhall/westminster2001-01-24a.xml")
# Get the headings
heading_recs <- xml_find_all(doc, "//major-heading")
# path creates the structure you want
# so the speech nodes that have exactly n headings above them.
path <- sprintf("//speech[count(preceding-sibling::major-heading)=%d]",
seq_along(heading_recs))
# Get the text of the speech nodes
map(path, ~xml_text(xml_find_all(doc, .x))) %>%
# Combine it with the id of the headings
map2_df(xml_attr(heading_recs, "id"),
~tibble(major_heading_id = .y, speech_text = .x))
This results in a tibble with one row per speech and two columns: major_heading_id and speech_text.