Scraping a password-protected forum in R

I have a problem with logging in from my script. Despite all the good answers I found on Stack Overflow, none of the solutions worked for me.
I am scraping a web forum for my PhD research; its URL is http://forum.axishistory.com.
The page I want to scrape is the memberlist, a page that lists the links to all member profiles. The memberlist is only accessible when logged in; if you try to access it without logging in, it shows you the login form.
The URL of the memberlist is this: http://forum.axishistory.com/memberlist.php.
I tried the httr package:
library(httr)
members <- GET("http://forum.axishistory.com/memberlist.php", authenticate("username", "password"))
members_html <- html(members)
The output is the login form.
Then I tried RCurl:
library(RCurl)
library(XML) # htmlParse() comes from XML
members_html <- htmlParse(getURL("http://forum.axishistory.com/memberlist.php", userpwd = "username:password"))
members_html
The output is the login form, again.
Then I tried the POST() approach with a list of form values from this topic, Scrape password-protected website in R:
handle <- handle("http://forum.axishistory.com/")
path <- "ucp.php?mode=login"
login <- list(
  amember_login = "username",
  amember_pass = "password",
  amember_redirect_url = "http://forum.axishistory.com/memberlist.php"
)
response <- POST(handle = handle, path = path, body = login)
And again, the output is the login form.
The next thing I am working on is RSelenium, but after all these attempts I am trying to figure out whether I am missing something (probably something completely obvious).
I have looked at other relevant posts on here, but couldn't figure out how to apply the code to my case:
How to use R to download a zipped file from a SSL page that requires cookies
Scrape password-protected website in R
https://stackoverflow.com/questions/27485311/scrape-password-protected-https-website-in-r
Web scraping password protected website using R

Thanks to Simon I found the answer here: Using rvest or httr to log in to non-standard forms on a webpage
library(rvest)
url <-"http://forum.axishistory.com/memberlist.php"
pgsession <-html_session(url)
pgform <-html_form(pgsession)[[2]]
filled_form <- set_values(pgform,
"username" = "username",
"password" = "password")
submit_form(pgsession,filled_form)
memberlist <- jump_to(pgsession, "http://forum.axishistory.com/memberlist.php")
page <- html(memberlist)
usernames <- html_nodes(x = page, css = "#memberlist .username")
data_usernames <- html_text(usernames, trim = TRUE)
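As a usage follow-up (not part of the original answer), the same parsed page also holds the member profile links, and the logged-in session can be reused for later pages of the memberlist. The .username selector comes from the answer above, while the start=25 offset is an assumption about phpBB pagination; read_html() is simply the newer name for rvest's html().
library(rvest)
# Profile links sit in the same nodes as the usernames (selector taken from the answer above).
profile_urls <- html_attr(html_nodes(page, "#memberlist .username"), "href")
# phpBB memberlists are usually paginated with a "start" offset; the value 25 is an assumption.
next_page <- jump_to(pgsession, "http://forum.axishistory.com/memberlist.php?start=25")
next_usernames <- html_text(html_nodes(read_html(next_page), "#memberlist .username"), trim = TRUE)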

Related

Unable to download a file with rvest/httr after submitting search form

This seems like a simple problem, but I've been struggling with it for a few days. What follows is a minimum working example rather than the actual problem.
This question seemed similar, but I was unable to use its answer to solve my problem.
In a browser, I go to this URL, click on [Search] (no need to make any choices from the lists), and then on [Download Results] (choosing, for example, the Xlsx option). The file then downloads.
To automate this in R I have tried:
library(rvest)
url1 <- "https:/secure.gamblingcommission.gov.uk/PublicRegister/Search"
sesh1 <- html_session(url1)
form1 <-html_form(sesh1)[[1]]
subform <- submit_form(sesh1, form1)
Using Chrome Developer tools I find the url being used to initiate the download, so I try:
url2 <- "https:/secure.gamblingcommission.gov.uk/PublicRegister/Search/Download"
res <- GET(url = url2, query = list(format = "xlsx"))
However this does not download the file:
> res$content
raw(0)
I also tried
download.file(url = paste0(url2, "?format=xlsx") , destfile = "down.xlsx", mode = "wb")
But this downloads nothing:
> Content type '' length 0 bytes
> downloaded 0 bytes
Note that, in the browser, pasting url2 and adding the format query does initiate the download (after doing the search from url1).
I thought I should somehow be using the session info from the initial code block to do the download, but so far I can't see how.
Thanks in advance for any help!
You are almost there and your intuition is correct about using the session info.
You just need to use rvest::jump_to to navigate to the second url and then write it to disk:
library(rvest)
url1 <- "https:/secure.gamblingcommission.gov.uk/PublicRegister/Search"
sesh1 <- html_session(url1)
form1 <-html_form(sesh1)[[1]]
subform <- submit_form(sesh1, form1)
url2 <- "https://secure.gamblingcommission.gov.uk/PublicRegister/Search/Download"
#### The above is your original code - below is the additional code you need:
download <- jump_to(subform, paste0(url2, "?format=xlsx"))
writeBin(download$response$content, "down.xlsx")
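To sanity-check the result from R, the saved workbook can be read back in; readxl below is an assumption, any xlsx reader would do.
library(readxl) # assumed to be installed; any xlsx reader works
download$response$status_code # should be 200 if the session request succeeded
results <- read_excel("down.xlsx")
head(results)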

Login to a website (billboard.com) for scraping purposes using R, when the login is done through a pop-up window

I want to scrape some "pro" Billboard data, access to which requires a premium Billboard account. I already have one, but obviously I need to log in to the Billboard website through R in order to be able to scrape this data.
I have no issues with such a thing with regular login pages (for instance, stackoverflow):
##### Stackoverflow login #####
# Packages installation and loading ---------------------------------------
if (!require("pacman")) install.packages("pacman")
pacman::p_load(rvest,dplyr,tidyr)
packages<-c("rvest","dplyr","tidyr")
lapply(packages, require, character.only = TRUE)
#Address of the login webpage
login_test<-"https://stackoverflow.com/users/login?ssrc=head&returnurl=https%3a%2f%2fstackoverflow.com%2f"
#create a web session with the desired login address
pgsession_test <- html_session(login_test)
pgform_test <- html_form(pgsession_test)[[2]] # in this case the form to submit is the 2nd one
filled_form_test <- set_values(pgform_test, email = "myemail", password = "mypassword")
filled_form_test$fields[[5]]$type <- "button"
submit_form(pgsession_test, filled_form_test)
The issue with the Billboard website is that the login is done through a clickable "sign up" button in the header of the page, which triggers a pop-up window where you type in an email address and password to log in.
So far, I've tried to guess where the login form is in the HTML output of the Billboard page, as it is not obvious, but I don't think it actually appears there at all, and I suspect scraping the HTML of pop-up windows requires a specific process.
Here is what I've done so far:
##### Billboard scraping #####
# Packages installation and loading ---------------------------------------
if (!require("pacman")) install.packages("pacman")
pacman::p_load(rvest,dplyr,tidyr)
packages<-c("rvest","dplyr","tidyr")
lapply(packages, require, character.only = TRUE)
# Session setup (required to scrape restricted access web pages) -----------
login<-"https://www.billboard.com/myaccount"
#create a web session with the desired login address
pgsession <- html_session(login)
pgform <- html_form(pgsession)[[2]]
pgform$fields[[2]]$value <- 'myemailaddress'
pgform$fields[[1]]$value <- 'mypassword'
filled_form <- set_values(pgform)
fake_submit_button <- list(name = NULL,
                           type = "submit",
                           value = NULL,
                           checked = NULL,
                           disabled = NULL,
                           readonly = NULL,
                           required = FALSE)
attr(fake_submit_button, "class") <- "input"
filled_form[["fields"]][["submit"]] <- fake_submit_button
# filled_form$fields[[3]]$type <- "button"
submit_form(pgsession, filled_form)
The returned error is:
Warning message:
In request_GET(session, url = url, query = request$values, ...) :
Not Found (HTTP 404).
I understand this as simply the result of not using the right login form, which I also suspect is not available in my HTML output (given by pgform <- html_form(pgsession)[[2]] in the code above).
Note that I've also tried with pgform<-html_form(pgsession)[[1]].
Thank you in advance for your help.
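One avenue worth sketching here (not from the original thread): when the login form only exists inside a JavaScript pop-up, it usually never shows up in html_form() output at all, and a common workaround is to find the request the pop-up sends in the browser's Network tab and replay it with httr. The endpoint and field names below are hypothetical placeholders, not Billboard's actual login API.
library(httr)
# Hypothetical endpoint and field names -- replace them with whatever the browser's
# Network tab shows when the pop-up form is submitted.
login_url <- "https://www.billboard.com/some/login/endpoint" # placeholder, not the real endpoint
resp <- POST(login_url,
             body = list(email = "myemailaddress", password = "mypassword"),
             encode = "form")
status_code(resp) # 200 plus a session cookie would suggest the credentials were accepted
# httr keeps cookies per host in its handle pool, so later requests reuse the login.
account_page <- GET("https://www.billboard.com/myaccount")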

Scraping website that requires login (password-protected)

I am scraping a newspaper website (http://politiken.dk) and I was able to get all the titles of the news articles I need.
But I can't get the headlines + full text.
When I try without logging in, the code only gets the first headline of the day I am scraping (not even the one that is first in my RData list).
I believe I need to log in to get them, right?
So I got a username and a password, but I cannot make any code work.
And I need to get the headlines from the articles in my RData, in the urls column, so the specific URLs for all the articles I need are already in the code below.
I saw this code for creating a login on a website, but I cannot apply it to my case:
library(httr)
library(XML)
handle <- handle("http://subscribers.footballguys.com") # I DONT KNOW WHAT TO PUT HERE
path <- "amember/login.php" ##I DONT KNOW WHAT TO PUT HERE
# fields found in the login form.
login <- list(
  amember_login = "username",
  amember_pass = "password",
  amember_redirect_url = "http://subscribers.footballguys.com/myfbg/myviewprojections.php?projector=2"
)
response <- POST(handle = handle, path = path, body = login)
This is my code to get the headlines:
library(rvest) # read_html(), html_nodes() and html_text() come from rvest
headlines <- rep("", nrow(politiken.unique))
for (i in 1:nrow(politiken.unique)) {
  try({
    text <- read_html(as.character(politiken.unique$urls[i])) %>%
      html_nodes(".summary__p") %>%
      html_text(trim = TRUE)
    headlines[i] <- paste(text, collapse = " ")
  })
}
I tried this suggestion: Scrape password-protected website in R
But it did not work, or I don't know how to do it.
Thanks in advance!
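A sketch of how the rvest session approach used elsewhere on this page could be combined with the loop above; the login URL, the form index, and the field names are assumptions and need to be checked against politiken.dk's actual login form.
library(rvest)
# Assumed login page and field names -- inspect the real form first.
login_url <- "https://politiken.dk/bruger/login/" # placeholder
pgsession <- html_session(login_url)
pgform <- html_form(pgsession)[[1]] # the form index is an assumption
filled <- set_values(pgform, username = "myusername", password = "mypassword")
submit_form(pgsession, filled)
# Reuse the logged-in session for every article URL instead of calling read_html() directly.
headlines <- rep("", nrow(politiken.unique))
for (i in 1:nrow(politiken.unique)) {
  try({
    art <- jump_to(pgsession, as.character(politiken.unique$urls[i]))
    text <- html_text(html_nodes(read_html(art), ".summary__p"), trim = TRUE)
    headlines[i] <- paste(text, collapse = " ")
  })
}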

Download CSV from a password protected website

If you go to the website https://www.myfxbook.com/members/iseasa_public1/rush/2531687, then click the Export dropdown and choose CSV, you will be taken to https://www.myfxbook.com/statements/2531687/statement.csv and the download (from the browser) will proceed automatically. The thing is, you need to be logged in to https://www.myfxbook.com in order to receive the information; otherwise, the downloaded file will contain the text "Please login to Myfxbook.com to use this feature".
I tried using read.csv to get the CSV file in R, but only got that "Please login" message. I believe R has to simulate an HTML session (whatever that is, I am not sure about this) so that access will be granted. Then I tried some scraping tools to log in first, but to no avail.
library(rvest)
login <- "https://www.myfxbook.com"
pgsession <- html_session(login)
pgform <- html_form(pgsession)[[1]]
filled_form <- set_values(pgform, loginEmail = "*****", loginPassword = "*****") # loginEmail and loginPassword are the names of the html elements
submit_form(pgsession, filled_form)
url <- "https://www.myfxbook.com/statements/2531687/statement.csv"
page <- jump_to(pgsession, url) # page will contain 48 bytes of data (in the 'content' element), which is the size of that warning message, though I could not access this content.
From the attempt above, I can see that page has an element called cookies, which in turn contains JSESSIONID. From my research, it seems this JSESSIONID is what "proves" I am logged in to that website. Nonetheless, downloading the CSV does not work.
Then I tried:
library(RCurl)
h <- getCurlHandle(cookiefile = "")
ans <- getForm("https://www.myfxbook.com", loginEmail = "*****", loginPassword = "*****", curl = h)
data <- getURL("https://www.myfxbook.com/statements/2531687/statement.csv", curl = h)
data <- getURLContent("https://www.myfxbook.com/statements/2531687/statement.csv", curl = h)
It seems these libraries were built to scrape html pages and do not deal with files in other formats.
I would pretty much appreciate any help as I've been trying to make this work for quite some time now.
Thanks.
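For reference, the pattern from the gamblingcommission answer earlier on this page (writing the session response's content to disk with writeBin()) is the natural next step, assuming the form submission above actually authenticates.
library(rvest)
# As in the gamblingcommission answer, keep the session returned by submit_form()
# and jump from that object rather than from the pre-login session.
logged_in <- submit_form(pgsession, filled_form)
csv_page <- jump_to(logged_in, "https://www.myfxbook.com/statements/2531687/statement.csv")
writeBin(csv_page$response$content, "statement.csv")
statement <- read.csv("statement.csv")
# If statement.csv still only holds the "Please login" notice, the login itself failed.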

Automating the login to the UK Data Service website in R with RCurl or httr

I am in the process of writing a collection of freely-downloadable R scripts for http://asdfree.com/ to help people analyze the complex sample survey data hosted by the UK data service. In addition to providing lots of statistics tutorials for these data sets, I also want to automate the download and importation of this survey data. In order to do that, I need to figure out how to programmatically log into this UK data service website.
I have tried lots of different configurations of RCurl and httr to log in, but I'm making a mistake somewhere and I'm stuck. I have tried inspecting the elements as outlined in this post, but the websites jump around too fast in the browser for me to understand what's going on.
This website does require a login and password, but I believe I'm making a mistake before I even get to the login page.
Here's how the website works:
The starting page should be: https://www.esds.ac.uk/secure/UKDSRegister_start.asp
This page will automatically re-direct your web browser to a long URL that starts with: https://wayf.ukfederation.org.uk/DS002/uk.ds?[blahblahblah]
(1) For some reason, the SSL certificate does not work on this website. Here's the SO question I posted regarding this. The workaround I've used is simply ignoring the SSL:
library(httr)
set_config( config( ssl.verifypeer = 0L ) )
and then my first command on the starting website is:
z <- GET( "https://www.esds.ac.uk/secure/UKDSRegister_start.asp" )
this gives me back a z$url that looks a lot like the https://wayf.ukfederation.org.uk/DS002/uk.ds?[blahblahblah] page that my browser also re-directs to.
In the browser, then, you're supposed to type in "uk data archive" and click the continue button. When I do that, it re-directs me to the web page https://shib.data-archive.ac.uk/idp/Authn/UserPassword
I think this is where I'm stuck, because I cannot figure out how to get cURL to follow the location and land on this website. Note: no username/password has been entered yet.
When I use the httr GET command from the wayf.ukfederation.org.uk page like this:
y <- GET( z$url , query = list( combobox = "https://shib.data-archive.ac.uk/shibboleth-idp" ) )
the y$url string looks a lot like z$url (except it's got a combobox= on the end). Is there any way to get through to this uk data archive authentication page with RCurl or httr?
I can't tell if I'm just overlooking something or if I absolutely must use the SSL certificate described in my previous SO post or what?
(2) At the point I do make it through to that page, I believe the remainder of the code would just be:
values <- list(j_username = "your.username",
               j_password = "your.password")
POST("https://shib.data-archive.ac.uk/idp/Authn/UserPassword", body = values)
But I guess that page will have to wait...
The relevant data variables returned by the form are action and origin, not combobox. Give action the value "selection" and give origin the value from the relevant entry in the combobox:
y <- GET( z$url, query = list( action="selection", origin = "https://shib.data-archive.ac.uk/shibboleth-idp") )
> y$url
[1] "https://shib.data-archive.ac.uk:443/idp/Authn/UserPassword"
Edit
It looks as though the handle pool isn't keeping your session alive correctly, so you need to pass the handles directly rather than automatically. Also, for the POST command you need to set multipart=FALSE, as that is the default for HTML forms; the R command has a different default because it is mainly designed for uploading files. So:
y <- GET( handle=z$handle, query = list( action="selection", origin = "https://shib.data-archive.ac.uk/shibboleth-idp") )
POST(body=values,multipart=FALSE,handle=y$handle)
Response [https://www.esds.ac.uk/]
Status: 200
Content-type: text/html
...snipped...
<title>
Introduction to ESDS
</title>
<meta name="description" content="Introduction to the ESDS, home page" />
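Putting the pieces of this answer together, the working sequence reads roughly like this (username and password are placeholders):
library(httr)
set_config(config(ssl.verifypeer = 0L)) # workaround for the broken SSL certificate
z <- GET("https://www.esds.ac.uk/secure/UKDSRegister_start.asp")
# Reuse z's handle so the same session follows the redirect chain.
y <- GET(handle = z$handle,
         query = list(action = "selection",
                      origin = "https://shib.data-archive.ac.uk/shibboleth-idp"))
values <- list(j_username = "your.username", j_password = "your.password")
# multipart = FALSE matches the httr version current at the time; newer httr uses encode = "form".
POST(body = values, multipart = FALSE, handle = y$handle)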
I think one way to address the "enter your organization" page goes like this:
library(tidyverse)
library(rvest)
library(stringr)
org <- "your_organization"
user <- "your_username"
password <- "your_password"
signin <- "http://esds.ac.uk/newRegistration/newLogin.asp"
handle_reset(signin)
# get to org page and enter org
p0 <- html_session(signin) %>%
follow_link("Login")
org_link <- html_nodes(p0, "option") %>%
str_subset(org) %>%
str_match('(?<=\\")[^"]*') %>%
as.character()
f0 <- html_form(p0) %>%
first() %>%
set_values(origin = org_link)
fake_submit_button <- list(name = "submit-btn",
                           type = "submit",
                           value = "Continue",
                           checked = NULL,
                           disabled = NULL,
                           readonly = NULL,
                           required = FALSE)
attr(fake_submit_button, "class") <- "btn-enabled"
f0[["fields"]][["submit"]] <- fake_submit_button
c0 <- cookies(p0)$value
names(c0) <- cookies(p0)$name
p1 <- submit_form(session = p0, form = f0, config = set_cookies(.cookies = c0))
Unfortunately, that doesn't solve the whole problem: part (2) is harder than it looks. I've posted more of what I think is a solution here: R: use rvest (or httr) to log in to a site requiring cookies. Hopefully someone will help us get the rest of the way.
