How to download a large binary file with RCurl *after* server authentication - R

I originally asked this question about performing this task with the httr package, but I don't think it's possible using httr, so I've re-written my code to use RCurl instead -- and I'm still tripping up on something, probably related to the writefunction, but I really don't understand why.
You should be able to reproduce my work by using the 32-bit version of R, so that you hit memory limits if you read anything into RAM. I need a solution that downloads directly to the hard disk.
To start, this code works -- the zipped file is appropriately saved to disk.
library(RCurl)
filename <- tempfile()
f <- CFILE(filename, "wb")
url <- "http://www2.census.gov/acs2011_5yr/pums/csv_pus.zip"
curlPerform(url = url, writedata = f@ref)
close(f)
# 2.1 GB file successfully written to disk
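As a quick sanity check (not part of the original snippet), you can confirm what actually landed on disk:
file.info(filename)$size / 2^30  # roughly 2.1 (GB)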
Now here's some RCurl code that does not work. As stated in the previous question, reproducing this exactly will require creating an extract on IPUMS.
your.email <- "email@address.com"
your.password <- "password"
extract.path <- "https://usa.ipums.org/usa-action/downloads/extract_files/some_file.csv.gz"
library(RCurl)
values <-
  list(
    "login[email]" = your.email,
    "login[password]" = your.password,
    "login[is_for_login]" = 1
  )
curl <- getCurlHandle()
curlSetOpt(
  cookiejar = 'cookies.txt',
  followlocation = TRUE,
  autoreferer = TRUE,
  ssl.verifypeer = FALSE,
  curl = curl
)
params <-
  list(
    "login[email]" = your.email,
    "login[password]" = your.password,
    "login[is_for_login]" = 1
  )
html <- postForm("https://usa.ipums.org/usa-action/users/validate_login", .params = params, curl = curl)
dl <- getURL("https://usa.ipums.org/usa-action/extract_requests/download", curl = curl)
And now that I'm logged in, try the same commands as above, but with the curl object that keeps the cookies.
filename <- tempfile()
f <- CFILE(filename, mode = "wb")
This line breaks --
curlPerform(url = extract.path, writedata = f@ref, curl = curl)
close(f)
# the error is:
Error in curlPerform(url = extract.path, writedata = f@ref, curl = curl) :
  embedded nul in string: [[binary gibberish here]]
The answer to my previous post referred me to this C-level writefunction answer, but I'm clueless about how to re-create that curl_writer C program (on Windows?)..
dyn.load("curl_writer.so")
writer <- getNativeSymbolInfo("writer", PACKAGE="curl_writer")$address
curlPerform(URL=url, writefunction=writer)
..or why it's even necessary, given that the handful of lines at the top of this question work without anything as exotic as getNativeSymbolInfo. I just don't understand why passing in that extra curl object -- the one that stores the authentication cookies and tells it not to verify SSL -- would cause code that otherwise works to break?

From this link, create a file named curl_writer.c and save it to C:\<folder where you save your R files>:
#include <stdio.h>
/**
* Original code just sent some message to stderr
*/
size_t writer(void *buffer, size_t size, size_t nmemb, void *stream) {
    fwrite(buffer, size, nmemb, (FILE *)stream);
    return size * nmemb;
}
Open a command window, go to the folder where you saved curl_writer.c, and run the R compiler:
c:> cd "C:\<folder where you save your R files>"
c:> R CMD SHLIB -o curl_writer.dll curl_writer.c
Open R and run your script
C:> R
your.email <- "email@address.com"
your.password <- "password"
extract.path <- "https://usa.ipums.org/usa-action/downloads/extract_files/some_file.csv.gz"
library(RCurl)
values <-
  list(
    "login[email]" = your.email,
    "login[password]" = your.password,
    "login[is_for_login]" = 1
  )
curl <- getCurlHandle()
curlSetOpt(
  cookiejar = 'cookies.txt',
  followlocation = TRUE,
  autoreferer = TRUE,
  ssl.verifypeer = FALSE,
  curl = curl
)
params <-
  list(
    "login[email]" = your.email,
    "login[password]" = your.password,
    "login[is_for_login]" = 1
  )
html <- postForm("https://usa.ipums.org/usa-action/users/validate_login", .params = params, curl = curl)
dl <- getURL("https://usa.ipums.org/usa-action/extract_requests/download", curl = curl)
# Load the DLL you created
# "writer" is the name of the function
# "curl_writer" is the name of the dll
dyn.load("curl_writer.dll")
writer <- getNativeSymbolInfo("writer", PACKAGE="curl_writer")$address
# Note that the "URL" parameter is upper case; in your code it is lowercase.
# I'm not sure whether that matters.
# "writer" is the symbol defined above
f <- CFILE(filename <- tempfile(), "wb")
curlPerform(URL = extract.path, writedata = f@ref, writefunction = writer, curl = curl)
close(f)

This is now possible with the httr package. Thanks, Hadley!
https://github.com/hadley/httr/issues/44
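For completeness, here is a minimal sketch of that httr approach, assuming the same IPUMS login flow as the RCurl code above (write_disk() and progress() are the httr functions that make the streaming download possible; the form fields simply mirror the params list from earlier):
library(httr)
# log in; httr keeps the session cookies for subsequent requests
POST("https://usa.ipums.org/usa-action/users/validate_login",
     body = list("login[email]" = your.email,
                 "login[password]" = your.password,
                 "login[is_for_login]" = 1))
# stream the extract straight to disk instead of reading it into RAM
tf <- tempfile(fileext = ".csv.gz")
GET(extract.path, write_disk(tf, overwrite = TRUE), progress())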

Related

R: How to download single file from specific branch of private GitHub repo?

How to download single file from specific branch of GitHub private repo using R?
It can easily be done for the default branch, e.g.:
require(httr)
github_path = "https://api.github.com/repos/{user}/{repo}/contents/{path_to}/{file}"
github_pat = Sys.getenv("GITHUB_PAT")
req <- content(GET(github_path,
                   add_headers(Authorization = paste("token", github_pat))), as = "parsed")
tmp <- tempfile()
r1 <- GET(req$download_url, write_disk(tmp))
...but I can't figure out how to do that for a specific branch.
I tried to include the branch name in github_path but it didn't work (Error in handle_url(handle, url, ...)).
Since it is easy with classic curl, e.g.:
curl -s -O https://{PAT}@raw.githubusercontent.com/{user}/{repo}/{branch}/{path_to}/{file}
...I tried to do it like:
tmp <- tempfile()
curl::curl_download("https://{PAT}@raw.githubusercontent.com/{user}/{repo}/{branch}/{path_to}/{file}", tmp)
But that didn't work either.
What am I missing?
Thanks!
You can use curl in R like this to include the auth header and the path to the desired file:
library(curl)
h <- new_handle(verbose = TRUE)
handle_setheaders(h,
"Authorization" = "token ghp_XXXXXXX"
)
con <- curl("https://raw.githubusercontent.com/username/repo/branch/path/file.R", handle = h)
readLines(con)
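If you want the file saved to disk rather than read into memory, the same handle should also work with curl_download(); a sketch with a placeholder repo path and token:
library(curl)
h <- new_handle()
handle_setheaders(h, "Authorization" = "token ghp_XXXXXXX")
tmp <- tempfile(fileext = ".R")
curl_download("https://raw.githubusercontent.com/username/repo/branch/path/file.R",
              destfile = tmp, handle = h)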

Getting data from ftp file directly into environment or as file r

Trying to get a file from an FTP server with httr and RCurl; neither method works.
This is a real case. The user and password credentials are real.
First, httr:
library(httr)
GET(url = "ftp://77.72.135.237/2993309138_Tigres.xls", authenticate("xxxxxxx", "xxxxxx"),
write_disk("~/Downloads/2993309138_Tigres.xls", overwrite = T))
#> Error: $ operator is invalid for atomic vectors
Second, RCurl:
library(RCurl)
my_data <- getURL(url = "ftp://77.72.135.237/2993309138_Tigres.xls", userpwd = "xxxxxx")
#> Error in curlPerform(curl = curl, .opts = opts, .encoding = .encoding): embedded nul in string: 'ÐÏ\021ࡱ\032á'
Is it a server-side bug or mine? :)
It seems to have something to do with the encoding. Try it like this (you'll have to fill in the authentication):
library(httr)
content(
  GET(url = "ftp://77.72.135.237/2993309138_Tigres.xls",
      authenticate("...", "..."),
      write_disk("~/Downloads/2993309138_Tigres.xls", overwrite = T)
  ),
  type = "text/csv",
  as = "text",
  encoding = "WINDOWS-1251"
)
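Since the .xls is a binary spreadsheet rather than text, another option (not part of the original answer) is to let write_disk() save the raw bytes and then parse the saved file locally, e.g. with the readxl package:
library(httr)
library(readxl)
path <- "~/Downloads/2993309138_Tigres.xls"
GET(url = "ftp://77.72.135.237/2993309138_Tigres.xls",
    authenticate("...", "..."),
    write_disk(path, overwrite = TRUE))
tigres <- read_excel(path)  # readxl is an assumption here, not part of the question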

RCurl - How to retrieve data from sftp?

I tried to retrieve data from SFTP with the code below:
library(RCurl)
protocol <- "sftp"
server <- "xxxx#sftp.xxxx.com"
userpwd <- "xxx:yyy"
tsfrFilename <- "cccccc.tsv"
ouptFilename <- "out.csv"
opts <- list(
  # ssh.public.keyfile = "true", # file name
  ssh.private.keyfile = "xxxxx.ppk",
  keypasswd = "userpwd"
)
# Run #
## Download Data
url <- paste0(protocol, "://", server, tsfrFilename)
data <- getURL(url = url, .opts = opts, userpwd=userpwd)
and I received an error message:
Error in function (type, msg, asError = TRUE) : Authentication failure
What am I doing wrong?
Thanks
With a private key you do not need a password with your username, so your getURL statement will be:
data <- getURL(url = url, .opts = opts, username="username")
I had exactly the same problem and have just spent an hour trying different things out. What worked for me was changing the format of the private key to OpenSSH.
To do this, I used the key generator PuTTYgen. Go to the menu item "Conversions" to import the original private key and export it to the OpenSSH format. I exported the converted key to the same folder my original key was in, with a new filename, and kept the *.ppk extension.
Then I used the following commands:
opts <- list(
  ssh.private.keyfile = "<path to my new OpenSSH Key>.ppk"
)
data <- getURL(url = URL, .opts = opts, username = username, verbose = TRUE)
This seemed to work fine.
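Once getURL() succeeds, data is just the text of cccccc.tsv, so you can parse it in memory or write out the copy that ouptFilename in the question was presumably meant for; a small sketch:
# parse the tab-separated text without touching the disk
tsv <- read.delim(text = data, header = TRUE, sep = "\t", stringsAsFactors = FALSE)
# or keep a local copy
writeLines(data, ouptFilename)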

Request to improve code to download sequence of URLs

In a file I have a table of 23,772 URLs that I need to download; in the code below, that table is represented by dwsites. Due to server restrictions, I am only able to download a block of 300 sites at a time. I have accomplished the task with the code below (an excerpt of the actual code), but I would like to know a better way.
Can you offer any suggestions?
Thank you.
dwsites <- data.frame(sites = c(1:23772), url = rep("url", 23772))
dwsitessub <- dwsites[1:300,] # this is the part that I would like to change
curl = getCurlHandle()
pagesnew = list()
for(u in strpatnew) {pagesnew[[u]] = getURLContent(u, curl = curl)}
lapply(seq_along(strpatternew), function(u) cat(pagesnew[[u]], file = file.path("filepath", paste0(strpatternew[[u]], sep = ""))))
dwsitessub <- dwsites[301:459,]
curl = getCurlHandle()
pagesnew = list()
for(u in strpatnew) {pagesnew[[u]] = getURLContent(u, curl = curl)}
lapply(seq_along(strpatternew), function(u) cat(pagesnew[[u]], file = file.path("filepath", paste0(strpatternew[[u]], sep = ""))))
...
dwsitessub <- 23501:nrow(dwsites)
curl = getCurlHandle()
pagesnew = list()
for(u in strpatnew) {pagesnew[[u]] = getURLContent(u, curl = curl)}
lapply(seq_along(strpatternew), function(u) cat(pagesnew[[u]], file = file.path("filepath", paste0(strpatternew[[u]], sep = ""))))
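One way to avoid repeating that block by hand is to split the URLs into chunks of 300 and loop over the chunks. This is only a sketch: it assumes each URL can be turned into an output file name (basename() stands in for whatever strpatternew held in the real code):
library(RCurl)
block.size <- 300
blocks <- split(as.character(dwsites$url), ceiling(seq_len(nrow(dwsites)) / block.size))
for (b in seq_along(blocks)) {
  curl <- getCurlHandle()  # fresh handle per block
  for (u in blocks[[b]]) {
    page <- getURLContent(u, curl = curl)
    cat(page, file = file.path("filepath", basename(u)))
  }
  Sys.sleep(1)  # pause between blocks so the server limit isn't hit
}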

JSON character size limit using curlPerform or getURL

I am running into what appears to be a character size limit in a JSON string when trying to retrieve data from either curlPerform() or getURL(). Here is non-reproducible code [1], but it should shed some light on the problem.
# Note that .base.url is the basic url for the API, q is a query, user
# is specified, etc.
session = getCurlHandle()
curl.opts <- list(userpwd = paste(user, ":", key, sep = ""),
httpheader = "Content-Type: application/json")
request <- paste(.base.url, q, sep = "")
txt <- getURL(url = request, curl = session, .opts = curl.opts,
write = basicTextGatherer())
or
r = dynCurlReader()
curlPerform(url = request, writefunction = r$update, curl = session,
.opts = curl.opts)
My guess is that the update or value functions in the basicTextGatherer or dynCurlReader text handler objects are having trouble with the large strings. In this example, r$value() will return a truncated string that is approximately 2 MB. The code given above works fine for queries < 2 MB.
Note that I can easily do the following from the command line (or using system() in R), but writing to disk seems like a waste if I am doing the subsequent analysis in R.
curl -v --header "Content-Type: application/json" --user username:register:passwd https://base.url.for.api/getdata/select+*+from+sometable > stream.json
where stream.json is a roughly 14 MB JSON string. I can read the string into R using either
con <- file(paste(.project.path, "data/stream.json", sep = ""), "r")
string <- readLines(con)
or directly into a list with
tmp <- fromJSON(file = paste(.project.path, "data/stream.json", sep = ""))
Any thoughts are very much appreciated.
Ryan
[1] - Sorry for not providing reproducible code, but I'm dealing with a govt firewall.
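One sketch that avoids the in-memory gatherer entirely: stream the response to a temporary file with CFILE (as in the first question on this page) and then parse it with fromJSON. It reuses request and curl.opts from the snippet above and assumes the rjson-style fromJSON(file = ...) call shown earlier:
library(RCurl)
tf <- tempfile(fileext = ".json")
f <- CFILE(tf, mode = "wb")
curlPerform(url = request, writedata = f@ref, .opts = curl.opts)
close(f)
tmp <- fromJSON(file = tf)  # same call pattern as above, just pointed at the temp file
unlink(tf)                  # remove the temporary copy once parsed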
