Download URL links using R

I am new to R and would like to ask for some advice.
I am trying to download multiple URL links (PDF format, not HTML) and save them as PDF files using R.
The links I have are stored as character strings (taken from the HTML source of the website).
I tried the download.file() function, but it takes one specific URL (written into the R script), so each call downloads only a single file. Since I have many URL links, I would like some help automating this.
Thank you.

I believe what you are trying to do is download a list of URLs. You could try an approach like this:
Store all the links in a vector using c(), e.g.:
urls <- c("http://link1", "http://link2", "http://link3")
Iterate through the vector and download each file:
for (url in urls) {
  # save each file under the name at the end of its URL
  download.file(url, destfile = basename(url))
}
If you're using Linux/Mac and https, you may need to specify the method and extra curl options for download.file (note that -k tells curl to skip SSL certificate verification):
download.file(url, destfile = basename(url), method = "curl", extra = "-k")
If you want, you can test my proof of concept here: https://gist.github.com/erickthered/7664ec514b0e820a64c8
Hope it helps!
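If some of the links might be dead or unreachable, a minimal sketch (same placeholder URLs as above) wraps each download in tryCatch() so one failure does not stop the loop:
for (url in urls) {
  # report a failed download and move on to the next URL
  tryCatch(
    download.file(url, destfile = basename(url), mode = "wb"),
    error = function(e) message("Failed: ", url, " (", conditionMessage(e), ")")
  )
}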

URLs:
url <- c('https://cran.r-project.org/doc/manuals/r-release/R-data.pdf',
         'https://cran.r-project.org/doc/manuals/r-release/R-exts.pdf',
         'http://kenbenoit.net/pdfs/text_analysis_in_R.pdf')
Designated names:
names <- c('manual1',
           'manual2',
           'manual3')
Iterate through the vectors and download each file under its corresponding name:
for (i in seq_along(url)) {
  download.file(url[i], destfile = names[i], mode = 'wb')
}
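As a side note, the names above carry no .pdf extension, so the files are saved without one. A small sketch (reusing the url and names vectors) pairs each URL with an extension-bearing filename using Map():
# build destination filenames with the .pdf extension, then download each pair
dest <- paste0(names, '.pdf')
Map(function(u, d) download.file(u, destfile = d, mode = 'wb'), url, dest)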

Related

Download data from HTTPS in R

I have a problem with downloading data from HTTPS in R. I tried using curl, but it doesn't work:
URL <- "https://github.com/Bitakhparsa/Capstone/blob/0850c8f65f74c58e45f6cdb2fc6d966e4c160a78/Plant_1_Generation_Data.csv"
options('download.file.method'='curl')
download.file(URL, destfile = "./data.csv", method="auto")
I downloaded the CSV file with that code, but when I checked the data the format had changed, so it did not download correctly. Could someone please help me?
I think you might actually have the URL wrong. You want:
https://raw.githubusercontent.com/Bitakhparsa/Capstone/0850c8f65f74c58e45f6cdb2fc6d966e4c160a78/Plant_1_Generation_Data.csv
Then you can download the file directly with download.file() rather than creating a variable with the URL (the "libcurl" method is built into R, so library(RCurl) is not needed for this):
download.file("https://raw.githubusercontent.com/Bitakhparsa/Capstone/0850c8f65f74c58e45f6cdb2fc6d966e4c160a78/Plant_1_Generation_Data.csv", destfile = "./data.csv", method = "libcurl")
You can also load the file into R directly from the site, since read.csv() accepts a URL:
URL <- "https://raw.githubusercontent.com/Bitakhparsa/Capstone/0850c8f65f74c58e45f6cdb2fc6d966e4c160a78/Plant_1_Generation_Data.csv"
out <- read.csv(URL)
You can use the 'raw.githubusercontent.com' link: in the browser, when you go to "https://github.com/Bitakhparsa/Capstone/blob/0850c8f65f74c58e45f6cdb2fc6d966e4c160a78/Plant_1_Generation_Data.csv" you can click on "View raw" (it's above "Sorry about that, but we can’t show files that are this big right now.") and that takes you to the actual data. You also have some minor typos.
This worked as expected for me:
url <- "https://raw.githubusercontent.com/Bitakhparsa/Capstone/0850c8f65f74c58e45f6cdb2fc6d966e4c160a78/Plant_1_Generation_Data.csv"
download.file(url, destfile = "./data.csv", method="auto")
df <- read.csv("./data.csv")
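If you only have the blob-style link, a small sketch (a plain string substitution, assuming the standard GitHub URL layout) derives the raw URL programmatically:
blob <- "https://github.com/Bitakhparsa/Capstone/blob/0850c8f65f74c58e45f6cdb2fc6d966e4c160a78/Plant_1_Generation_Data.csv"
# swap the host and drop the "/blob" path segment to reach the raw file
raw <- sub("github\\.com/(.*)/blob/", "raw.githubusercontent.com/\\1/", blob)
download.file(raw, destfile = "./data.csv")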

Is there any way to download a CSV file from a "website button click" using R?

download.file(URL, destfile = "../data.csv", method = "curl") needs the exact URL of the CSV, but I need to download the CSV file from a website that only offers a "Download Click" option:
http://apps.who.int/gho/data/view.main.MHSUICIDEASDRREGv?lang=en
You can hit F12 and see the code behind that page (and pretty much any page, except maybe Flash elements). Then do something like this:
getit <- read.csv("http://apps.who.int/gho/athena/data/GHO/MH_12?filter=COUNTRY:-;REGION:*&x-sideaxis=REGION;SEX&x-topaxis=GHO;YEAR&profile=crosstable&format=csv")
head(getit)
The same URL also works with data.table's fread or readr's read_csv:
library(data.table)
getit <- fread("http://apps.who.int/gho/athena/data/GHO/MH_12?filter=COUNTRY:-;REGION:*&x-sideaxis=REGION;SEX&x-topaxis=GHO;YEAR&profile=crosstable&format=csv")
library(readr)
getit <- read_csv("http://apps.who.int/gho/athena/data/GHO/MH_12?filter=COUNTRY:-;REGION:*&x-sideaxis=REGION;SEX&x-topaxis=GHO;YEAR&profile=crosstable&format=csv")
You can find lots of other ideas at the link below.
https://www.datacamp.com/community/tutorials/r-data-import-tutorial
Copy and paste this:
download.file(url = "http://apps.who.int/gho/athena/data/GHO/MH_12?filter=COUNTRY:-;REGION:*&x-sideaxis=REGION;SEX&x-topaxis=GHO;YEAR&profile=crosstable&format=csv", destfile = "H:/test.csv")
I have used "H:/test.csv" as the destination file path; you can change it to save the file wherever you want.
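If you'd rather not hard-code a Windows path like H:/test.csv, a portable sketch (same WHO URL) downloads to a temporary file and reads it back:
tmp <- tempfile(fileext = ".csv")
download.file(url = "http://apps.who.int/gho/athena/data/GHO/MH_12?filter=COUNTRY:-;REGION:*&x-sideaxis=REGION;SEX&x-topaxis=GHO;YEAR&profile=crosstable&format=csv", destfile = tmp)
who_data <- read.csv(tmp)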

Using download.file to download a zip file from URL in R

I am trying to use download.file to download a zip file from a URL and then push the data from each of the files inside it into a MySQL database. I am getting stuck at the first step, where I use download.file to fetch the zip file.
I have tried the following, to no avail:
myURL = paste("https://onedrive.live.com/download.aspx?cid=D700ACC18C0F37E6&resid=D700ACC18C0F37E6%2118670&ithint=%2Ezip",sep = "")
download.file(url=myURL,destfile=zippedFile, method='auto')
myURL = paste("https://onedrive.live.com/download.aspx?cid=D700ACC18C0F37E6&resid=D700ACC18C0F37E6%2118670&ithint=%2Ezip",sep = "")
download.file(url=myURL,destfile=zippedFile, method='curl')
Please suggest where I am going wrong. Also, some pointers on how to take one file at a time from the zip archive and push it into a DB would be most helpful.
What finally worked in AWS was the downloader package:
https://cran.r-project.org/web/packages/downloader/downloader.pdf
It has features to support https. Hope it helps someone.
You can try this:
myURL = paste("https://onedrive.live.com/download.aspx?cid=D700ACC18C0F37E6&resid=D700ACC18C0F37E6%2118670&ithint=%2Ezip",sep = "")
dir = "zippedFile.zip"
download.file(myURL, dir, mode="wb")
From the download.file documentation:
destfile: a character string with the name where the downloaded file is saved. Tilde-expansion is performed.
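For the second part of the question, a minimal sketch (assuming the download above succeeded and zippedFile.zip is a valid archive) uses base R's unzip() to list the contents and extract one file at a time; the MySQL step is left out:
# list the files inside the archive without extracting anything
contents <- unzip(dir, list = TRUE)
contents$Name

# extract one file at a time into a working directory
for (f in contents$Name) {
  unzip(dir, files = f, exdir = "unzipped")
  # read the extracted file here and push it to the database
}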

Download a file keeping original filename when final link is hidden

I need to download a file, save it in a folder while keeping the original filename from the website.
url <- "http://www.seg-social.es/prdi00/idcplg?IdcService=GET_FILE&dID=187112&dDocName=197533&allowInterrupt=1"
From a web browser, if you click on that link, you download an Excel file with this filename:
AfiliadosMuni-02-2015.xlsx
I know I can easily download it with the command download.file in R like this:
download.file(url, "test.xlsx", method = "curl")
But what I really need for my script is to download it with the original filename intact. I also know I can do this with curl from the console, like this:
curl -O -J $"http://www.seg-social.es/prdi00/idcplg?IdcService=GET_FILE&dID=187112&dDocName=197533&allowInterrupt=1"
But, again, I need this within an R script. Is there a way similar to the one above but in R? I have looked into the RCurl package but I couldn't find a solution.
You could always do something like:
library(httr)
library(stringr)
# an alternative to download.file: fetch the file and write it to disk
fil <- GET("http://www.seg-social.es/prdi00/idcplg?IdcService=GET_FILE&dID=187112&dDocName=197533&allowInterrupt=1",
           write_disk("tmp.fil"))
# get the filename the site suggests from the Content-Disposition header
fname <- str_match(headers(fil)$`content-disposition`, "\"(.*)\"")[2]
# rename the temporary file to the suggested name
file.rename("tmp.fil", fname)
I think basename() would be the simplest option: https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/basename
e.g.
download.file(url, basename(url))
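Note, though, that basename() only helps when the URL itself ends in the filename; for the URL in this question it returns the query string rather than AfiliadosMuni-02-2015.xlsx, so the Content-Disposition approach above is the safer bet here (example.com below is a hypothetical URL):
basename("http://example.com/files/report.xlsx")  # "report.xlsx" - works
basename(url)  # "idcplg?IdcService=GET_FILE&dID=187112..." - not a usable filename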

How to download an .xlsx file from a dropbox (https:) location

I'm trying to adopt the Reproducible Research paradigm while meeting people who prefer looking at Excel rather than text data files halfway, by using Dropbox to host Excel files, which I can then access using the xlsx package.
Rather like downloading and unpacking a zipped file, I assumed something like the following would work:
# Prerequisites
require("xlsx")
require("ggplot2")
require("repmis")
require("devtools")
require("RCurl")
# Downloading data from Dropbox location
link <- paste0(
"https://www.dropbox.com/s/",
"{THE SHA-1 KEY}",
"{THE FILE NAME}"
)
url <- getURL(link)
temp <- tempfile()
download.file(url, temp)
However, I get: Error in download.file(url, temp) : unsupported URL scheme
Is there an alternative to download.file that will accept this URL scheme?
Thanks,
Jon
You have the wrong URL: the one you are using just goes to the landing page. The actual download URL is different; I managed to get it sort of working using the below.
You actually don't need RCurl or the getURL() function, and you were leaving out some relatively important /'s in your previous formulation.
Try the following:
link <- paste("https://dl.dropboxusercontent.com/s",
              "{THE SHA-1 KEY}",
              "{THE FILE NAME}",
              sep = "/")
download.file(url = link, destfile = "your.destination.xlsx")
closeAllConnections()
UPDATE:
I just realised there is a source_XlsxData function in the repmis package, which in theory should do the job perfectly.
Also, the function below works some of the time but not others, and appears to get stuck at the GET line, so a better solution would be very welcome.
I decided to take a step back and figure out how to download a raw file from a secure (https) URL. I adapted (butchered?) the source_url function from devtools to produce the following:
download_file_url <- function(url, outfile) {
  require(httr)  # only httr is actually used below
  stopifnot(is.character(url), length(url) == 1)
  filetag <- file(outfile, "wb")
  request <- GET(url)
  stop_for_status(request)
  writeBin(content(request, type = "raw"), filetag)
  close(filetag)
}
This seems to work for producing local versions of binary files, Excel included. Nicer, neater, smarter improvements gratefully received.
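A sketch of how the function might be used (the placeholder link pieces are the same as above; the xlsx package already loaded in this question is one way to read the result):
link <- paste("https://dl.dropboxusercontent.com/s",
              "{THE SHA-1 KEY}",
              "{THE FILE NAME}",
              sep = "/")
download_file_url(link, outfile = "local_copy.xlsx")
dat <- xlsx::read.xlsx("local_copy.xlsx", sheetIndex = 1)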
