I'm hosting my first Shiny app on www.shinyapps.io. My R script uses a GLM that I created locally and stored as a .RDS file.
How can I read this file into my application directly from a free file host such as Dropbox or Google Drive (or another, better alternative)? I tried:
test <- readRDS(gzcon(url("https://www.dropbox.com/s/p3bk57sqvlra1ze/strModel.RDS?dl=0")))
However, I get the error:
Error in readRDS(gzcon(url("https://www.dropbox.com/s/p3bk57sqvlra1ze/strModel.RDS?dl=0"))) :
unknown input format
I assume this is because the URL doesn't lead directly to the file but rather to a Dropbox landing page?
That being said, I can't seem to find any free file-hosting sites that serve a direct link to the file.
As always, I'm sure the solution is very obvious; any help is appreciated.
I figured it out. I hosted the file in a GitHub repository; from there I was able to copy the link to the raw file and place that link in the readRDS(gzcon(url())) wrapper.
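For anyone following along, a minimal sketch of that approach (the repository path below is a placeholder, not the actual repo):
# hypothetical raw-file URL from a GitHub repository
model_url <- "https://raw.githubusercontent.com/someUser/someRepo/main/strModel.RDS"
# gzcon() is needed because saveRDS() gzip-compresses by default
strModel <- readRDS(gzcon(url(model_url)))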
Reading remote files directly with readRDS() can be unreliable. You might want to try this wrapper, which saves the data set to a temporary location before reading it locally:
readRDS_remote <- function(file, quiet = TRUE) {
  if (grepl("^http", file, ignore.case = TRUE)) {
    # temp location
    file_local <- file.path(tempdir(), basename(file))
    # download the data set
    download.file(file, file_local, quiet = quiet, mode = "wb")
    file <- file_local
  }
  readRDS(file)
}
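Usage is then the same for local paths and remote URLs, e.g. (placeholder URL):
model <- readRDS_remote("https://raw.githubusercontent.com/someUser/someRepo/main/strModel.RDS")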
I wish to read into my environment a large CSV (~8 GB), but I am having issues.
My data is a publicly available dataset:
# CREATE A TEMP FILE TO STORE THE DOWNLOADED DATA
temp <- tempfile()
# DOWNLOAD THE FILE FROM THE CMS
download.file("https://download.cms.gov/nppes/NPPES_Data_Dissemination_February_2022.zip",
destfile = temp)
This is where I'm running into difficulty: I am unfamiliar with Linux working directories and where temp folders are created.
When I use list.dirs() or list.files() I don't see any reference to this temp file.
I am working in an R project and my working directory is as follows:
getwd()
[1] "/home/myName/myProjectName"
I'm able to read in the first part of the file, but my system crashes after about 4 GB.
# UNZIP THE NPI FILE
npi <- unz(temp, "npidata_pfile_20050523-20220213.csv")
I then came across this post, which has a function for decompressing large zip files using system2 and the unzip utility. However, due to my limited R knowledge and Linux experience, I couldn't get the function to point to the downloaded file in the temp folder.
Checking the path for temp above, I get the following:
temp
[1] "/tmp/Rtmpl6SHIJ/file7e5e6c1fc693"
Using the system2 function from the link above I tried the following:
x <- decompress_file(directory = temp,
                     file = "NPPES_Data_Dissemination_February_2022.zip")
But I get the following error about setting the working directory:
Any pointers on how I can get this file unzipped, given its size, and read into memory would be much appreciated.
It might be a file-permission issue. To get around it, work in a directory you're already in, or one you know you have access to.
# DOWNLOAD THE FILE
# to a directory you can access, and name the file. No need to overcomplicate this.
download.file("https://download.cms.gov/nppes/NPPES_Data_Dissemination_February_2022.zip",
              destfile = "/home/myName/myProjectname/npi.zip")
# use the decompress function if you need to, though unzip might work
x <- decompress_file(directory = "/home/myName/myProjectname/",
                     file = "npi.zip")
# remove .zip file if you need the space back
file.remove("/home/myName/myProjectname/npi.zip")
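If you don't have decompress_file, base R's unzip() may also be enough here, run before removing the archive (a sketch; the exdir path is an assumption):
# extract into the project directory with base R
unzip("/home/myName/myProjectname/npi.zip", exdir = "/home/myName/myProjectname")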
temp is the path to the file, not just the directory. By default, tempfile() does not add a file extension; you can add one with tempfile(fileext = ".zip").
Consequently, decompress_file cannot set the working directory to a file. Try this:
x <- decompress_file(directory = dirname(temp), file = basename(temp))
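For reference, here is a sketch of the kind of decompress_file helper the linked post describes (the exact arguments and unzip flags are assumptions):
decompress_file <- function(directory, file) {
  # remember the current working directory, then move to the zip's directory
  wd <- getwd()
  on.exit(setwd(wd))
  setwd(directory)
  # call the system unzip utility; -o overwrites any existing files
  system2("unzip", args = c("-o", file), stdout = TRUE)
}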
I have been trying for three days now to download a file from an FTP server with R, without result. I have really tried everything and read all the related questions, but still cannot manage.
The URL is:
u <- "ftp://user:password@109.2.160.55/AGLO/2020/10/AGLO_00001_03-0_GDBX_1000077_202010032206_860101.CSV.gz"
When I copy-paste this link into Firefox I can download the file, but with R I cannot. I tried download.file, GET, writeBin, and getURL. All failed; getURL gives the following error:
Error in curlPerform(curl = curl, .opts = opts, .encoding = .encoding) :
embedded nul in string: '\037‹\b\bøÙx_\0\003AGLO_00001_03-0_GDBX_1000077_202010032206_860101.CSV\0...' [remainder of the binary gzip payload omitted]
There is no proxy problem whatsoever, since I found the URL u by browsing the FTP directory itself.
How can I download this file?
Alternatively, I could eventually work around this using:
browseURL(u,
browser = "C:/Program Files (x86)/Mozilla Firefox/firefox.exe")
The issue with this is that it will open a Firefox browser that will:
Ask me if I am sure I want to go to this site, and then
Ask me what I want to do with the file (less of a problem, since I can set a default to always download, but I still don't want the prompt).
Put simply, I do not want to open a browser and I do not want to be asked anything. There are many files on this server and I want to fetch them all automatically, working in parallel, so having a browser pop up is not great; but if all else fails I can accept it.
I am so desperate that I can give you the user name and password in private.
Apparently, downloading the file to disk using httr solved the problem. You can combine httr::GET() and write_disk() to download files to disk in the following way:
library(httr)
to_download <- "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf"
# Download pdf to disk
GET(to_download, write_disk("dummy.pdf"))
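Applied to the FTP URL u from the question, the same pattern would look like this (overwrite = TRUE is an assumption; write_disk refuses to clobber an existing file without it):
GET(u, write_disk("AGLO_00001_03-0_GDBX_1000077_202010032206_860101.CSV.gz", overwrite = TRUE))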
I am trying to copy files from a network-drive location to a SharePoint library in R. The SharePoint library requires user authentication, and I was wondering how I can copy these files while passing authentication in code. A simple file.copy() does not work. I attempted to use the getURL() function from the RCurl library, but that hasn't worked either. How can I accomplish this task of copying files while passing authentication?
Here are some code snippets that I have tried so far:
library(RCurl)
from <- "filename"
to <- "\\\\sharepoint.company.com\\Directory"   # first attempt with just the SharePoint location
to <- "file://sharepoint.company.com/Directory" # another attempt with a different format
h <- getCurlHandle(header = TRUE, userpwd = "username:password")
getURL(to, verbose = TRUE, curl = h)
status <- file.copy(from, to)
Thank you!
Not the most elegant solution, but if you're looking to save into a single library on SharePoint, you can first map that library as a network drive on your local machine.
Then simply use setwd() to point to whatever drive letter you mapped the library to. You can treat that SharePoint library as if it were any other shared-drive location, reading and writing files from/to it, as in the sketch below.
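For instance (the drive letter and file names here are assumptions about your own mapping):
# after mapping the SharePoint library to, say, S:
setwd("S:/MyLibrary")
write.csv(myData, "report.csv", row.names = FALSE)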
I just use the following function to copy files to SharePoint. The only issue is that the transferred file will remain checked out until it is manually checked in for others to use.
saveToSharePoint <- function(fileName) {
  # paste0() keeps each directory glued to the file name;
  # paste(..., sep = " ") would insert a space between them and break the command
  cmd <- paste("curl --max-time 7200 --connect-timeout 7200 --ntlm --user",
               "username:password",
               "--upload-file",
               paste0("/home/username/FolderNameWhereTheFileToTransferExists/", fileName),
               paste0("teamsites.OrganizationName.com/sites/PageTitle/Documents/UserDocumentation/FolderNameWhereTheFileNeedsToBeCopied/", fileName),
               sep = " ")
  system(cmd)
}
saveToSharePoint("SomeFileName.Ext")
If you have SharePoint Online, you can navigate to that library and click the "Sync to Computer" button (it has an icon with arrows and a computer). You then have the library as a OneDrive folder and can write directly to it.
I'm trying to download and extract a zip file using R. Whenever I do so I get the error message
Error in unzip(temp, list = TRUE) : 'exdir' does not exist
I'm using code based on the Stack Overflow question Using R to download zipped data file, extract, and import data
To give a simplified example:
# Create a temporary file
temp <- tempfile()
# Download ZIP archive into temporary file
download.file("http://cran.r-project.org/bin/windows/contrib/r-release/ggmap_2.2.zip",temp)
# ZIP is downloaded successfully:
# trying URL 'http://cran.r-project.org/bin/windows/contrib/r-release/ggmap_2.2.zip'
# Content type 'application/zip' length 4533970 bytes (4.3 Mb)
# opened URL
# downloaded 4.3 Mb
# Try to do something with the downloaded file
unzip(temp, list = TRUE)
# Error in unzip(temp, list = TRUE) : 'exdir' does not exist
What I've tried so far:
Accessing the temp file manually and unzipping it with 7-Zip: I can do this no problem; the file is there and accessible.
Changing the temp directory to c:\temp: again, the file is downloaded successfully and I can access it and unzip it with 7-Zip, but R throws the exdir error message when it tries to access it.
R version 2.15.2
R-Studio version 0.97.306
Edit: The code works if I use unz instead of unzip, but I haven't been able to figure out why one works and the other doesn't. From the R documentation:
unz reads (only) single files within zip files...
unzip extracts files from or lists a zip archive.
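The practical difference, sketched (the inner file name is hypothetical):
# unz() opens a connection to one file inside the archive, without extracting anything
con <- unz(temp, "some_file_inside.csv")
df <- read.csv(con)
# unzip() extracts files from the archive onto disk
unzip(temp, exdir = tempdir())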
On a Windows setup:
I had this error when I had exdir specified as a path. For me, the solution was removing the trailing / or \\ from the path name.
Here's an example; it did create the new folder when it didn't already exist:
locFile <- pathOfMyZipFile
outPath <- "Y:/Folders/MyFolder"
# OR
outPath <- "Y:\\Folders\\MyFolder"
unzip(locFile, exdir=outPath)
This can manifest in another way, and the documentation doesn't make the cause clear: your exdir cannot end in a "/"; it must be just the name of the target folder.
For example, this was failing with 'exdir' does not exist:
unzip(temp, overwrite = F, exdir = "data_raw/system-data/")
And this worked fine:
unzip(temp, overwrite = F, exdir = "data_raw/system-data")
Presumably, when unzip sees the "/" at the end of the exdir path it keeps looking, whereas omitting the "/" tells unzip "you've found it, unzip here".
A couple of years late, but I still get this error when trying to use unzip(). It appears to be a bug, because the man page for unzip states that if exdir is specified it will be created:
exdir The directory to extract files to (the equivalent of unzip -d).
It will be created if necessary.
A workaround I've been using is to manually create the necessary directory:
dir.create("directory")
unzip("file-to-unzip.zip", exdir = "directory/")
A pain, but it seems to work, at least for me.
I am using R 3.2.1 on a Windows 7 machine.
The way I found to address this issue takes a few steps, but it works for me:
Create a character vector that contains the URL from which you are downloading the file, e.g.
file_url <- "http://your.file.com/file_name.zip"
Use download.file with the URL you are downloading the file from (your newly created vector), followed by the file name to save the zipped file under (usually the last part of the URL). It will be saved under that name in your working directory*, e.g.
download.file(file_url, "file_name.zip")
*If you are not sure of your working directory, you can use getwd() to check it. If you want to change your working directory, you can use setwd("C:/Users/username/...") to set it to what you want.
Use unzip to unzip the file into your working directory, into the folder name you set with exdir, e.g.
unzip("file_name.zip", exdir = "file_name")
To check your work, you can use list.files, e.g.
list.files("file_name")
Hope this helps!
I'm trying to download a file in R on a remote server that sits behind a number of proxies. Something (I can't figure out what) is causing a cached copy of the file to be returned whenever I try to access it on that server, whether I do so through R or through a web browser.
I've tried using cacheOK=FALSE in my download.file call and this has had no effect.
Per Is there a way to force browsers to refresh/download images?, I have tried adding a random suffix to the end of the URL:
download.file(url = paste("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/F-F_Research_Data_Factors_daily.zip?",
format(Sys.time(), "%d%m%Y"),sep=""),
destfile = "F-F_Research_Data_Factors_daily.zip", cacheOK=FALSE)
This produces, e.g., the following URL:
http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/F-F_Research_Data_Factors_daily.zip?17092012
When accessed from a web browser on the remote server, this URL indeed returns the latest version of the file. However, when accessed using download.file in R, it returns a corrupted zip archive. Both WinRAR and R's unzip function complain that the zip file is corrupt:
unzip("F-F_Research_Data_Factors_daily.zip")
1: In unzip("F-F_Research_Data_Factors_daily.zip") :
internal error in unz code
I can't see why downloading this file via R returns a corrupted file, whereas downloading it via a web browser works fine.
Can anyone suggest either a way to beat the cache from R (about which I'm not hopeful), or a reason why download.file doesn't like my URL with ?someRandomString tacked onto the end of it?
It will work if you use mode = "wb":
download.file(url = paste("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/F-F_Research_Data_Factors_daily.zip?",format(Sys.time(),"%d%m%Y"),sep=""),
destfile = "F-F_Research_Data_Factors_daily.zip", mode='wb', cacheOK=FALSE)