Following up on my unsolved post, I have altered my requirement: I now want to write a file (a .jpeg in this example) to OneDrive from an R script.
#Loading required packages
library(outbreaks)
library(incidence)
library(RCurl)
#Data transformation
cases = subset(nipah_malaysia, select = c("perak", "negeri_sembilan", "selangor",
"singapore"))
i = as.incidence(cases, dates = nipah_malaysia$date, interval = 7L)
#Saving to the local working directory
jpeg("plot.jpeg")
#Trying to upload using an absolute OneDrive path. However, this doesn't work.
#jpeg(file = "https://1drv.ms/u/s!AtWMPT_CT0l3hB6flgne1OHU34SV?e=3zAJfL/plot.jpg")
plot(i)
dev.off()
Created on 2020-07-17 by the reprex package (v0.3.0)
Below is the error when I use OneDrive's absolute path (I've used a personal account for testing, but an organization's account is used in production.)
Error:
Error in jpeg(file = "https://1drv.ms/u/s!AtWMPT_CT0l3hB6flgne1OHU34SV?e=3zAJfL/plot.jpg") :
unable to start jpeg() device
In addition: Warning messages:
1: In jpeg(file = "https://1drv.ms/u/s!AtWMPT_CT0l3hB6flgne1OHU34SV?e=3zAJfL/plot.jpg") :
unable to open file 'https://1drv.ms/u/s!AtWMPT_CT0l3hB6flgne1OHU34SV?e=3zAJfL/plot.jpg' for writing
2: In jpeg(file = "https://1drv.ms/u/s!AtWMPT_CT0l3hB6flgne1OHU34SV?e=3zAJfL/plot.jpg") :
opening device failed
Of course, from the error I understood that uploading is not so straightforward. I did some research and came across a few articles, such as:
https://community.powerbi.com/t5/Service/R-script-Write-to-csv-file-on-Onedrive/td-p/487374
Cannot write csv file on a secured OneDrive folder using R script in Power BI without OneDrive API
I learnt that the OneDrive API needs to be called from the R script, but I'm uncertain how to accomplish this. I did find the OneDrive API endpoint for uploading small files.
Any guidance or suggestions in implementing the OneDrive API using R would be highly appreciated.
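For what it's worth, one possible approach (a sketch, assuming the Microsoft365R package fits your setup; it wraps the Microsoft Graph API and handles authentication interactively) is to save the plot locally first and then upload the finished file:

```r
# Sketch: save the plot locally, then push it to OneDrive via the
# Microsoft Graph API using the Microsoft365R package (assumed to be
# installed; it opens a browser window for authentication on first use).
library(Microsoft365R)

jpeg("plot.jpeg")
plot(i)   # 'i' is the incidence object built above
dev.off()

od <- get_personal_onedrive()   # or get_business_onedrive() for org accounts
od$upload_file("plot.jpeg", "plot.jpeg")
```

This sidesteps the original error entirely: `jpeg()` can only write to a local path, so the upload has to happen as a separate step after `dev.off()`.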
Related
I'm building an R package for binary classification and I'm using OpenCPU to host it. Currently I've saved the h5 file as an .RData file (serialized), which is then loaded into the environment using the .onLoad() function in R. This enables the R script to use the environment variable to load the Keras model using keras::unserialize_model().
I've tried using keras::load_model_hdf5() directly in the code, but after building and deploying on OpenCPU, when I hit the prediction API I get this error:
ioerror: unable to open file (unable to open file: name = '/home/modelfile_26feb.h5', errno = 13, error message = 'permission denied', flags = 0, o_flags = 0)
I have changed the permissions on the file (777) and even the group, but I'm still getting the error.
I even tried putting the file in the inst/extdata folder so that it gets bundled into the package, but I still get the same error.
Can anyone help on this, or suggest some alternative to load the h5 model directly?
Which OS does OpenCPU run on? Why does it try to write in /home/? That is very unusual. The best solution is to adapt your code to write to getwd() or tempdir(). Even better is to store the data in a local database or Redis server and let R read it from there, so you don't need disk access at all.
If you run on Ubuntu Server, reading from /home/ is not permitted by default. If you want to allow this, you need to add apparmor rules, see section 3.5 of the server manual.
Some relevant topics from the opencpu mailing list:
write in home dir: https://groups.google.com/d/msg/opencpu/5vRvgSKY-qE/4xMzZCGJBAAJ
keras in opencpu: https://groups.google.com/d/msg/opencpu/HhRzFVVFdaA/n5Nu1sxyFgAJ
write tmp folder: https://groups.google.com/d/msg/opencpu/Y1tYhaQUzwU/ubSEd_CDCgAJ
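Building on the inst/extdata suggestion above: a file bundled in inst/extdata must be located at runtime with system.file(), never via a hard-coded path like /home/modelfile_26feb.h5. A minimal sketch (the package name "mypackage" is a placeholder for your own package):

```r
# system.file() returns the installed location of inst/extdata contents,
# or "" if the file is not found in the installed package.
# "mypackage" is a placeholder; the file name matches the question.
model_path <- system.file("extdata", "modelfile_26feb.h5",
                          package = "mypackage")
if (!nzchar(model_path)) stop("model file not found in installed package")

model <- keras::load_model_hdf5(model_path)
```

Because the resolved path lives inside the R library tree rather than /home/, it avoids the AppArmor restriction described above.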
I was getting hung up on Shiny Apps Tutorial Lesson 5 because I was unable to open the counties.rds file. readRDS() threw: error reading from connection.
I figured out I could open the .rds fine if I downloaded it with download.file(URL, dest, mode = "wb") or simply used my browser to download the file to my local directory.
Outstanding Question: Why does the counties.rds file not open properly if I use download.file() without setting mode = "wb"? I expect the answer will be something obvious like: "Duh, counties.rds is a binary file." However, before I try to answer my own question, I'd like confirmation from someone with more experience.
Repro steps:
download.file("http://shiny.rstudio.com/tutorial/lesson5/census-app/data/counties.rds",
"counties.rds")
counties <- readRDS("counties.rds")
Error in readRDS("counties.rds") : error reading from connection
Resolution: Download via browser or use binary mode (wb).
download.file("http://shiny.rstudio.com/tutorial/lesson5/census-app/data/counties.rds",
"counties.rds", mode = "wb")
counties <- readRDS("counties.rds") # Success!
My suggestion is to always specify 'mode' regardless, and it is pretty safe to always use mode = "wb". I argue that the latter should be the default, and that the automatic recognition by file extension is faulty and should not be relied upon; cf. https://stat.ethz.ch/pipermail/r-devel/2012-August/064739.html
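To see why the mode matters: an .rds file is gzip-compressed binary data by default, and on Windows the default text mode translates line endings during the transfer, corrupting it. A self-contained way to check that a file really is intact gzip data is to inspect its first two bytes:

```r
# saveRDS() writes gzip-compressed binary output by default, so a healthy
# file should start with the gzip magic bytes 0x1f 0x8b. A download
# corrupted by text-mode line-ending translation generally will not.
f <- tempfile(fileext = ".rds")
saveRDS(mtcars, f)

con <- file(f, "rb")
magic <- readBin(con, "raw", n = 2)
close(con)

stopifnot(identical(magic, as.raw(c(0x1f, 0x8b))))
```

The same two-byte check applied to a counties.rds downloaded without mode = "wb" on Windows would reveal the corruption immediately.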
I am having trouble with the XBRL library examples for reading XBRL documents from either the SEC website and from my local hard drive.
This code first attempts to do the read from the SEC site as written in the example in the pdf file for the XBRL library, and second tries to read a file saved locally:
# Following example from XBRL pdf doc - read xml file directly from sec web site
library(XBRL)
inst <- "http://www.sec.gov/Archives/edgar/data/1223389/000122338914000023/conn-20141031.xml"
options(stringsAsFactors = FALSE)
xbrl.vars <- xbrlDoAll(inst)
# attempt 2 - save the xml file to a local directory - so no web I/O
localdoc <- "~/R/StockTickers/XBRLdocs/aapl-20160326.xml"
xbrl.vars <- xbrlDoAll(localdoc)
Both of these throw an I/O error. The first attempt, reading from the SEC site, results in this and crashes my RStudio instance:
error : Unknown IO error
I/O warning : failed to load external entity "http://www.sec.gov/Archives/edgar/data/1223389/000122338914000023/conn-20141031.xml"
So I restart RStudio, re-load the XBRL library and make the second attempt; reading from a local file gives this error:
I/O warning : failed to load external entity "~/R/StockTickers/XBRLdocs/aapl-20160326.xml"
I am using R version 3.3.0 (2016-05-03)
I hope I am missing something obvious to somebody, I am just not seeing it. Any help would be appreciated.
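One common workaround (a sketch, not verified against this exact filing) addresses both failures: the XML parser underneath XBRL often cannot fetch remote documents itself and does not expand "~" in paths, so download the file with base R first and pass a fully expanded local path:

```r
library(XBRL)

# Fetch the instance document with base R, then hand XBRL a plain local
# path. mode = "wb" keeps the XML byte-for-byte intact on Windows.
inst <- "http://www.sec.gov/Archives/edgar/data/1223389/000122338914000023/conn-20141031.xml"
local_copy <- file.path(tempdir(), "conn-20141031.xml")
download.file(inst, local_copy, mode = "wb")

options(stringsAsFactors = FALSE)
xbrl.vars <- xbrlDoAll(local_copy)

# For a file under your home directory, expand "~" explicitly, since the
# underlying parser will not do it for you:
localdoc <- path.expand("~/R/StockTickers/XBRLdocs/aapl-20160326.xml")
xbrl.vars <- xbrlDoAll(localdoc)
```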
I want to deploy a basic trained R model as a webservice to AzureML. Similar to what is done here:
http://www.r-bloggers.com/deploying-a-car-price-model-using-r-and-azureml/
Since that post, the publishWebService function in the R AzureML package has changed: it now requires a workspace object as its first parameter, so my R code looks as follows:
library(MASS)
library(AzureML)
PredictionModel = lm( medv ~ lstat , data = Boston )
PricePredFunktion = function(percent) {
  return(predict(PredictionModel, data.frame(lstat = percent)))
}
myWsID = "<my Workspace ID>"
myAuth = "<my Authorization code>"
ws = workspace(myWsID, myAuth, api_endpoint = "https://studio.azureml.net/", .validate = TRUE)
# publish the R function to AzureML
PricePredService = publishWebService(
ws,
"PricePredFunktion",
"PricePredOnline",
list("lstat" = "float"),
list("mdev" = "float"),
myWsID,
myAuth
)
But every time I execute the code I get the following error:
Error in publishWebService(ws, "PricePredFunktion", "PricePredOnline", :
Requires external zip utility. Please install zip, ensure it's on your path and try again.
I tried installing programs that handle zip files (like 7-Zip) on my machine, as well as loading the utils package in R, which lets R interact with zip files directly. But I couldn't get rid of the error.
I also found the R package code that is throwing the error, it is on line 154 on this page:
https://github.com/RevolutionAnalytics/AzureML/blob/master/R/internal.R
but it didn't help me in figuring out what to do.
Thanks in advance for any Help!
The Azure Machine Learning API requires the payload to be zipped, which is why the package insists on the zip utility being installed. (This is an unfortunate situation, and hopefully we can find a way in future to include a zip with the package.)
It is unlikely that you will ever encounter this situation on Linux, since most (all?) Linux distributions include a zip utility.
Thus, on Windows, you have to do the following procedure once:
Install a zip utility (Rtools includes one, and it works)
Ensure the zip is on your path
Restart R – this is important, otherwise R will not recognize the changed path
Upon completion, the litmus test is if R can see your zip. To do this, try:
Sys.which("zip")
You should get a result similar to this:
zip
"C:\\Rtools\\R-3.1\\bin\\zip.exe"
In other words, R should recognize the installation path.
On previous occasions when people told me this didn’t work, it was always because they thought they had a zip in the path, but it turned out they didn’t.
One last comment: installing 7-Zip may not work. The reason is that 7-Zip installs a utility called 7z, but R will only look for a utility called zip.
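The PATH change can also be made from within R for the current session, which is handy for checking that the fix works before editing the system PATH (the Rtools location below is an assumed install path; adjust it to yours):

```r
# Prepend the Rtools bin directory to the PATH for this R session only
# (Windows uses ";" as the PATH separator), then verify that R can now
# locate a utility called exactly "zip".
# "C:\\Rtools\\bin" is an assumed install location; adjust as needed.
rtools_bin <- "C:\\Rtools\\bin"
Sys.setenv(PATH = paste(rtools_bin, Sys.getenv("PATH"), sep = ";"))

Sys.which("zip")   # should print the full path to zip.exe, not ""
```

Note this only affects the running session; for a permanent change you still need to edit the PATH in Windows settings and restart R, as described above.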
I saw this link earlier, but the additional clarifications that kept my code from working were:
1. The address and path of Rtools were not as straightforward as expected.
2. You need to restart R.
With regard to the address: always look where Rtools was actually installed. I also used this code to set the path, and always add zip at the end:
##Rtools.bin="C:\\Users\\User_2\\R-Portable\\Rtools\\bin"
Rtools.bin="C:\\Rtools\\bin\\zip"
sys.path = Sys.getenv("PATH")
if (Sys.which("zip") == "" ) {
system(paste("setx PATH \"", Rtools.bin, ";", sys.path, "\"", sep = ""))
}
Sys.which("zip")
you should get a return of
"C:\\RTools\\bin\\zip"
From looking at Andrie's comment here: https://github.com/RevolutionAnalytics/AzureML/commit/9cf2c5c59f1f82b874dc7fdb1f9439b11ab60f40
This implies we can just download Rtools and be done with it.
Download RTools from:
https://cran.r-project.org/bin/windows/Rtools/
During installation select the check box to modify the PATH
At first it didn't work. I then tried 32-bit R, and that seemed to work. Then 64-bit R started working again. Honestly, I'm not sure if I did something in the middle to make it work. It only takes a few minutes, so it's worth a punt.
Try the following:
- Download the Rtools file, which usually contains the zip utility.
- Copy all the files in the "bin" folder of Rtools.
- Paste them into the "~/RStudio/bin/x64" folder.
I'm running the following code in R:
library(GEOquery)
mypath <- "C:/Users/Farzin/Desktop/BIOC"
GDS1 <- getGEO('GDS1',destdir=mypath)
But I'm getting the following error:
Using locally cached version of GDS1 found here:
C:/Users/Farzin/Desktop/BIOC/GDS1.soft.gz
Error in read.table(con, sep = "\t", header = FALSE, nrows = nseries) :
invalid 'nlines' argument
Could anyone please tell me how I could get rid of this error?
I have had the same error using GEOquery (version 2.23.5) with R and Bioconductor on Ubuntu (12.04), whatever GDS file I queried. Could it be that the GEOquery package is faulty?
In my experience, getGEO is extremely finicky. I commonly experience issues connecting to the GEO server. If this happens during download, getGEO leaves a partial file. And since the partial file is there, when you try to re-download, it uses this cached, partially downloaded file and runs into the error you see, because it's not the full file.
To solve this, delete the cached SOFT file and retry the download.
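The fix above can be sketched as follows, using the destdir from the question (the cached file name follows GEOquery's usual <accession>.soft.gz convention):

```r
library(GEOquery)
mypath <- "C:/Users/Farzin/Desktop/BIOC"

# Delete the partially downloaded cache so getGEO is forced to fetch a
# fresh, complete copy instead of reusing the truncated file.
cached <- file.path(mypath, "GDS1.soft.gz")
if (file.exists(cached)) file.remove(cached)

GDS1 <- getGEO("GDS1", destdir = mypath)
```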