Hi, I am new to writing R packages.
The question "R package development imports not loaded" advised me to use roxygen2.
I called devtools::document() once and the NAMESPACE was generated.
However, when I load this simple package (or try it via OpenCPU), the functions are NOT available.
Calling the code in native R seems to work:
test2::hello()
[1] "Hello, world!"
I start OpenCPU like this:
1) start the OpenCPU single-user server via library(opencpu)
2) execute opencpu$restart, which will show a port number
3) http://localhost:myPortNumber/ocpu/library/myPackage/info ---> this endpoint works
As mentioned in the comments, this is not a "proper" way of calling a function. However, OpenCPU defaults to myfunction/print if a function is called via HTTP, as in http://public.opencpu.org/ocpu/library/stats/R/quantile/print, and even that does not work when I call the hello function.
This is a demonstration of how to call a more complex function:
curl http://localhost:myPortNumber/ocpu/library/stats/R/quantile/json -d '{"type":1,"x":[1,2,3,4,5,6,7,8,9,10],"probs":[0.05,0.25,0.75,0.95]}' -H "Content-Type: application/json"
You can simply test it via:
curl http://public.opencpu.org/ocpu/library/stats/R/quantile/json -d \
'{"type":1,"x":[1,2,3,4,5,6,7,8,9,10],"probs":[0.05,0.25,0.75,0.95]}' \
-H "Content-Type: application/json"
I installed it via sudo like this:
sudo R CMD INSTALL test2_0.1.tgz
That means it should be available at the /library/test2 endpoint.
Solution:
It was still the wrong API endpoint --> I was missing the R sub-directory:
http://localhost:myPort/ocpu/library/myPackage/R/hello/
Example-code is here: https://github.com/geoHeil/rSimplePackageForOpenCpu
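For completeness, a minimal sketch of calling the corrected endpoint from R with httr (assuming OpenCPU is listening on myPortNumber and the package is the test2 example above; a POST executes the function, while a GET on .../R/hello only shows its source):
library(httr)
# POST to the /json postfix runs hello() and returns its result as JSON
resp <- POST("http://localhost:myPortNumber/ocpu/library/test2/R/hello/json")
content(resp, as = "text")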
Related
I have an internal repository on Gitlab.
In my R script I want to source an .R file from that internal repository.
Normally I can source a publicly available R script with the following code
source("public-path-to-file")
But when I try to do that with a private file I get:
Error in source("") :
:1:1: unexpected '<'
1: <
^
I found a hacky way to do it that at least gets things done:
First you need to create a private token with API access. Then you can call GitLab's API directly to GET the file.
Code:
cmd <- "curl -s --header 'PRIVATE-TOKEN: <your private token for gitlab with api access>' '<full gitlab link of the raw file that you want to source>'" # you directly access the API with your private token
output <- system(cmd, intern=TRUE) # capturing the output of the system call
writeLines(output, "functions.R") # creating a temporary file which contains the output of the API call
source("functions.R") # Sourcing the file the usual way
file.remove("functions.R") # removing the temporary file for housekeeping
Explanation:
We call the API directly with a GET request, using curl as a system command.
Then we run that command from R with system(), capturing the result.
Finally, we write the contents to a temporary file, source it the usual way, and remove it. Hope this helps someone.
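As an alternative sketch that avoids shelling out to curl (assuming the httr package is available; the raw_url and token arguments are placeholders for your GitLab raw-file URL and private token):
library(httr)
gitlab_source <- function(raw_url, token) {
  resp <- GET(raw_url, add_headers(`PRIVATE-TOKEN` = token))
  stop_for_status(resp)                                   # fail early on 4xx/5xx responses
  tmp <- tempfile(fileext = ".R")                         # temporary file, removed below
  writeLines(content(resp, as = "text", encoding = "UTF-8"), tmp)
  source(tmp)
  file.remove(tmp)
}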
I have a Docker container that is set up to run an R script weekly via an Airflow DAG. The DAG has three tasks: an upstream task that takes data from several databases, computes various features, and writes data to S3; the Docker task, whose R script reads that data from an S3 bucket, formats the data frame, runs a model to score records, and writes data back to S3; and a downstream task that formats the output so that it can be loaded into Salesforce. The script worked while testing when I wrote and built it in December. Recently the run has failed several times with the error:
Error in as.character(x) :
cannot coerce type 'closure' to vector of type 'character'
Calls: %>% ... mutate_impl -> ymd -> .parse_xxx -> unlist -> lapply -> FUN
Execution halted
OK, so that seems to mean that the date, which is read in as a character, is having an issue being formatted as a date. Since 'ymd' is in the call chain, I believe the culprit is the lubridate function in the R script below.
The Dockerfile (code below) uses an R image that has the tidyverse, because my code uses dplyr and lubridate. I could likely get by without lubridate and use a base function to format the date, but more on that below.
Dockerfile code:
FROM rocker/tidyverse
RUN mkdir -p /model
RUN apt-get update -qq && apt-get install -y \
libssl-dev \
libcurl4-gnutls-dev
RUN R -e "install.packages('caret')"
RUN R -e "install.packages('randomForest')"
RUN R -e "install.packages('lubridate')"
RUN R -e "install.packages('aws.s3')"
EXPOSE 80
EXPOSE 8787
COPY / /
ENTRYPOINT ["Rscript", "account_health_scoring.R"]
R script: I have to exclude the first few lines due to identifying info and credentials, but the code first just reads my S3 credentials from a file. Then this code block runs and fails. There is a good deal of code downstream, but it all works in the container:
require("dplyr")
require("caret")
require("aws.s3")
require("randomForest")
require("lubridate")
#set credentials
Sys.setenv("AWS_ACCESS_KEY_ID" = "key",
"AWS_SECRET_ACCESS_KEY" = "key")
#read in model file
s3load("rf_gridsearch.RData", bucket = "account-model")
#read in data
data <- read.csv(text = rawToChar(get_object(paste0("account_health_data_",
                                                    gsub("-", "_", as.character(Sys.Date()),
                                                         fixed = TRUE), ".csv"),
                                             bucket = "account-health-model-input")),
                 stringsAsFactors = FALSE) %>%
  mutate(period = ymd(period)) %>%
  mutate_if(is.integer, as.numeric)
The reason for the two mutate lines is that, despite the column being formatted as a POSIX timestamp, R coerces the date to a string AND coerces floats to integers. Perhaps I am missing something in my read.csv call, or there is a better function for reading the data properly, but this is what I have always used.
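One hedged sketch (relevant to the base-functions question below) that would sidestep both mutate() calls is declaring the column types up front in read.csv; the column name period is taken from the code above, and the date format is assumed to be "YYYY-MM-DD":
csv_text <- rawToChar(get_object(paste0("account_health_data_",
                                        gsub("-", "_", as.character(Sys.Date()), fixed = TRUE),
                                        ".csv"),
                                 bucket = "account-health-model-input"))
# colClasses can be a named vector: period is parsed directly as a base-R Date,
# and any integer column could likewise be forced to "numeric" by naming it here,
# so neither lubridate nor the mutate() steps are needed
data <- read.csv(text = csv_text,
                 stringsAsFactors = FALSE,
                 colClasses = c(period = "Date"))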
Questions:
What is the error message referring to, and am I correct to think the ymd() function is the culprit?
If so, how can I rewrite my code, potentially using base functions, to accomplish the same goal and avoid relying on a package?
Could it be package dependencies? In reviewing the logs it doesn't seem that this is the case, as lubridate imports and uses several base functions. The package has not been updated since I wrote and tested this code.
Well, the answer seems to be simple although I do not understand it. I changed
require(lubridate)
to
library(lubridate)
And it builds. I found the post "What is the difference between require() and library()?" and decided to just try changing it, built the container, and it worked. I'm still trying to understand why.
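A hedged sketch of why this can matter: library() stops immediately if the package cannot be attached, while require() only warns and returns FALSE, so a failed load is easy to miss and only surfaces later in a less obvious way:
# library() fails loudly at the attach step if the package is missing or broken
library(lubridate)
# require() merely warns and returns FALSE, letting the script continue;
# if you do use require(), check its return value so the failure is explicit
if (!require(lubridate)) {
  stop("lubridate could not be loaded")
}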
I am trying to make a cURL API call with R and I am unable to retrieve data. More specifically, I am unable to figure out how to translate a multi-line curl call into an R command.
I am trying to get data from Twitch. The Twitch Developers API page offers the following curl code, though I am unsure about the syntax of the call.
curl -H 'Accept: application/vnd.twitchtv.v5+json' \
-H 'Client-ID: uo6dggojyb8d6soh92zknwmi5ej1q2' \
-X GET 'https://api.twitch.tv/kraken/games/top'
I have attempted variations of:
library(curl)
library(httr)
library(jsonlite)
df <- GET('https://api.twitch.tv/kraken/games/top', add_headers('Accept: application/vnd.twitchtv.v5+json', 'Client-ID: uo6dggojyb8d6soh92zknwmi5ej1q2'))
fromJSON(df)
df <- curl_download('https://api.twitch.tv/kraken/games/top', destfile = 'C:\\....\\curldta.csv')
fromJSON(df)
Thanks for any help in advance.
I wrote a package that is a wrapper of the Twitch API for the R language (you can install it from GitHub with the devtools package). The data frame you're trying to get can be obtained with:
library(rTwitchAPI)
twitch_auth("YOUR_CLIENT_ID")
df = get_top_games()$data
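Alternatively, here is a sketch of a direct httr translation of the curl call from the question: each -H 'Name: value' header becomes a named argument to add_headers(), and the response body (not the response object) is passed to fromJSON(). The Client-ID is the example value from the question:
library(httr)
library(jsonlite)
resp <- GET(
  "https://api.twitch.tv/kraken/games/top",
  add_headers(
    Accept      = "application/vnd.twitchtv.v5+json",
    `Client-ID` = "uo6dggojyb8d6soh92zknwmi5ej1q2"
  )
)
# parse the JSON body of the response rather than the response object itself
top_games <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))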
I am trying to develop a plug-in for Frama-C.
I built the application, which has several files, and then created the makefile referencing all the files I needed.
I am able to run make and then make install and execute the plug-in. My problem appears when I call functions from the ocamlyices library in a function...
I am still able to run make and make install, but when I try to execute I get the following error:
[kernel] warning: cannot load plug-in 'name' (incompatible with Fluorine-20130601).
[kernel] user error: option `<name>' is unknown.
use `frama-c -help' for more information.
[kernel] Frama-C aborted: invalid user input.
So it says the plug-in is incompatible when I add the call to ocamlyices functions. Is there any option/configuration I am missing somewhere?
Thank you for your help.
The final solution looked like this:
FRAMAC_SHARE := $(shell frama-c.byte -print-path)
FRAMAC_LIBDIR := $(shell frama-c.byte -print-libpath)
PLUGIN_NAME = Fact
# All needed files
PLUGIN_CMO = ../base/basic_types concolic_search run_fact ../lib/lib
PLUGIN_DOCFLAGS = -I ../base -I ../lib -I $(YICES) -I /usr/lib/ocaml/batteries -I ../instrumentation
PLUGIN_BFLAGS = -I ../base -I ../lib -I $(YICES) -I ../instrumentation
PLUGIN_OFLAGS = -I ../base -I ../lib -I $(YICES) -I ../instrumentation
PLUGIN_EXTRA_BYTE = $(YICES)/ocamlyices.cma
PLUGIN_EXTRA_OPT = $(YICES)/ocamlyices.cmxa
include $(FRAMAC_SHARE)/Makefile.dynamic
The variable $(YICES) is defined as
export YICES="/usr/local/lib/ocaml/3.12.1/ocamlyices"
As mentioned by Anne, if your plug-in uses an external library that is not already included by Frama-C itself, you need a few more steps than for a basic plug-in, especially setting PLUGIN_EXTRA_BYTE and PLUGIN_EXTRA_OPT to the external libraries that you want to be linked to your plug-in. It might also be necessary to adapt the flags passed to the linker with PLUGIN_LINK_BFLAGS and PLUGIN_LINK_OFLAGS, but this is heavily dependent on ocamlyices itself. More information on the variables that can be used to customize the compilation of your plug-in can be found in section 5.3.3 of Frama-C's development guide.
Okay - so here is what I'm trying to do.
I've got this password protected CSV file I'm trying to import into R.
I can import it fine using:
read.csv()
and when I run my code in RStudio everything works perfectly.
However, when I try to run my .R file using a batch file (Windows .bat) it doesn't work. I want to use the .bat file so that I can set up a scheduled task to run my code every morning.
Here is my .BAT file:
"E:\R-3.0.2\bin\x64\R.exe" CMD BATCH "E:\Control Files\download_data.R" "E:\Control Files\DailyEmail.txt"
And here is my .R file:
url <- "http://username:password@www.url.csv"
data <- read.csv(url, skip=1)
** Note: I've put my actual username/password and the exact location of the CSV in my code. I've used generic stuff here, as this is work related and posting usernames and passwords is probably frowned upon.
As I've said, this code works fine when I use it in RStudio. But fails when I use the .BAT file.
I get the following error message:
Error in download.file(url, "E:/data/data.csv") :
cannot open URL 'websiteurl'
In addition: Warning message:
In download.file(url, "E:/data/data.csv") :
unable to resolve 'username'
Execution halted
** above, 'websiteurl' stands for the HTTP URL mentioned above (I can't post links)
So obviously, the .BAT is having trouble with the username/password? Any thoughts?
* EDIT *
I've gone so far as trying this on Linux, thinking maybe Windows was playing silly buggers.
Just from the terminal, I run Rscript download_data.r and get the EXACT same error message as I did on Windows. So I suspect this may be a problem with where I'm getting the data. Could the provider be blocking requests from the command line, but not from within RStudio?
I have had similar problems which had to do with file permissions. The .bat file somehow does not have the same privileges as you running the code directly from RStudio. Try using Rscript (http://stat.ethz.ch/R-manual/R-devel/library/utils/html/Rscript.html) within your .bat file, like:
Rscript "E:\Control Files\download_data.R"
What is the purpose of the argument "E:\Control Files\DailyEmail.txt"? Is the program supposed to use it in any way?
So, I've found a solution, which is likely not the most practical for most people, but works for me.
What I did was migrate my project over to a Linux system. Running daily scripts is easier on Linux anyway.
The solution makes use of the wget command on Linux.
You can either run wget right in your shell script, or make use of the system() function in R to run it.
The code looks like:
wget -O /home/user/.../file.csv --user=userid --password='password' http://www.url.com/file.csv
And you can do something like:
syscommand <- "wget -O /home/.../file.csv --user=userid --password='password' http://www.url.com/file.csv"
system(syscommand)
in R to download the CSV to a location on your hard drive, then grab the CSV using read.csv()
Doing it this way gave me some more insight into the potential root cause of the problem. While the system(syscommand) is running, I get the following output:
Connecting to www.website.com (www.website.com)|ip.ad.re.ss|:80... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Reusing existing connection to www.weburl.com:80.
HTTP request sent, awaiting response... 200 OK
I'm not sure why it has to send the request twice, or why I get a 401 Unauthorized on the first try (presumably the first, unauthenticated request triggers the server's basic-auth challenge, and wget then retries with the credentials).
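For what it's worth, a hedged alternative sketch that stays inside R and avoids the shell call entirely, using httr's authenticate() for HTTP basic auth (the URL and credentials are placeholders):
library(httr)
# fetch the password-protected CSV with basic auth instead of wget
resp <- GET("http://www.url.com/file.csv", authenticate("userid", "password"))
stop_for_status(resp)   # abort if the server rejects the credentials
data <- read.csv(text = content(resp, as = "text", encoding = "UTF-8"), skip = 1)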