Say I have the following Lua code, which reads and parses data from a file.
local cjson = require 'cjson'

function read_data (path)
  local file = io.open(path, 'r')
  local raw = file:read('*all')  -- read is a method on the file handle, so it needs the colon syntax
  file:close()
  return cjson.decode(raw)
end

function use_data (path)
  local data = read_data(path)
  -- do something with `data`
end
I'm now trying to figure out what the non-blocking version of this basic example would look like.
In JavaScript, you'd make both of those functions asynchronous, and then await the results. But judging by what I've read so far about Lua, here I'd only have to refactor the read_data function to use coroutines, and I could leave the use_data code as-is. How would I achieve this?
EDIT: Solutions that replace the current calls with external libraries are welcome. Whatever it takes to make the example non-blocking, basically.
I am building an extension API for R in Rust. I annotate my functions with a procedural macro to generate C wrappers with the appropriate conversion and error handling:
use extendr_api::*;
#[export_function]
fn hello() -> &'static str {
"hello"
}
This generates a C function hello__wrapper__ that is callable from R
using the .Call mechanism.
In addition to this, we need to generate a NAMESPACE file for the R
metadata:
export(hello)
useDynLib(libhello, "__wrap__hello")
And a file lib.R
hello <- function() {
.Call("__wrap__hello")
}
What is the simplest way to extend cargo or rustc
to write this additional information? I'm guessing
that writing a file from the procedural macro code is
a bad idea.
From what I understand, procedural macros generate code at compile time, but it has to be valid Rust code. I do not think that a procedural macro is the way to go in this case.
One possible solution is to create a script that will go through and find your exported functions in the Rust file and generate the 2 extra files you need. You could make a build script that runs before compiling if you want to do all the parsing in Rust, otherwise I would suggest something like Python.
I've been successful in publishing a code file using Plumber, but I have been unsuccessful in all my attempts to call a function and publish it in the form of an HTTP response.
library(plumber)
r <- plumb("predictTest.R")
r$run()
Shown above is the code I've been using to publish a single code file.
When I use the same syntax for a function like:
library(plumber)
r <- plumb(predictTest("India","Australia"))
r$run()
The error I get is:
TypeError: Failed to fetch
How can I call a function and publish it as an HTTP response?
Look up the documentation on the plumb function (type ?plumb in R or consult the online reference). You'll see that it expects
plumb(file, dir = ".")
I.e., the first argument is the filename of an R script file. That's why the first code example works and the second doesn't; you cannot provide plumb with the output of a function (unless that output happens to be the filename of an R file).
If you only want to expose a single function to plumber, isolate it in a file and use that. If you meant something else, ask a new question and provide more examples. And when you cannot disclose any of your data or source, try making a minimal working example with base R stuff that isn't proprietary to your company.
Finally, read the documentation at https://www.rplumber.io/docs/. You might be interested in chapter 8 and in defining endpoints.
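For illustration, a minimal plumber file along these lines (the file name, route, and parameter names are assumptions based on your example) exposes a single function as an HTTP endpoint:

# predictTest.R (sketch): plumber turns the annotated function below into an endpoint
#* @param team1 first team name
#* @param team2 second team name
#* @get /predict
function(team1, team2) {
  # replace this placeholder with a call to your own predictTest(team1, team2)
  list(team1 = team1, team2 = team2)
}

You would then publish it with plumb("predictTest.R")$run(port = 8000) and call it as, for example, http://localhost:8000/predict?team1=India&team2=Australia.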
I have a user defined function in R
blah <- function(a, b) {
  # something with a and b
}
Is it possible to put this somewhere so that I do not need to remember to load it into the workspace every time I start up R? Similar to a built-in function like
summary(); t.test(); max(); sd()
You can put the function into your .Rprofile file.
However, be very careful with what you put there, since it essentially makes your code non-reproducible; it now depends on your .Rprofile:
Let’s say you have an R code file performing some analysis, and the code uses the function blah. Executing the code on any other system will fail due to the non-existence of the blah function.
As a consequence, this file should only contain system-specific setup. Don't define helper functions in there, or, if you do, define them only in interactive sessions, so that you have a clean environment when R is running a non-interactive script:
if (interactive()) {
# Helper functions go here.
}
And if you find yourself using the same helper functions over and over again, bundle them into packages (or modules) and reuse those.
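For the concrete case above, a minimal sketch of such an .Rprofile (using the blah function from the question as a stand-in) could look like this:

# ~/.Rprofile (sketch): helpers only exist in interactive sessions
if (interactive()) {
  blah <- function(a, b) {
    # something with a and b; placeholder body for illustration
    a + b
  }
}

A non-interactive Rscript run will then not see blah, which keeps scripted analyses reproducible.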
I am new to R and have been trying to use JRI. Through JRI, I have used the "eval()" function to get certain results. To execute an R script, I have used "source()". However, I am now in a situation where I need to execute a script on continuously incoming data. While I could still use "source()", I don't think that would be optimal from a performance perspective.
What I did was to read the entire R script into memory and then try to use "eval()", passing the script, but this does not seem to work. I have ensured that the script has been loaded into memory correctly: if I write this in-memory script back to a file and source that newly created file, it does produce the expected results.
Is there a way for me to not keep sourcing the same file over and over again and execute it from memory? Each of my data units are independent and have to be processed independently and as soon as they become available. I cannot wait to collect a bunch of data units and then pass them on to the R script.
I have searched a lot and not found anything related to this. Any pointers which could help me in this direction would be really helpful.
The way I handled this is as follows:
I enclosed the entire script in a function.
I sourced the script file (which now contains the function) once, at the start of my program's execution.
Where I was previously sourcing the file, I now just call the function that contains the script, i.e.:
REXP result = rengine.eval("retVal<-" + getFunctionName() + "()");
Here, getFunctionName() gives me the name of the function which contains the script.
Since the function is loaded into memory and available, I do not have to source the script file every time I want to execute it. Any arguments are passed to the script as environment variables.
This seems to be a workaround, but solves my problem. Any better options are welcome.
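As a rough sketch of what the wrapped script file could look like (the file name, function name, and environment variable are hypothetical):

# analysis.R (sketch): the whole script now lives inside one function,
# so the file only needs to be sourced once from JRI
runAnalysis <- function() {
  # each incoming data unit is handed over via an environment variable, as described above
  unit <- Sys.getenv("DATA_UNIT")
  # ... run the actual analysis on `unit` here ...
  paste("processed:", unit)  # placeholder return value
}

After rengine.eval("source('analysis.R')") has run once, each new data unit only needs a cheap rengine.eval("retVal<-runAnalysis()") call.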
I am creating an R package, and found it useful to break parts of the logic in one file into internal helper functions, which I define in the same file. I have sort of a special case where my function decides which helper function to use via match.fun(). Since they won't be useful to other functions or people, I don't want to put these in separate files, and I don't want to export them.
All my testthat cases pass when run with test_dir(). However, when I don't export these functions, my testthat cases fail during R CMD check with:
"object 'helperfunction1' of mode 'function' was not found", quote(get(as.character(FUN),
mode = "function", envir = envir)))
After looking at a related post, I am able to get things to work if I explicitly export the helpers (i.e. add export entries to NAMESPACE), but again I don't want to export these.
Is there a better way to do this that doesn't require me to export? (I'll admit that the source of the issue may be match.fun(), and I am open to other ways of calling functions at runtime.)
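For context, the dispatch pattern in question looks roughly like this (all names besides helperfunction1 are made up):

# Internal helpers, defined in the same file and not exported
helperfunction1 <- function(x) x + 1
helperfunction2 <- function(x) x - 1

# Exported function that picks a helper by name at run time
dispatch <- function(x, method = c("helperfunction1", "helperfunction2")) {
  FUN <- match.fun(match.arg(method))
  FUN(x)
}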
From memory it wasn't clear in the documentation last time I read it (it may have changed), but it will work correctly (without having to export) so long as everything is in the right directories:
You should have a file:
tests/run-all.R
That looks like:
library(testthat)
library(myPackage)
test_package("myPackage")
Then your individual test files should go in the directory inst/tests.
These will be run when you do R CMD check; otherwise you can call test_package("myPackage") in R manually.
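As an illustrative sketch, an individual test file under inst/tests could then exercise the unexported helper directly (the file name is assumed; helperfunction1 is taken from the error above):

# inst/tests/test-helpers.R (sketch)
context("internal helpers")

test_that("the unexported helper can be resolved by name", {
  # when run via test_package(), the tests can see the package internals,
  # so match.fun() finds the helper even though it is not exported
  FUN <- match.fun("helperfunction1")
  expect_true(is.function(FUN))
})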