jniPath := {
  val subProjectPath = projectDependencies.value map (module => (jniPath in LocalProject(module.name)).value)
  val path = libraryDependencies.value flatMap (_.name)
  path ++ subProjectPath mkString File.pathSeparator
}
Does anyone have a workaround? The issue seems to come from (jniPath in LocalProject(module.name)).value, but I can't see any way around it. Is this a limitation of sbt?
Cheers
You'll need to define your work in a dynamic task (http://www.scala-sbt.org/0.13/docs/Tasks.html#Dynamic+Computations+with), which allows you to define your task's dependencies based on things that are not well-defined at compile time.
Remember, in sbt all tasks are really a map from their dependencies to the result, and any time you type thing.value you're really writing (thing).map { valueOfThing => ... } once the macro has had its wicked way.
As fommil wrote, a dynamic task is the way to go.
To understand why, take a look at Execution semantics of tasks:
Unlike plain Scala method calls, invoking value method on tasks will not be evaluated strictly. Instead, they simply act as placeholders to denote that sampleIntTask depends on startServer and stopServer tasks.
Because dependent tasks are scheduled before the body of the task begins, you can't use the result of one task to scope another task.
Cheers for the answers, I solved my issue by using key.all(ScopeFilter(in)) in a Def.taskDyn:
jniPaths := jniPathsImpl(inDependencies(ThisProject))
def jniPathsImpl(in: ScopeFilter.ProjectFilter) = Def.taskDyn {
  ivyJniPath.all(ScopeFilter(in)) map { ivyJniPaths =>
    libraryDependencies.value flatMap (_.name)
  }
}
Related
I want to await an async function inside a closure used in an iterator. The function requiring the closure is called inside a struct implementation. I can't figure out how to do this.
This code simulates what I'm trying to do:
struct MyType {}

impl MyType {
    async fn foo(&self) {
        println!("foo");
        (0..2).for_each(|v| {
            self.bar(v).await;
        });
    }

    async fn bar(&self, v: usize) {
        println!("bar: {}", v);
    }
}
#[tokio::main]
async fn main() {
    let mt = MyType {};
    mt.foo().await;
}
Obviously, this will not work since the closure is not async, giving me:
error[E0728]: `await` is only allowed inside `async` functions and blocks
--> src/main.rs:8:13
|
7 | (0..2).for_each(|v| {
| --- this is not `async`
8 | self.bar(v).await;
| ^^^^^^^^^^^^^^^^^ only allowed inside `async` functions and blocks
After looking for an answer on how to call an async function from a non-async one, I ended up with this:
tokio::spawn(async move {
    self.bar(v).await;
});
But now I'm hitting lifetime issues instead:
error[E0759]: `self` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
--> src/main.rs:4:18
|
4 | async fn foo(&self) {
| ^^^^^
| |
| this data with an anonymous lifetime `'_`...
| ...is captured here...
...
8 | tokio::spawn(async move {
| ------------ ...and is required to live as long as `'static` here
This also doesn't surprise me, since from what I understand the Rust compiler cannot know how long a spawned task will live. Given this, the task spawned with tokio::spawn might outlive the MyType value.
The first fix I came up with was to make bar an associated function, copy everything I need in my closure, and pass it by value with MyType::bar(copies_from_self), but this is getting ugly since there's a lot of copying. It also feels like a workaround for not knowing how lifetimes work.
I was instead trying to use futures::executor::block_on, which works for simple tasks like the one in this post:
(0..2).for_each(|v| {
    futures::executor::block_on(self.bar(v));
});
But when putting this into my real-life example, where I use a third-party library[1] that also uses tokio, things no longer work. After reading the documentation, I realise that #[tokio::main] is a macro that eventually wraps everything in block_on, so doing this nests one block_on inside another. This might be why one of the async methods called in bar just stops working, without any error or logging (it works without block_on, so it shouldn't be anything in the code). I reached out to the authors, who said that I could use for_each(|i| async move { ... }), which made me even more confused.
(0..2).for_each(|v| async move {
    self.bar(v).await;
});
Will result in the compilation error
expected `()`, found opaque type
which I think makes sense since I'm now returning a future and not (). My naive approach to this was to try and await the future with something like this:
(0..2).for_each(|v| {
    async move {
        self.bar(v).await;
    }
    .await
});
But that takes me back to square one: it results in the compilation error `await` is only allowed inside `async` functions and blocks, which I also think makes sense, since I'm back to using await in the synchronous closure.
This discovery also makes it hard for me to make use of answers such as the ones found here and here.
The question after all this cargo cult programming is basically: is it possible, and if so, how do I call my async function from the closure in an iterator (preferably without spawning a thread, to avoid lifetime problems)? If this is not possible, what would an idiomatic implementation look like?
[1] This is the library/method used
Iterator::for_each expects a synchronous closure, thus you can't use .await in it (not directly at least), nor can you return a future from it.
One solution is to just use a for loop instead of .for_each:
for v in 0..2 {
self.bar(v).await;
}
The more general approach is to use streams instead of iterators, since those are the asynchronous equivalent (and the equivalent methods on streams are typically asynchronous as well). This would work not only for for_each but for most other iterator methods:
use futures::prelude::*;
futures::stream::iter(0..2)
    .for_each(|v| async move {
        self.bar(v).await;
    })
    .await;
I have an async method that uses tokio::fs to explore a directory:
use failure::Error;
use futures::Future;
use std::path::PathBuf;
use tokio::prelude::*;
fn visit_async(path: PathBuf) -> Box<Future<Item = (), Error = Error> + Send> {
    let task = tokio::fs::read_dir(path)
        .flatten_stream()
        .for_each(move |entry| {
            let path = entry.path();
            if path.is_dir() {
                let task = visit_async(entry.path());
                tokio::spawn(task.map_err(drop));
            } else {
                println!("File: {:?}", path);
            }
            future::ok(())
        })
        .map_err(Error::from);
    Box::new(task)
}
I need to execute another future after the future returned by this method completes, as well as all the tasks it spawns. Is there a better way than just starting another runtime?
let t = visit_async(PathBuf::from(".")).map_err(drop);
tokio::run(t);
tokio::run(future::ok(()));
I'd strive to avoid using tokio::spawn() here and try to wrap it all into a single future (in general, I think you only use tokio::spawn when you don't care about the result or when it executes, which we do here). That should make it easy to wait for completion. I haven't tested this, but something along these lines might do the trick:
let task = tokio::fs::read_dir(path)
    .flatten_stream()
    .for_each(move |entry| {
        let path = entry.path();
        if path.is_dir() {
            let task = visit_async(entry.path());
            future::Either::A(task)
        } else {
            println!("File: {:?}", path);
            future::Either::B(future::ok(()))
        }
    })
    .map_err(Error::from)
    .and_then(|_| {
        // Do some work once all tasks complete; the closure must
        // return something convertible into a future.
        future::ok(())
    });
Box::new(task)
This will cause the asynchronous tasks to execute in sequence. You could use and_then instead of for_each to execute them in parallel and then into_future().and_then(|_| { ... }) to tack on some action to execute afterwards.
There's another issue with parallel descent in the FS: you may run out of file descriptors.
There is a way to solve both issues: create a tokio::sync::Semaphore to limit the number of these tasks running concurrently. After you are done spawning all of them, you can use Semaphore::acquire_many with the same value you used at creation to block until all other tasks are finished.
For correctness, you should acquire the semaphore before spawning the task, and then pass the SemaphorePermit to the task (and make sure it doesn't get dropped before you are done). If you acquire the semaphore inside the tasks, there is a risk the main task might acquire all the permits before the first sub-task has a chance to run.
Since you can only move a SemaphorePermit<'static> inside the task, you will need to have a &'static Semaphore, for instance using lazy_static! or Box::leak.
I am attempting to create the simplest possible example that can get async fn hello() to eventually print out Hello, World!. This should happen without any external dependency like tokio, just plain Rust and std. Bonus points if we can get it done without ever using unsafe.
#![feature(async_await)]

async fn hello() {
    println!("Hello, World!");
}

fn main() {
    let task = hello();
    // Something beautiful happens here, and `Hello, World!` is printed on screen.
}
I know async/await is still a nightly feature, and it is subject to change in the foreseeable future.
I know there is a whole lot of Future implementations, I am aware of the existence of tokio.
I am just trying to educate myself on the inner workings of standard library futures.
My helpless, clumsy endeavours
My vague understanding is that, first off, I need to Pin task down. So I went ahead and
let pinned_task = Pin::new(&mut task);
but
the trait `std::marker::Unpin` is not implemented for `std::future::GenFuture<[static generator#src/main.rs:7:18: 9:2 {}]>`
so I thought, of course, I probably need to Box it, so I'm sure it won't move around in memory. Somewhat surprisingly, I get the same error.
What I could get so far is
let pinned_task = unsafe {
    Pin::new_unchecked(&mut task)
};
which is obviously not something I should do. Even so, let's say I got my hands on the pinned Future. Now I need to poll() it somehow. For that, I need a Waker.
So I tried to look around on how to get my hands on a Waker. On the doc it kinda looks like the only way to get a Waker is with another new_unchecked that accepts a RawWaker. From there I got here and from there here, where I just curled up on the floor and started crying.
This part of the futures stack is not intended to be implemented by many people. The rough estimate that I have seen is that maybe there will be 10 or so actual implementations.
That said, you can fill in the basic aspects of an executor that is extremely limited by following the function signatures needed:
use std::{
    future::Future,
    ptr,
    task::{Context, Poll, RawWaker, RawWakerVTable, Waker},
};

async fn hello() {
    println!("Hello, World!");
}

fn main() {
    drive_to_completion(hello());
}

fn drive_to_completion<F>(f: F) -> F::Output
where
    F: Future,
{
    let waker = my_waker();
    let mut context = Context::from_waker(&waker);
    let mut t = Box::pin(f);

    loop {
        // `as_mut` reborrows the pinned future so it can be
        // polled more than once without being moved.
        match t.as_mut().poll(&mut context) {
            Poll::Ready(v) => return v,
            Poll::Pending => panic!("This executor does not support futures that are not ready"),
        }
    }
}

type WakerData = *const ();

unsafe fn clone(_: WakerData) -> RawWaker {
    my_raw_waker()
}
unsafe fn wake(_: WakerData) {}
unsafe fn wake_by_ref(_: WakerData) {}
unsafe fn drop(_: WakerData) {}

static MY_VTABLE: RawWakerVTable = RawWakerVTable::new(clone, wake, wake_by_ref, drop);

fn my_raw_waker() -> RawWaker {
    RawWaker::new(ptr::null(), &MY_VTABLE)
}

fn my_waker() -> Waker {
    unsafe { Waker::from_raw(my_raw_waker()) }
}
Starting at Future::poll, we see we need a Pinned future and a Context. Context is created from a Waker which needs a RawWaker. A RawWaker needs a RawWakerVTable. We create all of those pieces in the simplest possible ways:
Since we aren't trying to support the Pending case, we never need to actually do anything for it and can instead panic. This also means that the implementations of wake can be no-ops.
Since we aren't trying to be efficient, we don't need to store any data for our waker, so clone and drop can basically be no-ops as well.
The easiest way to pin the future is to Box it, but this isn't the most efficient possibility.
If you wanted to support Pending, the simplest extension is to have a busy loop, polling forever. A slightly more efficient solution is to have a global variable indicating that someone has called wake, and to block on that becoming true.
I'm writing a (toy) hash-and-cache decorator in TypeScript and can't find a good means of creating a solid generic one.
The code I have so far is
function cache(
    target: Object,
    propertyKey: string,
    // Likely we can do better than <any> here -- <Function<any>> maybe?
    descriptor: TypedPropertyDescriptor<any>
) {
    let cacheMap = new Map();
    let wrappedFn = descriptor.value;
    descriptor.value = function (...args: any[]) {
        if (cacheMap.has(args)) {
            console.log("Short-circuiting with result: " + cacheMap.get(args));
            return cacheMap.get(args);
        }
        let result = wrappedFn.apply(this, args);
        cacheMap.set(args, result);
        console.log("cacheMap %o", cacheMap);
        return result;
    };
    return descriptor;
}
Naturally this fails, since args is not a tuple but a list, which is mutable[1]. So each input, even if it's the same over and over, gets its own list/array in its own memory location with its own hash value, wherever that comes from.
I haven't found a Tuple type in TypeScript (or JS) yet -- is there one? Is there another workaround for this sort of problem?
Shouldn't this be an error? Map<T, U> should constrain T to implement IHashable or something, right? That's the point of types: to raise this issue before it takes a bunch of time out of your life.
Shouldn't this be an error? Map<T, U> should constrain T to implement IHashable or something, right?
No. Object identity is a real and well-defined thing in JavaScript; TypeScript doesn't attempt to force you to pretend it doesn't exist.
If the ECMAScript committee thought it was appropriate to enforce non-object-identity-based keying in maps, they could have restricted Map keys, but they didn't.
linuxfood has created bindings for sqlite3, for which I am thankful. I'm just starting to learn Rust (0.8), and I'm trying to understand exactly what this bit of code is doing:
extern mod sqlite;

fn db() {
    let database =
        match sqlite::open("test.db") {
            Ok(db) => db,
            Err(e) => {
                println(fmt!("Error opening test.db: %?", e));
                return;
            }
        };
I do understand basically what it is doing. It is attempting to obtain a database connection and also testing for an error. I don't understand exactly how it is doing that.
In order to better understand it, I wanted to rewrite it without the match statement, but I don't have the knowledge to do that. Is that possible? Does sqlite::open() return two variables, or only one?
How can this example be written differently without the match statement? I'm not saying that is necessary or preferable; however, it may help me to learn the language.
The outer statement is an assignment that assigns the value of the match expression to database. The match expression depends on the return value of sqlite::open, which probably is of type Result<T, E> (an enum with variants Ok(T) and Err(E)). If it's Ok, the variant carries a value, which the match expression destructures into db and evaluates to (so that value gets assigned to database). If it's Err, the variant carries an error object, which is printed, and the function returns.
Without using a match statement, this could be written as follows (just because you explicitly asked for a version without match; most people would consider this bad coding style):
let res = sqlite::open("test.db");
if res.is_err() {
    println!("Error opening test.db: {:?}", res.unwrap_err());
    return;
}
let database = res.unwrap();
I'm just learning Rust myself, but this is another way of dealing with this.
if let Ok(database) = sqlite::open("test.db") {
    // Handle success case
} else {
    // Handle error case
}
See the documentation about if let.
This function open returns SqliteResult<Database>; given the definition pub type SqliteResult<T> = Result<T, ResultCode>, that is std::result::Result<Database, ResultCode>.
Result is an enum, and you fundamentally cannot access the variants of an enum without matching: that is, quite literally, the only way. Sure, you may have methods for it abstracting away the matching, but they are necessarily implemented with match.
You can see from the Result documentation that it does have convenience methods like is_err, which is approximately this (it's not precisely this but close enough):
fn is_err(&self) -> bool {
    match *self {
        Ok(_) => false,
        Err(_) => true,
    }
}
and unwrap (again only approximate):
fn unwrap(self) -> T {
    match self {
        Ok(t) => t,
        Err(e) => fail!(),
    }
}
As you see, these are implemented with matching. In your case, using the match directly is the best way to write this code.
sqlite::open() is returning an enum. Enums are a little different in Rust: each variant of an enum can have fields attached to it.
See http://static.rust-lang.org/doc/0.8/tutorial.html#enums
So in this case the SqliteResult enum can be either Ok or Err. If it is Ok, it has the reference to the db attached to it; if it is Err, it has the error details.
With a C# or Java background, you could think of SqliteResult as a base class that Ok and Err inherit from, each with their own relevant information. In this scenario the match clause is simply checking the type to see which subtype was returned. I wouldn't get too fixated on this parallel, though; it's a bad idea to try too hard to map concepts between languages.