I confess that I haven't studied core.async yet, i.e. I don't know the Clojure way to work asynchronously, but I know that it mostly uses channels. I work mainly in ClojureScript and I'm going to start writing a service worker.
I found this library that wraps promises as channels, but it feels like there isn't a lot of work involved either way, with or without the library.
So, should I use channels over promises in any situation?
Is there a simple conversion from promises to core.async using channels?
If you look over the original rationale for core.async, it becomes clearer where it has advantages over using another thread, such as with future. ClojureScript was one of the big drivers, since it is single-threaded and there are no other options.
Some resources:
https://clojure.org/news/2013/06/28/clojure-core-async-channels
https://github.com/clojure/core.async/blob/master/examples/walkthrough.clj
https://cognitect.com/videos.html (2 on CLJS core.async)
https://github.com/cognitect/async-webinar
https://rigsomelight.com/drafts/clojurescript-core-async-todos.html
https://medium.com/@loganpowell/cljs-core-async-101-f6522faf536d
I'm using Tokio and I want to receive requests from two different mpsc queues. select! seems like the way to go, but I'm not sure what the difference is between futures::select! and tokio::select!. Under which circumstances should you use one over the other?
tokio::select! was built out of experiences with futures::select!, but improves a bit on it to make it more ergonomic. E.g. the futures-rs version of select! requires Futures to implement FusedFuture, whereas Tokio's version no longer requires this.
Instead of this, Tokio's version supports preconditions in the macro to cover the same use-cases.
The PR in the tokio repo elaborates a bit more on this.
This change was also proposed for the futures-rs version, but has not been implemented there so far.
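For illustration, here is a minimal sketch of what such a precondition looks like, using the two mpsc queues from the question (the channel names, payload type and buffer size are invented for the example). Once a receiver reports its channel closed, the precondition disables that branch instead of requiring a fused future:

    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        let (tx1, mut rx1) = mpsc::channel::<u32>(8);
        let (tx2, mut rx2) = mpsc::channel::<u32>(8);

        tokio::spawn(async move { tx1.send(1).await.ok(); });
        tokio::spawn(async move { tx2.send(2).await.ok(); });

        let (mut open1, mut open2) = (true, true);
        while open1 || open2 {
            tokio::select! {
                // A branch whose precondition is false is not polled at all,
                // so a closed channel never has to provide a FusedFuture.
                msg = rx1.recv(), if open1 => {
                    match msg {
                        Some(v) => println!("queue 1: {v}"),
                        None => open1 = false, // channel closed, disable branch
                    }
                }
                msg = rx2.recv(), if open2 => {
                    match msg {
                        Some(v) => println!("queue 2: {v}"),
                        None => open2 = false,
                    }
                }
            }
        }
    }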
If you already have Tokio included in your project, then using Tokio's version seems preferable. But if you have not and do not want to add an additional dependency, then the futures-rs version will cover most use-cases too in a nearly identical fashion. The main difference is that some Futures might need to be converted into FusedFutures through the FutureExt::fuse() extension method.
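For comparison, this is roughly what the fuse-and-pin ceremony looks like with futures-rs (task_one and task_two are placeholder async fns):

    use futures::{future::FutureExt, pin_mut, select};

    async fn task_one() { /* ... */ }
    async fn task_two() { /* ... */ }

    async fn race() {
        // Plain async fns do not implement FusedFuture, so wrap them with
        // fuse(); the fused futures are not Unpin, so pin them to the stack.
        let t1 = task_one().fuse();
        let t2 = task_two().fuse();
        pin_mut!(t1, t2);

        select! {
            () = t1 => println!("task one completed first"),
            () = t2 => println!("task two completed first"),
        }
    }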
To complement @matthias247's answer, a related big difference is that futures::select! takes futures in branch expressions by mutable reference, so uncompleted futures can be re-used in a loop.
tokio::select!, on the other hand, consumes the futures passed to it. To get behavior similar to futures::select! you need to explicitly pass a reference (e.g. &mut future) and pin it if necessary (e.g. if it is an async fn). The Tokio docs have a section on this, Resuming an async operation.
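Here is a rough sketch of that pattern, with a sleep standing in for the real async operation (the durations are arbitrary):

    use tokio::time::{interval, sleep, Duration};

    #[tokio::main]
    async fn main() {
        let operation = sleep(Duration::from_millis(50));
        // Pin the future once so it can be polled by reference below.
        tokio::pin!(operation);

        let mut ticker = interval(Duration::from_millis(10));
        loop {
            tokio::select! {
                // Taking &mut operation borrows the pinned future instead of
                // consuming it, so if another branch wins, the uncompleted
                // operation is resumed on the next iteration.
                _ = &mut operation => {
                    println!("operation completed");
                    break;
                }
                _ = ticker.tick() => println!("still waiting..."),
            }
        }
    }

Breaking out of the loop matters: it avoids polling the operation again after it has completed.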
This thread has an in-depth explanation of why Tokio decided not to use FusedFuture.
OIO is deprecated in the current Netty version, and all the serial-port implementations I could find use it.
Now I haven't been able to find any sort of guide on how to write your own channel, so maybe I'm doing it all wrong.
I've tried starting from NioSocket but keep getting stuck on the Unsafe override...
Could someone tell me which base class I should extend to implement jSerialComm or any other lib? Or point me in the direction of a decent how-to?
I am finding myself in the same boat. I have done work with both jSerialComm and PureJavaComm, and benchmarked both their input/output stream performance and the OIO implementations I found on GitHub. https://github.com/gsrunion/Java-Serial-Solution-Performance-Tests
Per the issue https://github.com/Ziver/Netty-Transport-jSerialComm/issues/2 I believe I am going to have to take an approach where I use Netty's EmbeddedChannel as a go-between between the input/output streams and a Netty stack.
Does Spirit provide any capabilities for working with non-blocking IO?
To provide a more concrete example: I'd like to use Boost's Spirit parsing framework to parse data coming in from a network socket that's been placed in non-blocking mode. If the data is not completely available, I'd like to be able to use that thread to perform other work instead of blocking.
The trivial answer is to simply read all the data before invoking Spirit, but potentially gigabytes of data would need to be received and parsed from the socket.
It seems that in order to support non-blocking I/O while parsing, Spirit would need some ability to partially parse the data and to pause and save its parse state when no more data is available. Additionally, it would need to be able to resume parsing from the saved parse state when data does become available. Or maybe I'm making this too complicated?
TODO Will post an example for a simple single-threaded 'event-based' parsing model. This is largely trivial but might just be what you need.
For anything less trivial, please heed the following considerations/hints/tips:
How would you be consuming the result? You wouldn't have the synthesized attributes any earlier anyway, or are you intending to use semantic actions on the fly?
That doesn't usually work well due to backtracking. The caveats could be worked around by careful and judicious use of qi::hold, qi::locals and putting semantic actions with side-effects only at stations that will never be backtracked. In other words:
this is bound to be very error-prone
this naturally applies to a limited set of grammars only (grammars with rich contextual information will not lend themselves well to this treatment).
Now, everything can be forced, of course, but in general, experienced programmers should have learned to avoid swimming upstream.
Now, if you still want to do this:
You should be able to make the Spirit library thread-safe/reentrant by defining BOOST_SPIRIT_THREADSAFE and linking to libboost_thread. Note this makes the globals used by Spirit thread-safe (at the cost of fine-grained locking) but not your parsers: you can't share your own parsers/rules/sub-grammars/expressions across threads. In fact, you can only share your own (Phoenix/Fusion) functors iff they are thread-safe, and any other extensions defined outside the core Spirit library should be audited for thread-safety.
If you manage the above, I think by far the best approach would seem to be to:
use boost::spirit::istream_iterator (or, for binary/raw character streams I'd prefer to define a similar boost::spirit::istreambuf_iterator using the boost::spirit::multi_pass<> template class) to consume the input. Note that depending on your grammar, quite a bit of memory could be used for buffering and the performance is suboptimal
run the parser on its own thread (or logical thread, e.g. Boost Asio 'strands' or its famous 'stackless coroutines')
use coarse-grained semantic actions as shown above to pass messages to another logical thread that does the actual processing.
Some more loose pointers:
you can easily 'fuse' some functions to handle lazy evaluation of your semantic action handlers using BOOST_FUSION_ADAPT_FUNCTION and friends; this reduces the amount of cruft you have to write to get simple things working, like normal C++ overload resolution in semantic actions, especially when you're not using C++0x and BOOST_RESULT_OF_USE_DECLTYPE
Because you will want to avoid semantic actions with side-effects, you should probably look at Inherited Attributes and qi::locals<> to coordinate state across rules in 'pure functional fashion'.
I have been learning F# recently, being particularly interested in its ease of exploiting data parallelism. The data |> Array.map |> Async.Parallel |> Async.RunSynchronously idiom seems very easy to understand and straightforward to use and get real value from.
So why is it that async is not really intended for this? Donald Syme himself says that PLINQ and Futures are probably a better choice. And other answers I've read here agree with that, as well as recommending the TPL. (PLINQ doesn't seem too much different from the above built-in functions, as long as you're using the F# PowerPack to get the PSeq functions.)
F# and functional languages make a lot of sense for this, and some applications have achieved great success with async parallelism.
So why shouldn't I use async to execute parallel data processes? What am I going to lose by writing parallel async code instead of using PLINQ or TPL?
So why shouldn't I use async to execute parallel data processes?
If you have a tiny number of completely independent non-async tasks and lots of cores then there is nothing wrong with using async to achieve parallelism. However, if your tasks are dependent in any way or you have more tasks than cores or you push the use of async too far into the code then you will be leaving a lot of performance on the table and could do a lot better by choosing a more appropriate foundation for parallel programming.
Note that your example can be written even more elegantly using the TPL from F# though:
Array.Parallel.map f xs
What am I going to lose by writing parallel async code instead of using PLINQ or TPL?
You lose the ability to write cache oblivious code and, consequently, will suffer from lots of cache misses and, therefore, all cores stalling waiting for shared memory which means poor scalability on a multicore.
The TPL is built upon the idea that child tasks should execute on the same core as their parent with a high probability and, therefore, will benefit from reusing the same data because it will be hot in the local CPU cache. There is no such assurance with async.
I wrote an article that re-implements one C# TPL sample using both Task and Async, which also has some comments on the difference between the two. You can find it here and there is also a more advanced async-based version.
Here is a quote from the first article that compares the two options:
The choice between the two possible implementations depends on many factors. Asynchronous workflows were designed specifically for F#, so they more naturally fit with the language. They offer better performance for I/O bound tasks and provide more convenient exception handling. Moreover, the sequential syntax is quite convenient. On the other hand, tasks are optimized for CPU bound calculations and make it easier to access the result of calculation from other places of the application without explicit caching.
I always figured it's about what TPL, PLINQ, etc. give you over and above what Async does. (Cancellation is the one that comes to mind.) This question has some better answers.
This article hints at a slight performance advantage to TPL, but probably not enough to be significant.
Somebody that I work with and respect once remarked to me that there shouldn't be any need for the use of reflection in application code and that it should only be used in frameworks. He was speaking from a J2EE background, and my professional experience of that platform does generally bear that out, although I have written reflective application code using Java once or twice.
My experience of Ruby on Rails is radically different, because Ruby pretty much encourages you to write dynamic code. Much of what Rails gives you simply wouldn't be possible without reflection and metaprogramming and many of the same techniques are equally as applicable and useful to your application code.
Do you agree with the viewpoint that reflection is for frameworks only? I'd be interested to hear your opinions and experiences.
There's the old joke that any sufficiently sophisticated system written in a statically-typed language contains an incomplete, inferior implementation of Lisp.
Since your requirements tend to become more complicated as a project evolves, you often eventually find that the common idioms in statically-typed object systems eventually hit a wall. Sometimes reaching for reflection is the best solution.
I'm happy in dynamically-typed languages like Ruby, and statically-typed languages like C#, but the implicit reflection in Ruby often makes for simpler, easier-to-read code. (Depending on the metaprogramming magic required, sometimes harder to write).
In C#, I've found problems that couldn't be solved without reflection, because of information I didn't have until runtime. One example: when trying to manipulate some third-party code that generated proxies to Silverlight objects running in another process, I had to use reflection to invoke a specific strongly-typed "Generic" version of a method, because the marshalling required the caller to make an assumption about what the type of the object in the other process was in order to extract the data we needed from it, and C# doesn't allow the "type" of the generic method invocation to be specified at run time (except with reflection techniques). I guess you could argue our tool was kind of a framework, but I could easily imagine a case in an ordinary application facing a similar problem.
Reflection makes DRY a lot easier. It's certainly possible to write DRY code without reflection, but it's often much more verbose.
If some piece of information is encoded in my program in one way, why wouldn't I use reflection to get at it, if that's the easiest way?
It sounds like he's talking about Java specifically. And in that case, he's just citing a special case of this: in Java, reflection is so wonky it's almost never the easiest way to do something. :-) In other languages like Ruby, as you've seen, it often is.
Reflection is definitely heavily used in frameworks, but when used correctly can help simplify code in applications.
One example I've seen before is using a JDK Proxy of a large interface (20+ methods) to wrap (i.e. delegate to) a specific implementation. Only a couple of methods were overridden using an InvocationHandler; the rest of the methods were invoked via reflection.
Reflection can be useful, but it is slower than doing a regular method call. See this reflection comparison.
Reflection in Java is generally not necessary. It may be the quickest way to solve a certain problem, but I would rather work out the underlying problem that causes you to think it's necessary in app code. I believe this because it frequently pushes errors from compile time to run time, which is always a Bad Thing for software large enough that testing is non-trivial.
I disagree, my application uses reflection to dynamically create providers. I might also use reflection to control logic flow, if the logic is simple and doesn't warrant a more complicated pattern.
In C# I use reflection to grab attributes off enumeration values, which help me determine how to display an enumeration to an end user.
I disagree, reflection is very useful in application code and I find myself using it quite often. Most recently, I had to use reflection to load an assembly (in order to investigate its public types) from just the path of the assembly.
Several opinions on this subject are expressed here...
What is reflection and why is it useful?
Use reflection when there is no other way! This is a matter of performance!
If you have looked into .NET performance pitfalls before, it might not surprise you how slow normal reflection is: a simple test with repeated access to an int property proved to be ~1000 times slower using reflection compared to direct access to the property (comparing the average of the median 80% of the measured times).
See this: .NET reflection - performance
MSDN has a pretty nice article about When Should You Use Reflection?
If your problem is best solved by using reflection, you should use it.
(Note that the definition of 'best' is something learnt by experience :)
The definition of framework vs. application isn't all that black & white either. Sometimes your app needs a bit of framework to do its job well.
I think the observation that there shouldn't be any need for the use of reflection in application code and that it should only be used in frameworks is more or less true.
On the spectrum of how coupled pieces of code are, code joined by reflection is as loosely coupled as it comes.
As such, the code doing its job via reflection can quite happily fulfil its role in life knowing not a thing about the code which is using it.