I have never used ZeroMQ and first heard of it an hour ago, but from the guide (this guide) it sounds like it offers async I/O.
It also happens that there is a Nim port: this one.
So I was wondering: does the async magic have something to do with async/await? Those keywords are not present in the Nim port (which is just c2nim output). Or is it something internal to ZMQ that the API doesn't have to bother about?
I thought async/await was the kind of thing that has to bubble up to the uppermost main loop (the framework's event loop), so the API would have to be async-aware.
Is this a complete misconception on my part?
The native ZeroMQ API supports both blocking and non-blocking I/O.
For this purpose there are flags: adding zmq.NOBLOCK to a send/recv call switches it to a non-blocking mode of operation.
Whether and how that is exposed is up to the respective language wrapper.
If I read the Nim ZeroMQ wrapper you mentioned above, it seems to me that the send() and recv() function wrappers are hardcoded to the blocking version.
The wrapper also seems not to handle wire-level message sizing correctly when a Nim-based node of a distributed system meets another node running ZeroMQ 2.1.x, which is still common in heterogeneous distributed-system realms.
ZeroMQ also has a poll() method with a timeout parameter, so multiplexed I/O over several channels can be operated under soft real-time constraints.
While the accepted answer was true at the time, async support with ZMQ is now built into the wrapper, and examples are provided:
See:
https://github.com/nim-lang/nim-zmq/blob/master/zmq/asynczmq.nim
https://github.com/nim-lang/nim-zmq/blob/master/examples/ex08_async_reqrep.nim
You can also work around the blocking behaviour of ZMQ manually, so as not to block the async dispatch loop, using poll / sleepAsync:
import zmq
import std/asyncdispatch

let
  zmq_timeout = 50
  async_loop_time = 450 # spend more time on async stuff than on zmq stuff

var
  conn = listen("tcp://127.0.0.1:36000", mode = PAIR)
  poller = initZPoll([conn], ZMQ_POLLIN)

if poller.poll(zmq_timeout):
  if events(poller[0]):
    var res = poller[0].receive()
    # Do async stuff with res
else:
  waitFor sleepAsync(async_loop_time) # Calling sleepAsync is a trick to make the async dispatch loop progress for a time
I am new to Akka and still trying to understand the different Akka and streaming concepts. For a new feature I need to add an HTTP call to an already existing stream which is working on an internal object. Something like this -
val step1Flow = Flow[SampleObject].filter(...--Filtering condition--...)
val step2Flow = Flow[SampleObject].map(obj => {
...
-- Business logic to update values in the obj --
...
})
...
override val flowGraph: Flow[SampleObject, SampleObject, NotUsed] =
bufferIn.via(Flow.fromGraph(GraphDSL.create() {
implicit builder =>
import GraphDSL.Implicits._
...
val step1 = builder.add(step1Flow)
val step2 = builder.add(step2Flow)
val step3 = builder.add(step3Flow)
...
source ~> step1 ~> step2 ~> step3 ~> merge
...
}
I need to add the new HTTP request flow (let's call it newFlow) after step1. All these flows have Inlet and Outlet of type SampleObject. Now my understanding is that newFlow would need to be blocking, because the outlet needs to be SampleObject only. For that I have used Await on the HTTP call's future. The code looks like this -
val responseFuture: Future[(Try[HttpResponse], SomeContext)] =
Source
.single(httpRequest -> context)
.via(Retry(retrySettings).join(clientFlow))
.runWith(Sink.head)
...
val (httpTry, passedAlongContext) = Await.result(responseFuture, 30.seconds)
-- logic to process response and return SampleObject --
Now this works fine, but I think there should be a better way to do this without using Await. Also, I think this blocks the main thread till the request completes, which is going to affect the app's throughput.
Could you please advise whether the approach I used is correct or not? And how do I make use of some other thread pool to handle these blocking calls so my main thread pool is not affected?
This question seems very similar to mine but I do not understand it completely - connect Akka HTTP to Akka stream. Also, I can't change step2 or the flows after it.
EDIT: Added some code details for the stream.
I ended up using the approach mentioned in the question because I couldn't find anything better after looking around. Adding this step decreased the throughput of my application as expected, but there are approaches that can be used to increase it. Check out these excellent blog posts by Colin Breck -
https://blog.colinbreck.com/maximizing-throughput-for-akka-streams/
https://blog.colinbreck.com/partitioning-akka-streams-to-maximize-throughput/
To summarize -
Use Asynchronous Boundaries for flows which are blocking.
Use Futures if possible and add callbacks to futures. There are several ways to do that.
Use Buffers. There are several types of buffers available; choose what suits your needs.
Other than these, you can use built-in stages like -
Use "Broadcast" to broadcast your events to multiple consumers.
Use "Partition" to partition your stream into multiple streams based on some condition.
Use "Balance" to partition your stream when there is no logical way to partition your events, or when they could all have different workloads.
You could use any one or a combination of the options above.
Let's say we have an ASP.NET Core controller receiving a string as a payload, on the order of a couple of megabytes. First method implementation:
[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
var len = (int)this.Request.ContentLength;
byte[] b = new byte[len];
await this.Request.Body.ReadAsync(b,0,len);
var content = Encoding.UTF8.GetString(b);
.....
}
The body is read with ReadAsync, which is good: it is I/O on a socket, and making it asynchronous is essentially free given the nature of the call. But if we look at what comes after, the GetString() method is purely CPU-bound and blocking, with linear complexity. This affects performance somewhat, since other clients wait while my bytes get converted to a string. I think the way to avoid this is to run GetString() on the thread pool, like this:
[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
var len = (int)this.Request.ContentLength;
byte[] b = new byte[len];
await this.Request.Body.ReadAsync(b,0,len);
var content = await Task.Run(()=>ASCIIEncoding.UTF8.GetString(b));
.....
}
Please don't mind the return type right now; more has to be done in the function.
So the question: is the second approach overkill? If so, where is the boundary between what can be run as blocking code and what has to be moved to another thread?
You are very much abusing Task.Run there. Task.Run is used to off-load work onto a different thread and asynchronously wait for it to complete. So every Task.Run call will cause thread context switches. Of course, that is usually a very bad idea to do for things that should not run on their own thread.
Things like ASCIIEncoding.UTF8.GetString(b) are really fast. The overhead involved in creating and managing a thread that encapsulates this is much larger than just executing this directly on the same thread.
You should generally use Task.Run only to off-load (primarily synchronous) work that can benefit from running on its own thread, or in cases where you have work that would take a bit longer and block the current execution.
In your example, that is simply not the case, so you should just call those methods synchronously.
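For contrast, here is a minimal sketch of the kind of case where Task.Run does pay off: off-loading CPU-bound work from a UI thread in a desktop app so the interface stays responsive. The WinForms form and the ComputeExpensiveReport method are hypothetical, purely for illustration - this is not something you would do inside an ASP.NET controller.

using System;
using System.Threading.Tasks;
using System.Windows.Forms;

public class ReportForm : Form
{
    private readonly Label resultLabel = new Label { AutoSize = true, Top = 10 };

    public ReportForm()
    {
        var button = new Button { Text = "Generate", Top = 40 };
        button.Click += OnGenerateClick;
        Controls.Add(resultLabel);
        Controls.Add(button);
    }

    // The UI thread stays responsive while the computation runs on the thread pool.
    private async void OnGenerateClick(object sender, EventArgs e)
    {
        var report = await Task.Run(() => ComputeExpensiveReport());
        resultLabel.Text = report; // back on the UI thread after the await
    }

    // Made-up stand-in for work that is long enough to justify its own thread.
    private static string ComputeExpensiveReport()
    {
        long sum = 0;
        for (var i = 0; i < 100_000_000; i++) sum += i;
        return "Checksum: " + sum;
    }
}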
If you really want to reduce the work for that code, you should look at how to work properly with streams. What you do in your code is read the request body stream completely, and only then work on it (translating it into a string).
Instead of separating the process of reading the binary stream and then translating it into a string, you could just read the stream as a string directly using a StreamReader. That way, you can read the request body directly, even asynchronously. So depending on what you actually do with it afterwards, that may be a lot more efficient.
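For illustration, here is a minimal sketch of that StreamReader approach, assuming the same controller action as in the question and that System.IO and System.Text are imported; treat it as an approximation rather than the definitive implementation.

[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
    // Read the request body directly as a string, asynchronously,
    // instead of buffering the bytes and decoding them in a separate step.
    using (var reader = new StreamReader(this.Request.Body, Encoding.UTF8))
    {
        var content = await reader.ReadToEndAsync();
        // ... work with content ...
    }
}

Depending on what happens afterwards, you may not even need the full string in memory at once; parsing incrementally from the reader would avoid the large allocation entirely.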
I'm trying to use Dart with sqlite, with this project dart-sqlite.
But I found a problem: the API it provides is synchronous in style. The code looks like this:
// Iterating over a result set
var count = c.execute("SELECT * FROM posts LIMIT 10", callback: (row) {
print("${row.title}: ${row.body}");
});
print("Showing ${count} posts.");
With such code I can't use Dart's Future support, and the code will block on SQL operations.
I wonder how to change the code to an asynchronous style. You can see it defines some native functions here: https://github.com/sam-mccall/dart-sqlite/blob/master/lib/sqlite.dart#L238
_prepare(db, query, statementObject) native 'PrepareStatement';
_reset(statement) native 'Reset';
_bind(statement, params) native 'Bind';
_column_info(statement) native 'ColumnInfo';
_step(statement) native 'Step';
_closeStatement(statement) native 'CloseStatement';
_new(path) native 'New';
_close(handle) native 'Close';
_version() native 'Version';
The native functions are mapped to some c++ functions here: https://github.com/sam-mccall/dart-sqlite/blob/master/src/dart_sqlite.cc
Is it possible to change this to be asynchronous? If so, what should I do?
If it's not possible and I have to rewrite it, do I have to rewrite all of:
The dart file
The c++ wrapper file
The actual sqlite driver
UPDATE:
Thanks to @GregLowe's comment: Dart's Completer can convert callback style to Future style, which lets me use Dart's doSomething().then(...) instead of passing a callback function.
But after reading the source of dart-sqlite, I realized that in its implementation the callback is not event-based:
int execute([params = const [], bool callback(Row)]) {
  _checkOpen();
  _reset(_statement);
  if (params.length > 0) _bind(_statement, params);
  var result;
  int count = 0;
  var info = null;
  while ((result = _step(_statement)) is! int) {
    count++;
    if (info == null) info = new _ResultInfo(_column_info(_statement));
    if (callback != null && callback(new Row._internal(count - 1, info, result)) == true) {
      result = count;
      break;
    }
  }
  // If update affected no rows, count == result == 0
  return (count == 0) ? result : count;
}
Even if I use a Completer, it won't improve performance. I think I may have to rewrite the C++ code to make it event-based first.
You should be able to write a wrapper without touching the C++. Have a look at how to use the Completer class in dart:async. Basically you need to create a Completer, return Completer.future immediately, and then call Completer.complete(row) from the existing callback.
Re: update. Have you seen this article, specifically the bit about asynchronous extensions? i.e. If the C++ API is synchronous you can run it in a separate thread, and use messaging to communicate with it. This could be a way to do it.
The big problem you've got is that SQLite is an embedded database; in order to process your query and provide your results, it must do computation (and I/O) in your process. What's more, in order for its transaction handling system to work, it either needs its connection to be in the thread that created it, or for you to run in serialized mode (with a performance hit).
Because these are fairly hard constraints, your plan of switching things to an asynchronous operation mode is unlikely to go well except by using multiple threads. Since using multiple connections complicates things a lot (as you can't share some things between them, such as TEMP TABLEs) let's consider going for a single serialized connection; all activity will be serialized at the DB level, but for an application that doesn't use the DB a lot it will be OK. At the C++ level, you'd be talking about calling that execute from another thread and then sending messages back to the caller thread to indicate each row and the completion.
But you'll take a real hit when you do this; in particular, you're committing to only doing one query at a time, as the technique runs into significant problems with semantic effects when you start using two connections at once and the DB forces serialization on you with one connection.
It might be simpler to do the above by putting the synchronous-asynchronous coupling at the Dart level by managing the worker thread and inter-thread communication there. That would let you avoid having to change the C++ code significantly. I don't know Dart well enough to be able to give much advice there.
Myself, I'd just stick with synchronous connection processing so that I can make my application use multi-threaded mode more usefully. I'd be taking the hit with the semantics and giving each thread its own connection (possibly allocated lazily) so that overall speed was better, but I do come from a programming community that regards threads as relatively heavyweight resources, so make of that what you will. (Heavy threads can do things that reduce the number of locks they need that it makes no sense to try to do with light threads; it's about overhead management.)
I'm learning SignalR using the .NET client (not JavaScript), and was hoping for some clarification on how to invoke hub proxy methods in a synchronous or asynchronous manner.
Method with no return value
So far I've been doing something like this:-
myHubProxy.Invoke("DoSomething");
I've found this to be asynchronous, which is fine as it's effectively "fire-and-forget" and doesn't need to wait for a return value. A couple of questions though:-
Are there any implications with wrapping the Invoke in a try..catch block, particularly with it being asynchronous? I might want to know if the call failed.
Are there any scenarios where you would want to call a method that doesn't return a value synchronously? I've seen the .Wait() method mentioned, but I can't think why you would want to do this.
Method with return value
So far I've been using the Result property, e.g.:-
var returnValue = myHubProxy.Invoke<string>("DoSomething").Result;
Console.WriteLine(returnValue);
I'm assuming this works synchronously - after all, it couldn't proceed to the next line until a result had been returned. But how do I invoke such a method asynchronously? Is it possible to specify a callback method, or should I really be using async/await these days (something I confess I still haven't got around to learning)?
If you want to write asynchronous code, then you should use async/await. I have an intro on my blog with a number of followup resources at the end.
When you start an asynchronous operation (e.g., Invoke), then you get a task back. The Task type is used for asynchronous operations without a return value, and Task<T> is used for asynchronous operations with a return value. These task types can indicate to your code when the operation completes and whether it completed successfully or with error.
Although you can use Task.Wait and Task<T>.Result, I don't recommend them. For one, they wrap any exceptions in an AggregateException, which makes your error handling code more cumbersome. It's far easier to use await, which does not do this wrapping. Similarly, you can register a callback using ContinueWith, but I don't recommend it; you need to understand a lot about task schedulers and whatnot to use it correctly. It's far easier to use await, which does the (most likely) correct thing by default.
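As a rough sketch of what that looks like with the SignalR .NET client (assuming the classic Microsoft.AspNet.SignalR.Client package, a placeholder URL, and a hub named "MyHub" - adjust the names to your own setup):

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public static class HubExample
{
    public static async Task RunAsync()
    {
        var connection = new HubConnection("http://localhost:8080/signalr");
        var myHubProxy = connection.CreateHubProxy("MyHub");
        await connection.Start();

        // No return value: await the Task so failures surface as exceptions here.
        await myHubProxy.Invoke("DoSomething");

        // With a return value: await the Task<string> to get the result without blocking.
        try
        {
            var returnValue = await myHubProxy.Invoke<string>("DoSomething");
            Console.WriteLine(returnValue);
        }
        catch (Exception ex)
        {
            // Unlike .Result, await rethrows the original exception, not an AggregateException.
            Console.WriteLine(ex.Message);
        }
    }
}

The fire-and-forget style from the question still works (just don't await the returned task), but then failures go unobserved; awaiting, or at least attaching a continuation, is what lets errors surface.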
Invoke returns a Task, so the server request is still performed asynchronously; .Result merely blocks the calling thread until that Task completes.
There is no reason to hold up a thread for the duration of the call; that's why you use async.
If you fire the call on the GUI thread it's even more important to do it asynchronously, because otherwise the GUI will not respond while the call is in progress.
1) You need to use the await keyword if you want try/catch blocks to actually catch server faults. Like:
try
{
    var foo = await proxy.Invoke<string>("Bar");
}
catch (Exception ex)
{
    //act on error
}
2) I think you meant to ask whether there is any reason to call it asynchronously? And yes, like I said, you do not want to block any threads while the request is being made.
I have a function that loads a user object from a web service asynchronously.
I wrap this function call in another function and make it synchronous.
For example:
private function getUser():User{
    var newUser:User;
    var f:UserFactory = new UserFactory();
    f.GetCurrent(function(u:User):void{
        newUser = u;
    });
    return newUser;
}
UserFactory.GetCurrent looks like this:
public function GetCurrent(callback:Function):void{ }
But my understanding is that there is no guarantee that, by the time getUser() returns, newUser will actually be the new user?
How do you accomplish this type of return function in Flex?
This way madness lies.
Seriously, you're better off not trying to force an asynchronous call into some kind of synchronous architecture. Learn how the event handling system works in your favour and add a handler for the result event. In fact, here's the advice straight from the flexcoders FAQ :
Q: How do I make synchronous data calls?
A: You CANNOT do synchronous calls. You MUST use the result event. No,
you can't use a loop, or setInterval or even callLater. This paradigm is
quite aggravating at first. Take a deep breath, surrender to the
inevitable, resistance is futile.
There is a generic way to handle the asynchronous nature of data service
calls called ACT (Asynchronous Call Token). Search for this in
Developing Flex Apps doc for a full description.
See my answer here:
DDD and Asynchronous Repositories
Flex and Flash Remoting is inherently asynchronous so fighting against that paradigm is going to give you a ton of trouble. Our service delegates return AsyncToken from every method and we've never had a problem with it.
If you want to ensure that the application doesn't render a new view or perform some other logic until the result/fault comes back, you could do the following:
Attach an event listener for a custom event that will invoke your "post result/fault code"
Make the async call
Handle the result/fault
Dispatch the custom event to trigger your listener from #1
Bear in mind this is going to lead to a lot of annoying boilerplate code every time you make an async call. I would consider very carefully whether you really need a synchronous execution path.
You can't convert an async call into a sync one without something like a sleep() function, and as far as I know that is missing in AS3. And yes, it is not guaranteed that newUser will contain the new user before the return statement.
The AS3 port of the PureMVC framework has mechanisms for implementing synchronous operations in a Model-View-Controller context. It doesn't try to synchronize asynchronous calls, but it lets you add a synchronous application pattern for controlling them.
Here's an example implementation: PureMVC AS3 Sequential Demo.
In this example, five subcommands are run sequentially, together composing a whole command. In your example, you would implement getUser() as a command, which would call commandComplete() in the getURL() (or whatever) callback. This means the next command would be certain that the getUser() operation is finished.