In my methods, I use this.unblock and Meteor.defer interchangeably. The idea, as I understand it, is to follow the good practice of letting other method calls from the same client start running, without waiting for a single method to complete.
So is there a difference between them?
Thanks.
this.unblock()
- only runs on the server
- allows the next method call from the client to run before this call has finished. This is important because if you have a long-running method, you don't want it to block the UI until it finishes.
Meteor.defer()
- can run on either the server or the client
- basically says "run this code when there's no other code pending." It's similar to Meteor.setTimeout(func, 0).
If you defer execution of a function on the server, for example, it could still be running when the next method request comes in and would block that request.
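A minimal Node sketch of the defer behavior, using a stand-in defer helper (my own, not the real Meteor API) that mimics Meteor.defer's setTimeout(func, 0) semantics:

```javascript
// Stand-in for Meteor.defer (hypothetical helper, not the real API):
// schedule work for a later event-loop turn, roughly setTimeout(func, 0).
function defer(fn) { setTimeout(fn, 0); }

const order = [];

function methodBody() {
  order.push('method start');
  defer(() => order.push('deferred work')); // runs only after the method returns
  order.push('method end');
}

methodBody();
// Immediately after the call, the deferred work has not run yet:
// order is ['method start', 'method end'].
```

Contrast this with this.unblock(), which doesn't reschedule your code at all; it just tells the server it may start the client's next method call while the current one is still running.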
Is there a built-in method to execute code in a Firebase function if it times out?
I would also like to be able to execute code when the function ends (no matter how it ends). I'm thinking this would behave like catch and finally in a try block.
Is this possible in a Firebase function or must I implement my own timer?
There is no mechanism provided for this. Furthermore, implementing your own timer is prone to failure because Cloud Functions will forcibly terminate any code left running after a timeout or crash. You really have no guarantee that any function code will reliably complete after it has been started.
Your best bet is to understand and enable retries, then write code in your function to determine if your fallback code should run during a retry execution. Your function should strive to complete without any errors in order to tell Cloud Functions to stop retrying a background event that would trigger the function.
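One hedged sketch of that retry-aware pattern: a small helper (my own naming, not part of the Firebase SDK) that drops retried events once they're older than a cutoff, so a persistently failing function doesn't retry forever:

```javascript
// Hypothetical helper, not part of firebase-functions: decide whether a
// (possibly retried) background event is still fresh enough to process.
function shouldProcess(eventTimestampMs, nowMs, maxAgeMs) {
  return nowMs - eventTimestampMs <= maxAgeMs;
}

// Sketch of use inside a retry-enabled background function:
//   if (!shouldProcess(Date.parse(context.timestamp), Date.now(), 10 * 60 * 1000)) {
//     return; // resolve cleanly so Cloud Functions stops retrying
//   }
//   ... main work; on failure, throw to request another retry ...

const TEN_MINUTES = 10 * 60 * 1000;
console.log(shouldProcess(0, 5 * 60 * 1000, TEN_MINUTES));  // true: 5 minutes old
console.log(shouldProcess(0, 11 * 60 * 1000, TEN_MINUTES)); // false: too old, give up
```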
I have a Netty ChannelInboundHandler in which I'm making calls to an external service (which is relatively slow). Depending upon the result of that call, I'm rewriting the pipeline to invoke different subsequent handlers. I have this working using a different executor group for the handler, but that's still inefficient, since the handler thread is doing nothing while waiting for the external service to respond.
Complicating the issue is that I'm doing this from a derivative of the PortUnificationServerHandler (itself a derivative of ByteToMessageDecoder), since the external service looks at the SNI hostname to determine whether or not to insert a SslHandler and decode or just to pass the traffic along straight.
I've seen how the HexDumpProxy example makes a call to an external service, but I don't see how this can be done from within something like ByteToMessageDecoder. My rough idea is to create a future for the external request, then have the future call ChannelHandlerContext.fireUserEventTriggered on completion with a custom event that my handler can look for and use to do the pipeline rewrites. But that feels ugly, and my tests suggested the event wouldn't even reach my own handler...
Suggestions?
I'm having trouble getting multiple RPC calls to operate over a single connection. Server and client are both operating asynchronously using a completion queue.
I fire off a streaming call (getData), which sends one reply per second for 10 seconds. I wait a couple of seconds, then try to fire off a getVersion call (a unary call) and it doesn't come back until the getData call completes. Examination of the server shows that the getVersion call never hit the server until getData finished up.
And if I try to start multiple calls while the first getData is running, they all run once the first getData finishes. And in fact, they all run in parallel - for instance, if I fire off multiple getData calls, I can see all of them running in parallel after the first (blocking) getData finishes.
It's like you can queue up all you want, but once something is in progress you can't get a new call started on that channel?
Is it supposed to do this? It doesn't seem like correct behavior, but my experience with gRPC is somewhat limited.
The problem was a bug in the way I was waiting for the next time to send data. I was blocking things that I shouldn't have been.
I'm using Vert.x 3.5.0 and am very new to it. I'm trying to cancel code execution when a client cancels their request.
Currently it's setup to where the first thing we do is deploy a verticle to run an HttpServer, and we add all of our Routes to the Router. From here we have a handler function per route. Inside this handler I'm trying this:
routingContext.request().connection().closeHandler(v -> {
  // execute logic for ending execution
});
This is the only method I've seen that actually catches the closing of the connection, but the problem is it doesn't execute the handler early enough in the eventloop. So if I have any logs in there it will look like:
...[vert.x-eventloop-thread-0].....
...[vert.x-eventloop-thread-0]..... (Let's say I cancelled the request at this point)
...[vert.x-eventloop-thread-0].....
...[vert.x-eventloop-thread-0]..... (Final log of regular execution before waiting on asynchronous db calls)
...[vert.x-eventloop-thread-0]..... (Execution of closeHandler code)
I'd like for the closeHandler code to interrupt the process and execute essentially when the event actually happens.
This seems to always be the case regardless of when I cancel the request, so I figure I'm missing something about how Vert.x handles asynchrony.
I've tried executing the closeHandler code via a worker verticle, inside the blockingHandler from the Router object, and inside the connectionHandler from the HttpServer object. All had the same result.
The main code execution is also not executed by a worker verticle, just a regular one.
Thanks!
It seems you misunderstand what a closeHandler is. It's a callback that Vert.x invokes when the connection is being closed; it is not a way to terminate the request early.
If you would like to terminate the request early, one way is to use response().close() instead.
As a footnote, I'd like to mention that Vert.x 3.5.0 is 4 years old now, and you should upgrade to 3.9 or, if you can, to 4.0.
Meteor's documentation states:
In Meteor, your server code runs in a single thread per request, not in the asynchronous callback style typical of Node
What do they actually mean?
A) the server is running multiple threads in parallel (which seems unusual within the Node.js ecosystem)
or
B) There is still only a single thread within an evented server, and each request is processed sequentially, at least until it makes calls to resources outside the server (like the datastore), at which point the server itself handles the callbacks while it processes other requests, so you don't have to write/administer the callbacks yourself.
Brad, your B is correct.
Meteor uses fibers internally. As you said, there's only one thread inside an evented server, but when you do (e.g.) a database read, the fiber yields and control quickly returns to the event loop. So your code looks like:
doc = MyCollection.findOne(id);
(with a hidden "yield to the event loop, come back when the doc is here") rather than
MyCollection.findOne(id, function (err, doc) {
  if (err)
    return handle(err);
  process(doc);
});
Error handling in the fiber version also just uses standard JavaScript exceptions instead of needing to check an argument every time.
I think this leads to an easier-to-read style of code for business logic that wants to take a series of actions that depend on each other. However, most of Meteor's synchronous APIs optionally take callbacks and become asynchronous, if you'd like to use the async style.
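To illustrate the error-handling point, here is a small stand-alone sketch (stand-in functions, not the real Meteor API) contrasting the two styles:

```javascript
// Callback style: errors arrive as the first argument and must be checked.
function findOneAsync(id, callback) {
  if (id == null) return callback(new Error('missing id'));
  callback(null, { _id: id });
}

// Fiber style: the result is a return value and errors are thrown,
// so plain try/catch works.
function findOneSync(id) {
  if (id == null) throw new Error('missing id');
  return { _id: id };
}

const doc = findOneSync('abc'); // reads top to bottom, no callback nesting

let caught = null;
try {
  findOneSync(null);
} catch (e) {
  caught = e; // standard JavaScript exception handling
}
```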