Redis Lua Script vs single calls - asynchronous

Please refer to this thread:
redis lua script vs. single calls
How does calling Redis from inside a Lua script lead to reduced network communication? How is it different from making Redis calls from inside our application?
Since we're working with Redis asynchronously, does this mean that once we've made a call to redis.call(), it won't wait for the result and will move straight on to the next line? If so, what happens if the value returned from redis.call() is used just below it while Redis is still processing the command?
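For illustration, a minimal sketch of a script one might send with EVAL (the key and rate-limit semantics are made up). Inside a script, redis.call() is synchronous: it blocks until Redis finishes the command, so its return value is safe to use on the very next line. The network saving comes from the whole multi-command script travelling to the server in a single round trip; the client's asynchrony only concerns how the EVAL command itself is dispatched.

```lua
-- Hypothetical rate-limiter script: several commands, one network round trip.
-- redis.call() blocks until Redis completes the command, so `current`
-- is already valid on the next line.
local current = redis.call('INCR', KEYS[1])
if current == 1 then
  redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return current
```

Run with, for example, `redis-cli --eval ratelimit.lua mykey , 60`. The script body always executes to completion atomically on the server, regardless of how the client issued it.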

Related

Netty -- Performing async work inside PortUnificationServerHandler

I have a Netty ChannelInboundHandler in which I'm making calls to an external service (which is relatively slow). Depending upon the result of that call, I'm rewriting the pipeline to invoke different subsequent handlers. I have this working using a different executor group for the handler, but that's still inefficient, since the handler thread is doing nothing while waiting for the external service to respond.
Complicating the issue is that I'm doing this from a derivative of the PortUnificationServerHandler (itself a derivative of ByteToMessageDecoder), since the external service looks at the SNI hostname to determine whether or not to insert a SslHandler and decode or just to pass the traffic along straight.
I've seen how the HexDumpProxy example makes a call to an external service, but I don't see how this can be done from within something like ByteToMessageDecoder. My rough idea is to create a future for the external request, then have the future call ChannelHandlerContext.fireUserEventTriggered on completion with a custom event that my handler can look for before doing the pipeline rewrites. But that feels ugly, and my tests didn't make it look like the event would reach my own handler...
Suggestions?

In Mule 4, is there any way to run the Mule Batch in synchronous mode?

I have done several projects using the Mule Batch component.
In the present case, we need to depend on the output produced by the Mule Batch component. In my case it creates, in asynchronous mode, a file which contains the information below:
studentId,Name,Status
1,bijoy,enrolled
2,hari,not_enrolled
3,piyush,enrolled
But as it runs in asynchronous mode, I cannot rely on the data.
My question is: is there any way to run Mule Batch (Mule 4) synchronously?
No, it is not possible to run a Batch job synchronously within the flow that invokes it; this is by design.
As an alternative, you could put the logic that you want to execute after the batch in a separate flow that listens to a VM queue. In the On Complete phase of the batch you can send a message to that VM queue. The listening flow can't receive batch data directly, but for a file it should be OK.
Having said that, file exchange is not a very good method of exchanging information inside an application. I would recommend exploring alternatives like databases for transient data, unless you need the file itself to send elsewhere.
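A rough configuration sketch of the VM-queue approach, assuming the Mule 4 batch and VM modules; all flow, job, and queue names here are made up:

```xml
<!-- Sketch only: names are illustrative. The batch job signals a VM queue
     from its On Complete phase; a separate flow listens on that queue and
     runs the dependent logic once the output file exists. -->
<vm:config name="vmConfig">
  <vm:queues>
    <vm:queue queueName="batchDone"/>
  </vm:queues>
</vm:config>

<flow name="runBatchFlow">
  <batch:job jobName="studentEnrollmentJob">
    <batch:process-records>
      <batch:step name="writeFileStep">
        <!-- ... per-record processing that writes the CSV rows ... -->
      </batch:step>
    </batch:process-records>
    <batch:on-complete>
      <!-- signal completion to the waiting flow -->
      <vm:publish config-ref="vmConfig" queueName="batchDone"/>
    </batch:on-complete>
  </batch:job>
</flow>

<flow name="afterBatchFlow">
  <vm:listener config-ref="vmConfig" queueName="batchDone"/>
  <!-- read the file produced by the batch and continue from here -->
</flow>
```

The On Complete phase receives only a result summary, not the processed records, which is why the dependent flow reads the file rather than the batch payload.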

Difference between this.unblock and Meteor.defer?

In my methods, I use this.unblock and Meteor.defer more or less interchangeably.
The idea, as I understand it, is to follow the good practice of letting other method calls from the same client start running, without waiting for a single method to complete.
So is there a difference between them?
Thanks.
this.unblock()
only runs on the server
allows the next method call from the client to run before this call has finished. This is important because if you have a long running method you don't want it to block the UI until it finishes.
Meteor.defer()
can run on either the server or the client
basically says "run this code when there's no other code pending." It's similar to Meteor.setTimeout(func, 0)
If you deferred execution of a function on the server for example, it could still be running when the next method request came in and would block that request.
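The scheduling behaviour of defer can be sketched in plain Node, without Meteor; here defer is a stand-in for Meteor.defer, implemented as setTimeout(fn, 0) as the answer describes:

```javascript
// Plain Node sketch (Meteor not required): `defer` stands in for
// Meteor.defer, which behaves much like setTimeout(fn, 0) -- the callback
// runs only after all currently pending synchronous code has finished.
const order = [];

function defer(fn) {
  setTimeout(fn, 0); // queue fn on a later tick of the event loop
}

defer(() => order.push('deferred work'));
order.push('rest of the method body'); // runs first, synchronously

// After the event loop drains, order is:
// ['rest of the method body', 'deferred work']
```

There is no plain-Node analogue for this.unblock(), since it is about Meteor's per-client method queue on the server rather than the event loop.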

With Meteor, how do I run a singleton that updates periodically while clients are connected?

I'm just getting started with Meteor and I have a REST API hooked up with publish / subscribe that can periodically update per client. How do I run this behavior once globally and only refresh as long as a client is connected?
My first use case is periodically refreshing content while clients are active. My second use case is having some kind of global lock to make sure a task is only happening once at a time. I'm trying to use Meteor to make a deployment UI and I only want 1 deployment to happen at once.
publish/subscribe will work automatically only when clients are connected. However, do not put functionality whose number of executions you want to control inside publish or subscribe functions; they might run an arbitrary number of times.
If you want some command to be executed by any client, use Meteor.methods on the server side, and call it explicitly with Meteor.call from a client template event.
To make sure that only one deployment happens at any given time, the simplest way would be to create another collection, called, for example, CurrentDeployments. Any time the deployment script function in Meteor.methods is executed, check with CurrentDeployments.findOne whether there is an ongoing deployment, and only start a new one if none is running.
As a side bonus, subscribe to CurrentDeployments on the client to disable the 'deploy' button in case one is already running.
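The guard described above can be sketched in plain JavaScript; `collection` is an in-memory stand-in for a CurrentDeployments Meteor collection, and all names are illustrative:

```javascript
// Plain-JS sketch of the CurrentDeployments guard (names illustrative;
// `collection` is an in-memory stand-in for a Meteor collection).
const collection = [];

function startDeployment(target) {
  // the CurrentDeployments.findOne check from the answer:
  if (collection.length > 0) {
    return { started: false, reason: 'a deployment is already running' };
  }
  collection.push({ target, startedAt: Date.now() });
  return { started: true };
}

function finishDeployment() {
  collection.length = 0; // clear the "running" record
}
```

In real Meteor code this logic would live inside Meteor.methods({...}) on the server, with CurrentDeployments.findOne()/insert()/remove() in place of the array.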

EJB or Servlet - how to add a 'kill switch' to force a process/thread to stop

Kind of an open question that I run into once in a while -- if you have an EJB stateful or stateless bean, or possibly a direct servlet process, that may with the wrong parameters start running long on a production system, how could you effectively add in a manual 'kill switch' for an administrator/person to specifically kill that thread/process?
You can't, or at least you shouldn't, interfere with application server threads directly. So a "kill switch" looks definitely inappropriate to me in a Java EE environment.
I do however understand the problem you have, but would rather suggest taking an asynchronous approach where you split your job into smaller work units.
I did that using EJB Timers and was happy with the result: an initial timer is created for the first work unit. When the app server executes the timer, it then registers a second one that corresponds to the 2nd work unit, etc. Information can be passed from one work unit to the next because EJB Timers support the storage of custom information. Also, timer execution and registration is transactional, which works well with a database. You can even shut down and restart the application server with this approach. Before each work unit ran, we checked in the database whether the job had been cancelled in the meantime.
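The timer-chain pattern itself is language-agnostic; below is a plain-JavaScript sketch of the same shape (no EJB APIs involved, all names made up): each unit checks a cancel flag first, standing in for the database "job cancelled?" check, does its slice of work, then schedules the next unit.

```javascript
// Plain-JS sketch of the chained-work-unit pattern (not EJB APIs).
// Each tick runs one unit, after first checking a cancel flag (the
// database check in the answer), then registers the next "timer".
const job = { cancelled: false, done: [] };

function runUnit(units, i, onFinished) {
  if (i >= units.length || job.cancelled) {
    return onFinished(job.done); // chain stops here
  }
  job.done.push(units[i]); // this unit's slice of work
  setTimeout(() => runUnit(units, i + 1, onFinished), 0); // next unit
}
```

Flipping job.cancelled at any point stops the chain at the next tick, which is exactly the "kill switch" effect: no thread is interfered with, the job simply declines to continue.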
