I have a chat application using Firebase and Node.js, where I track presence by running a single worker thread on the server that monitors child_added and child_removed events on the Firebase presence channel and updates our presence database tables accordingly.
My question is this: now that Firebase Queue exists (https://www.firebase.com/blog/2015-05-15-introducing-firebase-queue.html),
can I use the queue to replace the worker thread I have running on the server to monitor presence and child_added events? Looking at the current examples, it appears I would create a reference to the queue on the client and then set onDisconnect and connect handlers to push into that queue from the client. However, I'd like to secure this a bit more and not rely on the client so much. I'd also like the queue to process each event by archiving it to a third-party logging service, with credentials and details I wouldn't want to expose to the client.
Does this mean I would still need a server-side worker process, and if so, what benefit would Firebase Queue provide in this use case?
Firebase Queue is not a hosted solution - you still need to run it on your own server.
The main advantage of using a queue over a single listener process is the ability to run multiple workers for the same tasks, so there's no single point of failure. Using the queue, you'll know that the worker processes are synchronized such that only one worker will be processing a given task at any point in time, and if a worker dies during processing or takes too long, another worker will pick the task up again once it has timed out.
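For illustration, here is a minimal worker sketch using the firebase-queue API; the Firebase URL is a placeholder and archivePresenceEvent is a hypothetical helper. Running this same script as several processes (or on several machines) gives you the redundancy described above:

var Queue = require('firebase-queue');
var Firebase = require('firebase');

// Placeholder URL - point this at your queue location.
var queueRef = new Firebase('https://<your-firebase>.firebaseio.com/queue');

// numWorkers controls how many tasks this process claims concurrently;
// the queue guarantees each task is claimed by only one worker at a time.
var queue = new Queue(queueRef, { numWorkers: 3 }, function(data, progress, resolve, reject) {
  // Archive the presence event to the third-party logging service here,
  // using credentials that never leave the server (hypothetical helper).
  archivePresenceEvent(data)
    .then(resolve)
    .catch(reject);
});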
It sounds like you're trying to create some kind of audit trail for presence, but there's currently no way to report presence directly from the server: you'll need to rely on the client at some point. Your security rules can enforce that a write is a boolean to a specific location in your database, but they can't enforce that the client was in any particular presence state when writing it. Also note that there's no onDisconnect equivalent of push (childByAutoId), so to push to a queue you'd have to do something like:
var ref = new Firebase(…);
var disconnectTask = {};
var pushId = ref.push().key(); // This just generates the ID and does no network traffic
disconnectTask[pushId] = { /* populate with task data here */ };
ref.onDisconnect().update(disconnectTask);
Note that the push ID is generated client-side before the operation is sent to the server, so tasks won't necessarily be in order when added to the queue.
I'm designing a system with multiple bounded contexts (microservices). I will have two kinds of events.
Domain Events, which happen "in memory" within a single transaction (sync)
Integration Events, which are used between bounded contexts (async)
My problem is: how do I make sure that once the transaction is committed (at which point I know all Domain Events were processed successfully), the Integration Events succeed as well?
When my transaction is committed, I would normally dispatch the Integration Events (e.g. to a queue), but there is a possibility that the queue is down, in which case the just-committed transaction would have to be "reverted". How?
The only solution that comes to my mind is to store the Integration Events in the same DB, within the same transaction, and then process the Integration Event records and push them to the queue - something like "using the current DB as a pre-queue, before pushing to the Real Queue" (however, I've read that using the DB for this is an anti-pattern).
Is there any pattern (reliable approach) to make sure that the transaction commit and the message push to the queue happen atomically?
EDIT
After reading https://devblogs.microsoft.com/cesardelatorre/domain-events-vs-integration-events-in-domain-driven-design-and-microservices-architectures/, the author actually suggests this "pre-queue in the same DB" approach (he calls it "ready to publish the event").
Check out the transactional outbox pattern.
This pattern does create a pre-queue, but the nice part is that pushing messages from the pre-queue to the real queue is fully decoupled. Instead, a middleman called a message relay reads your outbox table (or the transaction log) and pushes the events to the real queue. Since sending the message and handling your domain events are fully decoupled, you can process all your domain events in a single transaction.
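As a sketch of what the pre-queue write looks like, assuming node-postgres ('pg') and hypothetical orders and outbox tables:

const { Pool } = require('pg');
const pool = new Pool();

async function placeOrder(order) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // 1. Apply the domain change.
    await client.query('INSERT INTO orders (id, total) VALUES ($1, $2)',
      [order.id, order.total]);
    // 2. Record the integration event in the SAME transaction.
    await client.query('INSERT INTO outbox (event_type, payload) VALUES ($1, $2)',
      ['OrderPlaced', JSON.stringify(order)]);
    await client.query('COMMIT'); // both rows commit, or neither does
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

Nothing is lost if the real queue is down: the committed outbox rows simply wait for the relay's next attempt.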
Also make sure that all your services are idempotent (same result despite duplicate calls). The transactional outbox pattern guarantees that messages are published at least once: if the message relay fails just after publishing (before acknowledging), it will publish the same event again.
Idempotent services are also necessary in other scenarios, since the event bus (the real queue) can have the same issue: the event bus propagates an event, the service processes it, then a network error prevents the acknowledgement, so the event bus sends the same event again.
In fact, idempotence alone could solve the whole issue. After the domain-event computation completes (a single transaction), if publishing the message fails, the service can simply throw an error without rolling back. Since the event was not acknowledged, the event bus will send it again. And since the service is idempotent, the same database transaction will not happen twice: it will overwrite, or better, skip straight to publishing and acknowledging the message.
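As a sketch, an idempotent consumer can record each event's unique ID in the same transaction as the state change, so a redelivery is detected and skipped (node-postgres again; the processed_events table, applyStateChange helper, and event.ack method are hypothetical):

const { Pool } = require('pg');
const pool = new Pool();

async function handleEvent(event) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // A duplicate delivery hits the primary-key conflict and inserts nothing.
    const res = await client.query(
      'INSERT INTO processed_events (event_id) VALUES ($1) ON CONFLICT DO NOTHING',
      [event.id]);
    if (res.rowCount === 1) {
      // First delivery: apply the actual state change (hypothetical helper).
      await applyStateChange(client, event);
    }
    await client.query('COMMIT');
    await event.ack(); // acknowledge only after the work is durable
  } catch (err) {
    await client.query('ROLLBACK');
    throw err; // no ack, so the bus redelivers
  } finally {
    client.release();
  }
}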
Are the Realtime Database triggers onWrite and onCreate queued or threaded?
Neither.
Cloud Functions events don't necessarily get handled in the same order that they occurred. If you are depending on ordering, your functions may not work the way you expect. There is no single ordered queue that all events pass through - this would not scale.
Each function invocation runs in full isolation from other function invocations. Cloud Functions will spin up new server instances to handle load as needed, so if one server is busy handling events, Cloud Functions may add more servers to the mix to handle more incoming events. Each server handles only one event at a time: events are handled serially within each server instance and in parallel across server instances. There is no "threading" going on from the perspective of the event trigger code (that's not how Node.js works for application code).
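For example, here is a sketch of a trigger written so that ordering doesn't matter (firebase-functions v1 API; the /messages and /messageIndex paths are hypothetical). Each invocation is keyed by the triggering event's own ID, so retries and out-of-order delivery always write the same value to the same location:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.indexMessage = functions.database
  .ref('/messages/{messageId}')
  .onCreate((snapshot, context) => {
    // Idempotent: re-running this invocation overwrites the same key
    // with the same data, regardless of what other invocations do.
    return admin.database()
      .ref('/messageIndex/' + context.params.messageId)
      .set(snapshot.val());
  });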
We are capturing each newly committed state in the vault through the vaultTrack method on the Corda RPC proxy, for use in log recording. Although it's working properly, we think it might cause some overhead on the network connection. So we decided to try using the ServiceHub in the CorDapp to capture the new events instead. Unfortunately, the events keep recurring every time the flow is called (because of the observable concept?). Maybe we did not set it up properly? Based on your experience and expertise, could you:
Suggest what went wrong; and
Suggest the corresponding solutions?
More details here:
We are using the CorDapp logs for a performance benchmark, so we are focusing only on new committed-state events. In the API endpoint where we started, we used vaultTrack over RPC to record each newly committed state, as shown in the example below:
Although the API seems to work properly, we think it might burden the RPC connection and hurt overall performance, since the observable fires every time a new state is committed. Please correct us if we're wrong. As such, we decided to switch to logging the events in the flow instead.
In the CorDapp, we used the VaultService from the ServiceHub to record each newly committed state in the call function of the flow initiator, as shown in the example below:
We found that the log recording in the CorDapp, i.e. in the flow (via the ServiceHub mentioned above), gains a duplicate log entry every time the flow is called. From our initial investigation, the problem is that vaultService gets subscribed again every time the flow is initiated. We have therefore switched back to the API-endpoint method. Could you please advise us on the right way to capture these events in a CorDapp, so we can log each newly committed state during our performance testing?
The approach of subscribing to a vault observable within a flow will not work. Once the flow ends, the subscription will not be terminated. Every time you run the flow, an additional subscriber will be added. This will degrade performance (although the RPC overhead is generally quite low as long as the states serialise quickly enough).
You should observe updates to the vault using an RPC client instead. Here is an example:
val client = CordaRPCClient(nodeAddress)
val proxy = client.start(rpcUserUsername, rpcUserPassword).proxy

// Track IOUState updates in the vault.
val (snapshot, updates) = proxy.vaultTrack(IOUState::class.java)

// Log the existing IOUStates and listen for new ones.
snapshot.states.forEach { logState(it) }
updates.toBlocking().subscribe { update ->
    update.produced.forEach { logState(it) }
}
When you call start on the CordaRPCClient, you will connect to the node's Artemis message queue. This message queue will be used to stream updates from the vault back to the client over time.
In the example above, the vault updates are simply logged. You can change this behaviour as required (e.g. to call an API whenever an update is produced).
New to Firebase and trying to understand how things work. I have an Android app, I plan to use the offline support, and I'm trying to figure out whether or not I need to use callbacks. When I make a call like:
productNode.child("price").setValue(product.price)
Does that call to setValue happen synchronously on the main thread, with the sync to the cloud happening asynchronously? Or do both execute asynchronously on a background thread?
The Firebase client immediately updates its local copy of the data with the new value. As part of this it fires any local (value, child_*) events that are needed.
Sending of the data to the database happens on a separate thread. If you want to know when this has completed, you can register a CompletionListener.
If the server somehow cannot complete the write operation (typically because the write violates a security rule), the client will fire any additional events that are needed to get the app back into the correct state. So in the case of a value listener it will then fire a second value event with the previous value.
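For illustration, here is the equivalent completion callback in the (pre-v9) Firebase JavaScript SDK; on Android you would pass a DatabaseReference.CompletionListener as the second argument to setValue for the same effect:

// Sketch using the JavaScript SDK's completion callback; the Android
// analogue is a DatabaseReference.CompletionListener passed to setValue.
productNode.child('price').set(product.price, function(error) {
  if (error) {
    // The server rejected the write (e.g. a security rule violation);
    // the client has already fired the events that roll local state back.
    console.log('Write failed: ' + error);
  } else {
    // The write is now committed on the server.
    console.log('Write confirmed by the server.');
  }
});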
I have the following scenario:
A user requests a certain resource on the server. This request is a long-running task, likely anywhere from 2-3 seconds up to 10 seconds. We issue a JobTicket to the user, as our users want to wait.
On receiving the request, we store it in persistent storage and issue a token to the user as the JobTicket (a GUID).
The user connects to a Hub to get information about that GUID.
In the background:
We have a WAS-hosted service as well as a Windows Service to perform some operation on that request.
On completion, the WAS-hosted service/Windows Service calls our web application to say the job has been completed.
From there, based on the JobTicket, we identify the user and, on their connection, let them know their job has been completed.
Now we have a farm of servers and are using on-premises Windows Server Service Bus 1.1, which is working fine. But the challenge is that we are not able to intercept the Service Bus backplane message broadcast, so the message goes to all clients. Because we have a farm, a user may intermittently drop their connection and reconnect to a different server behind the load balancer, so we need to scale out using Service Bus: it's fairly seamless to integrate, and we already use it internally in our application, so we don't want to mix in any other, more complex solution.
I have tried using IHubPipelineModule, but scaled-out message broadcasts don't pass through it. I tried hooking into the SignalR code directly and debugging through it, but it's taking a long time, and I don't want to mess up something arbitrary in the actual code. In OnReceive I can see the messages coming in, but I'm not able to follow them further. I just need a small mechanism to intercept a broadcast message and make sure it goes only to the client it's intended for rather than to all clients, which wastes resources and is a security concern as well.
Please help me with this issue; I've been stuck for the last 4 days and haven't been able to reach a solution. At the same time, I want to follow an established pattern and don't want to fork a special build for this kind of small issue, which I'm sure one of you experts knows how to solve seamlessly.
Thanks,
Shrenik
After a lot of struggling and not finding a straightforward way, I found the approach below; it might help someone else in the future.
Scenario:
1. Web farm: hosts the external, user-facing web pages
2. Backend process: a mix of WebApi, SharePoint, Windows Services, etc.
A user submits a request from a web page and gets a unique ID back. Internally, on receiving the request, we queue it to Service Bus using a TopicClient for processing.
A pool of Windows Services watches for messages on Service Bus using a SubscriptionClient and processes them. Processing can run from 5 seconds to 30 seconds, in some cases even more. On completion, we need to inform the client that its job is done, whether it is waiting on the web page or waiting for a completion notification.
In this story, we are using SignalR to push the job-completion notification to the client.
My earlier problem was how the Windows Service lets the web application know that the job is done, so that a notification can be sent to the client who submitted the request.
One way is to host another hub internally in the web application: the Windows Service acts as a client and calls the internally hosted hub, and that hub method calls the external-facing hub method to propagate the message to the specific client who submitted the request, for which we use a single-user group.
And since we have registered Service Bus as the backplane, the message will propagate to the other servers and the appropriate client will get the notification. So this is the ideal solution and should work in most cases.
The above approach has one limitation: how does the Windows Service connect to the web application? We don't have Windows auth; we have OpenID-based auth with ADFS. In that case, the web application needs special code providing a separate user ID and password for the Windows Service to communicate, or Windows authentication must also be allowed on that hub for the Windows Service's service account.
I kept trying to figure out how to remove all these hops of inter-server communication and the management of extra security.
So I did the following for simplicity, though it took me a whole night to find my way around SignalR's internals. But it works:
The approach is to send the message directly to the Service Bus backplane; since all the web servers are already hooked up to the backplane, they will get the message.
Unfortunately, SignalR doesn't provide a mechanism to send a message directly to the backplane. I think that, being a pub/sub model, they don't want somebody hacking into their system :), or it's a violation of their pattern; either way, it makes sense. In my case, because of the different roles and security, I simplified the code as below:
Create a ServiceBusMessageBus instance, the same way as below. (I create a single instance and keep it for the lifetime of the Windows Service, so I don't create a new instance every time.)
ServiceBusMessageBus serviceBusBackplane = new ServiceBusMessageBus(
    new DefaultDependencyResolver(),
    new ServiceBusScaleoutConfiguration(connectionString, appName));
Create a ClientHubInvocation object. This is the message that the SignalR infrastructure actually creates when a backplane-based message is broadcast:
ClientHubInvocation hubData = new ClientHubInvocation
{
    Args = new object[] { msg },
    Hub = "JobStatusHub",
    Method = "onJobStatus",
    State = null,
};
Create a Message object that ServiceBusMessageBus.Publish accepts. Yes, Publish is the method that actually gets called on the base class, ScaleoutMessageBus, which is responsible for sending the message to the topic and on to the subscribers on the other server nodes; so why not use it directly? To create the Message object, you need the following code:
Message backplaneMessage = new Message(
    sourceId,
    "hg-JobStatusHub." + name,
    new ArraySegment<byte>(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(hubData))));
The second parameter above is the interesting one. If you want to publish to all clients of a hub, the syntax is "h-" followed by the hub name; in my case it's a specific user group, so the syntax is "hg-" followed by the hub name, a dot, and the group name (as in "hg-JobStatusHub." + name above). You can check the code here: https://github.com/SignalR/SignalR/blob/bc9412bcab0f5ef097c7dc919e3ea1b37fc8718c/src/Microsoft.AspNet.SignalR.Core/Infrastructure/PrefixHelper.cs
Publish your message to the backplane directly, as below:
await serviceBusBackplane.Publish(backplaneMessage);
I wish the PrefixHelper class were public.
Remember: this is not the recommended way, and it doesn't insulate you from future SignalR upgrades; these are internals that may change, so any upgrade might bring a small hassle of changing this code. But in summary, it works. I hope the SignalR team provides an out-of-the-box mechanism for sending messages directly to the backplane instead.
Thanks