I am trying to communicate between four services in a microservices architecture. Every service has its own separate database, and I am using the saga pattern.
My question is: if RabbitMQ goes down, crashes, or has no way to come back, how would I roll back or revert the databases without the help of the message queue?
I'm assuming that you're using the saga pattern to maintain consistency in your microservice application. The main component of this pattern is the orchestrator, which coordinates your microservices, and I'm assuming that in your application you are using RabbitMQ for message passing.
One way is to store the message in the local database of your microservice whenever you are not able to send it to the queue; you can then add a scheduler that tries to send the message to the queue after some interval (a minimal sketch of this approach follows below).
The second way is to run RabbitMQ in cluster mode, which prevents it from being a single point of failure for your application.
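Here is a minimal sketch of the first approach (essentially a transactional outbox with a retry scheduler). The OutboxMessage record, IOutboxStore abstraction, host name, and retry interval are all assumptions for illustration; the publish itself uses the RabbitMQ.Client library:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using RabbitMQ.Client;

// Hypothetical outbox row: the service writes this in the same local
// transaction as its business data instead of publishing directly.
public record OutboxMessage(Guid Id, string Exchange, string RoutingKey, string Payload);

// Hypothetical abstraction over the service's local database.
public interface IOutboxStore
{
    Task<IReadOnlyList<OutboxMessage>> GetPendingAsync(CancellationToken ct);
    Task MarkSentAsync(Guid id, CancellationToken ct);
}

// Scheduler that keeps retrying until RabbitMQ is reachable again.
public class OutboxPublisher : BackgroundService
{
    private readonly IOutboxStore _outbox;
    private readonly ConnectionFactory _factory = new() { HostName = "rabbitmq" }; // placeholder host

    public OutboxPublisher(IOutboxStore outbox) => _outbox = outbox;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                using var connection = _factory.CreateConnection();
                using var channel = connection.CreateModel();

                foreach (var message in await _outbox.GetPendingAsync(stoppingToken))
                {
                    var body = Encoding.UTF8.GetBytes(message.Payload);
                    channel.BasicPublish(message.Exchange, message.RoutingKey,
                                         basicProperties: null, body: body);
                    await _outbox.MarkSentAsync(message.Id, stoppingToken);
                }
            }
            catch (Exception)
            {
                // Broker still down: leave the rows pending and retry on the next tick.
            }

            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}
```

Because the saga's state and the unsent messages live in the same local database, the compensation logic can still be driven from that data once the broker is back, instead of being lost with the queue.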
Let's say we have three microservices, A, B, and C, and we also have a Blazor WASM app that makes calls to these microservices for a simple example.
Suppose we have a single SQL Server database and each microservice has a separate schema with no relationships between schemas.
When service A is called it has to call B, and when service B is called it has to call C. This is done for both read and write operations.
Communication between the microservices is done in a decoupled manner by sending messages to a RabbitMQ exchange (if this is not possible, the fallback would be request/response).
Questions:
How is the information refreshed in the Blazor application if the entire processing cycle has not yet been completed?
What is the proper method for presenting and synchronizing information in the Blazor interface?
How are processing errors treated and reported?
I implemented a SignalR channel to notify the client when the task is completed, which resolved the issue with push notifications; a rough sketch of the idea is below.
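Here is a minimal sketch of that idea, assuming an ASP.NET Core SignalR hub; the hub name, notifier class, and "TaskCompleted" client method are hypothetical:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub the Blazor WASM client connects to.
public class NotificationsHub : Hub
{
}

// Hypothetical service invoked when the last microservice in the chain
// reports that the whole processing cycle has completed.
public class TaskCompletedNotifier
{
    private readonly IHubContext<NotificationsHub> _hubContext;

    public TaskCompletedNotifier(IHubContext<NotificationsHub> hubContext)
        => _hubContext = hubContext;

    public Task NotifyAsync(string userId, string taskId, string status)
        // The Blazor client registers a "TaskCompleted" handler and refreshes its UI.
        => _hubContext.Clients.User(userId)
                      .SendAsync("TaskCompleted", taskId, status);
}
```

Until that notification arrives, the Blazor UI can simply show the last known status it read from the API (e.g. "processing"), and processing errors can be reported over the same channel.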
In an interesting blogpost about 'Firebase Authentication with the Firebase 3.0 SDK and Auth0 Integration', it is stated that:
You can even have Firebase communicate with Webtask!
Now I can imagine the (web)client triggering a Firebase operation and subsequently a Webtask, but not the other way around. Or am I missing something?
Firebase can run as a serverless app, but it can also run on the server. You can even have Firebase communicate with Webtask! (sic!)
I think that paragraph is misleadingly phrased, perhaps it was just added at the last minute to spark interest. You can have a webtask communicate with Firebase, not the other way around. You don't "run Firebase" on your server either.
TL;DR: A client application may call a webtask with an HTTP request, and that task can read/write the database, but it doesn't work in any other order; the database cannot trigger a webtask.
Here's a quick and dirty reality check as of Nov. 2016:
The Realtime Database by itself does not provide you with a way of executing code. This includes responding to database changes and user requests, handling fan-in and fan-out operations, etc. There is no support for webhooks either.
Which means you have to provide your own execution environment for such logic on a custom server, or you can try to cram as much as possible into the client code. This is a pretty extensive topic in itself.
Webtasks are short-lived functions that respond to HTTP requests. Their lifecycle always starts with a request, so they are not fit for continuously watching the database for changes. But they are perfectly valid for handling requests coming in from your client application.
As you can store "secrets" for the webtasks, you can authenticate the task at an admin access level. This gives you the possibility to verify client tokens (which should be sent along with the request), perform complex authorization and validation, and perform RTDB write operations you wouldn't trust the clients with.
Or trigger external services securely. The possibilities are close to endless.
I have an ASP.NET website with a number of long-running (5 minutes to 2 hours) user-initiated tasks. I want each user to be able to see the progress of their own jobs, and to be able to close their browser and return at a later time.
The current plan is to store each job in the database when it is started and publish a message to a RabbitMQ queue, which a Windows service will pick up to start processing the job.
However, I'm not sure of the best way to pass the progress information back to the web server from the service. I see two options:
Store the progress information in the database, and have the web-app poll for it
Have a RabbitMQ consumer in the webserver and have the windows service post progress messages to that queue
I'm leaning towards the second option, as I don't really want to add more overhead to the database through regular polling and writing of progress info. However, there are lots of warnings about using RabbitMQ as a consumer inside the web application; since I'm not sending vital messages (it doesn't matter if progress messages aren't processed), I'm wondering if this matters? It's not that (famous last words) difficult to restart the RabbitMQ consumer whenever the web app is restarted.
Does that option sound reasonable? Any better choices out there?
Store the progress information in the database, and have the web-app poll for it
Have a RabbitMQ consumer in the webserver and have the windows service post progress messages to that queue
The correct answer is C) All Of The Above!
A database is not an integration layer for applications.
RabbitMQ is not meant for end-user consumption of messages.
But when you combine RabbitMQ with a database, you get beautiful things...
Have your background service send progress updates through RabbitMQ. The web server will listen for these updates and write the new status to the database. Use WebSockets (SignalR) to push the progress update to the user immediately, but you still have the current status in the database in case the user does a full refresh of the page or comes back later.
I wrote about this basic setup in a blog post on using RabbitMQ to do user notifications.
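Here is a rough sketch of the web-server side of that setup, using the RabbitMQ.Client library and ASP.NET Core SignalR. The queue name, message format, ProgressHub, and IJobStore persistence abstraction are all assumptions for illustration:

```csharp
using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class ProgressHub : Hub { }            // clients connect here for live updates

public interface IJobStore                    // hypothetical job/progress persistence
{
    Task SaveProgressAsync(string jobId, int percent);
}

// Hosted consumer in the web app: RabbitMQ -> database -> SignalR push.
public class ProgressConsumer : BackgroundService
{
    private readonly IJobStore _jobs;
    private readonly IHubContext<ProgressHub> _hub;

    public ProgressConsumer(IJobStore jobs, IHubContext<ProgressHub> hub)
        => (_jobs, _hub) = (jobs, hub);

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var factory = new ConnectionFactory { HostName = "rabbitmq" }; // placeholder host
        var connection = factory.CreateConnection();
        var channel = connection.CreateModel();
        channel.QueueDeclare("job-progress", durable: true, exclusive: false, autoDelete: false);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += async (_, ea) =>
        {
            // Assumed message format: "jobId|percent".
            var parts = Encoding.UTF8.GetString(ea.Body.ToArray()).Split('|');
            var jobId = parts[0];
            var percent = int.Parse(parts[1]);

            await _jobs.SaveProgressAsync(jobId, percent);                 // survives page refresh
            await _hub.Clients.All.SendAsync("progress", jobId, percent);  // immediate push; clients filter by jobId
        };

        channel.BasicConsume("job-progress", autoAck: true, consumer);
        return Task.CompletedTask;
    }
}
```

If a progress message is lost (for example while the consumer restarts), the worst case is that the database shows a slightly stale percentage until the next update arrives, which matches the "not vital" requirement in the question.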
We are using Elastic Beanstalk to serve a REST API. Now, I want to develop an endpoint that serves notifications from an SNS topic in an asynchronous way.
In order to receive those notifications, I need to subscribe the API servers to the SNS topic. How could I do this, keeping in mind that the EBS application can scale up to multiple servers and scale down again? I don't want a lot of dead endpoints subscribed to the SNS topic...
In the Spring world we have @PostConstruct, which gets called on server startup; there you can subscribe this server's URL to a given topic (you may need to build a proper working subscription URL, e.g. using InetAddress et al.).
You then expose that subscription URL via a @RestController endpoint which confirms the subscription, so the SNS endpoint gets registered instantaneously. Any new servers will do the same and register themselves (when a new stack is created). You also need additional code to consume the notification messages and do something when a confirmed subscription endpoint receives one.
The way AWS wants you to use SNS is not by subscribing your application servers to it directly. Any notification that needs to trigger something in a component should be buffered through an SQS queue. For this reason we chose to do pub-sub with a variable/scalable group of subscribers using the Amazon-managed Redis distribution instead.
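A minimal sketch of the SQS-buffering approach described above, using the AWS SDK for .NET; the topic ARN, queue URL/ARN, and message handling are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class SnsSqsBufferingSketch
{
    public static async Task Main()
    {
        var sns = new AmazonSimpleNotificationServiceClient();
        var sqs = new AmazonSQSClient();

        // Placeholder identifiers: replace with your own topic and queue.
        const string topicArn = "arn:aws:sns:eu-west-1:123456789012:notifications";
        const string queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/notifications-buffer";
        const string queueArn = "arn:aws:sqs:eu-west-1:123456789012:notifications-buffer";

        // Subscribe the queue (not the individual web servers) to the topic once.
        // The queue's access policy must also allow the topic to send messages to it.
        await sns.SubscribeAsync(new SubscribeRequest
        {
            TopicArn = topicArn,
            Protocol = "sqs",
            Endpoint = queueArn
        });

        // Each application instance polls the shared queue; a given notification is
        // then handled by exactly one instance, and scaling up or down leaves no
        // dead subscriptions behind.
        while (true)
        {
            var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = queueUrl,
                WaitTimeSeconds = 20   // long polling
            });

            foreach (var message in response.Messages)
            {
                Console.WriteLine($"Received notification: {message.Body}");
                await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
            }
        }
    }
}
```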
I have two roles in my Azure cloud service application: a web role (SignalR connections live here) and a worker role.
The web role uses Azure service bus as its scaleout provider.
At certain points in time the worker role will emit certain events. I'd like to send this data directly to clients connected to a Hub.
My current implementation involves the worker role placing a message on a service bus queue which the web role subscribes to, the web role then forwards this message to clients via a HubContext call.
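For reference, a rough sketch of that relay with the classic Service Bus (Microsoft.ServiceBus.Messaging) and SignalR 2.x APIs; the queue name, hub, client method, and connection string are placeholders:

```csharp
using Microsoft.AspNet.SignalR;
using Microsoft.ServiceBus.Messaging;

// Worker role: emit the event onto a Service Bus queue.
public static class WorkerEvents
{
    private static readonly QueueClient Queue =
        QueueClient.CreateFromConnectionString("<service-bus-connection-string>", "hub-events");

    public static void Publish(string payload) => Queue.Send(new BrokeredMessage(payload));
}

// Web role: forward queue messages to connected SignalR clients.
public class EventsHub : Hub { }

public static class WorkerEventRelay
{
    public static void Start()
    {
        var queue = QueueClient.CreateFromConnectionString("<service-bus-connection-string>", "hub-events");
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<EventsHub>();

        queue.OnMessage(message =>
        {
            // Push the worker role's event straight out to every connected client.
            hubContext.Clients.All.workerEvent(message.GetBody<string>());
        });
    }
}
```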
My question is: how can I send this message directly to connected clients from the worker role? So far I have considered 3 methods:
Configure SignalR in the worker role as in the web role so that they use the same Service Bus topic. This does not work as intended, as worker role instances "steal" messages from topic subscriptions intended for the web role. This would seem to be the cleanest way of doing it, but configuration is a problem.
Use the .NET client to send a hub message. This is not ideal as it places unnecessary load on the web role, as well as doubling the number of service bus messages compared to the above method.
Manually write a SignalR-compatible message to the topic. This is very hacky and susceptible to breaking changes.
I know that the team is currently rewriting scaleout for the next release, but will this be possible at some point?
Edit:
I have noticed that this is supported in the RabbitMQ implementation.
It seems an issue with my configuration was responsible for the first method not working.
However, that method seems to be slower end to end (by about 150 ms) even with one fewer message in the loop.
I will wait and see if the scaleout work brings any improvements to this method before making any changes.