Microservices Architecture: Handling synchronous actions in a choreographed (asynchronous) environment

I'm studying the microservices choreography strategy, and I'm wondering how to handle actions that are, by their very nature, synchronous operations.
Let's imagine a simple scenario where a microservice needs to read some data from another microservice in order to perform some actions.
I have thought of two possible solutions, but I'm not sure they are the best (or only) ones:
In the case of a blocking action:
Use the broker in a synchronous (request/reply) way, as sketched below.
Use direct inter-service communication (ISC), through a synchronous gRPC or REST call.
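For concreteness, here is a minimal sketch of the first option (a blocking request/reply over a broker), assuming RabbitMQ and its direct-reply-to feature; the queue name and payload are made up for illustration:

```java
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Blocking read over RabbitMQ: publish a request with a correlation id and
// wait on the direct-reply-to pseudo-queue until the owning service answers.
public class BrokerRpcClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a local broker
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            String correlationId = UUID.randomUUID().toString();
            BlockingQueue<String> response = new ArrayBlockingQueue<>(1);

            // Must consume (auto-ack) from the reply pseudo-queue before publishing.
            channel.basicConsume("amq.rabbitmq.reply-to", true,
                (tag, delivery) -> {
                    if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                        response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
                    }
                }, tag -> { });

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .correlationId(correlationId)
                .replyTo("amq.rabbitmq.reply-to")
                .build();
            // "orders.read" is a hypothetical queue served by the data-owning service.
            channel.basicPublish("", "orders.read", props,
                "order-42".getBytes(StandardCharsets.UTF_8));

            // Block until the reply arrives, or give up after a timeout.
            String reply = response.poll(5, TimeUnit.SECONDS);
            System.out.println("reply: " + reply);
        }
    }
}
```

The second option is an ordinary synchronous call; the point of the broker variant is that the caller still blocks, only the transport changes.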
What do you think about this topic?
Is there an alternative solution or a particular pattern to apply to solve these use cases in a more elegant way?
Thank you

Related

Microservice - Produce And Consume From Same Queue

We are designing a new system, which will be based on microservices.
I would like to ask whether it is considered an anti-pattern to produce and consume from the same queue.
The service should be a REST-based microservice.
The backend microservice should handle large-scale operations on IoT devices, which may not always be connected to the internet, so the API must be asynchronous and provide robust features such as a number of retries (X) before final failure.
The API should immediately return a UUID with a 201 response from the POST request, and expose a 'Get Current Status' endpoint (pending, active, done, etc.) for the operation on the IoT device, accepting the UUID in the GET request.
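As a rough sketch of that contract (Spring-style, with hypothetical paths, and an in-memory status store standing in for whatever the worker actually updates):

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// POST returns 201 with a UUID immediately; GET reports the operation status.
// The map stands in for the store the background worker updates as it
// processes the queued operation (pending -> active -> done/failed).
@RestController
public class OperationController {
    private final Map<UUID, String> statuses = new ConcurrentHashMap<>();

    @PostMapping("/operations")
    public ResponseEntity<UUID> submit(@RequestBody String command) {
        UUID id = UUID.randomUUID();
        statuses.put(id, "pending");
        // publish (id, command) to RabbitMQ here for the worker to pick up
        return ResponseEntity.status(201).body(id);
    }

    @GetMapping("/operations/{id}")
    public ResponseEntity<String> status(@PathVariable UUID id) {
        String s = statuses.get(id);
        if (s == null) {
            return ResponseEntity.notFound().build();
        }
        return ResponseEntity.ok(s);
    }
}
```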
We've thought about two solutions (described at a high level) for managing this task:
Implement two services: an API gateway microservice and a logic handler microservice.
The API gateway will expose the REST API methods and will publish messages via RabbitMQ that will be consumed by multiple instances of the logic handler microservice.
The main drawback is that we will need to manage multiple deployments, and keep the API exposed by service A consistent with its consumer in service B.
Implement one microservice that exposes the relevant APIs and, to handle the scale and the asynchronous operations, publishes the request to its own queue over RabbitMQ, to be consumed by the same microservice in a background worker (sketched below).
The second solution looks easier to develop, maintain, and update, because all the logic, including the REST API handling, lives in the same microservice.
To some members of the team, this solution looks a bit dirty, and we can't decide whether it is an anti-pattern to consume messages from your own queue.
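For what it's worth, the "own queue" variant is only a few lines with the RabbitMQ Java client; a minimal sketch (queue name and message are made up):

```java
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

// One process publishes work to its own queue and consumes it on a
// background worker over a separate channel.
public class SelfQueueWorker {
    private static final String QUEUE = "iot.operations";

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a local broker
        Connection conn = factory.newConnection();

        // Consumer side: the background worker.
        Channel consumer = conn.createChannel();
        consumer.queueDeclare(QUEUE, true, false, false, null);
        consumer.basicQos(10); // cap unacked work per worker instance
        consumer.basicConsume(QUEUE, false, (tag, delivery) -> {
            String op = new String(delivery.getBody(), StandardCharsets.UTF_8);
            // ... talk to the IoT device, retry, update the status store ...
            consumer.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, tag -> { });

        // Producer side: what the REST handler would do after returning 201.
        Channel producer = conn.createChannel();
        producer.basicPublish("", QUEUE,
            MessageProperties.PERSISTENT_TEXT_PLAIN,
            "reboot device-17".getBytes(StandardCharsets.UTF_8));
    }
}
```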
Any advice will be helpful.
I would like to ask whether it is considered an anti-pattern to produce and consume from the same queue.
That definitely sounds suspicious, though I don't have a lot of experience with queues. You also talk about microservices "producing" and consuming - that sounds fine; there's no reason why a microservice (and by extension, its API) can't do both. But then I'm a bit confused, because reading the rest of the question I don't really see how that issue is a factor.
Having some kind of separation between the API and the microservice sounds wise, because you can change the microservice's implementation without affecting callers, assuming it's a non-breaking change. It means you have the ability to solve API/consumer problems and backend problems separately.
Version control is just a part of good engineering practice, and probably not an ideal reason to bend your architecture. You should be able to run more than one version in parallel - a lot of API providers operate an N+2 model, where they support the current version plus the last two (major) releases. That way you give your consumers a reasonable runway for upgrading.
As long as you keep the two concerns modular, so that you could separate them when it makes sense, it doesn't matter.
In the longer term, though, I'd think you'd probably want to treat them as two separate services, as they'd probably have different update cycles (e.g. the gateway part may need things like auth, maybe an additional gRPC API, etc.), different security requirements (one is accessible from the outside while the other consumes internal resources), different scalability concerns (you'd probably need more resources for the processing), and so on.

Axon Framework: Should microservices share events?

We are migrating a monolithic application to a more distributed architecture, and we decided to use the Axon Framework.
In Axon, as messages are first-class citizens, you get to model them as POJOs.
Now I wonder: since one event can be dispatched by one service and listened to by any of the others, how should we handle event distribution?
My first impulse is to package the events in a separate project as a JAR file, but this goes against a rule for microservices: they should not share implementations.
Any suggestion is welcome.
Having some form of 'common' module is definitely not uncommon, although I'd personally use that 'common' module for that specific application alone.
I'd generally say you should regard your commands/events/queries as the API of your application. As such, it might be beneficial to share the event structure with other projects, just not the actual POJO itself. You could, for example, think about using ProtoBuf for this use case, wherein ProtoBuf describes a schema for your events.
Another thing to think about is to not expose your whole 'event API'. Typically you'll have quite a few fine-grained events that other (micro)services in your environment are not interested in. There are, however, always a couple of 'very important events' - put differently, 'milestone events' - which others definitely are interested in.
These milestone events in some scenarios aren't a direct POJO following from your domain, but rather an accumulation of several events.
It is thus not too uncommon to have a service which accumulates these and publishes another event to notify other services. Accumulating these fine-grained internal events and publishing a milestone event in response is typically better suited as the event API within your microservice architecture (see the sketch below).
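As an illustration of that accumulation idea in Axon terms (all event names here are hypothetical, and the in-memory state is simplified for brevity), a component can handle the fine-grained events internally and publish a single milestone event through Axon's EventGateway:

```java
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.eventhandling.gateway.EventGateway;
import java.util.HashSet;
import java.util.Set;

// Hypothetical fine-grained internal events and the coarse milestone event.
record PaymentReceivedEvent(String orderId) { }
record GoodsShippedEvent(String orderId) { }
record OrderCompletedEvent(String orderId) { }

// Listens to internal events and, once both have been seen for an order,
// publishes the milestone event that other services actually care about.
public class OrderMilestonePublisher {
    private final EventGateway eventGateway;
    private final Set<String> paid = new HashSet<>();

    public OrderMilestonePublisher(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @EventHandler
    public void on(PaymentReceivedEvent event) {
        paid.add(event.orderId());
    }

    @EventHandler
    public void on(GoodsShippedEvent event) {
        if (paid.remove(event.orderId())) {
            eventGateway.publish(new OrderCompletedEvent(event.orderId()));
        }
    }
}
```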
So that's a couple of ideas there for you, hope they give you some insights.
I'd like to give a clear cut solution to your question, but such an answer always hides behind 'it depends'.
You are right, the "official" rule is not to share models. So if you have distributed dev-teams, I would stick to it.
However, I tend not to follow it strictly when I have components that are decoupled but developed by the same team, or by teams with high interaction ...

Pattern matching requirement

We have a requirement wherein we will have to periodically monitor a stream of data and sort the records into particular buckets. What would be the best language or tool to implement such a requirement?
You can use Kafka:
"Kafka™ is used for building real-time data pipelines and streaming apps"
https://kafka.apache.org/
You could use Scala if you have a very complicated pattern, but if it's simple, a message broker like RabbitMQ is a perfect match.
Based on my humble experience I would recommend Node.js: it works nicely with sockets and streams, you can implement things fast, and performance is good too (async I/O). If you are not familiar with it, think of it as a JS engine with socket/HTTP support and async I/O. If you need really fast or peculiar things you can even extend it with C++, but I don't think that's necessary. Have a look: https://nodejs.org/en/
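Whatever transport you pick, the core of the requirement is just classifying each record into a bucket. A broker-agnostic sketch (the rules here are made up; the same loop works whether records arrive from Kafka, RabbitMQ, or a socket):

```java
import java.util.*;
import java.util.regex.Pattern;

// Classify each record from a stream into named buckets by pattern matching.
public class StreamBucketer {
    private static final Map<String, Pattern> RULES = Map.of(
        "errors",   Pattern.compile("ERROR|FATAL"),
        "warnings", Pattern.compile("WARN"));

    private final Map<String, List<String>> buckets = new HashMap<>();

    public void accept(String record) {
        for (var rule : RULES.entrySet()) {
            if (rule.getValue().matcher(record).find()) {
                buckets.computeIfAbsent(rule.getKey(), k -> new ArrayList<>()).add(record);
                return;
            }
        }
        buckets.computeIfAbsent("other", k -> new ArrayList<>()).add(record);
    }

    public static void main(String[] args) {
        StreamBucketer b = new StreamBucketer();
        List.of("WARN disk at 80%", "ERROR db down", "user login").forEach(b::accept);
        System.out.println(b.buckets);
    }
}
```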

Is messaging a good implementation of Request/Reply?

JMS, and messaging in general, is really good at tying together disparate applications, and it forms the infrastructure of many ESB and SOA architectures.
However, say application A needs an immediate response from a service on application B, e.g. it needs the provisioning details of an order, or an immediate confirmation of some update. Is messaging the right solution for that from a performance point of view? Normally the client connects to the MoM on a queue; then a listener, which has to be free, picks up the message and forwards it to the server-side processor, which processes it and sends the response back to a queue or topic, where the requesting client follows the same process to pick it up. If the message size is big, the MoM has to factor that in as well.
This makes me wonder whether HTTP is a better way to access such services instead of going via the messaging route. I have seen lots of applications use a MoM like AMQ or TIBCO Rvd for immediate request/response - but is that bad design, or is there some fine-tuning or setting that makes it perform the same as HTTP?
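To make the mechanics concrete, here is what request/reply looks like in plain JMS 2.0 (the queue name is hypothetical and the ConnectionFactory wiring, which is provider-specific, is omitted); the temporary reply queue and the blocking receive are exactly the round trip described above:

```java
import javax.jms.*;

// Request/reply over JMS: send with JMSReplyTo set to a temporary queue,
// then block on receive() until the service on the other side answers.
public class JmsRequestReply {
    public static String request(ConnectionFactory cf, String body) throws JMSException {
        try (JMSContext ctx = cf.createContext()) {
            Queue service = ctx.createQueue("order.provisioning"); // hypothetical
            TemporaryQueue replyTo = ctx.createTemporaryQueue();

            TextMessage msg = ctx.createTextMessage(body);
            msg.setJMSReplyTo(replyTo);
            ctx.createProducer().send(service, msg);

            // This blocking receive is the extra hop vs. a direct HTTP call.
            Message reply = ctx.createConsumer(replyTo).receive(5_000);
            return reply == null ? null : ((TextMessage) reply).getText();
        }
    }
}
```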
It really depends on your requirements. Typically messaging services will support one or all of the following:
Guaranteed Delivery
Transactional
Persistent (i.e. messages are persisted until delivered, even if the system fails in the interim)
An HTTP connection cannot [easily] implement these attributes, but then again, if you don't need them, you could make the case that "simple" HTTP offers a simpler and more lightweight solution. (Emphasis on the "simple", because some messaging implementations will operate over HTTP.)
I don't think request/response implemented over messaging is, in and of itself, bad design. I mean, here's the thing: are you implementing both sides of this process? If not, and you already have an established messaging service that will respond to requests, then all other considerations aside, that would seem to be the way to go - and bypassing it to reimplement over HTTP because of some design notion would need some fairly strong reasoning behind it, in my view.
But the inverse is true as well. If you have an HTTP-accessible resource already, and you don't have any super stringent requirements that might otherwise suggest a more robust messaging solution, I would not force one in where it's not warranted.
If you are commencing totally tabula rasa and you must implement both sides from scratch... then... post another question here with some details! :)

Should an email messaging system be responsible for templating?

I'm writing a custom email-sending service for a client. The client also wants message templating, but they didn't specify whether they wanted it in the messaging service or not. So, I'm thinking about best practice here. Should a messaging service be responsible for templating as well? Or should the templating happen before the call to the messaging service? What have you done? What works better and makes the most sense?
This is easy to answer with a question: are you going to use the messaging service for sending all kinds of messages (with or without templates) or just templated ones? (i.e. the reusability of the messaging service's functionality).
You mentioned two solutions in your question. Let's call them solution A and solution B.
Since clients constantly change their minds, you might have to later change whichever solution you adopted. Your implementation must be easy to change later on, so you can choose which one to implement like this:
think that you have implemented solution A and you have to change it into B. How hard will it be and what will it involve? Let's call this result 1;
think that you have implemented solution B and you have to change it into A. How hard will it be and what will it involve? Let's call this result 2;
compare results 1 and 2, weighing pros and cons.
Choose the one with the most pros.
You could also opt for a solution C: make the messaging service send all kinds of messages (generic) and include loosely coupled, pluggable templating (more specific). Package them together and you get a specific tool that you can later split with ease, or add more templating implementations to if needed (see the sketch below).
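A sketch of what that loosely coupled, pluggable templating could look like (all names here are illustrative):

```java
import java.util.Map;

// The messaging service depends only on this small abstraction, so the
// templating implementation stays pluggable and easy to split out later.
interface TemplateRenderer {
    String render(String templateId, Map<String, String> model);
}

class EmailService {
    private final TemplateRenderer renderer;

    EmailService(TemplateRenderer renderer) {
        this.renderer = renderer;
    }

    // Templated path: render first, then deliver.
    void send(String to, String templateId, Map<String, String> model) {
        deliver(to, renderer.render(templateId, model));
    }

    // Generic path: the service can still send raw messages without templates.
    void deliver(String to, String body) {
        System.out.printf("to=%s body=%s%n", to, body); // stand-in for the SMTP/provider call
    }
}
```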
Just my 2 cents!
