Testing async microservices with no API

We have 5 microservices (more on the way) that communicate with each other asynchronously. 3 of these microservices do not have any API: they consume data from a message queue, do some processing, and write data to another queue. The other 2 do have APIs; they also consume data from the queues, but they send responses back to the caller.
Given that, for testing the service interactions, correctness of contracts, and the end-to-end flow:
What would be the best way to test the asynchronous services that read from and write to queues?
Would consumer-driven contract tests be applicable anywhere?
I feel end-to-end testing in production is possible, but can something more granular and effective be done?

First, let's correct one thing - you do have APIs. The messages that the "API-less" services read must have some defined content or format. That's an API. You should be testing it, both positive and negative.
Before you get to whole-system testing (in a test or staging environment that mimics your production environment) your testing should probably be in layers, much as in any other system.
Unit tests to test the behavior of each class. In your message-handling classes, for example, call each one with messages that prompt specific actions and test that it behaves correctly. These tests can usually be run very quickly, so it's easy to run them often while developing code.
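For example, a minimal sketch of such a unit test in Kotlin (the handler class and message shape are made up for illustration, not taken from the question):

import org.junit.Test
import kotlin.test.assertEquals
import kotlin.test.assertFailsWith

// Hypothetical message handler under test - name and payload are illustrative only.
class OrderMessageHandler {
    fun handle(payload: String): String {
        require(payload.isNotBlank()) { "empty message" }  // negative case: reject bad input
        return "PROCESSED"                                 // real code would parse and act on the payload
    }
}

class OrderMessageHandlerTest {
    private val handler = OrderMessageHandler()

    @Test
    fun `a well-formed message is processed`() {
        assertEquals("PROCESSED", handler.handle("""{"orderId": 42}"""))
    }

    @Test
    fun `a malformed message is rejected`() {
        assertFailsWith<IllegalArgumentException> { handler.handle("") }
    }
}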
Integration tests to make sure your interaction with next-level external systems works. You can use, for example, Testcontainers to run isolated instances of the queueing system that your services interact with.
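A rough sketch of what that could look like in Kotlin with Testcontainers' RabbitMQ module, assuming your services speak AMQP (queue name and payload are invented):

import com.rabbitmq.client.ConnectionFactory
import org.junit.Test
import org.testcontainers.containers.RabbitMQContainer

class QueueIntegrationTest {
    @Test
    fun `service consumes from the input queue`() {
        // Spin up a disposable RabbitMQ broker in Docker just for this test.
        RabbitMQContainer("rabbitmq:3-management").use { rabbit ->
            rabbit.start()
            val factory = ConnectionFactory().apply { setUri(rabbit.amqpUrl) }
            factory.newConnection().use { conn ->
                conn.createChannel().use { channel ->
                    channel.queueDeclare("input-queue", false, false, false, null)
                    channel.basicPublish("", "input-queue", null, """{"orderId": 42}""".toByteArray())
                    // ...point the service under test at rabbit.amqpUrl and
                    // assert that the expected message appears on its output queue...
                }
            }
        }
    }
}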
Exactly what form these tests take depends on the languages and frameworks your system is built with. I briefly looked at the Pact tool you referenced, and it takes a similar approach.

Looks like with Pact it is possible to do message-based contract testing. I think I have a path forward with this.
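A consumer-side sketch of what that can look like with pact-jvm's MessagePactBuilder in Kotlin (service names and payload are placeholders, not from the question):

import au.com.dius.pact.consumer.MessagePactBuilder
import au.com.dius.pact.consumer.dsl.PactDslJsonBody

// Stand-in for the real message-handling code under test.
fun handleOrderEvent(payload: String): Boolean = payload.contains("orderId")

fun main() {
    // Describe the message this consumer expects to find on the queue.
    val pact = MessagePactBuilder
        .consumer("order-processor")
        .hasPactWith("order-events")
        .expectsToReceive("an order created event")
        .withContent(PactDslJsonBody().integerType("orderId").stringType("status"))
        .toPact()

    // Replay each expected message through the consumer's own handler;
    // the generated pact file is later verified against the producing service.
    pact.messages.forEach { message ->
        check(handleOrderEvent(String(message.contentsAsBytes())))
    }
}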

Related

Microservice to Microservice Architecture using gRPC: .NET Core

So I have this microservice architecture where there is an ApiGateway and 2 microservices, i.e., Configuration.API and API-1. Configuration.API is mainly responsible for parsing the JSON request, accessing the DB to update the Status tables and fetch required data; it even adds more values to the JSON request and sends it to API-1. API-1 is responsible for just generating a report based on the JSON passed.
Yes, I could merge Configuration.API into API-1 and make it a single service/container, but the requirement is not to merge, and instead to have two different components: one component purely for fetching the data and updating the status, while the other just generates the reports.
So here is my question:
Should I use gRPC for Configuration.API, or is there a better way to achieve this?
Thank you.
RPC is synchronous communication, so you have to come up with a strong reason to use it for service-to-service communication. It brings fast, performant communication to the table, but it also couples the services. If you insist on using RPC, it is better to use MassTransit to implement the RPC in a less coupled way. However, in most cases asynchronous event-based communication is recommended to avoid coupling (in that case, look at the CAP theorem, sagas, and circuit breakers).
Since you said
but the requirement is not to merge and create two different components
that is your reason; and based on the fact that it will
also fetch required data, it even adds up more values to the JSON request and send it to the API-1
I think the second one makes more sense. However, I can't understand why you changed the database's position, since you said the configuration service is responsible for that.
If your report service needs to request huge amounts of data to generate a report, you have to think about the design. There is no more detail on your domain here, so there cannot be an absolute answer, but consider reducing the data at insertion or request time, some sort of pre-calculation if you can, and caching responses.

What will happen if a Django request gets too long?

I am trying to figure out a microservices architecture for my Django RESTful web app. This engineering app will involve heavy mechanical and geometrical calculations in some places. Due to Django's (actually Python's) slow calculation nature, I will build the main app with Django (because of its killer features), and I also want to design microservices as side functions in C# ASP.NET (due to its calculation power). This will allow me to push heavy calculations onto them, so I can keep the Django main app as just a bridge. So my questions:
When the Django main app sends a request to a microservice, what will happen if a calculation takes too long on the microservice? Let's say 10 seconds; will my whole app be frozen for those 10 seconds?
I want to use async functions on ASP.NET; will that negatively affect the sync Django main app?
For long-running processes, you shouldn't do the work in the request-response thread. You should use a message queue like RabbitMQ and use Celery to do it asynchronously; this frees up the request-response thread. Examples: image processing, file processing, etc. You can do heavy computations in Python too; Python has all you need for computation. I have used Python and OpenCV for heavy image processing. It's up to you how you design your service.
Additionally, in a microservice architecture, have you thought of using something like a pub-sub design instead of HTTP request-response? Service-to-service interaction is ideal for a publish-subscribe architecture.
And my 2 cents: you can do everything with Django. Django has everything you need; you might not need ASP.NET.

Microservices extensions/plugins architecture

We are building a platform based on microservices. The platform will provide a number of basic functions that will be used in various independent projects.
We need to come up with such an architecture that will allow us to extend the basic functionality and interact with existing services. The main task is to ensure that the code of the main services remains unchanged, and custom solutions based on the platform can be easily reused.
We are considering several options. For example, there is a certain service "foo" which provides the functions foo1 and foo2. To extend the functions, we can create an independent "foobar" service, put it in front of the foo service, accept API requests, execute custom functions, and then redirect the request to foo. This results in a kind of intermediary service, which acts as the main link between the functions specific to a given project and the main platform. The advantage of this approach is complete independence from the code base of the basic services. The main disadvantage is the complexity of implementation and the need to heavily fragment the functions of the main service.
The second option under consideration is similar to the approach that is often used in monolithic applications - the hook system, which allows you to override the behavior of the system. For example, you can create an independent service that will connect events and subscribers.
This approach is more flexible, but at the same time it is still quite difficult to implement. The main disadvantage of this approach is synchronous blocking network calls.
The third option we are considering is building the microservices themselves in such a way that additional modules can be added to them, so that services can be customized at the build stage. The main code remains unchanged, but inside the process the already mentioned scheme of hooks and events is implemented (at the code level of individual services). The benefit is ease of implementation; the downside is that it is very difficult to implement customizations when multiple services are involved.
Perhaps we are reinventing the wheel and good solutions to this problem already exist. If you know of any, or if you have a good idea about possible ways to solve this problem, please share.
I don't think there is a general answer to this problem. The third solution seems pretty good. You might take advantage of the chain-of-responsibility or request-response pipeline paradigms to implement the open/closed principle fairly easily, as sketched below. This should ensure that you can add or modify features in your foo service without touching the existing code.
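As a bare-bones illustration of that pipeline idea (Kotlin; all names are invented for the example), each extension is a step that can act on the request before delegating, so project-specific behaviour is added by registering a step rather than editing foo's code:

// Minimal request-response pipeline sketch (hypothetical types and names).
data class Request(val payload: String)
data class Response(val body: String)

fun interface Step {
    fun invoke(request: Request, next: (Request) -> Response): Response
}

class Pipeline(
    private val steps: List<Step>,
    private val terminal: (Request) -> Response  // the unchanged core "foo" handler
) {
    fun handle(request: Request): Response {
        fun call(i: Int, req: Request): Response =
            if (i == steps.size) terminal(req)
            else steps[i].invoke(req) { next -> call(i + 1, next) }
        return call(0, request)
    }
}

fun main() {
    // A custom "foobar" behaviour is just an extra step; foo itself is untouched.
    val pipeline = Pipeline(
        steps = listOf(Step { req, next -> next(req.copy(payload = req.payload.uppercase())) }),
        terminal = { req -> Response("foo handled: ${req.payload}") }
    )
    println(pipeline.handle(Request("hello")).body)  // prints: foo handled: HELLO
}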
If you are looking for something that tackles the whole architecture rather than a single microservice, you might want to have a look at event-driven microservice design.
This design approach is similar to your option 2, with the difference that it uses a message broker and async communication to let the microservices communicate.
There are a lot of advantages, like more resiliency and decoupling of the services.
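A tiny sketch of that broker-based communication, using RabbitMQ's Java client from Kotlin (queue name and payload are invented; in reality the publisher and subscriber would live in different services):

import com.rabbitmq.client.ConnectionFactory
import com.rabbitmq.client.DeliverCallback

fun main() {
    val factory = ConnectionFactory().apply { host = "localhost" }
    factory.newConnection().use { conn ->
        conn.createChannel().use { channel ->
            channel.queueDeclare("foo.events", true, false, false, null)

            // The extension service subscribes to foo's events instead of wrapping its API.
            val onEvent = DeliverCallback { _, delivery ->
                println("custom handler saw: ${String(delivery.body)}")
            }
            channel.basicConsume("foo.events", true, onEvent) { _ -> }

            // foo publishes events without knowing who is listening.
            channel.basicPublish("", "foo.events", null, """{"event":"foo1.executed"}""".toByteArray())
            Thread.sleep(500)  // toy example: give the async consumer a moment
        }
    }
}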

Using pact for a service with a sequence of requests and responses

Here is my use case:
My Service is: ServiceA.
It depends on the following services: ServicesB and ServiceC.
ServiceA sends a POST request to ServiceB with some authentication details (username + password), and ServiceB replies with a JSON document that has a sessionId.
Request:
POST /authenticate
{
"username": "_at_api",
"password": "xxx"
}
Response:
{
"sessionId": "axy235da7ad5a24abeb3e7fbb85d0ef45f"
}
The above sessionId is used for all api calls from ServiceA to ServiceC.
ServiceA asks ServiceC to start a job using a POST request, and ServiceC returns a job id (alphanumeric).
Request:
POST /jobs/local/start
Header: Authentication: axy235da7ad5a24abeb3e7fbb85d0ef45f
{
...
}
Response:
{
"status": "RUNNING",
"jobId": "a209016e3fdf4425ea6e5846b8a46564abzt"
}
ServiceA keeps polling ServiceC for the completion of the job using the jobId returned above:
Request:
GET /jobs/status/a209016e3fdf4425ea6e5846b8a46564abzt
Header: Authentication: axy235da7ad5a24abeb3e7fbb85d0ef45f
Response:
{
"status": "RUNNING"
}
The polling continues until the status is returned as COMPLETED or FAILED.
Response:
{
"status": "COMPLETED"
}
How can I use Pact to test serviceA?
My plan is to use only unit tests and contract tests to achieve code coverage of more than 90%. Is that a good idea, or do I need separate tests using virtual servers? My understanding is that Pact is a superset of a virtual server (example: Mountebank) and everything a virtual server can do, Pact can do, so I do not need separate component testing. Also, it looks like contract testing completely replaces end-to-end testing, so I do not need end-to-end testing either. Is this right?
Also, it looks like Contract testing completely replaces end-to-end testing, so I do not need end-to-end testing as well. Is this right?
No. Contract testing is not functional testing (see this excellent article, with the same title)
What is contract testing?
Contract testing is about testing whether two components are able to communicate.
Consider a contract between a house and a postal worker: the postal worker needs to know that they can approach the house and deliver post (and that sometimes they may be unable to do this - perhaps the postbox is full).
From the postal worker's perspective, the contract looks like this:
Find postbox (with a success and fail case)
Deliver post to postbox (with a success and fail case)
Note that the postal worker doesn't know anything about the implementation of the postbox. Perhaps there are multiple reasons that delivering to the postbox might fail - maybe the door is jammed, maybe the box is full, maybe the post is too big to fit in it.
In this hypothetical case, our postal worker doesn't do anything different in those cases - they just fail to deliver. So, from the perspective of the contract, the reason for the failure is irrelevant. The contract - that the worker can try to deliver post, and that they can be successful or unsuccessful - can be tested without enumerating all the possible reasons for failure.
See the article linked above for a more detailed example, but to quote from the end of it:
Contracts should be about catching:
bugs in the consumer
misunderstanding from the consumer about end-points or payload
breaking changes by the provider on end-points or payload
A really nice feature of Pact is that you can test multiple contracts against only the bits of communication that they rely on.
Note that the consumer contract tests only describe communication that the consumer needs to make or understand. A contract is not (necessarily) a full API description.
Ok, but why can't I use contract tests for end to end tests?
It's possible to use a tool like Pact to replace your end-to-end tests. However, although contract testing has a lot of similarities with the features you'd need for end-to-end testing, contract testing (and Pact in particular) isn't designed for end-to-end testing.
If you're doing end-to-end testing by extending an existing consumer's tests (say, adding all the possible reasons for failure to the postal worker's tests), then it's no longer clear what the contract means. The contract now describes how the communication works along with the behaviour.
This will cause problems when you start adding more consumers (say, a parcel courier) - do you duplicate all of the failure cases in all the consumers, or do you just keep them in the original consumer tests? If you duplicate the tests, then you have a lot of things to change if you change the behaviour of the provider - and your tests will be brittle. If you don't duplicate the tests, then your end-to-end tests are stuck in one consumer - with all the problems of losing them if you decommission that consumer.
With pure contract tests, you (ideally) don't have to change anything if you're adding more possible reasons for failure that the consumers already understand.
There are many other reasons that you'll have headaches if you try this (your tests start relying heavily on exact data, and the meaning of failed verifications and can-i-deploy hooks would change if your tests are end-to-end tests), but the key takeaway is that Pact is not designed as a replacement for end-to-end testing. You can use it that way, but it's not advisable and is likely to lead to painful maintenance.
How can I use Pact to test serviceA?
You describe each request separately, using Pact provider state as the prerequisite for each request.
Additionally, you may find the question on PACT - Using provider state helpful.
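For example, a consumer-test fragment for the polling call might look like this in pact-jvm's Kotlin DSL (a sketch; the state wording and the handling of the Authentication header are illustrative):

import au.com.dius.pact.consumer.dsl.PactDslWithProvider
import au.com.dius.pact.core.model.RequestResponsePact

// Each interaction stands alone: `given(...)` names the provider state that
// ServiceC must set up before this interaction is verified, instead of
// relying on the earlier authenticate/start requests having actually run.
fun jobStatusPact(builder: PactDslWithProvider): RequestResponsePact =
    builder
        .given("a job with id a209016e3fdf4425ea6e5846b8a46564abzt is RUNNING")
        .uponReceiving("a poll for job status")
        .path("/jobs/status/a209016e3fdf4425ea6e5846b8a46564abzt")
        .method("GET")
        .headers(mapOf("Authentication" to "axy235da7ad5a24abeb3e7fbb85d0ef45f"))
        .willRespondWith()
        .status(200)
        .body("""{"status": "RUNNING"}""")
        .toPact()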
Although this comes late, for everyone who stumbles across this:
The Pact spec (v3.0) only supports a single request per interaction (but multiple interactions per consumer-provider pair). On the provider side, every interaction results in an individual test. So not only is having a sequence of requests run on the provider side conceptually a bad idea, it also doesn't work technically.
On the consumer side, though, you have an alternative - which I would encourage you not to misuse for end-to-end tests. In my case, I had a class which implemented a template/strategy pattern that involved multiple requests and didn't allow injecting intermediate states on the consumer side; it required all requests to have valid responses.
In that case, pact-jvm-consumer-junit (= JUnit 4) allows specifying multiple PactVerification annotations on a single test method (Kotlin in my case):
@Test
@PactVerifications(
    value = [
        PactVerification("Service B", fragment = "authenticateOk"),
        PactVerification("Service C", fragment = "jobStartOk"),
        PactVerification("Service C", fragment = "jobStatusOk")
    ]
)
fun `test successful job execution`() {
    // Exercise the class that performs the full authenticate/start/poll sequence here.
}
On the provider side, all fragments above are executed as individual tests, thus they need a proper state as specified in the answer above.
I got this approach running in JUnit 4. I haven't found a quick way to mimic it in JUnit 5, though.

PACT: How to guard against consumer generating incorrect contracts

We have two microservices, Provider and Consumer, which are built independently. The Consumer microservice makes a mistake in how it consumes the Provider service (for whatever reason), and as a result an incorrect pact is published to the Pact Broker.
The Consumer service build is successful (and can go all the way to release!), but the next Provider service build will fail for the wrong reason. So we end up with a broken Provider service build and a broken release of the Consumer.
What is the best practice to guard against situations like this?
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Thanks!
This is the nature of consumer-driven contracts - the consumer gets a significant say in the API!
As a general rule, if the contract doesn't change, there is no need to run the Provider build, albeit there is currently no easy way to know this in the Broker (see feature request https://github.com/bethesque/pact_broker/issues/48).
As for solutions you could use one or more of the below strategies.
Effective use of code branches
It is of course very important that new assumptions on the contract be validated by the Provider before the Consumer can be safely released. Have branches tested against the Provider before you merge into master.
But most importantly - you must be collaborating closely with the Provider team!
Use source control to detect a modified contract:
If you also check the master pact files into source control, your CI build can act conditionally: if the contract has changed, you must wait for a green Provider build; if not, you can safely deploy!
Store in separate repository
If you really want the provider to maintain control, you could store contracts in an intermediate repository or file location managed by the provider. I'd recommend this as a last resort, as it negates much of the collaboration Pact intends to facilitate.
Use Pact Broker Webhooks:
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Yes, this is possible using webhooks on the Pact Broker. You could trigger a build on the Provider as soon as a new contract is submitted to the server.
You could envisage this step working with options 1 and 2.
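As a sketch, a Broker webhook definition is JSON along these lines (the URL is a placeholder for your CI trigger endpoint):

{
  "events": [
    { "name": "contract_content_changed" }
  ],
  "request": {
    "method": "POST",
    "url": "https://your-ci.example/job/provider-verification/trigger",
    "headers": { "Content-Type": "application/json" }
  }
}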
See Using Pact where the Consumer team is different from the Provider team in our FAQ for more on this use case.
You're spot on; that is one of the things currently lacking in the Pact workflow, and it's something I've been meaning to work towards once a few other things align.
That being said, this doesn't solve your current problem in the meantime, so I'm going to suggest a potential workaround in your process. Instead of running the tests for the consumer, having them pass, and releasing straight away, you could run the tests on the consumer, then wait for the provider tests to come back green before releasing the consumer/provider together. Another way would be to version your provider/consumer interactions (API versioning) so that you can release the consumer beforehand, but it isn't "turned on" until the correct version of the provider is released.
None of these solutions are great and I wholeheartedly agree. This is something that I'm quite passionate about and will be working on soon to fix the developer experience with pact broker and releasing the consumer/provider in a better fashion.
Any and all comments are welcome. Cheers.
I think the problem might be caused by the fact that contracts are generated on the consumer side. It means that consumers can modify those contracts however they want, but in the end the producer's build will suffer due to incorrect contracts generated by consumers.
Is there any way for contracts to be defined by the producer? I think the producer is responsible for maintaining its own contracts. For instance, in the case of Spring Cloud Contract it is recommended to keep contracts in the producer's sources (e.g., in the same git repo as the producer's source code) or in a separate SCM repo that can be managed by the producer and consumer together.
