Workflow for Pact Testing in Feature Branches

I'm currently experimenting with Pact and stumbled over a problem with the workflow and can't find a satisfying solution. So hopefully someone can help me. :-)
First of all, here's my current workflow for changes on the consumer side (a publish sketch follows the steps):
The consumer changes are uploaded to GitHub in a feature branch
Pact tests are run on the CI system and the resulting pact is uploaded to the pact broker with the tags [feature-branch-name] and verify_feature (currently I only use the latter one)
The consumer waits for the verification (using the can-i-deploy tool)
The pact broker triggers the provider via webhook (trigger: contract_content_changed)
The provider runs pact-verify for the latest version tagged verify_feature and uploads the result
The consumer checks the result; if the verification was successful, the branch can be merged
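For illustration, here is a minimal sketch of the publish step above, assuming a pact-js setup with the @pact-foundation/pact-node wrapper; the broker URL, credentials and tag names are placeholders:

// Sketch: publish the generated pact from CI, tagged with the branch name and verify_feature.
// Assumes @pact-foundation/pact-node and esModuleInterop; adapt paths and env vars to your CI.
import pact from "@pact-foundation/pact-node";

pact
  .publishPacts({
    pactFilesOrDirs: ["./pacts"], // where the consumer test wrote the pact file
    pactBroker: "https://broker.example.com", // placeholder broker URL
    pactBrokerUsername: process.env.PACT_BROKER_USERNAME,
    pactBrokerPassword: process.env.PACT_BROKER_PASSWORD,
    consumerVersion: process.env.GIT_COMMIT ?? "1.0.0", // usually the git SHA
    tags: [process.env.GIT_BRANCH ?? "feature-x", "verify_feature"],
  })
  .then(() => console.log("pact published"))
  .catch((err) => {
    console.error("pact publication failed", err);
    process.exit(1);
  });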
So far so good. The problem arises when the consumer introduces breaking changes:
After the consumer uploads the changes to GitHub, the process described above is executed and the provider verification fails, which in turn causes the consumer build to fail, as expected.
Now the necessary changes are made on the provider side. The provider runs pact-verify against the consumer version on the testing stage and, if successful, the new version is merged and deployed.
Now the new consumer version should be mergeable as well. Alas, it is not, because this version has not been verified again. And when I restart the CI job, the pact is re-uploaded to the pact broker, but since the pact content does not change, no webhook is triggered. The consumer version is never verified.
So, what am I doing wrong?

You need the new WIP pacts feature. It's currently under development and could be available for beta testing in pact-js (and other languages that wrap the pact-ruby-standalone) within days if you wanted to try it out. If you're using pact-jvm, you'll have to wait a little longer, but we may be able to provide a workaround. I've written a blog post on WIP pacts but haven't published it yet because the feature is not ready for public release. You can find the draft here: http://blog.pact.io/p/51906e22-ccce-486b-9993-c21794e557d4/ I'd love to get your feedback on it.
Hop on to slack.pact.io and @ me if you'd like to chat further about this.

Related

Schedule HTTP requests at point in time

I need to schedule actions (HTTP requests is enough) at a certain point in time. Every programmed request will only run once.
Sure I could implement this myself; saving the event to a database, then have an event-loop check if an action should be launched.
However, this is such a generic need that there must be an existing service for it; it feels like something I shouldn't implement myself. Any ideas where this can be found? I'm thinking one could just specify the HTTP request to be saved (URI, body, headers).
AWS sure has a way of doing this using CloudWatch Events with a cron expression configured for the specific point in time. But this is way too clunky IMO. Is there an existing service/solution for this?
Agenda-Rest is a solution that does exactly what I asked for. It has to be self-hosted, though, as there seems to be no commercial hosting of it. It's also not actively developed, which could very well be because it's pretty much feature-complete. After all, it's a small wrapper on top of the library Agenda.
There's an alternative, suggested in a GitHub issue of Agenda-Rest, called AgenDash, built on top of the same library. It is actively developed as of autumn 2022. It's primarily a UI on top of Agenda, but it has REST routes that can be called.
There are also several libraries in various languages that expose this functionality, given a persistence mechanism (a sketch using agenda follows the list):
agenda (nodejs + mongodb)
redbeat (python + redis)
db-scheduler (java + any rdbms)
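For example, here is a minimal sketch of a one-off HTTP request scheduled with agenda; the MongoDB connection string, target URL and payload are placeholders, and the import style depends on the agenda version:

// Sketch: persist and fire a single HTTP request at a fixed point in time with agenda.
// Assumes the "agenda" package (v4+) and Node 18+ for the built-in fetch.
import { Agenda } from "agenda";

const agenda = new Agenda({ db: { address: "mongodb://localhost/agenda-jobs" } });

// The job simply replays a stored HTTP request (url, method, headers, body).
agenda.define("send-http-request", async (job) => {
  const { url, method, headers, body } = job.attrs.data as {
    url: string;
    method: string;
    headers: Record<string, string>;
    body?: unknown;
  };
  await fetch(url, { method, headers, body: body ? JSON.stringify(body) : undefined });
});

(async () => {
  await agenda.start();
  // One-shot schedule: agenda stores the job in MongoDB and runs it once at the given time.
  await agenda.schedule(new Date("2023-06-01T12:00:00Z"), "send-http-request", {
    url: "https://example.com/callback", // placeholder target
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: { hello: "world" },
  });
})();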
I'm quite surprised that I can't find this functionality as a first class citizen in the public cloud providers.
Edit:
AWS introduced EventBridge Scheduler in November 2022. It does not allow for an HTTP request per se, but things like invoking a Lambda or posting a message to a queue are possible. It supports one-time schedules, so there is no need for a cron expression and no need to remove the schedule later, as mentioned in my question above.
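For completeness, a rough sketch of what a one-time schedule looks like with the AWS SDK's @aws-sdk/client-scheduler package; the ARNs and names are placeholders, and the exact parameter shape should be checked against the current API docs:

// Sketch: create a one-time EventBridge Scheduler schedule that sends a message
// to an SQS queue at a fixed point in time. All ARNs and names are placeholders.
import { SchedulerClient, CreateScheduleCommand } from "@aws-sdk/client-scheduler";

const client = new SchedulerClient({ region: "eu-west-1" });

await client.send(
  new CreateScheduleCommand({
    Name: "one-off-example",
    // "at(...)" is the one-time schedule expression, so no cron is needed.
    ScheduleExpression: "at(2023-06-01T12:00:00)",
    FlexibleTimeWindow: { Mode: "OFF" },
    Target: {
      Arn: "arn:aws:sqs:eu-west-1:123456789012:my-queue", // placeholder queue ARN
      RoleArn: "arn:aws:iam::123456789012:role/scheduler-role", // placeholder role
      Input: JSON.stringify({ hello: "world" }),
    },
  })
);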

How to write a pact test for an interface which is not used by the most recent consumer anymore?

I have a weird situation and I have no idea how to handle it:
Let's say I have a consumer C and a provider P.
We did not use Pact when we implemented these services. So here is the problem: an older version of C needs a specific interface of P, where it provides some information via a URL parameter. But the newest version of C is not even able to send this kind of request. Since there are still old versions of C out there, we need to ensure this interface is still provided by P and working correctly.
I'm using Pact-JVM, and it looks like I have to send this request somehow or the test will fail. I'm struggling to create a meaningful test for this scenario. I could create a dummy that sends the request, but that would not test anything. Does somebody have any idea what to do in this situation?
I would suggest that you check out the commit of the old version of the consumer, make a new branch from it, add the pact test to that branch, and publish it as normal.
If all else fails (and I would never normally suggest this), you could take your most recently generated pact, hand-modify it to match what the old consumer expects, and then manually publish it to the Pact Broker.
If I understand correctly, there are two (or more) versions of C in production with different versions of a contract on P. One option is to publish the old C with a separate name and verify it as normal. Another option is to publish a pact with the same name.
If you do the latter, you'll need to ensure you tag both versions as prod; then, on the provider side, you can verify all prod versions of the contract using consumer version selectors.
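Here is a rough sketch of what that provider-side verification could look like. The question uses Pact-JVM, which has equivalent consumer version selectors; the sketch below uses pact-js purely for illustration, with placeholder names and URLs:

// Sketch: verify the pacts of consumer versions tagged "prod" against the running provider.
// Assumes @pact-foundation/pact; Pact-JVM exposes the same selectors in its own configuration.
import { Verifier } from "@pact-foundation/pact";

new Verifier({
  provider: "P", // placeholder provider name
  providerBaseUrl: "http://localhost:8080", // provider instance under test
  pactBrokerUrl: "https://broker.example.com", // placeholder broker URL
  // Select the latest consumer version tagged "prod"; see the consumer version
  // selector docs for selecting every currently deployed prod version.
  consumerVersionSelectors: [{ tag: "prod", latest: true }],
  publishVerificationResult: true,
  providerVersion: process.env.GIT_COMMIT,
})
  .verifyProvider()
  .then(() => console.log("verification complete"));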

Pact CDC Testing Best Practice

I've read articles like this one that suggest verifying contracts on the provider side that exist in a consumer's feature branch, in effect allowing the contract to be "pre-verified" before being merged to master. However, I've read other documentation from the Pact team stating the opposite. In The Steps to Reaching Pact Nirvana, it states "To keep a green build in your provider’s CI, rather than verifying the latest overall pact, it should verify the pact for the latest version tagged with “master” in the CI." Here, I'm assuming the words "latest overall pact" mean the pact that might exist in a consumer's feature branch that was published to the Pact Broker.
I'm confused. So as to not "make provider teams unhappy", as stated in The Steps to Reaching Pact Nirvana, what would be the purpose of ever publishing a pact from a consumer's feature branch if the provider would never verify that pact and only ever verify "master" and "production" pacts? Another way to ask this: when should pacts be published and verified from feature branches, rather than solely from the master branches of consumers and providers against the "master" and "production" pacts?
Just noting that this is the latest guide on "effective Pact setup": https://docs.pact.io/best_practices/pact_nirvana. Hopefully this is clearer.
But in case it's not, pre-verifying feature branches is definitely a core feature of the Broker and something we would want to do. Once a change is in master, in 99% of the cases it should be smooth sailing (i.e. compatible). It's standard practice to either a) have a webhook that can trigger the pact verification step of a provider build to verify the new feature or b) have the corresponding feature branch in the provider verify the pact in CI when a change is pushed.
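For reference, a webhook of type a) is just a JSON document POSTed to the broker's /webhooks resource, roughly like the sketch below; the consumer/provider names and CI URL are placeholders, and ${pactbroker.pactUrl} is one of the broker's built-in template parameters:

{
  "description": "Trigger provider verification when pact content changes",
  "consumer": { "name": "SomeConsumer" },
  "provider": { "name": "SomeProvider" },
  "events": [{ "name": "contract_content_changed" }],
  "request": {
    "method": "POST",
    "url": "https://ci.example.com/job/provider-verify/build",
    "headers": { "Content-Type": "application/json" },
    "body": { "pactUrl": "${pactbroker.pactUrl}" }
  }
}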
There is also a new feature coming out soon called "pending pacts", which will improve this situation drastically too, effectively allowing new contracts to not break a provider's build while still providing feedback to consumers on whether the change is supported.

Symfony Messenger - checking if queue is empty

We are migrating our architecture to take advantage of the Symfony Messenger component. What I am dealing with at the moment is adjusting the deploy process of our application.
The Symfony documentation suggests that the workers should be restarted on deploy to pick up the new code. Makes sense. My problem is that this does not address the issue when upgrading the deployed code. Consider hypothetical versions 1 and 2.
Version 1 works with and understands a certain set of messages.
Version 2 adds more message types and changes the names/structure/whatever of some of the message types defined in version 1.
During deploy, in order to be sure that all messages were processed and there are no incompatibilities when the new version goes live, this is the process that makes intuitive sense to me:
Stop accepting new messages to the queue (put the site to a "maintenance mode")
Let the workers finish processing pending messages in the queue
Deploy new code
Restart workers
Start accepting new messages
The problem I am facing is that I can't see any way to check whether the queue is empty or not.
Is my deploy scenario correct? How is the deploy usually done in applications using the Symfony Messenger component (or any message queue, for that matter)? Is ensuring backward compatibility for all message types the only way to go?
This is an interesting challenge.
Version 1 (new handlers for the same messages you sent out in the previous release)
For this you could use Middleware and Stamps to add a version header to the messages sent over a transport. Then, on the consuming side, your handler can watch for the version stamp and check whether it's responsible for this message or not. The upside of this approach is that you can change the handler logic without changing the message itself, just by having the new code add a new version to the same message types you sent out before.
This can easily be introduced to an existing application by having your existing handlers look for the stamp, assume they are responsible if it's not there, and bail out otherwise. When a new version wants to introduce a new handler, it will only work with whatever version you specify and ignore any messages without this header.
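A rough, language-agnostic sketch of that dispatch idea (in a real Symfony app this would be a custom stamp implementing StampInterface plus middleware, not the TypeScript below; all names are made up):

// Illustration only: routing by a version header carried with the message.
// In Symfony Messenger the "header" would be a custom stamp added by middleware.
interface VersionedEnvelope {
  version?: number; // absent on messages produced by the old release
  payload: unknown;
}

function legacyHandler(envelope: VersionedEnvelope): boolean {
  // Old handler: responsible only when there is no version header at all.
  if (envelope.version !== undefined) {
    return false; // bail out, a newer handler owns this message
  }
  // ... old handling logic ...
  return true;
}

function v2Handler(envelope: VersionedEnvelope): boolean {
  // New handler: only handles messages explicitly marked as version 2.
  if (envelope.version !== 2) {
    return false;
  }
  // ... new handling logic ...
  return true;
}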
Version 2 (Modifying data structure)
One approach to this problem would be to only make backwards-compatible changes to your messages and handlers between each release. So, for example, assume your message looks something like this:
{
"foo": 123
}
and you want to change it to something like this:
{
"bar": "123"
}
In that case you would first release an intermediate version containing both the old and the new field, and after a while you can release the version where you remove the old logic. The intermediate version of the message might look like this:
{
"foo": 123,
"bar": "123"
}
You would then have a handler that checks for bar first and falls back to using foo and the old logic if bar is missing. This way you can make sure that both new and old messages are processed by your new application, and by adding logging you can easily see when the old code is no longer called, making it safe to remove the old property and logic in an upcoming release.
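A minimal sketch of that fallback, again language-agnostic rather than actual Symfony code; processBar stands in for whatever the real handler does:

// Illustration of the intermediate handler: prefer the new field, fall back to the old one.
interface TransitionalMessage {
  foo?: number; // old field, to be removed in a later release
  bar?: string; // new field introduced by the intermediate release
}

function processBar(bar: string): void {
  // ... the actual business logic ...
}

function handle(message: TransitionalMessage): void {
  if (message.bar !== undefined) {
    processBar(message.bar);
  } else if (message.foo !== undefined) {
    console.warn("legacy message without 'bar' received"); // log to see when old messages stop arriving
    processBar(String(message.foo));
  } else {
    throw new Error("message contains neither 'foo' nor 'bar'");
  }
}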
The main drawback of this approach is that you have to catch breaking changes in advance, which requires a thorough review and testing process. Luckily, failure transports can catch problems when your handler fails, but if a message cannot be properly decoded it might be thrown out instantly, so be careful.
I don't think the Messenger component gives any help with working out the queue length - at least none that I have found so far.
So the answer depends on what type of transport you are using. For example, with the Doctrine transport you can just count the number of rows in the DB table, etc.
The problem with that approach is that you make your code less portable/configurable - if your code expects to count rows in a DB table, it won't work with the Redis transport, and it will break if the table name changes.
In our project we ended up with a queue counting service that looks into the Messenger configuration and decides how to count the items in the queue.
As for the rest of the question about the deployment, other answers here are good. I'll sum up what we learned when running a clustered Symfony application on AWS ECS with blue/green deployment:
You could treat your message handlers like you would DB migrations: any two adjacent versions must work with the same schema - so any two message handler versions must be able to work with the same message format.
Turn the handlers off before running a deployment, deploy the new version and turn the handlers on again. If you have multiple versions, you will need to do multiple deployments, one version at a time.
You should know before each deployment whether you can just roll out multiple versions at once because there are no breaking changes, or not.
If your environment autoscales, you also need to ensure the handlers are not started on any additional nodes that appear during the deployment and are still serving the older version of the application.
We use a boolean flag in Redis to allow nodes to work out whether the handlers should be started or not - that flag is set to "false" just before we halt our current handlers at the beginning of the deployment.
--
If there are any better ways to do this, I'm all ears.
Good luck!

Connect to web service fails

I have a web application which fetches information from a web service. It works fine in our development environment.
Now I'm trying to get it to work in a customer's environment instead, where the web service is from a third party. The problem is that the first time the application tries to fetch information it cannot connect to the web service. When it tries again just seconds later it works fine. If I wait a couple of hours and try again, the problem occurs again.
I'm having a hard time believing this is a programming error, as our customer and the maker of the web service think. I think it has to do with one of the IIS servers or some security mechanism in the network. But I don't have much to go on and can't reproduce the error in our development environment.
Is it failing with a timeout exception when you try to connect the first time? If yes, this could be the result of the start-up time of the service.
I have a rule: "Always assume it's your fault until you can demonstrate otherwise". After over 20 years, I still stick to it.
So there are therefore two cases:
The code is broken
There is a specific issue with the live environment
Since you want to demonstrate that the problem is (2), you need to test calls to the service, from the live environment, using something other than your application. Exactly what will depend on the nature of the web service, but we've found SoapUI to be helpful.
The other thing that's not clear is whether you are making calls to the live service from your development environment - if, in testing, you're not communicating with the same instance of the service then that's an additional variable that will need to be considered (and I appreciate that you're not always given the option).
Lastly, @Krishna is right: there may be a spin-up issue with the remote service (hence my question about whether you're talking to the same service from your dev environment) and, horrible as it is, the solution in the first instance may simply be to find a way to allow for this!
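One simple way to "allow for this" is to retry the first call a few times with a delay, so the remote service has time to spin up. A rough, language-agnostic sketch; the real application would do this in its own stack, and all names are made up:

// Sketch: retry a call a few times to absorb the remote service's spin-up time.
async function callWithRetry<T>(
  call: () => Promise<T>,
  attempts = 3,
  delayMs = 5000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await call();
    } catch (err) {
      lastError = err; // likely a timeout while the service warms up
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}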
The error was the web service from the third party. The test stub we got to develop against was made in C# and returned only dummy answers. The web service in the customer environment actually connected to a COM object. The first communication with the COM object after a longer wait took almost a minute.
Good for me that the third party developers left the source code on the customer servers...
