How to write a Pact test for an interface that is no longer used by the most recent consumer?

I have a weird situation and I have no idea how to handle it:
Let's say I have a consumer C and a provider P.
We did not use Pact when we implemented these services. So here is the problem: an older version of C needs a specific interface of P, where it provides some information via a URL parameter. But the newest version of C is not even able to send this kind of request. Since there are still old versions of C out there, we need to ensure this interface is still provided by P and working correctly.
I'm using Pact-JVM and it looks like I have to send this request somehow, or else the test will fail. I'm struggling to create a meaningful test for this scenario. I could create a dummy that sends the request, but that would not test anything. Does anybody have an idea what to do in this situation?

I would suggest that you check out the commit of the old version of the consumer, make a new branch from it, add the pact test to that branch, and publish it as normal.
If all else fails (and I would never normally suggest this) you could take your most recently generated pact, hand modify it to match what the old consumer expects, and then manually publish it to the Pact Broker.
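If you do go the manual route, the pact-broker client CLI can publish a pact file directly (the broker URL, version number, and tag below are placeholders; use your own values):

```
pact-broker publish ./pacts \
  --consumer-app-version 1.0.0-old-consumer \
  --broker-base-url https://your-broker.example.org \
  --tag prod
```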

If I understand correctly, there are two (or more) versions of C in production with different versions of a contract on P. One option is to publish the old C with a separate name and verify it as normal. Another option is to publish a pact with the same name.
If you do the latter, you'll need to ensure you tag both versions as prod, and then on the provider side you can verify all prod versions of the contract using consumer version selectors.
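The consumer version selectors sent to the broker might then look something like this (the prod tag name is an assumption; use whatever tag you publish with):

```json
[
  { "tag": "prod", "all": true },
  { "latest": true }
]
```

The first selector picks up every consumer version tagged prod (the old consumers still in production), while the second picks up the latest published version, so the provider verifies against both.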

Related

Workflow for Pact Testing in Feature Branches

I'm currently experimenting with Pact and stumbled over a problem with the workflow and can't find a satisfying solution. So hopefully someone can help me. :-)
First of all, that's my current workflow for changes on the consumer side:
The consumer changes are uploaded to Github in a feature branch
Pact tests are run on the CI system and the resulting pact is uploaded to the pact broker with the tags [feature-branch-name] and verify_feature (currently I only use the latter one)
The consumer waits for the verification (using the can-i-deploy tool)
The pact broker triggers the provider via webhook (trigger: contract_content_changed)
The provider runs pact-verify for the latest version tagged verify_feature and uploads the result
The consumer checks the result, if verification was successful the branch can be merged
So far so good. The problem arises when the consumer introduces breaking changes:
After the consumer uploads the changes to Github, the process described above is executed and the provider verification fails which in turn causes the consumer build to fail as expected.
Now the necessary changes are made on the provider side. The provider runs pact-verify against the consumer version of the testing stage and - if successful - the new version is then merged and deployed.
Now the new consumer version should be able to be merged as well. Alas, it does not work, because this version has not been verified again. And when I restart the CI job, the pact is re-uploaded to the pact broker, but since the pact content does not change no webhook is triggered. The consumer version is never verified.
So, what am I doing wrong?
You need the new WIP pacts feature. It's currently under development, and could be available for beta testing in pact-js (and other languages that wrap the pact-ruby-standalone) within days if you wanted to try it out. If you're using pact-jvm, you'll have to wait a little longer, but we may be able to provide a workaround. I've written a blog post on WIP pacts, but haven't published it yet because the feature is not ready for public release. You can find the post here http://blog.pact.io/p/51906e22-ccce-486b-9993-c21794e557d4/ I'd love to get your feedback on it.
Hop on to slack.pact.io and # me if you'd like to chat further about this.

How to retry R testthat test on API error?

Some tests rely on some external services (e.g. APIs). Sometimes, these external services will go down. This can cause tests to fail and (worse) continuous integration tools to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt happening 5 minutes after the first?
Ideally you would write your tests in a way that they don't call the API or database.
Instead, you would mock the API endpoints according to the specification and also write tests for cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do so:
https://github.com/nealrichardson/httptest
If you are worried that your vendor might change the API, talk to them and extract details on their API change management.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw the line between API validation and testing of your own code.
You will need two separate processes:
Unit / acceptance tests that are executed against mocks of the API endpoints. These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking changes in the API anyway, so this is highly redundant. In exceptional cases it can be useful, but only with a very bad vendor.

Symfony Messenger - checking if queue is empty

We are migrating our architecture to take advantage of the Symfony Messenger component. What I am dealing with at the moment is adjusting the deploy process of our application.
The Symfony documentation suggests that the workers should be restarted on deploy to pick up the new code. Makes sense. My problem is that this does not address the issue when upgrading the deployed code. Consider hypothetical versions 1 and 2.
Version 1 works with and understands a certain set of messages.
Version 2 adds more message types and changes the names/structure/whatever of some of the message types defined in version 1.
During deploy, in order to be sure that all messages were processed and there are no incompatibilities when the new version is deployed, this is the process that makes intuitive sense to me:
Stop accepting new messages to the queue (put the site to a "maintenance mode")
Let the workers finish processing pending messages in the queue
Deploy new code
Restart workers
Start accepting new messages
The problem I am facing is that I can't see any way to check whether the queue is empty or not.
Is my deploy scenario correct? How the deploy usually done in applications using the Symfony messenger component (or any messaging queue, for that matter)? Is the only way to go ensuring backward compatibility for all the message types?
This is an interesting challenge.
Version 1 (new handlers for the same messages you sent out in the previous release)
For this you could use Middleware and Stamps to add a version header to the messages sent over a transport. Then on the consuming side your handler can watch for the version stamp and check whether it's responsible for this message or not. The upside of this approach is that you can change the handler logic without changing the message itself, just by having the new code add a new version to the same message types you sent out before.
This can easily be introduced to an existing application by having your existing handlers look for the stamp and if it's not there assume they are responsible and otherwise bail out. When a new version wants to introduce a new handler it will only work with whatever version you specify and ignore any messages without this header.
Version 2 (Modifying data structure)
One approach to this problem would be to only have backwards compatible changes in your messages and handlers between each release. So for example assume your message looks something like this:
{
"foo": 123
}
and you want to change it to something like this:
{
"bar": "123"
}
In that case you would first release an intermediate version, containing both the old and new field and after a while you can release the version where you remove the old logic. The intermediate version of the message might look like this:
{
"foo": 123,
"bar": "123",
}
You would then have a handler that checks for bar first and falls back to using foo and the old logic if bar is missing. This way you can make sure that both new and old messages are processed by your new application, and by adding logging you can easily see when the old code is no longer called, making it safe to remove the old property and logic in an upcoming release.
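Although the question is about Symfony/PHP, the fallback itself is language-agnostic. Here is a minimal sketch in Java using the field names from the example payloads above (the class and method names are made up for illustration):

```java
import java.util.Map;

public class CityMessageHandler {

    // Prefer the new "bar" field; fall back to the legacy "foo" field
    // so that messages in the old format are still processed.
    public static String resolveValue(Map<String, Object> payload) {
        Object bar = payload.get("bar");
        if (bar != null) {
            return (String) bar;            // new format: value is already a string
        }
        Object foo = payload.get("foo");    // legacy format: numeric value
        // In a real handler you would log here, so you can see when
        // the legacy path stops being exercised.
        return foo != null ? String.valueOf(foo) : null;
    }
}
```

Once the log shows the legacy branch is no longer hit in production, the foo handling can be deleted in the next release.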
The main drawback of this approach is that you will have to catch breaking changes in advance, which requires a thorough review and testing process. Luckily, failure transports can catch issues when your handler encounters problems, but if the message cannot be properly decoded those messages might be thrown out instantly, so be careful.
I don't think the Messenger component gives any help with working out the queue length - at least none I found so far.
So the answer depends on what type of transport are you using. For example, with the Doctrine transport you can just count the number of rows in the DB table etc.
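For example, with the Doctrine transport a rough check could look like this (the table and column names below are the Symfony defaults and may differ in your configuration):

```sql
-- Messages still waiting to be processed; delivered_at is set
-- while a worker is holding a message.
SELECT COUNT(*)
FROM messenger_messages
WHERE queue_name = 'default'
  AND delivered_at IS NULL;
```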
The problem with that approach is that it makes your code less portable/configurable - if your code expects to count rows in a DB table, it won't work with the Redis transport, or if the table name changes.
In our project we ended up with a queue counting service that looks into the Messenger configuration and decides how to count the items in the queue.
As for the rest of the question about the deployment, other answers here are good. I'll sum up what we learned when running a clustered Symfony application on AWS ECS with blue/green deployment:
You could treat your message handlers like you would do DB migrations: any two adjacent versions must work with the same schema - so any two message handler versions must be able to work with the same message format.
Turn the handlers off before running a deployment, deploy the new version, and turn the handlers on again. If you have multiple versions, you will need to do multiple deployments, one version at a time.
You should know before each deployment whether you can just roll out multiple versions at once because there are no breaking changes, or not.
If your environment autoscales, you also need to ensure the handlers are not started on any additional nodes that appear during the deployment and are still serving the older version of the application.
We use a boolean flag in Redis to let nodes work out whether the handlers should be started or not - that flag is set to "false" just before we halt our current handlers at the beginning of the deployment.
--
If there are any better ways to do this, I'm all ears.
Good luck!

Pact. How to test a REST GET with an automatically generated ID in the URL

I want to test a REST service that returns the detail of a given entity identified by a UUID, i.e. my consumer pact has an interaction requesting a GET like this:
/cities/123e4567-e89b-12d3-a456-426655440000
So I need this specific record to exist in the database for the pact verifier to find it. In other projects I've achieved this by executing an SQL INSERT in the state setup, but in this case I'd prefer to use the microservice's JPA utilities for accessing the DB, because the data model is quite complex and using these utilities would save me much effort and make the test much more maintainable.
The problem is that these utilities do not allow specifying the identifier when you create a new record (they assign an automatic ID). So after creating the entity (in the state setup) I'd like to tell the pact verifier to use the generated ID rather than the one specified by the consumer pact.
As far as I know, Pact matching techniques are not useful here because I need the microservice to receive this specific ID. Is there any way for the verifier to be aware of the correct ID to use in the call to the service?
You have two options here:
Option 1 - Find a way to use the UUID from the pact file
This option (in my opinion) would be the better one, because you are using well-known values for your verification. With JPA, I think you may be able to disable the auto-generation of the ID. And if you are using Hibernate as the JPA provider, it may not generate an ID if you have provided one (i.e. setting the ID on the entity to the one from the pact file before saving it). This is what I have done recently.
Using a generator (as mentioned by Beth) would be a good mechanism for this problem, but there is currently no way to tell a generator to use a specific value; they generate random ones on the fly.
Option 2 - Replace the ID in the URL
Depending on how you run the verification, you could use a request filter to change the UUID in the URL to the one which was created during the provider state callback. However, I feel this is a potentially bad thing to do, because you could change the request in a way that weakens the contract. You will not be verifying that your provider adheres to what the consumer specified.
If you choose this option, be careful to only change the UUID portion of the URL and nothing else.
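If you go this route, a small helper that swaps only a well-formed UUID segment keeps the rest of the path untouched. This is a sketch; the request-filter wiring itself depends on how you run the verification:

```java
import java.util.regex.Pattern;

public class UuidPathRewriter {

    // Matches a standard 8-4-4-4-12 hexadecimal UUID.
    private static final Pattern UUID_PATTERN = Pattern.compile(
            "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
            + "[0-9a-fA-F]{4}-[0-9a-fA-F]{12}");

    // Replaces only the UUID segment of the request path with the ID
    // generated during the provider state setup, leaving everything
    // else in the URL exactly as the consumer specified it.
    public static String replaceUuid(String path, String generatedId) {
        return UUID_PATTERN.matcher(path).replaceFirst(generatedId);
    }
}
```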
For information on request filters, have a look at Gradle - Modifying the requests before they are sent and JUnit - Modifying the requests before they are sent in the Pact-JVM readmes.
Unfortunately not. The provider side verifier takes this information from the pact file itself and so can't know how to send anything else.
The best option is to use provider states to manage the injection of the specific record prior to this test case (or to just have the correct record in there in the first place).
You would use the JPA libraries during the provider state setup to modify the UUID in the record to the one you're expecting.
If you are using pact-jvm on both the consumer and provider sides, I believe you may be able to use 'generators', but you'll need to look up the documentation on that as I haven't used them.

appropriate user-agent header value

I'm using HttpBuilder (a Groovy HTTP library built on top of Apache's HttpClient) to send requests to the Last.fm API. The docs for this API say you should set the user-agent header to "something appropriate" in order to reduce your chances of getting blocked.
Any idea what kind of values would be deemed appropriate?
The name of your application including a version number?
I work for Last.fm. "Appropriate" means something which will identify your app in a helpful way to us when we're looking at our logs. Examples of when we use this information:
investigating bugs or odd behaviour; for example if you've found an edge case we don't handle, or are accidentally causing unusual load on a system
investigating behaviour that we think is inappropriate; we might want to get in touch to help your application work better with our services
we might use this information to judge which API methods are used, how often, and by whom, in order to do capacity planning or to get general statistics on the API eco-system.
A helpful (appropriate) User-Agent:
tells us the name of your application (preferably something unique and easy to find on Google!)
tells us the specific version of your application
might also contain a URL at which to find out more, e.g. your application's homepage
Examples of unhelpful (inappropriate) User-Agents:
the same as any of the popular web browsers
the default user-agent for your HTTP Client library (e.g. curl/7.10.6 or PEAR HTTP_Request)
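Putting the helpful points together, a good value might look like this (the application name and URL are placeholders):

```
User-Agent: MySuperScrobbler/1.4.2 (+https://example.org/mysuperscrobbler)
```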
We're aware that it's not possible to change the User-Agent sent when your application is browser-based (e.g. Javascript or Flash) and don't expect you to do so. (That shouldn't be a problem in your case.)
If you're using a 3rd party Last.fm API library, such as one of the ones listed at http://www.last.fm/api/downloads, then we would prefer it if you added extra information to the User-Agent to identify your application, but left the library name and version in there as well. This is immensely useful when tracking down bugs (in either our service or in the client libraries).