Microservices Contract Testing without a real provider API call - automated-tests

In contract testing for microservices, we first write a mock provider and create a JSON contract; later, this contract is used to call the real provider API and verify the interactions.
Can we mock the real provider call as well, alongside the JSON contract?

If you're talking about Pact, then no, it needs to be able to test against a real provider (usually one running locally for dev and on CI).
We're building a more generalised contract testing approach at Pactflow that would allow a contract to be generated on the consumer side by a Pact test, record/replay or code generation (for example), and for it to be compared against an OAS or other provider contract.
The idea of JSON Schema comes up a bit, but the hard challenge there is mapping requests to the correct schema. By the time you do all of that, you have essentially started to rebuild Pact, but with JSON Schema.
And JSON Schema has its own set of challenges.
If you want to know more, join the community slack channel and chat with the team.
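For concreteness, here is a minimal sketch of a consumer-side Pact test using pact-jvm with JUnit 5. The provider name, consumer name, endpoint and client class are hypothetical; the point is that the consumer test runs against Pact's mock provider and writes the JSON pact file, which is then verified later against the real provider.

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical consumer/provider names used for illustration only.
@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "user-service")
class UserServiceConsumerPactTest {

    @Pact(provider = "user-service", consumer = "web-frontend")
    RequestResponsePact userExists(PactDslWithProvider builder) {
        return builder
            .given("user 42 exists")
            .uponReceiving("a request for user 42")
                .path("/users/42")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body(new PactDslJsonBody()
                    .integerType("id", 42)
                    .stringType("name", "Jane"))
            .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "userExists")
    void fetchesUser(MockServer mockServer) {
        // The consumer's real API client (a hypothetical class) is pointed at the mock provider.
        UserClient client = new UserClient(mockServer.getUrl());
        assertEquals("Jane", client.getUser(42).getName());
        // When the test passes, pact-jvm writes the JSON pact file (by default under target/pacts),
        // and that file is what later gets verified against the real provider.
    }
}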

Related

What is a Pact provider: is it an application or an API endpoint?

We have an application that hosts 10-20 APIs for which we plan to implement contract testing. Would it be better to have a pact per API against a consumer application, or one pact for the application that acts as the provider, with each API as a separate interaction?
A pact (contract) is usually between two applications. If the provider application provides multiple endpoints or resources (what I think you're referring to as "APIs") then they are modelled as one or more interactions (test cases).
...or one pact for the application that acts as the provider, with each API as a separate interaction?
This is what you want, I think.
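As a sketch of what that looks like in a pact-jvm consumer test (a fragment using the same imports as the earlier consumer test sketch; names and paths are hypothetical), the single pact between the two applications simply contains one interaction per endpoint:

// One pact between "web-frontend" and "user-service" (hypothetical names),
// with each endpoint of the provider modelled as its own interaction.
@Pact(provider = "user-service", consumer = "web-frontend")
RequestResponsePact interactions(PactDslWithProvider builder) {
    return builder
        .uponReceiving("a request to list users")
            .path("/users")
            .method("GET")
        .willRespondWith()
            .status(200)
        .uponReceiving("a request to fetch a single order")
            .path("/orders/7")
            .method("GET")
        .willRespondWith()
            .status(200)
        .toPact();
}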
There are some popular CDC (Consumer-Driven Contracts) testing solutions: Spring provides Spring Cloud Contract, which is tightly integrated with Spring, and Pact is another great solution for CDC.
In my microservice examples, I provide examples using both Pact and Spring Cloud Contract.
The API consumer (which consumes APIs) and provider (which provides APIs) sides maintain a contract to ensure API consistency.
Ideally the contract for each API should be per consumer (if you are following the BFF pattern; if not, we can ignore differences between consumers) and per version, e.g. user client v1.0 and v2.0 could have different contracts.

What are the best practices for automated testing of "Sign in with Apple" (REST API)?

I'd like to create a set of automated tests that could run in a CI/CD pipeline. I'm struggling to understand how to verify the Generate and Validate Tokens portion of the "Sign in with Apple" flow (REST API implementation):
How can I verify that I'm properly handling the exchange of an authorization code for a refresh token, considering that the authorization code is single-use, only valid for five minutes, and obtained by authenticating, which in my case requires 2FA?
END TO END TESTS
A common starting point is to perform UI tests that verify logins in a basic way, using technologies such as Selenium.
These will automatically sign in test user accounts, performing real logins and exchanging the authorization code for tokens.
After login the UI can proceed to test the application logic, such as calling real APIs using real tokens.
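As a rough sketch of that kind of UI login test in Java with Selenium (the application URL, element locators and test credentials here are hypothetical, and a real "Sign in with Apple" flow may add redirects or 2FA prompts that need separate handling):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

public class SignInWithAppleUiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical application URL and locators.
            driver.get("https://myapp.example.com/login");
            driver.findElement(By.id("sign-in-with-apple")).click();

            // Fill in the test account's credentials on the identity provider's page.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(15));
            wait.until(ExpectedConditions.visibilityOfElementLocated(By.name("accountName")))
                .sendKeys("test-user@example.com");
            driver.findElement(By.name("password")).sendKeys("secret-test-password");
            driver.findElement(By.id("sign-in")).click();

            // After the redirect back, assert that the app received tokens and is logged in.
            wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("logged-in-user")));
        } finally {
            driver.quit();
        }
    }
}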
COMPONENTS UNDER TEST
Sometimes, though, the OAuth-related infrastructure gets in the way, e.g. if it is not possible to automate 2FA actions such as typing in a one-time password.
When working with this type of technology, it is possible to mock the identity system. One option is to pretend Apple authentication has completed, while issuing your own mock tokens with a JWT library, with the same properties as the Apple ones.
A key behaviour of course is to ensure that zero code is changed in UIs or APIs, so that they continue to run the same production logic, with no awareness that they are using mock tokens.
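One way to do that (a sketch only, using the jjwt library; the claim values and key handling are assumptions, not Apple's actual implementation) is to mint an ID token with the same claim shape as Apple's, signed with your own key, and have the API validate tokens against a mock JWKS endpoint that publishes the matching public key:

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Date;

public class MockAppleTokens {
    public static String mockIdToken() throws Exception {
        // Sign with the same algorithm your API expects from Apple's JWKS (RSA/RS256 here).
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair keyPair = gen.generateKeyPair();

        // Claim values below are hypothetical; mirror whatever your API expects from Apple.
        return Jwts.builder()
                .setIssuer("https://appleid.apple.com")          // same issuer claim as a real Apple ID token
                .setAudience("com.example.myapp")                // your client_id (hypothetical)
                .setSubject("001234.abcdef1234567890.5678")      // Apple-style opaque user identifier
                .claim("email", "test-user@example.com")
                .setIssuedAt(new Date())
                .setExpiration(new Date(System.currentTimeMillis() + 10 * 60 * 1000))
                .signWith(keyPair.getPrivate(), SignatureAlgorithm.RS256)
                .compact();
        // The API under test must be pointed at a JWKS endpoint that serves this key pair's public key,
        // so that it validates the mock token exactly as it would validate a real one.
    }
}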
HTTP MOCK ENDPOINTS
The open-source WireMock tool can be a useful addition to your toolbox in this case, as in these API-focused tests of mine. To use this type of tool, an automated test stage of the pipeline would need to repoint UIs and/or APIs to a URL that you are pretending represents Apple's identity system, so some deployment work would be needed.
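A minimal sketch of such a stub with WireMock's Java API (the path mirrors Apple's token endpoint, but the port, token values and wiring into your pipeline are assumptions):

import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.post;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class MockAppleIdentityServer {
    public static WireMockServer start() {
        WireMockServer server = new WireMockServer(8089); // hypothetical port
        server.start();

        // Stub the code-for-tokens exchange that Apple's /auth/token endpoint would perform.
        server.stubFor(post(urlEqualTo("/auth/token"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{"
                                + "\"access_token\":\"mock-access-token\","
                                + "\"token_type\":\"Bearer\","
                                + "\"expires_in\":3600,"
                                + "\"refresh_token\":\"mock-refresh-token\","
                                + "\"id_token\":\"<mock JWT from the previous snippet>\""
                                + "}")));

        // The UI/API under test is then repointed at http://localhost:8089 instead of
        // Apple's identity system for this test stage.
        return server;
    }
}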
DESIGNING THE INFRASTRUCTURE
As always of course, it depends what you want to focus on testing, and which areas you are happy to mock. I recommend thinking this through end to end, thinking about both UIs and APIs. The important thing is to avoid situations where you are blocked and unable to test.

Can we create consumer tests and generate a pact file without access to the consumer code?

I am a test automation engineer and new to Pact. My question is: I have a frontend and a backend; the frontend sends a request and gets a response from the backend. I would like to create consumer tests and generate a pact file, but I don't have access to the client code. Could someone tell me if we can create consumer tests using Java? Could you please also provide the reasoning?
Pact tests on the consumer side are a unit test of your API client, so it's not recommended to test from the outside of the code in a "black box" way. They really should be written against the consumer's own API client code.
See scope of a pact test and who would typically write Pact tests.
You can do a form of black-box contract test using a feature in Pactflow called bi-directional contracts (currently in developer preview), but note it's a commercial-only feature.

How to add an Azure custom policy for Azure Data Factory to only use Azure Key Vault during linked service creation?

How can I add an Azure custom policy so that Azure Data Factory only uses Azure Key Vault for fetching data store credentials during linked service creation, instead of credentials being put directly in the ADF linked service? Please suggest ARM or PowerShell methods for the policy implementation.
As of yesterday, the Data Factory Azure Policy integration is available which means you can now find some built-in policies that can be assigned to ADF.
One of those built-in policies is exactly what you're asking for. You can find more information in the Azure documentation.
Edit: Based on your comment, I'm editing this answer with the info you want. When it comes to custom policies, it's pretty much up to you to come up with them and create what fits your needs. In your particular case, I've created one policy that does what you want; please see here.
This policy will audit your data factory's linked services and check whether they're using a self-hosted integration runtime. Currently, that check is only done for a few linked service types (if you look at the policy, you can see five of them), which means that if you want to check more types of linked services, you'll need to add them to the list of allowed values and select them when assigning the policy definition.
Bear in mind that for some linked service types, such as Key Vault, that check won't make sense, since that service can't use a self-hosted IR.

Understanding the Pact Broker Workflow

Knowing full well there are many types of workflows for different ways of integrating Pact, I'm trying to visualize what a common workflow looks like. I developed this swimlane diagram for the Pact Broker workflow.
How do we run a Provider verification on an older Provider build?
How does this change with tags?
When does the webhook get created back to the Provider?
What if different Providers have different base urls (i.e. build systems)?
How does a new Provider build alert the Consumers if the Provider verification fails?
Am I thinking about this flow correctly?
I've tried to collect my understanding from Webhooks, Using Pact where the consumer team is different from the provider team, and Publishing verification results to a Pact Broker. Assuming I am thinking about the problem the right way and did not completely miss some documentation, I'd gladly write up a recommended workflow document for the community.
Your swimlane diagram is a good picture of the workflow, with the caveat that once everything is all set up, it's rare to manually start provider builds from the broker.
The provider doesn't ever notify the consumers about verification failure (or success) in the process. If it did, then you could end up with circular builds.
I think about it like this:
The consumer tests create a contract (the Pact file).
This step also verifies that the consumer can work with a provider that fulfils that contract (using the mock provider).
Then, the consumer gives this Pact file to the broker (if configured to do so).
Now that there's a new pact, the broker (if configured) can trigger a provider build.
The provider's CI infrastructure builds the provider, and runs the pact verification
The provider's CI infrastructure (if configured) tells the broker about the verification result.
The broker and the provider's build system are the only bits that know about the verification result - it isn't passed back to the consumer at the moment.
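As a sketch of steps 4-6 in pact-jvm terms (the broker URL, provider name and port are hypothetical, and the exact annotations vary by Pact version), the provider's CI build runs a verification test that pulls pacts from the broker and, when configured, publishes the results back:

import au.com.dius.pact.provider.junit5.HttpTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.loader.PactBroker;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;

// Hypothetical provider name and broker URL.
@Provider("user-service")
@PactBroker(url = "https://broker.example.com")
class UserServiceVerificationTest {

    @BeforeEach
    void setTarget(PactVerificationContext context) {
        // The real provider is started locally by the CI build and verified here.
        context.setTarget(new HttpTestTarget("localhost", 8080));
    }

    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void verifyPact(PactVerificationContext context) {
        context.verifyInteraction();
    }
}

// The CI build can then publish the results back to the broker by setting, for example:
//   -Dpact.verifier.publishResults=true -Dpact.provider.version=$GIT_SHA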
A consumer that is passing the tests means the consumer can say "I've written this communication contract and confirmed that I can hold up my side of it". Failure to verify the contract at the provider end doesn't change this statement.
However, if the verification succeeds, you may want to trigger a consumer deployment. As Beth Skurrie (one of the primary contributors to Pact) points out in the comments below:
Communicating the status of the verification back to the consumer is actually a highly important thing, as it tells the consumer whether or not they can be deployed safely. It is the missing part of the pact workflow at the moment, and I'm working away as fast as I can to rectify this.
Currently, since the verification status is information you might like to know about - especially if you're unable to see the provider's CI infrastructure - you might like to check out the pact build badges, which are a lighter way of checking the broker.

Resources