Is there a way to show the description defined in the DSL at the admin endpoint - spring-cloud-contract

The description field is meant for a human to read. I want users of the Stub Runner Boot application to see those descriptions directly at the admin endpoint. Is there a way to do this?
Or are there other recommended ways to use Spring Cloud Contract more effectively and make the communication more accurate?

What you can do is store the contracts under src/main/resources/contracts, change the plugin to point to that directory, and then write an actuator endpoint that prints the contents of the contracts.
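A rough sketch of what that endpoint could look like, assuming Spring Boot 2 actuator annotations and contracts written as Groovy DSL files under src/main/resources/contracts; the endpoint id contracts and the classpath pattern are illustrative choices, not something Spring Cloud Contract ships with:

    import java.nio.charset.StandardCharsets;
    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
    import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
    import org.springframework.core.io.Resource;
    import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
    import org.springframework.stereotype.Component;
    import org.springframework.util.StreamUtils;

    // Exposes the raw contract sources at /actuator/contracts (the endpoint id is arbitrary).
    @Component
    @Endpoint(id = "contracts")
    public class ContractsEndpoint {

        private final PathMatchingResourcePatternResolver resolver =
                new PathMatchingResourcePatternResolver();

        @ReadOperation
        public Map<String, String> contracts() {
            Map<String, String> result = new LinkedHashMap<>();
            try {
                // Assumes the contracts live on the classpath under /contracts as Groovy DSL files.
                for (Resource contract : resolver.getResources("classpath*:contracts/**/*.groovy")) {
                    result.put(contract.getFilename(),
                            StreamUtils.copyToString(contract.getInputStream(), StandardCharsets.UTF_8));
                }
            } catch (Exception e) {
                result.put("error", e.getMessage());
            }
            return result;
        }
    }

You would still need to expose the endpoint (e.g. management.endpoints.web.exposure.include=contracts); since it returns the raw DSL text, the description fields are visible along with everything else.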

Related

Can we create consumer tests and generate a Pact file without access to the consumer code

I am a test automation engineer and new to Pact. My question is: I have a frontend and a backend. The frontend sends a request and gets a response from the backend. I would like to create consumer tests and generate a Pact file, but I don't have access to the client code. Could someone tell me whether we can create consumer tests using Java? Could you please also explain the reasoning?
Pact tests on the consumer side are a unit test of your API client, so it's not recommended to test from outside the code in a "black box" way. They really should be written against, and exercise, the consumer's own API client code.
See scope of a pact test and who would typically write Pact tests.
You can do a form of black-box contract testing using a feature in Pactflow called bi-directional contracts (currently in developer preview), but note that it's a commercial-only feature.
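To make the "unit test of your API client" point above concrete, here is a rough sketch of a Java consumer test with pact-jvm and JUnit 5; package names shift slightly between pact-jvm versions, and the provider/consumer names and the endpoint are made up. The raw HttpClient call stands in for your own API client code, which is what a real test would exercise:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import au.com.dius.pact.consumer.MockServer;
    import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
    import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
    import au.com.dius.pact.consumer.junit5.PactTestFor;
    import au.com.dius.pact.core.model.RequestResponsePact;
    import au.com.dius.pact.core.model.annotations.Pact;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    @ExtendWith(PactConsumerTestExt.class)
    @PactTestFor(providerName = "order-provider")
    class OrderApiPactTest {

        // The contract: what this consumer expects the provider to return.
        @Pact(consumer = "web-frontend")
        RequestResponsePact getOrder(PactDslWithProvider builder) {
            return builder
                    .given("order 42 exists")
                    .uponReceiving("a request for order 42")
                        .path("/orders/42")
                        .method("GET")
                    .willRespondWith()
                        .status(200)
                        .body("{\"id\": 42, \"status\": \"SHIPPED\"}")
                    .toPact();
        }

        @Test
        void fetchesAnOrder(MockServer mockServer) throws Exception {
            // In a real suite this call would go through your own API client class,
            // not a raw HttpClient - the Pact mock server stands in for the provider.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(mockServer.getUrl() + "/orders/42"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("SHIPPED"));
        }
    }

When the test passes, pact-jvm writes the Pact file (by default under target/pacts), which you can then publish to a broker or hand to the provider team.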

Verifying Confluent Kafka access rights programmatically

I'm trying to figure out if there is any way to programmatically check if provided credentials have read and write access to different Kafka topics using the Confluent Kafka lib for .NET.
What I want to do is basically a smoke test upon system startup, to verify that the given credentials are correct, e.g. when deploying to various environments with different settings.
Setting up an entire consumer or producer and then actually reading or writing data seems hacky and expensive.
I thought that maybe there could be something in e.g. the AdminClient that allows verifying this, but I don't see anything that hints in that direction.
Yes, there is an AdminClient implementation; please check out the following GitHub repository example for .NET:
https://github.com/confluentinc/confluent-kafka-dotnet/blob/master/examples/AdminClient/Program.cs
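The linked example is C#, but the idea is the same across the Kafka clients: build an admin client with the credentials you want to verify and make a cheap metadata call (the Confluent .NET package exposes an equivalent admin client). A rough Java sketch of that kind of startup smoke test, with placeholder broker, topic and SASL settings:

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.TimeUnit;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class KafkaCredentialsSmokeTest {

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder connection/credential settings - use your environment's values.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"<api-key>\" password=\"<api-secret>\";");

            try (AdminClient admin = AdminClient.create(props)) {
                // A cheap metadata call: fails fast if the credentials cannot authenticate
                // or are not allowed to describe the topic.
                TopicDescription description = admin
                        .describeTopics(Collections.singletonList("orders"))
                        .all()
                        .get(10, TimeUnit.SECONDS)
                        .get("orders");
                System.out.println("Credentials OK, topic has "
                        + description.partitions().size() + " partitions");
            }
        }
    }

Note that successfully describing a topic only proves the credentials can authenticate and see the topic; it does not prove produce/consume ACLs, so a small produce/consume round-trip is still the most reliable end-to-end check.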

Understanding the Pact Broker Workflow

Knowing full well that there are many types of workflows for different ways of integrating Pact, I'm trying to visualize what a common workflow looks like. I developed this swimlane diagram for the Pact Broker workflow.
How do we run a Provider verification on an older Provider build?
How does this change with tags?
When does the webhook get created back to the Provider?
What if different Providers have different base urls (i.e. build systems)?
How does a new Provider build alert the Consumers if the Provider's verification fails?
Am I thinking about this flow correctly?
I've tried to collect my understanding from Webhooks, Using pact where the consumer team is different from the provider team, and Publishing verification results to a Pact Broker. Assuming I am thinking about the problem the right way and did not completely miss some documentation, I'd gladly write up a recommended workflow document for the community.
Your swimlane diagram is a good picture of the workflow, with the caveat that once everything is all set up, it's rare to manually start provider builds from the broker.
The provider doesn't ever notify the consumers about verification failure (or success) in the process. If it did, then you could end up with circular builds.
I think about it like this:
The consumer tests create a contract (the Pact file).
This step also verifies that the consumer can work with a provider that fulfils that contract (using the mock provider).
Then, the consumer gives this Pact file to the broker (if configured to do so).
Now that there's a new pact, the broker (if configured) can trigger a provider build.
The provider's CI infrastructure builds the provider and runs the pact verification (a sketch of such a verification test follows below).
The provider's CI infrastructure (if configured) tells the broker about the verification result.
The broker and the provider's build system are the only bits that know about the verification result - it isn't passed back to the consumer at the moment.
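For the verification step above, here is a rough sketch of what the provider-side test in CI might look like with pact-jvm's JUnit 5 support; the provider name, broker coordinates and local port are placeholders, and annotation packages differ slightly between pact-jvm versions:

    import au.com.dius.pact.provider.junit5.HttpTestTarget;
    import au.com.dius.pact.provider.junit5.PactVerificationContext;
    import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
    import au.com.dius.pact.provider.junitsupport.Provider;
    import au.com.dius.pact.provider.junitsupport.loader.PactBroker;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.TestTemplate;
    import org.junit.jupiter.api.extension.ExtendWith;

    // Fetches all pacts for "order-provider" from the broker and replays them
    // against a locally running instance of the provider.
    @Provider("order-provider")
    @PactBroker(host = "pact-broker.example.com", port = "443", scheme = "https")
    class OrderProviderVerificationTest {

        @BeforeEach
        void setTarget(PactVerificationContext context) {
            // The provider is assumed to be running locally on port 8080 during the build.
            context.setTarget(new HttpTestTarget("localhost", 8080));
        }

        @TestTemplate
        @ExtendWith(PactVerificationInvocationContextProvider.class)
        void verifyPact(PactVerificationContext context) {
            context.verifyInteraction();
        }
    }

Publishing the result back to the broker is typically switched on with the pact.verifier.publishResults system property in the provider build.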
When the consumer's tests pass, the consumer can say, "I've written this communication contract and confirmed that I can hold up my side of it." Failure to verify the contract at the provider end doesn't change this statement.
However, if the verification succeeds, you may want to trigger a consumer deployment. As Beth Skurrie (one of the primary contributors to Pact) points out in the comments below:
Communicating the status of the verification back to the consumer is actually a highly important thing, as it tells the consumer whether or not they can be deployed safely. It is the missing part of the pact workflow at the moment, and I'm working away as fast as I can to rectify this.
In the meantime, since the verification status is information you might like to know about - especially if you're unable to see the provider's CI infrastructure - the Pact build badges are a lighter-weight way of checking the broker.

Watson Conversation - Storing and managing context for users in the application

We are using Watson Conversation service for ChatBot functionality. We want to configure a standard sequence of communication with users using Dialog and intents and entities.
We are writing the application in Java to communicate with the Conversation service via its RESTful API.
I understand we have to maintain the context and pass it between the application and Conversation until the conversation ends.
In order to achieve this, I understand we need to store and manage the context for each user in our application.
Could anyone please clarify whether my understanding is correct? Also, is Java a good fit for this functionality?
Thanks
Each conversation has its own conversation_id and its own context in the JSON sent from the service, so you don't have to store each context in your application. You could, but it is not necessary.
The usual way to use this is: when you get an answer from the Conversation service, you take the context object from the response, update it if needed, and send it back with the next request. On the next turn, the service will send the context inside the JSON again. As long as you use the same conversation_id and echo the context back each turn, you can send and receive the context without persisting it yourself.
There are a number of SDKs for different languages that make this easier for you:
https://github.com/watson-developer-cloud
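With the Java SDK, the echo-the-context-back loop might look roughly like this; the sketch uses the older com.ibm.watson.developer_cloud Conversation v1 classes with placeholder credentials, and exact class names depend on the SDK version (the service has since been renamed Watson Assistant):

    import java.util.Map;

    import com.ibm.watson.developer_cloud.conversation.v1.ConversationService;
    import com.ibm.watson.developer_cloud.conversation.v1.model.MessageRequest;
    import com.ibm.watson.developer_cloud.conversation.v1.model.MessageResponse;

    public class ConversationLoop {

        public static void main(String[] args) {
            ConversationService service = new ConversationService("2017-05-26");
            service.setUsernameAndPassword("<username>", "<password>");
            String workspaceId = "<workspace-id>";

            // First turn: no context yet, the service creates one (including conversation_id).
            MessageResponse first = service.message(workspaceId,
                    new MessageRequest.Builder().inputText("Hello").build()).execute();
            Map<String, Object> context = first.getContext();

            // Next turn: echo the context back so the dialog continues where it left off.
            MessageResponse second = service.message(workspaceId,
                    new MessageRequest.Builder().inputText("What can you do?")
                            .context(context).build()).execute();

            // The response again carries the (updated) context to send on the following turn.
            System.out.println(second.getContext());
        }
    }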

Publish Subscribe Content Notifications

I came across this page on the Alfresco wiki discussing publish/subscribe notifications within Alfresco and was wondering whether there has been any progress on it, or whether someone has created an add-on:
http://wiki.alfresco.com/wiki/Publish_Subscribe_Content_Notifications
The only types of notification I've read about so far on the wiki or forums are email and RSS feeds. The CMIS specification does not cover this, and the Alfresco web services do not include any such methods.
We have several web applications that need to download content once a document has been uploaded and transformed in Alfresco. I could develop an action to push the documents to the appropriate app, but that would require me to know every endpoint. At this point there are only 3 applications, but there are requirements to add more in the future. Having a publisher/subscriber model would make the solution more scalable and easier to maintain.
What if you wrote a custom action that adds a message to a queue? You could have the queue/topic name configurable so that when someone configures a rule on a folder, they can specify which queue to put the message on. Your apps can then subscribe to the queue and act appropriately.
You could also do something similar as a step in a workflow.
Maybe the message would be something simple like the nodeRef or the CMIS object ID.
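A rough sketch of such an action executer, assuming a Spring JmsTemplate (e.g. backed by ActiveMQ) is injected through the module's Spring context; the action and parameter names are made up for illustration:

    import java.util.List;

    import org.alfresco.repo.action.ParameterDefinitionImpl;
    import org.alfresco.repo.action.executer.ActionExecuterAbstractBase;
    import org.alfresco.service.cmr.action.Action;
    import org.alfresco.service.cmr.action.ParameterDefinition;
    import org.alfresco.service.cmr.dictionary.DataTypeDefinition;
    import org.alfresco.service.cmr.repository.NodeRef;
    import org.springframework.jms.core.JmsTemplate;

    // Custom "publish-to-queue" action: when run by a folder rule, it drops the nodeRef
    // onto a configurable JMS queue so downstream apps can subscribe instead of being called directly.
    public class PublishToQueueActionExecuter extends ActionExecuterAbstractBase {

        public static final String PARAM_QUEUE = "queue";

        private JmsTemplate jmsTemplate; // injected via Spring in the module context

        public void setJmsTemplate(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        @Override
        protected void executeImpl(Action action, NodeRef actionedUponNodeRef) {
            String queue = (String) action.getParameterValue(PARAM_QUEUE);
            // Keep the message simple: just the nodeRef (or the CMIS object ID).
            jmsTemplate.convertAndSend(queue, actionedUponNodeRef.toString());
        }

        @Override
        protected void addParameterDefinitions(List<ParameterDefinition> paramList) {
            // Lets the rule author pick which queue/topic to publish to.
            paramList.add(new ParameterDefinitionImpl(PARAM_QUEUE,
                    DataTypeDefinition.TEXT, true, getParamDisplayLabel(PARAM_QUEUE)));
        }
    }

The action still needs to be registered as a Spring bean with parent="action-executer" so it can be selected when configuring the folder rule.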
