Verifying Confluent Kafka access rights programmatically - confluent-kafka-dotnet

I'm trying to figure out if there is any way to programmatically check if provided credentials have read and write access to different Kafka topics using the Confluent Kafka lib for .NET.
What I want to do is basically a smoke test upon system startup, to verify that the given credentials are correct, e.g. when deploying to various environments with different settings.
Setting up an entire consumer or producer and then actually reading or writing data seems hacky and expensive.
I thought that maybe there could be something in e.g. the AdminClient that allows verifying this, but I don't see anything that hints in that direction.

Yes, there is an AdminClient implementation. Check out the following example from the confluent-kafka-dotnet GitHub repository:
https://github.com/confluentinc/confluent-kafka-dotnet/blob/master/examples/AdminClient/Program.cs
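For what it's worth, a minimal startup smoke test along those lines might look like the sketch below; the broker address, credentials, and topic name are placeholders. Note that GetMetadata only proves the credentials can authenticate and describe the topic, not that they hold read/write ACLs; for that you would still need a lightweight produce/consume check, and newer client versions also appear to expose ACL admin operations such as DescribeAclsAsync.

```csharp
using System;
using Confluent.Kafka;

// Minimal startup smoke test (sketch). Broker address, credentials and
// topic name are placeholders, not values from the question.
var config = new AdminClientConfig
{
    BootstrapServers = "broker:9092",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "user",
    SaslPassword = "password"
};

using var admin = new AdminClientBuilder(config).Build();
try
{
    // GetMetadata authenticates against the cluster, so bad credentials
    // surface here as a KafkaException before any produce/consume happens.
    var metadata = admin.GetMetadata("my-topic", TimeSpan.FromSeconds(10));
    var topic = metadata.Topics[0];
    if (topic.Error.IsError)
        Console.WriteLine($"Topic check failed: {topic.Error.Reason}");
    else
        Console.WriteLine($"Topic visible, partitions: {topic.Partitions.Count}");
}
catch (KafkaException ex)
{
    Console.WriteLine($"Kafka smoke test failed: {ex.Error.Reason}");
}
```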

Related

Pact flow for Event Driven Applications

Although Pact supports testing of messages, I find that the recommended flow in the "Pact Nirvana" guide doesn't quite match the flow that I understand an Event Driven Application needs.
Let's say we have an Order management service and a Shipping management service.
The Shipping service emits ShippingPreparedEvents that are received by the Order service.
If we deleted a field inside the ShippingPreparedEvent, I'd expect first to make a change to the Order service so that it stops reading the old field. Deploy it. And then make the change in the Shipping service and deploy it.
That way, there wouldn't be any downtime on either service.
However, I believe Pact would expect to deploy the Shipping service first (it's the provider of the event) so that the contract can be verified before deploying the consumer. In this case, deploying the provider first will break my consumer.
Can this situation be avoided somehow? Am I missing anything?
Just to provide more context, this link shows that different kinds of changes require a different order of deployment: https://docs.confluent.io/current/schema-registry/avro.html#summary
I won't be using Kafka nor Avro, but I believe my flow would be similar.
Thanks a lot.
If we deleted a field inside the ShippingPreparedEvent, I'd expect first to make a change to the Order service so that it stops reading the old field. Deploy it. And then make the change in the Shipping service and deploy it. That way, there wouldn't be any downtime on the services.
I agree. What specifically in the Pact Nirvana guide gives you the impression this isn't the way to go? Pact (and the Pact Broker) don't actually care about the order of deployments.
In your case, removing the field would cause a can-i-deploy check to fail, because it would break the Order Management Service. The only approach is the one you describe: remove the field usage from the consumer, publish a new version of that contract, and deploy that consumer to production first.
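As an illustration, a can-i-deploy check with the Pact Broker CLI looks roughly like this; the pacticipant name, version, target tag, and broker URL are made up for the example:

```sh
pact-broker can-i-deploy \
  --pacticipant OrderService --version 1.2.3 \
  --to production \
  --broker-base-url https://your-broker.example.com
```

If the latest verified contract for that version would be broken by the deployment, the command exits non-zero and the pipeline can stop there.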

Access permissions for the Spring REST API

I'm facing the problem of differentiating the access rights of different user groups to the same objects in the database. I use Spring Data JPA, Spring REST and Spring Security. The front end will interact with the REST API and render everything on the client side. I need to restrict access to REST API methods for different user groups. Until now, I have been putting @PreAuthorize annotations on repositories and their methods. The problems started at the stage of writing tests; I searched for solutions and came across an interesting answer on Stack Overflow: https://stackoverflow.com/a/21577081/13226066. It says that repositories are not the best place to apply authorization, and that services should be used instead. Please point me to where I can read about such an architecture, preferably with examples. Otherwise I feel I'm shooting myself in the foot with my approach.
I think there should be some kind of service layer between the repository interfaces and the controllers that Spring Data REST automatically generates, and that layer is where the access rules should live.

How to enable instance termination protection for OpenStack using terraform?

I'm trying to enable instance termination protection using Terraform, but I did not see any argument for OpenStack like the 'disable_api_termination' argument that exists for AWS.
I think you need a different mechanism to manage that. Terraform doesn't have an option to disable termination for OpenStack the way it does for AWS. Those options are modeled on the provider APIs, and I'm guessing that OpenStack just doesn't expose anything similar to this behavior.
To prevent some confusion, I want to mention that Terraform's lifecycle block, documented here, won't be of much help in this regard:
https://www.terraform.io/docs/configuration/resources.html#prevent_destroy
It will prevent you from destroying the resource with 'terraform destroy' and the like, but it won't do much in terms of protection on the OpenStack side itself.
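For clarity, this is roughly what that lifecycle setting looks like on an OpenStack instance; the resource name and argument values are illustrative:

```hcl
resource "openstack_compute_instance_v2" "app" {
  name      = "app-server"
  image_id  = "your-image-id"
  flavor_id = "your-flavor-id"

  lifecycle {
    # Makes `terraform destroy` (and any plan that would destroy this
    # resource) fail, but does nothing on the OpenStack API side.
    prevent_destroy = true
  }
}
```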
I would rather solve this problem at the architectural level. Think about how you call the OpenStack API and how you manage your services. Around those steps you can probably place an additional layer or step that manages the lifecycle and keeps mistakes to a minimum. Your process is what can protect you better than any tool.

Understanding the Pact Broker Workflow

Knowing full well there are many types of workflows for different ways of integrating Pact, I'm trying to visualize what a common workflow looks like. I developed this swimlane diagram for the Pact Broker workflow.
How do we run a Provider verification on an older Provider build?
How does this change with tags?
When does the webhook get created back to the Provider?
What if different Providers have different base urls (i.e. build systems)?
How does a new Provider build alert about the Consumers if the Provider fails?
Am I thinking about this flow correctly?
I've tried to collect my understanding from Webhooks, Using pact where the consumer team is different from the provider team, and Publishing verification results to a Pact Broker. Assuming I am thinking about the problem the right way and did not completely miss some documentation, I'd gladly write up an advised workflow document for the community.
Your swimlane diagram is a good picture of the workflow, with the caveat that once everything is all set up, it's rare to manually start provider builds from the broker.
The provider doesn't ever notify the consumers about verification failure (or success) in the process. If it did, then you could end up with circular builds.
I think about it like this:
The consumer tests create a contract (the Pact file).
This step also verifies that the consumer can work with a provider that fulfils that contract (using the mock provider).
Then, the consumer gives this Pact file to the broker (if configured to do so).
Now that there's a new pact, the broker (if configured) can trigger a provider build.
The provider's CI infrastructure builds the provider, and runs the pact verification
The provider's CI infrastructure (if configured) tells the broker about the verification result.
The broker and the provider's build system are the only bits that know about the verification result - it isn't passed back to the consumer at the moment.
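To make the "give this Pact file to the broker" step concrete, publishing a pact is just an HTTP PUT to the broker. The broker URL, pacticipant names, pact filename, and version below are invented for the example, and the Pact client libraries and CLI normally wrap this call for you:

```sh
curl -X PUT -H "Content-Type: application/json" \
  -d @pacts/myconsumer-myprovider.json \
  https://your-broker.example.com/pacts/provider/MyProvider/consumer/MyConsumer/version/1.0.0
```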
A consumer that is passing the tests means the consumer can say "I've written this communication contract and confirmed that I can hold up my side of it". Failure to verify the contract at the provider end doesn't change this statement.
However, if the verification succeeds, you may want to trigger a consumer deployment. As Beth Skurrie (one of the primary contributors to Pact) points out in the comments below:
Communicating the status of the verification back to the consumer is actually a highly important thing, as it tells the consumer whether or not they can be deployed safely. It is the missing part of the pact workflow at the moment, and I'm working away as fast as I can to rectify this.
Currently, since the verification status is information you might want to know, especially if you're unable to see the provider's CI infrastructure, you might like to check out the Pact build badges, which are a lighter-weight way of querying the broker.

Looking for guidance on WF4

We have a rather large document routing framework that's currently implemented in SharePoint (with a large set of cumbersome SP workflows), and it's running into the edge of what SP can do easily. It's slated for a rewrite in .NET.
I've spent the past week or so reading and watching WF4 discussions and demonstrations to get an idea of WF4, because I think it's the right solution. I'm having difficulty envisioning how the system will be configured, though, so I need guidance on a few points from people with experience:
Let's say I have an approval that has to be made on a document. When the workflow starts, it'll decide who should approve, and send that person an email notification. Inside the notification, the user would have an option to load an ASP.NET page to approve or reject. The workflow would then have to be resumed from the send-email step. If I'm planning on running this as a WCF WF Service, how do I get back into the correct instance of the paused service (considering I've configured AppFabric and persistence)? I somewhat understand the idea of a correlation handle, but don't think it's meant for this case.
Logging and auditing will be key for this system. I see that AppFabric makes event logs of this data, but I haven't cracked open the underlying database. Is it simple to use for reporting, or should I create custom logging activities to put around my actions? From experience, which would you suggest?
Thanks for any guidance you can provide. I'm happy to give further examples if necessary.
To send messages to a specific workflow instance, you need to set up message correlation between your different Receive activities. In order to do that, you need some unique value as part of your message data.
The AppFabric logging works well, but if you want to create a custom logging solution you don't need to add activities to your workflow. Instead, you create a custom TrackingParticipant to do the work for you. How you store the data is then up to you.
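A rough sketch of that approach: the class name and the console output below are my own, while TrackingParticipant and ActivityStateRecord come from System.Activities.Tracking.

```csharp
using System;
using System.Activities.Tracking;

// Sketch of a custom tracking participant for auditing; how and where
// you store the records (database, log file, ETW...) is entirely up to you.
public class AuditTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        // ActivityStateRecord tells you which activity entered which state.
        var activityRecord = record as ActivityStateRecord;
        if (activityRecord != null)
        {
            Console.WriteLine("{0:o} {1}: {2}",
                activityRecord.EventTime,
                activityRecord.Activity.Name,
                activityRecord.State);
        }
    }
}
```

You would then register it on the host, e.g. with host.WorkflowExtensions.Add(new AuditTrackingParticipant()) on your WorkflowServiceHost.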
Your scenario is very similar to the one I used for the Introduction to Workflow Services hands-on lab in the Visual Studio 2010 Training Kit. I suggest you take a look at that lab or the Windows Server AppFabric / Workflow Services Demo - Contoso HR sample code.
