How to test a REST GET with an automatically generated ID in the URL - pact

I want to test a REST service that returns the detail of a given entity identified by a UUID, i.e. my consumer pact has an interaction requesting a GET like this:
/cities/123e4567-e89b-12d3-a456-426655440000
So I need this specific record to exist in the database for the pact verifier to find it. In other projects I've achieved this by executing an SQL INSERT in the state setup, but in this case I'd prefer to use the microservice's JPA utilities for accessing the DB, because the data model is quite complex and using these utilities would save me a lot of effort and make the test much more maintainable.
The problem is that these utilities do not allow specifying the identifier when you create a new record (they assign an automatic ID). So after creating the entity (in the state setup) I'd like to tell the pact verifier to use the generated ID rather than the one specified by the consumer pact.
As far as I know, Pact matching techniques are not useful here because I need the microservice to receive this specific ID. Is there any way for the verifier to be aware of the correct ID to use in the call to the service?

You have two options here:
Option 1 - Find a way to use the UUID from the pact file
This option (in my opinion) would be the better one, because you are using well-known values for your verification. With JPA, I think you may be able to disable the auto-generation of the ID. And if you are using Hibernate as the JPA provider, it may not generate an ID if you have provided one (i.e. setting the ID on the entity to the one from the pact file before saving it). This is what I have done recently.
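As a rough sketch of that first idea, here is a hypothetical `City` entity whose ID mapping has no `@GeneratedValue`, so whatever UUID the provider state setup assigns before saving is the one that gets persisted (the entity and its fields are assumptions, not the poster's actual model):

```java
import java.util.UUID;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity: the ID is assigned by the caller rather than auto-generated,
// so a provider state setup can persist it with the exact UUID from the pact file.
@Entity
public class City {

    @Id
    private UUID id;

    private String name;

    public UUID getId() { return id; }
    public void setId(UUID id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

The trade-off is that production code then has to assign the UUID itself, or you keep a generator and rely on the JPA provider honouring a pre-set value, as described above.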
Using a generator (as mentioned by Beth) would be a good mechanism for this problem, but there is currently no way to tell a generator to use a specific value; they generate random ones on the fly.
Option 2 - Replace the ID in the URL
Depending on how you run the verification, you could use a request filter to change the UUID in the URL to the one which was created during the provider state callback. However, I feel this is a potentially bad thing to do, because you could change the request in a way that weakens the contract. You will not be verifying that your provider adheres to what the consumer specified.
If you choose this option, be careful to only change the UUID portion of the URL and nothing else.
For information on request filters, have a look at Gradle - Modifying the requests before they are sent and JUnit - Modifying the requests before they are sent in the Pact-JVM readmes.
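For illustration, with the JUnit provider support a request filter might look roughly like the sketch below. The `@TargetRequestFilter` hook is the mechanism those readmes describe, but the exact packages and request type vary between Pact-JVM versions, and `generatedCityId` is a hypothetical field populated by the provider state callback:

```java
import java.net.URI;
import java.util.UUID;
import org.apache.http.HttpRequest;
import org.apache.http.client.methods.HttpRequestBase;
import au.com.dius.pact.provider.junit.target.TargetRequestFilter; // package differs across Pact-JVM versions

// Part of a hypothetical Pact provider verification test class.
public class CityProviderPactTest {

    // Hypothetical: set by the provider state callback after the entity has been saved.
    private static UUID generatedCityId;

    @TargetRequestFilter
    public void useGeneratedCityId(HttpRequest request) {
        if (request instanceof HttpRequestBase) {
            HttpRequestBase base = (HttpRequestBase) request;
            // Replace only the trailing UUID segment of the path; leave everything else alone.
            String rewritten = base.getURI().toString()
                    .replaceAll("[0-9a-fA-F-]{36}$", generatedCityId.toString());
            base.setURI(URI.create(rewritten));
        }
    }
}
```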

Unfortunately not. The provider side verifier takes this information from the pact file itself and so can't know how to send anything else.
The best option is to use provider states to manage the injection of the specific record prior to this test case (or to just have the correct record in there in the first place).
You use the JPA libraries during the provider state setup to set the UUID in the record to what you're expecting.
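A hedged sketch of what that provider state callback could look like with Pact-JVM's JUnit support follows (the `@State` annotation package, the `City` entity, and `cityRepository` are assumptions; in real Pact-JVM JUnit setups the `@State` methods usually live on the verification test class itself):

```java
import java.util.UUID;
import au.com.dius.pact.provider.junit.State; // package differs across Pact-JVM versions

public class CityPactStates {

    // Hypothetical JPA repository available to the verification test.
    private CityRepository cityRepository;

    // Runs before any interaction whose pact declares this provider state.
    @State("a city with UUID 123e4567-e89b-12d3-a456-426655440000 exists")
    public void cityWithKnownUuidExists() {
        City city = new City();
        // Persist the record with the exact UUID the consumer pact will request.
        city.setId(UUID.fromString("123e4567-e89b-12d3-a456-426655440000"));
        city.setName("Springfield");
        cityRepository.save(city);
    }
}
```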

If you are using pact-jvm on both the consumer and provider sides, I believe you may be able to use 'generators', but you'll need to look up the documentation on that as I haven't used them.

Related

Creating a plugin in the provider client

I am looking to create a plugin on yagna where providers will only accept tasks from whitelisted requestors utilizing private/public keys for authentication.
Is there any documentation, or how else can I go about doing it? The workflow is as follows:
Requestor sends a task in a specific subnet (do requestors also need a plugin, or are yagna IDs unique?)
Providers on that subnet check whether it's a whitelisted requestor by checking the yagna ID or other task information which cannot be faked
I am not entirely sure how I would structure this because I don't know how the system works behind the scenes - so any advice there would be welcome if my workflow doesn't make sense.
It's a nice idea & it's doable.
You can differentiate requestors by their IDs. To get a node's ID (requestor or provider) just run `yagna id list` (the yagna daemon has to be running - `yagna service run`).
Probably the most challenging part would be to modify the `ReactToProposal` handler of `CompositeNegotiator`. The current implementation casts the `Proposal` (a `Demand` in this context) into a `ProposalView`. Unfortunately, at this point `requestor_id` is lost. For testing purposes you can just filter requestors directly in `fn handle()` and return `Ok(ProposalResponse::RejectProposal {....})` when you want to block a requestor. Your field of interest is `msg.demand.issuer_id`.
If you want to introduce some kind of public/private key pair functionality, it could be achieved by adding a custom *demand/offer constraint* that is understood by both your requestor and provider. Unfortunately, at this time there is no public documentation on this topic.
Please feel free to reach out if you have further questions.

Should I check DynamoDB map update path before sending

I am storing data in DynamoDB as a Map attribute type and implementing a restful PATCH endpoint to modify that data (RFC 6902).
In my validation routine, I am currently NOT making sure that the Map exists before translating the patch into an update expression and sending it to DynamoDB.
This means that if the Map is not already set in DynamoDB, the update will fail (ValidationException since the document path does not exist).
My Question: is it appropriate/acceptable/OK to rely on DynamoDB rejecting the update in this way, or should I get a copy of the item and reject the patch in my own validation routines?
I have not been able to think of a reason not to allow DynamoDB the pleasure of rejecting the patch (and it saves me a GET call), but it makes me a little nervous to rely on third-party validation like this (although we specify the API version now when accessing AWS, so in theory this should always work...).
This seems like a highly subjective question, but personally I don't see the benefit of doing your own validation and adding an extra round-trip to DynamoDB when, the way I see it, the worst that can happen is the update failing in DynamoDB. This is no different than treating a local database update that returns success as authoritative.
And if you're worried about long-term backwards compatibility (i.e. the DynamoDB API contract changing from underneath you), then hopefully you've written some functional/integration tests that are run periodically, and specifically whenever you update your software dependencies.
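As an illustration of relying on that rejection rather than pre-fetching the item, here is a rough sketch using the AWS SDK for Java v2 (the table name, key, and `settings` map attribute are made up):

```java
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

public class SettingsPatcher {

    private final DynamoDbClient dynamo = DynamoDbClient.create();

    /** Applies a single "replace" patch to a key inside the "settings" map,
     *  letting DynamoDB reject it when the document path does not exist. */
    public boolean patchSetting(String itemId, String settingKey, String value) {
        UpdateItemRequest request = UpdateItemRequest.builder()
                .tableName("items") // hypothetical table
                .key(Map.of("id", AttributeValue.builder().s(itemId).build()))
                .updateExpression("SET #m.#k = :v") // fails if the "settings" map is absent
                .expressionAttributeNames(Map.of("#m", "settings", "#k", settingKey))
                .expressionAttributeValues(Map.of(":v", AttributeValue.builder().s(value).build()))
                .build();
        try {
            dynamo.updateItem(request);
            return true;
        } catch (DynamoDbException e) {
            // A missing document path surfaces as a ValidationException; translate it into
            // an error response for the PATCH caller instead of pre-checking with a GET.
            if ("ValidationException".equals(e.awsErrorDetails().errorCode())) {
                return false;
            }
            throw e;
        }
    }
}
```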

How to best architect website when each client has own database and subdomain?

For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the session_start event, we would determine which database to use for them (by looking at the subdomain they come in on) and set the connection string in a session variable. Then on every page_init, we'd dynamically set any object's connection string. In code behind, we'd do the same thing with the connection string.
Is there a better approach to doing this and will setting the connection string in page_init work? Is using a session variable wise? I've tended not to ever use them except when no other solution was possible.
The problem is that this model is really complex and can leave you with errors, especially when it comes to changes in the database. Imagine that you need to add an extra field to the interface: if you have 100 clients, that means updating 100 different databases. And when you have to deal with downtime, things get even worse.
I would approach this slightly differently: abstract your database layer by creating one API that calls the database. From the website you always call the API, passing the domain you want the data to come from.
You might ask what advantage this gives you. The biggest one shows up when doing upgrades and maintenance: having one API per client is a lot easier to manage than having the website talk to one database per client directly. And if you really want just one API (I would still recommend one per client, deployed automatically), you can switch on the call based on a parameter you pass to the API (for example, the subdomain in a request header) to choose which database to connect to.
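The question is about ASP.NET, but the header-based switch described above is stack-agnostic; here is a hypothetical sketch (in Java, with made-up subdomains and JDBC URLs) of an API resolving the per-client database from a subdomain header:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Map;

// Hypothetical resolver: the website forwards the client's subdomain in a header,
// and the API picks the matching per-client database from it.
public class ClientDatabaseResolver {

    private static final Map<String, String> JDBC_URLS = Map.of(
            "acme",   "jdbc:postgresql://db1.internal/acme",
            "globex", "jdbc:postgresql://db2.internal/globex");

    public Connection connectionFor(String subdomain) throws SQLException {
        String url = JDBC_URLS.get(subdomain);
        if (url == null) {
            throw new IllegalArgumentException("Unknown client subdomain: " + subdomain);
        }
        return DriverManager.getConnection(url, "app_user", System.getenv("DB_PASSWORD"));
    }
}
```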
Let me give you a sample scenario and how I would suggest approaching it (this holds whether you use a database or an API per client):
I want to include a new data field. The first thing is to add this field on the backend (API or database) and deploy it. If it's an API, you can even test it by calling the API and checking that the new field is now returned; that is not a problem for your UI, because it's just a field the UI doesn't use yet. After that, you change the UI to actually use the field and deploy that to production.

Determining the set of message destinations at runtime in BizTalk application

I’m a complete newbie at BizTalk and I need to create a BizTalk 2006 application which broadcasts messages in a specific way. I’m not asking for a complete solution, but for advice and guidelines on which capabilities of BizTalk I should use.
There’s a message source; for simplicity, say, a directory where the user adds files to publish them. There are several subscribers, each having a directory to receive published files. The number of subscribers can vary during the operation of the program. There are also some rules which determine whether a particular subscriber needs to receive a particular file, based on the filename. For example, each subscriber has a pattern or mask which the filenames of the files it receives must match. Those rules (for example, the patterns) can change over time as well.
I don’t know how to do this. Create a set of send ports at runtime, one for each destination? Is that possible? Use one port and change its binding? Would that work correctly with concurrent sends? Are there other ways?
EDIT
I realized my question may be too obscure and general to prefer one answer over another to accept, so I just upvoted them.
You could look at using dynamic send ports to achieve this - if your subscribers are truly dynamic. This introduces a bit of complexity since you'll need to use an orchestration to configure the send port's properties based on your rules.
If you can, try to remove the complexity. If you know that you don't need to be truly dynamic when adding subscribers (i.e. a subscriber and its rules can be configured one time only) and you have a manageable number of subscribers, then I would suggest configuring each subscriber using its own send port and using a filter to create subscriptions based on message context properties. The beauty of this approach is that you don't need to create and deploy an orchestration, and it becomes a highly performant and scalable solution.
If the changes to the destinations are going to be frequent, you are right in seeking a more dynamic solution. One nice solution is using dynamic send ports and the Business Rules Engine. You create a rule set for the messages you are receiving. This could be based on a destination property or a customer ID in the message. Using these facts, the rules engine can return a bunch of information like the file mask, server name, IP address of the delivery server, etc. You can then use this information to configure the dynamic send port in the orchestration. The really nice thing here is that you can update the rule set in the rules engine without redeploying the whole solution. For a newbie these are somewhat advanced concepts, but not as difficult as you may think.
For a simpler solution, you might want to look at setting the FILE send adapter's properties via its property schema (i.e. file name, directory, etc.). You could pull these values from a database with a helper class inside an expression shape. On each message going out, use the property schema to set where the message will be sent and what it will be named. This way, you just update the database as things change.
Good Luck!

BizTalk custom adaptor

I am not sure if I ask the right question, but this is the scenario I am trying to run:
Multiple files (an XML file and a few related files, "attachments") have to get into BizTalk as a single message. I have looked into existing adapters, and don't see this being done with the existing ones. To be more specific, the files are taken from the file system. The files are not found at the same time; they arrive one at a time, and the order is not guaranteed. The XML (content) file is the one that knows which attachments (which other files) it has to have.
We are looking into BizTalk 2009, and I was wondering whether that would be the responsibility of a custom adapter or something else, and where I could look for samples.
Thanks.
It is probably possible to do what you want using a custom adapter, though I'd recommend against it. You can achieve what you require using orchestration.
What you are looking for is likely a convoy, or at the least some use of correlation.
In BizTalk a convoy is a messaging pattern (as opposed to a BizTalk feature) that allows groups of messages to be processed by a single orchestration.
You essentially use correlation on a receive port to group messages together in either a parallel (what you probably want) or sequential fashion.
There is an article [here](http://msdn.microsoft.com/en-us/library/ms942189(BTS.10\).aspx) by Stephen W. Thomas about convoys (it is for BizTalk 2004, but the concepts still hold), and there is a lot of additional information on the web and in books (Professional BizTalk Server 2006 has a subsection on them).
Without more details on your scenario it is hard to know exactly how the convoy would be built but below are two approaches to look at (also, I've not had a chance to properly use BT2009 so there may be extended support for correlation scenarios that help you out).
Flexible Correlation
If you don't know anything about the files listed in the content XML, you will probably need a pattern like the one described by Charles Young in this post.
Non-uniform sequential convoy
If you do have a little bit of info beforehand, one way might be as follows (basically a Non-uniform sequential convoy):
This makes the assumption that there is some way of linking all the files together so you can correlate them.
Create a single orchestration that subscribes to your inbound receive port (which contains the file receive location).
This orchestration will have a single activation receive shape that is set up for your content file.
Once the orchestration is started by a content file, a second, correlated receive shape starts picking up the messages that match that content file. (This second receive could possibly be in a loop to allow for a variable number of files.)
You then pack them all together into a single outbound file of your design and send them out once the full number of files has been received.
Seems to me a better approach would be to implement the above requirements with a combination of a custom pipeline component and/or a custom adapter. I assume you do not really need to manipulate the incoming files - except for the content XML file - or that you couldn't since they are in binary format. This calls for a custom pipeline component.
What you can do is develop a custom BizTalk adapter to interact with the file system and to implement the listening and looping logic. Next you can develop a custom pipeline component to create a single BizTalk message perhaps with base64 data type in it for binary data. Additionally you could also promote messages right in this component to enable orchestration subscriptions.
Orchestrations are more suited for implementing business workflow scenarios where the messages are already in XML format. This does not appear to be the case here. In any case, I think at the very least a custom pipeline component would be needed.
David's answer is the correct answer.
Even in cases where you don't know anything at all about the contents of the expected attachments, surely you know their names and locations. Therefore you can use the Flexible Correlation linked to in David's answer like this:
The key to the solution is to correlate on the built-in BTS.ReceivedFileName property.
First, create a custom receive pipeline, with a custom pipeline component that promotes the BTS.ReceivedFileName context property of the received messages. This simple custom component is fairly easy to write, but you can make it even more straightforward by using third-party frameworks such as (shameless plug here) my PipelineComponentBase class or the excellent BizTalk Server Pipeline Component Wizard.
Now for the easy part:
Attachments are received in a specific location, designated by its path on the filesystem.
Create a receive location that listens to an alternate location, used only to control when files are actually swallowed by BizTalk.
In your orchestration, create a correlation type with the BTS.ReceivedFileName property and a correlation set based on this correlation type.
When you want to receive binary attachments, send a dummy message with the BTS.ReceivedFileName context property set to the filename of the binary attachment, but with the path matching the alternate location; the one used by the receive location. Initialize the correlation on the send shape.
Use an expression shape to copy the binary file from its original location to the one used by the receive location.
Finally, use a receive shape bound to the receive port that contains the receive location whose custom receive pipeline will promote the BTS.ReceivedFileName property.
Notice that you actually need to send a message in order to initialize the correlation. It does not really matter what message you send. What I'd do is send the message through a send pipeline that contains an empty pipeline component, that is, a pipeline component that reads the message but returns null (so that the message vanishes into thin air before it reaches the adapter). A more elaborate solution would be to use a null adapter, that is, an adapter that reads the message but does not do anything with it.
These two solutions avoid having many files accumulate in a temporary location somewhere, just for the sake of initializing a correlation!
