Proper way to implement a whitelist - Corda

I am new to Corda and have a question about how to properly implement a whitelist in Corda.
Let's assume a fungible security token is issued on Corda that is subject to a certain kind of regulation (e.g., the investor is not allowed to be from a certain country). A whitelist would therefore be required to make sure that all regulatory requirements are met.
In a private network I assume that there is no need for an actual whitelist, as the issuer who runs the node can control who is allowed to join the network and who isn't.
But on the public Corda network there are potentially many identities who are not allowed to hold a certain type of token and a whitelist would be required.
What would be the proper design choice for this kind of problem?
I thought about having a WhitelistState which holds a set of all whitelisted investors. But if I understand correctly, every participant of the state (in this case the issuer and the investors) would have to sign a transaction whenever an investor is added to or removed from the whitelist, which is not a suitable solution.
I would appreciate any helpful advice on how to solve such an issue!

Have a look at the blacklist project in the samples repo:
https://github.com/corda/samples/tree/release-V4/blacklist
Essentially what they do:
1. Inside ReachAgreementFlow they add an attachment to the transaction; that attachment is a JAR file containing a blacklist.txt file.
2. Inside the state contract AgreementContract (which validates the transaction) they extract the JAR, read its contents, and make sure that the company listed in the agreement is not on the blacklist.
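As a rough sketch of how the contract-side check can look (hypothetical names throughout: WhitelistContract, whitelist.txt, and screening output-state participants; the actual AgreementContract in the sample differs in the details):

```kotlin
import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractAttachment
import net.corda.core.contracts.requireThat
import net.corda.core.transactions.LedgerTransaction

class WhitelistContract : Contract {
    companion object {
        const val ID = "com.example.WhitelistContract"
        const val WHITELIST_FILE_NAME = "whitelist.txt"   // assumed file name inside the attachment JAR
    }

    override fun verify(tx: LedgerTransaction) {
        // The whitelist travels with the transaction as a (non-contract) attachment.
        val whitelistAttachment = tx.attachments
            .filterNot { it is ContractAttachment }
            .singleOrNull()
            ?: throw IllegalArgumentException("The transaction must carry exactly one whitelist attachment.")

        // Open the attachment as a JAR and read whitelist.txt into a set of organisation names.
        val whitelisted: Set<String> = whitelistAttachment.openAsJAR().use { jar ->
            var entry = jar.nextJarEntry
            while (entry != null && entry.name != WHITELIST_FILE_NAME) {
                entry = jar.nextJarEntry
            }
            requireNotNull(entry) { "Attachment does not contain $WHITELIST_FILE_NAME." }
            jar.bufferedReader().readLines().map { it.trim() }.filter { it.isNotEmpty() }.toSet()
        }

        // Every party that ends up holding the output states must be on the whitelist.
        val holders = tx.outputStates
            .flatMap { it.participants }
            .mapNotNull { it.nameOrNull()?.organisation }

        requireThat {
            "All holders must be whitelisted." using holders.all { it in whitelisted }
        }
    }
}
```

Because the whitelist is just an attachment, the issuer can publish a new version of it without collecting signatures from every investor; the contract only has to pin down which attachment is acceptable (for example by checking its hash or signer).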


Creating a plugin in the provider client

I am looking to create a plugin on yagna where providers will only accept tasks from whitelisted requestors utilizing private/public keys for authentication.
Is there any documentation, or how can I go about doing it? The workflow is as follows:
Requestor sends a task in a specific subnet (do requestors also need a plugin, or are yagna IDs unique?)
Providers on that subnet check whether it is a whitelisted requestor by checking the yagna ID or other task information that cannot be faked
I am not entirely sure how I would structure this because I don't know how the system works behind the scenes, so any advice is welcome if my proposed workflow doesn't fit.
It's a nice idea & it's doable.
You can differentiate requestors by their IDs. To get a node's ID (requestor or provider), just run yagna id list (the yagna daemon has to be running: yagna service run).
Probably the most challenging part would be to modify the ReactToProposal handler of CompositeNegotiator. The current implementation casts the Proposal (a Demand in this context) into a ProposalView. Unfortunately, at this point requestor_id is lost. For testing purposes you can just filter requestors directly in fn handle() and return Ok(ProposalResponse::RejectProposal {....}) when you want to block a requestor. Your field of interest is msg.demand.issuer_id.
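Just to illustrate the decision logic being described (yagna itself is written in Rust and ReactToProposal/ProposalResponse are its real components; the Kotlin below is a made-up, conceptual sketch, not yagna code):

```kotlin
// Conceptual sketch only: reject a proposal when the demand's issuer id
// is not on a configured allowlist, otherwise keep negotiating.
data class Demand(val issuerId: String)

sealed class ProposalResponse {
    object AcceptForFurtherNegotiation : ProposalResponse()
    data class RejectProposal(val reason: String) : ProposalResponse()
}

class AllowlistNegotiator(private val allowedRequestorIds: Set<String>) {
    fun handle(demand: Demand): ProposalResponse =
        if (demand.issuerId in allowedRequestorIds) {
            ProposalResponse.AcceptForFurtherNegotiation
        } else {
            ProposalResponse.RejectProposal("requestor ${demand.issuerId} is not whitelisted")
        }
}

fun main() {
    // IDs as returned by `yagna id list` (placeholder values here).
    val negotiator = AllowlistNegotiator(setOf("0xallowed-requestor-id"))
    println(negotiator.handle(Demand(issuerId = "0xunknown-requestor-id"))) // -> RejectProposal(...)
}
```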
If you want to introduce some kind of public/private key pair functionality, it could be achieved by adding a custom demand/offer constraint that is understood by both your requestor and provider. Unfortunately, at this time there is no public documentation on this topic.
Please feel free to reach out if you have further questions.

Pact. How to test a REST GET with automatically generated ID in the URL

I want to test a REST service that returns the details of a given entity identified by a UUID, i.e. my consumer pact has an interaction requesting a GET like this:
/cities/123e4567-e89b-12d3-a456-426655440000
So I need this specific record to exist in the database for the pact verifier to find it. In other projects I've achieved this by executing an SQL INSERT in the state setup, but in this case I'd prefer to use the microservice's JPA utilities for accessing the DB, because the data model is quite complex and using these utilities would save me a lot of effort and make the test much more maintainable.
The problem is that these utilities do not allow specifying the identifier when you create a new record (they assign an automatic ID). So after creating the entity (in the state setup) I'd like to tell the pact verifier to use the generated ID rather than the one specified by the consumer pact.
As far as I know, Pact matching techniques are not useful here because I need the microservice to receive this specific ID. Is there any way for the verifier to be aware of the correct ID to use in the call to the service?
You have two options here:
Option 1 - Find a way to use the UUID from the pact file
This option (in my opinion) would be the better one, because you are using well-known values for your verification. With JPA, I think you may be able to disable the auto-generation of the ID. And if you are using Hibernate as the JPA provider, it may not generate an ID if you have provided it one (i.e. setting the ID on the entity to the one from the pact file before saving it). This is what I have done recently.
Using a generator (as mentioned by Beth) would be a good mechanism for this problem, but there is no current way to provide a generator to use a specific value. They generate random ones on the fly.
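For example, a minimal sketch of option 1 on a JPA provider: the ID is application-assigned (no @GeneratedValue) and a Pact-JVM provider-state callback persists the record under the exact UUID from the pact. The entity, the state name and the persistence wiring are illustrative; note also that the @State annotation's package differs between Pact-JVM versions (au.com.dius.pact.provider.junit.State in older ones, au.com.dius.pact.provider.junitsupport.State in newer ones), as does javax vs jakarta.persistence:

```kotlin
import au.com.dius.pact.provider.junitsupport.State  // older Pact-JVM: au.com.dius.pact.provider.junit.State
import jakarta.persistence.Entity                    // older stacks: javax.persistence.*
import jakarta.persistence.EntityManager
import jakarta.persistence.Id
import java.util.UUID

@Entity
class CityEntity(
    @Id                                  // no @GeneratedValue: the application assigns the id itself
    var id: UUID = UUID.randomUUID(),
    var name: String = ""
)

class CityProviderStates(private val entityManager: EntityManager) {

    @State("a city with id 123e4567-e89b-12d3-a456-426655440000 exists")
    fun cityExists() {
        // Persist the record under the exact UUID the consumer pact requests, so that
        // GET /cities/123e4567-e89b-12d3-a456-426655440000 finds it during verification.
        // (Transaction handling and repository wiring are omitted from this sketch.)
        entityManager.persist(
            CityEntity(
                id = UUID.fromString("123e4567-e89b-12d3-a456-426655440000"),
                name = "Example City"
            )
        )
    }
}
```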
Option 2 - Replace the ID in the URL
Depending on how you run the verification, you could use a request filter to change the UUID in the URL to the one which was created during the provider state callback. However, I feel this is a potentially bad thing to do, because you could change the request in a way that weakens the contract. You will not be verifying that your provider adheres to what the consumer specified.
If you choose this option, be careful to only change the UUID portion of the URL and nothing else.
For information on request filters, have a look at Gradle - Modifying the requests before they are sent and JUnit - Modifying the requests before they are sent in the Pact-JVM readmes.
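If you do go down this road, a plain helper along these lines keeps the change surgical; it is not a Pact API, just the substitution step you would call from the Gradle requestFilter closure or the JUnit request filter described in those readmes:

```kotlin
import java.util.UUID

// Replaces only the UUID segment of the request path with the id created in the
// provider state callback, leaving the rest of the URL untouched.
private val UUID_PATTERN = Regex(
    "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

fun replaceUuidInPath(path: String, generatedId: UUID): String =
    UUID_PATTERN.replace(path, generatedId.toString())

fun main() {
    val generatedId = UUID.randomUUID()  // pretend this came from the provider state setup
    println(replaceUuidInPath("/cities/123e4567-e89b-12d3-a456-426655440000", generatedId))
    // -> /cities/<generatedId>
}
```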
Unfortunately not. The provider side verifier takes this information from the pact file itself and so can't know how to send anything else.
The best option is to use provider states to manage the injection of the specific record prior to this test case (or to just have the correct record in there in the first place).
You use the JPA libraries during the provider state setup to modify the UUID in the record to the one you're expecting.
If you are using pact-jvm on both the consumer and provider sides, I believe you may be able to use 'generators', but you'll need to look up the documentation on that as I haven't used them.

How can I access the vault in a smart contract?

How can I access the vault in a smart contract?
I want to do the following business validation in the smart contract:
- Check whether the new data and attachment I have entered already exist in the vault.
You cannot access the vault, or any other source of outside information, from within the contract. This is because contract execution must be deterministic. If a contract's view of the validity of a ledger update depended on the current contents of your vault, disagreements could arise between different nodes (or even within the same node at different points in time) on whether a given ledger update was valid. This would destroy the integrity of the ledger - there would be no consensus on which updates were valid.
In your case, it might be best to impose the additional constraints you want to impose within the flow. For example, within the flow you could check the contents of the proposed transaction against the contents of the vault, and sign or not sign the transaction accordingly.
It's important to keep in mind - just because a transaction is contractually valid, does not mean you have to sign it!
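As an illustration, here is a minimal sketch of such a flow-side check, assuming a hypothetical TokenState with a reference field and the standard SignTransactionFlow responder pattern (your state type and duplicate rule will differ):

```kotlin
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.ContractState
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.SignTransactionFlow
import net.corda.core.identity.AbstractParty
import net.corda.core.node.services.queryBy
import net.corda.core.transactions.SignedTransaction

// Assumed minimal state, for illustration only.
data class TokenState(
    val reference: String,
    override val participants: List<AbstractParty> = emptyList()
) : ContractState

// Responder flow (in a real CorDapp it would be annotated with @InitiatedBy(<your initiating flow>::class)).
class CheckVaultBeforeSigningResponder(private val otherSession: FlowSession) : FlowLogic<SignedTransaction>() {

    @Suspendable
    override fun call(): SignedTransaction {
        val signFlow = object : SignTransactionFlow(otherSession) {
            override fun checkTransaction(stx: SignedTransaction) {
                val proposed = stx.tx.outputsOfType<TokenState>().single()

                // The contract cannot look at the vault, but the flow can: refuse to sign
                // if an equivalent state is already recorded.
                val existing = serviceHub.vaultService.queryBy<TokenState>().states
                val duplicate = existing.any { it.state.data.reference == proposed.reference }
                require(!duplicate) {
                    "A state with reference '${proposed.reference}' already exists in the vault - not signing."
                }
            }
        }
        return subFlow(signFlow)
    }
}
```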

Storage Commitment Service (push model): how do I get the result back to my SCU?

I plan to implement a Storage Commitment Service to verify that files previously sent to the storage were safely stored.
My architecture is very simple and straightforward: my SCU sends some secondary capture images to the storage, and I want to be sure they are safely stored before deleting them.
I am going to adopt the push model, and I wonder what steps/features I need to implement to accomplish this.
What I understood is:
1. I need to issue an N-ACTION request with SOP Class UID 1.2.840.10008.1.20.1 and add to the request a transaction identifier together with a list of Referenced SOP Class UID / Referenced SOP Instance UID pairs, where the Referenced SOP Instance UIDs are the UIDs of the secondary capture images I previously sent to the storage and the Referenced SOP Class UID in my case is the SOP Class identifier representing the Secondary Capture Image.
2. Wait for my N-ACTION response to see whether the N-ACTION request succeeded or not.
3. Get the response from the storage in the form of an N-EVENT-REPORT.
But when? How does the storage give me back the N-EVENT-REPORT along with the results? Does my SCU AE need to implement some SCP features? Or do I need to issue an N-EVENT request to get an N-EVENT-REPORT?
Have a look at the sequence diagram in the article from Roni's DICOM blog mentioned below.
Now, about your question: the following explanation assumes that the same association is used for the entire communication. For communication over multiple associations, refer to Roni's article.
But when?
Immediately, on the same connection/association. On receiving the N-ACTION response, you should wait for the timeout configured in your application. Before the timeout expires, you should get the N-EVENT-REPORT.
How does the storage give me back the N-EVENT-REPORT along with the results?
When you receive the N-ACTION response from the SCP, that means the SCP is saying "OK, I understood what you want. Now wait while I fetch your data...". So you wait. When the SCP is ready with all the necessary data (it has checked the referenced list), it sends it back on the same association through an N-EVENT-REPORT. You parse the report, do your work, send a response to the SCP saying "Fine, I am done with you.", and close the association.
Does my SCU AE need to implement some SCP features?
No (in most cases); you do not need to implement any SCP features in either case (single association or multiple associations). You should get the N-EVENT-REPORT on the same association, as mentioned above. DICOM works over TCP/IP, and the client/server concept in TCP is limited to who establishes the connection and who listens for connections; once the connection is established, either side can read and write data on the socket.
In rare cases, the SCP sends the N-EVENT-REPORT by initiating a new association on its own. In that case, the SCU does need to implement SCP features. This model is not in use as far as I am aware: it is difficult to implement for both SCP and SCU, and it needs extra configuration, which everyone tends to avoid. So it can usually be neglected. I call it rare because I have never (at least so far) come across such an implementation. But yes, it is a valid case for valid reasons.
Or do I need to issue an N-EVENT request to get an N-EVENT-REPORT?
No, as said above. Refer to this excerpt from the standard:
J.3.3 Notifications
The DICOM AEs that claim conformance to this SOP Class as an SCP shall invoke the N-EVENT-REPORT request. The DICOM AEs that claim conformance to this SOP Class as an SCU shall be capable of receiving the N-EVENT-REPORT request.
That said, the SCU should be able to process the N-EVENT-REPORT; it will NOT issue it.
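To make the shape of that exchange concrete, here is a toolkit-agnostic sketch of the data the SCU sends in the N-ACTION and later receives in the N-EVENT-REPORT; the class names are made up, while the UIDs and attribute tags in the comments come from the question above and the DICOM standard:

```kotlin
// Toolkit-agnostic sketch: payloads only, no network code.
data class ReferencedSop(
    val sopClassUid: String,      // Referenced SOP Class UID (0008,1150), e.g. Secondary Capture Image Storage 1.2.840.10008.5.1.4.1.1.7
    val sopInstanceUid: String    // Referenced SOP Instance UID (0008,1155) of the image previously stored
)

// Sent by the SCU as an N-ACTION on the Storage Commitment Push Model SOP Class (1.2.840.10008.1.20.1).
data class StorageCommitmentRequest(
    val transactionUid: String,              // Transaction UID (0008,1195), generated by the SCU
    val referencedSops: List<ReferencedSop>  // Referenced SOP Sequence (0008,1199)
)

// Received later by the SCU as an N-EVENT-REPORT on the same association.
data class StorageCommitmentResult(
    val transactionUid: String,              // matches the Transaction UID from the N-ACTION
    val committed: List<ReferencedSop>,      // Referenced SOP Sequence: instances safely stored
    val failed: List<ReferencedSop>          // Failed SOP Sequence (0008,1198): instances that were not committed
)
```

Only after a StorageCommitmentResult lists an instance as committed should the SCU delete its local copy.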
There are three different sequences of events possible. I could describe them here, but this article is really excellent: Roni's DICOM blog
I have nothing to add to what is written there.

Determining the set of message destinations at runtime in BizTalk application

I'm a complete newbie at BizTalk and I need to create a BizTalk 2006 application which broadcasts messages in a specific way. I'm not asking for a complete solution, but for advice and guidance on which capabilities of BizTalk I should use.
There's a message source; for simplicity, say a directory where the user adds files to publish them. There are several subscribers, each having a directory to receive published files. The number of subscribers can change while the application is in use. There are also some rules which determine whether a particular subscriber needs to receive a particular file, based on the filename. For example, each subscriber has a filename pattern or mask which the files they receive must match. Those rules (for example, the patterns) can change over time as well.
I don't know how to do this. Create a set of send ports at runtime, one for each destination? Is that possible? Use one port and change its binding? Would that work correctly with concurrent sends? Are there other ways?
EDIT
I realized my question may be too vague and general to prefer one answer over another to accept, so I just upvoted them.
You could look at using dynamic send ports to achieve this - if your subscribers are truly dynamic. This introduces a bit of complexity since you'll need to use an orchestration to configure the send port's properties based on your rules.
If you can, try and remove the complexity. If you know that you don't need to be truly dynamic when adding subscribers (i.e. a subscriber and its rules can be configured one time only) and you have a manageable number of subscribers, then I would suggest configuring each subscriber with its own send port and using a filter to create subscriptions based on message context properties. The beauty of this approach is that you don't need to create and deploy an orchestration, and it becomes a highly performant and scalable solution.
If the changes to the destinations are going to be frequent, you are right in seeking a more dynamic solution. One nice solution is using dynamic send ports and the Business Rules Engine. You create a rule set for the messages you are receiving. This could be based on a destination property or customer ID in the message. Using these facts, the rules engine can return a bunch of information such as the file mask, server name, IP address of the delivery server, etc. You can then use this information to configure the dynamic send port in the orchestration. The really nice thing here is that you can update the rule set in the rules engine without redeploying the whole solution. As a newbie, these are some advanced concepts, but not as difficult as you may think.
For a simpler solution, you might want to look at setting the FILE send adapter's properties via its property schema (i.e. file name, directory, etc.). You could pull these values from a database with a helper class inside an expression shape. On each message going out, use the property schema to set where the message will be sent and how it will be named. This way, you just update the database as things change.
Good Luck!
