Explicit vs Implicit upgrades of Contracts & States in Corda

There seems to be plenty of information about explicit contract and state upgrades, but there is a lack of information about implicit contract and state upgrades.
Assume that I use signature constraints for my contracts. How do I migrate old states to the new version if I also want to keep using the old ones?
UPDATE:
I have found those samples and, as I understand it, there is no state upgrade process at all! Instead, all flows, states, and contracts are written in a backward-compatible way. But intuitively, if I have 50 releases, for example, does that mean the related piece of code will contain 50 if/else branches for all possible old versions of the flow? Won't the code become a mess? Is there any way of somehow normalizing the states?

I think you are correct. As long as the old versions of data (i.e. Corda states) exist in the network, you will need to keep this conditional logic in your contract code, so that it's capable of handling states of the older format.
What you could do to mitigate this proliferation of conditional logic is:
identify all the states in the older format. If there are any, migrate them to the new format by spending them in a transaction and re-creating them in the new format. If there aren't any, move to the next step.
perform another implicit upgrade of your contract code that has no functional changes besides removing the conditional logic that is no longer needed.
Following these steps, you can gradually remove conditional logic that is no longer needed, simplifying the contract code over time. But you're essentially back to a form of explicit upgrade, which might not be very practical depending on the number of parties and states in your network.
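For illustration, the version-conditional contract logic described above might look something like the following Kotlin sketch. The AssetState and AssetContract are hypothetical (not taken from any official sample); the idea is that states written by an older release lack the newer field, so verify() has to branch on the format it is given.
import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction

// Hypothetical state: release 2 added `category`, so states produced by
// release 1 still circulate with category == null.
data class AssetState(
    val owner: AbstractParty,
    val value: Int,
    val category: String?, // added in release 2
    override val participants: List<AbstractParty> = listOf(owner)
) : ContractState

class AssetContract : Contract {
    override fun verify(tx: LedgerTransaction) {
        tx.outputsOfType<AssetState>().forEach { state ->
            // Checks that apply to every format.
            require(state.value > 0) { "Asset value must be positive." }

            if (state.category == null) {
                // Release-1 format: nothing more we can check.
            } else {
                // Release-2 format: validate the new field as well.
                require(state.category.isNotBlank()) { "Category must be set." }
            }
        }
    }
}
Once every release-1 state has been spent and re-issued in the new format (step one above), the null branch can be deleted in the next implicit upgrade (step two).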

Related

Should we include value matching as part of contract validation?

We are using Pact for contract testing. For an API that is used in two different contexts, it responds with different values (though the fields and format of the responses are the same). Should we cover this as two different interactions? If yes, is it not like testing the business functionality?
This has implications in the following case too...
We are using contract tests for backward-compatibility validation of our mobile app. Suppose there is a change in the API where only the values in the response have changed, and the change works for the current version of the consumer but not for an older one. This will break the provider's backward compatibility with the older version of the mobile app. How will we be able to catch this if we only consider the structure of the response in our contract tests?
It's a good question and there is no straightforward answer for it in all cases.
The first thing I'd say on this is that structure alone is not a contract test - it's more of a schema test - and schemas are not contracts. Contract tests do care about values in many cases.
Start by working from the following rule of thumb:
If I don't include this scenario, what bug in the consumer, or what misunderstanding about how the provider responds, might be missed? If the answer is none, don't include it.
In your case the value of role has a specific enum-like meaning. I assume that the consumer code looks for this value and might conditionally do something different based on it. For example, if it were a UI it might display different options alongside the page, or fetch other relevant data.
If that's the case, then you would potentially want to include it in a contract test. If the UI doesn't care about the value of the field other than to display it, then it wouldn't be worth including in a contract test.
if yes, is it not like testing the business functionality?
Well, not exactly. Contract tests are not functional tests, but as with many types of testing there is overlap. Your aim here is to ensure that the various important conversations expected to occur between a consumer and a provider are covered. Functional testing is a separate activity that is primarily distinguished by its concern for side effects.
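To make the distinction concrete, here is a minimal sketch using the pact-jvm consumer DSL (package and method names can vary between Pact versions, so treat the exact calls as an assumption; the field names are made up). A type-only matcher documents structure, while pinning the exact role value documents behaviour the consumer depends on.
import au.com.dius.pact.consumer.dsl.PactDslJsonBody

val expectedBody = PactDslJsonBody()
    // Schema-style expectation: any string will do, the consumer only displays it.
    .stringType("displayName", "Jane Doe")
    // Contract-style expectation: the consumer branches on this enum-like value,
    // so the exact value is part of the contract.
    .stringValue("role", "admin")
With the value pinned, a provider that starts returning a role the consumer has never handled would fail verification, which is exactly the backward-compatibility break described in the question.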

Using Rule Flow in InRule for Workflow

I see that Rule Flow supports actions, so it may be possible to build some types of workflow on top of it. In my situation I have a case management application with tasks for different roles, all working on a "document" that flows through different states; depending on the state, a different role will see it in their queue to work on.
I'm not sure what your question is, but InRule comes with direct support for Windows Workflow Foundation, so executing any InRule RuleApplication, including those with RuleFlow definitions, is certainly possible.
If you'd like assistance setting up this integration, I would suggest utilizing the support knowledge base and forums at http://support.inrule.com
Full disclosure: I am an InRule Technology employee.
For case management scenarios, you can use decisions specifically to model a process. Create a custom table or flags in your cases that represent the transition points in your process (steps). As you transition between steps, call a decision that determines whether the data state is good enough to make the transition. If it is, set the flag for the new state. Some folks allow multiple states at the same time. InRule is a stateless platform; however, when used with CRM it provides 95% of the process logic and relies on CRM to do the persistence. I have written about this pattern in a white paper:
https://info.inrule.com/rs/inruletechnology/images/White_Paper_InRule_Salesforce_Integration.pdf
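As a rough sketch of the flag-plus-decision pattern described above (plain Kotlin, not InRule API; the decision function is a stand-in for invoking an InRule decision through whatever integration your deployment uses):
enum class CaseStep { INTAKE, REVIEW, APPROVAL, CLOSED }

data class Case(val id: String, val step: CaseStep, val documentComplete: Boolean)

// Stand-in for the stateless decision: is the data good enough to transition?
fun decideCanTransition(case: Case, target: CaseStep): Boolean =
    when (target) {
        CaseStep.REVIEW   -> case.documentComplete
        CaseStep.APPROVAL -> case.step == CaseStep.REVIEW && case.documentComplete
        else              -> true
    }

// The step flag is only updated when the decision passes; persisting the
// updated case stays in the host system (CRM in the white paper's scenario).
fun transition(case: Case, target: CaseStep): Case =
    if (decideCanTransition(case, target)) case.copy(step = target) else case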

Which tincan verbs to use

For data normalisation of standard Tin Can verbs, is it best to use verbs from the Tin Can Registry https://registry.tincanapi.com/#home/verbs, e.g.
completed http://activitystrea.ms/schema/1.0/complete
or to use the ADL verbs like those defined:
in the 1.0 spec at https://github.com/adlnet/xAPI-Spec/blob/master/xAPI.md
this article http://tincanapi.com/2013/06/20/deep-dive-verb/
and listed at https://github.com/RusticiSoftware/tin-can-verbs/tree/master/verbs
e.g.
completed http://adlnet.gov/expapi/verbs/completed
I'm confused as to why those in the registry differ from every other example I can find. Is one of these out of date?
It really depends on which "profile" you want to target with your Statements. If you are trying to stick to e-learning practices that most closely resemble SCORM or some other standard, then the ADL verbs may be the best fit. It is a very limited set, and really only the "voided" verb is provided for by the specification. The other verbs relate to those found in 0.9 and have become the de facto set, but they aren't any more "standard" than any other URI. If you are targeting statements to be used in an Activity Streams way, specifically with a social application, then you may want to stick with that set. Note that there are verbs in the Registry that are neither coined by ADL nor provided by the Activity Streams specification.
If you aren't targeting any specific profile (or existing profile) then you should use the terms that best capture the experiences which you are trying to record. And we ask that you either coin those terms at our Registry so that they are well formed and publicly available, or if you coin them under a different domain then at least get them catalogued in our Registry so others may find them. Registering a particular term in one or more registries will hopefully help keep the list of terms from exploding as people search for reusable items. This will ultimately make reporting tools more interoperable with different content providers.
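Whichever profile you pick, the verb ends up as nothing more than a URI in the statement's verb id. A small illustration (the Verb type here is a hypothetical helper, not a specific xAPI client library):
// Hypothetical helper type for illustration.
data class Verb(val id: String, val display: Map<String, String>)

// ADL-coined verb, a natural fit for SCORM-like e-learning profiles.
val adlCompleted = Verb(
    id = "http://adlnet.gov/expapi/verbs/completed",
    display = mapOf("en-US" to "completed")
)

// Activity Streams verb from the Registry, a better fit for social/stream use.
val asCompleted = Verb(
    id = "http://activitystrea.ms/schema/1.0/complete",
    display = mapOf("en-US" to "completed")
)
Reporting tools generally compare verbs by that id string alone, which is why settling on one URI per meaning, and registering it, helps interoperability.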

How to release a subset of deliverables?

Further to my question at accidentally-released-code-to-live-how-to-prevent-happening-again: after client UAT we often have the client saying they are happy for a subset of features to be released, while they want the others in a future release instead.
My question is: "How do you release 2/3 (two out of three) of your features?"
I'd be interested in how the big players such as Microsoft handle situations like:
"Right, we're only going to release 45/50 of the initially proposed features/fixes in the next version of Word; package it and ship it out."
Assuming those 5 features not going into the next release have already been started, how can you exclude them from the release build and deployment?
How do you release 2/3 of your developed features?
How to release a subset of deliverables?
-- Lee
If you haven't thought about this in advance, it's pretty hard to do.
But in the future, here's how you could set yourself up to do this:
Get a real version control system, with very good support for both branching and merging. Historically, this has meant something like git or Mercurial, because Subversion's merge support has been very weak. (The Subversion team has recently been working on improving their merge support, however.) On the Windows side, I don't know which VC tools are best for something like this.
Decide how to organize work on individual features. One approach is to keep each feature on its own branch, and only merge it back to the main branch when the new feature is ready. The goal here is to keep the main branch almost shippable at all times. This is easiest when the feature branches don't sit around collecting dust—perhaps each programmer could work on only 1 or 2 features at a time, and merge them as soon as they're ready?
Alternatively, you can try to cherry-pick individual patches out of your version control history. This is tedious and error-prone, but it may be possible for certain very disciplined development groups who write very clean patches that make exactly 1 complete change. You see this type of patch in the Linux kernel community. Try looking at some patches on the Linux 2.6 gitweb to see what this style of development looks like.
If you have trouble keeping your trunk "almost shippable" at all times, you might want to read a book on agile programming, such as Extreme Programming Explained. All the branching and merging in the world will be useless if your new code tends to be very buggy and require long periods of testing to find basic logic errors.
Updates
How do feature branches work with continuous integration? In general, I tend to build feature branches after each check-in, just like the main branch, and I expect developers to commit more-or-less daily. But more importantly, I try to merge feature branches back to the main branch very aggressively—a 2-week-old feature branch would make me very, very nervous, because it means somebody is living off in their own little world.
What if the client only wants some of the already working features? This would worry me a bit, and I would want to ask why the client only wants some of the features. Are they nervous about the quality of the code? Are we building the right features? If we're working on features that the client really wants, and if our main branch is always solid, then the client should be eager to get everything we've implemented. So in this case, I would first look hard for underlying problems with our process and try to fix them.
However, if there were some special once-in-a-blue-moon reason for this request, I would basically create a new trunk, re-merge some branches, and cherry-pick other patches. Or disable some of the UI, as the other posters have suggested. But I wouldn't make a habit of it.
This reminds me a lot of an interview question I was asked at Borland when I was applying for a program manager position. There the question was phrased differently — there's a major bug in one feature that can't be fixed before a fixed release date — but I think the same approach can work: remove the UI elements for the features for a future release.
Of course this assumes that the features you want to leave out don't affect the rest of what you want to ship... but if that's the case, just changing the UI is easier than trying to make a more drastic change.
In practice, what I think you would do is branch the code for release and then make the UI removals on that branch.
It's usually a function of version control, but doing something like that can be quite complicated depending on the size of the project and how many changesets/revisions you have to classify as desired or not desired.
A different but fairly successful strategy that I've employed in the past is making the features themselves configurable and deploying them as disabled, either for unfinished features or for clients who don't want to use certain features yet.
This approach has several advantages: you don't have to juggle which features/fixes have and haven't been merged, and, depending on how you implement the configuration and whether the feature was finished at the time of deployment, the client can change their mind without having to wait for a new release to take advantage of the additional functionality.
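A minimal sketch of that configuration-driven toggle, assuming a simple properties file (the property names and the FeatureFlags helper are made up for illustration):
import java.util.Properties

class FeatureFlags(private val props: Properties) {
    fun isEnabled(feature: String): Boolean =
        props.getProperty("feature.$feature.enabled", "false").toBoolean()
}

fun main() {
    val props = Properties().apply {
        setProperty("feature.bulk-export.enabled", "true")   // shipped and visible
        setProperty("feature.new-billing.enabled", "false")  // shipped but switched off
    }
    val flags = FeatureFlags(props)

    if (flags.isEnabled("new-billing")) {
        // The unfinished or deferred feature stays behind the flag until the
        // client opts in -- no branch juggling at release time.
    }
}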
That's easy, been there done that:
Create a 2/3 release branch off your current mainline.
In the 2/3 release branch, delete unwanted features.
Stabilize the 2/3 release branch.
Name the version My Product 2.1 2/3.
Release from the 2/3 release branch.
Return to the development in the mainline.

Architecture for Satellite Parts of a Larger Application

I work for a firm that provides certain types of financial consulting services in most states in the US. We currently have a fairly straightforward CRUD application that manages clients and information about assets and services we perform for each. It only concerns itself with the fundamental data points and processes that are common to all locations--the least common denominator.
Now we want to implement support for tracking disparate data points and processes that vary from state to state while preserving the core nationally-oriented system. Like this:
[Diagram: the core national application surrounded by state-specific satellite pieces; image originally hosted on flickr.com]
The stack I'm working with is ASP.Net and SQL Server 2008. The national application is a fairly straightforward web forms thing. Its data access layer is a repository wrapper around LINQ to SQL entities and datacontext. There is little business logic beyond CRUD operations currently, but there would be more as the complexities of each state were introduced.
So, how to implement the satellite pieces...
Just start glomming on the functionality and pursue a big ball of mud
Build a series of satellite apps that re-use the data-access layer but are otherwise stand-alone
Invest (money and/or time) in a rules engine (a la Windows Workflow) and isolate the unique bits for each state as separate rule-sets
Invest (time) in a plugin framework a la MEF and implement each state's functionality as a plugin
Something else
The ideal user experience would appear as a single application that seamlessly adapts its presentation and processes to whatever location the user is working with. This is particularly useful because some users work with assets in multiple states. So there is a strike against number two.
I have no experience with MEF or WF so my question in large part is whether or not mine is even the type of problem either is intended to address. They both kinda sound like it based on the hype, but could turn out to be a square peg for a round hole.
In all cases each state introduces new data points, not just new processes, so I would imagine the data access layer would grow to accommodate the addition of new tables and columns, but I'm all for alternatives to that as well.
Edit: I tried to think of some examples I could share. One might be that in one state we submit certain legal filings involving client assets. The filing has attributes and workflow that are different from other states that may require similar filings, and the assets involved may have quite different attributes. Other states may not have comparable filings at all, still others may have a series of escalating filings that require knowledge of additional related entities unique to that state.
Start with the Strategy design pattern, which basically allows you to outline a "placeholder" to be replaced by concrete classes at runtime.
You'll have to sketch out a clear interface between the core app and the "plugins", and have each strategy implement it. Then, at runtime, when you know which state the user is working on, you can instantiate the appropriate state strategy class (perhaps using a factory method) and call the generic methods on it, e.g. something like
IStateStrategy stateStrategy = StateSelector.GetStateStrategy("TX"); //State id from db, of course...
stateStrategy.Process(nationalData);
Of course, each of these strategies should use the existing data layer, etc.
The (apparent) downside of this solution is that you'll be hardcoding the rules for each state, and you cannot transparently add new rules (or new states) without changing the code. Don't be fooled, that's not a bad thing - your business logic should be implemented in code, even if it's dependent on runtime data.
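Fleshed out a little, and in Kotlin rather than the C#-flavoured snippet above (StateStrategy, StateSelector, and NationalData are illustrative names, not an existing API), the pattern might look like this:
data class NationalData(val clientId: String, val assets: List<String>)

interface StateStrategy {
    fun process(data: NationalData)
}

class TexasStrategy : StateStrategy {
    override fun process(data: NationalData) {
        // Texas-specific filings, extra attributes, workflow, etc.
    }
}

class DefaultStrategy : StateStrategy {
    override fun process(data: NationalData) {
        // Least-common-denominator national behaviour.
    }
}

object StateSelector {
    // Factory method: the state code would come from the database in practice.
    fun getStateStrategy(stateCode: String): StateStrategy =
        when (stateCode) {
            "TX" -> TexasStrategy()
            else -> DefaultStrategy()
        }
}

fun handle(stateCode: String, data: NationalData) =
    StateSelector.getStateStrategy(stateCode).process(data)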
Just a thought: whatever you do, completely code 3 states first (with 2 you're still tempted to repeat identical code, with more it's too time-consuming if you decide to change the design).
I must admit I'm completely ignorant about rules engines or WF. But wouldn't it be possible to just have one big, stupid ASP.NET include file with the instructions for each state separated from the main logic, without any additional language or program?
Edit: Or is it just that each state has quite a lot of completely different functionality, not just some bits?
