Spring Web Flow

I am trying to get my head around Spring Web Flow 2...
Am I correct in saying that a web flow operates entirely through a single URL (but with different execution parameters)?
E.g.
http://mydomain.com/flowname.html
http://mydomain.com/flowname.html?execution=e1s1
All the examples I have seen seem to do just that.
Am I also correct in saying that if you leave the flow (by going to a page outside of the flow's control), then when you return to the flow, you get a new flow instance and the contents of the earlier flow are lost?
I am trying to incorporate Web Flow 2 into an existing ecommerce site and am having problems...

A flow is mapped to a URL. When you visit this URL for the first time, a new flow execution is created and a new key assigned:
http://www.mydomain.com/flow
Once a flow execution is created, Web Flow assigns it a flow execution key. This is the execution parameter you see:
http://www.mydomain.com/flow?execution=e1s1
To answer your question about returning to the flow: if you return using the URL without the execution key, you will get a new flow execution. But if you include the execution key, you will be taken to the flow execution and state encoded in the key: e1 identifies the flow execution and s1 the state. Note that, depending on how your flow is set up, you may or may not be able to return to certain states by specifying them in the execution key.
Also note that, by default, flow execution snapshots are stored in the HttpSession. If the session times out, you will not be able to return to that flow.
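For concreteness, here is a minimal sketch of wiring a flow to a URL using Web Flow's Java config (assuming Web Flow 2.4+; the flow location path is hypothetical). The flow id defaults to the XML filename minus its extension and becomes the URL path, and Web Flow appends the ?execution=e1s1 parameter on each step:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.webflow.config.AbstractFlowConfiguration;
import org.springframework.webflow.definition.registry.FlowDefinitionRegistry;
import org.springframework.webflow.executor.FlowExecutor;
import org.springframework.webflow.mvc.servlet.FlowHandlerAdapter;
import org.springframework.webflow.mvc.servlet.FlowHandlerMapping;

// A sketch, assuming Web Flow 2.4+ Java config and a hypothetical
// "checkout" flow; the flow id ("checkout") becomes the URL path.
@Configuration
public class WebFlowConfig extends AbstractFlowConfiguration {

    @Bean
    public FlowDefinitionRegistry flowRegistry() {
        return getFlowDefinitionRegistryBuilder()
                .addFlowLocation("/WEB-INF/flows/checkout/checkout.xml")
                .build();
    }

    @Bean
    public FlowExecutor flowExecutor() {
        return getFlowExecutorBuilder(flowRegistry()).build();
    }

    // Routes requests whose path matches a registered flow id to Web Flow.
    @Bean
    public FlowHandlerMapping flowHandlerMapping() {
        FlowHandlerMapping mapping = new FlowHandlerMapping();
        mapping.setFlowRegistry(flowRegistry());
        return mapping;
    }

    @Bean
    public FlowHandlerAdapter flowHandlerAdapter() {
        FlowHandlerAdapter adapter = new FlowHandlerAdapter();
        adapter.setFlowExecutor(flowExecutor());
        return adapter;
    }
}
```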

Related

Grails 4 Async with Database Operations

My Grails 4.0.10 app needs to call an external service. The call may take up to 3 minutes, so it has to be asynchronous. After reading the doco I wrote a non-blocking service method to perform the call using a Promise without too much trouble.
The documentation describes how an async outcome can be displayed.
In my case the outcome affects the database: I must create new domain objects, modify existing domain objects, and persist the result in the onComplete closure. The doco is rather quiet on how to do this.
These are my assumptions about the onComplete closure (see the sketch after this list). My question is: are the assumptions valid? Is this the proper way to do it?
No injected dependencies are available, neither services nor (for example) log -- things you normally expect in a service
Database logic must be enclosed first within Tenants.withId if multitenancy is used, and then within withTransaction
withTransaction is prefixed with a domain name; however, other domains may freely be manipulated and persisted in the same closure
Domain instances picked up before the async call may be attached to the current session with instance.attach(), then modified and saved
If logging is needed, create a new log instance
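Not a verdict on the assumptions, but a minimal sketch of the underlying pattern in plain Java + JDBC standing in for Grails/GORM Groovy (table, column names and the H2 URL are hypothetical). The point it illustrates matches assumptions 1 and 2 above: the completion callback runs on a worker thread with no request-bound session, so it must open its own connection and explicit transaction, analogous to wrapping the work in Domain.withTransaction { } in GORM:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.CompletableFuture;

// A sketch under stated assumptions, not GORM code.
public class AsyncPersistSketch {

    // stands in for the external call that may take up to 3 minutes
    static String callExternalService() {
        return "APPROVED";
    }

    public static void main(String[] args) {
        CompletableFuture
                .supplyAsync(AsyncPersistSketch::callExternalService)
                .whenComplete((result, error) -> {
                    if (error != null) {
                        error.printStackTrace(); // real code: proper logging
                        return;
                    }
                    // open a fresh connection and explicit transaction here;
                    // nothing request-bound survives into this callback
                    try (Connection c =
                                 DriverManager.getConnection("jdbc:h2:mem:demo")) {
                        c.setAutoCommit(false);
                        try (PreparedStatement ps = c.prepareStatement(
                                "CREATE TABLE IF NOT EXISTS orders"
                                        + "(id BIGINT PRIMARY KEY, status VARCHAR(32))")) {
                            ps.executeUpdate();
                        }
                        try (PreparedStatement ps = c.prepareStatement(
                                "MERGE INTO orders KEY(id) VALUES (42, ?)")) {
                            ps.setString(1, result);
                            ps.executeUpdate();
                        }
                        c.commit();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                })
                .join(); // block only so this demo doesn't exit early
    }
}
```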

Calling an external service from Corda

There is a bank which creates a contract that is then accepted by the lender and the borrower. After signing the contract, the lender provides funds to the borrower. The bank then creates an obligation state based on the data received by automatically calling an external service.
And now:
1) In the API layer, I call the first flow, which creates one state.
2) In the API layer itself, on success of the first flow, I make an HTTP request to the external service and get the data.
3) I then pass the HTTP response to the second flow to create the other state.
Can you please let me know if there is any issue with this approach?
The requirement is that I want to trigger the first flow manually, but calling the external service and initiating the second flow should happen automatically.
I have referred to the link given below:
Making asynchronous HTTP calls from flows
You can make calls to an external service while flows are running.
The best place to get started would be looking at the CorDapp samples here. In particular, take a look at the Accessing External Data section.
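For illustration, a sketch of the approach described in the question, driven from the API layer over Corda RPC. The flow classes, RPC port, credentials and the external URL are all assumptions, not from a published sample:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import co.paralleluniverse.fibers.Suspendable;
import net.corda.client.rpc.CordaRPCClient;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.StartableByRPC;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

// A sketch, not a definitive implementation.
public class ObligationOrchestrator {

    // Hypothetical stub flows; real ones would build and sign transactions.
    @StartableByRPC
    public static class CreateContractFlow extends FlowLogic<Void> {
        @Suspendable @Override public Void call() { return null; }
    }

    @StartableByRPC
    public static class CreateObligationFlow extends FlowLogic<Void> {
        private final String externalData;
        public CreateObligationFlow(String externalData) { this.externalData = externalData; }
        @Suspendable @Override public Void call() { return null; }
    }

    public static void main(String[] args) throws Exception {
        CordaRPCOps proxy = new CordaRPCClient(
                NetworkHostAndPort.parse("localhost:10006"))
                .start("user1", "test").getProxy();

        // 1) Trigger the first flow manually and wait for it to finish.
        proxy.startFlowDynamic(CreateContractFlow.class)
                .getReturnValue().get();

        // 2) On success, call the external service from the API layer.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://example.com/rates")).build(),
                HttpResponse.BodyHandlers.ofString());

        // 3) Pass the HTTP response into the second flow automatically.
        proxy.startFlowDynamic(CreateObligationFlow.class, response.body())
                .getReturnValue().get();
    }
}
```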

Contract Tests for APIs which involve session or workflow

Hi, I am trying to write contract tests for a product purchase workflow. So obviously I cannot call the Checkout API directly without calling the Add to Cart API.
But as I have observed, verification hits the paths individually, not necessarily in the same order as listed in the Pact JSON file.
So how should I handle such a scenario, which involves session management and workflow (meaning step 2 will succeed only if step 1 was successful)?
Thanks!
Use provider states to set up the right data in the cart so that when you call checkout, you get the behaviour you want. Here's the documentation from pact.io on provider states.
Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. Tests that depend on the outcome of previous tests are brittle and land you back in integration test hell, which is the nasty place you're trying to escape by using pacts.
So how do you test a request that requires data to already exist on the provider? Provider states allow you to set up data on the provider by injecting it straight into the data source before the interaction is run, so that it can make a response that matches what the consumer expects. The name of the provider state is specified in the given clause of an interaction in the consumer, and then used to find the block of code to run in the provider to set up the right data. If you need to stub a downstream system, or return an error response that is difficult to cause in the normal scheme of things (e.g. a 500), this is the place where you can set up stubs.
https://docs.pact.io/documentation/provider_states.html
In your case, this would look like:
Given an item has been added to the cart
upon receiving a request to checkout it will respond with the checkout response....
As an aside, I would also imagine you'd want: given no items have been added to the cart, upon receiving a request to checkout it will respond with some other type of response (empty cart? error?).
The provider will need to implement the provider state hook for an item has been added to the cart in the verification code, which will add the item to the cart by inserting it directly into the data source.
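For illustration, a sketch of the provider-side state hooks using Pact JVM's JUnit support (the annotation package has moved between versions; older releases use au.com.dius.pact.provider.junit.State). The repository, user and SKU are hypothetical stand-ins for your data source:

```java
import au.com.dius.pact.provider.junitsupport.State;

// A sketch assuming Pact JVM; the state names match the example above.
public class CartProviderStates {

    private final CartRepository cartRepository = new CartRepository();

    @State("an item has been added to the cart")
    public void itemInCart() {
        // Insert the item straight into the data source, bypassing the
        // Add to Cart API, so the checkout interaction verifies in isolation.
        cartRepository.save("user-1", "sku-123", 1);
    }

    @State("no items have been added to the cart")
    public void emptyCart() {
        cartRepository.clear("user-1");
    }

    // Hypothetical data-source access; real code would hit your database.
    static class CartRepository {
        void save(String user, String sku, int qty) { /* insert row */ }
        void clear(String user) { /* delete rows */ }
    }
}
```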

Is it possible to suspend a flow such that it can be resumed with an RPC-call?

I am trying to implement the following use-case in Corda:
FlowA has been invoked on PartyA via startFlowDynamic. FlowA creates a partially signed transaction and invokes FlowB on PartyB via sendAndReceive. A human user shall now review and manually approve this transaction. Ideally, FlowB should suspend after receiving the transaction. I would like to be able to query for suspended instances of FlowB via RPC and present those (or rather some representation of the transaction therein) to the user in my UI. Then, after the user approves, I would like to resume FlowB via RPC, which would then sign the transaction and return it to FlowA on PartyA.
I noticed that I can inspect suspended flows to some degree via CordaRPCOps.stateMachineAndUpdates, and I read the tutorial on progress tracking, but it doesn't quite suffice for my case. I also read that interacting with people from flows is listed as a future feature, so I wondered whether there is already some way to accomplish this?
See the Negotiation CorDapp sample here for an example of how this would work in practice.
Corda doesn't currently support suspending a flow for user interaction.
However, you can support this kind of workflow as follows. Suppose you're writing a CorDapp for loan applications. You could have an initial flow that agrees the creation of a loanApplication state between two parties. From there, the approver can inspect the loan application and either kick off an approve flow that creates a transaction to transform the loanApplication into an approvedLoan state, or kick off a reject flow to consume the loanApplication state without issuing an approvedLoan state.
Equally, you could add a status field to the loan state, specifying whether the loan is approved or not. Initially, the loan state would have the field set to unapproved. Then the approver could kick off one of two flows to update the loan state, to either have an approved or a rejected status.
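A sketch of the status-field variant, with hypothetical names (LoanState and LoanStatus are not from a published sample). The state starts as UNAPPROVED; separate approve/reject flows would consume it and re-issue it with the status updated once the human has decided:

```java
import java.util.Arrays;
import java.util.List;

import net.corda.core.contracts.ContractState;
import net.corda.core.identity.AbstractParty;
import net.corda.core.identity.Party;

// A sketch under stated assumptions; contract and flow code omitted.
public class LoanState implements ContractState {

    public enum LoanStatus { UNAPPROVED, APPROVED, REJECTED }

    private final Party lender;
    private final Party borrower;
    private final LoanStatus status; // UNAPPROVED until a human decides

    public LoanState(Party lender, Party borrower, LoanStatus status) {
        this.lender = lender;
        this.borrower = borrower;
        this.status = status;
    }

    // An approve/reject flow would consume this state and output the copy
    // produced here in the same transaction.
    public LoanState withStatus(LoanStatus newStatus) {
        return new LoanState(lender, borrower, newStatus);
    }

    @Override
    public List<AbstractParty> getParticipants() {
        return Arrays.asList(lender, borrower);
    }
}
```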
I'm not sure if this is a "recommended approach", but I implemented a Quasar-compatible AsyncListenableFuture in my flow as someone else had described here.
I needed to suspend a flow and wait for the production of a state from another flow (in response to a user interaction). It seems to work, but I suspect it could be regarded as rather off-piste(?!).
Splitting the activities into atomic flows invoked directly by UI interaction is fine, but I needed a sort of "monitoring" flow to wait for an external (e.g. user) event before determining which subflow to initiate next, and this needed to happen automatically and from within a flow already invoked prior to the user interaction. The flow logic is then conditional on a state change, which may arise from a user interaction or an incoming transaction from another node.
In my case, this high-level monitoring flow detects the consumption of a known state on the node, then invokes a subflow in response. The high-level flow waits on the AsyncListenableFuture as described in the answer referenced above. I created a composite VaultQuery on an attribute of states of contract state types of interest (e.g. custom field X = Y), and converted the returned observable (returned from trackBy.future) to a Quasar-compatible AsyncListenableFuture. When the state is consumed by a transaction created by a flow triggered by the external action, the future returns and the automatic event (in my case the creation of another transaction with another party) is executed.
I'm only experimenting with / evaluating Corda, and I'm not sure how robust this approach would be in production, but it seems to work OK. Hope this helps in some way.
Some form of higher-level workflow flow in Corda, which could wait on external events and conditionally invoke other flows depending on the external action, would be of real interest in my context.
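As a simpler, off-node variant of the same monitoring idea (sidestepping the Quasar-compatible future entirely), an RPC client process can watch the vault and kick off the next flow when the tracked state is consumed. This reuses the hypothetical LoanState above; the port and credentials are assumptions:

```java
import net.corda.client.rpc.CordaRPCClient;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.messaging.DataFeed;
import net.corda.core.node.services.Vault;
import net.corda.core.utilities.NetworkHostAndPort;

// A sketch: unlike the in-flow approach described in this answer, nothing
// here needs to be Quasar-suspendable, because the waiting happens in a
// client process rather than on a flow fiber.
public class LoanWatcher {

    public static void main(String[] args) {
        CordaRPCOps proxy = new CordaRPCClient(
                NetworkHostAndPort.parse("localhost:10006"))
                .start("user1", "test").getProxy();

        DataFeed<Vault.Page<LoanState>, Vault.Update<LoanState>> feed =
                proxy.vaultTrack(LoanState.class);

        feed.getUpdates().subscribe(update -> {
            if (!update.getConsumed().isEmpty()) {
                // A tracked LoanState was consumed (e.g. by the user's
                // approval transaction): trigger the follow-up flow here,
                // e.g. via proxy.startFlowDynamic(...).
            }
        });
    }
}
```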

Adding correlation id to automatically generated telemetry with App Insights

I'm very new to Application Insights, and I'm thinking of using it for a set of services I plan on implementing with ASP.NET Web API. I was able to get the basic telemetry up and running very easily (right-clicking on a project in VS, Add Application Insights), but then I hit a block. I plan to have a correlation id set in the request headers for calls to downstream services, and I would like to tag all the telemetry related to one outside call with the same correlation id.
So far I've found that there is a way to configure a TelemetryInitializer, but if I understood correctly, this is run before I get to access the request, meaning I can't check if there is a correlation id that I should attach.
So I guess there might be two ways to solve this: 1) if I can somehow get access to the request headers before the initializer runs, that would obviously solve the problem, or 2) somehow get hold of the TelemetryClient instance that is used to report the automatically generated telemetry.
Perhaps the last resort would be to turn off all of the automatic telemetry and do all of it manually, in which case I could of course control what properties are set on the TelemetryClient. But this would be quite a lot more work, so I'd prefer to find some other solution.
You were right in saying that you should use a TelemetryInitializer. All TelemetryInitializers are called when the Track method is called on any telemetry item. Auto-generated request telemetry is "tracked" on request OnEnd, so you should have all your custom headers available to you at that time.
Please also have a look at OperationId: this is part of the standard context managed by App Insights and is used exactly for the purpose of correlating requests with downstream execution. It is created and passed automatically, including traces (if you use trackTrace).
Moreover, we have built-in support in our UX for easily seeing all telemetry for a particular operation; it can be found under "Search -> Details -> Related Items -> All telemetry for this operation".
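For illustration, a sketch of a correlation-id initializer using the Application Insights Java SDK (the question is about ASP.NET, where the equivalent is an ITelemetryInitializer with the same shape). The ThreadLocal, header handling and property key are assumptions: your own servlet filter would populate the ThreadLocal from the incoming request before any telemetry is tracked:

```java
import com.microsoft.applicationinsights.extensibility.TelemetryInitializer;
import com.microsoft.applicationinsights.telemetry.Telemetry;

// A sketch under stated assumptions. CORRELATION_ID is a hypothetical
// ThreadLocal that your own request filter sets from the incoming header
// at the start of each request and clears at the end.
public class CorrelationIdInitializer implements TelemetryInitializer {

    public static final ThreadLocal<String> CORRELATION_ID = new ThreadLocal<>();

    @Override
    public void initialize(Telemetry telemetry) {
        String id = CORRELATION_ID.get();
        if (id != null) {
            // Stamped onto every item tracked on this request thread,
            // including the auto-generated request telemetry, which is
            // tracked at request end, after the header is available.
            telemetry.getContext().getProperties().put("correlationId", id);
        }
    }
}
```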
