Consider the following scenario:
SCO A is launched,
SCO A sets its cmi.exit = suspend and then triggers a Continue navigation request,
SCO B is launched,
SCO B triggers a Suspend All navigation request and the learner returns to the LMS,
The learner triggers a Resume All navigation request,
SCO B is launched again,
SCO B triggers a Previous navigation request,
SCO A is launched.
Question: Now, should the Tracking Model and Run-Time Environment Data Model of SCO A be set to default or not?
Tracking data for both SCO A and SCO B is saved. In fact, Suspend All saves tracking data for all SCOs in the course. On resuming, all tracking data for all SCOs is restored. Think of Suspend All as taking a snapshot of the whole course at the moment the learner returns to the LMS.
I'm developing an HttpModule to capture logging and performance information for all requests to our web application. I'm tapping into the PreRequestHandlerExecute and PostRequestHandlerExecute events.
When my code executes within PostRequestHandlerExecute, is that code guaranteed to be executing within the same request context as the code which ran earlier on in PreRequestHandlerExecute? In other words, can I preserve state when handling the first event to be used during the execution of the second event?
Or is it possible for an HttpModule instance to service multiple concurrent requests, in which case any state necessary to connect the processing of these two events would need to be stored (and then fetched from) within the current HttpContext?
(I understand, of course, that the HttpModule will be used to service multiple requests sequentially and that request-specific state from an earlier request will be irrelevant to a subsequent request. My question is specifically whether a single HttpModule can be used to handle multiple concurrent requests, or whether I can rely on the HttpModule being dedicated to a single request for the entire lifecycle of that request.)
Thanks for your advice!
As per the literature, and borne out by my own implementation, an HttpModule instance is indeed dedicated to a single HTTP transaction for the complete request/response lifecycle.
https://learn.microsoft.com/en-us/dotnet/api/system.web.httpapplication?source=recommendations&view=netframework-4.8#remarks
https://learn.microsoft.com/en-us/previous-versions/bb470252(v=vs.140)?redirectedfrom=MSDN
State can safely be accumulated in one event for use in subsequent events, because all events the HttpModule instance receives will belong to the same request/response transaction.
(Naturally, any accumulated state that is transaction-specific will need to be reinitialized for the next transaction, since an HttpModule instance may be reused for a subsequent transaction once the current one has completed.)
I have furthermore observed that an HttpModule instance may execute on a different thread than the one on which it was originally created or previously executed.
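To make that concrete, here's a minimal sketch of a timing module along those lines (the module name and log output are just illustrative, not from the question). Per-request state lives in an instance field, which is safe given the per-request instancing described above; if you'd rather not rely on that, HttpContext.Items works equally well.

using System.Diagnostics;
using System.Web;

public class TimingModule : IHttpModule
{
    // Per-request state: safe as an instance field because one module
    // instance services one request at a time for that request's lifecycle.
    private Stopwatch stopwatch;

    public void Init(HttpApplication application)
    {
        application.PreRequestHandlerExecute += (sender, e) =>
        {
            stopwatch = Stopwatch.StartNew();
        };

        application.PostRequestHandlerExecute += (sender, e) =>
        {
            stopwatch.Stop();
            var context = ((HttpApplication)sender).Context;
            Trace.WriteLine(context.Request.Path + " handled in "
                + stopwatch.ElapsedMilliseconds + " ms");
        };
    }

    public void Dispose() { }
}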
I'm designing a system with multiple bounded contexts (microservices). I will have 2 kinds of events.
Domain Events, which happen "in memory" within a single transaction (sync)
Integration Events, which are used between bounded contexts (async)
My problem is how to make sure that, once the transaction is committed (at which point I know all Domain Events were processed successfully), the Integration Events succeed as well.
When my transaction is committed, I would normally dispatch the Integration Events (e.g. to the queue), but there is a possibility that the queue is down, in which case the just-committed transaction would have to be "reverted". How?
The only solution that comes to my mind is to store the Integration Events in the same DB, within the same transaction, and then process the Integration Event records and push them to the queue. This would be something like using the current DB as a pre-queue before pushing to the real queue (however, I have read that using the DB for this is an anti-pattern).
Is there any pattern (a reliable approach) to make the transaction commit and the message push to the queue a single atomic operation?
EDIT
After reading https://devblogs.microsoft.com/cesardelatorre/domain-events-vs-integration-events-in-domain-driven-design-and-microservices-architectures/, I see the author actually suggests this "pre-queue" approach in the same DB (he calls it “ready to publish the event”).
Check out the transactional outbox pattern.
This pattern does create a pre-queue, but the nice part is that pushing messages from the pre-queue to the real queue is fully decoupled. Instead, a middleman called a message relay reads your outbox (for example, by polling the table or tailing the transaction log) and pushes your events to the real queue. And since publishing the messages and your domain events are now fully decoupled, you can do all your domain events in a single transaction.
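As a rough sketch of the write side (the Orders and Outbox tables and the OrderPlaced event here are hypothetical, purely for illustration), the domain change and the outbox row are committed atomically:

using System;
using System.Data.SqlClient;

public static class OrderService
{
    public static void PlaceOrder(SqlConnection connection,
                                  string orderJson, string orderPlacedEventJson)
    {
        using (var tx = connection.BeginTransaction())
        {
            // 1. Apply the domain change.
            using (var insertOrder = new SqlCommand(
                "INSERT INTO Orders (Payload) VALUES (@p)", connection, tx))
            {
                insertOrder.Parameters.AddWithValue("@p", orderJson);
                insertOrder.ExecuteNonQuery();
            }

            // 2. Record the integration event in the same transaction.
            using (var insertEvent = new SqlCommand(
                "INSERT INTO Outbox (Type, Payload, OccurredAt) VALUES (@t, @p, @ts)",
                connection, tx))
            {
                insertEvent.Parameters.AddWithValue("@t", "OrderPlaced");
                insertEvent.Parameters.AddWithValue("@p", orderPlacedEventJson);
                insertEvent.Parameters.AddWithValue("@ts", DateTime.UtcNow);
                insertEvent.ExecuteNonQuery();
            }

            tx.Commit(); // both rows commit, or neither does
        }
        // A separate message relay polls Outbox, pushes each row to the real
        // queue, and marks it published only after the broker acknowledges.
    }
}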
And make sure that all your services are idempotent (same result despite duplicate calls). The transactional outbox pattern guarantees that messages are published, but if the message relay fails just after publishing (before acknowledging), it will publish the same event again.
Idempotent services are also necessary in other scenarios, as the event bus (the real queue) can have the same issue: the event bus propagates an event, the service processes it, then a network error swallows the acknowledgment, and since the event bus never receives the ack, the same event is sent again.
Actually, idempotence alone could solve the whole issue. After the domain-event computation completes (in a single transaction), if publishing the message fails, the service can simply throw an error without rolling back. Since the event is not acknowledged, the event bus will send the same event again. And since the service is idempotent, the same database transaction will not happen twice: it will overwrite, or better, skip straight to publishing and acknowledging the message.
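On the consumer side, here is a minimal sketch of that idempotency check, assuming a hypothetical ProcessedMessages table with a primary key on MessageId: record each message id as you process it, and treat a primary key violation as a duplicate delivery.

using System;
using System.Data.SqlClient;

public static class IdempotencyGuard
{
    // Returns true on first delivery (apply the event, then commit and ack);
    // false on a redelivery (skip straight to the ack).
    public static bool TryMarkProcessed(SqlConnection connection,
                                        SqlTransaction tx, Guid messageId)
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO ProcessedMessages (MessageId) VALUES (@id)",
            connection, tx))
        {
            cmd.Parameters.AddWithValue("@id", messageId);
            try
            {
                cmd.ExecuteNonQuery();
                return true;
            }
            catch (SqlException ex) when (ex.Number == 2627) // PK violation
            {
                return false;
            }
        }
    }
}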
I am trying to implement the following use-case in Corda:
FlowA has been invoked on PartyA via startFlowDynamic. FlowA creates a partially signed transaction and invokes FlowB on PartyB via sendAndReceive. A human user shall now review and manually approve this transaction. Ideally FlowB should suspend after receiving the transaction. I would like to be able to query for suspended instances of FlowB via RPC, and present those (or rather some representation of the transaction therein) to the user in my UI. Then, after the user gives his approval, I would like to resume FlowB via RPC, which would then sign the transaction and return it to FlowA on PartyA.
I noticed that I can inspect suspended flows to some degree via CordaRPCOps.stateMachineAndUpdates, and I read the tutorial on progress tracking, but it doesn't quite suffice for my case. I also read that interacting with people from flows is listed as a future feature; I just wondered if there isn't already some way to accomplish this?
See the Negotiation CorDapp sample for an example of how this would work in practice.
Corda doesn't currently support suspending a flow for user interaction.
However, you can support this kind of workflow as follows. Suppose you're writing a CorDapp for loan applications. You could have an initial flow that agrees the creation of a loanApplication state between two parties. From there, the approver can inspect the loan application, and either kick off an approve flow that creates a transaction to transform the loanApplication into an approvedLoan state, or kick off a reject flow to consume the loanApplication state without issuing an approvedLoan state.
Equally, you could add a status field to the loan state, specifying whether the loan is approved or not. Initially, the loan state would have the field set to unapproved. Then the approver could kick off one of two flows to update the loan state, to either have an approved or a rejected status.
I'm not sure if this is a "recommended approach", but I implemented a Quasar-compatible AsyncListenableFuture in my flow, as someone else has described elsewhere.
I needed to suspend a flow and wait for the production of a state from another flow (in response to a user interaction). It seems to work, but I suspect it could be regarded as rather off-piste(?!).
Splitting the activities into atomic flows invoked directly by UI interaction is fine, but I needed a sort of "monitoring" flow that waits for an external (e.g. user) event before determining which subflow to initiate next, and this needed to happen automatically, from within a flow already invoked prior to the user interaction. The flow logic is then conditional on a state change, which may arise from a user interaction or from an incoming transaction from another node. In my case, this high-level monitoring flow detects the consumption of a known state on the node, then invokes a subflow in response. The high-level flow waits on the AsyncListenableFuture as described in the answer referenced above: I created a composite VaultQuery on an attribute of the contract state types of interest (e.g. custom field X = Y) and converted the observable returned from trackBy.future into a Quasar-compatible AsyncListenableFuture. When the state is consumed by a transaction created by a flow triggered by the external action, the future completes and the automatic event (in my case, the creation of another transaction with another party) is executed.
I'm only experimenting with and evaluating Corda, so I'm not sure how robust this approach would be in production, but it seems to work OK. Hope this helps in some way.
Some form of higher-level workflow flow in Corda, one that could wait on external events and conditionally invoke other flows depending on the external action, would be of real interest in my context.
How can we determine the status of a job a servlet is doing?
Suppose a servlet creates a Job object and calls its do() method, which takes 2 minutes to complete, and we want to show users on the front-end how much of the task has been completed so far.
You'll have to
put the job in a session attribute, for example, or in some other shared map where you can find it in a subsequent request
start a new thread which invokes the job's long-running method
make sure every access to the job's status is properly synchronized
poll the server from your HTML page (by refreshing the page or submitting AJAX requests)
Each polling request will just get the job from the session (or the shared map) and read its status, as in the sketch below.
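The question is about servlets, but the pattern itself is framework-agnostic; here is the same idea sketched in C# (the Job class and its progress field are illustrative, not from the question): a job with a synchronized status, started on a background thread and read by later polling requests.

using System;
using System.Threading;

public class Job
{
    private readonly object gate = new object();
    private int percentComplete;

    // Synchronized access, since the worker thread writes the status
    // while polling requests read it.
    public int PercentComplete
    {
        get { lock (gate) { return percentComplete; } }
        private set { lock (gate) { percentComplete = value; } }
    }

    public void Do()
    {
        for (int i = 1; i <= 100; i++)
        {
            Thread.Sleep(1200);   // stand-in for ~2 minutes of real work
            PercentComplete = i;
        }
    }
}

// In the request that starts the job:
//   var job = new Job();
//   Session["job"] = job;          // the "session attribute" / shared map
//   new Thread(job.Do).Start();    // don't block the request
//
// In each polling request (page refresh or AJAX call):
//   var job = (Job)Session["job"];
//   respond with job.PercentComplete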
Can anyone explain what the difference is between the following methods of WorkflowApplication:
Abort
Cancel
Terminate
After further investigating this issue, I want to summarize the differences:
Terminate:
the Completed event of the workflow application will be triggered
the CompletionState (WorkflowApplicationCompletedEventArgs) is Faulted
the Unloaded event of the workflow application will be triggered
the workflow completes
OnBodyCompleted on the activity will be called
Cancel:
the Completed event of the workflow application will be triggered
the CompletionState (WorkflowApplicationCompletedEventArgs) is Canceled
the Unloaded event of the workflow application will be triggered
the workflow completes
OnBodyCompleted on the activity will be called
Abort:
the Aborted event of the workflow application will be triggered
the workflow does not complete
An unhandled exception
triggers OnUnhandledException
in this event handler, the return value (of type UnhandledExceptionAction) determines what will happen next:
UnhandledExceptionAction.Terminate will terminate the workflow instance
UnhandledExceptionAction.Cancel will cancel the workflow instance
UnhandledExceptionAction.Abort will abort the workflow instance
Each will trigger the corresponding events explained above
Update: Abort does not seem to trigger unloading of the instance from the SQL persistence store. So it seems to me you'd better use Cancel or Terminate, and if you have to perform some action based upon the completion status, you can check the CompletionState in the Completed event.
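A small harness like this (a sketch; the long Delay just keeps the workflow busy so the host calls have something to interrupt) makes those differences easy to observe:

using System;
using System.Activities;
using System.Activities.Statements;

class Program
{
    static void Main()
    {
        var app = new WorkflowApplication(
            new Delay { Duration = TimeSpan.FromMinutes(5) });

        app.Completed = e =>
            // Terminate -> Faulted, Cancel -> Canceled
            Console.WriteLine("Completed, state: " + e.CompletionState);

        app.Unloaded = e => Console.WriteLine("Unloaded");

        app.Aborted = e => Console.WriteLine("Aborted: " + e.Reason);

        app.OnUnhandledException = e =>
            UnhandledExceptionAction.Terminate; // or Cancel / Abort

        app.Run();

        app.Terminate("terminated by host"); // Completed (Faulted), then Unloaded
        // app.Cancel();                     // Completed (Canceled), then Unloaded
        // app.Abort("aborted by host");     // Aborted only, workflow never completes

        Console.ReadLine(); // the callbacks arrive on worker threads
    }
}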
First, hats off to Steffen Opel (and his comments below). I failed to catch that my original post linked documentation that was WF 3.5 specific. Did a little more digging around.
For posterity's sake, I have left my previous response below, labelled as WF3.5. Please see WF4.0 for a few notes regarding Canceling, Abort, and Terminate in WF4.0.
WF4.0
Unfortunately, there is little explicit documentation discussing differences in Cancel, Abort, and Terminate in WF4.0. However, from member method documentation,
On Abort, a) activity is immediately halted, b) Aborted handler is invoked, and c) Completed handler is not invoked.
On Cancel, a) activity is given a grace period to stop gracefully after which a TimeoutException is thrown, b) Completed handler is invoked.
On Terminate, a) activity is given a grace period to stop gracefully after which a TimeoutException is thrown, b) Completed handler is invoked.
The difference between Abort and Cancel/Terminate is quite striking. Simply call Abort to kill a Workflow outright. The difference between Cancel and Terminate is more nuanced: Cancel does not require any sort of reason (it is a void, parameterless method), whereas Terminate requires a reason (in either string or Exception form).
In all cases, the Workflow runtime will not perform any implicit action on your behalf (i.e. workflows will not self-destruct à la WF3.5 Terminate). However, with the highly customizable exception/event handling exposed by the runtime, any such feature may be implemented with relative ease.
WF3.5
Canceling
According to the MSDN documentation:
An activity is put into the Canceling state by a parent activity explicitly, or because an exception was thrown during the execution of that activity.
While Canceling may be used to stop an entire Workflow (i.e. when invoked on the root Activity), it is typically used to stop discrete portions of a Workflow (i.e. either as error recovery or as an explicit action on the part of the parent). In short, Canceling is a means of Workflow control flow.
Abort and Terminate
Again, according to the MSDN documentation:
Abort is different from Terminate in that while Abort simply clears the in-memory workflow instance and can be restarted from the last persistence point, Terminate clears the in-memory workflow instance and informs the persistence service that the instance has been cleared from memory. For the SqlWorkflowPersistenceService, this means that all state information for that workflow instance is deleted from the database upon termination. You will not be able to reload the workflow instance from a previously stored persistence point.
Which is pretty clear in itself. Abort merely stops in-memory execution whereas Terminate stops in-memory execution and destroys any persisted state.