Vacation workflow - workflow-foundation-4

We are going to develop a long-running workflow for employees to request vacation. The scenario is as follows:
The employee requests a vacation.
The request goes to the direct manager for review.
The direct manager reviews the request. If approved, the request goes to the HR manager; otherwise it is rejected. The manager can also return the request to the employee for modification, in which case the employee modifies the request and forwards it again.
The HR manager reviews the request; if approved, notification emails are sent to the employee and the manager.
The workflow should be persisted and resumable.
I have looked at many online resources but cannot decide where to start.
State machine or flowchart?
This may be a silly question, but can anyone provide any design considerations and a small sample to follow, if available?
Thanks

Since you have various states and you also have moving back and forth from one state to another, I'd pick state machine. Create several states, such as:
InReview
Incomplete
Approved
Then you can create a set of service contracts and expose your activities as a workflow service, and use those as triggers to move between the states. I.e., create a WCF operation contract such as:
void ApproveVacation(...)
Then you would create a transition from the InReview state to Approved, triggered by a service call to the ApproveVacation operation of your workflow/WCF service.
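As a rough sketch, the service contract for those transitions might look like the following (the operation names and parameters beyond ApproveVacation are illustrative, not a definitive design):

```
using System;
using System.ServiceModel;

// Hypothetical contract for the vacation workflow service; each operation
// acts as a trigger for one transition in the state machine.
[ServiceContract]
public interface IVacationRequestService
{
    // Triggers the InReview -> Approved transition (manager approval).
    [OperationContract]
    void ApproveVacation(Guid requestId, string approverId);

    // Triggers the InReview -> Incomplete transition (returned for changes).
    [OperationContract]
    void ReturnForModification(Guid requestId, string comments);

    // Triggers the Incomplete -> InReview transition (employee resubmits).
    [OperationContract]
    void ResubmitVacation(Guid requestId);
}
```

Each receive activity in the state machine would then correlate on something like the request id, so the right persisted workflow instance is resumed when a call arrives.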


.Net core microservices query from many services

I have a .NET 5 microservices project. The client has a search module that needs to query data from many objects, and these objects live in different services:
The first microservice is for products; it has a table product { productid, productname }.
The second microservice is for vendors; it has a table account { vendorId, vendorName }.
The third microservice is for purchases; it has a table purchase { title, productid (this id comes from the product table in the first microservice), accountid (this id comes from the account table in the second microservice) }.
Now the user wants to search for purchases where the product name is like "clothes" and the vendor name is like "x".
How can I do this query with the microservices pattern?
Taking a monolithic system and slapping service interfaces on top of each table will just make your life hell, as you will end up implementing the database logic yourself - like in this question, where you are trying to recreate database joins.
Putting that aside, your partitioning into services doesn't sound right (or needed) for this case. Considering that purchases describe events that happened (even if they are later deleted), they can capture the state of the product and the vendor. So you can augment the data in each purchase to represent the product and vendor data as they were at the time of the purchase. This isolates purchase queries to the purchase service, and has the added benefit of preserving history as products and vendors evolve over time.
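For illustration, a denormalized purchase record along those lines might look like this (names are made up to match the tables in the question):

```
using System;

// Sketch of a denormalized purchase record. Product and vendor details
// are copied in at purchase time, so purchase queries never need to join
// across service boundaries.
public class Purchase
{
    public Guid PurchaseId { get; set; }
    public string Title { get; set; }

    // Snapshot of the product as it was when the purchase happened.
    public Guid ProductId { get; set; }
    public string ProductName { get; set; }

    // Snapshot of the vendor as it was when the purchase happened.
    public Guid VendorId { get; set; }
    public string VendorName { get; set; }
}
```

With this shape, the search for product name like "clothes" and vendor name like "x" becomes a single query against the purchase service's own store.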
For this case you would have to build at least three queue managers in your message-queue integration: one each for purchases, vendors and products.
Then set up the request and reply queues.
The request and reply payloads can be in either JSON or XML, whichever you like.
You need to create a listener responsible for listening to the reply queues; you can use something like SignalR streaming to listen continuously.
Once all the integration is complete, you can inject the results directly into your client application.
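A rough sketch of that request/reply composition, assuming a hypothetical IRequestReplyClient wrapper over whatever queue product you use (all names here are illustrative):

```
using System.Threading.Tasks;

// Hypothetical abstraction over the request/reply queues; a real
// implementation would sit on top of your message-queue product.
public interface IRequestReplyClient
{
    Task<string> RequestAsync(string queueName, string jsonPayload);
}

public class PurchaseSearch
{
    private readonly IRequestReplyClient _client;
    public PurchaseSearch(IRequestReplyClient client) => _client = client;

    // Scatter requests to the product and vendor queues in parallel,
    // then combine the replies with the purchase data (an API-composition
    // style join performed by the caller rather than the database).
    public async Task<string> SearchAsync(string productName, string vendorName)
    {
        var productTask = _client.RequestAsync("product.request",
            $"{{ \"productName\": \"{productName}\" }}");
        var vendorTask = _client.RequestAsync("vendor.request",
            $"{{ \"vendorName\": \"{vendorName}\" }}");
        await Task.WhenAll(productTask, vendorTask);

        // Use the returned product/vendor ids to filter the purchases.
        return await _client.RequestAsync("purchase.request",
            $"{{ \"productIds\": {productTask.Result}, \"accountIds\": {vendorTask.Result} }}");
    }
}
```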

Handle RPC and inter Node communication timeout

I have a Corda private network with 5 nodes and 1 notary node, and a web client with a UI and RESTful services.
There are quite a few states, and their attributes are managed by users through the UI. I need to understand how to handle timeouts and avoid duplicate updates or errors.
Scenario 1
The user is viewing a specific unconsumed current state of a feature.
The user performs an edit and updates the state.
Upon receiving the request, the RESTful component uses the Corda RPCClient to start the flow, setting a timeout value of, e.g., 2 seconds.
The Corda flow runs the configured rules and has to collect signatures from all the participating nodes (4). Complete processing takes more than 2 seconds (some file processing, multiple state updates, etc.). I can raise the timeout for specific use cases, but this can still happen at any time, so I need to understand the recommended way of handling it.
As the time taken exceeds the timeout, the Corda RPCClient throws an exception. From the RESTful service's and user's point of view, the transaction has failed.
Behind the scenes, however, Corda keeps processing, collecting signatures and updating nodes. From Corda's perspective everything looks fine, and the change set is committed to the ledger.
Question:
Is there a way to know that a submitted transaction is in progress, so the RESTful service can wait for it?
If the user submits again, we check whether the transaction hash is the latest one associated with the unconsumed state and reject the request if not (the hash was provided to the UI while querying).
Is there a recommended way of handling this?
Scenario 2
The user is viewing a specific unconsumed current state of a feature.
The user performs an edit and updates the state.
Upon receiving the request, the RESTful component uses the Corda RPCClient to start the flow, setting a timeout value of, e.g., 2 seconds.
The Corda flow runs the configured rules and has to collect signatures from all the participating nodes (4). One of the nodes is down or unreachable, so the flow hangs, waiting for the node to become live again.
The RESTful service / UI receives a timeout exception. The user refreshes the view and submits the change again. Querying the current node returns the old data, so the user makes the change again and submits. The same thing happens at the Corda layer: the transaction is against the latest unconsumed state (comparing the tx hash, since the new state was never committed), so it proceeds further and again hangs, waiting for the node to come back. I have waited for over a minute and it did not stop trying.
Now the node comes back up and syncs with its peers. The notary throws an exception, because there are two states / requests pending to form the next state in the chain. The transaction fails.
Question:
Is there a way to know that a submitted transaction is in progress, so the RESTful service can wait for it?
Is there a recommended way of handling this?
Is there a way to set timeout values for node-to-node communication?
Do I need to keep monitoring whether each node is active, and tailor the user experience accordingly?
I appreciate all help and support with the above issues. Please let me know if any additional information is needed.
Timeouts
As of Corda 3.3, there is no way to set a timeout either on a Corda RPC request, a flow, or a message to another node. If another node is down when you try to contact it as part of a flow, the message will simply remain in your outbound message queue until it can be successfully delivered.
Checking flow progress
Each flow has a unique run ID. When you start a flow via RPC (e.g. using CordaRPCOps.startFlowDynamic), you get back a FlowHandle. The flow's unique run ID is then available via FlowHandle.id. Once you have this ID, you can check whether the flow is still in progress by checking whether it is still present in the list of current state machines (i.e. flows):
val flowInProgress = flowHandle.id in cordaRPCOps.stateMachinesSnapshot().map { it.id }
You can also monitor the state machine manager to wait until the flow completes, then get its result:
val flowUpdates = cordaRPCOps.stateMachinesFeed().updates
flowUpdates.subscribe {
    if (it.id == flowHandle.id && it is StateMachineUpdate.Removed) {
        val result = it.result.getOrThrow()
        // Handle result.
    }
}
Handling duplicate requests
The flow will throw an exception if you try to consume the same state twice, either when you query the vault to retrieve the state or when you try to notarise the transaction. I'd suggest letting the user start the flow again, then handling any double-spend errors appropriately and reflecting them on the front end (e.g. via an error message and an automatic refresh).

Access BizTalk Orchestration Instances from database

Can I access the persisted data of running orchestration instances from the BizTalk database?
My BizTalk application deals with long-running processes, and hundreds of orchestration instances can be running at a time.
I would like to access the data persisted by these orchestration instances and display it in my application's UI. The data would give insight into how many instances are running and what state each of them is in.
EDIT:
Let me try to be a little more specific.
My BizTalk application gets ticket requests (messages) from a source, and after checking some business rules they are assigned to different departments of the company. The tickets can hop between the inboxes of different departments as each department completes its processing.
Now, the BizTalk orchestration instances maintain the information about which department owns a particular ticket at any given time. I want to read this orchestration information and generate an inbox for each department at runtime. I could certainly do this by pushing the information to a separate database and populating the UI from there, BUT since all this useful information is already available in the form of orchestration instances, I would like to use it and avoid any syncing issues.
Does that make sense?
The answer to your specific question is NO.
BAM exists for this purpose exactly.
Yes, it is doable, though your question is a little confusing. You can't get the data that is persisted for your orchestration instance; however, you can get the number of running or dehydrated instances using various options such as WMI or the ExplorerOM library. As a starting point, look at the samples provided with the BizTalk installation under the SDK\Samples\Admin folder. You should also look at the MSBTS_ServiceInstance WMI class to get the service instances. There is a sample at http://msdn.microsoft.com/en-us/library/aa561219.aspx. You can also use PowerShell to perform the same operation.
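As a sketch of the WMI route, something like the following should enumerate service instances via MSBTS_ServiceInstance (check the WMI namespace and property names against the BizTalk WMI documentation for your version before relying on them):

```
using System;
using System.Management; // add a reference to System.Management.dll

// Enumerates BizTalk service (orchestration) instances through WMI.
public static class OrchestrationMonitor
{
    public static int CountInstances()
    {
        var searcher = new ManagementObjectSearcher(
            @"root\MicrosoftBizTalkServer",
            "SELECT * FROM MSBTS_ServiceInstance");

        int count = 0;
        foreach (ManagementObject instance in searcher.Get())
        {
            // InstanceID and ServiceStatus are documented properties of
            // MSBTS_ServiceInstance; ServiceStatus distinguishes running,
            // suspended and dehydrated instances.
            Console.WriteLine("Instance {0}, status {1}",
                instance["InstanceID"], instance["ServiceStatus"]);
            count++;
        }
        return count;
    }
}
```

Note this gives you instance counts and statuses, not the business data inside each orchestration - which is exactly why BAM (or a separate database, as you suggested) is the right place for the per-ticket information.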

Pattern for long running tasks invoked through ASP.NET

I need to invoke a long-running task from an ASP.NET page and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution.
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (Workflow Foundation) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be given on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process using MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls, or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement - we needed to know if a message was not received immediately so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
The article I linked to above is a good place to start.
Our current design, the one we went live with, does exactly what you asked about, using a Windows service.
We have a web page to enter messages and pick distribution lists - these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener)
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received the message. (Critical business requirement for us because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web app could simply poll the database at regular intervals and update a progress bar or otherwise tick off tasks as they're completed - whatever is appropriate in the UI.
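For example, the status record could be as simple as this (table and member names here are invented for illustration):

```
// The import process updates a row shaped like this as it works through
// the files; the AJAX page polls it and renders the progress bar.
public class ImportStatus
{
    public int JobId { get; set; }
    public int FilesProcessed { get; set; }
    public int TotalFiles { get; set; }
    public string LastError { get; set; }   // surfaced to the user, if any

    public int PercentComplete =>
        TotalFiles == 0 ? 0 : FilesProcessed * 100 / TotalFiles;
}
```

Because the status lives in the database, the queue worker (MSMQ consumer, workflow service or Windows service) and the web app stay completely decoupled.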

ASP.NET MVC + SQL Server Application: Best Way to Send Out Event-Driven E-mail Notifications

I have an ASP.NET MVC application that uses NHibernate and SQL Server 2008 on the backend. There are requirements for sending out both event-driven notifications on a daily/weekly basis AND general notifications on a weekly basis.
Here is an example of how the event-driven workflow needs to work:
Employee(s) create a number of purchase orders.
An e-mail notification is sent daily to any supervisor with a subordinate employee that has made a purchase order with a list of all purchase orders created by subordinates that require his approval. The supervisor should only get this once (e.g. if Employee A creates a PO, his supervisor should not get an e-mail EVERY DAY until he approves). Also, the list of purchase orders should ONLY include those which the supervisor has NOT taken action against. If no purchase orders need approval by a given supervisor ... they should not get an e-mail.
An e-mail notification is sent daily to Dept. Managers with a list of all purchase orders APPROVED by subordinate supervisors in a similar fashion to #2 above.
Any time any action is taken with regards to approving a PO by a supervisor or dept. manager, the employee should get an e-mail notification daily listing ALL such changes. If there are none for a given employee, they should not get an e-mail at all.
So given such a workflow:
What is the best way to schedule such notifications to happen daily, weekly or even immediately after an event occurs?
How would you ensure that such event-driven notifications ONLY get delivered once?
How would you handle exceptions to ensure that failed attempts to send e-mail are logged and so that an attempt could be made to send the following day?
Thanks!
I would have all your emails, notifications, etc. saved to a DB table first, and then have a service that polls for new entries in that table and handles the actual sending of the email/notification.
To address your specific situations, you could have controllers write to the DB whenever an email/notification is required, and have a service that performs your interval- or event-specific checks also write to the DB to create new emails. This way your application and service don't really care about how these notifications are handled; they just say, "Hey, do something," and the emailer/notification service does the actual implementation.
The advantage is that if your email provider is down you don't lose any emails, and you have a history of all emails sent, with details of when, to whom, etc. You can also rip out or change the emailer to do more, like sending to Twitter or phone text messages. This effectively decouples your notifications from your application.
All the applications I have made recently use this model, and it has stopped emails from being lost due to service failures and other reasons. It has also made it possible to look up all emails that have gone through the system and gather metrics that let me optimize the need to send emails, by storing extra information in the email record such as the reason it was sent, whether it reports an error, etc. Additions such as routing notifications (e.g. to a text message instead of email) based on time of day or user have been possible with no changes to the primary application.
Your customer might think all they need is email today, but you should make sure your solution is flexible enough to allow for more than just email in the future with only minor tweaks.
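A minimal sketch of that store-then-send pattern (all type and member names are illustrative, and the actual load/deliver steps are injected so the example stays storage- and SMTP-agnostic):

```
using System;
using System.Collections.Generic;

// Controllers insert an EmailMessage row; a separate service polls for
// unsent rows and performs the actual delivery.
public class EmailMessage
{
    public int Id { get; set; }
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
    public string Reason { get; set; }      // why it was sent, for metrics
    public DateTime? SentAt { get; set; }   // null until delivered
    public int Attempts { get; set; }
}

public class EmailSenderService
{
    private readonly Func<IEnumerable<EmailMessage>> _loadUnsent;
    private readonly Action<EmailMessage> _deliver;

    public EmailSenderService(Func<IEnumerable<EmailMessage>> loadUnsent,
                              Action<EmailMessage> deliver)
    {
        _loadUnsent = loadUnsent;
        _deliver = deliver;
    }

    // One polling pass: marking SentAt on success guarantees each
    // notification goes out only once; failures stay queued with an
    // incremented attempt count, so they are retried on the next pass.
    public void ProcessPending()
    {
        foreach (var email in _loadUnsent())
        {
            try
            {
                _deliver(email);
                email.SentAt = DateTime.UtcNow;
            }
            catch (Exception)
            {
                email.Attempts++; // log and retry on the next pass
            }
        }
    }
}
```

This directly answers the three questions above: the schedule lives in whatever triggers ProcessPending, the SentAt marker enforces once-only delivery, and the Attempts counter records failures for retry the following day.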
You can add a normal action to a controller:
Function SendEmails() As ActionResult
    Dim result As String = ""
    'Big timeout to handle the load
    HttpContext.Server.ScriptTimeout = 60 * 10 'ten minutes
    result = DoTheActualWork()
    'Returns text/plain
    Return Content(result)
End Function
And then call the page from a scheduled task. It can be a scheduled task on the server or on any machine. Use a .vbs file for this:
SendEmails.vbs:
'Continue past errors so the script always finishes.
On Error Resume Next
'Declare variables
Dim objRequest
Dim URL
Set objRequest = CreateObject("Microsoft.XMLHTTP")
'Put together the URL link, appending the variables.
URL = "http://www.mysite.com/system/sendemails"
'Open the HTTP request and pass the URL to the objRequest object
objRequest.open "POST", URL, False
'Send the HTTP request
objRequest.Send
'Set the object to nothing
Set objRequest = Nothing
You can host a Windows Workflow or a Windows service, and set up a message queue to process these events. You might just use the database as your message queue, or you could use MSMQ, or use triggers in the database. But this kind of functionality really shouldn't be a responsibility of your front-end web app. If push comes to shove, you could spawn another thread in your ASP.NET application to process the queue.
This can be done using SQL Server Agent; read more about it here:
http://msdn.microsoft.com/en-us/library/ms189237.aspx
Sounds like a job for a service or a scheduled job.
You don't want to do it in ASP.NET, because you'd have to configure IIS to keep your app alive all the time, which may not be the best idea.
A scheduled task is fine, but you'll have to program it with all the logic for parsing out your data - not the best for separation of concerns. You'll also have to update two code bases if something changes.
A service isn't ideal either, as it would only truly be doing something once a day. But you could set up a WCF service and have the website queue up emails through it.
