I have created a simple order manager WF service (state machine) in WF4.
Order (EF entity) properties: Id, IsExport, NumOfProduct, ProductName, Status (waiting, approved, rejected).
State machine states:
1. OrderReceived (validation -> response activity)
2. Waiting (empty)
- Transitions:
update(update order activity) -> waiting state
approve(assign status field, update order and response activities) -> final state
3. Final state.
Correlation key: Order.Id
The implementation raised a few questions.
WF manages a single flow per order instance; the order flow and the order entity are in a one-to-one relationship.
The question is where and how I should implement the listing of entities according to a state filter (e.g. approved orders or waiting orders). The list should be accessible via a WCF service method.
What is the best practice for managing batch data processing? (E.g. approving multiple orders; a "foreach" in the client is not the desired solution.)
The state of the order is represented both by the persisted state activity instances and by the entity's Status field in the database.
What is the best practice for determining the state of an entity: listing the active persisted activity instances that are in the given state, or selecting the entities from the database (via an activity) according to a state filter parameter?
Any help would be appreciated.
Good questions!
Taking your first and third questions, there are several possible approaches to this. All require that you write a custom WCF service to enumerate the required orders. This would probably not be a WF service; it might be a REST or OData service. How would you implement the service?
You could do it entirely by querying your database through EF. This would have no dependency on WF at all, and is probably the easiest way. Your workflow would update the database record on each state change, and the service would only need to read that value.
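For example, a read-only listing service along these lines would do; OrdersContext, OrderSummary and the property names are only illustrative guesses at your model, not something prescribed by WF:

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

// Read-only listing service with no WF dependency: it simply queries the
// same table that the workflow updates on every state change.
[ServiceContract]
public interface IOrderQueryService
{
    [OperationContract]
    IList<OrderSummary> GetOrdersByStatus(string status);
}

public class OrderQueryService : IOrderQueryService
{
    public IList<OrderSummary> GetOrdersByStatus(string status)
    {
        using (var db = new OrdersContext())            // hypothetical EF context
        {
            return db.Orders
                     .Where(o => o.Status == status)    // e.g. "waiting" or "approved"
                     .Select(o => new OrderSummary
                     {
                         Id = o.Id,
                         ProductName = o.ProductName,
                         Status = o.Status
                     })
                     .ToList();
        }
    }
}

// Lightweight shape returned to callers instead of the EF entity itself.
public class OrderSummary
{
    public int Id { get; set; }
    public string ProductName { get; set; }
    public string Status { get; set; }
}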
You could rely on the tracking mechanism provided by WF, and the extensions that Ron Jacobs refers to in his answer to your question. The tracking infrastructure is described on MSDN. It is possible to use the tracking objects in memory to get the state of active workflows. However, this probably won't work well with IIS/WF services, which are automatically persisted and unloaded when dormant. You would be better off using the tracking facilities to write state records to a database. Your custom service would then just query this tracking database.
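If you do take the tracking route, a custom tracking participant is the usual extension point. A rough sketch, where SaveRecord stands in for whatever data access you would actually use:

using System;
using System.Activities.Tracking;

// Writes workflow state changes to your own tracking table so that a plain
// database query can answer "which orders are in state X" later on.
public class StateTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        var activityRecord = record as ActivityStateRecord;
        if (activityRecord != null)
        {
            // Name of the activity (e.g. your State activity) and its state.
            SaveRecord(record.InstanceId, activityRecord.Activity.Name, activityRecord.State);
            return;
        }

        var instanceRecord = record as WorkflowInstanceRecord;
        if (instanceRecord != null)
        {
            // Started / Completed / Aborted etc. for the whole instance.
            SaveRecord(record.InstanceId, "WorkflowInstance", instanceRecord.State);
        }
    }

    private void SaveRecord(Guid instanceId, string name, string state)
    {
        // Placeholder: insert into your tracking table here (EF, ADO.NET, ...).
    }
}

You would add the participant through WorkflowServiceHost.WorkflowExtensions (or wire it up via a service behavior) and use a tracking profile to limit which records are emitted.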
Unless you want comprehensive information about the state changes and updates that have occurred through your WF service, suggestion number one should suffice.
As for your second question, that is a little more complicated. Let's say you write a REST service that lists the orders awaiting approval. You write a Web page that displays those orders, and the user can check the orders he wants to approve. Now, the number of workflows that you need to update is the same as the number of orders he approves.
You could, as you mention, call the Web service multiple times—but for a large number of orders that would be an unnecessary overhead.
What's the alternative? You would need to write a custom service method on your non-WF service that takes an array of order ids. That service would have to call your WF service multiple times to update each one. Since the WF service is being called from another service on the same machine, you can use the .Net Named Pipe binding instead of one of the HTTP bindings so that the overhead is much less.
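A rough sketch of that batching facade, assuming a generated proxy for the WF service (called OrderWorkflowClient here) with an Approve operation correlated on the order id; the names and the net.pipe address are illustrative:

using System.ServiceModel;

[ServiceContract]
public interface IOrderBatchService
{
    [OperationContract]
    void ApproveOrders(int[] orderIds);
}

public class OrderBatchService : IOrderBatchService
{
    public void ApproveOrders(int[] orderIds)
    {
        // Named pipes keep the per-call overhead low when the WF service is
        // hosted on the same machine as this facade.
        var binding = new NetNamedPipeBinding();
        var address = new EndpointAddress("net.pipe://localhost/OrderService.xamlx");

        var client = new OrderWorkflowClient(binding, address);   // hypothetical generated proxy
        try
        {
            foreach (var id in orderIds)
            {
                client.Approve(id);   // one call per workflow instance, correlated on Order.Id
            }
            client.Close();
        }
        catch
        {
            client.Abort();
            throw;
        }
    }
}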
It's worth noting that Entity Framework doesn't support batched updates either. You'd need to write a stored procedure or custom SQL if you wanted the database update to be batched too.
Is all of this worth the effort? Probably! Using WCF and the named pipes binding is pretty standard with WF. You'll need to configure Windows Activation Service for named pipes. Also, if you're not already using AppFabric for Windows Server, have a look into it, because it adds some very good management tools for WF services.
I recently published some new samples to show how you can access the current state of the StateMachine and possible transitions. These might help you.
Windows Workflow Foundation (WF4) - Tracking State Machine Workflow Service
Windows Workflow Foundation (WF4) - Tracking State Machine
I would like to understand an Axon feature.
Currently, we are developing an application using microservice architecture.
We want to store all service events in a central RDBMS database, like for example PostgreSQL.
Is it possible to use such a store?
We have used the configuration below to store events in the same domain DB:
@Bean
public AggregateFactory<UserAggregate> userAggregateFactory() {
    SpringPrototypeAggregateFactory<UserAggregate> aggregateFactory =
            new SpringPrototypeAggregateFactory<>();
    aggregateFactory.setPrototypeBeanName("userAggregate");
    return aggregateFactory;
}
Now we want to store events in a central Event Store DB, not with domain DB.
Firstly, the AggregateFactory within any Axon application does not define where or how your events are stored at all.
I instead suggest reading the Event Bus & Event Store section of the Axon Framework reference guide, which explains how you can achieve this.
The short answer to your question is, by the way, yes: you can have a single Event Store backed by an RDBMS such as PostgreSQL to store all your events in.
Between duplicated instances of a given application, it is actually highly recommended to use the same storage location.
As soon as you are going to span different Bounded Contexts, though, I would suggest defining a separate Event Store per context.
Concluding, you are using an old version of Axon Framework.
I would highly recommend moving to at least the latest Axon 3 release, 3.4.3, but ideally starting to use 4.1.2.
Note that there is no active development taking place on Axon 3 any more, hence the suggestion.
Background
I've been killing some neurons lately with this. I would like to make a multi-tier application for parcel services like UPS and such. Long story short, the backend will be a WCF-based server while the consumer will be an ASP.NET MVC application. The idea is that the backend will handle all business operations (like adding a new shipment, editing existing shipments, carriers and such) but will provide the consumer with data in the form of queries.
And the Issue Is...?
My plan for business operations is that the consumer should pass all the information required to complete the operation (pretty much like a model, i.e. for adding shipments, the consumer would send all the required information for those shipments.) Now, my actual issue is with data querying.
The consumer application should be able to display the backend-provided data in any way it desires, not limited by a DTO. For example, when listing shipments, I only want to show a grid with Name, ID, date shipped and such, not the entire shipment object graph.
How can the consumer application specify the data projection it needs to the WCF endpoint?
Options
I thought about creating several operation methods exposing different DTOs for different purposes, e.g.:
IList<ShipmentDetailsDTO> GetAllShipmentsAsDetailed();
IList<ShipmentListingItemDTO> GetAllShipmentsAsListingItems();
I dropped the idea since the backend would be adapting to the consumer application's needs, which is not good practice. The backend should be agnostic of the consumer.
Another option is to combine WCF data services for querying data and WCF regular services for business operations. This way the MVC application can project the data like a regular LINQ query against the WCF data services. Sounds quite elegant but I would like to hear a second opinion.
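For concreteness, this is roughly the shape I have in mind; the context and property names are just illustrative, not an actual implementation:

using System.Data.Services;
using System.Data.Services.Common;

// Thin query-only endpoint over the EF model; no consumer-specific DTOs.
public class ShippingDataService : DataService<ShippingEntities>   // ShippingEntities = hypothetical EF context
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read access only; business operations stay on the regular WCF service.
        config.SetEntitySetAccessRule("Shipments", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

The MVC application could then shape the data itself with a LINQ projection against the generated client context, for example:

var ctx = new ShippingEntities(new Uri("http://backend/ShippingDataService.svc"));   // generated service reference
var rows = ctx.Shipments
              .Select(s => new { s.Id, s.Name, s.DateShipped })   // only the columns the grid needs
              .ToList();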
Any thoughts? What would you do if you were me? I need an elegant and practical solution for this.
Team:
I need to invoke a WF activity (XAML) from a WF service (XAMLX) asynchronously. I am already referencing the Microsoft.Activities.Extensions framework and I'm running on the Platform Update 1 for the state machine -- so if the solution is already in one of those libraries I'm ready!
Now, I need to invoke that activity (XAML) asynchronously -- but it has an output parameter that needs to set a variable in the service (XAMLX). Can somebody please provide me a solution to this?
Thanks!
* UPDATE *
Now I can post pictures, I think, because I have enough reputation! Let me put a couple out here and try to better explain my problem. The first picture is the WF Service that has the two entry points for the workflow -- the second is the workflow itself.
This workflow is an orchestration mechanism that constantly restarts itself, and has some failover mechanisms (e.g. exit on error threshold and soft exit) so that we can manage our queue of durable transactions using WF!
Now, we had this workflow working great when it was all one WF Service because we could call the service, get a response back and send the value of that response back into another entry point in a trigger to issue a soft exit. However, a new requirement has arisen asking us to make the workflow itself a WF activity in another project and have the Receive/Send-Reply sequences in the WF Service Application project.
However, we need to be able to start up this workflow and forget about it -- then let it know somehow that a soft exit is necessary later on down the road -- but since WF executes on a single thread this has become a bit challenging at best.
Strictly speaking, in XAML activities, Parallel and ParallelForEach are how you perform asynchrony.
The workflow scheduler only uses a single thread (much like a UI), so any activity that is running will typically run on that thread, unless it derives from AsyncCodeActivity, in which case you simply hand the scheduler thread back to the runtime while waiting for a callback from whichever async code your AsyncCodeActivity implementation is calling.
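For reference, a minimal AsyncCodeActivity sketch; DoWork is a placeholder for the real long-running call, and the Result out-argument can be bound to a variable in the XAMLX like any other activity output:

using System;
using System.Activities;

// Frees the workflow scheduler thread while the work runs, then surfaces
// the outcome through the activity's Result out-argument.
public class LongRunningQuery : AsyncCodeActivity<string>
{
    protected override IAsyncResult BeginExecute(
        AsyncCodeActivityContext context, AsyncCallback callback, object state)
    {
        Func<string> work = DoWork;          // placeholder for the real async work
        context.UserState = work;
        return work.BeginInvoke(callback, state);
    }

    protected override string EndExecute(AsyncCodeActivityContext context, IAsyncResult result)
    {
        var work = (Func<string>)context.UserState;
        return work.EndInvoke(result);       // becomes the activity's Result
    }

    private string DoWork()
    {
        return "done";
    }
}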
Therefore, are you sure this is what you want to achieve? Do you mean you want to run it after you have sent your initial response? In that case, place your activity after the Send Reply.
Please provide more info if these suggestions don't answer your question.
Update:
The original requirement posed (separating the implementation from the service Receive/Send activities) may actually be solved by hosting the target activity as a service. See the following link:
http://blog.petegoo.com/index.php/2011/09/02/building-an-enterprise-workflow-system-with-wf4/
I am in the process of updating a website for the third time in 2 years; it looks like this is going to happen all the time, and several websites are using the same DB. I want to use the same code for all of them and keep it easy to update in the future. So I plan on writing some interfaces and then placing the business logic in a service to keep things consistent across the board and add in some unit testing.
So I am looking at my current repositories and I am not sure what should be in my Interface and what should be in my Service.
For example, I have an Add method: a no-brainer, I have an Add in the interface and an Add in the Service.
Then I have an AuthenticateAccountManager method that takes 3 parameters. Should this be in both, or just in the Service, with a simple Get method in my interface (say, by Username) and the validation against the other 2 properties done in the Service?
I also have a QualifyPartner method that sets a bool to true. Should this just be in the Service, again with a simple Get method in my interface, trying to keep that as small as possible?
Following the Separation of Concerns principle, AuthenticateAccountManager is a service-level operation. It should call into your repository, which will return the raw User data. The service then authenticates or not based on what is returned by the repository.
The general guideline is that the repository is responsible for retrieval and committing of data only. Interpreting and executing behaviors based on the data is business logic.
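As a sketch of that split, with the credential properties and the partner flag as guesses at your model:

// Repository: data retrieval and persistence only.
public interface IAccountManagerRepository
{
    AccountManager GetByUsername(string username);
    void Add(AccountManager manager);
    void Save();
}

// Service: business behaviour built on top of the repository.
public class AccountManagerService
{
    private readonly IAccountManagerRepository repository;

    public AccountManagerService(IAccountManagerRepository repository)
    {
        this.repository = repository;
    }

    public bool AuthenticateAccountManager(string username, string password, string companyCode)
    {
        var manager = repository.GetByUsername(username);
        if (manager == null)
            return false;

        // Validation against the other two properties lives here, not in the repository.
        // (Compare hashed passwords in a real implementation.)
        return manager.Password == password && manager.CompanyCode == companyCode;
    }

    public void QualifyPartner(string username)
    {
        var manager = repository.GetByUsername(username);
        if (manager == null)
            return;

        manager.IsQualifiedPartner = true;   // the behaviour is a service concern
        repository.Save();
    }
}

public class AccountManager
{
    public string Username { get; set; }
    public string Password { get; set; }
    public string CompanyCode { get; set; }
    public bool IsQualifiedPartner { get; set; }
}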
I'm wondering what strategies exist to handle object integrity in a stateful client like a Flex or Silverlight app.
What I mean is the following: consider an application where you have a Group and a Member entity. Groups contain multiple members and members can belong to multiple groups. A view lists the different groups, which are lazy loaded (no members initially). When requesting the details of the group, all members are loaded and cached so the next time we don't need to invoke a service to fetch the details and members of the group.
Now, when we request the details of another group that has the same member of a group that was already loaded, would we care about the fact that the member is already in memory?
If we don't, I can see a potential data conflict when we edit the member (referenced in the first group) and changes are not applied to the other member instance. So to solve this, we could check the result of the service call (that gets the group details) for members that are already loaded and then replace the loaded ones with the cached ones.
Any tips, ideas or experiences to share?
What you are describing is something that is usually solved by a "first-level cache" (in Hibernate, the "Session"; in JPA, the "EntityManager") which ensures that only one instance of a particular entity exists in a particular context. As you suggest, this could be applied to objects as they are fetched from the server to ensure that all references to a particular entity are in fact references to the same object instance. You would also need a mechanism to ensure that entities created inside the AVM exist in that same context so they have similar logic applied to them.
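As a very small sketch of that idea in C# (the keying on type plus id is an assumption about your model):

using System;
using System.Collections.Generic;

// Minimal identity map: at most one instance per (entity type, id) in this context.
public class EntityCache
{
    private readonly Dictionary<Tuple<Type, object>, object> entities =
        new Dictionary<Tuple<Type, object>, object>();

    // Returns the instance already held for this id if there is one; otherwise
    // caches and returns the freshly fetched instance. A fuller version would
    // also copy refreshed property values onto the existing instance.
    public T Merge<T>(object id, T fetched) where T : class
    {
        var key = Tuple.Create(typeof(T), id);
        object existing;
        if (entities.TryGetValue(key, out existing))
        {
            return (T)existing;
        }
        entities[key] = fetched;
        return fetched;
    }
}

Every entity coming back from a service call would be routed through Merge before being handed to the views, so all groups end up referencing the same Member instance.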
The Granite Data Services project has a project called "Tide" which aims to solve this problem:
http://www.graniteds.org/confluence/display/DOC/6.+Tide+Data+Framework
As far as DDD goes, it's important not to design the backend as a simple data access API, such as simply exposing a set of DAOs or Repositories. The client application cannot be trusted and in fact is very easy to manipulate with a debugging proxy such as Charles. I always design a services API that is tailored to the UI (so that data for a screen can be fetched in a single call) and has necessary security or validation logic enforced, often using annotations and Spring AOP.
What I would do is create a client side application service which does the caching and servicing of requests for data. This would handle whether an object already exists in the cache. If you are using DDD then you'll need to decide what is going to be your aggregate root entity. Group or Member. You can't have both control each other. There needs to be one point for managing loading etc. Check out this video on DDD at the Canadian ALT.NET OpenSpaces. http://altnetpedia.com/Calgary200808.ashx
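A rough sketch of such a client-side application service (the types are illustrative and the backend call is shown synchronously for brevity; in Flex or Silverlight it would be an async callback):

using System.Collections.Generic;

// Client-side application service: the single point that loads groups and
// guarantees that each Member id maps to exactly one in-memory instance.
public class GroupAppService
{
    private readonly Dictionary<int, Group> groupsById = new Dictionary<int, Group>();
    private readonly Dictionary<int, Member> membersById = new Dictionary<int, Member>();

    public Group GetGroupDetails(int groupId)
    {
        Group cached;
        if (groupsById.TryGetValue(groupId, out cached) && cached.Members != null)
        {
            return cached;   // details already loaded, no backend call needed
        }

        Group fetched = FetchGroupFromServer(groupId);   // placeholder for the real service call

        // Swap each returned member for the instance we already hold, if any,
        // so that edits are visible from every group referencing that member.
        for (int i = 0; i < fetched.Members.Count; i++)
        {
            Member member = fetched.Members[i];
            Member existing;
            if (membersById.TryGetValue(member.Id, out existing))
            {
                fetched.Members[i] = existing;
            }
            else
            {
                membersById[member.Id] = member;
            }
        }

        groupsById[groupId] = fetched;
        return fetched;
    }

    private Group FetchGroupFromServer(int groupId)
    {
        return new Group { Id = groupId, Members = new List<Member>() };   // placeholder
    }
}

public class Group
{
    public int Id { get; set; }
    public IList<Member> Members { get; set; }
}

public class Member
{
    public int Id { get; set; }
    public string Name { get; set; }
}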