Reusing Receive/SendReply in WF4 - workflow-foundation-4

How can Receive/SendReply be reused in WF4? For example:
Receive/SendReply (Start)
Process data
Decision (data is valid?)
  True:
    Pick
      1) Receive/SendReply (Confirm)
      2) Receive/SendReply (Input data)
      3) Receive/SendReply (Restart)
  False:
    Pick
      1) Receive/SendReply (Input data)
      2) Receive/SendReply (Restart)
It should be possible to call Input data and Restart in two different Picks.
Currently I'm using WF 4, but I'd like to hear if 4.5 has a simpler solution.

Just create a custom composite activity, add the Receive/SendReply pair to it, and reuse that custom activity in multiple places. This has been the basic reuse mechanism in WF4 since its release.
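For example, a minimal sketch of such a composite activity built in code (the contract and operation names are placeholders, and correlation handling is omitted for brevity):

using System.Activities;
using System.Activities.Statements;
using System.ServiceModel.Activities;

// Hypothetical reusable Receive/SendReply pair wrapped in a single activity.
public class InputDataOperation : Activity
{
    public InputDataOperation()
    {
        Implementation = () =>
        {
            var receive = new Receive
            {
                ServiceContractName = "IWorkflowService", // placeholder contract
                OperationName = "InputData",              // placeholder operation
                CanCreateInstance = false
            };

            return new Sequence
            {
                Activities =
                {
                    receive,
                    new SendReply { Request = receive }
                }
            };
        };
    }
}

You can then drop InputDataOperation into both Picks instead of rebuilding the Receive/SendReply pair each time; building the same thing as a XAML (x:Class) activity in the designer works just as well.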

PickBranch is sealed, so you can't go the x:Class route. It's a hack, but you may be able to use a custom MarkupExtension to achieve what you need.

Related

MDriven ECO_ID duplicates

We appear to have a problem with MDriven generating the same ECO_ID for multiple objects. For the most part it seems to happen in conjunction with unexpected process shutdowns and/or server shutdowns, but it does also happen during normal activity.
Our system consists of one ASP.NET application and one WinForms application. The ASP.NET app is set up in IIS to use a single worker process. We have a mixture of WebForms and MVC, including ApiControllers. We're using a rather old version of the ECO packages: 7.0.0.10021. We're on VS 2017, and the target framework is 4.7.1.
We have it configured to use 64-bit integers for object IDs. The database is Firebird. The SQL configuration is set to use ReadCommitted transaction isolation.
As far as I can tell we have configured EcoSpaceStrategyHandler with EcoSpaceStrategyHandler.SessionStateMode.Never, which should mean that EcoSpaces are not reused at all, right? (Why would I even use EcoSpaceStrategyHandler in this case, instead of just creating EcoSpace normally with the new keyword?)
We have created MasterController : Controller and MasterApiController : ApiController classes that we use for all our controllers. These have an EcoSpace property that simply does this:
if (ecoSpace == null)
{
    if (ecoSpaceStrategyHandler == null)
        ecoSpaceStrategyHandler = new EcoSpaceStrategyHandler(
            EcoSpaceStrategyHandler.SessionStateMode.Never,
            typeof(DiamondsEcoSpace),
            null,
            false
        );
    ecoSpace = (DiamondsEcoSpace)ecoSpaceStrategyHandler.GetEcoSpace();
}
return ecoSpace;
I.e. if no strategy handler has been created, create one that specifies no pooling and no session-state persistence of EcoSpaces; then, if no EcoSpace has been fetched yet, fetch one from the strategy handler and return it. Is this an acceptable approach? Why would it be better than simply doing this:
if (ecoSpace == null)
    ecoSpace = new DiamondsEcoSpace();
return ecoSpace;
In aspx we have a master page that has an EcoSpaceManager. It has been configured to use a pool but SessionStateMode is Never. It has EnableViewState set to true. Is this acceptable? Does it mean that EcoSpaces will be pooled but inactivated between round trips?
It is possible that we receive multiple incoming API calls in tight succession, so that one API call hasn't been completed before the next one comes in. I assume that this means that multiple instances of MasterApiController can execute simultaneously but in separate threads. There may of course also be MasterController instances executing MVC requests and also the WinForms app may be running some batch job or other.
But as far as I understand, ID reservation is made at the beginning of any UpdateDatabase call, like this:
update "ECO_ID" set "BOLD_ID" = "BOLD_ID" + :N;
select "BOLD_ID" from "ECO_ID";
If the returned value is K, this will reserve N new IDs ranging from K - N to K - 1. Using ReadCommitted transactions everywhere should ensure that the update locks the ID data row, forcing any concurrent save operations to wait, then fetches the update result without interference from other transactions, then commits. At that point any other pending save operation can proceed with its own ID reservation. I fail to see how this could result in the same ID being used for multiple objects.
I should note that it does seem like it sometimes produces ID duplicates within one single UpdateDatabase, i.e. when saving a set of new related objects, some of them end up with the same ID. I haven't really confirmed this though.
Any ideas what might be going on here? What should I look for?
The issue is most likely that you use ReadCommitted isolation.
This allows two systems to simultaneously start a transaction, read the current value, increment it for their own batch, and then save one after the other.
You must use Serializable isolation for key generation, i.e. only read things that are not currently in a write operation.
MDriven uses two settings for isolation level: UpdateIsolationLevel and FetchIsolationLevel.
Set your UpdateIsolationLevel to Serializable.

3-tier Architecture business logic

I have a question about the service layer: my controllers interact with the service layer, where I work with the EF context, but sometimes the business logic inside a service method can be really big, approximately 1,000 lines.
For example:
class OrderService {
    public async Task UpdateOrder(OrderDto dto) {
        if (dto.Products.Count > 3)
        {
            // change order status; this can take ~300 lines
        }
    }
}
Where can I keep the logic that I use inside the if statement?
Usually I just create a private method inside that service, but I'm not sure that's a good approach. Maybe create a specific class for it, or something like that?
Thank you.
This question comes down largely to the developer's personal point of view. Here are some techniques I use to organize my code. See if they fit your scenario and adapt them to your needs.
Move some logic to your entities. Example: an entity should be able to know whether it is in a valid state.
Entity.Type = Square
Entity.Edges = 3
Entity.IsValid() => False // Squares have 4 edges
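A rough C# illustration of that idea (the Shape type and its members are invented for the example):

// Hypothetical entity that can validate its own state.
public class Shape
{
    public string Type { get; set; }
    public int Edges { get; set; }

    public bool IsValid()
    {
        // A square must have exactly 4 edges; other shape types are not checked here.
        return Type != "Square" || Edges == 4;
    }
}

The service then only asks shape.IsValid() instead of carrying that rule itself.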
You can create multiple classes that will help you add some logic to your code. In a feature/entity you can create multiple validation classes and then do something like entity.AddValidation(EdgeValidator).
If you have different algorithms based on conditions, you can use the Strategy pattern, which is a behavioral pattern.
If House.FamilySize == 1 then ProcessAlgorithm = SinglePersonAlgorithm
Else ProcessAlgorithm = MultiplePersonAlgorithm
ProcessAlgorithm(House)
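A minimal C# sketch of that idea (House, FamilySize and the algorithm names are just placeholders taken from the pseudocode above):

public class House
{
    public int FamilySize { get; set; }
}

// Strategy interface: one implementation per algorithm variant.
public interface IProcessAlgorithm
{
    void Process(House house);
}

public class SinglePersonAlgorithm : IProcessAlgorithm
{
    public void Process(House house) { /* single-person handling */ }
}

public class MultiplePersonAlgorithm : IProcessAlgorithm
{
    public void Process(House house) { /* multi-person handling */ }
}

public class HouseService
{
    public void Process(House house)
    {
        // Pick the strategy based on the condition, then delegate to it.
        IProcessAlgorithm algorithm = house.FamilySize == 1
            ? (IProcessAlgorithm)new SinglePersonAlgorithm()
            : new MultiplePersonAlgorithm();

        algorithm.Process(house);
    }
}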
And the old classic: split a function into multiple smaller ones.
Encapsulating different branches of if statements in separate classes may require some rework and refactoring, but it may well lead to great results and allow you to use patterns such as (a small sketch of the first follows after this list):
Chain of responsibility
Pipeline processing
and eventually other behavioral patterns
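For instance, a bare-bones chain-of-responsibility sketch in C# (the handler names are invented, and OrderDto is reduced to the single member used in the question), where each former if branch becomes its own handler:

using System.Collections.Generic;

// OrderDto as in the question, simplified to the member used here.
public class OrderDto
{
    public List<string> Products { get; set; } = new List<string>();
}

public abstract class OrderHandler
{
    private OrderHandler next;

    // Link the next handler in the chain and return it for fluent wiring.
    public OrderHandler SetNext(OrderHandler handler)
    {
        next = handler;
        return handler;
    }

    public virtual void Handle(OrderDto order)
    {
        next?.Handle(order);
    }
}

public class LargeOrderHandler : OrderHandler
{
    public override void Handle(OrderDto order)
    {
        if (order.Products.Count > 3)
        {
            // ...the ~300 lines of status-change logic live here...
        }
        base.Handle(order);
    }
}

// Wiring, e.g. in the service:
// var chain = new LargeOrderHandler();
// chain.SetNext(new SomeOtherHandler()); // SomeOtherHandler is hypothetical
// chain.Handle(dto);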

Updating sort-order indicator in QTableView/QHeaderView when model is sorted

I want to know how to ensure the sort indicator in the horizontal header of a QTableView is updated when a programmatic sort is performed on the model.
Here's the problem:
QStandardItemModel model(3,1);
QTableView view;
view.setModel( &model );
// Populate the model ensuring it is not in a sorted order
for( int row = 0; row < model.rowCount(); ++row )
{
    model.setItem( row , 0 ,
        new QStandardItem(QString::number((row+1)%model.rowCount())));
}
view.setSortingEnabled( true );
// At this point everything is consistent since enabling the sorting
// triggers a sort that matches the indicator in the horizontalHeader (see A)
model.sort( 0 , Qt::AscendingOrder );
// However at this point the sort order has been reversed but the
// header's sort indicator remains unchanged (see B)
(Screenshots A and B showed the header's sort indicator: consistent with the sort order in A, but unchanged after the programmatic sort in B.)
As you can see the sort indicator remains the same and therefore is inconsistent with the actual sort order.
In my application I have two views that interact with the same model, and sorting can be triggered from either of them. I don't see anything in QAbstractItemModel that signals when a sort has been performed. It seems like QHeaderView/QTableView assume that they are the only thing that can trigger a sort.
Does Qt provide facilities for coping with this that I'm missing? If not, what's the best way of keeping the sort indicator up-to-date without breaking the encapsulation of the multiple views on the model too much?
One of the ItemDataRole enumerators available since Qt 4.8 is InitialSortOrderRole.
http://qt-project.org/doc/qt-4.8/qt.html#ItemDataRole-enum
It should therefore be possible to transmit sort order information through the QAbstractItemModel::headerData method.
I've tried this however and found that QTableView and QHeaderView do not seem to update in response to changes in this headerData role. A customised header view would appear to be necessary...
It might be worth it because passing this information via the model allows any number of views to synchronise without any external agent having to track all the views in existence so that it can distribute notifications. It would also work seamlessly through model proxy stacks such as those built with QSortFilterModelProxy.
The solution I've come up with to avoid breaking encapsulation too much is:
- have a signal on each view (on QTableView the sortIndicatorChanged signal suffices, and on my custom view I have added a similar signal)
- the manager of views connects to these signals
- when any view emits such a signal, the manager of views calls a slot on all the other views so that they can synchronise their sort indicators
I still feel like I might be missing something - surely this is a common problem? It seems to me that QAbstractItemModel should have a way of transmitting sort-order information to views...

Query workflow tasks based on custom property with other criteria than equals

I have the need to construct a WorkflowTaskQuery with a custom workflow model date as criteria. The criteria needs to be "currentDate >= myCustomDate".
I have noticed that it is possible to add custom properties to the WorkflowTaskQuery, but looking into the implementation it seems like those properties are all added as equals criteria. (Reference (4.2.x): org.alfresco.repo.workflow.activiti.ActivitiWorkflowEngine.addTaskPropertiesToQuery)
Getting all active tasks and filtering the returned result would not be a good approach, since there will be thousands of running workflow tasks in this implementation.
The only other approach I can think of would be to subclass both WorkflowTaskQuery and ActivitiWorkflowEngine and rewrite some private methods (like createRuntimeTaskQuery) and handle my special cases on my own there. (Activiti has methods like greaterThan and so on when searching for tasks based on variables....)
If anyone has any better suggestions, please feel free to share them with me :)
We are implementing a solution that drives Activiti using the REST interface and have successfully implemented task queries using POST /rest/service/query/task.
The body of the request contains the conditions, and the operator to use in the query can have the following values: "equals", "notEquals", "equalsIgnoreCase", "notEqualsIgnoreCase", "lessThan", "greaterThan", "lessThanOrEquals", "greaterThanOrEquals" and "like".
Now, with that said... I'm not sure I understand your query.
currentDate >= customDate: obviously currentDate is self-explanatory, but is customDate a process instance variable or a task-local variable? It may impact the format of the query.
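For illustration only, assuming myCustomDate is a process instance variable, the request body might look roughly like this (the field names follow the Activiti REST task-query format, but verify the exact endpoint and fields against the docs for your Activiti/Alfresco version). The condition currentDate >= myCustomDate is expressed by passing the current date as the value with a lessThanOrEquals operation:

{
  "processInstanceVariables": [
    {
      "name": "myCustomDate",
      "value": "2014-01-31T00:00:00.000+0000",
      "operation": "lessThanOrEquals",
      "type": "date"
    }
  ]
}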

How to assign Custom Activity Result to a Root level variable at Runtime (in Custom Activity Execute method) in workflow?

Assume that I have a workflow with 3 custom activities placed in a Sequence activity, and that I created a Boolean variable (named "FinalResult") at the Sequence (root) level to hold the result. My intention is to assign each custom activity's result to the root-level variable ("FinalResult") from within the custom activity's Execute method when the activity finishes.
I can achieve this by declaring an output argument on the custom activity and manually entering the variable name in the activity's properties window at design time, while designing the policy.
But I don't want the end user to have to do this. I just want the end user to drag and drop the activities and write conditions on the "FinalResult" variable; internally I have to maintain the activity result in the "FinalResult" variable programmatically.
Finally, I want to maintain the workflow state in the "FinalResult" variable and access it anytime and anywhere in the workflow.
I tried the following, but I get the error "Property does not exist":
WorkflowDataContext dataContext = context.DataContext;
PropertyDescriptorCollection propertyDescriptorCollection = dataContext.GetProperties();
foreach (PropertyDescriptor propertyDesc in propertyDescriptorCollection)
{
    if (propertyDesc.Name == "FinalResult")
    {
        object data = propertyDesc.GetValue(dataContext); // as WorkUnitSchema;
        propertyDesc.SetValue(dataContext, "anil");
        break;
    }
}
Please let me know the possible solutions for this.
I do this all the time.
Simply implement IActivityTemplateFactory in your activity. When dragged and dropped onto the design surface, the designer will determine if your activity (or whatever is being dropped) implements this interface. If it does, it will construct an instance and call the Create method.
Within this method you can 1) instantiate your Activity and 2) configure it. Part of configuring it is binding your Activities' properties to other Activities' arguments and/or variables within the workflow.
There are a few ways to do this. Most simply, require these arguments/variables have well known names. In this case, you can simply bind to them via
return new MyActivity
{
    MyInArgument = new VisualBasicValue<object>(MyActivity.MyInArgumentDefaultName),
};
where MyActivity.MyInArgumentDefaultName is the name of the argument or variable you are binding to.
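Putting that together, a rough sketch (a hypothetical activity; the argument and variable names are placeholders) of an activity that implements IActivityTemplateFactory and binds its argument to a well-known "FinalResult" variable could look like this:

using System.Activities;
using System.Activities.Presentation;
using System.Windows;
using Microsoft.VisualBasic.Activities;

public class MyActivity : CodeActivity, IActivityTemplateFactory
{
    public const string FinalResultName = "FinalResult";

    // Bound by the factory below, so the end user never has to touch it.
    public InOutArgument<bool> FinalResult { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        // Write this activity's result into the workflow-level variable.
        FinalResult.Set(context, true);
    }

    // Called by the designer when the activity is dropped onto the surface.
    Activity IActivityTemplateFactory.Create(DependencyObject target)
    {
        return new MyActivity
        {
            FinalResult = new InOutArgument<bool>(
                new VisualBasicReference<bool>(FinalResultName))
        };
    }
}

Here VisualBasicReference<bool> is used instead of VisualBasicValue<object> because the argument is written to rather than only read.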
Alternatively, if that variable/argument is named by the user... you're in for a world of hurt. Essentially, you have to
Cast the DependencyObject target passed to the Create method to an ActivityDesigner
Get the ModelItem from that AD
Walk up the ModelItem tree until you find the argument/value of the proper type
Use its name to create your VisualBasicValue
Walking up the ModelItem tree is super duper hard. It's kind of like reflecting up an object graph, but worse. You can expect, if you must do this, that you'll have to fully learn how the ModelItem works, and do lots of debugging (write everything down--hell, video it) in order to see how you must travel up the graph, what types you encounter along the way, and how to get their "names" (hint--it often isn't the Name property on the ModelItem!). I've had to develop a lot of custom code to walk the ModelItem tree looking for args/vars in order to implement a drag-drop-forget user experience. It's not fun, and it's not perfect. And I can't release that code, sorry.
