Accessing WorkflowArguments in a hosted workflow - workflow-foundation-4

We are mixing workflow styles: our workflow uses Receive activities further along, but at the start we want to pass in some arguments (not through a Receive activity!).
Our workflows are already being created and resumed through a dynamic endpoint with IWorkflowCreation and a class derived from WorkflowHostingEndpoint. In OnGetCreationContext the creationContext is filled with WorkflowArguments and the workflow runs. Later on, the Receive activities create bookmarks that can be resumed with a message. All of that works nicely.
But in a .xamlx there are no WorkflowArguments. I understand why, except that I want them anyway. I thought about an activity in which I could write some code to fetch the arguments myself, but I do need some help here.
Or is there another way to pass the WorkflowArguments into a .xamlx without using messaging?

You can't pass arguments into a starting workflow service except through the SOAP message that starts it. But there is nothing preventing you from reading any properties in your workflow service, so it is perfectly fine to read settings or something similar instead of passing them in at startup.
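For example, if the values you need can live in configuration, a small custom activity can read them wherever they are needed in the workflow. A minimal sketch, assuming an appSettings entry in the service's web.config (the activity name and key handling are made up for illustration):

```csharp
// A custom activity that reads a value from appSettings instead of taking it
// as a workflow argument passed in at creation time.
using System.Activities;
using System.Configuration;

public sealed class ReadSetting : CodeActivity<string>
{
    // Name of the appSettings key to read, supplied in the designer.
    public InArgument<string> Key { get; set; }

    protected override string Execute(CodeActivityContext context)
    {
        string key = context.GetValue(Key);
        // Returns null if the key is missing; handle that as appropriate.
        return ConfigurationManager.AppSettings[key];
    }
}
```

Drop the activity into the .xamlx wherever the value is needed and bind its result to a workflow variable.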

We have solved this exact situation by creating another WCF service that sits alongside our .xamlx service on a slightly different URL (e.g. /WorkflowMetadata), and this is where we implement a service method that returns a dictionary of argument names and types (string, Type).
In the implementation of this service we simply read the .xamlx and determine the arguments.
This is what we use to interrogate a target workflow from an activity designer when creating something like a launch-workflow activity.
Creating an activity will not work, because that activity would need a running instance in order to execute, and all you want is some metadata about the .xamlx service. And if you are using a WorkflowCreationEndpoint to construct a creation context, you are probably only allowing a dictionary of string, object as the start parameters, so standard metadata will not work. That left us with the only option being to provide another service beside the workflow which serves metadata.
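For the "read the XAML and determine the arguments" part, something along these lines works. A rough sketch, not the code from the blog post below: it assumes the workflow definition loads as a DynamicActivity via ActivityXamlServices (a .xamlx file has a WorkflowService root, so it would first need to be loaded with System.Xaml.XamlServices and its body unwrapped before the same inspection applies), and the class and method names are made up.

```csharp
// Inspect a XAML workflow definition and return its argument names and types.
using System;
using System.Activities;
using System.Activities.XamlIntegration;
using System.Collections.Generic;

public static class WorkflowMetadataReader
{
    public static Dictionary<string, Type> GetArguments(string xamlPath)
    {
        var arguments = new Dictionary<string, Type>();

        var definition = ActivityXamlServices.Load(xamlPath) as DynamicActivity;
        if (definition == null)
            return arguments;

        foreach (DynamicActivityProperty property in definition.Properties)
        {
            // Workflow arguments show up as InArgument<T>/OutArgument<T>
            // properties; unwrap the generic parameter to get the actual type.
            if (property.Type.IsGenericType &&
                typeof(Argument).IsAssignableFrom(property.Type))
            {
                arguments[property.Name] = property.Type.GetGenericArguments()[0];
            }
        }

        return arguments;
    }
}
```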
Background here: http://blog.petegoo.com/index.php/2011/09/02/building-an-enterprise-workflow-system-with-wf4/

Related

Grails 4 Async with Database Operations

My Grails 4.0.10 app needs to call an external service. The call may take up to 3 minutes, so it has to be async'ed. After reading the doco I wrote a non-blocking service method to perform the call using a Promise without too much trouble.
The documentation describes how an async outcome can be displayed.
In my case the outcome affects the database. I must create new domain objects, modify existing domain objects and persist the result in the onComplete closure. The doco is rather quiet on how to do this.
These are my assumptions about the onComplete closure. My question is: Are the assumptions valid? Is this the proper way to do it?
No injected stuff is available, neither services nor (for example) log -- things you normally expect in a service
Database logic must be enclosed first within Tenants.withId if multitenancy is used, and then within withTransaction
withTransaction is prefixed with a domain name. However, other domains may freely be manipulated and persisted in the same closure
Domain instances picked up before the async call may be attached to the current session with instance.attach() and then modified and saved
If logging is needed, create a new log instance

Adding correlation id to automatically generated telemetry with App Insights

I'm very new to Application Insights, and I'm thinking of using it for a set of services I plan on implementing with asp.net webapi. I was able to get the basic telemetry up and running very easily (right-clicking on a project on VS, Add Application Insights), but then I hit a block. I plan to have a correlation id set in the request headers for calls to downstream services, and I would like to tag all the telemetry related to one outside call with the same correlation id.
So far I've found that there is a way to configure a TelemetryInitializer, but if I understood correctly, this is run before I get to access the request, meaning I can't check if there is a correlation id that I should attach.
So I guess there might be two ways to solve this: 1) if I could somehow get access to the request headers before the initializer runs, that would obviously solve the problem, or 2) somehow get hold of the TelemetryClient instance that is used to report the automatically generated telemetry.
Perhaps the last resort would be to turn off all of the automatic telemetry and do all of it manually, in which case I could of course control what properties are set on the TelemetryClient. But this would be quite a lot more work, so I'd prefer to find some other solution.
You were right that you should use a TelemetryInitializer. All TelemetryInitializers are called when the Track method is called on any telemetry item. Auto-generated request telemetry is "tracked" on request OnEnd, so all your custom headers should be available to you at that time.
Please also have a look at OperationId - this is part of the standard context managed by App Insights and is used exactly for the purpose of correlating requests with downstream execution. It is created and passed along automatically, including for traces (if you use trackTrace).
Moreover, we have built-in support in our UX for easily seeing all telemetry for a particular operation - it can be found under "Search -> Details -> Related Items -> All telemetry for this operation".
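A minimal sketch of such an initializer, assuming classic ASP.NET where HttpContext.Current is available during request processing; the header name "X-Correlation-Id" is made up for illustration:

```csharp
// Copies a correlation id from the incoming request headers onto every
// telemetry item tracked for that request.
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class CorrelationIdTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var httpContext = HttpContext.Current;
        if (httpContext == null)
            return;

        string correlationId = httpContext.Request.Headers["X-Correlation-Id"];
        if (!string.IsNullOrEmpty(correlationId))
        {
            // Stamp the id on the operation so related items group together.
            telemetry.Context.Operation.Id = correlationId;
        }
    }
}
```

The initializer then has to be registered, either in ApplicationInsights.config or in code, e.g. TelemetryConfiguration.Active.TelemetryInitializers.Add(new CorrelationIdTelemetryInitializer()).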

Is BizTalk 2010 Receive Shape Filter configurable

I am currently writing an orchestration that is directly bound to the message box, picks up messages and filters according to the filter expression in the receive shape inside said orchestration. The problem I'm having is this: I want to be able to change the filter in the BizTalk bindings, just like send filters are changed in the bindings. Really, I just don't want to have to recompile and redeploy every time my filter changes. Is there a way to do this? I'm thinking maybe modify the binding.xml file somehow, or possibly try a custom pipeline with configurable properties (as a last resort).
If it matters I typically use the BizTalk Deployment Framework for deployments.
No, it is not possible to modify a Receive Shape Filter at Runtime.
If the filter needs to be dynamic, then you will have to apply that logic upstream. The idea of using a custom Pipeline Component is a common solution.
One other approach to consider is leaving your Receive Shape Filter broad and testing each incoming message with the BRE. If it 'passes', continue processing, otherwise exit. BRE Policies/Rules can be updated at runtime.
For this sort of thing you will probably want to execute Business Rules in the receive pipeline, which then set a context property on the message that determines the routing.
That way the filter in the Orchestration is lightweight and doesn't need to be changed.
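A rough sketch of the pipeline-component side of that idea (not the BRE Pipeline Framework linked below): the property name, namespace and the DecideRoute helper are made-up placeholders, and the rest of the pipeline component plumbing (IBaseComponent, IPersistPropertyBag, IComponentUI, attributes) is omitted.

```csharp
// Promote a routing decision into the message context so that an
// orchestration (or send port) filter can subscribe on it.
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class RoutingDecisionComponent : IComponent
{
    // A matching property schema with this name/namespace must be deployed
    // for the promoted property to be usable in subscription filters.
    private const string PropertyName = "RouteTo";
    private const string PropertyNamespace =
        "https://example.com/schemas/routing-properties"; // placeholder

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Placeholder for calling a BRE policy (or any other rule source).
        string route = DecideRoute(pInMsg);

        // Promote (rather than just write) so the value participates in
        // routing and can appear in a receive shape filter expression.
        pInMsg.Context.Promote(PropertyName, PropertyNamespace, route);
        return pInMsg;
    }

    private string DecideRoute(IBaseMessage message)
    {
        // Hypothetical stand-in for the Business Rules call.
        return "OrderProcessing";
    }
}
```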
See http://brepipelineframework.codeplex.com/ (Disclosure: This is written by a colleague of mine)

Hiding methods from certain layers in a project

I was looking through an old project and wanted to see if anyone had a suggestion on how to hide certain methods from being called by various layers. This was a 3-tier project: web application -> web service -> database.
In the application there is a User object, for example. When a User was being updated, the web application would create a User object and pass it to the web service. The web service would use the DataAccessLayer to save the User object to the database. After looking at this I was wondering if instead I should have made a Save method on the User class. This way the service could simply call Save on the User object, which would trigger the database update.
However, doing it this way would expose Save to the web application as well, correct? Since the web application also has access to the same User object.
Is there anyway around this, or is it better to avoid this altogether?
You get a separation of concerns by keeping the User object as an object that only holds data, with no logic in it. You are better off keeping it separated, for the following reasons (see the sketch after this list):
As you stated, it is bad practice, since the 'Save' functionality would be exposed to other places/classes where it is irrelevant to them (this is important in programming generally).
Modifying the service layer - I guess you are using a WCF web service, since you can transfer a .NET object (C#/VB) to the service via SOAP. If you put the saving logic in the 'User' object, you can't replace that service with another web service that receives simple textual data structures like JSON or XML, or that simply doesn't support .NET objects.
Modifying the data storage layer - If you want, for example, to store the data somewhere else, such as another database like MongoDB, RavenDB or Redis, you will have to reimplement each class that is responsible for updating the data. This is also relevant for unit testing and mocking, which become more complicated to set up.
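As an illustration of that separation, a minimal sketch (all type names here are made up): User stays a plain data holder shared across the layers, and saving is only reachable through the layer that owns the repository.

```csharp
// The User type carries data only; persistence lives behind an interface
// that the web application never references.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    // No Save() here - the object holds data, nothing else.
}

// Defined and implemented in the service/data tier.
public interface IUserRepository
{
    User GetById(int id);
    void Save(User user);
}

public class UserService
{
    private readonly IUserRepository repository;

    public UserService(IUserRepository repository)
    {
        this.repository = repository;
    }

    // The web service exposes an operation like this; the actual saving is
    // delegated to the data access layer behind IUserRepository.
    public void UpdateUser(User user)
    {
        repository.Save(user);
    }
}
```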

Attaching an event listener to all URLRequest's

We have a Flex application that connects to a proxy server which handles authentication. If the authentication has timed out, the proxy server returns a JSON-formatted error string. What I would like to do is inspect every URLRequest response, check whether there is an error message, display it in the Flex client and then redirect back to the login screen.
So I'm wondering if it's possible to add an event listener to all URLRequests in a global fashion, without having to search through the project and add some handler to each URLRequest. Any ideas whether this is possible?
Unless you're only using one service, there is no way to set a global URLRequest handler. If I were you, I'd think more about architecting your application properly by using a delegate and always checking the result through a particular service which is used throughout the app.
J_A_X has some good suggestions, but I'd take it a bit farther. Let me make some assumptions based on the limited information you've provided.
The services being scattered all over your application means that they're actually embedded in multiple Views.
If your services can all be handled by the same handler, you notionally have one service, copied many times.
Despite what you see in the Adobe examples showing their new Service generation code, it's incredibly bad practice to call services directly from Views, in part because of the very problem you are seeing--you can wind up with lots of copies of the same service code littered all over your application.
Depending on how tightly interwoven your application is (believe me, I've inherited some pretty nasty stuff, so I know this might be easier said than done), you may find that the easiest thing is to remove all of those various services and replace them by having all your Views dispatch a bubbling event that gets caught at the top level. At the top level, you respond to that event by calling one instance of your service, which is again handled in one place.
You may or may not choose to wrap that single service in a delegate, but once you have your application archtected in a way where the service is decoupled from your Views, you can make that choice at any time.
Would you be able to extend the class and add an event listener in the object's constructor? I don't like this approach but it could work.
You would just have to search/replace the whole project.
