I have a Receive Location with both a Receive and a Send pipeline.
Both pipelines contain a custom pipeline component that exposes some design-time properties.
When I set those properties on the Send pipeline through the BizTalk Admin Console, they are not overridden; however, the same thing works completely fine with the Receive pipeline.
I cannot just set the properties at design time, as they are environment-based values and need to be set at runtime.
After debugging the pipeline component, this is what I have found:
The following is the usual working of a pipeline component (http://geekswithblogs.net/cyoung/archive/2011/09/14/biztalk-server-2010-loading-properties-in-custom-pipeline-components.aspx):
When a pipeline component is executed, its Load method is called twice: the first call loads the design-time properties set on the pipeline, and the second call loads the property bag as set in the pipeline configuration in the BizTalk Admin Console.
Note: Only the properties that are changed will be passed in this property bag.
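A minimal sketch of the persistence side of such a component, following that pattern (the TargetEnvironment property name is made up for illustration, and the IBaseComponent/IComponentUI members a real pipeline component also needs are omitted):

```csharp
using System;
using Microsoft.BizTalk.Component.Interop;

public class MyCustomComponent : IPersistPropertyBag
{
    // Design-time default; overwritten only when a bag contains a value.
    public string TargetEnvironment { get; set; } = "DEV";

    public void InitNew() { }

    public void GetClassID(out Guid classID)
    {
        classID = new Guid("8f3a7a51-2f0c-4e6b-9d7e-1b2c3d4e5f60");
    }

    public void Load(IPropertyBag propertyBag, int errorLog)
    {
        object value = null;
        try
        {
            propertyBag.Read("TargetEnvironment", out value, 0);
        }
        catch (ArgumentException)
        {
            // Property not present in this bag (only changed properties
            // are passed on the second call) - keep the current value.
        }

        if (value != null)
        {
            TargetEnvironment = (string)value;
        }
    }

    public void Save(IPropertyBag propertyBag, bool clearDirty, bool saveAllProperties)
    {
        object value = TargetEnvironment;
        propertyBag.Write("TargetEnvironment", ref value);
    }
}
```

With this pattern, a property keeps its design-time value unless a later Load call actually delivers a replacement, which is why the second call matters.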
When we use a Request-Response Receive Location, the above-mentioned process is followed on the Receive pipeline. However, when the same pipeline component is called from the Send pipeline, the Load method is called only once, so none of the properties set from the BizTalk Admin Console are applied and the design-time properties do not get overwritten, which causes the issue.
I have found a similar post with the same issue and no answer (https://social.msdn.microsoft.com/Forums/en-US/c69b3af1-b208-4213-884e-a98b8583761c/strange-ipersistpropertybag-load-pattern?forum=biztalkgeneral).
It looks like it is by design and I will be raising a ticket with Microsoft.
Please make sure you have restarted the host instance after you made the design-time change. You can also set a breakpoint to see how it is behaving.
I'm very new to Application Insights, and I'm thinking of using it for a set of services I plan on implementing with ASP.NET Web API. I was able to get the basic telemetry up and running very easily (right-clicking on a project in VS, Add Application Insights), but then I hit a block. I plan to have a correlation id set in the request headers for calls to downstream services, and I would like to tag all the telemetry related to one outside call with the same correlation id.
So far I've found that there is a way to configure a TelemetryInitializer, but if I understood correctly, it runs before I get access to the request, meaning I can't check whether there is a correlation id that I should attach.
So I guess there might be two ways to solve this: 1) if I could somehow get access to the request headers before the initializer runs, that would obviously solve the problem, or 2) somehow get hold of the TelemetryClient instance that is used to report the automatically generated telemetry.
Perhaps the last resort would be to turn off all of the automatic telemetry and do all of it manually, in which case I could of course control what properties are set on the TelemetryClient. But that would be quite a lot more work, so I'd prefer to find some other solution.
You were right that you should use a TelemetryInitializer. All TelemetryInitializers are called when the Track method is called on any telemetry item. Auto-generated request telemetry is "tracked" on the request's OnEnd, so all your custom headers should be available to you at that time.
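For example, a minimal initializer along these lines (assuming IIS-hosted Web API where HttpContext.Current is available; the X-Correlation-Id header and CorrelationId property names are placeholders):

```csharp
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class CorrelationIdTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var httpContext = HttpContext.Current;
        if (httpContext == null)
        {
            return; // no ambient request, e.g. background work
        }

        string correlationId = httpContext.Request.Headers["X-Correlation-Id"];
        if (!string.IsNullOrEmpty(correlationId))
        {
            // Stamped on every item tracked during this request,
            // including the auto-generated request telemetry.
            telemetry.Context.Properties["CorrelationId"] = correlationId;
        }
    }
}
```

Register it in ApplicationInsights.config or in code, e.g. TelemetryConfiguration.Active.TelemetryInitializers.Add(new CorrelationIdTelemetryInitializer());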
Please also have a look at OperationId - this is part of the standard context managed by App Insights and is used exactly for the purpose of correlating requests with downstream execution. It is created and passed along automatically, including for traces (if you use TrackTrace).
Moreover, we have built-in support in our UX for easily seeing all telemetry for a particular operation - it can be found in "Search -> Details -> Related Items -> All telemetry for this operation".
When I use Cloudify (2.7) to deploy an application (e.g. an application that includes two services, A and B), I try to use Admin.addEventListener() to add some event listeners, but it doesn't work!
I tried adding a ProcessingUnitStatusChangedEventListener; when I debug the code, the value of ((ProcessingUnitStatusChangedEvent) event).getNewStatus() changes from SCHEDULED to INTACT, then SCHEDULED, then INTACT again.
I also tried adding a ProcessingUnitInstanceLifecycleEventListener; when I debug the code, the status is INTACT, but the service is not available!
Is there any other listener or method to know that the application (not the services) is available, or am I using the listeners in the wrong way?
First, the Admin API is internal - use it at your own risk. And you should not be using it the way you are - Cloudify adds a lot of logic on top of the internal Admin API.
Second, it is not exactly clear where you are executing your code from.
You can always use the rest client to get an accurate state of the application. Look at https://github.com/CloudifySource/cloudify/blob/master/rest-client/src/main/java/org/cloudifysource/restclient/RestClient.java#L388
In addition, if you are running this code in a service lifecycle event handler, the easiest way to implement this is to have your 'top' level service, the one that should be available last, write an application entry to the shared attributes store in its 'postStart' event. Everyone else can just periodically poll this entry. The polling itself is very fast - all in-memory operations.
If you do not have a top-level service, or your logic is more complicated than that, you will need to use the Service Context API to scan each service and its instances to see if they are up. An explanation of getting service instance state is available here:
cloudify service dependsOn other service
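If you go the REST route, the polling loop itself is trivial; here is a rough sketch (in C# for illustration; the port, resource path, and response field are assumptions - check the RestClient link above for the exact API on your version):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class AppAvailabilityPoller
{
    static async Task Main()
    {
        // 8100 is the usual Cloudify 2.x REST port; adjust for your setup.
        using (var http = new HttpClient { BaseAddress = new Uri("http://cloudify-manager:8100/") })
        {
            while (true)
            {
                // Hypothetical application-description resource.
                string body = await http.GetStringAsync("service/applications/app/description");

                // Crude check; parse the JSON properly in real code.
                if (body.Contains("\"applicationState\":\"STARTED\""))
                {
                    Console.WriteLine("Application is available.");
                    break;
                }
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        }
    }
}
```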
I am currently writing an orchestration that is bound directly to the message box, picks up messages, and filters according to the filter expression in the Receive shape inside said orchestration. The problem I'm having is this: I want to be able to change the filter in the BizTalk bindings, just like send port filters are changed in the bindings. Really, I just don't want to have to recompile and redeploy every time my filter changes. Is there a way to do this? I'm thinking maybe modify the binding.xml file somehow, or possibly try a custom pipeline with configurable properties (as my last resort).
If it matters I typically use the BizTalk Deployment Framework for deployments.
No, it is not possible to modify a Receive Shape Filter at Runtime.
If the filter needs to be dynamic, then you will have to apply that logic upstream. The idea of using a custom Pipeline Component is a common solution.
One other approach to consider is leaving your Receive shape filter broad and testing each incoming message with the BRE. If it 'passes', continue processing; otherwise exit. BRE Policies/Rules can be updated at runtime.
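Calling a BRE policy could look roughly like this (untested sketch; BizTalk also has a built-in Call Rules shape, but a helper like this can be invoked from an Expression shape - the RoutingPolicy name and RoutingFact class are made up, and facts must live in an assembly the rule engine can resolve):

```csharp
using Microsoft.RuleEngine;

// Hypothetical fact class; a rule sets ShouldProcess based on MessageType.
public class RoutingFact
{
    public string MessageType { get; set; }
    public bool ShouldProcess { get; set; }
}

public static class RuleGate
{
    public static bool Passes(string messageType)
    {
        var fact = new RoutingFact { MessageType = messageType };

        // Loads the newest deployed version of the policy, so rules can
        // be republished at runtime without redeploying this code.
        var policy = new Policy("RoutingPolicy");
        try
        {
            policy.Execute(new object[] { fact });
        }
        finally
        {
            policy.Dispose();
        }
        return fact.ShouldProcess;
    }
}
```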
For this sort of thing you will probably want to execute Business Rules in the Receive Pipeline to set a context property on the message, which then determines the routing.
That way the filter in the Orchestration is lightweight and doesn't need to be changed.
See http://brepipelineframework.codeplex.com/ (Disclosure: This is written by a colleague of mine)
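The promotion half of that pipeline component is small; a sketch (the property schema namespace and Route property are invented - they must match a deployed property schema, and the BRE Pipeline Framework would supply the value from a policy rather than the constant used here):

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class RoutingComponent : IComponent
{
    // Must match the target namespace of a deployed property schema.
    private const string PropNamespace = "https://MyCompany.Schemas.RoutingProperties";

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Stand-in for the result of a rules-policy evaluation.
        string route = "PriorityQueue";

        // Promote (not just write) so the Orchestration's Receive shape
        // filter can subscribe on RoutingProperties.Route == "PriorityQueue".
        pInMsg.Context.Promote("Route", PropNamespace, route);
        return pInMsg;
    }
}
```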
We are mixing workflows: a workflow that uses Receive activities more toward the end, but at the start we want to pass in some arguments (not using a Receive activity!).
Our workflows are already being created and resumed using a dynamic endpoint with IWorkflowCreation and a class derived from WorkflowHostingEndpoint. In OnGetCreationContext the creationContext is filled with WorkflowArguments and the workflow runs. At a later point the Receive activities create a bookmark which can be resumed with a message. All seems nice.
But in a xamlx there are no WorkflowArguments; I understand why, except that I want them anyway. I thought about an activity in which I could write some code to get the arguments myself, but I do need some help here.
Or is there another way to pass the WorkflowArguments into a xamlx without using messaging?
You can't pass arguments into a starting workflow service except through the SOAP message that starts it. But there is nothing preventing you from reading properties inside your workflow service, so it is perfectly fine to read settings or something similar instead of passing them in at startup.
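For example, a trivial custom activity that pulls a value from the service's web.config instead of taking it as a workflow argument (sketch; the activity and key names are yours to choose):

```csharp
using System.Activities;
using System.Configuration;

public sealed class ReadAppSetting : CodeActivity<string>
{
    [RequiredArgument]
    public InArgument<string> Key { get; set; }

    protected override string Execute(CodeActivityContext context)
    {
        // Reads from the hosting service's web.config/app.config,
        // so the value can differ per environment without redeploying.
        return ConfigurationManager.AppSettings[Key.Get(context)];
    }
}
```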
We have solved this exact situation by creating another WCF service which sits alongside our xamlx service on a slightly different URL (e.g. /WorkflowMetadata), and this is where we implement a service method that returns a Dictionary<string, Type>.
In the implementation of this service we simply read the xamlx and determine the arguments.
This is what we use to interrogate a target workflow in an activity designer when creating something like a launch-workflow activity.
Creating an activity will not work, as that activity would need an instance in order to run, and all you want is some metadata about the xamlx service. And if you are using a WorkflowCreationEndpoint to construct a creation context, then you are probably only allowing a Dictionary<string, object> as the start parameters, so standard metadata will not work. This left us with the only option: provide another service beside the workflow which serves metadata.
Background here: http://blog.petegoo.com/index.php/2011/09/02/building-an-enterprise-workflow-system-with-wf4/
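The reading-the-definition part can be done with ActivityXamlServices; a rough sketch (assumes the definition loads as a DynamicActivity - for a .xamlx you may need to load the WorkflowService and unwrap its Body first):

```csharp
using System;
using System.Activities;
using System.Activities.XamlIntegration;
using System.Collections.Generic;

static class WorkflowMetadataReader
{
    public static Dictionary<string, Type> GetArguments(string xamlPath)
    {
        var args = new Dictionary<string, Type>();
        if (ActivityXamlServices.Load(xamlPath) is DynamicActivity da)
        {
            foreach (DynamicActivityProperty prop in da.Properties)
            {
                // prop.Type is e.g. InArgument<string>; unwrap to the
                // inner CLR type for a friendlier dictionary.
                Type t = prop.Type;
                if (t.IsGenericType && typeof(Argument).IsAssignableFrom(t))
                {
                    t = t.GetGenericArguments()[0];
                }
                args[prop.Name] = t;
            }
        }
        return args;
    }
}
```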
We have a Flex application that connects to a proxy server which handles authentication. If the authentication has timed out, the proxy server returns a JSON-formatted error string. What I would like to do is inspect every URLRequest response, check if there's an error message, display it in the Flex client, and then redirect back to the login screen.
So I'm wondering if it's possible to create an event listener for all URLRequests in a global fashion, without having to search through the project and add some method to each URLRequest. Any ideas if this is possible?
Unless you're only using one service, there is no way to set a global URLRequest handler. If I were you, I'd think more about architecting your application properly by using a delegate and always checking the result through a particular service which is used throughout the app.
J_A_X has some good suggestions, but I'd take it a bit further. Let me make some assumptions based on the limited information you've provided.
The fact that your services are scattered all over your application means that they're actually embedded in multiple Views.
If your services can all be handled by the same handler, you notionally have one service, copied many times.
Despite what you see in the Adobe examples showing their new Service generation code, it's incredibly bad practice to call services directly from Views, in part because of the very problem you are seeing--you can wind up with lots of copies of the same service code littered all over your application.
Depending on how tightly interwoven your application is (believe me, I've inherited some pretty nasty stuff, so I know this might be easier said than done), you may find that the easiest thing is to remove all of those various services and replace them by having all your Views dispatch a bubbling event that gets caught at the top level. At the top level, you respond to that event by calling one instance of your service, which is again handled in one place.
You may or may not choose to wrap that single service in a delegate, but once you have your application architected in a way where the service is decoupled from your Views, you can make that choice at any time.
Would you be able to extend the class and add an event listener in the object's constructor? I don't like this approach but it could work.
You would just have to search/replace the whole project.