Can someone explain how the IIB (Integration Bus) and the Business Monitor are connected together?
I've defined monitoring events in my flow.
I then exported the monitoring information to create a model.
Used the model to create an EAR file which I installed in BM as a model.
Used the mqsichangeflowmonitoring command to activate monitoring on all flows.
But when I run my flow, nothing happens: no events are recorded and nothing shows up in Business Space.
So I think some crucial link between the two systems is missing, but I can't figure out what it is.
I've already read about creating topics and subscriptions, but that information wasn't clear to me.
If someone could shed some light on this it would be greatly appreciated.
You must configure the database for monitoring first:
Create the DataCaptureStore and DataCaptureSource.
Define the topic and subscription objects.
Define the backout queue.
For each terminal where you want to record events, define the data you want to export with the event.
For more details about the steps above, this link helps: View Monitoring events Via IBM Integration Bus Web UI. A command sketch follows below.
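Here is a rough sketch of those commands. The node name (IIBNODE), integration server (default), database alias (MONDB), and the queue/topic names are all placeholders for your own setup:

    # Create the data capture store and source (configurable services)
    mqsicreateconfigurableservice IIBNODE -c DataCaptureStore -o myStore -n dataSourceName,schema -v MONDB,MONSCHEMA
    mqsicreateconfigurableservice IIBNODE -c DataCaptureSource -o mySource -n dataCaptureStore,topic -v myStore,"$SYS/Broker/IIBNODE/Monitoring/default/#"

    # MQSC: a queue subscribed to the monitoring topic, plus its backout queue
    DEFINE QLOCAL(MONITOR.EVENTS)
    DEFINE QLOCAL(MONITOR.EVENTS.BACKOUT)
    ALTER QLOCAL(MONITOR.EVENTS) BOQNAME(MONITOR.EVENTS.BACKOUT) BOTHRESH(3)
    DEFINE SUB(MONITOR.SUB) TOPICSTR('$SYS/Broker/IIBNODE/Monitoring/default/#') DEST(MONITOR.EVENTS)

    # Activate monitoring on all flows, as in the question
    mqsichangeflowmonitoring IIBNODE -e default -j -c active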
You may have already done this, but you did not mention it...
You need to install a message-driven bean (MDB) which subscribes to the topic and forwards the IIB events to Business Monitor as CBEs. See http://www-01.ibm.com/support/knowledgecenter/SSMKHH_9.0.0/com.ibm.etools.mft.doc/ac60392_.htm
I am trying to consume messages from a remote Kafka and produce output to a local Kafka setup.
Additionally, I don't want to 'pollute' the remote Kafka with Kafka Streams specific technical topics (e.g. intermediate KTable stores).
I created an example app for demo purposes; however, it doesn't work as expected, and I cannot say why.
Not working means: I don't really understand what's going on. Multiple consumers are being created, but all of them point to localhost; none seems to point to the remote Kafka.
https://github.com/webermich/multibinder
https://github.com/webermich/multibinder/blob/main/src/main/resources/application.yaml
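Roughly, I would expect plain (non-KStream) bindings to look like the sketch below (brokers, topics, and the process function name are placeholders, not my exact yaml). I also read that a single Kafka Streams topology can only connect to one cluster, which is part of why I suspect type: kstream is wrong here:

    spring:
      cloud:
        stream:
          binders:
            remoteKafka:
              type: kafka
              environment:
                spring.cloud.stream.kafka.binder.brokers: remote-host:9092
            localKafka:
              type: kafka
              environment:
                spring.cloud.stream.kafka.binder.brokers: localhost:9092
          bindings:
            process-in-0:
              destination: input-topic
              binder: remoteKafka
            process-out-0:
              destination: output-topic
              binder: localKafka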
So my questions are:
Is my general understanding correct that I can build something like I described above?
If the answer to 1) is true: Can anyone spot my mistakes?
I am also really unsure about the type: kstream on my bindings; I feel this is wrong.
Is there a link to a working example?
I have a .NET Core app deployed on Azure with Application Insights enabled.
Sometimes the Application Insights end-to-end transaction details do not display all telemetry.
Here it only logs the error and not the request, or maybe the request was logged but the two do not display together here (difficult to tell, because many people use it).
It should be like:
Sometimes the request is logged but with no error log.
What could be the reason for this happening? Do I need to look into an Application Insights specific setting or feature?
Edit:
As suggested by people here, I tried disabling the sampling feature, but it still doesn't work. There is an open question about this as well.
This usually happens due to sampling. By default, adaptive sampling is enabled in ApplicationInsights.config, which basically means that only a certain percentage of each telemetry item type (Event, Request, Dependency, Exception, etc.) is sent to Application Insights. In your example, probably one part of the end-to-end transaction got sent to the server and another part got sampled out. If you want, you can turn off sampling for specific types, or remove the AdaptiveSamplingTelemetryProcessor from the config entirely, which completely disables sampling. Bear in mind that this leads to higher ingestion traffic and higher costs.
You can also configure sampling in the code itself, if you prefer.
Please find here a good overview of how sampling works and can be configured.
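If it helps, a minimal sketch for an ASP.NET Core app (assuming the Microsoft.ApplicationInsights.AspNetCore package) that turns adaptive sampling off in code:

    using Microsoft.ApplicationInsights.AspNetCore.Extensions;
    using Microsoft.Extensions.DependencyInjection;

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddApplicationInsightsTelemetry(options =>
        {
            // Send every telemetry item instead of a sampled subset.
            // Expect higher ingestion traffic and higher costs.
            options.EnableAdaptiveSampling = false;
        });
    }

With sampling disabled, the request and exception items of one transaction should no longer be dropped independently of each other.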
This may be related to:
When using SDK 2.x, you have to track all events and send the telemetry to Application Insights yourself.
When using auto-instrumentation with the 3.x agent, the agent automatically collects traffic, logs, and so on, and you have to pay attention to the sampling file applicationinsights.json, where you can filter the events.
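For example, with the 3.x Java agent, sampling is controlled in applicationinsights.json; setting the percentage to 100 effectively disables it (a sketch; check the settings for your agent version):

    {
      "sampling": {
        "percentage": 100
      }
    }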
If you are using Java, these are the supported logging libraries:
- java.util.logging
- Log4j, which includes MDC properties
- SLF4J/Logback, which includes MDC properties
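To illustrate the MDC point: values you put into the SLF4J/Logback MDC show up as custom dimensions on the log telemetry. A minimal sketch (the class and key names are made up):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class OrderService {
        private static final Logger log = LoggerFactory.getLogger(OrderService.class);

        public void process(String orderId) {
            MDC.put("orderId", orderId); // becomes a custom dimension on the telemetry
            try {
                log.info("processing order");
            } finally {
                MDC.remove("orderId"); // avoid leaking context to other requests
            }
        }
    }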
Can I access persisted data of running orchestration Instances from the BizTalk database?
My BizTalk application deals with long-running processes and hundreds of orchestration instances can be running at a time.
I would like to access the data persisted by these orchestration instances and display it on my application's UI. The data would give insight into how many instances are running and what state each of them is in.
EDIT :
Let me try to be a little more specific.
My BizTalk application gets ticket requests (messages) from a source and, after checking some business rules, they are assigned to different departments of the company. The tickets can hop between the inboxes of different departments as each department completes its processing.
Now, the BizTalk orchestration instances maintain all the information about which department owns a particular ticket at a given time. I want to read this orchestration information and generate an inbox for each department at runtime. I could certainly push this information to a separate database and populate the UI from there, but since all this useful information is already available in the form of orchestration instances, I would like to utilize it and avoid any syncing issues.
Does it make any sense?
The answer to your specific question is NO.
BAM exists for this purpose exactly.
Yes, it is doable, though your question is a little confusing. You can't get the data which is persisted for your orchestration instance; however, you can get the number of running or dehydrated instances using various options such as WMI or the ExplorerOM library. As a starting point, you can look at the samples provided with the BizTalk installation under SDK\Samples\Admin. You should also look at the MSBTS_ServiceInstance WMI class to get the service instances. There is a sample at http://msdn.microsoft.com/en-us/library/aa561219.aspx. You can also use PowerShell to perform the same operation.
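As a sketch of the WMI route from PowerShell (the class and namespace are from the BizTalk documentation; the status codes below are my reading of it, so verify against your version):

    # List orchestration instances with their status
    # ServiceClass 1 = Orchestration; ServiceStatus 2 = Active, 8 = Dehydrated
    Get-WmiObject -Namespace 'root\MicrosoftBizTalkServer' -Class 'MSBTS_ServiceInstance' |
        Where-Object { $_.ServiceClass -eq 1 } |
        Select-Object InstanceID, ServiceName, ServiceStatus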
We apply unit tests and integration tests, and we practice test-driven and behaviour-driven development.
We also monitor our applications and servers from outside (with dedicated software in our network).
What is missing is some standard for live monitoring inside the application.
To give an example:
There should be a cron-like process inside the application that regularly checks the structural health of our data structures.
We need to monitor that users have done their regular tasks in a way that does not endanger the health of the applications (there are some actions and inputs that we cannot prevent them from doing).
My question is: what is the correct name for this, so I can research it further in the literature? I did a lot of searching, but I almost always find the xUnit and BDD / integration-test material that I already have.
So what is this called, and what is the standard for it in professional application development? I would like to know whether there is some standard structure like xUnit, or whether xUnit libraries could even be used for it. I could not even find appropriate tags for this question, so if you read this and know some better tags, please add them and remove the ones that don't fit.
I need this for applications written in Python, Erlang, or JavaScript; these are mostly server-side applications, web applications, or daemons.
What we are already doing: we created an HTTP gateway inside the applications that reports some information, and this is monitored by our Nagios infrastructure.
I have no problem rolling my own cron-like self-health scheme inside the applications, but I am interested in a professional, standardized way of doing it.
I found this article, which already comes close: Link
It looks like you are asking about approaches to monitoring your application. In general, one can distinguish between active monitoring and passive monitoring.
In active monitoring, you create some artificial user load that mimics real user behavior, and you monitor your application based on the responses to this artificial traffic (active = you actively cause traffic to your application). Imagine that you have a web application which returns the weather forecast for a specific city. For active monitoring, you deploy another application that calls your web application with some predefined request ("get weather for Seattle") every N hours. If your application does not respond within the specified time interval, you trigger an alert.
In passive monitoring, you observe real user behavior over time. You can use log parsing to get the number of (un)successful requests/responses, or inject some code into your application that updates some values in a database whenever a successful or unsuccessful response is returned (passive = you only watch the traffic of real users). Then you can create graphs and check whether there is a significant deviation in user traffic. For example, if at the same time of day one week ago your application served 1000 requests, and today you get only 200 requests, it may indicate a problem with your software.
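To make the active-monitoring idea concrete, a minimal probe might look like this in Python (the URL and the alerting hook are placeholders):

    import time
    import urllib.request

    URL = "http://localhost:8000/weather?city=Seattle"  # placeholder endpoint

    def probe(timeout_seconds=10):
        """Return (ok, latency_seconds); ok is False on error or timeout."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(URL, timeout=timeout_seconds) as response:
                ok = response.status == 200
        except Exception:
            return False, None
        return ok, time.monotonic() - start

    if __name__ == "__main__":
        while True:
            ok, latency = probe()
            if not ok:
                print("ALERT: application did not respond in time")  # hook up Nagios, mail, etc.
            time.sleep(3600)  # run every hour, per the "every N hours" idea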
I need to invoke a long-running task from an ASP.NET page and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered whether it might offer a solution.
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (Workflow Foundation) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be given on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process with MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls, or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement - we needed to know if a message was not received immediately so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
The article I linked to above is a good place to start.
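If you do go the MSMQ route, the send side with System.Messaging (classic .NET Framework) could look roughly like this; the queue path and payload are placeholders:

    using System.Messaging;

    class ImportRequestSender
    {
        const string QueuePath = @".\Private$\ImportRequests"; // placeholder queue

        static void Main()
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            using (var queue = new MessageQueue(QueuePath))
            {
                // The body here is just the file to import; a real request
                // would likely be a serialized object with job details.
                queue.Send(@"\\server\imports\data1.csv", "Import request");
            }
        }
    }

A Windows service (or Workflow service) on the other end would read from the queue and process one import at a time, which also gives you the queueing you asked for.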
Our current design, the one that we went live with, does exactly what you asked about, using a Windows service.
We have a web page to enter messages and pick distribution lists; these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener)
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received the message. (Critical business requirement for us because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web-app could simply poll the database at arbitrary intervals, and update a progress-bar or otherwise tick off tasks as they're completed, whatever is appropriate in the UI.
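As a sketch of that approach (the table and column names are invented), the import process might write progress rows with plain ADO.NET, and the page's polling handler would read them back:

    using System.Data.SqlClient;

    static class ImportProgress
    {
        // Assumed table: ImportStatus(JobId, PercentComplete, Message, UpdatedAt)
        public static void Update(string connectionString, int jobId, int percent, string message)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "UPDATE ImportStatus SET PercentComplete = @pct, Message = @msg, " +
                "UpdatedAt = GETUTCDATE() WHERE JobId = @id", connection))
            {
                command.Parameters.AddWithValue("@pct", percent);
                command.Parameters.AddWithValue("@msg", message);
                command.Parameters.AddWithValue("@id", jobId);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }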