Gantt chart on Application Insights - azure-application-insights

I'm trying to visualize concurrent requests of a given user in a given time frame. I thought that the best way to do so would be using a Gantt chart... How can I do that using Application Insights?

Application Insights can visualize server-side telemetry using a Gantt chart (the dependencies made as part of a request). At the moment, Application Insights doesn't provide a way to visualize client-side telemetry the same way; that feature is on the backlog (no ETA at this point).
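Since there's no built-in client-side Gantt view, one workaround is to pull the raw request telemetry yourself over the Application Insights REST query API and feed it into your own timeline/Gantt rendering. A minimal sketch; the app id and API key are placeholders from the API Access blade, and it assumes the user is recorded in the built-in `user_Id` field (adjust if you track users in a custom dimension):

```python
import json
import urllib.parse
import urllib.request

# Placeholders - both come from the API Access blade of your
# Application Insights resource.
APP_ID = "<your-app-id>"
API_KEY = "<your-api-key>"


def build_query(user_id: str, start: str, end: str) -> str:
    """Analytics (Kusto) query: one user's requests in a time frame."""
    return "\n".join([
        "requests",
        f"| where timestamp between (datetime({start}) .. datetime({end}))",
        f'| where user_Id == "{user_id}"',
        "| project name, timestamp, duration",
    ])


def run_query(query: str) -> dict:
    """Run the query via the Application Insights REST query API."""
    url = (f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query?"
           + urllib.parse.urlencode({"query": query}))
    req = urllib.request.Request(url, headers={"x-api-key": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (needs real credentials):
# rows = run_query(build_query("user-123", "2020-01-01", "2020-01-02"))
# Each row gives a start (timestamp) and length (duration), i.e. one bar
# of a Gantt/timeline chart in your own front end.
```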

Grafana 6.2 and custom metrics in Azure App Insights

I have custom metrics in Azure Application Insights stored as JSON objects.
And I have Grafana version 6.1, which can read and visualize them.
I have upgraded to Grafana v6.2, and these metrics no longer show any data. Why?
Is there a way to troubleshoot Grafana? Any logs about data sources?
To make the answer visible to others, I'm summarizing the answer Mikael shared in a comment:
Restructure my Application Insights data.
From Application Insights, Grafana can easily retrieve customEvents measurements, i.e. numeric values, while customDimensions are slow to retrieve.
It looks like customMetrics cannot be retrieved with filtering alone, only with queries, and queries are very slow.
Conclusion: store your data in customEvents measurements.
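In code, that means putting numbers into the measurements of your custom events (they land in customEvents/customMeasurements) and reserving properties (customDimensions) for string labels. A sketch with an illustrative splitting helper; the SDK call is shown commented out since it needs a real instrumentation key:

```python
def split_telemetry(data: dict) -> tuple:
    """Split a flat record into string properties (customDimensions)
    and numeric measurements (customMeasurements).

    Grafana can chart measurements directly with simple filtering;
    anything buried in customDimensions needs slow queries.
    """
    properties = {k: v for k, v in data.items() if isinstance(v, str)}
    measurements = {k: float(v) for k, v in data.items()
                    if isinstance(v, (int, float)) and not isinstance(v, bool)}
    return properties, measurements


props, meas = split_telemetry({"region": "EU", "orders": 7, "latency_ms": 120.5})

# With the applicationinsights package (pip install applicationinsights):
# from applicationinsights import TelemetryClient
# tc = TelemetryClient("<instrumentation-key>")   # placeholder
# tc.track_event("order_batch", properties=props, measurements=meas)
# tc.flush()
```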

What is Kibana used for / where is it useful?

I’ve read the Kibana website, so I know theoretically what Kibana can be used for, but I’m interested in real-world stories. Is Kibana solely used for log analysis? Can it be used as some kind of BI tool across your actual data set? Interested to hear what kind of applications it's useful in.
Kibana is very useful for visualizing mixed types of data, not just numeric metrics, but also text and geo data. You can use Kibana to visualize:
real-time data about visitors of your webpage
number of sales per region
locations from sensor data
emails sent, server load, most frequent errors
... and many others. There's a plethora of use cases; you only need to feed your data into Elasticsearch (and find an appropriate visualization).
Kibana is basically an analytics and visualization platform which lets you easily visualize data from Elasticsearch and analyze it to make sense of it. You can think of Kibana as an Elasticsearch dashboard where you can create visualizations such as pie charts, line charts, and many others.
There is a nearly infinite number of use cases for Kibana. For example, you can plot your website’s visitors onto a map and show traffic in real time. Kibana is also where you configure change detection and forecasting. You can aggregate website traffic by browser and find out which browsers are important to support based on your particular audience. Kibana also provides an interface to manage authentication and authorization for Elasticsearch. You can literally think of Kibana as a web interface to the data stored in Elasticsearch.
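Feeding data in can be as simple as POSTing JSON documents to Elasticsearch; Kibana then builds its visualizations on top of the index. A stdlib-only sketch, assuming a local unauthenticated Elasticsearch on port 9200 (the index and field names are made up for illustration):

```python
import json
import urllib.request
from datetime import datetime, timezone

ES_URL = "http://localhost:9200"   # assumption: local, unauthenticated node


def make_page_visit(ip: str, page: str, lat: float, lon: float) -> dict:
    """One document mixing text, numbers and geo data - all of which
    Kibana can visualize (maps need the field mapped as geo_point)."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "client_ip": ip,
        "page": page,
        "location": {"lat": lat, "lon": lon},
    }


def index_doc(index: str, doc: dict) -> None:
    """Index a single document; Kibana visualizations query this index."""
    req = urllib.request.Request(
        f"{ES_URL}/{index}/_doc",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# Example (needs a running Elasticsearch):
# index_doc("page-visits", make_page_visit("203.0.113.7", "/pricing", 59.33, 18.06))
```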

How do you kick off an Azure ML experiment based on a scheduler?

I created an experiment within Azure ML Studio and published as a web service. I need the experiment to run nightly or possible several times a day. I currently have azure mobile services and azure web jobs as part of the application and need to create an endpoint to retrieve data from the published web service. Obviously, the whole point is to make sure I have updated data.
I see answers like "use Azure Data Factory", but I need specifics on how to actually set up the scheduler.
I explain my dilemma further here: https://social.msdn.microsoft.com/Forums/en-US/e7126c6e-b43e-474a-b461-191f0e27eb74/scheduling-a-machine-learning-experiment-and-publishing-nightly?forum=AzureDataFactory
Thanks.
Can you clarify what you mean by "experiment to run nightly"?
When you publish the experiment as a web service, it should give you an API key and the endpoint to consume the service. From that point on you should be able to call this API with the key, and it will return the result of processing your input through the model you've initially trained. So all you have to do is make the call from your web/mobile/desktop application at the desired times.
If the issue is retraining the model nightly, to improve the prediction, then that is a different process. It used to be available only through the UI; now you can achieve it programmatically by using the retraining API.
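Calling the published web service from a scheduled job (e.g. one of the Azure WebJobs you already have, set to a nightly schedule) might look like this. The endpoint URL and key are placeholders from the web service dashboard, and the body follows the classic Azure ML Studio request/response format shown on the service's API help page:

```python
import json
import urllib.request

# Placeholders - both come from the web service dashboard.
ENDPOINT = ("https://<region>.services.azureml.net/workspaces/<ws-id>"
            "/services/<service-id>/execute?api-version=2.0&details=true")
API_KEY = "<api-key>"


def build_request(column_names: list, values: list) -> dict:
    """Classic Azure ML Studio request body: one input table."""
    return {
        "Inputs": {
            "input1": {"ColumnNames": column_names, "Values": values},
        },
        "GlobalParameters": {},
    }


def score(body: dict) -> dict:
    """POST the request to the published web service with the API key."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (needs a real endpoint/key), run nightly by your scheduler:
# result = score(build_request(["feature1", "feature2"], [["1.0", "2.0"]]))
```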
You can find the usage of the retraining API here.
Hope this helps!
Mert

Getting detailed step information via MS Band Cloud API OR ...?

I've been putting together a mechanism to sync activity data collected by the MS Band with our backend via the cloud API and getting all the boilerplate setup for the OAuth flows... The intent being to periodically run this data through our backend processes to categorise periods of meaningful walk based activity.
I've been experimenting with the data available, and as far as I can tell we cannot get access to the raw step data (or data at a fine-grained level)? We have successfully been able to request summary info by hour/day; however, this is not fit for our purpose.
What I'd like is to access step data in the form [startTimeStamp,endTimeStamp,stepsTaken,...] where each record represents a continuous period of movement by the wearer.
We would also be able to work with data summarised by minute as this would give enough context to our use case.
Is this possible via the cloud API? or are there any plans to implement the Period "Minute" on the summary API endpoint?
https://api.microsofthealth.net/v1/me/Summaries/Minute?startTime=2015-12-09T14%3A00%3A00.369Z
If this isn't possible perhaps there is another way to make this data available? (via HealthKit on iOS or Fit on Android?)
As a complete alternative, perhaps it might be possible to get the accumulated step data detail from the Band via Bluetooth, in a similar fashion to the native MS Health app?
We already use the SDK to stream realtime Heart Rate data during user cardio sessions, but there appears to be no way to extract the historical step info from the band directly.
Thanks!
The Band itself monitors and logs the steps over time. When syncing, that log is transferred to the cloud via the Microsoft Health app. The app then pulls the "steps for the day" from the Health service.
These logs are not exposed to apps via the SDK. The only way to calculate steps per custom short period yourself is to have your app sample the counter in the background on a frequent enough basis in order to do the calculation.
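For reference, the hour/day summaries that are exposed can be fetched like this. A sketch: the OAuth access token is a placeholder obtained from the flows mentioned in the question, and only coarse periods such as Hourly and Daily are offered (not Minute):

```python
import json
import urllib.parse
import urllib.request

ACCESS_TOKEN = "<oauth-access-token>"   # placeholder from the OAuth flow


def summaries_url(period: str, start_time: str) -> str:
    """Build the Microsoft Health Cloud Summaries URL for a period
    (e.g. 'Hourly' or 'Daily')."""
    return ("https://api.microsofthealth.net/v1/me/Summaries/"
            f"{period}?" + urllib.parse.urlencode({"startTime": start_time}))


def get_summaries(period: str, start_time: str) -> dict:
    """Fetch aggregated summaries (steps etc.) with a bearer token."""
    req = urllib.request.Request(
        summaries_url(period, start_time),
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (needs a valid token):
# hourly = get_summaries("Hourly", "2015-12-09T14:00:00.369Z")
```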

Looking for guidance on WF4

We have a rather large document routing framework that's currently implemented in SharePoint (with a large set of cumbersome SP workflows), and it's running into the edge of what SP can do easily. It's slated for a rewrite into .NET.
I've spent the past week or so reading and watching WF4 discussions and demonstrations to get an idea of WF4, because I think it's the right solution. I'm having difficulty envisioning how the system will be configured, though, so I need guidance on a few points from people with experience:
Let's say I have an approval that has to be made on a document. When the wf starts, it'll decide who should approve, and send that person an email notification. Inside the notification, the user would have an option to load an ASP.NET page to approve or reject. The workflow would then have to be resumed from the send email step. If I'm planning on running this as a WCF WF Service, how do I get back into the correct instance of the paused service? (considering I've configured AppFabric and persistence) I somewhat understand the idea of a correlation handle, but don't think it's meant for this case.
Logging and auditing will be key for this system. I see the AppFabric makes event logs of this data, but I haven't cracked the underlying database--is it simple to use for reporting, or should I create custom logging activities to put around my actions? From experience, which would you suggest?
Thanks for any guidance you can provide. I'm happy to give further examples if necessary.
To send messages to a specific workflow instance you need to set up message correlation between your different Receive activities. In order to do that you need some unique value as part of your message data.
The AppFabric logging works well, but if you want to create a custom logging solution you don't need to add activities to your workflow. Instead, you create a custom TrackingParticipant to do the work for you. How you store the data is then up to you.
Your scenario is very similar to the one I used for the Introduction to Workflow Services Hands On Lab in the Visual Studio 2010 Training Kit. I suggest you take a look at the hands on lab or the Windows Server AppFabric / Workflow Services Demo - Contoso HR sample code.
