How does one search across application insight analytics tables? - azure-application-insights

I'd like to search for all telemetry for a given operation_Id in Application Insights Analytics without having to specify each table (requests, dependencies, exceptions, traces, etc.). I recall there is some way to do this that doesn't show up in IntelliSense, but I'm unable to locate it.

This should work:
union *
| where operation_Id == "<id>"
| take 10

I've found that the following works as well:
search *
| where operation_Id == "<id>"
I'll update with my source once I figure out where I found this
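For completeness: if scanning every table is too slow, Kusto's search operator can also be scoped to a named set of tables, which keeps the cross-table behaviour but limits the work:

```kusto
// search only the tables that carry request-correlated telemetry
search in (requests, dependencies, exceptions, traces) operation_Id == "<id>"
| take 10
```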

Related

Google Log Stackdriver Sink/Router

I am trying to kick off a Google Cloud Function when two tables, ga_sessions and events, have been successfully created in BigQuery (these tables can be created at any time within a gap of 3-4 hours).
I have written the following Stackdriver log sinks/log routers, to which a Pub/Sub topic is subscribed (which in turn kicks off the Google Cloud Function). However, it is not working. If I use a sink/router individually for ga_sessions and events it works fine, but when I combine them together it doesn't work.
So my question is: how do I take two different events from the Stackdriver logs, combine them together, and pass them to the Pub/Sub topic?
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
protoPayload.authenticationInfo.principalEmail="firebase-measurement@system.gserviceaccount.com"
protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.writeDisposition:"WRITE_TRUNCATE"
protoPayload.serviceData.jobCompletedEvent.job.jobStatus.state:"DONE"
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events_intraday"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
protoPayload.authenticationInfo.principalEmail="analytics-processing-dev@system.gserviceaccount.com"
protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions"
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions_intraday"
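For what it's worth, the Cloud Logging filter language does allow grouping conditions with parentheses and OR, so the two sinks above could in principle be collapsed into a single sink whose filter matches either load job; the subscribed Cloud Function would then check for itself that both tables exist before proceeding. An untested sketch:

```
protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
(
  (
    protoPayload.authenticationInfo.principalEmail="firebase-measurement@system.gserviceaccount.com"
    protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events"
    NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events_intraday"
  )
  OR
  (
    protoPayload.authenticationInfo.principalEmail="analytics-processing-dev@system.gserviceaccount.com"
    protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions"
    NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions_intraday"
  )
)
```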
Thanks in advance for your help/guidance.
The trick here is to create a metric that would actually show 1 when both conditions are met.
Try creating a new logs-based metric and switch to the "query editor". There you can create your own metric using MQL.
To create a single metric from two "metrics", you need to use something like this:
{ fetch gce_instance :: compute.googleapis.com/instance/cpu/utilization ;
fetch gce_instance :: compute.googleapis.com/instance/cpu/reserved_cores
} | join | div
And here's some useful info on how to create an alerting policy using MQL.
The code for the alerting policy can look like this:
{ fetch gce_instance :: compute.googleapis.com/instance/cpu/utilization ;
fetch gce_instance :: compute.googleapis.com/instance/cpu/reserved_cores
}
| join | div
| condition val() > 1
This is just an example to demonstrate that it is very likely possible to create a metric that monitors the creation of the BigQuery tables, but you will have to test it yourself.

Why is the same query syntax not supported in Log Analytics and Application Insight Logs?

I am trying to figure out why the same query is not valid in both a Log Analytics and Application Insights workspace.
I've been working on creating a cross-resource query and when I write the syntax in Log Analytics it has a syntax error around the workspace operator. It is successful when I do the same thing in an Application Insights query.
The query looks like this:
union
workspace("DefaultWorkspace-b432aa91-rrrr-qqqq-zzzz-aabbba7e8f42-WUS2").SecurityEvent
,workspace("DefaultWorkspace-fca02198a-aaaa-eeee-cccc-aaad9fbf7302-EUS").SecurityEvent
| count
Since the query references other workspaces in both cases, I would think it would be portable when run under the same tenant (which it is). In Azure Log Analytics it gives me the error:
Unknown function: 'workspace'.
I am running these in the Azure portal at the moment.
Can you try adding a space after the comma? This query is working for my own workspaces.
union
workspace("DefaultWorkspace-b432aa91-rrrr-qqqq-zzzz-aabbba7e8f42-WUS2").SecurityEvent
, workspace("DefaultWorkspace-fca02198a-aaaa-eeee-cccc-aaad9fbf7302-EUS").SecurityEvent
| count
This is not a direct answer, but suggestions.
As far as I know, the error "Unknown function: 'workspace'." occurs when the query is missing the table name after workspace("xxx").
So first, make sure your query adds a table name after workspace("xxx"). I notice that you're using the correct syntax in your query, but I just want to make sure the table name is there.
Second, if you're adding a table name after workspace("xxx") and still get this error, try the simple query below to check whether workspace("xxx") works at all:
workspace("adsmit-test").Heartbeat
| count
Please feel free to let me know if you still have the issue.
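A related note: Azure Monitor cross-resource queries use two different scoping expressions, workspace() for Log Analytics workspaces and app() for classic Application Insights resources, so from either experience you can reach the other kind of resource. A quick smoke test (the resource name here is illustrative):

```kusto
app("my-appinsights-resource").requests
| count
```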
I tried the same statement two weeks later in both an Application Insights and a Log Analytics workspace, and it works in both.

Search option in Crashlytics/Firebase where I can search by the name of the crash?

Is there a search option in Crashlytics/Firebase where I can search by the name of the crash? For example:
java.lang.IllegalStateException: Expected BEGIN_ARRAY but was STRING at line 1 column 3
Is there a search option where, if I search for Expected BEGIN_ARRAY, I get all the errors containing that string?
I have searched everywhere but didn't find anything.
As a follow-on from Mike Bonnell's answer, once you've linked Firebase to BigQuery, you'll want to run something like the following (in standard SQL; the exported table name is derived from your app's package name plus a platform suffix, so adjust it to match your project):
SELECT *
FROM `your_project.firebase_crashlytics.com_example_yourpackage_ANDROID`,
  UNNEST(exceptions) AS e
WHERE e.exception_message LIKE '%Expected BEGIN_ARRAY%'
A full list of the Crashlytics BigQuery export fields is available in the docs.
Mike from Firebase here. You can link BigQuery and Firebase Crashlytics in order to get full custom search and data analysis.

Correct Firebase database layout for a user to user (and group) chat app?

I'm trying to understand the best database structure to store and retrieve user to user conversations using the Firebase database for a chat app (web based).
My current plan is to give each chat its own ID, created by combining the unique Firebase IDs of the two chat participants (like UserID1_UserID2), for example: FQ5d0jwLQDcQLryzevBxKrP72Bb2_GSIbxEMi4jOnWhrZaq528KJKDbm8. This chat ID would be stored in the database and would contain the messages sent between the two participants.
Example layout:
MYAPP
|_______conversations
| |_____UserID1_UserID2
| | |
| | |__OshwYF72Jhd9bUw56W7d
| | | |__name:"Jane"
| | | |__text:"Hello!"
| | |
| | |__KbHy4293dYgVtT9pdoW
| | |__PS8tgw53SnO892Jhweh
| | |__Qufi83bdyg037D7RBif
| | |__Gicuwy8r23ndoijdakr
| |
| |_____UserID5_UserID16
| |_____UserID8_UserID7
| |_____UserID3_UserID8
|
|_______users
Whenever a user signs into the app, they'll see a list of their contacts. When they select one to chat with, I would use some JavaScript to combine their own and their selected friend's Firebase IDs to generate the chat ID. This chat ID would then either be created in the database (if it's their first time chatting), or be used to load the previous messages they have exchanged (if they have chatted before).
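One detail worth making explicit: for both clients to derive the same chat ID, the two UIDs must be combined in a deterministic order, e.g. sorted lexicographically before joining. A minimal sketch (the function name is illustrative):

```javascript
// Build a deterministic conversation ID from two Firebase UIDs.
// Sorting first guarantees both participants compute the same ID,
// regardless of who opens the chat.
function chatIdFor(uidA, uidB) {
  return [uidA, uidB].sort().join('_');
}

// Both orderings yield the same ID:
console.log(chatIdFor('GSIbxEMi4jOnWhrZaq528KJKDbm8', 'FQ5d0jwLQDcQLryzevBxKrP72Bb2'));
console.log(chatIdFor('FQ5d0jwLQDcQLryzevBxKrP72Bb2', 'GSIbxEMi4jOnWhrZaq528KJKDbm8'));
// both print "FQ5d0jwLQDcQLryzevBxKrP72Bb2_GSIbxEMi4jOnWhrZaq528KJKDbm8"
```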
My question is, is this the correct method to use? What issues might I run into if I use this method? For example, would I have problems if I try to implement group conversations (with more than 2 people) in the future?
I'd be really grateful for any help, or examples of the correct database layout logic for a person to person (and group) chat application using Firebase/a no SQL database.
Thank you in advance!
Something that I would like to point out as one of the most important "rules" to consider when creating a NoSQL database is that you must
Structure your data after the view.
This means that the data must be structured in such a way, that when you want to display it on your view (probably your html pages) you do a single read.
So in order to find the best way to structure your database, you must first look at your view. And try to imagine how you would read data (direct reads and queries).
Although your current structure looks good (for what you're building now), yes, you might have some problems when creating group chats. I would recommend using something like this:
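Something along these lines, with a participants map under each chat and the messages moved out into their own top-level node (all node and field names here are illustrative):

```
MYAPP
|_______chats
|        |_____chatID1
|        |       |__title: "Weekend plans"
|        |       |__participants
|        |              |__UserID1: true
|        |              |__UserID2: true
|        |              |__UserID5: true
|
|_______messages
|        |_____chatID1
|                |__OshwYF72Jhd9bUw56W7d
|                       |__senderId: "UserID1"
|                       |__text: "Hello!"
|
|_______users
```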
You will notice that this way, you can easily add more participants to each chat. Note that this isn't supposed to be your only node. You would have to create other nodes like users (to store the user details) and messages (to store each chat's messages), etc.
To help you with that, I recommend watching David East's Firebase Database For SQL Developers.

Tying an Application Insights metric to an Operation Id

Configuration: I have a metric reporting the request duration, and I have two custom events set up to record the start time and end time; each event is filled out with pertinent information for the request.
Problem: I have a metric which is reporting long request durations, but the list of insights does not make it easy to correlate the metric with the events and dependencies for the operation.
I would like to either find the duration between the two events to identify which operations are taking long, or assign an Operation Id to a metric which would then allow me to filter the list of insights to ones that have high durations.
I can suggest 2 approaches.
Approach #1:
Open Search in the Azure portal and filter requests by performance bucket. You can then click on each search result to view correlated events. If the predefined buckets don't work for you, you can assign your own using a telemetry initializer in the SDK (please let us know if the predefined buckets don't work).
Approach #2
Use an Analytics join query to find telemetry items with the chosen operation ID, for example:
requests
| where duration > 5000
| project operation_Name, operation_Id, duration
| join (traces | project operation_Id, message) on operation_Id
| project operation_Name, message
| limit 10
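For the other option mentioned in the question, deriving the duration between the start and end custom events per operation, a summarize along these lines could work (the event names RequestStart and RequestEnd are assumptions; substitute your own):

```kusto
customEvents
| where name in ("RequestStart", "RequestEnd")
| summarize startTime = minif(timestamp, name == "RequestStart"),
            endTime   = maxif(timestamp, name == "RequestEnd") by operation_Id
| extend durationMs = datetime_diff("millisecond", endTime, startTime)
| top 10 by durationMs desc
```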
