Configuration: I have a metric reporting the request duration, and I have two custom events set up to record the start time and end time; each event is filled with pertinent information for the request.
Problem: The metric is reporting long request durations, but the list of insights makes it hard to correlate the metric with the events and dependencies for the operation.
I would like to either find the duration between the two events to identify which operations are taking a long time, or assign an Operation Id to the metric, which would then allow me to filter the list of insights down to the ones with high durations.
I can suggest 2 approaches.
Approach #1:
Open Search in the Azure portal and filter requests by performance bucket. You can then click each search result to view the correlated events. If the predefined buckets don't work for you, you can assign your own using a telemetry initializer in the SDK, as sketched below (and please let us know if the predefined buckets don't cover your case).
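For example, a minimal sketch assuming the JavaScript SDK (@microsoft/applicationinsights-web); the "perfBucket" property name and the 5000 ms threshold are placeholders of mine, not an Application Insights convention:

import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: { connectionString: "<your-connection-string>" },
});
appInsights.loadAppInsights();

appInsights.addTelemetryInitializer((item) => {
  // Dependency/request telemetry carries a duration in its base data.
  const duration = (item.baseData as any)?.duration;
  if (typeof duration === "number") {
    item.data = item.data || {};
    item.data.perfBucket = duration > 5000 ? "slow" : "fast"; // custom property
  }
});

You can then filter on perfBucket in Search much like the built-in bucket.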
Approach #2:
Use an Analytics join query to find the telemetry items with a chosen operation ID, for example:
requests
| where duration > 5000
| project operation_Name, operation_Id, duration
| join (traces | project operation_Id, message) on operation_Id
| project operation_Name, message
| limit 10
I am trying to kick off a Google Cloud Function when two tables, ga_sessions and events, have been successfully created in BigQuery (these tables can be created at any time within a gap of 3-4 hours).
I have written the following Stackdriver log sink/log router, to which a Pub/Sub topic is subscribed (which in turn kicks off the Google Cloud Function). However, it is not working. If I use a sink/router individually for ga_sessions and for events, it works fine, but when I combine them together it doesn't work.
So my question is: how do I take two different events from Stackdriver logging, combine them, and pass them to the Pub/Sub topic?
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
protoPayload.authenticationInfo.principalEmail="firebase-measurement#system.gserviceaccount.com"
protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.writeDisposition:"WRITE_TRUNCATE"
protoPayload.serviceData.jobCompletedEvent.job.jobStatus.state:"DONE"
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"events_intraday"

protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.datasetId="my_dataset"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.projectId="my-project"
protoPayload.authenticationInfo.principalEmail="analytics-processing-dev#system.gserviceaccount.com"
protoPayload.methodName="jobservice.jobcompleted"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions"
NOT protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId:"ga_sessions_intraday"
Thanks in advance for your help/guidance.
The trick here is to create a metric that would actually show 1 when both conditions are met.
Try creating a new logs-based metric and switch to the "query editor". There you can create your own metric using MQL (Monitoring Query Language).
To create a single metric out of two "metrics", you need something like this (join aligns the two time series on their common labels, and div then divides one value by the other):
{
  fetch gce_instance :: compute.googleapis.com/instance/cpu/utilization ;
  fetch gce_instance :: compute.googleapis.com/instance/cpu/reserved_cores
}
| join | div
The Cloud Monitoring documentation also has useful info on how to create an alerting policy using MQL.
The code for the alerting policy can look like this:
{
  fetch gce_instance :: compute.googleapis.com/instance/cpu/utilization ;
  fetch gce_instance :: compute.googleapis.com/instance/cpu/reserved_cores
}
| join | div
| condition val() > 1
This is just an example built on CPU metrics, but it demonstrates the pattern; it is very likely possible to create a similar metric that monitors the creation of both BigQuery tables (for instance, from two logs-based metrics built on the filters above), but you will have to test it yourself.
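If the alerting policy's notification channel is a Pub/Sub topic, a Cloud Function subscribed to that topic can then kick off the downstream work. A hedged sketch, with the function name and payload handling as assumptions to adapt rather than a verified recipe:

import * as functions from "@google-cloud/functions-framework";

functions.cloudEvent("onBothTablesReady", (cloudEvent: any) => {
  // Pub/Sub CloudEvents carry the message payload base64-encoded.
  const message = cloudEvent.data?.message;
  const payload = message?.data
    ? JSON.parse(Buffer.from(message.data, "base64").toString())
    : {};
  // Monitoring notifications include the incident state; act only when it opens.
  if (payload.incident?.state === "open") {
    console.log("Both ga_sessions and events appear ready; starting the job.");
    // ...trigger your processing here...
  }
});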
As part of the Application Insights custom telemetry I create and send from my application, I can provide custom properties with Events that I choose to track. Those are then available later within the normal AI UI or Analytics interface to query against. Similarly, when a user begins a session, I can use the AI API to set the app-defined user identifier or an app-defined session identifier.
But is there a way to combine the two? For example, is there a way that I could set a custom property for a given user (such as an audience or role she is part of)? Or a way to set a custom property for a given user session (perhaps the connection type or the company branch office they are in)? There are plenty of predefined user- and session-related properties that AI implicitly associates with each user session (like city, country, device, etc.).
I would really like to set properties like these once for that session (or user) and then be able to associate other activities during that user session with these properties (such as custom events, metrics, trace entries, etc.). What I need to avoid is having to set such properties on every event, trace, or metric logged (e.g., with an ITelemetryInitializer), because I've got about 25 different ASP.NET apps instrumented on the client and server side and a couple of separate SaaS apps instrumented only on the client side. Introducing custom extensions and then continually and repeatedly determining the custom properties to add to everything logged would be a monumental undertaking across a lot of teams.
Is this possible? If so, how? I haven't been able to find any mention of it in the API documentation and Intellisense snooping in the C# API has similarly turned up nothing obvious. (e.g., with Microsoft.ApplicationInsights.Channel.ITelemetry.Context.Session or .User)
Yes, you can set a property once per session and then use a join to associate it with the rest of the events.
For instance, the query below counts events per session and then associates this count with the custom property. After that, it can be piped into further aggregations if needed.
let events = customEvents
    | where timestamp > ago(1d);
events
| summarize count() by session_Id
| join kind=inner (
    events
    | where name == "MySingleEventPerSession"
    | summarize any(*) by session_Id
) on session_Id
| project count_, any_customDimensions.MyCustomProperty, session_Id
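For completeness, a sketch of how that single per-session event might be emitted, assuming the JavaScript SDK; the event name matches the query above, while "MyCustomProperty" and its value are placeholders:

import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: { connectionString: "<your-connection-string>" },
});
appInsights.loadAppInsights();

// Track once, right after the session context is established (e.g. at login).
appInsights.trackEvent(
  { name: "MySingleEventPerSession" },
  { MyCustomProperty: "branch-office-42" } // custom property for this session
);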
I am trying to automate a test that validates GA events.
My approach is:
- Use the Google Analytics real-time reporting API.
- Before the test ends, hit this API and collect the last 30 minutes of data.
- This data is a huge chunk of formatted JSON string,
- and in this string, search for the GA events that were supposed to be pushed.
This approach seems inefficient.
My issue is finding the analytics data that corresponds to the test user.
Each user has a unique user ID, so I am trying to build the request such that the API returns data filtered on some custom dimension, e.g. "custom:user_id='user_unique_id'".
Is it possible to get all data matching a condition like 'custom:user_id="XYZ"'?
Please advise how to get all GA event data for a specific event label / custom dimension. Also, does it support dimensionFilterClauses like Reporting API v4 does?
You can do it by filtering, e.g. rt:eventCategory==ProductPage.
Earlier I was using quotes (rt:eventCategory=='ProductPage'), which isn't supported.
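For reference, a hedged sketch of such a filtered real-time query with the googleapis Node client; the view ID, metric, and dimension names are placeholders to adapt:

import { google } from "googleapis";

async function fetchRealtimeEvents() {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/analytics.readonly"],
  });
  const analytics = google.analytics({ version: "v3", auth });
  const res = await analytics.data.realtime.get({
    ids: "ga:XXXXXXXX",                       // your view (profile) ID
    metrics: "rt:totalEvents",
    dimensions: "rt:eventCategory,rt:eventLabel",
    filters: "rt:eventCategory==ProductPage", // note: no quotes around the value
  });
  console.log(res.data.rows);
}

fetchRealtimeEvents().catch(console.error);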
I'm trying to understand the best database structure to store and retrieve user to user conversations using the Firebase database for a chat app (web based).
My current plan is to give each chat its own ID, created by combining the unique Firebase IDs of the two chat participants (like UserID1_UserID2), for example: FQ5d0jwLQDcQLryzevBxKrP72Bb2_GSIbxEMi4jOnWhrZaq528KJKDbm8. This chat ID would be stored in the database and would contain the messages sent between the two participants.
Example layout:
MYAPP
|_______conversations
| |_____UserID1_UserID2
| | |
| | |__OshwYF72Jhd9bUw56W7d
| | | |__name:"Jane"
| | | |__text:"Hello!"
| | |
| | |__KbHy4293dYgVtT9pdoW
| | |__PS8tgw53SnO892Jhweh
| | |__Qufi83bdyg037D7RBif
| | |__Gicuwy8r23ndoijdakr
| |
| |_____UserID5_UserID16
| |_____UserID8_UserID7
| |_____UserID3_UserID8
|
|_______users
Whenever a user signs into the app, they'll see a list of their contacts. When they select one to chat with, I would use some JavaScript to combine their Firebase ID with their selected friend's to generate the chat ID, as sketched below. This chat ID would then either be created in the database (if it's their first time chatting) or be used to load the messages they have previously exchanged.
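A minimal sketch of that ID scheme; sorting the two UIDs first matters, so both participants derive the same chat ID regardless of who opens the chat:

function chatIdFor(uidA: string, uidB: string): string {
  return [uidA, uidB].sort().join("_");
}

// e.g. chatIdFor(myUid, friendUid) === chatIdFor(friendUid, myUid)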
My question is, is this the correct method to use? What issues might I run into if I use this method? For example, would I have problems if I try to implement group conversations (with more than 2 people) in the future?
I'd be really grateful for any help, or examples of the correct database layout logic for a person-to-person (and group) chat application using Firebase / a NoSQL database.
Thank you in advance!
Something that I would like to point out as one of the most important "rules" to consider when creating a NoSQL database is that you must
Structure your data after the view.
This means that the data must be structured in such a way that, when you want to display it in your view (probably your HTML pages), you can do a single read.
So in order to find the best way to structure your database, you must first look at your view and try to imagine how you would read the data (direct reads and queries).
Although your current structure looks good (for what you're building now), yes, you might have some problems when creating group chats. I would recommend using something like this:
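Here is the kind of layout I mean, in the same style as above; node and key names are just examples. Each chat's participants live under a separate members node, keyed by chat ID:

MYAPP
 |_______chats
 |          |_____chatId1
 |          |        |__title: "Weekend plans"
 |          |        |__lastMessage: "Hello!"
 |          |        |__timestamp: 1459361875337
 |          |
 |          |_____chatId2
 |
 |_______members
            |_____chatId1
                     |__UserID1: true
                     |__UserID2: true
                     |__UserID3: true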
You will notice that this way you can easily add more participants to each chat. Note that this isn't supposed to be your only node. You would also have to create other nodes like users (to store the user details) and messages (to store each chat's messages), etc.
To help you with that, I recommend watching David East's Firebase Database For SQL Developers.
I have a 6NF-esque schema in part of my database, where any time a property value is changed, a new row is created with the CURRENT_TIMESTAMP. For example:
+----------+-------+---------+
| EntityID | Value | TimeSet |
+----------+-------+---------+
| 1 | foo | 1:30 PM |
+----------+-------+---------+
| 1 | bar | 1:31 PM |
+----------+-------+---------+
So, the PK is EntityID, TimeSet (TimeSet is a MySQL TIMESTAMP - I just used readable values for the example). Any GET requests will SELECT the latest value for the entity only (i.e. GET /entities/1/<property> would return bar only).
As of right now, there are no behaviors that depend on the time set; it's just there for auditing. My question is: when I want to set values for this attribute over HTTP, should I be using PUT or POST? Technically, a new row is created every time the user sends a value, but from the standpoint of the API the request is idempotent, because you could create 100 rows of the same value and only the most recent one will be returned by any GET request.
Maybe this can help you:
The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line. POST is designed to allow a uniform method to cover the following functions:
- Annotation of existing resources;
- Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
- Providing a block of data, such as the result of submitting a form, to a data-handling process;
- Extending a database through an append operation.
The PUT method requests that the enclosed entity be stored under the supplied Request-URI.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
You should be looking at things from a resource perspective. Though you are just updating a value with a timestamp, you are actually creating a new resource on the server, not modifying the old one. Returning the latest timestamped resource is part of your business logic and shouldn't be confused with the choice between PUT and POST.
So, the right answer is to use a POST request.
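A hedged sketch of the resulting interaction; the paths mirror the question's /entities/1/<property>, and "status" is a placeholder property name:

async function setAndReadBack() {
  // POST appends a new row; the server stamps it with CURRENT_TIMESTAMP.
  await fetch("https://api.example.com/entities/1/status", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ value: "bar" }),
  });

  // GET still returns only the most recent row, per the business logic above.
  const res = await fetch("https://api.example.com/entities/1/status");
  console.log(await res.json()); // => the latest value, e.g. { value: "bar" }
}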