Create an Application Insights log query from the Azure CLI, not a Log Analytics saved search

Using the latest Azure CLI (2.28.1).
The creation of Kusto queries against Log Analytics with the Azure CLI is documented here:
https://learn.microsoft.com/en-us/cli/azure/monitor/log-analytics/workspace/saved-search?view=azure-cli-latest
using the saved-search directive. A minor irritation is that the CLI always creates legacy categories rather than non-legacy ones, and tags are sometimes not applied correctly.
But what I cannot find is how to create queries against Insights with the CLI. I have combed the Microsoft docs without a hit. Insights is a subset of Log Analytics (Monitor), but the queries are stored separately. Alerts can target both resource types (i.e. Insights and Log Analytics).
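For reference, the saved-search creation looks roughly like this (a sketch based on the linked docs; the resource names and the query are placeholders):
az monitor log-analytics workspace saved-search create \
  --resource-group <rg> --workspace-name <workspace> \
  --name MySavedSearch --category "Log Management" \
  --display-name "My saved search" \
  --saved-query "<kusto query>"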

You need to use the az monitor app-insights query command to run Kusto queries against Application Insights with the Azure CLI.
We have tested this in our environment. Using the commands below, you can pull the appId for an Application Insights component and also the total number of requests for that application over a one-day period.
az monitor app-insights component show --app <app-insightsName> --resource-group <resource-Name> --query appId
az monitor app-insights query --app <appId> --analytics-query 'requests | summarize count() by bin(timestamp, 24h)' --offset 1h30m
Here is the reference document with more information about running Application Insights analytics queries using the Azure CLI.

With Bicep, resource definitions can be written in a template, compiled to JSON (az bicep build --file <bicep file>), and then deployed with the Azure CLI (az deployment group create --resource-group <name> --template-file <bicep generated template>).
The hard part was modelling parent and child resources in Bicep: you need a parent query pack and a child query:
resource querypack 'Microsoft.OperationalInsights/queryPacks@2019-09-01-preview' = {
  name: 'DefaultQueryPack'
  location: 'northeurope'
  properties: {}
}

resource query 'Microsoft.OperationalInsights/queryPacks/queries@2019-09-01-preview' = {
  parent: querypack
  name: '6967c00c-9b46-4270-bee0-5a27b8b85cef'
  properties: {
    displayName: 'BadEventsBySdcFileId'
    description: ''
    body: '<kusto query>'
    related: {
      categories: [
        'applications'
      ]
      resourceTypes: [
        'microsoft.insights/components'
      ]
    }
    tags: {}
  }
}
Also, the query resource name has to be a GUID, which is not at all clear from the documentation. Tags are helpful for grouping queries by topic when hunting around for, say, the queries that belong to a project domain.
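If you'd rather not hard-code that GUID, Bicep's built-in guid() function can derive a deterministic one from string inputs, so the name in the query resource above could instead be written like this (a sketch; the seed strings are arbitrary):
  // deterministic GUID derived from the resource group and a fixed seed,
  // so redeployments keep targeting the same query resource
  name: guid(resourceGroup().id, 'BadEventsBySdcFileId')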

Can I use Kusto.Explorer to KQL query an Application Insights instance?

Update July 13, 2021
The links used below are now partially obsolete. Here is the new section on language differences.
Original post
On Azure Portal, in my App Insights / Logs view, I can query the app data like this:
app('my-app-name').traces
The app function is described in the article app() expression in Azure Monitor query.
Kusto.Explorer doesn't understand the app() function, which appears to be explained by the fact that it is one of the Additional operators in Azure Monitor.
How can I query my App Insights / Logs with Kusto.Explorer? I cannot use cluster, as it is one of the functions not supported in Azure Monitor.
Relevant doc: Azure Monitor log query language differences
Note on troubleshooting joins
(added December 16, 2021)
Pro-tip from Kusto team:
If you are querying Application Insights from Kusto.Explorer and your joins to normal clusters fail with a bad gateway or another unexpected error, consider adding hint.remote=left to your join, like:
tableFromApplicationInsights
| join kind=innerunique hint.remote=left tableFromNormalKustoCluster
We have a private preview of the Azure Data Explorer (ADX) Proxy that enables you to treat Log Analytics / Application Insights as a virtual cluster, query it using ADX tools, and connect to it as a second cluster in a cross-cluster query. Since it is a private preview, you need to contact adxproxy@microsoft.com to get enrolled. The proxy is documented at https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data.
(Disclaimer: I'm the PM driving this project.)
Step 1 Connection String
Build your connection string from this template:
https://ade.applicationinsights.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>
Fill in the subscription-id, resource-group-name, and ai-app-name from the portal.
Step 2 Add the connection to Kusto.Explorer
Open Kusto.Explorer, choose Add Connection, and paste your connection string into the Cluster connection field.
After you hit OK, Windows will prompt you to log in with your Azure Active Directory account. Once you have authenticated, Kusto.Explorer will display the Application Insights tables in the Connections panel.
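Once connected, you can query the Application Insights tables directly, without the app() wrapper. A minimal sketch, assuming the standard traces table:
traces
| where timestamp > ago(1h)
| take 10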

Deploying functions in multiple regions with a unified codebase

I have a fairly simple requirement: I need an identical replica of my Firebase functions, bucket, and Firestore database in multiple regions so that data does not move between regions; one for the EU, GB, US, etc.
Because you can only have one Firestore database per Firebase project, I'm creating a new project per region, as recommended. I can redirect my bucket reads/writes to the correct region by using the Firebase project environment variables to define which bucket to write to. So far so good.
Now the last bottleneck is the functions themselves.
The problem is that by default functions go to "us-central1" rather than the Firebase project region, so it seems the only way to specify the region is the .region("europe-west3") specifier in the codebase. But because I want a single unified codebase across all projects, changing this on a per-project basis is a bit cumbersome.
Any suggestions on how best to manage this?
You can store the region value as an environment config using firebase functions:config:set:
firebase functions:config:set env.region="us-central1"
After running functions:config:set, you must redeploy functions to make the new configuration available.
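For example:
firebase deploy --only functions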
Then get the environment variable inside your function by using this code:
// config is a function, so it must be called before accessing .env.region
exports.myFunction = functions
  .region(functions.config().env.region)
  .https.onRequest((req, res) => {
    res.send("Hello");
  });
With this, you no longer need to change the region inside your code. But you still have to update the config and redeploy the functions on every project if you want to change regions.
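To double-check what a given project will use, you can print its current config (a sketch; the output depends on what you have set):
firebase functions:config:get
# {
#   "env": {
#     "region": "us-central1"
#   }
# }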
An additional solution is to use Cloud Build to automate everything. I haven't fully tested it yet, but here's what I came up with.
First, follow the instructions on how to use the Firebase builder tool. You need this community-provided image to run Firebase CLI commands on Cloud Build. Once finished, make sure that you have the following APIs enabled on your projects:
Cloud Resource Manager API
Firebase Management API
and that, in your Cloud Build settings, Firebase Admin is enabled.
Then try this cloudbuild.yaml file. Cloud Build will use the builder image from your first project to deploy to all of your projects:
steps:
  # Set up the first project
  - name: gcr.io/project-id1/firebase
    id: 'Use Project 1'
    args: ['use', 'project-id1']
  - name: gcr.io/project-id1/firebase
    id: 'Set Firebase Environment Config'
    args: ['functions:config:set', 'env.region=us-central1']
  - name: gcr.io/project-id1/firebase
    id: 'Deploy function1'
    args: ['deploy', '--project=project-id1', '--only=functions']
  # Set up the second project
  - name: gcr.io/project-id1/firebase
    id: 'Use Project 2'
    args: ['use', 'project-id2']
  - name: gcr.io/project-id1/firebase
    id: 'Set Firebase Environment Config'
    args: ['functions:config:set', 'env.region=europe-west3']
  - name: gcr.io/project-id1/firebase
    id: 'Deploy function2'
    args: ['deploy', '--project=project-id2', '--only=functions']
  # And so on...
Note: Replace project-id1 and project-id2 with your actual project IDs.
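To run the pipeline, submit it with the gcloud CLI from the directory containing your functions source, for example:
gcloud builds submit --config cloudbuild.yaml .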

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment using Python.
The Endpoints API works fine for non-db requests and for db requests when run locally. But the deployed API produces this result whenever DB access is required:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file and use it with psycopg2 SQL strings. This code works on Compute Engine servers in the same project.
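For reference, the connection is made roughly like this (a sketch with placeholder values; the real values come from the config file):
import psycopg2

conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="password",
    # App Engine flexible exposes Cloud SQL via a Unix socket under /cloudsql
    host="/cloudsql/cloudsql-project-id:us-east1:instance-id",
)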
I also double-checked that the Endpoints project's service account was given Cloud SQL Editor access in the project with the PostgreSQL db. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e. not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the Endpoints log file, and there's nothing in the Cloud SQL log file.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but the formatting would really hurt readability, so I will keep updating this answer.)
I am trying to reproduce your error and I have some questions:
How are you handling the environment variables from the tutorials? Have you hard-coded them, or are you setting them in the shell? They are reset with the Cloud Shell (if you are using Cloud Shell).
This is not clear to me: do you see any log entries at all in Cloud SQL (even ones without errors), or no logs whatsoever?
The Cloud SQL, app.yaml, and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful not to include usernames, passwords, or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to this in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post to better understand where the issue comes from.
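In the meantime, one way to get more detail than the BAD_GATEWAY envelope is to read the App Engine flexible logs directly with the gcloud CLI (a sketch; adjust the service name and limit):
gcloud app logs read --service=default --limit=50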

Does Firebase Realtime Database REST API support multi path updates at different entity locations?

I am using the REST API of the Firebase Realtime Database from an App Engine Standard project in Java. I can successfully put data at different locations; however, I don't know how to ensure atomic updates across different paths.
To put some data separately at a specific location I am doing:
requestFactory.buildPutRequest("dbUrl/path1/17/", new ByteArrayContent("application/json", json1.getBytes())).execute();
requestFactory.buildPutRequest("dbUrl/path2/1733455/", new ByteArrayContent("application/json", json2.getBytes())).execute();
Now, to ensure that whenever /path1/17/ is saved, /path2/1733455/ is also saved, I've been looking into multi-path updates and batched writes (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes, only available in Cloud Firestore?). However, I could not find whether this feature is also available in the REST API of the Firebase Realtime Database, or only through the Firebase Admin SDK.
The example here shows how to do a multi path update at two locations under the "users" node.
curl -X PATCH -d '{
"alanisawesome/nickname": "Alan The Machine",
"gracehopper/nickname": "Amazing Grace"
}' \
'https://docs-examples.firebaseio.com/rest/saving-data/users.json'
But I don't have a common upper node for path1 and path2.
I tried setting the URL to the database URL without any nodes (https://db.firebaseio.com.json) and adding the nodes in the JSON object sent, but I get an error: nodename nor servname provided, or not known.
This would be possible with the Admin SDK I think, according to this blog post: https://firebase.googleblog.com/2015/09/introducing-multi-location-updates-and_86.html
Any ideas if these atomic writes can be achieved with the REST API?
Thank you!
If the updates are going to a single database, there is always a common path.
In your case you'll run the PATCH command against the root of the database:
curl -X PATCH -d '{
"path1/17": json1,
"path2/1733455": json2
}' 'https://yourdatabase.firebaseio.com/.json'
The key difference from your URL is the / before .json. Without it you're trying to connect to a domain under the .json TLD, which doesn't exist (yet) as far as I know.
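In Java, with the same request factory as in the question, the equivalent call might look like this (a sketch; yourdatabase and the JSON values are placeholders, and since some Java HTTP transports do not support PATCH natively, this uses the X-HTTP-Method-Override header that the Firebase REST API accepts):
// Both writes are sent in one request against the database root,
// so they are applied atomically.
String body = "{ \"path1/17\": " + json1 + ", \"path2/1733455\": " + json2 + " }";
HttpRequest request = requestFactory.buildPostRequest(
    new GenericUrl("https://yourdatabase.firebaseio.com/.json"),
    new ByteArrayContent("application/json", body.getBytes()));
// Tell Firebase to treat this POST as a PATCH.
request.getHeaders().set("X-HTTP-Method-Override", "PATCH");
request.execute();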
Note that the documentation link you provide for Batched Updates is for Cloud Firestore, which is a completely separate database from the Firebase Realtime Database.

Referencing a Managed Service Identity in ARM-template deploy

When deploying a Microsoft.Web resource with the new MSI feature, the principalId GUID for the created identity is visible in the ARM template output after deployment.
What would be the best way to fetch this GUID later in the pipeline to be able to assign access rights in (for instance) Data Lake Store?
Is it possible to use any of the existing ARM template functions to do so?
I just struggled with this myself. The solution that worked for me was found deep in the comments here.
Essentially, you create a variable targeting the resource you are creating with MSI support. Then you can use the variable to fetch the specific tenantId and principalId values. Not ideal, but it works. In my examples, I'm configuring Key Vault permissions for a Function App.
To create the variable, use the syntax below.
"variables": {
"identity_resource_id": "[concat(resourceId('Microsoft.Web/sites', variables('appName')), '/providers/Microsoft.ManagedIdentity/Identities/default')]"
}
To get the actual values for the tenantId and principalId, reference them with the following syntax:
{
  "tenantId": "[reference(variables('identity_resource_id'), '2015-08-31-PREVIEW').tenantId]",
  "objectId": "[reference(variables('identity_resource_id'), '2015-08-31-PREVIEW').principalId]"
}
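For context, here is roughly how those values plug into a Key Vault access policy; a sketch of just the relevant fragment (the surrounding Microsoft.KeyVault/vaults resource and the permission list are illustrative):
"accessPolicies": [
  {
    "tenantId": "[reference(variables('identity_resource_id'), '2015-08-31-PREVIEW').tenantId]",
    "objectId": "[reference(variables('identity_resource_id'), '2015-08-31-PREVIEW').principalId]",
    "permissions": {
      "secrets": [ "get", "list" ]
    }
  }
]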
Hope this helps anyone who comes along with the same problem!
Here are a few sample templates, https://github.com/rashidqureshi/MSI-Samples, that show a) how to grant RBAC access to ARM resources and b) how to create an access policy for Key Vault using the OID of the MSI.
There is a newer way to get the identity information: you can read it directly from any resource that supports managed identities for Azure resources (formerly Managed Service Identity).
{
  "tenantId": "[reference(resourceId('Microsoft.Web/sites', variables('serviceAppName')), '2019-08-01', 'full').identity.tenantId]",
  "objectId": "[reference(resourceId('Microsoft.Web/sites', variables('serviceAppName')), '2019-08-01', 'full').identity.principalId]"
}
You can also get the principal ID of a resource in another resource group and/or subscription, since resourceId supports optional parameters:
"tenantId": "[reference(resourceId(variables('resourceGroup'), 'Microsoft.Web/sites', variables('serviceAppName')),'2019-08-01', 'full').identity.tenantId]",
or
"tenantId": "[reference(resourceId(variables('subscription'), variables('resourceGroup'), 'Microsoft.Web/sites', variables('serviceAppName')),'2019-08-01', 'full').identity.tenantId]",
