The Monitor section only covers monitoring the ADX instance/server itself: performance and ingestion. Is there an Azure tool that monitors the results of a Kusto query? Where can I find the documentation?
For example, receiving an alert when a Kusto query such as event | where isnotempty(error) returns any rows (row count > 0).
You can use Azure Logic Apps or Microsoft Power Automate. Either lets you create alerts based on the results of Kusto queries.
Update July 13, 2021
The links used below are now partially obsolete. Here is the new section on language differences.
Original post
On Azure Portal, in my App Insights / Logs view, I can query the app data like this:
app('my-app-name').traces
The app function is described in the article app() expression in Azure Monitor query.
Kusto.Explorer doesn't understand the app() function, which appears to be because it is one of the additional operators available only in Azure Monitor.
How can I query my App Insights / Logs with Kusto.Explorer? I cannot use cluster, as it is one of the functions not supported in Azure Monitor.
Relevant doc: Azure Monitor log query language differences
Note on troubleshooting joins
(added December 16, 2021)
Pro-tip from Kusto team:
If you are querying Application Insights from Kusto.Explorer and your joins to normal clusters fail with a bad gateway or another unexpected error, consider adding hint.remote=left to your join, like:
tableFromApplicationInsights
| join kind=innerunique hint.remote=left tableFromNormalKustoCluster
We have a private preview for the Azure Data Explorer (ADX) Proxy, which enables you to treat Log Analytics / Application Insights as a virtual cluster, query it using ADX tools, and connect to it as a second cluster in a cross-cluster query. Since it's a private preview, you need to contact adxproxy@microsoft.com in order to get enrolled. The proxy is documented at https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data.
(disclaimer - I'm the PM driving this project).
Step 1 Connection String
Build your connection string from this template:
https://ade.applicationinsights.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>
Fill in the subscription-id, resource-group-name, and ai-app-name with the values from the portal.
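The template is plain string assembly; a small helper (a sketch in Python, with placeholder values) makes the three parts explicit:

```python
# Sketch: assemble the ADX proxy connection string from its three parts.
# The subscription id, resource group, and app name below are placeholders.
def ai_connection_string(subscription_id: str, resource_group: str, app_name: str) -> str:
    return (
        "https://ade.applicationinsights.io"
        f"/subscriptions/{subscription_id}"
        f"/resourcegroups/{resource_group}"
        f"/providers/microsoft.insights/components/{app_name}"
    )

print(ai_connection_string(
    "00000000-0000-0000-0000-000000000000", "my-rg", "my-app-name"))
```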
Step 2 Add the connection to Kusto.Explorer
Open Kusto.Explorer, choose Add Connection, and paste your connection string into the Cluster connection field.
After you hit OK, Windows will prompt you to log in with your Azure Active Directory account. Once you have authenticated, Kusto.Explorer will display the Application Insights tables in the Connections panel.
Good afternoon,
We use the ODBC connector to access our Teradata system (TTU 15), and we are looking for a simple way to select the SCHEMA directly in the ODBC connection when doing a "Get Data" from Power BI Desktop.
Why? Because the people using Power BI have no SQL knowledge, and it takes more than a minute to display the list of databases.
We know it's possible to do that in the SQL query, but we are looking for a way to do it in the connection itself.
We've tried using DATABASE/DB/SCHEMA, but no luck: each time, Power BI Desktop displays the entire list of databases.
Any ideas or tricks?
The Cosmos DB FAQ says in the Cassandra API section that Azure Cosmos DB provides automatic indexing of all attributes without any schema definition. https://learn.microsoft.com/en-us/azure/cosmos-db/faq#does-this-mean-i-dont-have-to-create-more-than-one-index-to-satisfy-the-queries-1
But when I try to add a WHERE column1 = 'x' filter to my CQL query, I get an exception from the DataStax Cassandra driver saying that data filtering is not supported. I tried to bypass the client driver by supplying ALLOW FILTERING, but this time I got an error from the Cosmos server saying this feature is not implemented.
So, if automatic indexing is implemented for the Cosmos Cassandra API, how can it be used?
I am working with the Zabbix monitoring tool.
Could anyone advise whether there is a tool to generate reports?
Not out of the box, to my knowledge.
Zabbix is tricky because the MySQL backend history tables grow extremely fast and have no primary keys. Our history tables currently hold 440+ million records, and we monitor 6,000 servers with Zabbix. A single table scan takes 40 minutes on the active server.
So your challenge can be split into three smaller ones:
History
Denormalization is the key, because joins don't work on huge history tables: you have to join the history, items, functions, triggers, and hosts tables.
Besides that, you want to evaluate global and host macros and replace {ITEM.VALUE} and {HOST.NAME} in trigger and item names/descriptions.
BTW, there is an experimental version of Zabbix that uses Elasticsearch to keep history, which makes it possible to sort and select item values by interval: Zabbix using Elasticsearch for History Tables
My approach is to generate a structure like this for every Zabbix record from the history tables and dump it to a document database. Make sure you don't use buffered cursors.
{'dns_name': '',
'event_clock': 1512501556,
 'event_tstamp': '2017-12-05 19:19:16',
'event_value': 1,
'host_id': 10084,
'host_name': 'Zabbix Server',
'ip_address': '10.13.37.82',
'item_id': 37335,
'item_key': 'nca.test.backsync.multiple',
'item_name': 'BackSync - Test - Multiple',
'trig_chg_clock': 1512502800,
'trig_chg_tstamp': '2017-12-05 19:40:00',
'trig_id': 17206,
'trig_name': 'BackSync - TEST - Multiple - Please Ignore',
'trig_prio': 'Average',
'trig_value': 'OK'
}
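The flattening itself is a plain row-to-document mapping. A minimal sketch in Python, covering a subset of the fields above (the input column names follow the stock Zabbix MySQL schema - history.clock, items.key_, hosts.host - and should be checked against your schema version):

```python
from datetime import datetime, timezone

def flatten_row(row: dict) -> dict:
    # Turn one joined history/items/hosts row into a denormalized document
    # like the one shown above (subset of fields, timestamps rendered in UTC).
    ts = lambda clock: datetime.fromtimestamp(clock, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    return {
        "host_id": row["hostid"],
        "host_name": row["host"],
        "item_id": row["itemid"],
        "item_key": row["key_"],
        "item_name": row["name"],
        "event_clock": row["clock"],
        "event_tstamp": ts(row["clock"]),
        "event_value": row["value"],
    }

# When streaming hundreds of millions of rows, iterate with an unbuffered
# (server-side) cursor, e.g. pymysql.cursors.SSCursor, so the client does
# not try to hold the whole result set in memory.
```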
Current Values
The Zabbix API is documented pretty well, and JSON is handy for dumping a structure like the one proposed for the history. Don't expect the Zabbix API to return more than about 500 metrics per second; we currently pull 350 metrics per second.
And finally, reporting... There are many options, but you have to integrate them yourself:
Jasper
Kibana (Elasticsearch)
Tableau
Operations Bridge Reporter (Vertica)
..
JasperReports - IMHO a good "framework" for reports:
connect it with the SQL data connector => you have to be familiar with the SQL structure of your Zabbix DB
a more generic solution would be a Zabbix API data connector for JasperReports, but you would have to write that data connector yourself, because it doesn't exist
You can use the Event API to export data for reporting.
Quoting from the main reference page:
Events:
Retrieve events generated by triggers, network discovery and other
Zabbix systems for more flexible situation management or third-party
tool integration
Additionally, if you have set up IT Services & SLA, you can use the Service API to extract service availability percentages.
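A minimal sketch of what such an event.get call looks like, in Python (the URL and auth token are placeholders; the parameter names follow the Zabbix JSON-RPC API):

```python
def event_get_payload(auth_token: str, time_from: int, time_till: int) -> dict:
    # JSON-RPC request body for event.get; time_from/time_till bound the
    # reporting window (Unix timestamps), newest events first.
    return {
        "jsonrpc": "2.0",
        "method": "event.get",
        "params": {
            "output": "extend",
            "time_from": time_from,
            "time_till": time_till,
            "sortfield": ["clock"],
            "sortorder": "DESC",
        },
        "auth": auth_token,
        "id": 1,
    }

# To actually send it (assumes a reachable Zabbix frontend):
# import json
# from urllib import request
# req = request.Request("http://zabbix.example.com/zabbix/api_jsonrpc.php",
#                       data=json.dumps(event_get_payload(token, t0, t1)).encode(),
#                       headers={"Content-Type": "application/json-rpc"})
# events = json.load(request.urlopen(req))["result"]
```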
We have 2 databases. One is an Oracle 11g DB and the other is a DB2 database. I need to query the Oracle database to get a list of accounts and feed that as parameters into another DB2 query. If the DB2 query returns any results, I need to send an alert. Is this in any way possible with SiteScope (I am fairly new to SiteScope, so be gentle)? It looks like there is only room for one connection string in the SiteScope monitors. Can I create two monitors (one for DB2 and one for Oracle) and use the results of one query as a parameter in the other monitor? It looks like there are some monitor-to-monitor capabilities, but I am still trying to understand what is possible. Thanks!
It is possible to extract the results of the first query using a custom script alert, but it is not possible to then reuse the data in another monitor.
The SiteScope (SiS) Database Query monitor has no feature for including dynamically changing data in its query. Generally speaking, monitor configurations are static and can only be updated by an external agent (a user or an integration).
Staying inside one vendor's box, HP Operations Orchestration (OO) would be an option to achieve your goal. You could either use OO to run the checks and send you alerts in case of a problem, or run the checks and dump the results to a file somewhere, which can then be read by SiS using the Script monitor or the Logfile monitor.