I'm trying to make a line graph in Azure Data Explorer. I have a single table, and I want each line in the graph to be based on a column in that table.
Using a single column works just fine with the query below:
scandevicedata
| where SuccessFullScan == 1
| summarize SuccessFulScans = count() by scans = bin(todatetime(TransactionTimeStampUtc), 30s)
The problem is that now I want to add a second column from the same table, like this:
scandevicedata
| where UnSuccessFullScan == 1
| summarize UnSuccessFulScans = count() by scans = bin(todatetime(TransactionTimeStampUtc), 30s)
As you can see, the first query pulls out successful scans and the second pulls out unsuccessful scans. Now I want to combine them in the same output and graph them, but I can't figure out how, since they are in different columns.
How can I achieve this?
You could use the countif() aggregation function:
scandevicedata
| summarize
    Successful = countif(SuccessFullScan == 1),
    Unsuccessful = countif(UnSuccessFullScan == 1)
    by bin(todatetime(TransactionTimeStampUtc), 30s)
| render timechart
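For completeness: if you ever need to combine two queries that were summarized separately (rather than folding both filters into a single summarize as above), you can join them on the shared time bin. A sketch reusing your original two queries:
scandevicedata
| where SuccessFullScan == 1
| summarize SuccessFulScans = count() by scans = bin(todatetime(TransactionTimeStampUtc), 30s)
| join kind=fullouter (
    scandevicedata
    | where UnSuccessFullScan == 1
    | summarize UnSuccessFulScans = count() by scans = bin(todatetime(TransactionTimeStampUtc), 30s)
    ) on scans
| project scans = coalesce(scans, scans1), SuccessFulScans, UnSuccessFulScans
| render timechart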
I'm trying to make a table with these columns
type | count
I tried this with no luck
exceptions
| where timestamp > ago(144h)
| extend
type = type, count = summarize count() by type
| limit 100
Any idea on what I'm doing wrong?
You should do this instead:
exceptions
| where timestamp > ago(144h)
| summarize count = count() by type
| limit 100
Explanation:
You should use extend when you want to add new columns (or replace existing ones) in the result, for example extend day_of_month = dayofmonth(Timestamp). The record count stays exactly the same in this case (see the doc for more info).
You should use summarize when you want to aggregate multiple records into one (so the record count after the summarize will usually be smaller than the original record count), as in your case (see the doc for more info).
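To see the difference concretely, here is a minimal sketch against the same exceptions table (using the column names from your query):
// extend: adds a computed column; every record is kept
exceptions
| where timestamp > ago(6d)
| extend day_of_month = dayofmonth(timestamp)
| limit 100

// summarize: collapses the records into one row per type
exceptions
| where timestamp > ago(6d)
| summarize exceptionCount = count() by type
| limit 100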
By the way, instead of 144h you can use 6d, which is exactly the same, but is more natural to the human eye :)
How do I write my query so that the result is in the proper format to be plotted in multiple panels using the | render timechart with (ysplit=panels) output?
Looking at Microsoft's examples, I need my IPPrefix column to produce multiple columns in a single row. Instead, my query produces a separate row for each grouping in IPPrefix.
I have the following query:
let startTime = datetime('2020.07.23 20:00:00');
let endTime = datetime('2020.07.23 23:59:00');
AzureDiagnostics
| where TimeGenerated between (startTime..endTime)
| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayAccess"
| where requestUri_s contains "api/auth/ping"
| extend IPParts = split(clientIP_s, '.')
| extend IPPrefix = strcat(IPParts[0], '.', IPParts[1], '.', IPParts[2])
| make-series Count = count() on TimeGenerated in range(startTime, endTime, 5m) by IPPrefix
//| summarize AggregatedValue = count() by IPPrefix, bin(TimeGenerated, 1m)
| render timechart with (ysplit=panels)
I want the result to be one panel per IPPrefix series, but instead all the y-series are plotted in a single panel.
I suppose I'm not using make-series correctly to produce the result I need, but I haven't been able to apply it in a way that works.
I realized that I needed to pivot the data before rendering. I also learned there is a limit of five panels with the ysplit=panels option, so I had to limit the series to five and then pivot the aggregated data.
...
| make-series Count = count() on TimeGenerated in range(startTime, endTime, 1m) by IPPrefix
| take 5
| evaluate pivot(IPPrefix, any(Count), TimeGenerated)
| render timechart with(ysplit=panels)
Resulting chart with five panels.
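A note on why the pivot is needed: make-series ... by IPPrefix returns one row per IPPrefix (with the timestamps and counts as arrays), while ysplit=panels splits the chart by column. evaluate pivot(IPPrefix, any(Count), TimeGenerated) rotates each distinct IPPrefix value into its own column, giving the renderer one series per column to draw in its own panel.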
I've got a simple KQL query that plots the (log of the) count of all exceptions over 90 days:
exceptions
| where timestamp > ago(90d)
| summarize log(count()) by bin(timestamp, 1d)
| render timechart
What I'd like to do is add some reference lines to the timechart this generates. Based on the docs, this is pretty straightforward:
| extend ReferenceLine = 8
The complicating factor is that I'd like these reference lines to be based on aggregations of the value I'm plotting. For instance, I'd like a reference line for the minimum, mean, and 3rd quartile values.
Focusing on the first of these (the minimum), it turns out that you can't use min() outside of summarize(), so it isn't available inside an extend().
I was drawn to min_of(), but this expects a list of arguments instead of a column. I'm thinking I could probably expand the column into a series of values, but this feels hacky and would fall down beyond a certain number of values.
What's the idiomatic way of doing this?
You could try something like the following:
exceptions
| where timestamp > ago(90d)
| summarize c = log(count()) by bin(timestamp, 1d)
| as hint.materialized=true T
| extend _min = toscalar(T | summarize min(c)),
         _perc_50 = toscalar(T | summarize percentile(c, 50))
| render timechart
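If you'd rather avoid the as operator, here is a sketch of the same idea with let and materialize(), extended to cover the mean and third quartile mentioned in the question (untested against your data, so treat it as a starting point):
let T = materialize(
    exceptions
    | where timestamp > ago(90d)
    | summarize c = log(count()) by bin(timestamp, 1d));
T
| extend _min = toscalar(T | summarize min(c)),
         _mean = toscalar(T | summarize avg(c)),
         _perc_75 = toscalar(T | summarize percentile(c, 75))
| render timechart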
I have two cross tables on a single page.
The first cross table is a summary with Components on the horizontal axis and Facilities on the vertical axis; the cell values show the colors "RED", "YELLOW", or "NA". The second cross table is a drilldown of the marked row in the summary table, with Components on the horizontal axis and Type on the vertical axis; the cell values are a count function.
What I need is to have the color of what I marked show below each component in the drilldown.
Summary
+----------+--------+-------+--------+
| Facility | COMP1  | COMP2 | COMP3  |
+----------+--------+-------+--------+
| FAC1     | NA     | RED   | RED    |
| FAC2     | YELLOW | NA    | RED    |
| FAC3     | RED    | RED   | YELLOW |
+----------+--------+-------+--------+
Drilldown (If I mark the FAC2 row)
+-------+--------+-------+
| Type  | COMP1  | COMP3 |
|       | YELLOW | RED   |
+-------+--------+-------+
| TYPE1 | 12     |       |
| TYPE2 | 11     | 4     |
+-------+--------+-------+
Does anyone know if this is possible with cross tables? Any tips on how to do it? I appreciate the help.
Thanks,
John
Edit: I'm doing this to work around not being able to color the column headers of a cross table, so if anyone has an alternative, I would appreciate it.
Currently using Spotfire 7.11
Okay, bear with me here, as I have hacked together a solution. I will say, I made some assumptions about your data structure; depending on how your data is structured, the answer may need to be slightly modified.
Here is the structure of my data:
Step 1: Create two document properties to hold the values of the titles. I created two document properties named "tableTitle1" and "tableTitle2" (one for each column in the details cross table). Also create one document property to hold a DateTime value that an R script will pass us (discussed later). I named mine "time".
Step 2: Create the cross tables as you have them. Ensure the first cross table is using Marking "Marking" and the second is limited by the marking "Marking". In the second cross table, ensure that the titles look something like this: Count([Comp1]) as [Comp1 ${tableTitle1}], Count([Comp3]) as [Comp2 ${tableTitle2}]. You need to use the document properties created in Step 1.
Step 3: Create the Python script. The code is as follows:
from System.Collections.Generic import List
from Spotfire.Dxp.Data import *

# Create a cursor for the table column to get the values from.
# Add a reference to the data table in the script.
dataTable = Document.Data.Tables["SOTest"]
cursor = DataValueCursor.CreateFormatted(dataTable.Columns["Comp1"])

# Retrieve the marking selection
markings = Document.Data.Markings["Marking"].GetSelection(dataTable).AsIndexSet()

# Create a List object to store the retrieved data marking selection
markedata = List[str]()

# Iterate through the data table rows to retrieve the marked rows
for row in dataTable.GetRows(markings, cursor):
    value = cursor.CurrentValue
    if value != str.Empty:
        markedata.Add(value)

# Get only unique values
valData = List[str](set(markedata))

# Store in a document property
Document.Properties["tableTitle1"] = ', '.join(valData)

#### DO IT AGAIN FOR THE SECOND COLUMN ####

# Create a cursor for the second column, reusing the same marking selection
cursor = DataValueCursor.CreateFormatted(dataTable.Columns["Comp2"])

# Create a List object to store the retrieved data marking selection
markedata = List[str]()

# Iterate through the data table rows to retrieve the marked rows
for row in dataTable.GetRows(markings, cursor):
    value = cursor.CurrentValue
    if value != str.Empty:
        markedata.Add(value)

# Get only unique values
valData = List[str](set(markedata))

# Store in a document property
Document.Properties["tableTitle2"] = ', '.join(valData)
Step 4: Create an R script to kick off the Python script when data is marked. This is going to be a very simple R script. The code is as follows:
markedTable <- inputTable
time <- Sys.time()
The "allow caching" checkbox should be unchecked. The output parameter time should map to the document property time. The input parameter inputTable should be your data table (all columns) and should be limited by Marking. Ensure that the "refresh function automatically" checkbox is checked.
Step 5: Map the Python script to the time document property. In the Edit > Document Properties dialog, under Properties, assign the Python script we created to the document property. The R script will update the datetime each time the marking on the table changes, which in turn runs the Python script for us.
Step 6: Watch the magic happen.
I run a simulation with a varying number of iterations, and each iteration creates an output table like table1, table2, table3... They all have the same structure:
ID | value
but a varying number of rows.
For each table, I want to compute the average of the 'value' column and show the averages in a new table like:
tableNumber | averageValue
1           | 516
2           | 512
3           | 521
...         | ...
Is this possible in SQLite if the number of tables is quite high? And if not, how can I achieve it in a different way?
Thanks a lot in advance :-)
Instead of creating different tables, put the results in the same table, and add a column which indicates which batch or set each row belongs to. When you query the table, filter on that column so that you're working only with the desired batch/set, and put an index on that column so those queries run faster. There is no need to save the averages to separate tables either: a query can produce the result without persisting it as data in another table.
select batch, avg(value) as AvgValue
from simulation
where batch = 100
group by batch
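A minimal sketch of that layout (table and column names follow the query above; the index name is arbitrary):
-- One table for all runs; 'batch' says which simulation run a row belongs to
CREATE TABLE simulation (
    batch INTEGER NOT NULL,
    id    INTEGER,
    value REAL
);
CREATE INDEX idx_simulation_batch ON simulation(batch);

-- The tableNumber | averageValue result from the question: one row per batch
SELECT batch AS tableNumber, AVG(value) AS averageValue
FROM simulation
GROUP BY batch;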