Kusto - Grouping by week, Week-ending - azure-application-insights

I come up against this quite often and haven't figured it out yet. Take the query below. I am trying to group into 7-day buckets, however the first and last buckets are always less than 7 days. The middle buckets are whole weeks (or 6.23 days, whatever that means).
How do I write a query where I can offset by the end date? Additionally, how can I make sure my start date is also not truncated?
requests
| where timestamp > startofday(ago(90d))
    and timestamp < endofday(now() - 1d)
| summarize min(timestamp), max(timestamp) by bin(timestamp, 7d)
| extend duration = max_timestamp - min_timestamp
| project-away timestamp
| order by max_timestamp

You can use bin_at() to specify the reference point for the binning. See the example below, and the documentation: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/binatfunction.
If it is relevant, you could also consider using startofweek() and/or endofweek() (see the sketch after the output below).
range timestamp from startofday(ago(30d)) to endofday(ago(1d)) step 1111ms
| summarize max(timestamp), min(timestamp) by timestamp = bin_at(timestamp, 7d, endofday(ago(1d)))
| extend duration = max_timestamp - min_timestamp
| project-away timestamp
| order by max_timestamp
-->
| max_timestamp | min_timestamp | duration |
|-----------------------------|-----------------------------|--------------------|
| 2020-06-25 23:59:59.6630000 | 2020-06-19 00:00:00.1490000 | 6.23:59:59.5140000 |
| 2020-06-18 23:59:59.0380000 | 2020-06-12 00:00:00.6350000 | 6.23:59:58.4030000 |
| 2020-06-11 23:59:59.5240000 | 2020-06-05 00:00:00.0100000 | 6.23:59:59.5140000 |
| 2020-06-04 23:59:58.8990000 | 2020-05-29 00:00:00.4960000 | 6.23:59:58.4030000 |
| 2020-05-28 23:59:59.3850000 | 2020-05-27 00:00:00.0000000 | 1.23:59:59.3850000 |
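If calendar weeks are acceptable instead of trailing 7-day buckets, a minimal sketch of the startofweek() variant mentioned above could look like this (note that Kusto treats Sunday as the start of the week, so the first and last bins may still be partial):
requests
| where timestamp > startofday(ago(90d))
    and timestamp < endofday(now() - 1d)
| summarize min(timestamp), max(timestamp) by week = startofweek(timestamp)
| extend duration = max_timestamp - min_timestamp
| order by week asc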

Related

MariaDB DATETIME Index not working with Between FROM_UNIXTIME()

I have a table with a DATETIME field, which is indexed with a BTree. Now I want to query it with the following statement:
SELECT
    count(us.CITY) as metric,
    us.CITY as Name,
    us.LATITUDE as latitude,
    us.LONGITUDE as longitude
FROM
    FACT
LEFT JOIN
    USER us
ON
    us.ID_USER = FACT.USER
WHERE
    ASSESSMENT_DATE BETWEEN FROM_UNIXTIME(1601568552) AND FROM_UNIXTIME(1604028277)
GROUP BY us.CITY, us.LATITUDE, us.LONGITUDE;
EXPLAIN:
+------+-------------+-------+--------+----------------------------+---------+---------+------------------------------+--------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+----------------------------+---------+---------+------------------------------+--------+----------------------------------------------+
| 1 | SIMPLE | FACT | ALL | INDEX_FACT_ASSESSMENT_DATE | NULL | NULL | NULL | 762621 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | us | eq_ref | PRIMARY | PRIMARY | 46 | dwh0.FACT.USER,dwh0.FACT.ENV | 1 | |
+------+-------------+-------+--------+----------------------------+---------+---------+------------------------------+--------+----------------------------------------------+
2 rows in set (0.001 sec)
Interestingly, when I only change the dates manually into DATETIME-format strings, it uses the index. But the FROM_UNIXTIME() function should, in my opinion, return exactly the same thing...
SELECT
    count(us.CITY) as metric,
    us.CITY as Name,
    us.LATITUDE as latitude,
    us.LONGITUDE as longitude
FROM
    FACT
LEFT JOIN
    USER us
ON
    us.ENV = FACT.ENV AND us.ID_USER = FACT.USER
WHERE
    -- ASSESSMENT_DATE BETWEEN FROM_UNIXTIME(1596649101) AND FROM_UNIXTIME(1599108827)
    ASSESSMENT_DATE BETWEEN '2020-08-05 11:30:11.987' AND '2020-09-03 11:30:11.987'
GROUP BY us.CITY, us.LATITUDE, us.LONGITUDE;
EXPLAIN:
+------+-------------+-------+--------+----------------------------+----------------------------+---------+------------------------------+--------+---------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+----------------------------+----------------------------+---------+------------------------------+--------+---------------------------------------------------------+
| 1 | SIMPLE | FACT | range | INDEX_FACT_ASSESSMENT_DATE | INDEX_FACT_ASSESSMENT_DATE | 5 | NULL | 132008 | Using index condition; Using temporary; Using filesort |
| 1 | SIMPLE | us | eq_ref | PRIMARY | PRIMARY | 46 | dwh0.FACT.USER,dwh0.FACT.ENV | 1 | |
+------+-------------+-------+--------+----------------------------+----------------------------+---------+------------------------------+--------+---------------------------------------------------------+
2 rows in set (0.001 sec)
Has anyone run into such a problem? The WHERE clause is generated by Grafana, so I cannot change that, but I can change the rest if it makes a difference.
Thanks for any suggestions!
Sorry for the bother... after around 10^5 more inserts, it works for both cases. Maybe it was just bad luck.
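For anyone who hits the same symptom before it "fixes itself", two quick sanity checks in MariaDB are to look at what FROM_UNIXTIME() actually returns (it converts using the session time zone, so it may not be the literal you expect) and to refresh the index statistics, since stale statistics would also explain why the plan changed after more inserts. A generic sketch, with only the FACT table name taken from the question:
-- check what the optimizer is really comparing against
SELECT FROM_UNIXTIME(1601568552) AS range_start, FROM_UNIXTIME(1604028277) AS range_end;
-- refresh the statistics the optimizer uses for index selection
ANALYZE TABLE FACT;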

MS App Analytics timeline chart

I have data in the following format. Is it possible to create a timeline chart using App Analytics? I am trying to easily identify the calls which overlap in my data set.
| Start Time        | End Time          | Call Name           | Duration (ms) |
|-------------------|-------------------|---------------------|---------------|
| 17:41:30.5001642Z | 17:41:30.703291Z  | CreateDraftEnvelope | 203           |
| 17:41:31.0711234Z | 17:41:31.0867211Z | CreateLock          | 21            |
| 17:41:31.1189342Z | 17:41:31.1345349Z | addDocument         | 17            |
| 17:41:31.1961265Z | 17:41:31.2117613Z | addDocument         | 17            |
| 17:41:31.4243498Z | 17:41:31.4399953Z | addDocument         | 19            |
| 17:41:31.5242518Z | 17:41:31.5398738Z | addDocument         | 17            |
I am looking for a timeline chart of these calls.
Unfortunately, Analytics does not currently provide this visualization type. Could you please submit it on our UserVoice?
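While the chart type itself is not available, overlapping calls can at least be flagged in the query. A rough sketch only: the datatable literal, the placeholder date, and the column names startTime/endTime/callName are illustrative stand-ins for your actual table and columns:
datatable(startTime: datetime, endTime: datetime, callName: string)
[
    datetime(2020-01-01 17:41:30.5001642), datetime(2020-01-01 17:41:30.7032910), "CreateDraftEnvelope",
    datetime(2020-01-01 17:41:31.0711234), datetime(2020-01-01 17:41:31.0867211), "CreateLock",
    datetime(2020-01-01 17:41:31.1189342), datetime(2020-01-01 17:41:31.1345349), "addDocument"
]
| order by startTime asc
| extend overlapsPrevious = startTime < prev(endTime)   // only compares each call with the immediately preceding one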

How is availability zone list order determined by the nova api in openstack?

I want to change the default option for availability zone in my OpenStack setup in Horizon. However, I am having trouble finding out what determines the order of the availability zones as returned by the nova API. For example, running openstack availability zone list I get:
+--------------+-------------+
| Zone Name | Zone Status |
+--------------+-------------+
| zone2 | available |
| zone1 | available |
| internal | available |
| zone3 | available |
+--------------+-------------+
which is the same order as in Horizon's dropdown box. However, querying the database directly, I get:
mysql> select * from aggregate_metadata;
+---------------------+------------+------------+----+--------------+-------------------+--------------+---------+
| created_at | updated_at | deleted_at | id | aggregate_id | key | value | deleted |
+---------------------+------------+------------+----+--------------+-------------------+--------------+---------+
| 2015-06-12 08:43:07 | NULL | NULL | 1 | 1 | availability_zone | zone1 | 0 |
| 2015-06-12 08:43:08 | NULL | NULL | 2 | 2 | availability_zone | zone2 | 0 |
| 2015-10-26 05:30:15 | NULL | NULL | 3 | 3 | availability_zone | zone3 | 0 |
+---------------------+------------+------------+----+--------------+-------------------+--------------+---------+
3 rows in set (0.00 sec)
Obviously, the OpenStack API is doing some sorting before returning the result; however, I can't figure out how it is sorted or how I could control the sorting.
get_availability_zones is the function the nova API uses to collect the list of availability zones.
This function gets the list of available services (which is sorted by service id) and adds the availability zone name to each of those services.
Since the service list is the first step, its id defines the order, not the zone name.
The sort order can be modified in different ways, depending on the requirement.
Sort the order at the frontend (Horizon):
Modify this line with
ng-options="zone.value as zone.label for zone in model.availabilityZones | orderBy:'value'"
Sort the order at the backend (nova-api):
Add available_zones.sort() and not_available_zones.sort() before the return statements in the get_availability_zones function, as sketched below.
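For reference, a minimal sketch of what that backend change amounts to, using the zone names from the question; the variable names available_zones and not_available_zones are taken from the answer above, and this is illustrative only, not a patch against a specific nova release:
# sorting the two lists in place, just before get_availability_zones() returns them,
# yields an alphabetical order instead of the service-id order
available_zones = ['zone2', 'zone1', 'internal', 'zone3']   # order seen in the question
not_available_zones = []
available_zones.sort()
not_available_zones.sort()
print(available_zones)   # ['internal', 'zone1', 'zone2', 'zone3']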

How to make a query for getting the specific rows with the latest time column value

Below is my sample data, I would like to get the host:value pair with the latest time.
+------+-------+-------+
| HOST | VALUE | TIME |
+------+-------+-------+
| A | 100 | 13:40 |
| A | 150 | 13:00 |
| A | 222 | 13:23 |
| B | 210 | 13:55 |
| B | 300 | 13:44 |
+------+-------+-------+
I want to get only the rows with the latest time value for each host.
The result should look like:
A 100 13:40
B 210 13:55
I think there are several analytic functions to achieve this requirement in Oracle, but I'm not sure what I can do in SQLite.
Can you let me know how I can write such a query?
Here is an ANSI-compliant way of performing your query which should run on all versions of SQLite. For a potentially shorter solution, see the answer by @CL. below.
SELECT t1.HOST || '-' || t1.VALUE || '-' || t1.TIME AS HOSTVALUETIME
FROM table t1
INNER JOIN
(
    SELECT HOST, MAX(TIME) AS MAXTIME
    FROM table
    GROUP BY HOST
) t2
    ON t1.HOST = t2.HOST AND t1.TIME = t2.MAXTIME
ORDER BY t1.HOST DESC
Output:
+---------------+
| HOSTVALUETIME |
+---------------+
| A-100-13:40 |
| B-210-13:55 |
+---------------+
In SQLite 3.7.11 or later, MAX() selects from which row in a group the other column values come:
SELECT Host,
Value,
MAX(Time)
FROM TheNameOfThisTableIsSoSecretThatICantTellYou
GROUP BY Host;
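If you are on SQLite 3.25 or later, window functions are another option. A sketch under that assumption, reusing the placeholder table name from the answer above:
SELECT HOST, VALUE, TIME
FROM (
    SELECT HOST, VALUE, TIME,
           ROW_NUMBER() OVER (PARTITION BY HOST ORDER BY TIME DESC) AS rn
    FROM TheNameOfThisTableIsSoSecretThatICantTellYou
)
WHERE rn = 1;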

SQLite subtract time difference between two tables if there is a match

I need some help with a SQLite Query. I have two tables, a table called 'production' and a table called 'pause':
CREATE TABLE production (
date TEXT,
item TEXT,
begin TEXT,
end TEXT
);
CREATE TABLE pause (
date TEXT,
begin TEXT,
end TEXT
);
For every item that is produced, an entry is created in the production table with the current date, the start time, and the end time (two timestamps in the format HH:MM:SS). So let's assume the production table looks like this:
+------------+-------------+------------+----------+
| date | item | begin | end |
+------------+-------------+------------+----------+
| 2013-07-31 | Item 1 | 06:18:00 | 08:03:05 |
| 2013-08-01 | Item 2 | 06:00:03 | 10:10:10 |
| 2013-08-01 | Item 1 | 10:30:15 | 14:20:13 |
| 2013-08-01 | Item 1 | 15:00:10 | 16:00:00 |
| 2013-08-02 | Item 3 | 08:50:00 | 15:00:00 |
+------------+-------------+------------+----------+
The second table also contains a date plus a start and an end time. So let's assume the 'pause' table looks like this:
+------------+------------+----------+
| date | begin | end |
+------------+------------+----------+
| 2013-08-01 | 08:00:00 | 08:30:00 |
| 2013-08-01 | 12:00:00 | 13:30:00 |
| 2013-08-02 | 10:00:00 | 10:30:00 |
| 2013-08-02 | 13:00:00 | 14:00:00 |
+------------+------------+----------+
Now I want to get a table which contains the time difference between the production begin and end times for every item. If there is a matching entry in the 'pause' table, the pause time should be subtracted.
So basically, the end result should look like this:
+------------+------------+-------------------------------------------------+
| date | Item | time difference (in seconds), excluding pause |
+------------+------------+-------------------------------------------------+
| 2013-07-31 | Item 1 | 6305 |
| 2013-08-01 | Item 1 | 12005 |
| 2013-08-01 | Item 2 | 13207 |
| 2013-08-02 | Item 3 | 16800 |
+------------+------------+-------------------------------------------------+
I am not really sure how I can accomplish this with SQLite. I know that it is possible to do this sort of calculation with Python, but in the end I think it would be better to let the database do the calculations. Maybe someone could give me a hint on how to solve this problem. I tried different queries, but I always ended up with results different from what I expected.
To convert a time string to the number of seconds, use the strftime function with the %s modifier.
(A time string without a date part will be assumed to have the date 2000-01-01, but this cancels out when computing the differences.)
To compute the pause times for a specific production record, use a correlated subquery; the total aggregate is needed to cope with zero/one/multiple matching pauses.
SELECT date,
item,
sum(strftime('%s', end) - strftime('%s', begin) -
(SELECT total(strftime('%s', end) - strftime('%s', begin))
FROM pause
WHERE pause.date = production.date
AND pause.begin >= production.begin
AND pause.end <= production.end)
) AS seconds
FROM production
GROUP BY date,
item
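As a quick sanity check of the strftime('%s') arithmetic, the first production row (2013-07-31, Item 1, with no pause on that date) works out to the expected 6305 seconds:
SELECT strftime('%s', '08:03:05') - strftime('%s', '06:18:00') AS seconds;  -- returns 6305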
The best answer I found is:
SELECT
    cast(
        (strftime('%s', time_arrived) - strftime('%s', time_departed)) AS real
    ) / 60 / 60 AS elapsed
FROM date AS t;
For additional information, check this blog article.
