My query is below:
SELECT
    (date),
    CVC_Demand_Per_Subscriber
FROM
(
    SELECT
        date,
        sum(Max_Utilization) AS SUM_Max_Util,
        sum(AVC) AS SUM_Total_Active_AVCs,
        (SUM_Max_Util / SUM_Total_Active_AVCs) AS CVC_Demand_Per_Subscriber
    FROM
    (
        SELECT
            date,
            cvc AS CVC,
            avc AS AVC,
            bandwidth,
            round((max(lout) / 1000000), 2) AS Max_Utilization,
            (((max(lout) / bandwidth) * 100) / 1000000) AS Max_Utilization_Percent,
            (Max_Utilization / AVC) AS CVC_Demand_Per_Subscriber
        FROM
        (
            SELECT
                date,
                path[2] AS cvc,
                bandwidth,
                avc,
                max(load_out) AS lout
            FROM noc.interface
            ANY INNER JOIN
            (
                SELECT
                    cvcid AS cvc,
                    bandwidth,
                    activeavc AS avc
                FROM dictionaries.nsi_cvcs
                GROUP BY cvc, avc, bandwidth
            ) USING cvc
            WHERE managed_object IN
            (
                SELECT bi_id
                FROM dictionaries.managedobject
                WHERE nbn = 1
            )
            AND (date >= today() - 7)
            GROUP BY date, cvc, avc, bandwidth
            ORDER BY date, cvc, avc
        )
        GROUP BY date, cvc, avc, bandwidth
    )
    GROUP BY date
    ORDER BY date ASC
) tmp
I get the result data when I select Table in Grafana, as below:
Time CVC_Demand_Per_Subscriber
2021-07-19 00:00:00 1.61
2021-07-18 00:00:00 2.70
2021-07-17 00:00:00 2.90
2021-07-16 00:00:00 2.83
2021-07-15 00:00:00 2.54
2021-07-14 00:00:00 2.38
2021-07-13 00:00:00 2.39
2021-07-12 00:00:00 0.64
But when I change it to Graph, I don't see the values plotted against the dates. It does not say "no data"; it just shows an empty graph.
Please help me figure out where I am going wrong.
I tried the below but no luck:
Converted the date with UNIX_TIMESTAMP
to_char(date_format)
$__timeGroup()
$__time
Please also suggest optimizations for the query.
You need to:
define Column:DateTime as Time
set the SQL query:
SELECT
$timeSeries as t,
sum(CVC_Demand_Per_Subscriber) value
FROM (
/* emulate the test dataset */
SELECT toDateTime(data.1) AS Time, data.2 AS CVC_Demand_Per_Subscriber
FROM (
SELECT arrayJoin([
('2021-07-19 00:00:00', 1.61),
('2021-07-18 00:00:00', 2.70),
('2021-07-17 00:00:00', 2.90),
('2021-07-16 00:00:00', 2.83),
('2021-07-15 00:00:00', 2.54),
('2021-07-14 00:00:00', 2.38),
('2021-07-13 00:00:00', 2.39),
('2021-07-12 00:00:00', 0.64)]) as data)
)
WHERE $timeFilter
GROUP BY t
ORDER BY t
When the graph is empty (or displays 'No data') with no query error, check the ClickHouse datasource settings to make sure that Add CORS flag to requests is enabled.
My calendar year runs from 07-01-(of one year) to 06-30-(of the next year).
My SQLITE DB has a Timestamp column and it's data type is datetime and stores the timestamp as 2023-09-01 00:00:00.
What I'm trying to do is get the MAX date of the latest snowfall. For example, with my seasonal years beginning July-01 (earliest) and ending June 30 (latest), I want to find only the latest (MAX) date snowfall was recorded, regardless of the year, based on the month.
Say, out of five years (2017 to 2022) of data in the database, it snowed on Mar 15, 2020, and no date in any year was later than this one (by month and day); then this would be the latest date, regardless of which year it fell.
I've been trying many variations of the below query. This query says it runs with no mistakes and returns "null" values. I'm using SQLITE DB Browser to write and test the query.
SELECT Timestamp, MAX(strftime('%m-%d-%Y', Timestamp)) AS lastDate,
snowDepth AS lastDepth FROM DiaryData
WHERE lastDepth <> 0 BETWEEN strftime('%Y-%m-%d', Timestamp,'start of year', '+7 months')
AND strftime('%Y-%m-%d', Timestamp, 'start of year', '+1 year', '+7 months', '- 1 day')
ORDER BY lastDate LIMIT 1
and this is what's in my test database:
Timestamp snowFalling snowLaying snowDepth
2021-11-10 00:00:00 0 0 7.2
2022-09-15 00:00:00 0 0 9.5
2022-12-01 00:00:00 1 0 2.15
2022-10-13 00:00:00 1 0 0.0
2022-05-19 00:00:00 0 0 8.82
2023-01-11 00:00:00 0 0 3.77
If it's running properly I should expect:
Timestamp
lastDate
lastDepth
2022-05-19 00:00:00
05-19-2022
8.82
What am I missing, or is this not possible in SQLite? Any help would be appreciated.
Use aggregation by fiscal year, utilizing SQLite's bare-columns feature:
SELECT Timestamp,
strftime('%m-%d-%Y', MAX(Timestamp)) AS lastDate,
snowDepth AS lastDepth
FROM DiaryData
WHERE snowDepth <> 0
GROUP BY strftime('%Y', Timestamp, '+6 months');
See the demo.
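To see how the bare-column behavior plays out, here is the same query run against the question's sample data through Python's sqlite3 module. This is my own verification sketch, not part of the original answer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE DiaryData (
    Timestamp TEXT, snowFalling INT, snowLaying INT, snowDepth REAL)""")
con.executemany("INSERT INTO DiaryData VALUES (?,?,?,?)", [
    ("2021-11-10 00:00:00", 0, 0, 7.2),
    ("2022-09-15 00:00:00", 0, 0, 9.5),
    ("2022-12-01 00:00:00", 1, 0, 2.15),
    ("2022-10-13 00:00:00", 1, 0, 0.0),
    ("2022-05-19 00:00:00", 0, 0, 8.82),
    ("2023-01-11 00:00:00", 0, 0, 3.77),
])

# One row per fiscal year: the bare columns Timestamp/snowDepth come
# from the row that supplies MAX(Timestamp) within each group.
result = con.execute("""
    SELECT Timestamp,
           strftime('%m-%d-%Y', MAX(Timestamp)) AS lastDate,
           snowDepth AS lastDepth
    FROM DiaryData
    WHERE snowDepth <> 0
    GROUP BY strftime('%Y', Timestamp, '+6 months')
""").fetchall()
for row in result:
    print(row)
```

Note that this returns one row per season, including the 2022-05-19 row the question expects; if only the single overall latest date is wanted, a further step over these rows is needed.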
I'd first derive the season for each record, then each record's snowfall date relative to its season's start date, and finally keep the row with the largest such relative date:
with
data as (
select
*
, case
when cast(strftime('%m', "Timestamp") as int) <= 7
then strftime('%Y-%m-%d', "Timestamp", 'start of year', '-1 year', '+6 months')
else strftime('%Y-%m-%d', "Timestamp", 'start of year', '+6 months')
end as "Season start date"
from DiaryData
where 1==1
and "snowDepth" <> 0.0
)
, data2 as (
select
*
, julianday("Timestamp") - julianday("Season start date")
as "Snowfall date relative to season start date"
from data
)
, data3 as (
select
"Timestamp"
, "snowFalling"
, "snowLaying"
, "snowDepth"
from data2
group by null
having max("Snowfall date relative to season start date")
)
)
select
*
from data3
demo
You can use the ROW_NUMBER window function to address this problem, though you need a subtle tweak. To account for fiscal years, partition on the year of the timestamps shifted 6 months forward: this way each fiscal year, e.g. [2021-07-01, 2022-06-30], falls into a single calendar-year bucket (2022).
WITH cte AS (
SELECT *, ROW_NUMBER() OVER(
PARTITION BY STRFTIME('%Y', DATE(Timestamp_, '+6 months'))
ORDER BY Timestamp_ DESC ) AS rn
FROM tab
)
SELECT Timestamp_,
STRFTIME('%m-%d-%Y', Timestamp_) AS lastDate,
snowDepth AS lastDepth
FROM cte
WHERE rn = 1
Check the demo here.
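As a quick check, the window-function approach can also be run through Python's sqlite3 module (SQLite 3.25+ is needed for window functions). This sketch is my own, not the linked demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (Timestamp_ TEXT, snowFalling INT, snowLaying INT, snowDepth REAL)")
con.executemany("INSERT INTO tab VALUES (?,?,?,?)", [
    ("2021-11-10 00:00:00", 0, 0, 7.2),
    ("2022-09-15 00:00:00", 0, 0, 9.5),
    ("2022-12-01 00:00:00", 1, 0, 2.15),
    ("2022-10-13 00:00:00", 1, 0, 0.0),
    ("2022-05-19 00:00:00", 0, 0, 8.82),
    ("2023-01-11 00:00:00", 0, 0, 3.77),
])

# rn = 1 marks the newest timestamp inside each shifted (fiscal) year.
result = con.execute("""
    WITH cte AS (
      SELECT *, ROW_NUMBER() OVER(
        PARTITION BY STRFTIME('%Y', DATE(Timestamp_, '+6 months'))
        ORDER BY Timestamp_ DESC) AS rn
      FROM tab
    )
    SELECT Timestamp_,
           STRFTIME('%m-%d-%Y', Timestamp_) AS lastDate,
           snowDepth AS lastDepth
    FROM cte
    WHERE rn = 1
""").fetchall()
for row in result:
    print(row)
```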
I have two timestamps, #starttimestamp and #endtimestamp. How do I calculate the number of working hours between them?
Working hours is defined below:
Mon- Thursday (9:00-17:00)
Friday (9:00-13:00)
This has to work in Impala.
I think I found a better solution.
We will create a series of numbers using a large table. You can use a time-dimension table too; make sure it doesn't get truncated. I am using a large table from my db.
Use this series to generate a date range between the start and end dates.
date_add (t.start_date,rs.uniqueid) -- create range of dates
join (select row_number() over ( order by mycol) as uniqueid -- create range of unique ids
from largetab) rs
where end_date >=date_add (t.start_date,rs.uniqueid)
Then we will calculate the total hour difference between the timestamps using unix_timestamp, considering both date and time.
(unix_timestamp(endtimestamp) - unix_timestamp(starttimestamp)) / 3600
Exclude non-working hours: 16 hours on Mon-Thu, 20 hours on Fri, 24 hours on Sat-Sun.
case when dayofweek ( dday) in (1,7) then 24
when dayofweek ( dday) =5 then 20
else 16 end as non work hours
Here is the complete SQL.
select
end_date, start_date,
diff_in_hr - sum(case when dayofweek ( dday) in (1,7) then 24
when dayofweek ( dday) =5 then 20
else 16 end ) total_workhrs
from (
select (unix_timestamp(end_date)- unix_timestamp(start_date))/3600 as diff_in_hr , end_date, start_date,date_add (t.start_date,rs.uniqueid) as dDay
from tdate t
join (select row_number() over ( order by mycol) as uniqueid from largetab) rs
where end_date >=date_add (t.start_date,rs.uniqueid)
)rs2
group by 1,2,diff_in_hr
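Since the business rules themselves are easy to get wrong, here is a small Python sketch (not Impala SQL; the function and its approach are my own) that computes working hours directly, by intersecting [start, end] with each day's working window:

```python
from datetime import datetime, timedelta, time

# Working windows: Mon-Thu 09:00-17:00, Fri 09:00-13:00, none on weekends.
# Python's weekday() returns 0 for Monday.
WINDOWS = {0: (time(9), time(17)), 1: (time(9), time(17)),
           2: (time(9), time(17)), 3: (time(9), time(17)),
           4: (time(9), time(13))}

def working_hours(start: datetime, end: datetime) -> float:
    """Hours of overlap between [start, end] and the working windows."""
    total = timedelta()
    day = start.date()
    while day <= end.date():
        win = WINDOWS.get(day.weekday())
        if win:
            lo = max(start, datetime.combine(day, win[0]))
            hi = min(end, datetime.combine(day, win[1]))
            if hi > lo:
                total += hi - lo
        day += timedelta(days=1)
    return total.total_seconds() / 3600

# Mon 2021-07-12 10:00 -> Fri 2021-07-16 12:00:
# Mon 7h + Tue/Wed/Thu 8h each + Fri 3h = 34.0
print(working_hours(datetime(2021, 7, 12, 10), datetime(2021, 7, 16, 12)))
```

A direct loop like this is also a handy way to cross-check whatever the SQL approach produces for a few sample ranges.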
I've got a table in an SQLite3 database containing account balances, but it currently only contains balances for a few specific dates:
Balance Date
Amount
2021-12-15
400
2021-12-18
500
2021-12-22
200
I need to fill in the gaps between these dates with the previous recorded balance, so e.g. 2021-12-16 and 2021-12-17 should have a balance of 400 and 2021-12-19, 2021-12-20 and 2021-12-21 should have a balance of 500.
Is there a way to fill these gaps using SQL? I think I need some logic like
INSERT INTO BALANCES (BalanceDate,BalanceAmount)
VALUES(previous record + 1 day, previous record's amount)
but I don't know how I can point SQL to the previous record.
Thanks
You can use a recursive cte to produce the missing dates:
WITH cte AS (
SELECT date(b1.BalanceDate, '+1 day') BalanceDate, b1.Amount
FROM BALANCES b1
WHERE NOT EXISTS (SELECT 1 FROM BALANCES b2 WHERE b2.BalanceDate = date(b1.BalanceDate, '+1 day'))
AND date(b1.BalanceDate, '+1 day') < (SELECT MAX(BalanceDate) FROM BALANCES)
UNION ALL
SELECT date(c.BalanceDate, '+1 day'), c.Amount
FROM cte c
WHERE NOT EXISTS (SELECT 1 FROM BALANCES b WHERE b.BalanceDate = date(c.BalanceDate, '+1 day'))
AND date(c.BalanceDate, '+1 day') < (SELECT MAX(BalanceDate) FROM BALANCES)
)
INSERT INTO BALANCES(BalanceDate, Amount)
SELECT BalanceDate, Amount FROM cte;
See the demo.
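As a sanity check, the approach can be reproduced with Python's sqlite3 module (the RECURSIVE keyword is added here for portability; the rest is the answer's query verbatim):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE BALANCES (BalanceDate TEXT, Amount INT)")
con.executemany("INSERT INTO BALANCES VALUES (?,?)",
                [("2021-12-15", 400), ("2021-12-18", 500), ("2021-12-22", 200)])

# The anchor emits the day after each balance that has a gap following it;
# the recursive part walks forward until an existing date (or the max) is hit.
con.execute("""
WITH RECURSIVE cte AS (
  SELECT date(b1.BalanceDate, '+1 day') BalanceDate, b1.Amount
  FROM BALANCES b1
  WHERE NOT EXISTS (SELECT 1 FROM BALANCES b2 WHERE b2.BalanceDate = date(b1.BalanceDate, '+1 day'))
    AND date(b1.BalanceDate, '+1 day') < (SELECT MAX(BalanceDate) FROM BALANCES)
  UNION ALL
  SELECT date(c.BalanceDate, '+1 day'), c.Amount
  FROM cte c
  WHERE NOT EXISTS (SELECT 1 FROM BALANCES b WHERE b.BalanceDate = date(c.BalanceDate, '+1 day'))
    AND date(c.BalanceDate, '+1 day') < (SELECT MAX(BalanceDate) FROM BALANCES)
)
INSERT INTO BALANCES(BalanceDate, Amount)
SELECT BalanceDate, Amount FROM cte
""")

for row in con.execute("SELECT BalanceDate, Amount FROM BALANCES ORDER BY BalanceDate"):
    print(row)
```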
I have the following query that provides me with the 10 most recent records in the database:
SELECT
dpDate AS Date,
dpOpen AS Open,
dpHigh AS High,
dpLow AS Low,
dpClose AS Close
FROM DailyPrices
WHERE dpTicker = 'DL.AS'
ORDER BY dpDate DESC
LIMIT 10;
The result of this query is as follows:
bash-3.2$ sqlite3 myData < Queries/dailyprice.sql
Date Open High Low Close
---------- ---------- ---------- ---------- ----------
2016-06-13 4.0 4.009 3.885 3.933
2016-06-10 4.23 4.236 4.05 4.08
2016-06-09 4.375 4.43 4.221 4.231
2016-06-08 4.406 4.474 4.322 4.35
2016-06-07 4.377 4.466 4.369 4.384
2016-06-06 4.327 4.437 4.321 4.353
2016-06-03 4.34 4.428 4.316 4.335
2016-06-02 4.434 4.51 4.403 4.446
2016-06-01 4.51 4.512 4.317 4.399
2016-05-31 4.613 4.67 4.502 4.526
bash-3.2$
Whilst I need to plot the extracted data, I also need to obtain the following summary data of the dataset:
Minimum date ==> 2016-05-31
Maximum date ==> 2016-06-13
Open value at minimum date ==> 4.613
Close value at maximum date ==> 3.933
Maximum of High column ==> 4.67
Minimum of Low column ==> 3.885
How can I, as a newbie, approach this issue? Can this be done in one query?
Thanks for pointing me in the right direction.
Best regards,
GAM
The desired output can be achieved with:
- aggregate functions on a convenient common table expression, which uses the OP's query verbatim
- the OP's method, with LIMIT 1 applied to the common table expression, for getting the Open and Close values at mindate and maxdate among the ten days
Query:
WITH Ten(Date,Open,High,Low,Close) AS
(SELECT dpDate AS Date,
dpOpen AS Open,
dpHigh AS High,
dpLow AS Low,
dpClose AS Close
FROM DailyPrices
WHERE dpTicker = 'DL.AS'
ORDER BY dpDate DESC LIMIT 10)
SELECT min(Date) AS mindate,
max(Date) AS maxdate,
(SELECT Open FROM Ten ORDER BY Date ASC LIMIT 1) AS Open,
max(High) AS High,
min(Low) AS Low,
(SELECT Close FROM Ten ORDER BY Date DESC LIMIT 1) AS Close
FROM Ten;
Output (.headers on and .mode column):
mindate maxdate Open High Low Close
---------- ---------- ---------- ---------- ---------- ----------
2016-05-31 2016-06-13 4.613 4.67 3.885 3.933
Note:
I think the order of values in the OP's last comment does not match the order of columns in the preceding comment by the OP.
I chose the order from the preceding comment.
The order in the last comment seems to me to be "mindate, maxdate, Open, Close, High, Low".
Adapting my proposed query to that order would be simple.
Using SQLite 3.18.0 2017-03-28 18:48:43
Here is the .dump of my toy database, i.e. my MCVE, in case something is unclear. (I did not enter the many decimal places, it is probably a float rounding thing.)
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE dailyPrices (dpDate date, dpOpen float, dpHigh float, dpLow float, dpClose float, dpTicker varchar(10));
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-13',4.0,4.009000000000000341,3.8849999999999997868,3.9329999999999998294,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-10',4.2300000000000004263,4.2359999999999997655,4.0499999999999998223,4.080000000000000071,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-09',4.375,4.4299999999999997157,4.2210000000000000852,4.2309999999999998721,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-08',4.4059999999999996944,4.4740000000000001989,4.3220000000000000639,4.3499999999999996447,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-07',4.3769999999999997797,4.4660000000000001918,4.3689999999999997726,4.384000000000000341,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-06',4.3269999999999999573,4.4370000000000002771,4.3209999999999997299,4.3529999999999997584,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-03',4.3399999999999998578,4.4370000000000002771,4.3209999999999997299,4.3529999999999997584,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-02',4.4340000000000001634,4.5099999999999997868,4.4029999999999995807,4.4459999999999997299,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-01',4.5099999999999997868,4.5119999999999995665,4.3170000000000001705,4.3990000000000000213,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-05-31',4.6130000000000004334,4.6699999999999999289,4.5019999999999997797,4.525999999999999801,'DL.AS');
COMMIT;
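For completeness, the accepted query can be replayed against that MCVE through Python's sqlite3 module (rounded price values are used for readability; this is my own verification sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE DailyPrices (
    dpDate TEXT, dpOpen REAL, dpHigh REAL, dpLow REAL, dpClose REAL, dpTicker TEXT)""")
data = [
    ("2016-06-13", 4.0,   4.009, 3.885, 3.933),
    ("2016-06-10", 4.23,  4.236, 4.05,  4.08),
    ("2016-06-09", 4.375, 4.43,  4.221, 4.231),
    ("2016-06-08", 4.406, 4.474, 4.322, 4.35),
    ("2016-06-07", 4.377, 4.466, 4.369, 4.384),
    ("2016-06-06", 4.327, 4.437, 4.321, 4.353),
    ("2016-06-03", 4.34,  4.428, 4.316, 4.335),
    ("2016-06-02", 4.434, 4.51,  4.403, 4.446),
    ("2016-06-01", 4.51,  4.512, 4.317, 4.399),
    ("2016-05-31", 4.613, 4.67,  4.502, 4.526),
]
con.executemany("INSERT INTO DailyPrices VALUES (?,?,?,?,?,'DL.AS')", data)

# Aggregates over the ten-row CTE, plus two LIMIT 1 subqueries for the
# Open at the earliest date and the Close at the latest date.
row = con.execute("""
WITH Ten(Date, Open, High, Low, Close) AS
  (SELECT dpDate, dpOpen, dpHigh, dpLow, dpClose
   FROM DailyPrices
   WHERE dpTicker = 'DL.AS'
   ORDER BY dpDate DESC LIMIT 10)
SELECT min(Date), max(Date),
       (SELECT Open FROM Ten ORDER BY Date ASC  LIMIT 1),
       max(High), min(Low),
       (SELECT Close FROM Ten ORDER BY Date DESC LIMIT 1)
FROM Ten
""").fetchone()
print(row)  # ('2016-05-31', '2016-06-13', 4.613, 4.67, 3.885, 3.933)
```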
I'd like to choose only the oldest date. Using MAX/MIN doesn't work because it operates at row level, and I couldn't figure out a way to use OVER or NTH, as this query will run each day with a different number of server, w_id and z_id values.
The following query:
select server, w_id, z_id, date(datetime) as day
from( SELECT server, w_id, datetime, demand.b_id as id, demand.c_type, z_id,
FROM TABLE_DATE_RANGE(v3_data.v3_,DATE_ADD(CURRENT_DATE(),-2,"day"),
DATE_ADD(CURRENT_DATE(),-1,"day"))
where demand.b_id is not null and demand.c_type = 'rtb'
group by 1,2,3,4,5,6
having datetime >= DATE_ADD(CURRENT_DATE(),-2,"day")
)
group by 1,2,3,4
having count(day)<2
order by z_id, day
Gives results:
Row server w_id z_id day
1 A 722 1837 2016-04-19
2 SPORTS 51 2534 2016-04-19
3 A 1002 2546 2016-04-18
4 A 1303 3226 2016-04-19
5 A 1677 4369 2016-04-18
6 NEW 13608 9370 2016-04-19
So from the above I'd only like 2016-04-18.
I think a GROUP_CONCAT might get the job done quite simply here:
SELECT
server,
w_id,
z_id,
day,
FROM (
SELECT
server,
w_id,
z_id,
GROUP_CONCAT(day) day,
FROM (
SELECT
server,
w_id,
DATE(datetime) day,
demand.b_id AS id,
demand.c_type,
z_id,
FROM
TABLE_DATE_RANGE(v3_data.v3_,DATE_ADD(CURRENT_DATE(),-2,"day"), DATE_ADD(CURRENT_DATE(),-1,"day"))
WHERE
demand.b_id IS NOT NULL
AND demand.c_type = 'rtb'
AND DATE(datetime) >= DATE(DATE_ADD(CURRENT_DATE(),-2,"day"))
GROUP BY
1,2,3,4,5,6
ORDER BY
day) # Critical to order this dimension to make the GROUP_CONCAT permutations unique
GROUP BY
server,
w_id,
z_id,
# day is aggregated in GROUP_CONCAT and so it does not get included in the GROUP BY
)
WHERE
day = DATE(DATE_ADD(CURRENT_DATE(),-2,"day"))
The innermost select is your original one, untouched.
The rest is a wrapper taking care of min_day.
Not tested, as it was written on the go, but it should at least give you an idea.
SELECT server, w_id, z_id, [day]
FROM (
SELECT server, w_id, z_id, [day], MIN([day]) OVER() AS min_day
FROM (
SELECT server, w_id, z_id, DATE(datetime) AS [day]
FROM (
SELECT server, w_id, datetime, demand.b_id AS id, demand.c_type, z_id,
FROM TABLE_DATE_RANGE(v3_data.v3_,DATE_ADD(CURRENT_DATE(),-2,"day"), DATE_ADD(CURRENT_DATE(),-1,"day"))
WHERE demand.b_id IS NOT NULL AND demand.c_type = 'rtb'
GROUP BY 1,2,3,4,5,6
HAVING datetime >= DATE_ADD(CURRENT_DATE(),-2,"day")
)
GROUP BY 1,2,3,4
HAVING COUNT([day])<2
)
)
WHERE [day] = min_day
ORDER BY z_id, [day]
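The MIN(...) OVER () filtering idea is engine-agnostic. Here is the same pattern reduced to its core and run on SQLite via Python (the table name `days` and the sample rows come from the question's result set, not the real BigQuery tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE days (server TEXT, w_id INT, z_id INT, day TEXT)")
con.executemany("INSERT INTO days VALUES (?,?,?,?)", [
    ("A", 722, 1837, "2016-04-19"),
    ("SPORTS", 51, 2534, "2016-04-19"),
    ("A", 1002, 2546, "2016-04-18"),
    ("A", 1303, 3226, "2016-04-19"),
    ("A", 1677, 4369, "2016-04-18"),
    ("NEW", 13608, 9370, "2016-04-19"),
])

# MIN(day) OVER () attaches the global minimum to every row,
# so the outer filter keeps only rows from the oldest day.
result = con.execute("""
    SELECT server, w_id, z_id, day
    FROM (SELECT *, MIN(day) OVER () AS min_day FROM days)
    WHERE day = min_day
    ORDER BY z_id
""").fetchall()
for row in result:
    print(row)
```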
Both solutions have been helpful, but I believe neither worked quite the way I wanted, and the following does:
select server, w_id, id, demand.c_type,z_id,
NTH(1, day) First, NTH(2, day) Second,
from(
SELECT
server,
w_id,
DATE(datetime) as day,
demand.b_id AS id,
demand.c_type,
z_id,
FROM
TABLE_DATE_RANGE([black-beach-789:v3_data.v3_],DATE_ADD(CURRENT_DATE(),-2,"day"), DATE_ADD(CURRENT_DATE(),-1,"day"))
WHERE
demand.b_id IS NOT NULL
AND demand.c_type = 'rtb'
AND DATE(datetime) >= DATE(DATE_ADD(CURRENT_DATE(),-2,"day"))
GROUP BY
1,2,3,4,5,6
order by day
)
group by 1,2,3,4,5
having first = date(DATE_ADD(CURRENT_DATE(),-2,"day")) and Second is null