I have two timestamp fields (START, END) and a TIME_DIFF field of integer type. I am trying to calculate the time between the START and END fields. I created a trigger to do that:
CREATE TRIGGER [TIME_DIFF]
AFTER UPDATE OF [END]
ON [KLOG]
BEGIN
update KLOG set TIME_DIFF =
cast(
(
strftime('%s', NEW.END) -
strftime('%s', NEW.START)
) as INT
) / 60 / 60
-- restrict the update to the row that fired the trigger;
-- without a WHERE clause every row in KLOG would be rewritten
where rowid = NEW.rowid;
END;
This gives me the result in whole hours. Anything between 0 and 59 minutes is discarded.
I am wondering how I can modify this trigger so it displays decimals.
Meaning, if the time difference is 1 hour 59 minutes, the result would display 1.59. If the time difference is 35 minutes, it would display 0.35.
To interpret a number of seconds as a timestamp, use the unixepoch modifier. Then you can simply use strftime() to format the value:
strftime('%H:%M',
strftime('%s',KLOG.END) - strftime('%s',KLOG.START),
'unixepoch')
If you use Julian days instead of seconds, you do not need a separate modifier. Note, however, that julian day numbers count from noon, so add 0.5 to make a zero difference format as 00:00:
strftime('%H:%M',
julianday(KLOG.END) - julianday(KLOG.START) + 0.5)
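Folded back into the trigger from the question, a minimal sketch could look like this (note the formatted value is text, so the INTEGER declaration on TIME_DIFF no longer matches what is stored; SQLite's loose typing allows it anyway):
CREATE TRIGGER [TIME_DIFF]
AFTER UPDATE OF [END]
ON [KLOG]
BEGIN
-- stores e.g. '01:59' for a difference of 1 hour 59 minutes
update KLOG set TIME_DIFF =
strftime('%H:%M',
strftime('%s', NEW.END) - strftime('%s', NEW.START),
'unixepoch')
where rowid = NEW.rowid;
END;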
I want to calculate the difference between two columns containing datetime stamps in DB Browser for SQLite. I want the answer in minutes, but it keeps returning NULL. What could be the reason, and how can I solve it?
I tried using this:
SELECT
started_at,
ended_at,
(strftime('%M','ended_at') - strftime('%M','started_at')) as duration
FROM citi1;
You have 'started_at' and 'ended_at', which are string literals and not identifiers, so SQLite returns NULL when you use them in strftime().
But even if you remove the single quotes, you will not get the timestamp difference, because subtracting only the minutes parts of two timestamps does not return their difference.
For example, the difference that you would get for started_at = '2022-03-31 13:15:00' and ended_at = '2022-03-31 14:00:00' would be -15 (= 0 - 15).
Use strftime('%s', some_date), which returns the number of seconds since 1970-01-01 00:00:00, for both timestamps; subtract and divide by 60 to get the correct difference in minutes:
SELECT started_at, ended_at,
(strftime('%s', ended_at) - strftime('%s', started_at)) / 60 AS duration
FROM citi1;
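With the sample values above, this evaluates to (strftime('%s','2022-03-31 14:00:00') - strftime('%s','2022-03-31 13:15:00')) / 60 = 2700 / 60 = 45 minutes.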
I create a table as follows:
CREATE TABLE IF NOT EXISTS problem(`row_id` INTEGER PRIMARY KEY, `datetime` TEXT)
I insert the values into the table:
INSERT INTO problem(`row_id`, `datetime`) VALUES
(1, '2021-01-03 12:50 PM'),
(2, '2021-01-03 04:55 PM');
Then I select the values ordered by the datetime column:
SELECT * FROM problem ORDER BY `datetime`;
The result is:
row_id datetime
2 2021-01-03 04:55 PM
1 2021-01-03 12:50 PM
In my view, row_id 1 should be the first item and row_id 2 the second entry.
Does SQLite3 understand the 12-hour time format? If it does not, what's the solution?
Does SQLite3 understand the 12-hour time format?
No, it understands the 24-hour time format (see the link below).
If it does not understand 12-hour time, what's the solution?
The correct solution would be to store the data in a recognised format as per https://sqlite.org/lang_datefunc.html#time_values
Using a recognised format means that SQLite can treat the column as a date/time/datetime value, so you can use the date and time functions, and the values sort and compare correctly.
An example, utilising your dates (note the use of 24-hour times when storing), that returns the dates in a 12-hour format based upon 12:00 being PM is:-
DROP TABLE IF EXISTS problem;
CREATE TABLE IF NOT EXISTS problem(`row_id` INTEGER PRIMARY KEY, `datetime` TEXT);
INSERT INTO problem(`row_id`, `datetime`) VALUES
(1, '2021-01-03 12:50'),
(2, '2021-01-03 10:50'),
(3, '2021-01-03 13:55'),
(4, '2021-01-03 16:55'),
(5, '2021-01-03 00:55');
SELECT `row_id`,
date(`datetime`)||
CASE
/* Handle times that are 13:00 or greater i.e. use PM and subtract 12 hours from the stored time */
WHEN time(`datetime`) > '12:59'
THEN ' '||strftime('%H:%M',`datetime`,'-12 hours')||' PM'
/* Handle times that have 12 as the hour i.e. use PM with stored time */
WHEN time(`datetime`) > '11:59'
THEN ' '||strftime('%H:%M',`datetime`)||' PM'
/* ELSE use AM with stored time */
ELSE ' ' || strftime('%H:%M',`datetime`)||' AM'
END
AS `newdatetime` /* Note alias otherwise column name is generated according to column selection code */
FROM problem ORDER BY `datetime`;
Note that the ordering follows the datetime column, which is always correct because the stored 24-hour format sorts lexicographically.
Note that the comparisons assume hh:mm precision; if the stored values include seconds, then '12:59' should be '12:59:59', and so on.
The result of running the above is:-
row_id newdatetime
5      2021-01-03 00:55 AM
2      2021-01-03 10:50 AM
1      2021-01-03 12:50 PM
3      2021-01-03 01:55 PM
4      2021-01-03 04:55 PM
The above utilises some of the date and time functions found at https://sqlite.org/lang_datefunc.html
The date function returns the date in yyyy-mm-dd format, the time function returns the time in hh:mm:ss format, and strftime is the underlying function that can return a value in many formats based upon a formatting string and modifiers. All three take a time_value (often the respective column containing the time).
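A quick illustration of all three against one of the stored values:
select date('2021-01-03 13:55'),                           -- 2021-01-03
       time('2021-01-03 13:55'),                           -- 13:55:00
       strftime('%H:%M', '2021-01-03 13:55', '-12 hours'); -- 01:55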
You could perhaps simplify matters by utilising a function, in whatever programming language you are using, that converts from 24-hour to 12-hour time. This could reduce the need for complicated queries like the one above.
I have data in Google BigQuery that looks like this:
sample_date_time_UTC time_zone milliseconds_between_samples
-------- --------- ----------------------------
2019-03-31 01:06:03 UTC Europe/Paris 60000
2019-03-31 01:16:03 UTC Europe/Paris 60000
...
Data samples are expected at regular intervals, indicated by the value of the milliseconds_between_samples field. The time_zone is a string that represents a supported Google Cloud timezone value.
I'm then checking the ratio of the actual number of samples to the expected number over any single-day range (expressed as a local date, for the given time_zone):
with data as
(
select
-- convert sample_date_time_UTC to equivalent local datetime for the timezone
DATETIME(sample_date_time_UTC,time_zone) as localised_sample_date_time,
milliseconds_between_samples
from `mytable`
where sample_date_time_UTC between '2019-03-31 00:00:00.000000+01:00' and '2019-04-01 00:00:00.000000+02:00'
)
select date(localised_sample_date_time) as localised_date, count(*)/(86400000/avg(milliseconds_between_samples)) as ratio_of_daily_sample_count_to_expected
from data
group by localised_date
order by localised_date
The problem is that this has a bug: I've hardcoded the expected number of milliseconds in a day to 86400000. This is incorrect, because when daylight saving begins in the specified time_zone (Europe/Paris) a day is 1 hour shorter, and when daylight saving ends the day is 1 hour longer.
So the query above is incorrect. It queries data for 31 March of this year in the Europe/Paris timezone (which is when daylight saving started in that timezone). The number of milliseconds in that day should be 82800000.
Within the query, how can I get the correct number of milliseconds for the specified localised_date?
Update:
I tried doing this to see what it returns:
select DATETIME_DIFF(DATETIME('2019-04-01 00:00:00.000000+02:00', 'Europe/Paris'), DATETIME('2019-03-31 00:00:00.000000+01:00', 'Europe/Paris'), MILLISECOND)
That didn't work - I get 86400000
You can get the difference in milliseconds for the two timestamps by removing the +01:00 and +02:00. Note that this gives the difference between the timestamps interpreted as UTC, 90000000 ms, which is not the same as the actual elapsed time.
You can do something like this to get the milliseconds for one day:
select 86400000 + (86400000 - DATETIME_DIFF(DATETIME('2019-04-01 00:00:00.000000', 'Europe/Paris'), DATETIME('2019-03-31 00:00:00.000000', 'Europe/Paris'), MILLISECOND))
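With these dates the inner DATETIME_DIFF evaluates to 90000000 ms: the two UTC midnights map to 2019-03-31 01:00 and 2019-04-01 02:00 Paris time, which are 25 hours apart. The whole expression is therefore 86400000 + (86400000 - 90000000) = 82800000 ms, the 23-hour transition day.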
Thanks @Juta, for the hint on using UTC times for the calculation. As I'm grouping my data for each day by a localised date, I figured out that I can work out the milliseconds for each day by getting the beginning and end datetimes (in UTC) for my 'localised' date, using the following logic:
-- get UTC start datetime for localised date
-- get UTC end datetime for localised date
-- this then gives the milliseconds for that localised date:
datetime_diff(utc_end_datetime, utc_start_datetime, MILLISECOND);
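For example, a quick check for the DST transition day (timestamp() converts a civil date in the given zone to its UTC instant):
select datetime_diff(
datetime(timestamp('2019-04-01', 'Europe/Paris')),
datetime(timestamp('2019-03-31', 'Europe/Paris')),
MILLISECOND) as millis_in_day -- 82800000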
So, my full query becomes:
with daily_sample_count as (
with data as
(
select
-- get the date in the local timezone, for sample_date_time_UTC
DATE(sample_date_time_UTC,time_zone) as localised_date,
-- carry the timezone through, as it is needed again below
time_zone,
milliseconds_between_samples
from `mytable`
where sample_date_time_UTC between '2019-03-31 00:00:00.000000+01:00' and '2019-04-01 00:00:00.000000+02:00'
)
select
localised_date,
count(*) as daily_record_count,
avg(milliseconds_between_samples) as daily_avg_millis_between_samples,
datetime(timestamp(localised_date, time_zone)) as utc_start_datetime,
datetime(timestamp(date_add(localised_date, interval 1 day), time_zone)) as utc_end_datetime
from data
group by localised_date, time_zone
)
select
localised_date,
-- apply calculation for ratio_of_daily_sample_count_to_expected
-- based on the actual vs expected number of samples for the day
-- no. of milliseconds in the day changes, when transitioning in/out of daylight saving - so we calculate milliseconds in the day
daily_record_count/(datetime_diff(utc_end_datetime, utc_start_datetime, MILLISECOND)/daily_avg_millis_between_samples) as ratio_of_daily_sample_count_to_expected
from
daily_sample_count
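For the 31 March 2019 example, utc_start_datetime is 2019-03-30 23:00:00 and utc_end_datetime is 2019-03-31 22:00:00, so the denominator correctly uses 82800000 ms rather than 86400000.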
Here are my table's columns :
Time | Close | High | Low | Open | pairVolume | Trades | Volume
I would love to have my data grouped by ranges of time.
Now the tricky part is that this range is custom (it's a user input, which could be grouping by 10 minutes, 2 hours, or even 5 days).
My time field is stored in milliseconds since epoch.
The solution I found for now, which I'm uncertain about:
SELECT time + (21600000 - (time%21600000)) as gap, count(time)
FROM price_chart
WHERE time >= 1517418000000 and time <= 1518195600000
GROUP BY gap
21600000 is 6 hours in milliseconds
time is time since epoch
Yes, it works.
Putting some numbers into Excel with your formula, it works for me. Your gap value will be returned as the top end of each time range grouping.
SELECT time + (21600000 - (time%21600000)) as gap ...
Using the below:
SELECT time - (time%21600000) as gap_bottom ...
This would return the bottom end of each time range grouping. You could add it as an additional calculated column and have both returned.
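For example, a sketch combining both ends, using the table and range from the question:
SELECT time - (time % 21600000) as gap_bottom,
       time + (21600000 - (time % 21600000)) as gap_top,
       count(time) as samples
FROM price_chart
WHERE time >= 1517418000000 and time <= 1518195600000
GROUP BY gap_bottom;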
EDIT / PS:
You can also use the SQLite date formatting functions after dividing your epoch time by 1,000 to convert milliseconds to the seconds expected by the unixepoch modifier:
strftime('%Y-%m-%d %H:%M:%S', datetime(1517418000000 / 1000, 'unixepoch') )
... for ...
SELECT strftime('%Y-%m-%d %H:%M:%S', datetime( (time + (21600000 - (time%21600000))) / 1000, 'unixepoch') ) as gap ...
I'd like to get this to work in Teradata:
Updated SQL for better example
select
case
when
current_date between
cast('03-10-2013' as date format 'mm-dd-yyyy') and
cast('11-03-2013' as date format 'mm-dd-yyyy')
then 4
else 5
end Offset,
(current_timestamp + interval Offset hour) GMT
However, I get an error of "Expected something like a string or a Unicode character" blah blah. It seems that you have to hardcode the interval, like this:
select current_timestamp + interval '4' day
Yes, I know I hardcoded it in my first example, but that was only to demonstrate a calculated result.
If you must know, I am having to convert all dates and times in a few tables to GMT, but I have to account for daylight saving time. I am in Eastern time, so I need to add 4 hours if the date is within the DST timeframe and 5 hours otherwise.
I know I can just create separate update statements for each period and just change the value from a 4 to a 5 accordingly, but I want my query to be dynamic and smart.
Here's the solution:
select
case
when
current_date between
cast('03-10-2013' as date format 'mm-dd-yyyy') and
cast('11-03-2013' as date format 'mm-dd-yyyy')
then 4
else 5
end Offset,
(current_timestamp + cast(Offset as interval hour)) GMT
You have to actually cast the case expression's return value as an interval. I didn't even know interval types existed in Teradata. Thanks to this page for helping me along:
http://www.teradataforum.com/l081007a.htm
If I understand correctly, you want to multiply the interval by some number. Believe it or not, that's literally all you need to do:
select current_timestamp as right_now
, right_now + (interval '1' day) as same_time_tomorrow
, right_now + (2 * (interval '1' day)) as same_time_next_day
Intervals have always challenged me for some reason; I don't use them very often. But I've had this little example in my Teradata "cheat sheet" for quite a while.
Two remarks:
You could return an INTERVAL instead of an INT
The recommended way to write a date literal in Teradata is DATE 'YYYY-MM-DD' instead of CAST/FORMAT
select
case
when current_date between DATE '2013-03-10' and DATE '2013-11-03'
then interval '4' hour
else interval '5' hour
end AS Offset,
current_timestamp + Offset AS GMT