What is the format of Chrome's timestamps? - sqlite

I'm using SQLite Database Browser to read information from a database containing the browsing history for Google Chrome. My current code that I am executing in the "Execute SQL" panel looks like this:
SELECT last_visit_time,url,title
FROM urls
WHERE url LIKE {PLACEHOLDER} AND title LIKE {PLACEHOLDER}
The values on the WHERE line are blocked out with {PLACEHOLDER} for privacy purposes. Now, I want the data returned in the last_visit_time column to be readable instead of a jumbled mess like 13029358986442901. How do I convert Chrome's timestamp to a readable format, and how do I order the returned rows by last_visit_time?

The answer is given in this question: "[Google Chrome's] timestamp is formatted as the number of microseconds since January 1, 1601"
So for example in my sample history database, the query
SELECT
datetime(visit_time / 1000000 + (strftime('%s', '1601-01-01')), 'unixepoch', 'localtime')
FROM visits
ORDER BY visit_time DESC
LIMIT 10;
gives the results:
2014-09-29 14:22:59
2014-09-29 14:21:57
2014-09-29 14:21:53
2014-09-29 14:21:50
2014-09-29 14:21:32
2014-09-29 14:21:31
2014-09-29 14:16:32
2014-09-29 14:16:29
2014-09-29 14:15:05
2014-09-29 14:15:05
Using your timestamp value of 13029358986442901:
SELECT
datetime(13029358986442901 / 1000000 + (strftime('%s', '1601-01-01')), 'unixepoch', 'localtime')
the result is:
2013-11-19 18:23:06
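Applied to the urls table from the question (and to answer the ordering part), a query along these lines should work; the column names are the ones from the original post:
SELECT
datetime(last_visit_time / 1000000 + (strftime('%s', '1601-01-01')), 'unixepoch', 'localtime') AS last_visit,
url,
title
FROM urls
ORDER BY last_visit_time DESC;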

visits.visit_time is in microseconds since January 1, 1601 UTC, which is similar to, but not to be mistaken for, Windows FILETIME, which is the number of 100-nanosecond intervals since January 1, 1601 UTC.
Trivia: Why 1601?
I think the popular answer is that the Gregorian calendar operates on a 400-year cycle, and 1601 is the first year of the cycle that was active when Windows NT was being designed. In other words, it was chosen to make the math come out nicely. January 1, 1601 is also the origin of COBOL integer dates, and day 1 of the ANSI date format. If you want to speculate further: ISO 8601, the display format in question, only covers dates back to the 1580s by default, because the Gregorian calendar was only introduced in 1582; earlier dates require the proleptic Gregorian calendar by mutual agreement. Perhaps they just rounded up to the next century.
downloads.start_time is the number of seconds since January 1, 1970 UTC
Trivia: Why 1970?
Well, I'm glad you asked. It wasn't always 1970: in the earliest versions of UNIX the epoch was January 1, 1971, and it was later moved back to January 1, 1970. January 1, 1970 is considered to be the birth of UNIX.
It's worth noting that Firefox formats time as the number of microseconds since January 1, 1970 and the name for the format is PRTime
All of these are counts from an epoch; ISO 8601 is just the conventional format for displaying the resulting dates.
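Putting the last two together, here are sketches based on the formats described above (moz_places.last_visit_date is the usual PRTime column in Firefox's places.sqlite, but verify against your own database):
-- Chrome downloads.start_time, if it is seconds since 1970 as described:
SELECT datetime(start_time, 'unixepoch', 'localtime') FROM downloads;
-- Firefox PRTime, microseconds since 1970:
SELECT datetime(last_visit_date / 1000000, 'unixepoch', 'localtime') FROM moz_places;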

Chrome's timestamp is not Unix epoch!
Chrome's base time is 01/01/1601 00:00:00. To calculate local time, Chrome time has to be converted to seconds by dividing by one million, and then the seconds differential between 01/01/1601 00:00:00 and 01/01/1970 00:00:00 must be subtracted. There are two ways to get that differential: with SQLite itself, or with the Unix date command.
SQLITE:
sqlite> SELECT strftime('%s', '1601-01-01 00:00:00');
-11644473600
DATE:
$ date +%s -d 'Jan 1 00:00:00 UTC 1601'
-11644473600
In both commands above, the "%s" represents unixepoch time. The commands calculate the number of seconds between unixepoch time (1970) and the subsequent date (Chrome time base, 1601). Note that the seconds are negative. Of course, this is because you have to count backwards from 1970 to 1601! With this information, we can convert Chrome time in SQLite like this:
sqlite> SELECT datetime((time/1000000)-11644473600, 'unixepoch', 'localtime') AS time FROM table;

Here is a compact expression to convert WebKit Time:
sqlite> SELECT datetime(time/1e6-11644473600,'unixepoch','localtime') AS time FROM table;

I'm new to coding so I'm not sure how you'd do it in SQL, but I can show you a method in C#. I hope this helps someone.
If the time value given in the database is 13029358986442901, take only the first 11 digits, 13029358986 (this truncates the microsecond value to whole seconds). You can convert this to a time using:
long time = 13029358986; // first 11 digits of the stored value
DateTime dateTimeVar = new DateTime(1601, 1, 1).AddSeconds(time);
The answer here was: 19-11-2013 18:23:06
And this was without your time zone conversion.

You can subtract 11644473600000 (1/1/1601 is -11644473600000 in Unix epoch milliseconds) and treat the result as a regular Unix epoch timestamp; note this assumes the value is in milliseconds.
millis: 11644473600000
seconds: 11644473600
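In SQLite that would look like this (a sketch, assuming time really holds milliseconds rather than Chrome's usual microseconds):
sqlite> SELECT datetime((time - 11644473600000) / 1000, 'unixepoch', 'localtime') AS time FROM table;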

Related

How to deal with "YYYY-MM-DD HH:MM:SS.SSS +0000" format in SQLite?

I have two columns, DATE_A and DATE_B.
I need to find how much time is between the two dates.
Usually, I would use JULIANDAY() and subtract one date from another, but the output is null because of the "+0000" part.
Below you'll find an example of values contained in the two columns:
DATE_A - '2022-05-12 00:16:17.553 +0000'
DATE_B - '2022-06-02 00:02:01.158 +0000'
Please tell me what '+0000' means and how I can find the time elapsed between the two dates.
+0000 is the offset from UTC, in hours and minutes, that this time represents. For example, here in US Pacific it's currently daylight saving time and we're 7 hours behind UTC, so we're -0700. 2022-05-12 08:00:00+0000 and 2022-05-12 01:00:00-0700 are the same point in time.
SQLite will accept a slightly different format: there has to be a : separator between the hours and minutes of the offset.
2022-05-12 00:16:17.553 +00:00
(note the added colon in the offset)
You'll have to change the format. Use your programming language's date and time functions.
See "Time Values" in SQLite Date and Time Functions for valid formats.

What date format is "623548800"?

I exported the SQLite db from an iOS app and was wanting to run a query based on the date, but I found that it's in a format I don't recognize. As stated above, the latest value is "623548800". I'm assuming this corresponds to today, since I created a record in the app today. This is 9 digits, so it's too short to be a Unix timestamp, which is 10 digits.
The earliest record in the db is "603244800", which likely corresponds to when I started using the app on 2/13/2020. That's a difference of 20,304,000, so it looks like it's using seconds, as it's been 20,312,837 seconds since then.
Is this essentially tracking seconds based on some proprietary date, or is this a known format?
623548800 - 603244800 = 20304000
20304000/86400 seconds in 24 hours = 235 days
October 5, 2020 - February 13, 2020 = 235 days
UTC Unix timestamp February 13, 2020 = 1581552000
As the prior comment said, it looks like an offset; the reference date might be a timestamp somewhere in the app's source or in the db.
Your dates are Unix Timestamps.
By using any on line converter (like https://www.epochconverter.com) you can find the dates they correspond to.
The latest value 623548800 corresponds to Thursday, October 5, 1989 12:00:00 AM GMT
and the earliest value 603244800 corresponds to Sunday, February 12, 1989 12:00:00 AM GMT.
So it seems like your dates are off by 31 years.
I found a similar case here: Behind The Scenes: Core Data dates stored with 31 year offset?
If you want you can convert them to the format 'YYYY-MM-DD hh:mm:ss' like this:
UPDATE tablename
SET datecolumn = datetime(datecolumn, 'unixepoch', '+31 year')
or:
UPDATE tablename
SET datecolumn = date(datecolumn, 'unixepoch', '+31 year')
if you are not interested in the time part.
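For what it's worth, the 31-year offset is the gap between the Unix epoch and Apple's Core Data reference date, 2001-01-01 00:00:00 UTC, which is 978307200 seconds; so an equivalent non-destructive conversion is:
SELECT datetime(datecolumn + 978307200, 'unixepoch') FROM tablename;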

Get number of milliseconds for a localised date, taking into account daylight savings

I have data in Google BigQuery that looks like this:
sample_date_time_UTC     time_zone     milliseconds_between_samples
-----------------------  ------------  ----------------------------
2019-03-31 01:06:03 UTC  Europe/Paris  60000
2019-03-31 01:16:03 UTC  Europe/Paris  60000
...
Data samples are expected at regular intervals, indicated by the value of the milliseconds_between_samples field. The time_zone is a string containing a Google Cloud supported timezone value.
I'm then checking the ratio of the actual number of samples compared to the expected number over any particular day, for any single day range (expressed as a local date, for the given time_zone):
with data as
(
  select
    -- convert sample_date_time_UTC to the equivalent local datetime for the timezone
    DATETIME(sample_date_time_UTC, time_zone) as localised_sample_date_time,
    milliseconds_between_samples
  from `mytable`
  where sample_date_time_UTC between '2019-03-31 00:00:00.000000+01:00' and '2019-04-01 00:00:00.000000+02:00'
)
select
  date(localised_sample_date_time) as localised_date,
  count(*)/(86400000/avg(milliseconds_between_samples)) as ratio_of_daily_sample_count_to_expected
from data
group by localised_date
order by localised_date
The problem is that this has a bug, as I've hardcoded the expected number of milliseconds in a day to 86400000. This is incorrect, as when daylight saving begins in the specified time_zone (Europe/Paris), a day is 1hr shorter. When daylight saving ends, the day is 1hr longer.
So, the query above is incorrect. It queries data for 31st March of this year in the Europe/Paris timezone (which is when daylight saving started in that timezone). The milliseconds in that day should be 82800000.
Within the query, how can I get the correct number of milliseconds for the specified localised_date?
Update:
I tried doing this to see what it returns:
select DATETIME_DIFF(DATETIME('2019-04-01 00:00:00.000000+02:00', 'Europe/Paris'), DATETIME('2019-03-31 00:00:00.000000+01:00', 'Europe/Paris'), MILLISECOND)
That didn't work - I get 86400000
You can get the difference in milliseconds for the two timestamps by removing the +01:00 and +02:00. Note that the bare strings are then interpreted as UTC and converted to Europe/Paris local datetimes, so this gives 90000000 (25 hours), which is not the same as the actual milliseconds that passed.
You can do something like this to get the milliseconds for one day:
select 86400000 + (86400000 - DATETIME_DIFF(DATETIME('2019-04-01 00:00:00.000000', 'Europe/Paris'), DATETIME('2019-03-31 00:00:00.000000', 'Europe/Paris'), MILLISECOND))
Thanks @Juta, for the hint on using UTC times for the calculation. As I'm grouping my data for each day by a localised date, I figured out that I can work out the milliseconds for each day by getting the beginning and end datetimes (in UTC) for my 'localised' date, using the following logic:
-- get UTC start datetime for localised date
-- get UTC end datetime for localised date
-- this then gives the milliseconds for that localised date:
datetime_diff(utc_end_datetime, utc_start_datetime, MILLISECOND);
So, my full query becomes:
with daily_sample_count as (
with data as
(
select
-- get the date in the local timezone, for sample_date_time_UTC
DATE(sample_date_time_UTC,time_zone) as localised_date,
milliseconds_between_samples
from `mytable`
where sample_date_time between '2019-03-31 00:00:00.000000+01:00' and '2019-04-01 00:00:00.000000+02:00'
)
select
localised_date,
count(*) as daily_record_count,
avg(milliseconds_between_samples) as daily_avg_millis_between_samples,
datetime(timestamp(localised_date, time_zone)) as utc_start_datetime,
datetime(timestamp(date_add(localised_date, interval 1 day), time_zone)) as utc_end_datetime
from data
)
select
localised_date,
-- apply calculation for ratio_of_daily_sample_count_to_expected
-- based on the actual vs expected number of samples for the day
-- no. of milliseconds in the day changes, when transitioning in/out of daylight saving - so we calculate milliseconds in the day
daily_record_count/(datetime_diff(utc_end_datetime, utc_start_datetime, MILLISECOND)/daily_avg_millis_between_samples) as ratio_of_daily_sample_count_to_expected
from
daily_sample_count
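As a quick sanity check of the day-length calculation in isolation (Europe/Paris, the day DST started), this standalone query should return 82800000, i.e. a 23-hour day:
select datetime_diff(
  datetime(timestamp(date '2019-04-01', 'Europe/Paris')),
  datetime(timestamp(date '2019-03-31', 'Europe/Paris')),
  MILLISECOND)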

How to filter data between date & time in sqlite

I have a table Orders whose Order_Date column is declared as smalldatetime, and my Order_Date format is 01/10/2018 10:00:00 PM.
Now I want to filter data between 01/10/2018 04:00:00 PM AND 02/10/2018 04:00:00 AM
What I tried
SELECT distinct(Order_No),Order_Date from Orders WHERE Order_Date BETWEEN '01/10/2018 04:00:00 PM' and '02/10/2018 04:00:00 AM'
This query shows only the 01/10/2018 data, but I want the data between 01/10/2018 04:00:00 PM and 02/10/2018 04:00:00 AM.
Is there any way to get the data from today at 4 PM to the next day at 4 AM?
First off, sqlite does not have actual date/time types. It's a simple database with only a few types. Your smalldatetime column actually has NUMERIC affinity (See the affinity rules).
For SQLite's built-in functions to be able to understand them, dates and times can be stored as numbers or text; numbers are either the number of seconds since the Unix epoch or a Julian day number. Text strings can be in one of a number of formats; see the list in the documentation. All of these have the additional advantage that, when compared to other timestamps in the same format, they sort properly.
You seem to be using text strings like '01/10/2018 04:00:00 PM'. This is not one of the formats that sqlite date and time functions understand, and it doesn't sort naturally, so you can't use it in comparisons aside from testing equality. Plus it's ambiguous: Is it October 1, or January 10? Depending on where you're from you'll have a different interpretation of it.
If you change your timestamp format to a better one like (Assuming October 1) '2018-10-01 16:00:00', you'll be able to sort and compare ranges, and use it with sqlite functions.
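For example, with the timestamps stored that way, the original query becomes (a sketch against the Orders table from the question; ISO 8601 strings compare correctly as plain text):
SELECT DISTINCT Order_No, Order_Date
FROM Orders
WHERE Order_Date BETWEEN '2018-10-01 16:00:00' AND '2018-10-02 04:00:00';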

Correct date range in SQL

This has gotten me a little paranoid, but I'm retrieving a set of records that fall within a period of time, say, the period from January 1, 2011 (starting at midnight) to March 31, 2011 (all records up to 11:59:59 PM).
I'm using the condition
t.logtime between to_date('2011-01-01', 'yyyy-mm-dd') and to_date('2011-03-31')
Note that logtime is a datetime field.
Does this reflect what I want? Or am I actually missing 24 hours less a second?
I could specify the time as well, but I was hoping I this could be done without it.
Yes, you are missing nearly all of the last day. There are various solutions; probably the simplest is:
t.logtime >= to_date('2011-01-01', 'yyyy-mm-dd')
and t.logtime < to_date('2011-04-01', 'yyyy-mm-dd')
I'd use the ANSI date literal syntax too:
t.logtime >= date '2011-01-01'
and t.logtime < date '2011-04-01'
Another way is:
trunc(t.logtime) between date '2011-01-01' and date '2011-03-31'
but note that that can no longer use an index on logtime (though it can use an index on trunc(logtime)).
