My CentOS 7.6 server is physically located in Germany but is configured so that it presents me (in London) with the correct local time and UTC time.
$ date
Tue Jul 16 08:31:51 BST 2019
$ date -u
Tue Jul 16 07:31:55 UTC 2019
But inside an SQLite database, things aren't right.
$ !sql
sqlite3 apollo.db
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select datetime('now');
2019-07-16 07:32:48
sqlite> select datetime('now', 'localtime');
2019-07-16 08:32:53
sqlite> select datetime('now', 'utc');
2019-07-16 06:32:58
It's displaying the local time correctly, but UTC is an hour out. Is there a config setting I can tweak to fix this?
Update: Ok. Having read the documentation a bit more carefully, it seems I was misunderstanding and this behaviour is a) correct and b) expected. select datetime('now') is the correct way to get UTC time. So now I'm just a bit confused as to what select datetime('now', 'utc') is doing (probably nothing useful).
The last paragraph before the examples explains:
The "utc" modifier is the opposite of "localtime".
"utc" assumes that the string to its left is in the local timezone and adjusts that string to be in UTC.
If the prior string is not in localtime, then the result of "utc" is undefined.
SQL As Understood By SQLite - Date And Time Functions
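In other words, applying 'utc' to a value that is already UTC shifts it a second time. A minimal Python sketch (standard library sqlite3 against an in-memory database, nothing else assumed) that runs the three queries from the question:

import sqlite3

# Minimal sketch: run the three queries from the question in-memory.
conn = sqlite3.connect(":memory:")
utc_now, local_now, double_shifted = conn.execute(
    "SELECT datetime('now'), datetime('now','localtime'), datetime('now','utc')"
).fetchone()

print(utc_now)         # datetime('now') is already UTC
print(local_now)       # UTC converted to the server's local time zone
print(double_shifted)  # 'utc' assumes its input is local time, so an
                       # already-UTC value gets shifted a second time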
I have some graph data with date type values.
My gremlin query for the date type property is working, but output value is not the date value.
Environment:
Janusgraph 0.3.1
gremlinpython 3.4.3
Below is my example:
Data (JanusGraph): {"ID": "doc_1", "MY_DATE": [Tue Jan 10 00:00:00 KST 1079]}
Query: g.V().has("ID", "doc_1").valueMap("MY_DATE")
Output (gremlinpython): datetime(1079, 1, 16)
The error is 6 days (1079.1.10 -> 1079.1.16).
This mismatch does not occur when the years are above 1600.
Does the timestamp have some serialization/deserialization problems between janusgraph and gremlinpython?
Thanks
There were some issues with Python and dates, but those should be fixed in 3.4.3, which is the version you stated you were using. The issue is described at TINKERPOP-2264 along with the fix; basically, there were some problems with time zones. From your example data, it looks like you store your date with a time zone (i.e. KST). I'm not completely sure, but I would imagine things would work as expected if the date were stored as UTC.
After some trial and searching, I found that there are some differences between a Java Date and a Python datetime (Julian vs. Gregorian calendar).
So I replaced SimpleDateFormat with JodaTime and got the expected result, as below:
Data (Raw): {"ID": "doc_1", "MY_DATE": "1079-1-29"}
Data (JanusGraph): {"ID": "doc_1", "MY_DATE": [Wed Jan 23 00:32:08 KST 1079]}
(I think JanusGraph uses the Java Date object internally.)
Query: g.V().has("ID", "doc_1").valueMap("MY_DATE")
Output (gremlinpython): datetime(1079, 1, 29)
Thanks
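For the record, the 6-day drift matches the gap between the two calendars in the 11th century: Java's legacy date classes use a Julian/Gregorian hybrid calendar that is Julian before 1582, while Python's datetime is proleptic Gregorian. A rough Python sketch of the conversion, using standard Julian day number formulas (no JanusGraph involved):

from datetime import date

# Sketch: convert a Julian-calendar date to the proleptic Gregorian
# calendar that Python's date/datetime types use.
def julian_to_gregorian(year, month, day):
    # Julian day number for a Julian-calendar date (standard formula).
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    jdn = day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083
    # Proleptic-Gregorian 0001-01-01 has JDN 1721426 and ordinal 1.
    return date.fromordinal(jdn - 1721425)

# Java reads "Jan 10 1079" as a Julian date; Python's proleptic Gregorian
# calendar labels that same day 1079-01-16 -- exactly the 6-day error.
print(julian_to_gregorian(1079, 1, 10))  # 1079-01-16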
As in a similar question on this forum, I need to combine a date and a 12-hour AM/PM time into a timestamp, but I need it to work in Impala.
As a workaround, my colleague and I attempted the following:
-- combine start date and time into a datetime
-- impala can't handle am/pm so need to look for pm indicator and add 12 hours
-- and then subtract 12 hours if it's 12:xx am or pm
================
t1.appt_time,
hours_add(
    to_timestamp(concat(to_date(t1.appt_date), ' ', t1.appt_time), 'yyyy-MM-dd H:mm'),
    12 * decode(lower(strright(t1.appt_time, 2)), 'pm', 1, 0) -
    12 * decode(strleft(t1.appt_time, 2), '12', 1, 0)
) as appt_datetime,
t1. ...
================
Does anybody have an easier and more elegant approach?
Your workaround is valid; Impala does not currently support AM/PM formatting for dates. There are a few open issues related to this:
https://issues.apache.org/jira/browse/IMPALA-3381
https://issues.apache.org/jira/browse/IMPALA-5237
https://issues.apache.org/jira/browse/IMPALA-2262
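Until those are resolved, the workaround's arithmetic is easy to sanity-check outside Impala. A small Python sketch of the same 12-hour to 24-hour logic, with hypothetical appt_date/appt_time values:

from datetime import datetime, timedelta

# Sketch of the arithmetic the Impala expression performs; sample values
# are hypothetical.
appt_date, appt_time = "2019-07-16", "12:30 pm"

hour_minute = appt_time[:-2].strip()  # "12:30"
meridiem = appt_time[-2:].lower()     # "pm"
dt = datetime.strptime(appt_date + " " + hour_minute, "%Y-%m-%d %H:%M")

dt += timedelta(hours=12) * (meridiem == "pm")             # pm -> add 12 hours
dt -= timedelta(hours=12) * hour_minute.startswith("12")   # 12:xx -> back off 12
print(dt)  # 2019-07-16 12:30:00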
I have imported the same database dump on two machines. When I execute an SQL query, I see two different values.
select struct_doc_id, START_DATE, END_DATE from structured_doc where struct_doc_id = 1329 order by START_DATE;
Machine 1:
1329 31-03-11 09:00:00.000000000 PM 01-01-16 08:59:59.999000000 PM
1329 01-04-11 12:00:00.000000000 AM 31-12-15 11:59:59.999000000 PM
Machine 2:
1329 01-04-11 12:00:00.000000000 AM 31-12-15 11:59:59.999000000 PM
1329 01-04-11 12:00:00.000000000 AM 31-12-15 11:59:59.999000000 PM
Also, I executed this SQL:
select dbtimezone, sessiontimezone, systimestamp, current_timestamp
from dual;
and the results on both the machines are:
Machine 1:
-07:00 Asia/Calcutta 09-02-16 02:15:55.422190000 AM -08:00 09-02-16 03:45:55.422204000 PM ASIA/CALCUTTA
Machine 2:
-07:00 Asia/Calcutta 09-02-16 05:23:20.703408000 AM -05:00 09-02-16 03:53:20.703418000 PM ASIA/CALCUTTA
Note: I have two databases running on two different machines.
Can anyone please tell me the possible reasons for the difference in the values when running the first query?
As far as I can see, this might be one reason for it:
Check the datatype of the column in structured_doc: is it "timestamp", "timestamp with local time zone", or "timestamp with time zone"?
Oracle will automatically adjust the date provided according to the session attributes of the client.
I guess you're executing the import remotely into one database, and locally into the other.
Best Regards
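To illustrate the kind of adjustment involved: a TIMESTAMP WITH LOCAL TIME ZONE column stores one absolute instant, and the client's session time zone determines the wall-clock rendering. A Python sketch of that principle (the two zones here are hypothetical, chosen only to show the effect):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# One absolute instant, rendered under two hypothetical session time zones.
instant = datetime(2011, 3, 31, 18, 30, tzinfo=timezone.utc)

for session_tz in ("Asia/Calcutta", "America/New_York"):
    print(session_tz, instant.astimezone(ZoneInfo(session_tz)))
# Asia/Calcutta    2011-04-01 00:00:00+05:30  (already "the next day")
# America/New_York 2011-03-31 14:30:00-04:00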
I am using Splunk 6.2.X along with Django bindings to create a Splunk app.
To get access to the earliest/latest dates from the time range picker, I am using the following in my JS:
mysearchbar.timerange.val()
I am getting back a map where the values are in epoch format:
Object {earliest_time: 1440122400, latest_time: 1440124200}
When I convert them with moment as follows, I get a different datetime than expected:
> moment.unix('1440122400').utc().toString()
"Fri Aug 21 2015 02:00:00 GMT+0000"
However, the time does not correspond to the values that were selected on the time range picker, i.e. 08/20/2015 22:00:00.000.
I am not sure what is causing the difference. I am sure that the time zone is not the factor, as the time difference does not correspond to a simple time zone addition or subtraction.
An explanation of this behaviour, and of how to convert the Splunk epoch datetime to UTC, would be helpful.
I was able to get rid of the timezone issue by performing the following:
Setting the timezone of the Splunk engine to UTC in props.conf as follows:
TZ = GMT
Setting the CentOS server hosting Splunk to UTC
Hope this helps anyone else who stumbles upon similar issues.
Thanks.
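For what it's worth, the epoch values themselves were never wrong: an epoch timestamp is an absolute instant, and the picker simply renders it in the browser's zone while moment.unix(...).utc() renders it in UTC. A Python sketch (America/New_York is an assumption that happens to match the reported 08/20/2015 22:00 display):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The same epoch instant rendered in two zones; only the display differs.
epoch = 1440122400

print(datetime.fromtimestamp(epoch, tz=timezone.utc))
# 2015-08-21 02:00:00+00:00  -> what moment.unix(...).utc() showed
print(datetime.fromtimestamp(epoch, tz=ZoneInfo("America/New_York")))
# 2015-08-20 22:00:00-04:00  -> matches the picker's 08/20/2015 22:00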
I have a test that checks to see if an item was shipped today.
let(:todays_date) {I18n.l(Date.today, format: '%m/%d/%Y')}
expect(order.shipped_date.strftime("%m/%d/%Y")).to eq(todays_date)
This test fails with the following error:
Failure/Error: expect(order.shipped_date.strftime("%m/%d/%Y")).to eq(todays_date)
expected: "10/14/2014"
got: "10/15/2014"
When I check, the date in SQLite is one day ahead of the system date.
sqlite> select date('now');
2014-10-15
sqlite> .exit
u2#u2-VirtualBox:~/tools/$ date
Tue Oct 14 20:13:03 EDT 2014
I appreciate any help you can provide.
Thanks!
The documentation says:
Universal Coordinated Time (UTC) is used.
To get the time in the local time zone, use the localtime modifier:
select date('now', 'localtime');
Thanks to #CL, I resolved this. I now select all dates in UTC so that they compare correctly.
let(:todays_date) {I18n.l(Time.now.utc, format: '%m/%d/%Y')}
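The underlying rule is the same as in the first thread above: SQLite's date('now') is UTC, so either convert it with 'localtime' or, as done here, compare both sides in UTC. A quick Python sketch of the same sanity check (standard library only, in-memory database):

import sqlite3
from datetime import datetime, timezone

# Sketch: both values come from one clock; near midnight local time,
# date('now') (UTC) and date('now','localtime') can differ by a day.
conn = sqlite3.connect(":memory:")
utc_date, local_date = conn.execute(
    "SELECT date('now'), date('now','localtime')"
).fetchone()

print(local_date)  # matches the system's local date
print(utc_date)    # matches the UTC date, as asserted below
assert utc_date == datetime.now(timezone.utc).date().isoformat()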