I have imported the same database dump on two machines. When I execute the same SQL query, I see two different sets of values.
select struct_doc_id, START_DATE, END_DATE from structured_doc where struct_doc_id = 1329 order by START_DATE;
Machine 1:
1329 31-03-11 09:00:00.000000000 PM 01-01-16 08:59:59.999000000 PM
1329 01-04-11 12:00:00.000000000 AM 31-12-15 11:59:59.999000000 PM
Machine 2:
1329 01-04-11 12:00:00.000000000 AM 31-12-15 11:59:59.999000000 PM
1329 01-04-11 12:00:00.000000000 AM 31-12-15 11:59:59.999000000 PM
I also executed this SQL:
select dbtimezone, sessiontimezone, systimestamp, current_timestamp
from dual;
and the results on both the machines are:
Machine 1:
-07:00 Asia/Calcutta 09-02-16 02:15:55.422190000 AM -08:00 09-02-16 03:45:55.422204000 PM ASIA/CALCUTTA
Machine 2:
-07:00 Asia/Calcutta 09-02-16 05:23:20.703408000 AM -05:00 09-02-16 03:53:20.703418000 PM ASIA/CALCUTTA
Note: I have two databases running on two different machines.
Can anyone tell me the possible reasons for the difference in values when running the first SQL query?
One possible reason I can see:
Check the datatype of the columns in structured_doc. Is it TIMESTAMP, TIMESTAMP WITH LOCAL TIME ZONE, or TIMESTAMP WITH TIME ZONE?
For the time-zone-aware types, Oracle will automatically adjust the displayed value according to the client session's time zone attributes.
I guess you're executing the import remotely in one database, and locally in the other.
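A rough sketch of how to check, using the table and column names from the question (the session time zone switch only matters if the type turns out to be TIMESTAMP WITH LOCAL TIME ZONE):
select column_name, data_type
  from all_tab_columns
 where table_name = 'STRUCTURED_DOC'
   and column_name in ('START_DATE', 'END_DATE');

-- If the type is TIMESTAMP WITH LOCAL TIME ZONE, the stored value is shown
-- in the session time zone, so the same row can display differently per client:
alter session set time_zone = 'Asia/Calcutta';
select start_date, end_date from structured_doc where struct_doc_id = 1329;

alter session set time_zone = '-08:00';
select start_date, end_date from structured_doc where struct_doc_id = 1329;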
Best Regards
My CentOS 7.6 server is physically located in Germany but is configured so that it presents me (in London) with the correct local time and UTC time.
$ date
Tue Jul 16 08:31:51 BST 2019
$ date -u
Tue Jul 16 07:31:55 UTC 2019
But inside an SQLite database, things aren't right.
$ !sql
sqlite3 apollo.db
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select datetime('now');
2019-07-16 07:32:48
sqlite> select datetime('now', 'localtime');
2019-07-16 08:32:53
sqlite> select datetime('now', 'utc');
2019-07-16 06:32:58
It's displaying the local time correctly, but UTC is an hour out. Is there a config setting I can tweak to fix this?
Update: Ok. Having read the documentation a bit more carefully, it seems I was misunderstanding and this behaviour is a) correct and b) expected. select datetime('now') is the correct way to get UTC time. So now I'm just a bit confused as to what select datetime('now', 'utc') is doing (probably nothing useful).
The last paragraph before the examples in the documentation explains it:
The "utc" modifier is the opposite of "localtime".
"utc" assumes that the string to its left is in the local timezone and adjusts that string to be in UTC.
If the prior string is not in localtime, then the result of "utc" is undefined.
SQL As Understood By SQLite - Date And Time Functions
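In other words, with the values from the question (local time being BST, i.e. UTC+1):
datetime('now')              -> 2019-07-16 07:32:48  (already UTC)
datetime('now', 'localtime') -> 2019-07-16 08:32:53  (the UTC value shifted to local BST)
datetime('now', 'utc')       -> 2019-07-16 06:32:58  (the UTC value treated as if it were
                                local BST, then shifted back another hour)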
As in a similar question on this forum, I need to combine a date and an AM/PM time into a datetime, but I need it to work in Impala.
As a workaround, my colleague and I attempted the following:
-- combine start date and time into a datetime
-- impala can't handle am/pm so need to look for pm indicator and add 12 hours
-- and then subtract 12 hours if it's 12:xx am or pm
================
t1.appt_time,
hours_add(
  to_timestamp(concat(to_date(t1.appt_date), ' ', t1.appt_time), 'yyyy-MM-dd H:mm'),
  12 * decode(lower(strright(t1.appt_time, 2)), 'pm', 1, 0) -
  12 * decode(strleft(t1.appt_time, 2), '12', 1, 0)
) as appt_datetime,
t1. ...
================
Does anybody have an easier and more elegant approach?
Your workaround is valid; Impala does not currently support AM/PM formatting for dates. There are a few related open issues:
https://issues.apache.org/jira/browse/IMPALA-3381
https://issues.apache.org/jira/browse/IMPALA-5237
https://issues.apache.org/jira/browse/IMPALA-2262
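For reference, the same workaround written out as a complete query might look like this (the table name appointments is hypothetical, appt_date is assumed to be a TIMESTAMP, and appt_time a string such as '1:30 PM'):
select
  t1.appt_time,
  hours_add(
    to_timestamp(concat(to_date(t1.appt_date), ' ', t1.appt_time), 'yyyy-MM-dd H:mm'),
    12 * decode(lower(strright(t1.appt_time, 2)), 'pm', 1, 0) -  -- add 12h for PM
    12 * decode(strleft(t1.appt_time, 2), '12', 1, 0)            -- subtract 12h for 12:xx
  ) as appt_datetime
from appointments t1;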
What am I doing wrong?
I set PARALLEL=4, but the number of files created is 3.
time expdp data DIRECTORY=EXT_DIR TABLES=DATA.ST_EURKMORDER:P108 LOGFILE=log.txt CONTENT=DATA_ONLY COMPRESSION=DATA_ONLY DUMPFILE=DATA.ST_EURKMORDER_P108_compr_%U_out_of_4.dmp PARALLEL=4
Expected 4 files, but got 3:
ls -alh /data/DATA.ST_EURKMORDER_P108_compr_1*
-rw-r----- 1 oracle oinstall 170M Apr 11 13:38 /data/DATA.ST_EURKMORDER_P108_compr_01_out_of_4.dmp
-rw-r----- 1 oracle oinstall 159M Apr 11 13:38 /data/DATA.ST_EURKMORDER_P108_compr_02_out_of_4.dmp
-rw-r----- 1 oracle oinstall 151M Apr 11 13:38 /data/DATA.ST_EURKMORDER_P108_compr_03_out_of_4.dmp
According to the documentation, the PARALLEL setting:
Specifies the maximum number of processes of active execution operating on behalf of the export job.
It also shows an example similar to yours with PARALLEL set to four, which it says results in an export
... in which up to four files could be created ...
There are various other examples that refer to 'up to' as well. So this is the expected behaviour: it could create four files, or it could create fewer than that.
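If you want to see what degree of parallelism the job actually got, one option (a sketch, assuming you have the privileges to query Oracle's Data Pump dictionary views while the export is running) is:
select owner_name, job_name, state, degree
  from dba_datapump_jobs;

select *
  from dba_datapump_sessions;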
I am using Splunk 6.2.X along with Django bindings to create a Splunk app.
To get the earliest/latest dates from the time range picker, I am using the following in my JS:
mysearchbar.timerange.val()
I am getting back a map where the values are in epoch format:
Object {earliest_time: 1440122400, latest_time: 1440124200}
When I convert them using moment as follows, I get a different datetime than expected:
> moment.unix('1440122400').utc().toString()
"Fri Aug 21 2015 02:00:00 GMT+0000"
However, the time does not correspond to the values selected on the time range picker, i.e. 08/20/2015 22:00:00.000.
I am not sure what is causing the difference. I am sure the time zone is not the factor, as the difference does not consistently match a simple time zone offset added or subtracted.
An explanation of this behaviour, and of how to convert the Splunk epoch time to UTC, would be helpful.
I was able to get rid of the timezone issue by performing the following:
Setting the timezone of the Splunk engine to UTC in props.conf, as shown in the sketch after this list:
TZ = GMT
Setting the CentOS server hosting Splunk to UTC
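For the first item, the props.conf change is just the TZ attribute; a minimal sketch, assuming a hypothetical sourcetype stanza (use whatever stanza matches your data):
[my_sourcetype]
TZ = GMT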
Hope this helps anyone else who stumbles upon similar issues.
Thanks.
I have a test that checks to see if an item was shipped today.
let(:todays_date) {I18n.l(Date.today, format: '%m/%d/%Y')}
expect(order.shipped_date.strftime("%m/%d/%Y")).to eq(todays_date)
This test fails with the following error:
Failure/Error: expect(order.shipped_date.strftime("%m/%d/%Y")).to eq(todays_date)
expected: "10/14/2014"
got: "10/15/2014"
When I check, the date in SQLite is one day ahead of the system date.
sqlite> select date('now');
2014-10-15
sqlite> .exit
u2#u2-VirtualBox:~/tools/$ date
Tue Oct 14 20:13:03 EDT 2014
I appreciate any help you can provide.
Thanks!
The documentation says:
Universal Coordinated Time (UTC) is used.
To get the time in the local time zone, use the localtime modifier:
select date('now', 'localtime');
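With the system clock from the question (Tue Oct 14 20:13 EDT, which is already past midnight UTC), the two calls show the difference:
sqlite> select date('now');
2014-10-15
sqlite> select date('now', 'localtime');
2014-10-14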
Thanks to @CL, I resolved this. I now generate all dates in UTC so that they compare correctly.
let(:todays_date) {I18n.l(Time.now.utc, format: '%m/%d/%Y')}