Why do I not get the same result when I type the query on the command line directly on the Oracle server, after logging in with
/u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus
vis/passwd#10.252.41.123:1521/AA
SQL> select count(*) from dispo where period = to_date('2017-10-01','YYYY-MM-DD') ;
result: 0
as when I use the Oracle SQL Developer software with a connection to the same vis/passwdv#10.252.41.123:1521/AA?
select count(*) from dispo where period = to_date('2017-10-01','YYYY-MM-DD')
result: 20000
Your SQL*Plus and SQL Developer sessions are in, or think they are in, different time zones.
The static date you're using for the filter is being implicitly converted to the local time zone. When that happens in your SQL*Plus session/time zone it still matches the timestamps you have stored. When you do it in the SQL Developer session/time zone it doesn't match.
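A quick way to confirm this (just a suggested diagnostic, not part of the original question) is to run the same check in both clients and compare what each session reports:
select sessiontimezone, dbtimezone from dual;
If SQL*Plus and SQL Developer show different SESSIONTIMEZONE values, that difference is what makes the same date literal match in one client and not the other.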
You can demonstrate this in a single session by altering the session settings, with a single dummy row created in a UTC session:
create table dispo (id number, period timestamp(6) with local time zone);
alter session set time_zone = 'UTC';
insert into dispo (id, period) values (1, timestamp '2017-10-01 00:00:00');
then querying in the same time zone sees that data:
alter session set time_zone = 'UTC';
select cast(date '2017-01-01' as timestamp with local time zone) as local_time from dual;
LOCAL_TIME
-------------------
2017-01-01 00:00:00
select * from dispo where period = date '2017-10-01';
ID PERIOD
---------- -------------------
1 2017-10-01 00:00:00
but querying in a different time zone does not:
alter session set time_zone = 'Europe/London';
select cast(date '2017-01-01' as timestamp with local time zone) as local_time from dual;
LOCAL_TIME
-------------------
2017-01-01 00:00:00
select * from dispo where period = date '2017-10-01';
no rows selected
You can avoid this by specifying which time zone the fixed value is in:
alter session set time_zone = 'UTC';
select * from dispo where period = timestamp '2017-10-01 00:00:00 UTC';
ID PERIOD
---------- -------------------
1 2017-10-01 00:00:00
alter session set time_zone = 'Europe/London';
select * from dispo where period = timestamp '2017-10-01 00:00:00 UTC';
ID PERIOD
---------- -------------------
1 2017-10-01 00:00:00
Obviously use whichever time zone the data was actually created in...
My value in the table is: FY20 JAN
I am looking for 'FY20 (M01) JAN'. How can I convert it like this in an Oracle 11g SQL query?
First, convert your string to a value of DATE type. Anything enclosed in double quotes in the format mask is treated as a literal: TO_DATE skips over those characters as long as they appear at the same positions in the input. Here FY occupies positions 1 and 2.
alter session set nls_date_format = 'yyyy-mm-dd';
select to_date('FY20 JAN', '"FY"yy MON') d from dual;
D
----------
2020-01-01
Then apply another function, TO_CHAR, to the date value obtained above to produce the desired output.
select to_char(
to_date('FY20 JAN', '"FY"YY MON')
, '"FY"yy "(M"mm")" MON'
) c from dual;
C
-----------------------
FY20 (M01) JAN
I have a Teradata table ABC with a column of the PERIOD data type (the column is named ef_dtm). I need to update the starting bound of the period column (subtract 1 day from it) whenever the starting bound is '12/31/9999'.
I am using the query below, but it fails with
INVALID Interval Literal.
Can you suggest a working update query?
Nonsequenced validtime
update ABC
set ef_dtm = PERIOD(CAST(end(ef_dtm) as Date) -INTERVAL '-1' DAY , end(ef_dtm))
where begin(ef_dtm) = '12/31/9999'
The error is caused by the part INTERVAL '-1' DAY.
It should be INTERVAL -'1' DAY, i.e. the minus sign outside the '1'.
Your query has two more problems:
There is no need to cast the period begin to DATE, since INTERVAL arithmetic works on TIMESTAMP.
The date literal is wrong: it should be in YYYY-MM-DD format, and it should be a TIMESTAMP literal to match the period column's element type.
The corrected query is below.
nonsequenced validtime
UPDATE ABC
SET ef_dtm = PERIOD(begin(ef_dtm) + INTERVAL -'1' DAY, end(ef_dtm))
WHERE begin(ef_dtm) = TIMESTAMP '1999-12-31 00:00:00.000000';
OR
nonsequenced validtime
UPDATE ABC
SET ef_dtm = PERIOD(begin(ef_dtm) - INTERVAL '1' DAY, end(ef_dtm))
WHERE begin(ef_dtm) = TIMESTAMP '1999-12-31 00:00:00.000000';
DEMO
Create Table:
CREATE TABLE ABC ( ef_dtm period(timestamp(6)) AS validtime ) NO PRIMARY INDEX;
Insert Data:
INSERT INTO abc(period (TIMESTAMP '1999-12-31 00:00:00.000000', TIMESTAMP '1999-12-31 23:59:00.000000'));
After select
ef_dtm
------------------------------------------------------------
('1999-12-31 00:00:00.000000', '1999-12-31 23:59:00.000000')
Update Data:
nonsequenced validtime
UPDATE ABC
SET ef_dtm = PERIOD(begin(ef_dtm) + INTERVAL -'1' DAY, end(ef_dtm))
WHERE begin(ef_dtm) = TIMESTAMP '1999-12-31 00:00:00.000000';
After SELECT
ef_dtm
------------------------------------------------------------
('1999-12-30 00:00:00.000000', '1999-12-31 23:59:00.000000')
I want to put a filter on an Informix query:
WHERE agentstatedetail.eventdatetime < '1753-01-01 00:00:00' - INTERVAL(3) DAY TO DAY
but it fails.
Please tell me where it goes wrong.
As noted in a comment, the solution is to ensure that the string is interpreted as a DATETIME value. The simple way to do that is to use the DATETIME literal notation:
DATETIME(1753-01-01 00:00:00) YEAR TO SECOND
To demonstrate:
CREATE TABLE agentstatedetail
(
eventdatetime DATETIME YEAR TO SECOND NOT NULL PRIMARY KEY,
eventname VARCHAR(64) NOT NULL
);
INSERT INTO agentstatedetail VALUES('1752-12-25 12:00:00', 'Christmas Day, Noon, 1752');
INSERT INTO agentstatedetail VALUES('1752-12-31 12:00:00', 'New Year''s Eve, Noon, 1752');
INSERT INTO agentstatedetail VALUES('1753-01-01 12:00:00', 'New Year''s Day, Noon, 1753');
SELECT * FROM agentstatedetail WHERE agentstatedetail.eventdatetime < '1753-01-01 00:00:00' - INTERVAL(3) DAY TO DAY;
This is your original WHERE clause embedded into a minimal SELECT statement. It yields the error:
SQL -1261: Too many digits in the first field of datetime or interval.
(NB: It would have been helpful to include the error message in the question.)
Here's an alternative version of the query, with the DATETIME literal in place:
SELECT * FROM agentstatedetail
WHERE agentstatedetail.eventdatetime < DATETIME(1753-01-01 00:00:00) YEAR TO SECOND -
INTERVAL(3) DAY TO DAY
;
Output from the sample data:
1752-12-25 12:00:00|Christmas Day, Noon, 1752
I observe that the value calculated is a constant; you could rewrite the code as:
SELECT * FROM agentstatedetail
WHERE agentstatedetail.eventdatetime < DATETIME(1752-12-29 00:00:00) YEAR TO SECOND
I suspect that the value is passed as a parameter somewhere along the line.
Alternatively, you can cast the string to a DATETIME value and you'd get the same result:
SELECT * FROM agentstatedetail
WHERE agentstatedetail.eventdatetime < CAST('1753-01-01 00:00:00' AS DATETIME YEAR TO SECOND) -
INTERVAL(3) DAY TO DAY
;
or:
SELECT * FROM agentstatedetail
WHERE agentstatedetail.eventdatetime < '1753-01-01 00:00:00'::DATETIME YEAR TO SECOND -
INTERVAL(3) DAY TO DAY
I have a certain DATETIME value, and I would like to get the DATETIME value for a given weekday 'n' (where n is an integer from 1 through 7) that is just before the given date.
Question: How would I do this given a value for currentDate and a value for lastWeekDay?
For example, if the given date is 06/15/2015 (mm/dd/yyyy format), what is the date for weekday 6 that came just before 06/15/2015? In this example, the given date is a Monday and we want the date of the previous Friday (i.e. weekday = 6).
declare @currentDate datetime, @lastWeekDay int;
set @currentDate = getdate();
set @lastWeekDay = 6; --this could be any value from 1 through 7
select @currentDate as CurrentDate, '' as LastWeekDayDate --I need to get this date
UPDATE 1
In addition to the excellent answer by Anon, I also found an alternate way of doing it, which is as given below.
DECLARE @currentWeekDay INT;
SET @currentWeekDay = DATEPART(WEEKDAY, @currentDate);
--Case 1: when the current date's weekday > lastWeekDay, subtract
--        the difference between the two weekdays
--Case 2: when the current date's weekday <= lastWeekDay, go back 7 days from
--        the current date, and then add (lastWeekDay - currentWeekDay)
SELECT
    @currentDate AS CurrentDate,
    CASE
        WHEN @currentWeekDay > @lastWeekDay THEN DATEADD(DAY, -1 * ABS(CAST(@lastWeekDay AS INT) - CAST(@currentWeekDay AS INT)), @currentDate)
        ELSE DATEADD(DAY, @lastWeekDay - DATEPART(WEEKDAY, DATEADD(DAY, -7, @currentDate)), DATEADD(DAY, -7, @currentDate))
    END AS LastWeekDayDate;
Calculate how many days have passed since a fixed date, modulo 7, and subtract that from the input date. The magic number '5' is because Date Zero (1900-01-01) is a Monday. Shifting that forward 5 days makes the @lastWeekDay range [1..7] map to the range of weekdays [Sunday..Saturday].
SELECT DATEADD(day, -DATEDIFF(day, 5 + @lastWeekDay, @currentDate) % 7, @currentDate)
I avoid the DATEPART(weekday, [...]) function because its result depends on the SET DATEFIRST setting.
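As an illustrative sanity check of that formula (this snippet is my own addition, not part of the original answer), the question's example date 2015-06-15 is a Monday, so the previous weekday 6 (Friday) should come out as 2015-06-12:
DECLARE @currentDate datetime, @lastWeekDay int;
SET @currentDate = '20150615'; -- a Monday
SET @lastWeekDay = 6;          -- Friday
SELECT DATEADD(day, -DATEDIFF(day, 5 + @lastWeekDay, @currentDate) % 7, @currentDate) AS LastWeekDayDate;
-- expected result: 2015-06-12 00:00:00.000 (the previous Friday)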
I have a table of events in MySQL, each with a StartTime and EndTime (of type DATETIME).
I'm trying to output the sum of overlapping times and the number of events that overlapped.
What is the most efficient / simple way to perform this query in MySQL?
CREATE TABLE IF NOT EXISTS `events` (
`EventID` int(10) unsigned NOT NULL auto_increment,
`StartTime` datetime NOT NULL,
`EndTime` datetime default NULL,
PRIMARY KEY (`EventID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=37 ;
INSERT INTO `events` (`EventID`, `StartTime`, `EndTime`) VALUES
(10001, '2009-02-09 03:00:00', '2009-02-09 10:00:00'),
(10002, '2009-02-09 05:00:00', '2009-02-09 09:00:00'),
(10003, '2009-02-09 07:00:00', '2009-02-09 09:00:00');
# if the query was run using the data above,
# the table below would be the desired output
# Number of Overlapped Events | Total Amount of Time those events overlapped.
1, 03:00:00
2, 02:00:00
3, 02:00:00
The purpose of these results is to generate a bill for hours used. (If you have one event running, you might pay 10 dollars per hour; but if two events are running, you only pay 8 dollars per hour, and only for the period of time during which two events were running.)
Try this:
SELECT `Count`, SEC_TO_TIME(SUM(Duration))
FROM (
    SELECT
        COUNT(*) AS `Count`,
        UNIX_TIMESTAMP(Times2.Time) - UNIX_TIMESTAMP(Times1.Time) AS Duration
    FROM (
        SELECT @rownum1 := @rownum1 + 1 AS rownum, `Time`
        FROM (
            SELECT DISTINCT(StartTime) AS `Time` FROM events
            UNION
            SELECT DISTINCT(EndTime) AS `Time` FROM events
        ) AS AllTimes, (SELECT @rownum1 := 0) AS Rownum
        ORDER BY `Time` DESC
    ) AS Times1
    JOIN (
        SELECT @rownum2 := @rownum2 + 1 AS rownum, `Time`
        FROM (
            SELECT DISTINCT(StartTime) AS `Time` FROM events
            UNION
            SELECT DISTINCT(EndTime) AS `Time` FROM events
        ) AS AllTimes, (SELECT @rownum2 := 0) AS Rownum
        ORDER BY `Time` DESC
    ) AS Times2
        ON Times1.rownum = Times2.rownum + 1
    JOIN events ON Times1.Time >= events.StartTime AND Times2.Time <= events.EndTime
    GROUP BY Times1.rownum
) Totals
GROUP BY `Count`
Result:
1, 03:00:00
2, 02:00:00
3, 02:00:00
If this doesn't do what you want, or you want some explanation, please let me know. It could be made faster by storing the repeated subquery AllTimes in a temporary table, but hopefully it runs fast enough as it is.
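A rough sketch of that speed-up (my suggestion only, untested against your data; the AllTimes name is reused from the derived tables above). A plain scratch table is used rather than a TEMPORARY one, because MySQL cannot open the same TEMPORARY table twice in a single query, and both Times1 and Times2 need to read it:
DROP TABLE IF EXISTS AllTimes;
CREATE TABLE AllTimes AS
SELECT StartTime AS `Time` FROM events
UNION -- UNION already removes duplicates
SELECT EndTime FROM events;
The two inner (SELECT DISTINCT ... UNION ...) derived tables in the query above can then be replaced by plain SELECTs from AllTimes.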
Start with a table that contains a single datetime field as its primary key, and populate that table with every time value you're interested in. A leap year has 527040 minutes (31622400 seconds), so this table might get big if your events span several years.
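As a rough illustration of that setup (the instant table name matches the query below; the digits helper table is invented for this sketch), one way to fill a single day with one row per minute:
CREATE TABLE instant (dt DATETIME NOT NULL PRIMARY KEY);
CREATE TABLE digits (d INT NOT NULL PRIMARY KEY);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
-- 1440 minutes for 2009-02-09; repeat for each further day you need to cover
INSERT INTO instant (dt)
SELECT TIMESTAMP('2009-02-09 00:00:00') + INTERVAL (a.d + 10*b.d + 100*c.d + 1000*e.d) MINUTE
FROM digits a, digits b, digits c, digits e
WHERE a.d + 10*b.d + 100*c.d + 1000*e.d < 1440;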
Now join against this table doing something like
SELECT i.dt as instant, count(*) as events
FROM instant i JOIN event e ON i.dt BETWEEN e.start AND e.end
WHERE i.dt BETWEEN ? AND ?
GROUP BY i.dt
Having an index on instant.dt may let you forgo an ORDER BY.
If events are added infrequently, this may be something you want to precalculate by running the query offline, populating a separate table.
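If you do precalculate, the separate table might look something like this sketch (events_per_minute is a name invented here; the instant/event names and columns follow the query above):
CREATE TABLE events_per_minute (
    dt DATETIME NOT NULL PRIMARY KEY,
    events INT NOT NULL
);
INSERT INTO events_per_minute (dt, events)
SELECT i.dt, COUNT(*)
FROM instant i JOIN event e ON i.dt BETWEEN e.start AND e.end
GROUP BY i.dt;
A scheduled job can then refresh this table whenever new events are added.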
I would suggest an in-memory structure holding start-time, end-time, #events... (This is simplified to whole hours, but using Unix time gives up-to-the-second accuracy.)
For every event, insert it as-is if there is no overlap; otherwise, find the overlap and split the existing entry into (up to 3) parts that may be overlapping. With your example data, starting from the first event:
Event 1 starts at 3am and ends at 10am: just add the event, since there are no overlaps:
3,10,1
Event 2 starts at 5am and ends at 9am: it overlaps, so split the original and add the new part with an extra "#events":
3,5,1
5,9,2
9,10,1
Event 3 starts at 7am and ends at 9am: it also overlaps, so do the same with all affected periods:
3,5,1
5,7,2
7,9,3
9,10,1
So calculating the overlap hours per #events:
1 event = (5-3) + (10-9) = 3 hours
2 events = 7-5 = 2 hours
3 events = 9-7 = 2 hours
It would make sense to run this as a background process if there are many events to compare.