I have the following rows in the database:
id | date_order | name | origin
----+---------------------+----------+---------
38 | 2016-05-10 14:00:00 | OT/00024 | GI/00005:
39 | 2016-05-26 14:00:00 | OT/00025 | GI/00005:
40 | 2016-06-11 14:00:00 | OT/00026 | GI/00005:
41 | 2016-06-27 14:00:00 | OT/00027 | GI/00005:
42 | 2016-07-13 14:00:00 | OT/00028 | GI/00005:
but they are shown in the views as:
date_order | name | origin
--------------------+----------+-------------
10/05/2016 15:00:00 | OT/00024 | GI/00005:
26/05/2016 15:00:00 | OT/00025 | GI/00005:
11/06/2016 14:00:00 | OT/00026 | GI/00005:
27/06/2016 14:00:00 | OT/00027 | GI/00005:
13/07/2016 15:00:00 | OT/00028 | GI/00005:
I changed the timezone, but I still see the difference!
When you store the datetime, you should use the context, like this:
from openerp.osv import fields
from datetime import datetime
...
# context_timestamp() converts the given (UTC) timestamp to the timezone
# found in the user's context (old openerp/v7 API)
my_date = fields.datetime.context_timestamp(cr, uid, datetime.now(), context=context)
The date stored in the database is in UTC (GMT+0). Assume the user's timezone is GMT-5: when a value is stored, 5 hours (exactly 5, no more, no less) are added to the entered local time, which gives the UTC time that goes into the database. When the same value is displayed, Odoo checks the user's timezone, finds it is GMT-5, subtracts 5 hours from the database time, and shows the result to the user.
This works well for a system used across timezones. The rule is: input is taken in the user's timezone, stored in UTC (GMT+0), and displayed in the viewer's timezone, so even a user in a different timezone sees the time correctly for their own timezone.
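As a concrete illustration of that round trip (a minimal sketch, written in R purely for brevity; the same logic applies in any language):

# A user in GMT-5 enters 09:00 local time. Note that the POSIX name "Etc/GMT+5"
# means UTC-5 (the sign convention is inverted).
entered <- as.POSIXct("2016-05-10 09:00:00", tz = "Etc/GMT+5")
format(entered, tz = "UTC")       # what gets stored: "2016-05-10 14:00:00" (+5 h)
format(entered, tz = "Etc/GMT+5") # what the user sees again: "2016-05-10 09:00:00"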
Odoo displays the datetime field in the user's timezone. In this case the timezone is GMT+1, but it becomes GMT+0 in June because of Ramadan; that's why you see the difference.
I would like my calendar to only allow the user to create new events during business hours. The catch is that the business hours are not the same each week; they depend on the date.
My table with the business hours looks like this:
+-----+-----------+----------+------------+------------+
| day | startTime | endTime | firstDate | lastDate |
+-----+-----------+----------+------------+------------+
| 6 | 08:00:00 | 12:30:00 | 2021-12-20 | NULL |
| 6 | 13:00:00 | 16:30:00 | 2021-12-20 | NULL |
| 2 | 08:00:00 | 17:00:00 | 2021-12-27 | 2021-12-27 |
| 4 | 08:00:00 | 17:00:00 | 2021-12-29 | 2021-12-29 |
+-----+-----------+----------+------------+------------+
FullCalendar's businessHours setting doesn't have an option to specify validity periods.
I have been looking at using background events but don't quite understand how to achieve the desired result that way.
How can I limit new events to specific times for each day of the week where these times vary each week?
This gets us recurring background events:
{
  startTime: '08:00:00',
  endTime: '12:30:00',
  daysOfWeek: [6],          // daysOfWeek expects an array of day numbers
  startRecur: '2021-12-20',
  display: 'background',    // was 'rendering: background' in FullCalendar v4 and earlier
  groupId: 1
},
{
  startTime: '13:00:00',
  endTime: '16:30:00',
  daysOfWeek: [6],
  startRecur: '2021-12-20',
  display: 'background',
  groupId: 1
}
To actually restrict creation to those times, the background events' shared groupId can be referenced from the selectConstraint / eventConstraint options (see https://fullcalendar.io/docs/selectConstraint), which is why both events above carry groupId: 1. For more information on recurring events see https://fullcalendar.io/docs/recurring-events
I need to replace awkward strings in R, specifically the times that are in a weird format. The data looks like this:
Date | Time | AmbientTemp
2000-01-01 | 11:00 a | 25
2000-01-01 | 11:30 a | 25.5
2000-01-01 | 11:00 p | 20
2000-01-01 | 11:30 p | 19.5
The a and p mean AM and PM respectively (obviously).
lubridate and base R cannot convert these dates to a correct format. Thus, I turned to the cumbersome str_replace_all function (from package stringr) to convert ALL my times in a large dataframe: >130000 records.
Example functions:
uploadDat$Time = str_replace_all(uploadDat$Time,"11:00 a","11:00")
uploadDat$Time = str_replace_all(uploadDat$Time,"11:00 p","23:00")
I changed the class of the times using as.character() before applying stringr's functions.
The result is perfect except for the 11 o'clock times (like the ones above), which are converted as follows:
Date | Time | AmbientTemp
2000-01-01 | 101:00 | 25
2000-01-01 | 101:30 | 25.5
2000-01-01 | 113:30 | 20
2000-01-01 | 113:30 | 19.5
Why are these specific times converted incorrectly?
We can paste an "m" onto the end of the time and convert it to POSIXct:
format(as.POSIXct(paste0(df$Time, "m"), format = "%I:%M %p"), "%T")
#[1] "11:00:00" "11:30:00" "23:00:00" "23:30:00"
I want to move my Excel calculation to Teradata but I'm not sure how to do it. In Excel it's rather easy: I use a simple IF to give me DIFF: =IF(A2=A3, (C2-B3) * 24, "")
NO T_DATE L_DATE DIFF
AAA 10/08/2019 17:02:00 10/08/2019 20:35:00 5.83
AAA 10/08/2019 14:45:00 10/08/2019 15:10:00 11.78
AAA 10/08/2019 03:23:00 10/08/2019 10:25:00 17.32
AAA 09/08/2019 17:06:00 10/08/2019 01:11:00 25.70
AAA 08/08/2019 23:29:00 09/08/2019 10:27:00
BBB 08/08/2019 09:34:00 08/08/2019 21:19:00 22.23
BBB 07/08/2019 23:05:00 08/08/2019 06:09:00 18.03
BBB 07/08/2019 12:07:00 07/08/2019 20:25:00 22.32
BBB 06/08/2019 22:06:00 07/08/2019 08:53:00 22.77
BBB 06/08/2019 10:07:00 06/08/2019 19:44:00
Is there a way of doing this in Teradata? As in Excel, I want the difference in hours between L_DATE and T_DATE for each NO.
You can use window functions to achieve this. It's important to note that when you subtract two dates or timestamps (as here), you get back an INTERVAL type, so you need to specify which kind of INTERVAL you want as well as its size (SECOND, MINUTE, HOUR, DAY, etc.).
CREATE MULTISET VOLATILE TABLE yourtable(
ID VARCHAR(3)
,T_DATE TIMESTAMP(0)
,L_DATE TIMESTAMP(0)
,DIFF NUMERIC(6,2)
) PRIMARY INDEX (ID) ON COMMIT PRESERVE ROWS;
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('AAA','2019-10-08 17:02:00','2019-10-08 20:35:00',5.83);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('AAA','2019-10-08 14:45:00','2019-10-08 15:10:00',11.78);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('AAA','2019-10-08 03:23:00','2019-10-08 10:25:00',17.32);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('AAA','2019-09-08 17:06:00','2019-10-08 01:11:00',25.70);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('AAA','2019-08-08 23:29:00','2019-09-08 10:27:00',NULL);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('BBB','2019-08-08 09:34:00','2019-08-08 21:19:00',22.23);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('BBB','2019-07-08 23:05:00','2019-08-08 06:09:00',18.03);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('BBB','2019-07-08 12:07:00','2019-07-08 20:25:00',22.32);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('BBB','2019-06-08 22:06:00','2019-07-08 08:53:00',22.77);
INSERT INTO yourtable(ID,T_DATE,L_DATE,DIFF) VALUES ('BBB','2019-06-08 10:07:00','2019-06-08 19:44:00',NULL);
SELECT yourtable.*,
CAST(((LEAD(T_DATE) OVER (PARTITION BY ID ORDER BY T_DATE) - L_DATE) HOUR(4)) AS INTEGER)
FROM yourtable;
+-----+---------------------+---------------------+--------+-------------------------------------------+
| ID | T_DATE | L_DATE | DIFF | (LEAD (<value expression>) - L_DATE) HOUR |
+-----+---------------------+---------------------+--------+-------------------------------------------+
| AAA | 2019-08-08 23:29:00 | 2019-09-08 10:27:00 | <null> | 7 |
| AAA | 2019-09-08 17:06:00 | 2019-10-08 01:11:00 | 25.70 | 2 |
| AAA | 2019-10-08 03:23:00 | 2019-10-08 10:25:00 | 17.32 | 4 |
| AAA | 2019-10-08 14:45:00 | 2019-10-08 15:10:00 | 11.78 | 2 |
| AAA | 2019-10-08 17:02:00 | 2019-10-08 20:35:00 | 5.83 | <null> |
| BBB | 2019-06-08 10:07:00 | 2019-06-08 19:44:00 | <null> | 3 |
| BBB | 2019-06-08 22:06:00 | 2019-07-08 08:53:00 | 22.77 | 4 |
| BBB | 2019-07-08 12:07:00 | 2019-07-08 20:25:00 | 22.32 | 3 |
| BBB | 2019-07-08 23:05:00 | 2019-08-08 06:09:00 | 18.03 | 3 |
| BBB | 2019-08-08 09:34:00 | 2019-08-08 21:19:00 | 22.23 | <null> |
+-----+---------------------+---------------------+--------+-------------------------------------------+
The reason this looks so ugly is that you are trying to compare (subtract) values in two different records. In a database there is no inherent relationship between one record and another, and no ordering; records live independently of one another. This is radically different from Excel, where rows have an order (a row number).
We use the window function LEAD() to declare a group of records as a partition with the PARTITION BY clause, and we give that partition an ordering with the ORDER BY clause. LEAD() then means "the very next T_DATE in this ordered partition, relative to this record".
Then we do our date math and subtract the two timestamps, specifying that we want an INTERVAL of type HOUR(4) back. That holds up to 9999 hours and will error if the difference exceeds 9999 hours.
Lastly, we cast that interval to INTEGER so you can do arithmetic on it. The cast isn't required, but it is often handy when you want to add hours together and the like.
If you are working on an older version of Teradata that doesn't have the LEAD() function (it's a newer addition), you can use MAX() or MIN() with some extra syntax in the window definition to say explicitly "just the next record's T_DATE", like:
MAX(T_DATE) OVER (PARTITION BY ID ORDER BY T_DATE ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING)
This question already has answers here:
keep only hour:minute:second from a "POSIXlt" "POSIXt" object (1 answer)
how do you pick the hour, minute and second from the posixct formated datetime in R (2 answers)
Date time conversion and extract only time (6 answers)
Closed 3 years ago.
I have observation data which only gives the hour and minute; the data looks like this:
date       | time  | value
-----------|-------|------
2019-01-01 | 00:00 | 14
2019-01-01 | 00:30 | 23
2019-01-01 | 01:00 | 32
2019-01-01 | 01:30 | 41
2019-01-02 | 00:00 | 41
2019-01-02 | 00:30 | 32
2019-01-02 | 01:00 | 23
2019-01-02 | 01:30 | 14
....
I successfully converted the date column to the Date format using
data$date <- as.Date(data$date, "%Y/%m/%d")
but when I try to convert the time to its own format I run into a problem. I tried using this:
data$time <- strptime(data$time, "%H:%M")
This gives me the time with the current date attached: "2019-03-14 00:00:00", which is not what I'm looking for; the date part is wrong. I also tried using:
trydata$jam <- timestamp(trydata$jam, "%H:%M")
which gives me the result:
%H:%M00:00 ------##
What is the best way to do this? I also want to extract the data for a certain window of time (like from 10:00 to 13:00).
Thank you!
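A minimal sketch of one common approach (not taken verbatim from the linked duplicates): since a bare time has no natural date, keep it as a normalized "HH:MM" character column; fixed-width time strings sort and compare correctly as text:

# strptime() attaches today's date internally; format() keeps only the time part
data$time <- format(strptime(data$time, format = "%H:%M"), "%H:%M")
# fixed-width "HH:MM" strings compare correctly, so a time window is a simple subset:
subset(data, time >= "10:00" & time <= "13:00")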
I want to write an R function with some optional params. It should subset some data by two core params, and then I want the option to pass additional constraints. E.g.:
filter_func <- function(start_datetime, end_datetime, user=*, type=*){
  as.data.frame(subset(df, format(df$datetime,"%Y-%m-%d %H:%M:%S") > start_datetime &
                           format(df$datetime,"%Y-%m-%d %H:%M:%S") < end_datetime &
                           df$user == user &
                           df$type == type))
}
So... if I pass a param, it constrains on that column (either user or type), but if I don't, it uses the wildcard and gets everything in the column?
I've seen examples here that use %in% or grepl(), but those seem more aimed at cases where you have part of a string and want the rest... like new_york getting both new_york_city and new_york_state... I don't want to get any values that don't exactly match the param!
edit: now with examples
So... ideally go from something like this...
start | end | user | type |
-----------------|------------------|------|------|
2017-01-01 11:00 | 2017-01-01 20:00 | usr1 | typ1 |
2017-01-01 12:00 | 2017-01-01 19:00 | usr2 | typ2 |
2017-01-01 02:00 | 2017-01-01 03:00 | usr2 | typ1 |
2017-03-01 01:00 | 2017-03-01 09:00 | usr1 | typ2 |
2017-04-01 05:00 | 2017-04-01 07:00 | usr3 | typ4 |
2017-05-01 01:00 | 2017-05-01 08:00 | usr2 | typ5 |
using my function filter_func(2017-01-01 00:00, 2017-01-01 23:59) gets me:
start | end | user | type |
-----------------|------------------|------|------|
2017-01-01 11:00 | 2017-01-01 20:00 | usr1 | typ1 |
2017-01-01 12:00 | 2017-01-01 19:00 | usr2 | typ2 |
2017-01-01 02:00 | 2017-01-01 03:00 | usr2 | typ1 |
but if I add a param, filter_func(2017-01-01 00:00, 2017-01-01 23:59, usr2) gets me:
start | end | user | type |
-----------------|------------------|------|------|
2017-01-01 12:00 | 2017-01-01 19:00 | usr2 | typ2 |
2017-01-01 02:00 | 2017-01-01 03:00 | usr2 | typ1 |
or even filter_func(2017-01-01 00:00, 2017-01-01 23:59, usr2, typ2)
start | end | user | type |
-----------------|------------------|------|------|
2017-01-01 12:00 | 2017-01-01 19:00 | usr2 | typ2 |
Firstly,
[ is safer for programmatic use than subset.
You don't need format, which turns datetime objects into strings; you need as.POSIXct or the like, which parses strings into datetimes. You could do this inside the function, but you should do it beforehand, as you'll always want your datetimes parsed, and there's no point in doing it repeatedly.
You can update a version of the data.frame internal to the function in several steps, which lets you use control flow like if. You'll still need to check whether the optional variables were supplied. Two options:
Use missing, which is built for checking whether function parameters exist.
Supply a default value of NULL and use is.null.
You'll need to pass in quoted strings or parsed datetimes (the < operators will try to coerce objects that don't match to the same class).
I added a parameter to pass in the data.frame first, which gives the function broader use, but this is not necessary.
Altogether, then,
df <- data.frame(start = c("2017-01-01 11:00", "2017-01-01 12:00", "2017-01-01 02:00",
"2017-03-01 01:00", "2017-04-01 05:00", "2017-05-01 01:00"),
end = c("2017-01-01 20:00", "2017-01-01 19:00", "2017-01-01 03:00",
"2017-03-01 09:00", "2017-04-01 07:00", "2017-05-01 08:00"),
user = c("usr1", "usr2", "usr2", "usr1", "usr3", "usr2"),
type = c( "typ1", "typ2", "typ1", "typ2", "typ4", "typ5"))
# parse in two steps if you like, e.g. df$start <- as.POSIXct(df$start)
df[1:2] <- lapply(df[1:2], as.POSIXct)
filter_func <- function(x, start_time, end_time, usr, typ = NULL){
x <- x[x$start > start_time & x$end < end_time, ]
if (!missing(usr)) {
x <- x[x$user %in% usr, ]
}
if (!is.null(typ)) {
x <- x[x$type %in% typ, ]
}
x
}
and test it:
str(df)
#> 'data.frame': 6 obs. of 4 variables:
#> $ start: POSIXct, format: "2017-01-01 11:00:00" "2017-01-01 12:00:00" ...
#> $ end : POSIXct, format: "2017-01-01 20:00:00" "2017-01-01 19:00:00" ...
#> $ user : Factor w/ 3 levels "usr1","usr2",..: 1 2 2 1 3 2
#> $ type : Factor w/ 4 levels "typ1","typ2",..: 1 2 1 2 3 4
filter_func(df, as.POSIXct('2017-01-01 00:00'), as.POSIXct('2017-01-01 23:59'))
#> start end user type
#> 1 2017-01-01 11:00:00 2017-01-01 20:00:00 usr1 typ1
#> 2 2017-01-01 12:00:00 2017-01-01 19:00:00 usr2 typ2
#> 3 2017-01-01 02:00:00 2017-01-01 03:00:00 usr2 typ1
filter_func(df, '2017-01-01 00:00', '2017-01-01 23:59')
#> start end user type
#> 1 2017-01-01 11:00:00 2017-01-01 20:00:00 usr1 typ1
#> 2 2017-01-01 12:00:00 2017-01-01 19:00:00 usr2 typ2
#> 3 2017-01-01 02:00:00 2017-01-01 03:00:00 usr2 typ1
filter_func(df, '2017-01-01 00:00', '2017-01-01 23:59', 'usr2')
#> start end user type
#> 2 2017-01-01 12:00:00 2017-01-01 19:00:00 usr2 typ2
#> 3 2017-01-01 02:00:00 2017-01-01 03:00:00 usr2 typ1
filter_func(df, '2017-01-01 00:00', '2017-01-01 23:59', 'usr2', 'typ2')
#> start end user type
#> 2 2017-01-01 12:00:00 2017-01-01 19:00:00 usr2 typ2
You need to use grepl() for pattern matching.
filter_func <- function(start_datetime, end_datetime, user_='*', type_='*'){
subset(df, as.POSIXlt(df$start) > as.POSIXlt(start_datetime) &
as.POSIXlt(df$end) < as.POSIXlt(end_datetime) &
grepl(user_, df$user) &
grepl(type_, df$type))
}
filter_func(start='2017-01-01 00:00', end='2017-01-01 23:59')
# start end user type
#1 2017-01-01 11:00 2017-01-01 20:00 usr1 typ1
#2 2017-01-01 12:00 2017-01-01 19:00 usr2 typ2
#3 2017-01-01 02:00 2017-01-01 03:00 usr2 typ1
filter_func(start='2017-01-01 00:00', end='2017-01-01 23:59', user='usr2')
# start end user type
#2 2017-01-01 12:00 2017-01-01 19:00 usr2 typ2
#3 2017-01-01 02:00 2017-01-01 03:00 usr2 typ1
filter_func(start='2017-01-01 00:00', end='2017-01-01 23:59', user='usr2', type='typ2')
# start end user type
#2 2017-01-01 12:00 2017-01-01 19:00 usr2 typ2
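One caveat worth noting about this approach: grepl() does substring/regex matching, so user_='usr2' would also match a hypothetical value like 'usr22'. Since the question asks for exact matches only, a sketch of two stricter alternatives:

grepl('^usr2$', df$user)  # regex anchored to the whole string
df$user == 'usr2'         # plain equality test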