Select records within a time range - MariaDB

I would like some help: the clinic has several doctors and each one has a specific care period. Example: 07:00 to 12:00, 12:00 to 17:00, 09:00 to 15:00. What is the SQL statement to display only the records whose start_time and end_time fields cover the current time?
Fields:
start_time | end_time
07:00:00 | 12:30:00
09:00:00 | 15:00:00
12:30:00 | 17:00:00
07:00:00 | 17:00:00
That is, in the morning, display only the records whose range covers the current time between 07:00:00 and 12:30:00; if it's afternoon, show only the records whose range falls between 12:30:00 and 17:00:00.
Thanks.
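In MariaDB, a filter like this is typically written as CURTIME() BETWEEN start_time AND end_time. Here is a minimal sketch of the comparison logic, using Python's sqlite3 module as a stand-in for the database (the table name schedules and the fixed "current time" are assumptions for illustration):

```python
import sqlite3

# In-memory database standing in for the MariaDB table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedules (start_time TEXT, end_time TEXT)")
conn.executemany(
    "INSERT INTO schedules VALUES (?, ?)",
    [("07:00:00", "12:30:00"),
     ("09:00:00", "15:00:00"),
     ("12:30:00", "17:00:00"),
     ("07:00:00", "17:00:00")],
)

# In MariaDB itself the clause would be:
#   WHERE CURTIME() BETWEEN start_time AND end_time
# Here a fixed time is passed in; 'HH:MM:SS' strings compare correctly.
now = "10:30:00"
rows = conn.execute(
    "SELECT start_time, end_time FROM schedules "
    "WHERE ? BETWEEN start_time AND end_time",
    (now,),
).fetchall()
print(rows)
```

At 10:30 this returns the first, second, and fourth rows of the sample data, i.e. all ranges that contain the current time.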


Update year only in column timestamp date field SQLITE

I want to update only the year to 2025, without changing the month, day, and time.
what I have
2027-01-01 09:30:00
2012-03-06 12:00:00
2014-01-01 17:24:00
2020-07-03 04:30:00
2020-01-01 05:50:00
2021-09-03 06:30:00
2013-01-01 23:30:00
2026-01-01 08:30:00
2028-01-01 09:30:00
What I require is below:
2025-01-01 09:30:00
2025-03-06 12:00:00
2025-01-01 17:24:00
2025-07-03 04:30:00
2025-01-01 05:50:00
2025-09-03 06:30:00
2025-01-01 23:30:00
2025-01-01 08:30:00
2025-01-01 09:30:00
I am using DB Browser for SQLite.
What I have tried, but it didn't work:
update t set
d = datetime(strftime('%Y', datetime(2059)) || strftime('-%m-%d', d));
You may update via a substring operation:
UPDATE yourTable
SET ts = '2025-' || SUBSTR(ts, 6, 14);
Note that SQLite does not actually have a timestamp/datetime type. Instead, these values would be stored as text, and hence we can do a substring operation on them.
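A quick way to sanity-check that substring update is to run it against an in-memory SQLite database from Python (the table and column names yourTable and ts are taken from the answer above; the sample rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (ts TEXT)")
conn.executemany("INSERT INTO yourTable VALUES (?)",
                 [("2027-01-01 09:30:00",), ("2012-03-06 12:00:00",)])

# Replace the 4-digit year while keeping '-MM-DD HH:MM:SS'
# (14 characters starting at position 6).
conn.execute("UPDATE yourTable SET ts = '2025-' || SUBSTR(ts, 6, 14)")

updated = [r[0] for r in conn.execute("SELECT ts FROM yourTable")]
print(updated)  # ['2025-01-01 09:30:00', '2025-03-06 12:00:00']
```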

Duplicate NETCDF data values across timesteps using R

I have 6-hourly data and will like to 'duplicate' it to hourly data.
The first 6-hour timestep starts on 2017-01-01 00:00:00 and the next 6-hour timestep starts on 2017-01-01 06:00:00. I would like to copy the value of 2017-01-01 00:00:00 and assign it to the next 5 time steps and so on ...
The output should follow this pattern (illustration only):
Date Time Value
2017-01-01 00:00:00 0.00012120
2017-01-01 01:00:00 0.00012120
2017-01-01 02:00:00 0.00012120
2017-01-01 03:00:00 0.00012120
2017-01-01 04:00:00 0.00012120
2017-01-01 05:00:00 0.00012120
.
.
.
2019-12-01 00:00:00 0.0024270
2019-12-01 01:00:00 0.0024270
2019-12-01 02:00:00 0.0024270
2019-12-01 03:00:00 0.0024270
2019-12-01 04:00:00 0.0024270
2019-12-01 05:00:00 0.0024270
.
.
.
Do the same for the next 6-hour timestep which is 2017-01-01 06:00:00 in the attached file.
Assume that the hourly rainfall remains constant during the 6h period. Thus, each hour in the 6h period has the same rainfall value.
Sample NETCDF data are found here
First create 5 NetCDF files with the time shifted by 1, 2, 3, 4 and 5 hours:
cdo -shifttime,1hour testing.nc testing1.nc
cdo -shifttime,2hour testing.nc testing2.nc
cdo -shifttime,3hour testing.nc testing3.nc
cdo -shifttime,4hour testing.nc testing4.nc
cdo -shifttime,5hour testing.nc testing5.nc
Then merge them using mergetime:
cdo mergetime testing*.nc out.nc
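If cdo is not available, the same forward-fill can be sketched in plain Python once the values have been read out of the NetCDF file (the timestamps and values below are illustrative, not taken from the linked file):

```python
from datetime import datetime, timedelta

# 6-hourly timestamps and values (illustrative numbers only).
times_6h = [datetime(2017, 1, 1, 0) + timedelta(hours=6 * i) for i in range(2)]
values_6h = [0.00012120, 0.00024240]

# Repeat each 6-hour value for the 6 hourly steps it covers.
hourly_times = [t + timedelta(hours=h) for t in times_6h for h in range(6)]
hourly_values = [v for v in values_6h for _ in range(6)]

for t, v in zip(hourly_times, hourly_values):
    print(t, v)
```

Each 6-hourly value is simply repeated six times, matching the assumption that rainfall is constant within the 6 h period.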

getting the current time at specific timezone

I have a data frame with dates and times in different time zones, and I want to compare each row with the current time in that row's time zone. Specifically, I want to add 1 hr to the Date and time column below and then compare it with the current time in that time zone; for the first row, for example, the time zone is EDT and the current time is 2017-07-18 10:20 EDT.
Date and time TZ
2017-07-08 16:00 EDT
2017-07-17 15:30 PDT
2017-07-17 11:00 EDT
2017-07-17 20:00 EDT
2017-07-17 10:00 EDT
2017-07-13 15:00 PDT
where EDT is "America/New_York" and PDT is the Pacific time zone.
The raw data only had the time; later, using city names, I created a column flagging each row as "EDT" or "PDT". I'm not sure how to proceed from here; I tried the approach from "time zone not changing when specified", without success.
Time zones are really tricky, and my system's default time zone is "America/New_York", so I'm not sure whether what I tried was wrong.
Can anyone give me an idea of how to get the local time in a column?
my desired output is:
Date and time | TZ | Localtime(current)
2017-07-08 16:00 | EDT | 2017-07-18 10:24:19 EDT
2017-07-17 15:30 | PDT | 2017-07-18 09:25:19 PDT
2017-07-17 11:00 | CDT | 2017-07-18 09:25:19 CDT
2017-07-17 20:00 | EDT | 2017-07-18 23:02:19 EDT
2017-07-17 10:00 | EDT | 2017-07-18 10:24:19 EDT
2017-07-13 15:00 | PDT | 2017-07-18 09:25:19 PDT
library(lubridate)
currentTime <- Sys.time()
tzs <- c("America/Los_Angeles", "America/Chicago", "America/New_York")
names(tzs) <- c("PDT", "CDT", "EDT")
lapply(tzs[df$TZ], with_tz, time = currentTime)
As suggested, use with_tz from lubridate, then loop through.
If you don't want/need to use lubridate, this gives the same result, using the tzs and currentTime objects from @troh's answer:
Map(`attr<-`, currentTime, "tzone", tzs[df$TZ])
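For comparison, the same per-abbreviation conversion can be sketched with Python's standard-library zoneinfo module (the fixed UTC instant is an assumption for illustration; in practice you would use the actual current time):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Map the data's abbreviations to IANA zone names, mirroring the tzs vector above.
tzs = {"PDT": "America/Los_Angeles", "CDT": "America/Chicago", "EDT": "America/New_York"}

# A fixed 'current' instant in UTC (assumed for this example).
now_utc = datetime(2017, 7, 18, 14, 24, 19, tzinfo=timezone.utc)

# Convert the single instant into each zone's local wall-clock time.
local = {abbr: now_utc.astimezone(ZoneInfo(name)) for abbr, name in tzs.items()}
print(local["EDT"].strftime("%Y-%m-%d %H:%M:%S"))  # 2017-07-18 10:24:19
```

The same instant rendered in "America/Los_Angeles" comes out three hours earlier, which matches the EDT/PDT offsets in the desired output.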

Maintain date time stamp when calculating time intervals

I calculated the time intervals between date and time based on location and sensor. Here is some of my data:
datehour <- c("2016-03-24 20","2016-03-24 06","2016-03-24 18","2016-03-24 07","2016-03-24 16",
"2016-03-24 09","2016-03-24 15","2016-03-24 09","2016-03-24 20","2016-03-24 05",
"2016-03-25 21","2016-03-25 07","2016-03-25 19","2016-03-25 09","2016-03-25 12",
"2016-03-25 07","2016-03-25 18","2016-03-25 08","2016-03-25 16","2016-03-25 09",
"2016-03-26 20","2016-03-26 06","2016-03-26 18","2016-03-26 07","2016-03-26 16",
"2016-03-26 09","2016-03-26 15","2016-03-26 09","2016-03-26 20","2016-03-26 05",
"2016-03-27 21","2016-03-27 07","2016-03-27 19","2016-03-27 09","2016-03-27 12",
"2016-03-27 07","2016-03-27 18","2016-03-27 08","2016-03-27 16","2016-03-27 09")
location <- c(1,1,2,2,3,3,4,4,"out","out",1,1,2,2,3,3,4,4,"out","out",
1,1,2,2,3,3,4,4,"out","out",1,1,2,2,3,3,4,4,"out","out")
sensor <- c(1,16,1,16,1,16,1,16,1,16,1,16,1,16,1,16,1,16,1,16,
1,16,1,16,1,16,1,16,1,16,1,16,1,16,1,16,1,16,1,16)
Temp <- c(35,34,92,42,21,47,37,42,63,12,35,34,92,42,21,47,37,42,63,12,
35,34,92,42,21,47,37,42,63,12,35,34,92,42,21,47,37,42,63,12)
df <- data.frame(datehour,location,sensor,Temp)
I used the following code to calculate the time differences. However, it does not maintain the correct date hour with each entry; see columns datehour1 and datehour2.
df$datehour <- as.POSIXct(df$datehour, format = "%Y-%m-%d %H")
final.time.df <- setDT(df)[order(datehour, location, sensor),
                           .(difftime(datehour[-length(datehour)], datehour[-1], unit = "hour"),
                             datehour1 = datehour[1], datehour2 = datehour[2]),
                           .(location, sensor)]
I would like each time difference to have the two times used to calculate it to identify it. I would like the result to be the following:
location sensor V1 datehour1 datehour2
out 16 -28 hours 2016-03-24 05:00:00 2016-03-25 09:00:00
1 16 -25 hours 2016-03-24 06:00:00 2016-03-25 07:00:00
2 16 -26 hours 2016-03-24 07:00:00 2016-03-25 09:00:00
3 16 -22 hours 2016-03-24 09:00:00 2016-03-25 07:00:00
4 16 -23 hours 2016-03-24 09:00:00 2016-03-25 08:00:00
4 1 -27 hours 2016-03-24 15:00:00 2016-03-25 18:00:00
3 1 -20 hours 2016-03-24 16:00:00 2016-03-25 12:00:00
2 1 -25 hours 2016-03-24 18:00:00 2016-03-25 19:00:00
1 1 -25 hours 2016-03-24 20:00:00 2016-03-25 21:00:00
out 1 -20 hours 2016-03-24 20:00:00 2016-03-25 16:00:00
Okay, so I'm not an expert at data.table solutions by any means, so I'm not quite sure how the grouping statement resolves the number of values down to 10.
That said, I think the answer to your question (if you haven't already solved it another way) lies in the difftime(datehour[-length(datehour)], datehour[-1], unit = "hour") chunk of code: not that it calculates the difference incorrectly, but that it prevents the grouping statement from resolving to the expected number of groups.
I tried separating the grouping from the time difference calculation, and was able to get to your expected output (obviously some formatting required):
final.time.df <- setDT(df)[order(datehour, location, sensor), .(datehour1 = datehour[1], datehour2 = datehour[2]), .(location, sensor)]
final.time.df$diff = final.time.df$datehour1 - final.time.df$datehour2
If I've missed the point, feel free to let me know and I'll delete the answer! I know it's not a particularly insightful answer, but it looks like this might do it, and I'm stuck on a problem myself right now, and wanted to try to help.
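The core idea of that fix (sort, group by location and sensor, then diff the first two timestamps in each group) can be sketched outside data.table as well; here is a plain-Python illustration on a toy subset of the data:

```python
from datetime import datetime
from collections import defaultdict

# Toy rows: (datehour, location, sensor), a subset of the example data.
rows = [
    ("2016-03-24 06", "1", 16), ("2016-03-25 07", "1", 16),
    ("2016-03-24 20", "1", 1),  ("2016-03-25 21", "1", 1),
]

# Sort by datehour, then collect timestamps per (location, sensor) group.
groups = defaultdict(list)
for dh, loc, sen in sorted(rows):
    groups[(loc, sen)].append(datetime.strptime(dh, "%Y-%m-%d %H"))

# Keep both timestamps alongside their difference in hours, so each
# difference carries the two times used to calculate it.
result = {k: (v[0], v[1], (v[0] - v[1]).total_seconds() / 3600)
          for k, v in groups.items() if len(v) >= 2}
print(result[("1", 16)][2])  # -25.0
```

This reproduces the -25 hours rows of the expected output while keeping datehour1 and datehour2 attached to each difference.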

Alter the data frame to compute hourly values from the daily values in r

I am working on analyzing a delivery unit. I have the total number of packets delivered by a unit crew in a day, aggregated by session.
The first column is the Date, the second is the session start time (Session_start), the third is the session end time (Session_end), the fourth is the total deliveries in that session (Packets), and the fifth is the session length (Diff_Time). I added Session_Type to distinguish between sessions.
Data Frame: df1
Date Session_start Session_end Packets Diff_Time Session_Type
7/01/2016 00:00:00 03:00:00 6000 3 NIGHT
7/01/2016 04:00:00 06:00:00 5000 2 MORNING
Now I would like to convert above data which is aggregated according to session into hourly data as follows:
Data Frame: df2
Date Session_start Session_end Packets Diff_Time Session_Type
7/01/2016 00:00:00 01:00:00 2000(6000/3) 1 NIGHT
7/01/2016 01:00:00 02:00:00 4000(cumsum) 1 NIGHT
7/01/2016 02:00:00 03:00:00 6000 1 NIGHT
7/01/2016 03:00:00 04:00:00 6000 1 NIGHT
7/01/2016 04:00:00 05:00:00 8500 1 MORNING
7/01/2016 05:00:00 06:00:00 11000 1 MORNING
7/01/2016 06:00:00 07:00:00 11000 1 MORNING
.
.
7/01/2016 23:00:00 24:00:00 11000 1 MORNING
Total packets in the day = 11000 (6000 + 5000), which should be the cumulative sum at the end of the day in the reformed data frame df2.
Could anyone point me in the right direction to take this forward?
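One direction to take this, sketched in plain Python (the even spread of packets over the session length and the constant fill between sessions follow the example above; the names are illustrative):

```python
# Sessions: (start_hour, end_hour, packets, session_type)
sessions = [(0, 3, 6000, "NIGHT"), (4, 6, 5000, "MORNING")]

cumulative = 0.0
rate = 0.0
session_type = None
hourly = []
for hour in range(24):
    # When a session starts, spread its packets evenly across its length;
    # when it ends, deliveries stop and the cumulative sum stays flat.
    for start, end, packets, stype in sessions:
        if hour == start:
            rate = packets / (end - start)
            session_type = stype
        if hour == end:
            rate = 0.0
    cumulative += rate
    hourly.append((hour, cumulative, session_type))

print(hourly[0])   # (0, 2000.0, 'NIGHT')
print(hourly[23])  # (23, 11000.0, 'MORNING')
```

This reproduces the df2 pattern: 2000, 4000, 6000 through the NIGHT session, a flat 6000 until the MORNING session starts, then 8500 and 11000, which remains the total for the rest of the day.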
