I can't figure out how to handle timezones efficiently with a DynamoDB get operation.
Say my partition key (pk) and range key (sk) are as below, where each record is the user's aggregated data for a whole day, e.g. 2022-09-13 00:00:00 to 2022-09-13 23:59:59.
{
pk: 'userId',
sk: '2022-09-13'
}
What is the best approach to store the dates as UTC and fetch based on the client's timezone?
Currently my program saves the range key (sk) as a UTC date. But if the client's local timezone is +08:00 and they perform a save operation at 2022-09-14 00:01:00 local time, it is still stored under 2022-09-13.
Just store a timestamp (date + time) in UTC instead of only the date.
If the client needs data based on their local timezone, calculate the timestamps of the day's start and end in that timezone, shift them to UTC, and then do a BETWEEN query using the UTC start and end, as sketched below.
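For illustration, a minimal Python (boto3) sketch of that approach; the table name user_daily_data, the sk string format, and the fetch_day helper are assumptions for this sketch, not part of the original question:

from datetime import datetime, time, timedelta, timezone
import boto3

def fetch_day(user_id, local_day, utc_offset_hours):
    # Assumed layout: pk = userId, sk = UTC timestamp string "YYYY-MM-DD HH:MM:SS"
    tz = timezone(timedelta(hours=utc_offset_hours))
    start_local = datetime.combine(local_day, time.min, tzinfo=tz)
    end_local = start_local + timedelta(days=1) - timedelta(seconds=1)  # BETWEEN is inclusive
    # Shift the local day boundaries to UTC before querying
    start_utc = start_local.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    end_utc = end_local.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    table = boto3.resource("dynamodb").Table("user_daily_data")
    return table.query(
        KeyConditionExpression="pk = :pk AND sk BETWEEN :start AND :end",
        ExpressionAttributeValues={":pk": user_id, ":start": start_utc, ":end": end_utc},
    )["Items"]

For a client at +08:00 and a local day of 2022-09-14, this would query sk BETWEEN 2022-09-13 16:00:00 AND 2022-09-14 15:59:59 in UTC.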
I have an external data source automatically inserting data into my BigQuery table. This data source includes a timestamp field which does not have a timezone attached to it; however, I know this timestamp is in the Europe/Amsterdam timezone.
The problem is that when this timestamp is inserted into BigQuery, BigQuery automatically labels it as UTC, which it is not. In my specific case I want to convert this timestamp to UTC. However, because BigQuery already treats the timestamp as UTC (while it is actually Europe/Amsterdam), I cannot easily convert it to the actual UTC value.
Is there any way to convert this timestamp, which BigQuery thinks is already UTC, to the actual UTC value within a query? I can't just apply a -02:00 offset, because Daylight Saving Time changes this offset between 2 hours and 1 hour depending on the time of year.
Any help would be appreciated, I have been kind of stuck on this :)
An example of the timestamp in BigQuery would be 2022-09-30 01:23:45 UTC
There is probably a better way, but this should work:
with
input as (select timestamp("2022-09-30 01:23:45 UTC") as ts)
select
ts,
timestamp(replace(cast(ts as string), '+00', " Europe/Amsterdam")) updated_ts
from input
ts                        updated_ts
2022-09-30 01:23:45 UTC   2022-09-29 23:23:45 UTC
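To sanity-check the DST handling outside BigQuery, here is a minimal Python sketch of the same idea (purely illustrative, not part of the original answer): attach the real Europe/Amsterdam timezone to the wall-clock value and let the tz database pick the +01:00/+02:00 offset.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# The wall-clock value that was mislabelled as UTC
naive = datetime(2022, 9, 30, 1, 23, 45)
local = naive.replace(tzinfo=ZoneInfo("Europe/Amsterdam"))  # really Amsterdam time
print(local.astimezone(timezone.utc))  # 2022-09-29 23:23:45+00:00, matching updated_ts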
I am storing a timestamp column in a BigQuery table. The source timestamps are in CET, but BigQuery stores the value as UTC by default.
How can I store it as a CET timestamp? Otherwise, if I do a timestamp conversion while fetching, it manipulates the timestamp value, which I do not want.
Or else I want to convert the value to UTC while fetching, but for that I have to make BigQuery understand that the original column is in CET. How can I resolve this?
In the teradata documentation it says:
"Suppose an installation is in the PST time zone and it is New Years Eve, 1998-12-31 20:30 local time.
The system TIMESTAMP WITH TIME ZONE for the indicated time is ' 1999-01-01 04:30-08:00 ' internally."
This does not mesh with my understanding. I figure it ought to be '1999-01-01 04:30+00:00' internally because it should be stored in UTC.
Or, it could be stored as the local time with a -8 offset, but this example seems to mix the two. Perhaps I am misunderstanding the text?
Not sure if this is an answer, but it's too long for a comment.
That "internal" storage part is very misleading. We don't care how Teradata stores anything internally.
I find this easier to look at using BTEQ, since SQL Assistant doesn't show timezones, at least by default. So, assuming you've logged into BTEQ...
--set session timezone to pst (GMT - 8)
SET TIME ZONE INTERVAL -'08:00' HOUR TO MINUTE ;
create volatile table vt_foo (
ts_w_zone timestamp(0) with time zone,
ts_wo_zone timestamp) on commit preserve rows;
insert into vt_foo
select
cast('1998-12-31 20:30:00' as timestamp(0)),
cast('1998-12-31 20:30:00' as timestamp);
select * from vt_foo;
At this point the two values (with and without time zone) match:
ts_w_zone ts_wo_zone
------------------------- --------------------------
1998-12-31 20:30:00-08:00 1998-12-31 20:30:00.000000
Now let's change the timezone for your session to something else, and look at what we get.
SET TIME ZONE INTERVAL -'03:00' HOUR TO MINUTE ;
select * from vt_foo;
ts_w_zone ts_wo_zone
------------------------- --------------------------
1998-12-31 20:30:00-08:00 1999-01-01 01:30:00.000000
The timestamp with time zone is still the same. The timestamp without a time zone is automatically converted to your session timezone when displayed, which in this example is GMT-3.
EDIT:
Technically, Teradata is actually storing the time with timezone as GMT (1999-01-01 04:30:00) together with the timezone offset (-8); that's where the documentation gets the 1999-01-01 04:30-08:00 value from. But that is not how it displays it.
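To see that the two notations describe the same instant, here is a small Python sketch (plain datetime arithmetic, nothing Teradata-specific):

from datetime import datetime, timedelta, timezone

# The documentation's value: PST wall-clock time carrying a -08:00 offset
local = datetime(1998, 12, 31, 20, 30, tzinfo=timezone(timedelta(hours=-8)))
print(local.astimezone(timezone.utc))  # 1999-01-01 04:30:00+00:00, the GMT value above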
An API returns a UNIX timestamp in UTC, and I would like to know whether this timestamp was more than x seconds ago. As expected, os.time() - x > timestamp works fine in UTC, but blows up in other timezones.
Unfortunately I can't find a good way to solve this in Lua.
os.date helpfully has the ! prefix (e.g. os.date("!%H:%M:%S")) to return the time in UTC, but despite the documentation stating it supports all strftime options, it does not seem to support the %s option. I have heard people mention, for a similar issue, that this is caused by Lua compile-time options, but changing these is not possible as the interpreter is provided by the user.
You can use
os.time(os.date("!*t"))
to get the current UNIX epoch.
Ok, so you want the UTC time. Keep in mind that os.time actually knows nothing about timezones, so for example:
os.time(os.date("!*t"))
os.date("!*t") will get the UTC time and populate a table struct.
os.time(...) will then convert that table struct to a unix timestamp, treating it as local time according to the current timezone.
So you actually get UNIX_TIME - TIMEZONE_OFFSET. If you are in GMT+5, you get a timestamp that is 5 hours behind the real epoch value.
The correct way to do time conversion in Lua is:
os.time() -- get current epoch value
os.time{ ... } -- get epoch value for local date/time values
os.date("*t"),os.date("%format") -- get your local date/time
os.date("!*t") or os.date("!%format") -- get UTC date/time
os.date("*t", timestamp),os.date("%format", timestamp) -- get your local date/time for given timestamp
os.date("!*t", timestamp) or os.date("!%format", timestamp) -- get UTC date/time for given timestamp
Kudos to Mons at https://gist.github.com/ichramm/5674287.
If you really need to convert an arbitrary UTC date to a timestamp, there's a good description of how to do this in this question: Convert a string date to a timestamp
os.time() gives you the unix timestamp. The timestamp is seconds since 00:00:00 UTC on 1 January 1970, so it's the same across timezones.
For example, run this code:
print('timestamp', os.time())
print('local hour', os.date("*t").hour)
print('utc hour', os.date("!*t").hour)
Presumably, your local and UTC hour are different. Also run it in an online REPL: the server's local and UTC hour are the same, yet your timestamp and the server's timestamp are about the same.
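If it helps to see the same point outside Lua, here is a minimal Python sketch (the helper name is made up for illustration): because epoch seconds are timezone-independent, the "more than x seconds ago" check from the original question needs no timezone conversion at all.

import time

def older_than(utc_unix_timestamp, x_seconds):
    # time.time() is seconds since the epoch (UTC-based), so this is the
    # direct analogue of os.time() - x > timestamp and works in any timezone.
    return time.time() - utc_unix_timestamp > x_seconds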
I have a SQLite table in which one of the columns is:
timestamp long DEFAULT CURRENT_TIMESTAMP NOT NULL
It stores the UTC date and time in the database, but the time portion is not important to me. How can I store it with the time set to 00:00:00 by default?
You should use:
timestamp long DEFAULT CURRENT_DATE NOT NULL
By the way, do you know that this format is stored as text (as you may see here)? In many cases, especially when you have reason to index this field, it's more efficient to use INTEGER and take care to insert/update the low-level numeric date/time representation yourself.
The built-in date functions allow you to modify a date value in the way you want:
timestamp text DEFAULT (date('now', 'start of day')) NOT NULL
If you want to store the value as a number, you can convert it appropriately:
timestamp long DEFAULT (strftime('%s', 'now', 'start of day')) NOT NULL
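To check what actually gets stored, here is a minimal Python sqlite3 sketch using the numeric variant (the events table name is just for illustration):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE events ("
    "  id INTEGER PRIMARY KEY,"
    "  timestamp long DEFAULT (strftime('%s', 'now', 'start of day')) NOT NULL)"
)
con.execute("INSERT INTO events (id) VALUES (1)")
print(con.execute("SELECT timestamp FROM events").fetchone()[0])
# Prints midnight (UTC) of the current day as epoch seconds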