I notice that the time range for Application Insights Search functionality is using my local timezone. Is there a way to set the time range to use UTC instead?
It seems the Time range picker does not support showing UTC time.
Workaround:
Select your filter conditions first, then go to Analytics. You will find that the time in the generated query is in UTC, for example:
requests
| extend itemType = iif(itemType == 'request', itemType, "")
| where (itemType == 'request' and (timestamp >= datetime(2018-12-06T06:36:00.000Z) and timestamp <= datetime(2018-12-07T06:36:00.000Z)))
| top 101 by timestamp desc
You can then modify the time and select Run.
In the result pane, select CHART and you will see the results plotted with UTC times.
As of now, Application Insights allows users to select whether to search by local time zone or UTC/GMT.
As part of setting a GCP project-level time zone of AEST, I have run the following command:
ALTER PROJECT `gcp-abc-def`
SET OPTIONS ( `region-us.default_time_zone` = 'Australia/Sydney')
After doing so, I see that current_datetime() has changed to AEST, whereas timestamp values remain in UTC, as can be seen below.
Can someone help with how this can be remedied? What other settings need to be changed?
The reason "current_datetime() is getting changed to AEST whereas timestamp remains UTC" is that current_timestamp() returns a value of the TIMESTAMP type. A timestamp does not have a time zone; it represents the same instant in time globally. CURRENT_TIMESTAMP() is displayed explicitly in UTC, with a zero time-zone offset. When you convert a timestamp to another type that is not tied to a particular time zone, you can specify the time zone for the conversion. You can use format_timestamp to convert the timestamp into your zone-specific time.
Example:
ALTER PROJECT `gcp-abc-def`
SET OPTIONS ( `region-us.default_time_zone` = 'Australia/Sydney');
SELECT
  current_datetime() AS cdt,
  current_timestamp() AS cts,
  format_timestamp('%c', current_timestamp(), 'Australia/Sydney') AS cts2;
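If the goal is simply to display stored timestamps in Sydney time, here is a minimal sketch (my_dataset.events and event_ts are hypothetical names):
-- event_ts is a hypothetical TIMESTAMP column
SELECT
  event_ts,                                               -- an absolute instant, displayed in UTC
  DATETIME(event_ts, 'Australia/Sydney') AS event_dt_syd, -- wall-clock DATETIME in Sydney time
  FORMAT_TIMESTAMP('%F %T %Z', event_ts, 'Australia/Sydney') AS event_str_syd
FROM `gcp-abc-def.my_dataset.events`;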
Using Kusto (application insights, log analytics, azure monitor) how can I compare data from my current day (eg: Monday), against data from all previous similar days?
a.k.a., how do I compare the current Monday to previous Mondays only?
requests
| where timestamp > ago(90d)
| where dayofweek(timestamp) == dayofweek(now())
| summarize count() by bin(todatetime(strcat("20210101 ", format_datetime(timestamp, "HH:mm:ss"))), 5m), tostring(week_of_year(timestamp))
| render timechart
The trick is to group the timestamps onto one common date, keeping only the time portion, AND to group by week_of_year (which has to be cast to a string, otherwise it will be interpreted as just another numeric value instead of as a separate series).
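Here is the same query restated with named columns, as a sketch that makes the trick more explicit (functionally equivalent to the one above):
requests
| where timestamp > ago(90d)
| where dayofweek(timestamp) == dayofweek(now())
// pin every point to one dummy date (2021-01-01), keeping only the time of day
| extend timeOfDay = bin(todatetime(strcat("20210101 ", format_datetime(timestamp, "HH:mm:ss"))), 5m)
// the week label must be a string so each week is rendered as its own series
| summarize requestCount = count() by timeOfDay, week = strcat("week_", week_of_year(timestamp))
| render timechart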
In the teradata documentation it says:
"Suppose an installation is in the PST time zone and it is New Years Eve, 1998-12-31 20:30 local time.
The system TIMESTAMP WITH TIME ZONE for the indicated time is ' 1999-01-01 04:30-08:00 ' internally."
This does not mesh with my understanding. I figure it ought to be '1999-01-01 04:30+00:00' internally because it should be stored in UTC.
Or it could be stored as the local time with a -08:00 offset, but this example seems to mix the two. Perhaps I am misunderstanding the text?
Not sure if this is an answer, but it's too long for a comment.
That "internal" storage part is very misleading. We don't care how Teradata stores anything internally.
I find this easier to look at using BTEQ, since SQL Assistant doesn't show timezones, at least by default. So, assuming you've logged into BTEQ...
-- set session time zone to PST (GMT-8)
SET TIME ZONE INTERVAL -'08:00' HOUR TO MINUTE ;
create volatile table vt_foo (
ts_w_zone timestamp(0) with time zone,
ts_wo_zone timestamp) on commit preserve rows;
insert into vt_foo
select
cast('1998-12-31 20:30:00' as timestamp(0)),
cast('1998-12-31 20:30:00' as timestamp);
select * from vt_foo;
Currently the two values (with and without tz) will match.
ts_w_zone ts_wo_zone
------------------------- --------------------------
1998-12-31 20:30:00-08:00 1998-12-31 20:30:00.000000
Now let's change the timezone for your session to something else, and look at what we get.
SET TIME ZONE INTERVAL -'03:00' HOUR TO MINUTE ;
select * from vt_foo;
ts_w_zone ts_wo_zone
------------------------- --------------------------
1998-12-31 20:30:00-08:00 1999-01-01 01:30:00.000000
The timestamp with time zone is still the same. The timestamp without a time zone has been automatically converted to your session time zone, which in this example is GMT-3.
EDIT:
Technically, Teradata actually stores the time with time zone as GMT (1999-01-01 04:30:00) together with the time zone offset (-8); that's where the documentation gets the 1999-01-01 04:30-08:00 value from. But that is not how it displays it.
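To see that conversion explicitly, here is a sketch continuing the vt_foo session above: casting the zoned value to a plain TIMESTAMP converts the stored instant to the current session time zone (still GMT-3 at this point).
-- stripping the zone converts the stored instant to the session time zone
SELECT ts_w_zone,
       CAST(ts_w_zone AS TIMESTAMP(0)) AS ts_in_session_tz
FROM vt_foo;
-- expected output, along these lines:
-- ts_w_zone                  ts_in_session_tz
-- -------------------------  -------------------
-- 1998-12-31 20:30:00-08:00  1999-01-01 01:30:00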
I am using OData V4 in my ASP.NET Web API. When I make this request using OData to filter by datetime, the date that Entity Framework sends to my DB is 8 hours earlier than the date I provide in the URL (my computer's local time is set to Pacific Standard Time, UTC-8:00).
...Documents?$select=Id,ModifiedDate&$filter=ModifiedDate ge 2018-02-10T00:00:00Z&$orderby=ModifiedDate
Here is the log Entity Framework provides for this request.
Opened connection at 2/9/2018 5:16:40 PM -08:00
SELECT TOP (101)
[Project1].[DocumentId] AS [DocumentId],
[Project1].[C2] AS [C1],
[Project1].[C3] AS [C2],
[Project1].[C4] AS [C3],
[Project1].[C5] AS [C4],
[Project1].[C1] AS [C5]
FROM ( SELECT
[Extent1].[DocumentId] AS [DocumentId],
[Extent1].[ModifiedDate] AS [ModifiedDate],
CAST( [Extent1].[DocumentId] AS bigint) AS [C1],
N'8da97389-55d6-4534-b683-2e767485606a' AS [C2],
N'ModifiedDate' AS [C3],
CAST( [Extent1].[ModifiedDate] AS datetime2) AS [C4],
N'Id' AS [C5]
FROM [dbo].[EstimateDocument] AS [Extent1]
WHERE [Extent1].[ModifiedDate] >= convert(datetime2, '2018-02-09 16:00:00.0000000', 121)
) AS [Project1]
ORDER BY [Project1].[ModifiedDate] ASC, [Project1].[C1] ASC
-- Executing at 2/9/2018 5:16:40 PM -08:00
-- Completed in 435 ms with result: SqlDataReader
It seems that OData is subtracting 8 hours from the time I said to filter by. I thought the Z at the end of the date I specified in the URL stands for UTC. Why is OData changing this date before making the request? The dates stored in the DB are also in UTC. Why would it think any conversion needs to happen?
I have noticed that if I change my computer's local time to UTC (my service runs locally), then it does not try to make any conversions. But I cannot expect the service to run on a machine whose local time is set to UTC.
On your API, where you define UseMvc, add SetTimeZoneInfo(TimeZoneInfo.Utc).
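Here is a minimal sketch of where that call goes, assuming ASP.NET Core with the Microsoft.AspNet.OData package (the "odata" route names and GetEdmModel() are placeholders):
// In Startup.Configure; SetTimeZoneInfo tells OData to interpret
// date/time literals in the request as UTC instead of the server's local zone.
app.UseMvc(routeBuilder =>
{
    routeBuilder.SetTimeZoneInfo(TimeZoneInfo.Utc);
    routeBuilder.MapODataServiceRoute("odata", "odata", GetEdmModel());
});
If you are on classic ASP.NET Web API instead, the equivalent, as far as I know, is calling config.SetTimeZoneInfo(TimeZoneInfo.Utc) on the HttpConfiguration.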
I have a model called "ticket" that has a start time and an end time. I would like to be able to sort/divide tickets by time on the front end. There will be 3 categories: past (now > end_time), current (start_time < now < end_time), and future (now < start_time), where now represents the current UTC time. I am thinking of having a "state" field in the ticket model which will contain the value "past", "current", or "future". I need a way to update the state of tickets based on time.
Would a cron job running every minute be an appropriate solution for this? It will iterate through all entries and do a check if the state should be updated and then perform the update if necessary. Is this solution scalable? Is there a better solution?
If it is relevant: I am thinking of using Firebase for the database and Google App Engine for the cron job.
In most cases it is the wrong approach to save data in a DB when it gets invalidated by time but can be calculated from the same record's data. There is also a risk of inconsistencies if the dates get updated but the cron job hasn't run yet, or when the dates are close to now.
I would suggest always calculating that info in your DB queries using the date fields. In MySQL it would look similar to this:
SELECT
  CASE
    WHEN end_time < NOW() THEN 'past'
    WHEN start_time < NOW() THEN 'current'
    ELSE 'future'
  END AS state
FROM ticket
Alternatively you can just fetch the start_time and end_time fields and handle the state in your application. If you want to query the entries by status, then you can also use the date columns in the filtering clauses.
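For example, here is a sketch of fetching only the "current" tickets directly from the date columns (id is a hypothetical primary key):
-- no stored state column needed; the dates define the state
SELECT id, start_time, end_time
FROM ticket
WHERE start_time <= NOW()
  AND end_time >= NOW();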
This drops the need to have a cron job update the status.