I have a problem where I need to handle both changes in a client's timezone and day boundaries within a single timezone while saving and then retrieving records from the database. The two solutions I have come up with so far are: use UTC on the server, or send the client's timezone each time a record is saved and use that stored timezone value when retrieving the record. The former handles a client whose timezone changes, but it breaks when the client wants to query all the records of a particular day, because the day does not roll over at the same moment for the server and the client. The latter handles the day-boundary problem, but it does not handle a change in the same client's timezone. What other solution is there that can handle both problems?
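One common pattern that covers both cases (a sketch, not something from the question): store only UTC timestamps and never persist a timezone at all; the client sends its current timezone ID with each query, and the server translates that zone's day boundaries into a UTC range. The C# below assumes .NET 6+ (so FindSystemTimeZoneById accepts IANA IDs); the type and method names are illustrative, since the question does not name a stack.

```
using System;

public static class DayRange
{
    // Translate a client's local calendar day into a [startUtc, endUtc) range
    // that can be used against a UTC-stored timestamp column.
    // "timeZoneId" is whatever zone the client reports *at query time*,
    // e.g. "Asia/Kolkata".
    public static (DateTime StartUtc, DateTime EndUtc) ForLocalDay(DateOnly day, string timeZoneId)
    {
        var zone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);

        // Local midnight at the start of the requested day and of the next day.
        var localStart = day.ToDateTime(TimeOnly.MinValue, DateTimeKind.Unspecified);
        var localEnd = localStart.AddDays(1);

        // Map both boundaries to UTC; DST shifts inside the day fall out naturally.
        // (A midnight skipped by DST would throw here and need special handling.)
        return (TimeZoneInfo.ConvertTimeToUtc(localStart, zone),
                TimeZoneInfo.ConvertTimeToUtc(localEnd, zone));
    }
}
```

The query then becomes WHERE CreatedUtc >= @startUtc AND CreatedUtc < @endUtc, and a client that moves to a different timezone simply starts sending a different zone ID; nothing that was stored has to change.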
My ASP.NET Core 2.0 Web API application stores DateTime values in UTC. I have the user's TimeZone information available in every request through user preference settings (stored in cookies).
The problem is that I need to return DateTime values to the user in their TimeZone instead of UTC. Are there any built-in utilities to do so?
Currently I follow this flow:
The user sends a request with his preferences -> I retrieve his TimeZone
Process whatever request logic
In an OnActionExecuting filter, convert all DateTime values in the response to the appropriate TimeZone.
This approach seems very clumsy. I hope there is a better way to address this kind of issue. Please suggest one.
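For reference, the conversion step itself, wherever it is hooked in, usually comes down to TimeZoneInfo. A minimal sketch; the helper name and the idea of reading the zone ID from the preference cookie are assumptions, not anything built in:

```
using System;

public static class UserTime
{
    // Convert a UTC value from the database into the user's preferred zone.
    // "timeZoneId" would come from the user's preference cookie; it may be a
    // Windows id ("W. Europe Standard Time") or, on .NET Core on Linux, an
    // IANA id ("Europe/Berlin").
    public static DateTime ToUserLocal(DateTime utcValue, string timeZoneId)
    {
        var zone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        var utc = DateTime.SpecifyKind(utcValue, DateTimeKind.Utc);
        return TimeZoneInfo.ConvertTimeFromUtc(utc, zone);
    }
}
```

A filter or a custom JSON converter can then call something like this for each outgoing DateTime; either way, the conversion logic stays in one place.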
I am migrating the pushing of e-commerce transactions from a controller to a cron job that will run every minute.
However, I cannot seem to find a Measurement Protocol parameter that would let me specify the exact time at which the transaction occurred.
Does anyone have any ideas? Is this even necessary, given that the maximum delay will be one minute?
You cannot specify a timestamp for the transaction (you can add a custom dimension with a timestamp, but GA will happily ignore it for session aggregation).
What you can do is add an offset in milliseconds between the actual transaction time (or the time of any other hit) and the time you finally send the hit to Google. This is called "queue time"; I think it was originally intended for native/web apps that might be offline for some time.
For just a one-minute delay I probably would not bother. However, it might be useful for cases where your cron job fails for some reason and you want to pick up and send the transactions later.
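For illustration, a hit sent this way carries the offset in the Measurement Protocol's qt (queue time) parameter, in milliseconds. A rough C# sketch against the Universal Analytics /collect endpoint; the property ID, client ID and amounts are placeholders:

```
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class MeasurementProtocol
{
    private static readonly HttpClient Http = new HttpClient();

    // Send a transaction hit with "queue time" (qt) set to the delay, in
    // milliseconds, between when the transaction actually happened and now.
    public static async Task SendTransactionAsync(string transactionId, decimal revenue, DateTime occurredUtc)
    {
        var queueTimeMs = (long)(DateTime.UtcNow - occurredUtc).TotalMilliseconds;

        var payload = new Dictionary<string, string>
        {
            ["v"] = "1",                          // protocol version
            ["tid"] = "UA-XXXXX-Y",               // your property id (placeholder)
            ["cid"] = Guid.NewGuid().ToString(),  // client id (reuse the real one if you have it)
            ["t"] = "transaction",                // hit type
            ["ti"] = transactionId,               // transaction id
            ["tr"] = revenue.ToString(System.Globalization.CultureInfo.InvariantCulture),
            ["qt"] = queueTimeMs.ToString()       // queue time: how long ago the hit really happened
        };

        using var content = new FormUrlEncodedContent(payload);
        await Http.PostAsync("https://www.google-analytics.com/collect", content);
    }
}
```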
My server machine's timezone is HST. When I try to get the time via an HTTP request from JavaScript, it comes back in UTC. Is there any way to get the server's timezone?
A few things:
Never rely on the server's time zone being set to anything in particular. It can easily be changed, or you may simply want to move your server somewhere else; neither should affect your data.
The HTTP Date header will only give you the time in UTC/GMT. This is part of the HTTP specification; see RFC 7231, sections 7.1.1.1 and 7.1.1.2.
The client knows nothing about the server's time zone, unless you specifically go out of your way to send it yourself. Due to the previous two points, this should not be required anyway, or should be used in very rare circumstances.
The server knows nothing about the client time zone either. If you want to display the value of the server's clock in the client's local time, you have two options:
Send the UTC time to the client, and use JavaScript to convert from UTC to Local time. The JavaScript Date object is sufficient for this, but you may also find libraries like Moment.js and others useful.
Determine the client's local time zone by some other means, either by asking the user, or by guessing. See this answer for more detail. Once you have the time zone ID (such as America/Los_Angeles, etc.) use this on the server-side. This approach is only useful if you have a lot of date/time manipulation to do on the server-side. If you are simply converting to local time for display, prefer option 1.
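To make this concrete: the question does not name a server-side stack, so the C# below is purely an assumption, but the two options might look roughly like this on the server.

```
using System;

public static class ClockApi
{
    // Option 1: hand the client an unambiguous UTC value and let the browser
    // render it locally. ISO 8601 with a trailing "Z" (the "o" format for a
    // UTC DateTime) is parsed natively by JavaScript's Date constructor.
    public static string CurrentUtcIso()
    {
        return DateTime.UtcNow.ToString("o");   // e.g. "2024-05-01T17:03:21.1234567Z"
    }

    // Option 2: the client told us its zone id (e.g. "America/Los_Angeles")
    // by some other means, and we convert on the server. On .NET 6+ (or on
    // Linux) FindSystemTimeZoneById accepts IANA ids directly.
    public static DateTime CurrentTimeIn(string clientZoneId)
    {
        var zone = TimeZoneInfo.FindSystemTimeZoneById(clientZoneId);
        return TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, zone);
    }
}
```

For option 1 the browser simply parses the string with its Date object and displays it; the server never needs to know the client's zone at all.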
This question already has answers here: Daylight saving time and time zone best practices.
So my friend says that when he configures servers he ALWAYS sets their timezone as UTC. He says that this helps him in making sure that he does not have timezone issues when working with multiple servers. His code picks up the datetime from his server setting, naturally.
My question was: if I have a machine that is located in, say, the US Eastern time zone (EDT) and I set it up saying its timezone is UTC, then won't the actual time on that server be incorrect?
My friend says: if that server gets an order at 7 in the morning local time, which is 7:00 AM EDT, then since the server is configured as UTC (which is 4 hours ahead of EDT), the order will be saved as having been placed at 11:00 AM UTC, which in turn means it can be converted to any time zone as required. (Converting 11:00 AM UTC back to EDT gives 7:00 AM EDT, which is correct.)
I always maintained that servers should be configured to the correct time zone. I guess I was wrong. Is it OK to have servers set up like this? Are there any drawbacks if the server's time setting is not locale-specific?
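The friend's arithmetic is easy to check in code; a tiny C# sketch (the date and the UTC-4 offset for EDT are just example values):

```
using System;

class RoundTripDemo
{
    static void Main()
    {
        // An order placed at 7:00 AM Eastern Daylight Time (UTC-4).
        var placed = new DateTimeOffset(2024, 6, 3, 7, 0, 0, TimeSpan.FromHours(-4));

        // Stored on a UTC-configured server: the same instant at offset +00:00,
        // i.e. 11:00 AM.
        var storedUtc = placed.ToUniversalTime();
        Console.WriteLine(storedUtc);

        // Converted back for display in any zone you like; Eastern gives
        // 7:00 AM again.
        Console.WriteLine(storedUtc.ToOffset(TimeSpan.FromHours(-4)));
    }
}
```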
My impression is that the only source of weirdness is a machine that is not physically located in a place which uses the timezone configured in its time settings. But this is only 'weirdness' and should not have any drawback (as long as the clock is set accurately).
Let's consider date objects such as Date or Calendar in Java. Their implementations actually store time as a Unix timestamp (a long), and Unix timestamps are relative to the UTC epoch ((milli)seconds elapsed since 1970-01-01 00:00:00 UTC). As long as timezones are correctly stored/converted between client, server and database (and vice versa), everything should be fine. How the time gets stored in different parts of the architecture is a matter of convention.
In fact, it may even help to choose UTC as the only timezone you deal with server-side (both on the machine and in the code), e.g. for debugging and log investigation: log timestamps will be directly comparable to the times logged by your business logic or DB layer.
Also see this question.
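The same epoch-based reasoning can be seen outside Java as well; a quick illustration in C# (chosen purely for convenience, not part of the original answer): the millisecond count since the epoch is identical no matter which offset the instant is viewed in.

```
using System;

class EpochDemo
{
    static void Main()
    {
        // The same instant, viewed as UTC and as the machine's local offset.
        var nowUtc = DateTimeOffset.UtcNow;
        var nowLocal = nowUtc.ToLocalTime();

        // Both views carry the identical count of milliseconds since
        // 1970-01-01T00:00:00Z; only the offset used for display differs.
        Console.WriteLine(nowUtc.ToUnixTimeMilliseconds());
        Console.WriteLine(nowLocal.ToUnixTimeMilliseconds());  // same number
    }
}
```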
I have a form with a list that shows information from a database. I want the list to update at run time (or in almost real time) every time something changes in the database. These are the three ways I can think of to accomplish this:
Set up a timer on the client to check every few seconds: I know how to do this now, but it would involve making and closing a new connection to the database hundreds of times an hour, regardless of whether there was any change
Build something sort of like a TCP/IP chat server, and every time a program updates the database it would also send a message to the TCP/IP server, which in turn would send a message to the client's form: I have no idea how to do this right now
Create a web service that returns the date and time of the last change to the table, and have the client compare that to the last time it updated: I could figure out how to build a web service, but I don't know how to do this without making a connection to the database anyway
The second option doesn't seem like it would be very reliable, and the first seems like it would consume more resources than necessary. Is there some way to tell the client every time there is a change in the database without making a connection every few seconds, or is it not that big of a deal to make that many connections to a database?
I would imagine connection pooling would make this a non-issue. Depending on your database, it probably won't even notice it.
Are you making the update to the database? Or is the update happening from an external source?
Generally, hundreds of updates per hour won't even bother the DB. Even Access, which is pretty slow, won't cause a performance issue.
Here's a rough idea if you really want to optimize it and you're doing the data updates. Store an application variable on the server side called, say, LastUpdateTime. When you make updates to the database, you can update the LastUpdateTime variable with the current time. Since LastUpdateTime is a very lightweight object in server memory, your clients can technically request the last update time hundreds if not thousands of times per second without any round trip to the database. Based on the last time the client retrieved new information vs. the last update time on the server, you can then go fetch the updated info.
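A rough sketch of that idea in C#, assuming the data updates go through your own server code; the names (ChangeTracker, LastUpdateUtc, ListRefresher) are made up for illustration:

```
using System;

// Shared, in-memory "last update" marker. Because it lives in server memory,
// clients can poll it very cheaply; the database is only hit when it changes.
public static class ChangeTracker
{
    private static long _lastUpdateTicks = DateTime.UtcNow.Ticks;

    // Call this from every code path that writes to the table.
    public static void MarkUpdated() =>
        System.Threading.Interlocked.Exchange(ref _lastUpdateTicks, DateTime.UtcNow.Ticks);

    // Expose this through a lightweight endpoint for clients to poll.
    public static DateTime LastUpdateUtc =>
        new DateTime(System.Threading.Interlocked.Read(ref _lastUpdateTicks), DateTimeKind.Utc);
}

// Client side: only go back to the database when the marker has moved.
public class ListRefresher
{
    private DateTime _lastSeenUtc = DateTime.MinValue;

    public bool RefreshIfChanged(Func<DateTime> getServerLastUpdateUtc, Action reloadFromDatabase)
    {
        var serverLast = getServerLastUpdateUtc();   // cheap call, no DB round trip
        if (serverLast <= _lastSeenUtc) return false;

        reloadFromDatabase();                        // the only expensive part
        _lastSeenUtc = serverLast;
        return true;
    }
}
```

The Interlocked calls are only there so concurrent requests read and write the marker consistently; any equivalent synchronization would do.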
There is a similar question: Polling database for updates from C# application. Another idea (maybe not a proper solution) would be to use the Microsoft Sync Framework. You can use a timer to sync the DB.
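For the "use a timer" part, the client-side wiring could be as simple as the sketch below; the poll handler is left hypothetical, since what it calls (your web service, the Sync Framework, or the LastUpdateTime check above) depends on which route you take:

```
using System;
using System.Timers;

class PollingClient
{
    static void Main()
    {
        // Poll every few seconds; keep the handler itself cheap, e.g. only
        // ask the service for the last-change time and reload the list when
        // that moment is newer than what was shown last time.
        var timer = new Timer(5000);    // 5 seconds, purely an example interval
        timer.Elapsed += (sender, e) =>
        {
            Console.WriteLine($"poll at {e.SignalTime}");  // hypothetical check goes here
        };
        timer.AutoReset = true;
        timer.Start();

        Console.ReadLine();             // keep the demo process alive
    }
}
```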