AzerothCore Docker deployment set to America/New_York but Ingame Events still seem to use UTC - azerothcore

I have been trying to figure this out on my own, but haven't been able to. I'm not sure if this is a bug or just something on my end. In my Dockerfile, I changed every instance of ENV TZ=Etc/UTC to ENV TZ=America/New_York, and this works correctly for the realm time in game.
Realm Time and Local Time match
Events, however, still seem to use UTC. Is there some other configuration I am missing? For example, the STV Fishing Event does not start at 2 PM Eastern Time but at 2 PM UTC, i.e. 10 AM Eastern. I have never been able to do the event by it just happening; I have had to start it manually.
The evening events also start early. I think the evening event is supposed to start at 6 PM server time? And the Leprithus event, I believe, is supposed to start at 8 PM.
Current events at 5:55PM Realm time
Is there another configuration file that I need to modify? The events do not seem to be adjusting for the server's timezone.

Related

Firebase cloud functions dynamic time zones

So in my Android app, I am using the Realtime Database to store information about my users. That information should be updated every Monday at 00:00. I am using a cloud function to do this, but the problem is time zones. Right now I have set the time zone to 'Europe/Sofia' for testing purposes. The documentation says that the time zone for cloud functions must be from the TZ database. So I figured I could ask users for their preferred time zone before registering in my app and save it in the database. My question is: after getting the user's preferred time zone, is there a way to write only one cloud function and execute it dynamically for each time zone in the TZ database, or do I have to create an individual function for each time zone?
If I correctly understand your question, you could have a scheduled Cloud Function which runs every hour from 00:00 to 23:00 UTC+14:00 on Mondays, and, for every execution (i.e. for every hour within this range), query for the users that should be updated and execute the updates.
I can't go into more detail based on the info you have provided.
It's not possible to schedule a Cloud Function using a dynamic timezone. You must know the timezone at the time you write the function and declare it statically in your code.
If you want to schedule something dynamically, read through your options in this other question: https://stackoverflow.com/a/42796988/807126
So, you could schedule a repeating function that runs every hour and check whether something should be run for a user at the moment it was invoked. Or, you can schedule a single future invocation of a function with a service like Cloud Tasks, and keep rescheduling it as needed.
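The hourly-check approach both answers describe can be done with a single function plus the stdlib's IANA timezone data; here is a rough sketch (not Firebase-specific, and the function name is made up) of deciding, at each hourly invocation, whether a given user's zone is currently in its Monday-midnight hour:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def is_monday_midnight_hour(tz_name: str, now_utc: datetime) -> bool:
    """Return True if, in the given IANA timezone, the current local
    time falls within the first hour of Monday (00:00-00:59)."""
    local = now_utc.astimezone(ZoneInfo(tz_name))
    return local.weekday() == 0 and local.hour == 0

# Example: at 22:00 UTC on Sunday it is already Monday 00:00
# in Europe/Sofia (UTC+2 in winter).
now = datetime(2024, 1, 7, 22, 0, tzinfo=timezone.utc)  # Sunday 22:00 UTC
print(is_monday_midnight_hour("Europe/Sofia", now))       # True
print(is_monday_midnight_hour("America/New_York", now))   # False
```

Checking the local *hour* (rather than exactly 00:00) also covers zones with :30 or :45 offsets, which hit midnight partway through a whole UTC hour.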

Time Period of Firebase realtime profile operations

The official document of Firebase Realtime profiler says:
The profiler tool logs all the activity in your database over a given period of time, then generates a detailed report.
But it doesn't tell the specific time like last 24 hours.
My database usage shows that on a particular day the bandwidth consumed was X, so I want to specify a particular day or time window (like the last 24 hours) in the Firebase Realtime Database profiler.
Q1. Is it possible to specify the duration in profile like last 24 hours?
Q2. How does profiler work?
I think the profiler just scans some log and keeps writing/streaming the operations to the user console until the user stops the profiling tool. Correct me if I am wrong here.
Q1. Is it possible to specify the duration in profile like last 24 hours?
No, it's not possible to profile the last hours. But you can profile the next 24. (I'll get to that in Q2.)
Q2. How does profiler work?
What the profiler does is log all the operations happening on your database from the time you run the command until the time you stop it. While it runs, the console shows how many operations have been logged so far, and you can press Enter to stop logging. It will then show you (or save to a file, if you prefer) speed and bandwidth reports.
It also has an option to set the logging duration (in seconds). For example, if you want to log the next 24 hours you can use:
firebase database:profile -d 86400
But keep in mind that logging only happens while the computer that started it is still on. This means you'll need to keep your computer on for the next 24 hours.

Airflow Controller triggers Target DAG with a future execution date; Target DAG stalls

I have a Controller DAG (SampleController) that will call a Target DAG (SampleWait), both with a start_date of datetime.now() and a schedule_interval of None.
I trigger the Controller DAG from command line or the webserver UI, and it will run right away with an execution date of "right now" in my system time zone. In the screenshot, it is 17:25 - which isn't my "real" UTC time; it is my local time.
However, when the triggered DAG Run for the target is created, the execution date is "adjusted" to the UTC time, regardless of how I try to manipulate the start_date - it will ALWAYS be in the future (21:25 here). In my case, it is four hours in the future, so the target DAG just sits there doing nothing. I actually have a sensor in the Controller that waits for the Target DAG to finish, so that guy is going to be polling for no reason too.
Even the examples in the GitHub repository for the Controller-Target pattern exhibit exactly the same behavior when I run them, and I can't find any proper documentation on how to actually handle this issue, just that it is a "pitfall".
It is strange that Airflow seems to be aware of my time zone and adjusts within one operator, but not when I trigger from the command line or the webserver UI.
What gives?
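Before Airflow gained timezone support (in 1.10), execution dates were naive datetimes assumed to be UTC, so a local "now" passed through as an execution date gets reinterpreted as UTC and lands hours in the future. One workaround is to convert the local wall-clock time to naive UTC yourself before handing it to the trigger; a minimal stdlib sketch (the function name and zone are illustrative, not an Airflow API):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_naive_utc(local_dt: datetime, tz_name: str) -> datetime:
    """Interpret a naive wall-clock datetime in the given zone and
    return the equivalent naive UTC datetime (what pre-1.10 Airflow
    expects for an execution_date)."""
    aware = local_dt.replace(tzinfo=ZoneInfo(tz_name))
    return aware.astimezone(timezone.utc).replace(tzinfo=None)

# 17:25 local (US Eastern, DST in effect, UTC-4) -> 21:25 UTC,
# which is exactly the four-hour "future" date the target DAG got.
print(to_naive_utc(datetime(2017, 6, 1, 17, 25), "America/New_York"))
```

Passing the converted value (rather than local "now") as the triggered run's execution date keeps the target DAG from sitting in the future.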

How to reschedule a coordinator job in OOZIE without restarting the job?

When I changed the start time of a coordinator job in job.properties in Oozie, the job did not pick up the changed time; instead it keeps running at the old scheduled time.
Old job.properties:
startMinute=08
startTime=${startDate}T${startHour}:${startMinute}Z
New job.properties:
startMinute=07
startTime=${startDate}T${startHour}:${startMinute}Z
The job is not running at the changed time (minute 07); it still runs at minute 08 of every hour.
How can I make the job pick up the updated properties (the changed timing) without restarting or killing the job?
You can't really change the timing of a coordinator via any method provided by Oozie (v3.3.2). When you submit a job, its properties are stored in the database, whereas the actual workflow is in HDFS.
Every time the coordinator executes, the workflow must be present at the path specified in the properties during job submission, but the properties file itself is not needed. What I mean to imply is that the properties file does not come into the picture after you submit the job.
One hack is to update the time directly in the database using an SQL query, but I am not sure about the implications of that; the property might become inconsistent across the database.
You have to kill the job and resubmit a new one.
Note: Oozie does provide a way to change the concurrency, end time, and pause time, as specified in the official docs.

Oozie: run at some time or at some frequency, whichever comes first

The benefit of coordinating by an absolute time is that (insofar as the jobs take a consistent amount of time) the output will be ready for others at a known time (e.g. update a dashboard during the night for people to see in the morning).
The benefit of coordinating by a relative frequency is that, if Oozie (or its server) is down, no jobs are skipped (e.g. a daily job might run 2 hours late, but not 22 hours late).
How can I do something like:
start="2009-01-01T21:00Z"
frequency="${coord:days(1)}"
run-if-skipped="true"
I.e., when all's well, jobs run daily at 9 PM. If something happens to Oozie (e.g. the server is rolled) between 8 PM and 10 PM, then once Oozie comes back up at 10 PM the job should run at 10 PM, and then tomorrow at 9 PM as normal.
https://github.com/yahoo/oozie/wiki/Oozie-Coord-Use-Cases
Not sure that I fully understand the question.
If the server is down and you restart your coordinator, it will start from the coordinator start time.
Alternatively, you can make your job run every hour and check whether the output folder already exists; if it does, stop. You can use a Decision Control Node for that.
