WordPress Multi-Master DB Replication: Deadlock when updating cron table in wp_options

We're running WordPress in an environment that features a multi-master DB behind a load balancer. The error log was filling up with a deadlock error when WP tried to update the cron table in wp_options. We disabled wp-cron altogether but are still seeing the error, so, two questions:
1) What causes the cron table in wp_options to be updated?
2) It appears to run on every page load. Can this be disabled and a cron job set up in crontab to run it periodically instead?
Thanks

WordPress uses wp-cron.php as a means of running scheduled tasks when the user doesn't have access to, or doesn't want to set up, cron jobs via Unix. This process looks at the scheduled jobs in the cron table in wp_options, and if the specified time (or more) has elapsed, the job executes.
wp-cron.php uses wp-includes/cron.php (the WordPress Cron API) to run scheduled jobs. In cron.php you'll find a number of functions that update the cron table; these functions are all around the scheduling of events.
Any WordPress function or plugin that requires a scheduled event uses the Cron API to do so. However, the act of scheduling an event (even if it already exists) updates the cron table in wp_options. Even with wp-cron.php totally disabled, these parts of WordPress and its plugins still load and schedule their events, updating the cron table in the process.
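To illustrate, this is roughly what a plugin's scheduling code looks like (a minimal sketch; the hook name my_plugin_hourly_task and its callback are hypothetical, while the Cron API calls are the standard WordPress ones):
// Runs on every page load via the 'init' hook.
add_action( 'init', function () {
    // Only schedule the event if it isn't already queued;
    // wp_schedule_event() writes to the 'cron' row in wp_options.
    if ( ! wp_next_scheduled( 'my_plugin_hourly_task' ) ) {
        wp_schedule_event( time(), 'hourly', 'my_plugin_hourly_task' );
    }
} );

// The work that runs when the event fires.
add_action( 'my_plugin_hourly_task', function () {
    // ... do the scheduled work ...
} );
A plugin that skips the wp_next_scheduled() check rewrites the cron option on every request, which is exactly the kind of frequent write that can deadlock under multi-master replication.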
I haven't figured out exactly why the deadlock occurs, other than knowing it must be related to the DB/site config, but I do now know that WordPress itself is behaving as designed.

I've run into this same issue -- the databases would go out of sync very quickly. Certain plugins made it occur faster (they scheduled lots of cron jobs), but even with them disabled, eventually the errors would block replication.
I was able to keep replication working by doing two things.
The first, in my.ini, was to add:
slave-skip-errors = 1062
This instructs the MySQL slave to skip replicated statements that fail with a duplicate-key error (error 1062) instead of halting replication. My cluster is set up as active-passive, so in theory there should be no "real" writes to the passive MySQL node unless the active node is down, in which case there will be no "real" writes to that node. The only stuff that gets written to the passive node is the result of wp-cron jobs, which (in theory) are also running on the active node.
The second, in each site's wp-config.php, was to add:
/** disable cron */
define('DISABLE_WP_CRON', true);
This blocks wp-cron from running at all, so either one of these solutions should work on its own.
Another option would be to disable wp-cron, but leave the full database syncing in place, and schedule a script to call each site's wp-cron.php (you'd be accomplishing manually what the wp-cron service does automatically). That way, it will only run on the active node and the data should be synced over to the passive node with no problems.
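As a sketch of that approach (the URL and five-minute interval are placeholders; adjust per site), a crontab entry like this on the active node, combined with DISABLE_WP_CRON, keeps the scheduled tasks running:
*/5 * * * * wget -q -O /dev/null http://example.com/wp-cron.php?doing_wp_cron=true
Because only the active node runs the crontab entry, all cron-related writes originate on one node and replicate cleanly to the passive one.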


What will be the best way to activate WordPress cron if your site has no hits or visits?
I'm just starting out my new DAILY BIBLE QUOTE blog and I have no hits yet. I have scheduled a post to be published “once every day at 6:00 AM”. Since I have no hits, I'm afraid the scheduled post will not get published, so I added a cron job on my shared cPanel hosting using the command below:
0 6 * * * wget -O /dev/null --timeout=120 http://example.com/wp-cron.php?doing_wp_cron=true
So my questions are:
Will the scheduled post get published using the command above, or is querying only the site http://example.com/ in the cron command enough to do the job?
Do I need to schedule the post at 6:05 AM and add the crontab entry to run at 6:00 AM?
Do not run the cron job on the site's own hosting, since the IP is going to be the same; it's better to use a cron service like EasyCron's free plan (https://www.easycron.com/user/plan), since the IP is going to be different and my site will treat it as a new visit, hence activating wp-cron.
To address your questions:
Yes, you would simply need to load the homepage of your site (though you could load any page of the site) to trigger the WP Cron event. Loading any page loads the entire WordPress stack, and part of that loading is checking the database to see if any cron events are ready to run. The cron information is saved in the database along with the time the next instance of that cron runs and the function/hook to run at that time; if anything matches, the event fires.
For more information on WordPress cron, you can review their documentation here:
https://developer.wordpress.org/plugins/cron/
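If you want to confirm the publish event is queued, here is a minimal sketch (the post ID 123 is a placeholder; run it from a small mu-plugin or with wp eval-file). Scheduled posts are queued as a single event on the publish_future_post hook, keyed by the post ID:
// wp_next_scheduled() returns the Unix timestamp of the queued event, or false.
$post_id   = 123; // placeholder: the ID of your scheduled post
$timestamp = wp_next_scheduled( 'publish_future_post', array( $post_id ) );
if ( $timestamp ) {
    echo 'Will publish at ' . gmdate( 'Y-m-d H:i:s', $timestamp ) . " UTC\n";
} else {
    echo "No publish event is queued for this post.\n";
}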
Why not schedule the post at 5:59AM and then run the cron at 6AM? That would ensure that it gets published right on the minute.

Batch Jobs Not Running When Set to Waiting on My Dev Server

My level of experience with the product is basic at best, but I'm expected to be a developer; I have a basic understanding of many things.
Right now my job is to investigate canceling lines in Purchase Orders. We have a workflow set up to handle those, and I'm trying to duplicate the scenario in my dev instance. Whenever a user cancels a line, the workflow is supposed to engage, and I've found that a batch job is what triggers that workflow to work (maybe that's the case with all workflows, but I don't know that for sure).
I've set up my personal Dev AX Instance under System Configuration => System => Server Configuration to use my personal Dev AOS server that my client is also running under, but when I go to System Configuration => Batch Jobs => Batch Jobs, then find the Batch Job I've been looking for and set the status to Waiting, the Batch Job never runs.
On our Test instance, the job is configured exactly the same way, except it uses the AOS server allotted for it.
I ran a SQL script to change the batch job to use my personal Dev AOS server, then restarted the Dynamics AX servers.
There must be something I'm doing wrong for my personal dev instance. I've been reading some things from here about what may be going on and following down the list, but I'm pretty sure the problem is even stupider => https://www.daxrunbase.com/2017/07/02/troubleshooting-batch-jobs-in-ax/
First of all, do you have all 3 workflow jobs set up?
Workflow message processing
Workflow due date processing
Workflow line-item notifications
They can be set up from System administration > Setup > Workflow > Workflow infrastructure configuration.
Secondly, it is OK for the periodic batch jobs to have status Waiting. They will be in status Executing for a short time and then they will be Waiting for the next run. If the Scheduled start date/time value in this batch job is in the past, that could be a problem. Otherwise everything is OK.
Lastly, if you have already ticked the Is batch server check-box in System administration > Setup > System > Server configuration, please also make sure to move the workflow batch group in the Batch server groups section in the same form from Remaining groups to Selected groups.
The batch jobs should start at the Scheduled start date/time, or a bit later; you may need to wait a minute and refresh the grid.

How can I disable the git fileserver update schedule

I've set up a hook on my GitLab server to call salt-run fileserver.update from a post-update hook.
How can I disable the schedule that does an update every 60 seconds, to reduce the load on my GitLab server?
The 60-second interval at which the Git fileserver is updated is defined by the loop_interval setting, which you can set in your master configuration file:
# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60
However, this interval controls not only the GitFS update schedule, but also a number of other maintenance tasks, so you should not increase this interval by too much.
From a quick reading of the source code (I'm not a core Salt developer though, so I might be mistaken), the GitFS update is hard-coded to run on the same schedule as these other maintenance tasks. There does not appear to be a way to disable or change the interval of only the GitFS update schedule.
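If load is the concern, a moderate increase is the usual workaround; a minimal sketch of the master config change (the 120-second value is just an illustration) would be:
# /etc/salt/master
# Run the master maintenance cycle (fileserver updates, job-cache cleanup,
# scheduler) every 120 seconds instead of the default 60.
loop_interval: 120
Restart the salt-master service afterwards for the change to take effect.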

Unordering jobs that are scheduled to run in Control-M

How do I stop my running Control-M jobs from executing? Basically I want them to stop running and be removed from the Monitoring view.
In Control-M version 8 you can un-schedule a whole folder by selecting Manual Order as the Order Method. If it's just one specific job, you can edit the scheduling properties of that job directly and select Manual Order there too.
Right-click on your job and select "Hold". More actions, including Delete, should then be available.
In Control-M V7 and V8/9, open the Desktop/Planning environment, load the scheduling table, select the job individually, un-select all months on the Scheduling tab, then Verify and Check In. This prevents the job from being loaded and executed (and it will, obviously, not show in the Monitoring window).
Note: In Planning, always load the whole scheduling table, not individual jobs or sub-applications, since this will overwrite the Control-M Server DB.

How to reschedule a coordinator job in Oozie without restarting the job?

When I changed the start time of a coordinator job in job.properties in Oozie, the job did not pick up the changed time; instead it keeps running at the old scheduled time.
Old job.properties:
startMinute=08
startTime=${startDate}T${startHour}:${startMinute}Z
New job.properties:
startMinute=07
startTime=${startDate}T${startHour}:${startMinute}Z
The job is not running at the changed time (the 7th minute); it is still running at the 8th minute of every hour.
Please can you let me know how I can make the job pick up the updated properties (the changed timing) without restarting or killing it?
You can't really change the timing of the coordinator via any method provided by Oozie (v3.3.2). When you submit a job, the properties are stored in the Oozie database, whereas the actual workflow definition lives in HDFS.
Every time the coordinator runs, the workflow must be present at the path specified in the properties at submission time, but the properties file itself is not read again. In other words, the properties file does not come into the picture after the job has been submitted.
One hack is to update the time directly in the database with a SQL query, but I am not sure about the implications; the property might become inconsistent across the database.
You have to kill the job and resubmit a new one.
Note: Oozie does provide a way to change the concurrency, end time, and pause time, as described in the official docs.
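As a sketch of the kill-and-resubmit route, plus the documented -change option (the job ID, Oozie URL, and properties path are placeholders):
# Kill the existing coordinator, then resubmit it with the edited job.properties
oozie job -oozie http://oozie-host:11000/oozie -kill 0000123-170101000000000-oozie-oozi-C
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run

# Only end time, concurrency and pause time can be changed on a running coordinator
oozie job -oozie http://oozie-host:11000/oozie -change 0000123-170101000000000-oozie-oozi-C \
  -value endtime=2024-12-31T00:00Z\;concurrency=2\;pausetime=2024-06-01T00:00Z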
