How to keep a Maniphest task indexed after editing its title - phabricator

After a new Maniphest task has been created, chances are that you may need to change the task title to a new one with different keywords. However, upon editing the title, the task cannot be found by its new keywords but only by the old ones.
After manually reindexing the database the edited tasks can be found again, but further title changes will fail again until another reindex is run.
I assume the normal behavior is that tasks should be searchable by their current title at any time, without reindexing the database. Should I expect different behavior from Maniphest?
Phabricator Version:
phabricator cb033673b6eb3dc8330d2ddea0fd358eae3b939a (Nov 16 2018)

The usual culprit is that your Phabricator daemons (background workers) aren't running.
From the phabricator directory:
# Check the status of daemons:
./bin/phd status
# (re)start the daemons:
./bin/phd restart
See Managing Daemons with PHD. You can also look at the daemon console, which should be reachable at https://your.phabricator.url/daemon/; it shows the job queue so you can see whether jobs are failing for some reason.
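If search results are still stale after the daemons come back up, a one-off manual reindex from the phabricator directory should catch the index up. A rough sketch (flags may vary by Phabricator version):
# Rebuild the search index for every object (can take a while on large installs):
./bin/search index --all
# Or reindex a single task by its monogram, e.g. T123:
./bin/search index T123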

Related

OOZIE stuck in RUNNING status

I use Oozie to run a workflow, but the simple official example shell-wf (echo hello oozie) gets stuck in the RUNNING state and never ends. The workflow can be submitted but then stays in RUNNING. There is no error in the job log in the Oozie UI.
When I submit a shell action with a spark-submit call inside, the job is never submitted and cannot be seen in the Spark UI. I suspect the shell script didn't run at all.
What could the problem be?
A Quick Checklist
For anyone with the same problem, here is a checklist for checking your system. Hope it helps!
Check the jobTracker setting in your Oozie configuration. Note: if a job has run successfully before, jobTracker is probably not the problem. Related discussion can be found here
Check your disk usage. If disk usage is greater than 90%, remove some files to bring it back under 90%. (That was my case!)
Check the Console URL of the stuck action. It can be found under Job > Job Info tab > Actions > Action > Action Info tab. The job state there may help you find the problem.
Check the Oozie logs, typically in /usr/local/oozie/logs. Look through oozie.log* for exceptions.
Details
Disk usage
If your action state is
YarnApplicationState: ACCEPTED: waiting for AM container to be allocated, launched and register with RM.
That may be a disk problem. Related discussion can be found in MapReduce job hangs, waiting for AM container to be allocated. Solutions can be found in Why does Hadoop report "Unhealthy Node local-dirs and log-dirs are bad"?.
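To make the disk-usage and log checks above concrete, here is a rough sketch to run on the cluster nodes (the paths and the 90% figure are assumptions based on common defaults):
# Show disk usage per filesystem; YARN marks local-dirs/log-dirs unhealthy above ~90% usage by default:
df -h
# List all nodes as seen by the ResourceManager, including unhealthy ones:
yarn node -list -all
# Scan the Oozie server logs for exceptions:
grep -i exception /usr/local/oozie/logs/oozie.log*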

Batch Jobs Not Running When Set to Waiting on My Dev Server

My level of experience with the product is basic at best, but I'm expected to be a developer; I have a basic understanding of many things.
Right now my job is to investigate canceling lines in Purchase Orders. We have a workflow set up to handle those, and I'm trying to duplicate the scenario in my dev instance. Whenever a user cancels a line, the workflow is supposed to engage, and I've found that a batch job is what triggers that workflow to work (maybe that's the case with all workflows, but I don't know that for sure).
I've set up my personal Dev AX Instance under System Configuration => System => Server Configuration to use my personal Dev AOS server that my client is also running under, but when I go to System Configuration => Batch Jobs => Batch Jobs, then find the Batch Job I've been looking for and set the status to Waiting, the Batch Job never runs.
On our Test instance, the job is configured exactly the same way, except it uses the AOS server allotted for that instance.
I ran a SQL script to change the batch job to use my personal Dev AOS server, then restarted the Dynamics AX servers.
There must be something I'm doing wrong in my personal dev instance. I've been reading through this troubleshooting list about what may be going on and working down it, but I'm pretty sure the problem is even stupider => https://www.daxrunbase.com/2017/07/02/troubleshooting-batch-jobs-in-ax/
First of all, do you have all 3 workflow jobs set up?
Workflow message processing
Workflow due date processing
Workflow line-item notifications
They can be set up from System administration > Setup > Workflow > Workflow infrastructure configuration.
Secondly, it is OK for the periodic batch jobs to have status Waiting. They will be in status Executing for a short time and then they will be Waiting for the next run. If the Scheduled start date/time value in this batch job is in the past, that could be a problem. Otherwise everything is OK.
Lastly, if you have already ticked the Is batch server check-box in System administration > Setup > System > Server configuration, please also make sure to move the workflow batch group in the Batch server groups section in the same form from Remaining groups to Selected groups.
The batch jobs should start at the Scheduled start date/time, or a bit later; you may need to wait a minute and refresh the grid.

How to reschedule a coordinator job in OOZIE without restarting the job?

When I changed the start time of a coordinator job in job.properties in Oozie, the job did not pick up the changed time; instead it kept running at the old scheduled time.
Old job.properties:
startMinute=08
startTime=${startDate}T${startHour}:${startMinute}Z
New job.properties:
startMinute=07
startTime=${startDate}T${startHour}:${startMinute}Z
The job is not running at the changed time (the 7th minute); it still runs at the 8th minute of every hour.
Can you let me know how I can make the job pick up the updated properties (the changed timing) without restarting or killing the job?
You can't really change the timing of a coordinator via any method provided by Oozie (v3.3.2). When you submit a job, its properties are stored in the database, whereas the actual workflow lives in HDFS.
Every time the coordinator runs an action, the workflow must be present at the path specified in the properties at submission time, but the properties file itself is not read again. In other words, the properties file does not come into the picture after the job has been submitted.
One hack is to update the time directly in the database with a SQL query, but I am not sure about the implications; the property might become inconsistent across the database.
You have to kill the job and resubmit a new one.
Note: Oozie does provide a way to change the concurrency, endtime and pausetime, as specified in the official docs.
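For reference, a sketch of the kill-and-resubmit flow plus the supported -change option, using the standard Oozie CLI (the job id, Oozie URL and dates below are placeholders):
# Kill the running coordinator:
oozie job -oozie http://oozie-host:11000/oozie -kill 0000001-181116000000000-oozie-oozi-C
# Resubmit so the updated job.properties (new startTime) is read at submission:
oozie job -oozie http://oozie-host:11000/oozie -run -config job.properties
# Only endtime, concurrency and pausetime can be changed on a running coordinator:
oozie job -oozie http://oozie-host:11000/oozie -change 0000001-181116000000000-oozie-oozi-C \
    -value endtime=2019-12-31T00:00Z\;concurrency=2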

Wordpress Multi-Master DB Replication: Deadlock when updating cron table in wp_options

We're running Wordpress in an environment that features a multi-master DB behind a load-balancer. The error log was filling up with a deadlock error when WP tried to update the cron table in wp_options. We disabled wp-cron altogether but are still seeing the error, so, two questions:
1) What causes the cron table in wp_options to be updated?
2) It appears to run on every page load. Can this be disabled and a cron job set up in crontab to run it periodically instead?
Thanks
Wordpress uses wp-cron.php as a means of running scheduled tasks when the user doesn't have access to, or doesn't want to set up, cron jobs via Unix. This process looks at the scheduled jobs in the cron table in wp_options and, if the specified time (or more) has elapsed, the job executes.
wp-cron.php uses wp-includes/cron.php (the Wordpress Cron API) to run scheduled jobs. In cron.php you'll find a number of functions that update the cron table; these functions are all around the scheduling of events.
Any function of Wordpress or plugin that requires a scheduled event uses the Cron API to do so. However, the action of scheduling an event (even if it already exists) updates the cron table in wp_options. Even with wp-cron.php totally disabled, these elements of Wordpress/the plugin are loading and scheduling their events, trying to update the cron table in the process.
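If WP-CLI happens to be installed (an assumption; it is not part of WordPress core), it gives a quick way to see what those plugins have queued up in that cron table:
# List everything currently scheduled in the cron array stored in wp_options:
wp cron event list
# Run whatever is due right now, i.e. the work wp-cron.php would normally do:
wp cron event run --due-now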
I've not figured out exactly why the deadlock occurs, other than knowing it must be related to the DB/site config, but I do now know that Wordpress is behaving itself.
I've run into this same issue -- the databases would go out of sync very quickly. Certain plugins made it occur faster (they scheduled lots of cron jobs), but even with them disabled, eventually the errors would block replication.
I was able to keep replication working by doing two things.
The first, in my.ini, was to add:
slave-skip-errors = 1062
This instructs MySQL replication to skip statements that fail with a duplicate-key error (1062). My cluster is set up as active-passive, so in theory, there should be no "real" writes to the passive MySQL node unless the active node is down, in which case, there will be no "real" writes to that node. The only stuff that gets written to the passive node is as a result of wp-cron jobs, which (in theory) are also running on the active node.
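As a quick sanity check (a sketch, assuming the mysql client is available on the passive node), you can confirm that replication is no longer stopping on the duplicate-key error:
# Errno 1062 is the duplicate-key error being skipped; Slave_SQL_Running should stay Yes:
mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_SQL_Running|Last_SQL_Errno|Seconds_Behind_Master"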
The second, in each site's wp-config.php, was to add:
/** disable cron */
define('DISABLE_WP_CRON', true);
This blocks wp-cron from running at all, so either one of these solutions should work on its own.
Another option would be to disable wp-cron, but leave the full database syncing in place, and schedule a script to call each site's wp-cron.php (you'd be accomplishing manually what the wp-cron service does automatically). That way, it will only run on the active node and the data should be synced over to the passive node with no problems.
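A sketch of that last option, assuming DISABLE_WP_CRON is set as above: crontab entries on the active node only, one per site (the hostnames here are placeholders), so the scheduler fires on just one side of the cluster:
# Call each site's wp-cron.php every 5 minutes; output is discarded:
*/5 * * * * curl -s https://example.com/wp-cron.php?doing_wp_cron > /dev/null 2>&1
*/5 * * * * curl -s https://example.org/wp-cron.php?doing_wp_cron > /dev/null 2>&1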

build queue issues in CC.net

Having a question on how the build queue is configured in CC.net.
I believe we have an issue: when trying to "force" build a scheduled project, the server tries to run several builds at the same time and fails most of them, all except the one that started first.
We need to get to a state where, regardless of how many builds are scheduled or how many we "force" start at about the same time, all build requests are placed in a build queue and
executed one after another in the order they were placed, with no extra requests generated.
A "Build Failed" email is sent even though the build was actually successful.
In short, the erroneous email is likely due to an error in the build server's scheduler/queue: it tries to run two builds instead of one when asked for a "forced" build, so the first one succeeds and the second one fails.
How can I correct/resolve this issue?
Thanks
Nilesh
To specify your project's queue you need to set the queue attribute, like this:
<project name="MyFirstProject" queue="Q1" queuePriority="1">
The default is one queue per project. If you manually set the same queue (for example Q1) for all your projects, then you will have a single shared queue.
As for queuePriority, the projects in the queue (not yet started) are ordered by queuePriority; projects with a lower queuePriority start first.
It's all described in the CC.net documentation, which is currently offline due to a problem at SourceForge.
