I have a workflow with many sessions that run in parallel. When one of the sessions fails, the workflow waits for the other sessions to complete and then the entire workflow is marked failed. We have selected the option "fail parent if this task fails". But we want the workflow to fail and stop immediately if any of the sessions fails, without waiting for the other sessions to finish.
PS: We have a Unix shell script that calls all the workflows one by one, so if we can solve it using Unix shell scripting, that would be fine as well.
Does anyone have any solution for it?
The best thing you can do in Informatica is use a Control Task to abort the workflow, and have it connected from all sessions with an OR condition. Something like:
start--S1--S2--S3
        \   \   \
         \---\---\--(OR)--CTL
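If you go the shell-script route from your PS, a rough fail-fast sketch using pmcmd (connection flags vary by environment; the service, domain, folder, and workflow names here are illustrative):
#!/bin/bash
set -e  # stop at the first workflow that fails
for wf in wf_load_a wf_load_b wf_load_c; do
    # -wait blocks until the workflow completes and makes pmcmd
    # return a nonzero exit code if the workflow failed
    pmcmd startworkflow -sv INT_SVC -d DOMAIN -u "$PM_USER" -p "$PM_PASS" -f MY_FOLDER -wait "$wf"
done
Note this only stops the sequence of workflows at the first failure; it cannot abort parallel sessions inside a running workflow, which is what the Control Task above handles.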
I have started n Slurm jobs and I would like a separate process to wait until at least one of them has finished. The waiting process should use as little CPU time as possible, so polling would not be ideal (unless there is no other way).
I know of scontrol wait_job, but as far as I can see it can only wait for exactly one job.
If you have sufficient privileges, you can use strigger.
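For example, a sketch (the job ID and program path are illustrative):
strigger --set --jobid=12345 --fini --program=/home/user/on_finish.sh
This registers a trigger that runs the given program once job 12345 finishes.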
Otherwise, you could use a workflow manager (for instance FireWorks). They typically do polling, but at a reasonable pace.
Note that if the action to take is to submit another job, you can also submit it right away and use the --dependency parameter to delay its execution until ready.
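For example (the job ID and script name are illustrative):
sbatch --dependency=afterok:12345678 followup.sh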
If you want to run something in the current session and wait for jobs you have already submitted, you can do so by submitting a job with a dependency that finishes immediately (it will still need to wait for the scheduler, but it doesn't require installing anything new and doesn't require a bunch of polling):
sbatch --time=<some short time> <usual account args> --wait --dependency afterok:<your jobs> /dev/stdin <<< $'#!/bin/bash\nsleep 1'
For example:
sbatch --time=0:15:00 --account my-account-abc --wait --dependency afterok:12345678:12345679 /dev/stdin <<< $'#!/bin/bash\nsleep 1'
I know this is kind of late, but hope this helps someone :)
Again, I'm stuck on Gearman. I was implementing the ulabox GearmanBundle, which works nicely, but there are two things I don't understand yet.
How do I start a worker?
In the documentation, I should first execute a worker and then start the client code.
https://github.com/ulabox/GearmanBundle/blob/master/README.md
Open the first console and run:
$ php app/console gearman:worker:execute --worker=AcmeDemoBundle:AcmeWorker
Now open another console and run:
$ php app/console gearman:client:execute --client=UlaboxGearmanBundle:GearmanClient:hello_world --worker=AcmeDemoBundle:AcmeWorker --params="{\"foo\": \"bar\" }"
So I assumed that if I don't start the worker manually, the job would still get done by itself. If I start the worker, everything is fine. But it seems a bit strange to have to start it manually, even if an iteration count of x is set so that the worker kills itself after that number of jobs.
So please, can anyone help me out with this?
Thanks in advance and kind regards,
Phil
Yes, to run tasks in the background, not only the Gearman server needs to be running but also the workers.
So you have the gearman server running, waiting for commands (e.g. sending an email).
Additionally, you have workers waiting.
When Gearman sees a new command, it looks for the first free worker and passes the command to it.
The worker then executes the command and, when finished, reports back to the Gearman server that it is done and ready to process a new command.
The more workers you have, the faster the commands in the queue are processed.
You can use "supervisor" to automatically keep the workers running.
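For instance, a minimal supervisord program entry for the worker command from the bundle's README might look like this (the program name and process count are illustrative):
[program:gearman-worker]
command=php app/console gearman:worker:execute --worker=AcmeDemoBundle:AcmeWorker --no-interaction
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
supervisord restarts each worker process whenever it exits, so the worker's iteration limit stops being something you have to manage by hand.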
Below you can find a few links with more information:
http://www.daredevel.com/php-jobs-with-gearman-and-supervisor/
http://www.masnun.com/2011/11/02/gearman-php-and-supervisor-processing-background-jobs-with-sanity.html
Running Gearman Workers in the Background
I am calling a child process from a process in PowerShell.
The child process will not end; it runs continuously in the background.
I need the logs written by the child process continuously.
process.StandardOutput.ReadToEnd() only writes the logs to a file once the child process ends.
But I need the logs continuously.
Please help me in this regard.
You can redirect the log to files as below:
Start-Process your_executable -ArgumentList "your arguments" -RedirectStandardOutput the_path_you_want_to_put_your_log -RedirectStandardError the_path_to_put_error_log
The logs of the child process will be written to the log files continuously; you can open the files to check them from time to time.
Edit
And I think a Unix tail equivalent in Windows PowerShell will help you even more, since it lets you monitor the log file in real time: for example, the Get-FileTail cmdlet from the PowerShell Community Extensions, or the built-in Get-Content <logfile> -Wait.
All,
I'm looking for a good way to do some job backgrounding through either of these two services.
I see PHPFog supports IronWorker, but I need something more real-time. Through these cloud-based PaaS services, I'm not able to use popen("php background.php --token=1234", "r"). So I'm thinking the best solution might be to kick off a Gearman worker to handle the job. (Actually my preferred method would be to use WebSockets to keep a connection open and receive feedback from the job, rather than long-polling a DB table through AJAX, but neither of these services supports WebSockets.)
Question 1 is: is there a better solution than using Gearman to offload the job?
Question 2 is: I see PagodaBox supports 'worker listeners' (http://help.pagodabox.com/customer/portal/articles/430779) ... has anybody set this up with Gearman? Would it work?
Thanks
I am using PagodaBox with a background worker in an application I am building right now. Basically, PagodaBox daemonizes a PHP process for you (meaning it will continually run in the background), so all you really have to do is create a script that checks a database table for tasks to run, runs them, and then sleeps a bit so it's not running too many queries against your database.
This is a simplified version of what I have running:
<?php
// Remove time limit
set_time_limit(0);
// Show ALL errors
error_reporting(-1);

// Run daemon
echo "--- Starting Daemon ---\n";
while (true) {
    // Query 'work_queue' table for new tasks
    // Loop over items and do whatever tasks are associated with them
    // Update each row to mark the task as completed

    // Wait a bit
    sleep(30);
}
A benefit to this approach is that it's easy to test via CLI:
php tasks.php
You will see all the echo statements come through in the console as it runs, and of course this is much easier to do than a more complicated setup with other dependencies like Gearman.
So whenever you add a new task to the table, the maximum amount of time you'll wait for that task to be picked up is 30 seconds (or whatever your sleep time is). This is preferable to cron jobs: if you set up a cron job to run every minute (the lowest possible interval) and the work takes longer than a minute, another cron process will start working on the same queue, and you can end up with a lot of duplicated task work that is hard to debug and troubleshoot. If you instead have either a single background worker that runs all tasks, or multiple background workers that each handle a different task type, you will never run into this issue.
I have a few workflows where I would like R to halt the Linux machine it's running on after the script completes. I can think of two similar ways to do this:
run R as root and then call system("halt")
run R from a root shell script (could run the R script as any user) then have the shell script run halt after the R bit completes.
Are there other easy ways of doing this?
The use case here is for scripts running on AWS, where I would like the instance to stop after the script completes so that I don't get charged for machine time after the job runs. The instance I use for data analysis is EBS-backed, so I don't want to terminate it, simply suspend it. Issuing a halt command from inside the instance has the same effect as a stop/suspend from the AWS console.
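For reference, option 2 might look like this minimal wrapper, run as root (the user name and script path are illustrative):
#!/bin/bash
# run the analysis as an unprivileged user, then stop the machine
su - ruser -c 'Rscript /home/ruser/analysis.R'
halt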
I'm impressed that works. (For anyone else surprised that an instance can stop itself, see notes 1 & 2.)
You can also try "sudo halt", so that you don't need to run R as root, as long as the user account running R is capable of running sudo. This is pretty common on a lot of AMIs on EC2.
Be careful about assuming R will quit cleanly: believe it or not, one can crash R. It may be better to have a separate script that watches the R PID and, once that PID is no longer active, terminates the instance. Issuing the command inside of R means that if R crashes, it never reaches the call to halt; calling it from within another script can be dangerous, too. If you know Linux well, what you're looking for is the PID from starting R, which you can pass to another script that checks ps, say every second, and then terminates the instance once the PID is no longer running.
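A minimal watchdog along those lines (how the PID gets passed in is up to you):
#!/bin/bash
# hypothetical watchdog: $1 is the PID of the running R process
RPID=$1
# kill -0 checks that the process still exists without signaling it
while kill -0 "$RPID" 2>/dev/null; do
    sleep 1
done
sudo halt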
I think a better solution is to use the EC2 API tools (see: http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ for documentation) to terminate OR stop instances. There's a difference between the two of these, and it matters if your instance is EBS backed or S3 backed. You needn't run as root in order to terminate the instance - the fact that you have the private key and certificate shows Amazon that you're the BOSS, way above the hoi polloi who merely have root access on your instance.
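For example, stopping an instance with the EC2 API tools looks roughly like this (the instance ID is illustrative):
ec2-stop-instances i-10a64379
(The modern AWS CLI equivalent is aws ec2 stop-instances --instance-ids <instance-id>.)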
Because these credentials can be used for mischief, be careful about running the API tools from a given server: you'll need your certificate and private key on that server, which is a bad idea in the event that you have a security problem. It would be better to send a message to a master server and have it shut down the instance. If you have messaging set up in any way between instances, this can do all the work for you.
Note 1: Eric Hammond reports that the halt will only suspend an EBS instance, so you still have storage fees. If you happen to start a lot of such instances, this can clutter things up. Your original question seems unclear about whether you mean to terminate or stop an instance. He has other good advice on this page
Note 2: A short thread on the EC2 developers forum gives advice for Linux & Windows users.
Note 3: EBS instances are billed for partial hours, even when restarted. (See this thread from the developer forum.) Having an auto-suspend close to the hour mark can be useful, assuming the R process is no longer working, in case one might re-task that instance (i.e. to save the cost of restarting it). Other useful tools to consider: setTimeLimit and setSessionTimeLimit, and various checkpointing tools (I have a question that mentions a couple). Using an auto-kill is useful if one has potentially badly behaved code.
Note 4: I recently learned of the shutdown command in package fun. This is multi-platform. See this blog post for commentary, and code is here. Dangerous stuff, but it could be useful if you want to adapt to Windows. I haven't tried it, though.
Update 1. Three more ideas:
You could use .Last() and runLast = TRUE for q() and quit(), which could shut down the instance.
If using littler, or a script that invokes your script via Rscript, the same command line functions could be used (see the one-liner after this list).
My favorite package of today, tcltk2, has a neat timer mechanism called tclTaskSchedule() that can be used to schedule the execution of an expression. You could then go crazy with the execution of stuff just before an hourly interval has elapsed.
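A one-line sketch of the second idea (the script name is illustrative):
Rscript -e 'source("analysis.R"); system("sudo halt")'
Since an error in source() aborts the Rscript call, the halt only runs if the script completes, which leaves a crashed instance up for inspection.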
system("echo 'rootpassword' | sudo halt")
However, the downside is having your root password in plain text in the script.
AFAIK the ways you mentioned are the only ones. In any case, the script will have to run as root to be able to shut down the machine (if you find a way to do it without root, that's possibly an exploit). You ask for an easier way, but system("halt") is just one additional line at the end of your script.
sudo is an option -- it allows you to run certain commands without being prompted for a password. Just put something like this in /etc/sudoers
<username> ALL=(ALL) PASSWD: ALL, NOPASSWD: /sbin/halt
(of course replacing <username> with the name of the user running R) and system("sudo halt") should just work.