Autosys BOX job containing no CMDs

I am new in my job, and one of my responsibilities is to maintain a string of workload automation processes that run out of Autosys. In browsing through the jobs, I noticed that some of them are job_type: BOX with no children of job_type: CMD inside, and no conditions either. I can't figure out the purpose of such a job.
Any idea?

Here are a few reasons:
One possible reason is that the child jobs inside the box were decommissioned one by one, but the box itself never was.
It could be a placeholder for future jobs that are still in the design phase and will be added later.
It could be a coding error, where the jobs that were meant to go inside this box got attached to the main box instead.
Try running the following command to see whether any jobs depend on your box:
job_depends -c -J box_name
If you do not get any dependent processes, check whether the box is a placeholder for a future job; if not, you can go ahead and delete it.
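If it does turn out to be an orphaned, empty box, the deletion is usually scripted through JIL; here is a minimal sketch (box_name is a placeholder for your actual box name):

# Remove the unused box by feeding a delete_box statement to jil
echo "delete_box: box_name" | jil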
Hope this helps.

Related

Autosys, FileWatcher Job, passing if the file is not present, is it possible?

I am new to Autosys and looking for a way to achieve the reverse of file watching.
I am looking for a job similar to a file watcher, which keeps running while the file is present and only passes once the file is not present. The dependent job should only run if the file is not present.
I have a few questions:
1) I am not sure if I can achieve this with FileWatcher.
2) Does the FileWatcher job stop running after it finds the file?
3) Is there any way to negate the success condition for a FileWatcher job?
Or if anyone can point me to good, extensive documentation on FileWatcher, that would be a help too.
Thanks
You cannot achieve this with a FileWatcher job alone.
A FileWatcher job stops running and goes to the SUCCESS state as soon as it finds the file in the defined path. There is no way to negate its SUCCESS state.
This is because it is assumed that such functionality can easily be implemented with scripts.
You can achieve what you want with a batch script (Windows) or a shell script (Unix/Linux). The Autosys job triggers a script that checks for the file at the intended location, sleeps for some time (say 20 seconds), checks again, and exits with code 0 once it no longer finds the file, or with a different exit code if the file still hasn't moved after a certain number of checks. A sketch is below.
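Here is a minimal sketch of such a shell script; the path, retry count, and sleep interval are assumptions you would adjust:

#!/bin/sh
# Hypothetical "reverse file watcher": succeed only once the file is gone.
FILE=/path/to/watched/file   # assumed path - adjust to your environment
MAX_CHECKS=30                # give up after this many checks
SLEEP_SECS=20

i=0
while [ $i -lt $MAX_CHECKS ]; do
    if [ ! -e "$FILE" ]; then
        echo "File is gone - exiting with success."
        exit 0
    fi
    sleep $SLEEP_SECS
    i=$((i + 1))
done

echo "File still present after $MAX_CHECKS checks - failing."
exit 1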
You can then make downstream jobs dependent on this Autosys job as per your requirement.
Let me know if more clarification is needed on this.

Unix scripting to check a service

I am struggling to write a script that checks whether a particular service is running on my server and then sends me mail.
Should this script be part of my bash profile so that it is always running?
regards
rick
The .profile, .bashrc and friends will be run on login, so they are of no good use for background monitoring. Two solutions come to mind:
Either use cron to run your script at predefined intervals
Or make it loop and use your system's init environment (SysV, Upstart, systemd, ...) to control it
My recommendation is to stick with cron; it even makes mailing the results dead easy, since cron mails any output your script produces to the job's owner (or a MAILTO address).
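As a minimal sketch (the service name, path, and mail address are assumptions you would replace):

#!/bin/sh
# check_service.sh - hypothetical monitoring script: it only produces output
# when the service is down, so cron only sends mail when something is wrong.
SERVICE=myservice   # assumed process name - adjust to the service you care about

if ! pgrep -x "$SERVICE" > /dev/null; then
    echo "$SERVICE is not running on $(hostname) at $(date)"
fi

And the matching crontab entry (crontab -e), checking every five minutes:

# cron mails any output of the script to the MAILTO address
MAILTO=rick@example.com
*/5 * * * * /path/to/check_service.sh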

Autosys job in Windows fails to copy all files but doesn't fail

We have a box scheduled in Autosys. When the box gets triggered at the scheduled time, not all of the PDFs generated by one of the steps get copied, yet the job does not fail. When we put the box on hold and run it step by step, all the outputs get copied.
A good troubleshooting step would be to add a short sleep/delay between the generation of the files and the downstream jobs.
A better way might be to use a file trigger or file watcher that only lets the subsequent steps proceed once the files are all there (you can trigger on the number of files or whatever stat is appropriate).
If your copy step is a simple copy command without any validation (like copy abc_file_*.pdf), then it won't have any trouble copying whatever files it sees, even if that is fewer than you intend.
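To illustrate such validation, here is a minimal shell sketch (paths and the expected count are assumptions; on Windows the same idea can be expressed in a batch or PowerShell wrapper):

#!/bin/sh
# Hypothetical validated copy: fail loudly if fewer PDFs than expected are present.
SRC_DIR=/path/to/output          # assumed source directory
DEST_DIR=/path/to/destination    # assumed destination directory
EXPECTED=10                      # assumed number of PDFs the upstream step should produce

count=$(ls "$SRC_DIR"/abc_file_*.pdf 2>/dev/null | wc -l)
if [ "$count" -lt "$EXPECTED" ]; then
    echo "Only $count of $EXPECTED PDFs present - failing so Autosys can alert or retry."
    exit 1
fi

cp "$SRC_DIR"/abc_file_*.pdf "$DEST_DIR"/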

Demote a build? How to delete promoted builds and run specified script on deletion in Jenkins

In the project I'm working for we're having a continuous deployment setup. The goal is to always install the latest working build to production, unless someone manually overrides this functionality.
In order to make this work, we:
Run static code analysis
Run unit tests
Run integration tests
Run automatic UI tests, to the extent this is feasible
If any of the above steps fails, the build process is halted and the build is marked as failed. If the installation package is created, it is then installed, stage by stage, to
CI --> staging --> production
At each step we run integration and UI tests for the environment, to make sure we didn't introduce something new that fails in the subsequent environments. If none of the tests fail, and N minutes pass without anyone pressing the panic button, the build gets promoted to the next environment. If the tests fail, we want to delete the package and discard it completely. The installation packages are, however, delivered to other servers, so we need to run a bunch of remote (shell) scripts to make this step happen.
The problem is that there is a big set of failure cases which we cannot reliably test in the normal automated cycle, e.g. page layout, or some integrations that fail only in production, and so on.
So the actual question: How shall I demote/delete builds, once they've been promoted? Is it possible to either run a remote script when doing delete build or use any of the promotion plugins to achieve this functionality? Is there some think-outside-the-box solution for this that I might not have thought about?
Instead of deleting builds manually, you may write a Jenkins job that accepts the build number as a parameter, deletes it, and then does the rest of the housekeeping. You can configure Jenkins access privileges so that people do not delete builds manually by accident.
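As a sketch of the shell step such a parameterized job might run (the job name, URL, and credential variables are assumptions, and BUILD_TO_DELETE stands for the build-number parameter of the cleanup job):

#!/bin/sh
# Hypothetical shell step of a parameterized "delete build" job.
JENKINS_URL=https://jenkins.example.com   # assumed Jenkins URL
JOB_TO_CLEAN=my-main-build                # assumed name of the job whose build is deleted

# Delete the given build via Jenkins' REST API, then do the extra housekeeping.
curl -fsS -X POST -u "$JENKINS_USER:$JENKINS_TOKEN" \
    "$JENKINS_URL/job/$JOB_TO_CLEAN/$BUILD_TO_DELETE/doDelete"

# TODO: remove the corresponding RPMs from the package servers, notify the team, etc.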
This might be a very particular case, but we decided against creating a separate job for removing the builds, for the very simple reason of keeping all the logging related to a specific build number in one single place. The setup was the following:
Promotion here means making the installation package (RPM) available to the given server, where auto-update handles the actual upgrade of the package.
We have one main build that runs every time a new commit is available. There was some fine-tuning related to quiet times etc., but basically every newly pushed set of commits resulted in a new build. The build runs all the relevant and available tests, which are far from complete and probably never will be.
Every hour a separate promotion step handles promotion from staging to production. This build then kicks off another promotion, which takes the latest accepted build from CI to staging. There is a 30-minute delay before builds are promoted CI --> staging, to prevent accidental promotions of last-second commits; the delay was implemented with some bash find scripting (a sketch is below). The promotions run in this order to make sure a build has been available in staging for at least one hour before going to production.
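A minimal sketch of the kind of find-based delay meant here (the drop directory and naming are assumptions):

#!/bin/sh
# Hypothetical: among packages that are at least 30 minutes old, pick the newest one.
DROP_DIR=/var/ci/rpms   # assumed location of candidate packages

CANDIDATE=$(find "$DROP_DIR" -name '*.rpm' -mmin +30 -exec ls -t {} + | head -n 1)
if [ -z "$CANDIDATE" ]; then
    echo "No package older than 30 minutes - nothing to promote yet."
    exit 0
fi
echo "Promoting $CANDIDATE to staging."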
The actual answer:
The promotion steps were done as separate builds. In order to perform a real promotion, rather than just a separate build with a separate log, each of these builds kicks off an actual promotion on the main build, using curl to call the remote HTTP API. This leaves the relevant promotion star on the main build, and with different colors the promotions are visible at a glance.
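For reference, here is a sketch of such a curl call, assuming the Promoted Builds plugin's forcePromotion endpoint and made-up job, promotion, and credential names:

# Hypothetical: remotely trigger the "promote-to-staging" promotion of main-build #123
curl -fsS -X POST -u "$JENKINS_USER:$JENKINS_TOKEN" \
    "https://jenkins.example.com/job/main-build/123/promotion/forcePromotion?name=promote-to-staging"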
To demote builds I decided to create a separate "demote build" promotion step. This issues a purple star as a sign that the build is defective and has been removed. The demotion itself is done by accessing the correct build in the UI and pressing the "Remove build" button. No automation has been added to this step, but by creating a separate step it would be fairly easy to automate the demotion as well. We, however, have not gotten quite that far yet.
The benefits of this approach include
A build is deleted by accessing the failed build, not by providing parameters, which makes it much easier to document and to get right under pressure.
Having a "panic button" like this available for anyone to press builds trust in and ownership of the process, not only amongst the developers but also among managers and DevOps.
It's dead simple to spot dead builds, as the log sits alongside the other promotion logs.
Having all the relevant promotion calls in the same build makes further scripting easier
Acute things we still have to improve include automating the testing in the later stages of the build pipeline, and finding a suitable way of downgrading builds after demotion. For example, in production a defective build and a demotion must always lead to installing the last good build, which has turned out to be fairly hard to achieve, since production data centers are rarely accessible at this level from the development DC where the CI system sits. Stopping and starting the build pipeline must also be automated, or else there is a chance of slipping back to a manual state.
Naturally, in the spirit of continuous improvement, there are always things to improve. The whole setup is something of a bash/perl scripting mess, but since it's scripted and repeatable, there is always the option of improving one small piece at a time. The most important thing is the automation, as it allows for incremental steps, which any manual steps more or less prevent.
For anyone looking for an easy way to delete a build with custom steps:
Create a 'defective' promotion.
Make it manually triggered.
Force it to run on the master.
Add a choice parameter DELETE with choices NO and YES.
Add action Execute Shell.
if [ "${DELETE}" = "YES" ]; then
  # TODO: my custom steps
  curl -X POST "${PROMOTED_URL}/doDelete"
fi
To delete a build now, just go to promotions, flip the choice to YES and click approve.

How to specify prerequisite jobs in Hudson

I have a Hudson job that just does a check-out/update to a third-party library. Call this Job A.
Several other jobs depend on this library. Call them Jobs B and C. They use the stuff checked out by Job A, and need it to be up-to-date.
My question is, how can I require Jobs B and C to always run Job A (to update the library) before they run through their build routine?
If this is not possible, can someone recommend another way to achieve the same effect?
You can do it the other way around with "child" jobs. For example, you can configure Job A to trigger Jobs B and C after it has succeeded. (You will find the option on Job A's configuration page.)
If you need more advanced conditions for triggering the child jobs, you can take a look at the Parameterized Trigger plugin.
After thinking about the problem some more, I think I may have been over-complicating things.
Since the library in Job A is rarely updated, we decided it's probably acceptable to just scan SVN on an interval and update when there are changes. There's a small possibility that builds of B and C will miss library changes if they start right after the changes to A were checked in, but that should rarely be an issue.
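For reference, SCM polling in Hudson/Jenkins is configured with a cron-style schedule. An illustrative entry (not necessarily the interval we use) that checks SVN every 15 minutes would be:

# Poll SCM schedule (cron syntax): check SVN for changes every 15 minutes
*/15 * * * *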
If I follow you, it sounds like you might need the Join plugin:
This plugin allows a job to be run after all the immediate downstream jobs have completed. In this way, the execution can branch out and perform many steps in parallel, and then run a final aggregation step just once after all the parallel work is finished. The plugin is useful for creating a 'diamond' shape project dependency. This means there is a single parent job that starts several downstream jobs. Once those jobs are finished, a single aggregation job runs.
