Suppose we have some 400 jobs in different boxes, and I want to put the daily running jobs on hold between 9 and 10 pm only. How can I do that?
Do you use WCC or command line?
In WCC you can just use a comma-separated list of jobs to see only the jobs you want. You can filter by status and select the jobs you want to take action on, then select 'change status' to do a sendevent but check off the 'future' box. Set it up so you send an 'on-hold' event at 9pm and again for an 'off-hold' at 10pm.
If you use command line you'll want to do something like below. Do all of your boxes have some naming conventions in common? If so you can run the command only once using the string that returns your boxes. In the AutoSys instance I work in we use a prefix structure...
To get the list of running jobs:
autorep -J prefix% | [find for windows or egrep for unix] " RU "
... where you need the spaces around the two-letter status inside the double quotes; otherwise it would also return lines where a job or box name merely contains those two characters.
To do a future sendevent use the usual sendevent syntax and just append the switches to indicate the time you want the action taken.
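For example, a rough sketch for Unix (the prefix and date are placeholders, and you should double-check the sendevent -T switch and its date format against your AutoSys version):
for job in $(autorep -J prefix% | egrep " RU " | awk '{print $1}')
do
    # hold each running job at 9pm and release it again at 10pm (future events)
    sendevent -E JOB_ON_HOLD  -J "$job" -T "08/05/2024 21:00"
    sendevent -E JOB_OFF_HOLD -J "$job" -T "08/05/2024 22:00"
done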
Will this accomplish what you're looking to do? If not, please let us know whether you're using Windows or Unix, as well as any additional information that can help us understand the specifics of your scenario.
We have a Control-M job's out-condition that triggers the next successor jobs.
However, we can see that some numbers are getting appended to it. What check do we need to add in the job settings to get rid of the number at the end,
apart from ordering the Control-M folder with the "Order as Independent Flow" check?
This can happen in two different scenarios:
When you manually order a set of jobs and specify "Order as Independent Flow", Control-M adds these suffixes to prevent interference between the manually ordered jobs and pre-existing jobs.
In the Planning Domain/Links Settings there is a setting called "Create unique names for conditions" - this will add a random number to the end of a condition where that condition already exists. However, if this option is disabled and a condition with the same name is created, a single condition is linked to multiple destinations.
If you have access to the BMC documentation you can see this info here -
https://documents.bmc.com/supportu/9.0.18/help/Main_help/en-US/index.htm#11880.htm
I am trying to append the current time to the file name. The one complication is that the FTP job I am using transfers multiple files, and all the files have the same name except for the time we append to it. Could anyone tell me how I can add one second/minute to the %%TIME parameter so that I can pass it into my file names?
I am assuming that you're using the AFT/MFT module. If so, you can use [T] to get the timestamp and then [C#] to get a counter number.
[N]__[T]_[C3].[E] -> this will give you something like MyFileName__235703_001.txt
If, however, you want to stick with using %%TIME you can create your own local variable based on %%TIME and then use the "%%PLUS 1" numeric expression to create a modified variable.
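As a rough illustration (the variable name %%MYTIME is made up here, and the exact AutoEdit expression syntax can differ between Control-M versions, so verify it against your documentation):
%%MYTIME = %%TIME %%PLUS 1
new file name: MyFileName__%%MYTIME.txt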
Is it possible to dynamically create Control-M jobs?
Here's what I want to do:
I want to create two jobs. First one I call a discovery job, the second one I call a template job.
The discovery job runs against some database and comes back with an array of parameters. I then want to start the template job for each element in the returned array passing in that element as a parameter. So if the discovery job returned [a1,a2,a3] I want to start the template job 3 times, first one with parameter a1, second with parameter a2 and third one with parameter a3.
Only when each of the template jobs finish successfully should the discovery job show as completed successfully. If one of the template job instances fails I should be able to manually retry that one instance and when it succeeds the Discovery job should become successful.
Is this possible? And if so, how should this be done?
Between the various components of Control-M this is possible.
The originating job will have an On/Do tab - this can perform subsequent actions based on the output of the first job. It can be set to work in various ways, but it basically works on the principle of "do x if y happens". The 'y' can be the job status (ok or not), the exit code (0 or not), or a text string in standard output (e.g. "system wants you to run 3 more jobs"). The 'x' can be a whole list of things too - demand in a job, add a specific condition, set variables.
You should check out the Auto Edit variables (I think they've changed the name of these in the latest versions); these are your user-defined variables (use the ctmvar utility to define/alter them). The variables can be defined for a specific job only or across your whole system.
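As a hedged example of setting such a variable from the command line (the folder, job and variable names below are placeholders, and you should check the exact ctmvar parameters for your version):
ctmvar -ACTION SET -VAR "%%\my_folder\my_job\PARM1" -VAREXPR "a1"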
If you don't get the degree of control you want then the next step would be to use the ctmcreate utility - this allows full on-the-fly job definition.
You can do it, and the way I found that worked was to loop through a create script which then plugs in your variable name from your look-up. You can do the same for the job name by using a counter to generate names such as adhoc0001, adhoc0002, etc. What I have done is to create the n ad-hoc jobs required by the query, order them into a new group, and then once the group is complete send the downstream conditions on. If one fails you can re-run it as normal. I use ctmcreate -input_file, which works a treat.
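A minimal sketch of that loop (the file paths, group and job names are invented for illustration, and the ctmcreate switch names are from memory, so verify them against the ctmcreate documentation for your version):
i=0
while read -r parm                  # one discovery result per line, e.g. a1, a2, a3
do
    i=$((i+1))
    ctmcreate -TASKTYPE command \
              -JOBNAME "adhoc$(printf '%04d' "$i")" \
              -GROUP adhoc_group \
              -CMDLINE "/opt/scripts/template_job.sh $parm"
done < /tmp/discovery_results.txt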
https://developers.google.com/analytics/devguides/collection/protocol/v1/reference
In the link above the Measurement Protocol is elaborated. Suppose I have a CSV file with columns like EventName, ClientID, etc. and I'd like to submit it to the Universal Analytics system. Is there a UNIX command, utility or third-party software that will allow me to submit that data from the command line, or any kind of friendlier UI?
I'm not a bash wizard myself, so there are probably all kinds of ways to improve this (I adapted an example I found on the web), but here is a barebones example.
To group the hits into sessions (if applicable) you need a client id. Client id is a mandatory parameter, but if you want to log each row from your file as a new session you can use a random number for the cid parameter.
However, the example assumes that the first column in your csv file contains a parameter that can be used as the cid. Be aware that a session has a maximum of 500 hits (after that you need to switch the cid) and that there is a limit of 20 hits per session that is replenished at 2 hits per second, so you probably want to build a delay into your script.
The example assumes a csv file with a semicolon as the delimiter (this can be adjusted via the IFS variable). It also assumes there are three columns: one for the cid, one for the page path and one for the document title. If you have more than three columns, the last variable (pagetitle) will consume all remaining columns, so in that case append the extra column names to the line that starts with "while".
The script then simply builds a URL (the variables from the line that starts with "while" are identified by a dollar sign in front of the name) and uses wget to call the Google tracking server (the server returns a gif image; the -q -O /dev/null options below tell wget to discard it).
#!/bin/bash
UAID="UA-XXXXX-XX" // Google Analytics Account ID
INPUT=data.cvs // Input file name
OLDIFS=$IFS // store default csv delimiter
IFS=; // set csv delimiter
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; } // nice error message if input file is missing
while cid page pagetitle // while there are rows in the csv read fields
do
wget "www.google-analytics.com/collect?v=1&tid=$UAID&cid=$cid&t=pageview&dp=$page&dt=$pagetitle" // call Google Tracking server
done < $INPUT // no more rows
IFS=$OLDIFS // restore default csv delimiter
Obviously you'd have to make this script executable. I tested this (recent Debian/bash) so I'm rather sure it will work. It might not be very efficient though.
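If you want the delay mentioned above, the simplest option is to add something like sleep 1 just before the done line; at one hit per second you stay under the replenishment rate described earlier.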
I am trying to schedule a Job in Autosys and I would like this job to run once a month. Say, 5th day of every month. Could you please help how we can configure this in Autosys?
I did some research and came to know that we need to create a calendar in Autosys. Could someone please help how we can create and configure such a calendar in autosys?
There are a few ways to create a calendar with tools provided with Autosys. The commands to launch the tools are autocal (which is a graphical editor) and autocal_asc (which is a text-based editor). The executables are available in the Autosys root user directory (e.g. /etc/autosys). I would recommend using the graphical interface, since it gives some options which will make it easier on you.
Once the utility is running, you should be able to create a new calendar (File > New). Give it a name (e.g. 5thOfTheMonth), and choose Edit > Apply Rule. Here you can configure the day(s) of the month you wish to run the job on, and many other options.
Once the calendar is created and saved, you can tie a job to it using the run_calendar JIL command (run_calendar: 5thOfTheMonth) or specifying the calendar in the Job Definition > Date/Time Options graphical interface.
Make an extended calendar, like this:
5th_day_every_month
-------------------
Enter Name: 5th_day_every_month
Enter Workdays [XXXXX..]:
Enter Non-workday Actions [' ',O,N,W,P]:
Enter Holiday Actions [' ',O,S,N,W,P]:
Enter Holiday Calendar [none]:
Enter Cycle Name [none]:
Enter Date Adjustment [0]:
Enter Date Conditions [DAILY]: MNTHD#5
If you have an Autosys web portal that allows you to import a calendar from a file, you can define an extended calendar inside a file. For the condition attribute of the extended calendar, you can specify a day of the month with MNTHD#nn, where nn is the nnth day of the month.
The extended calendar can be defined like
extended_calendar: fifth_of_month
condition: MNTHD#5
and you can include the calendar as the run_calendar attribute of your job like
insert_job: job_name
run_calendar: fifth_of_month
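Putting it together, a minimal JIL sketch (the job name, command, machine and start time are placeholders; adjust them to your environment and AutoSys version):
insert_job: monthly_5th_job
job_type: CMD
command: /opt/scripts/monthly_task.sh
machine: unix_host01
run_calendar: fifth_of_month
start_times: "06:00"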
You can find more information in the AutoSys documentation on the different conditions available for an extended calendar.