In Oozie (4.2.0, from HDP 2.6) I want to be able to get the list of jobs belonging to a certain group:
oozie jobs -filter group=export
But this filter never matches.
I set up the job with a few variables (while experimenting):
variable oozie.job.acl
variable group.name
the Hadoop property oozie.job.acl
I can see the group name in the Group column of the oozie jobs output (the one I gave for oozie.job.acl), but I cannot seem to use it in a filter.
Is there a trick I am missing or is it just not possible?
Related
We have a Control-M job out-condition that triggers the next successor jobs, but we can see that some numbers are getting appended to it. What check do we need to add in the job settings to get rid of the number at the end, apart from ordering the Control-M folder with the "Order as Independent Flow" check?
This can happen in two different scenarios:

When manually ordering a set of jobs and you specify "Order as Independent Flow", Control-M adds these suffixes to prevent interference between the manually ordered jobs and pre-existing jobs.

In the Planning Domain / Links Settings there is a setting called "Create unique names for conditions" - this adds a random number to the end of a condition name when that condition already exists. If this option is disabled and a condition with the same name is created, a single condition is linked to multiple destinations instead.
If you have access to the BMC documentation you can see this info here -
https://documents.bmc.com/supportu/9.0.18/help/Main_help/en-US/index.htm#11880.htm
I have scoured the net but was not able to find anything fruitful.
Assume that I have 5 boxes with 5 jobs within each.
I would want to update a common attribute within each one of those jobs.
Is there a way to perform the update to that attribute using a single update_job statement rather than writing 25 separate update_job commands, one per job?
Zz
Yes, you can, for example:

update_job: job_name
condition: s(job_name1)

update_job: job_name_a
condition: s(job_name2)

update_job: job_name_b
condition: s(job_name3)
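Repeated blocks like these lend themselves to generation. A minimal sketch (box and job names are hypothetical, and the stanza format simply mirrors the example above) that builds all 25 stanzas in one loop:

```python
# Generate the repeated update_job stanzas instead of typing 25 of them.
# The box/job naming scheme below is a hypothetical placeholder.
stanzas = []
for box in range(1, 6):        # 5 boxes
    for job in range(1, 6):    # 5 jobs in each box
        name = f"box{box}_job{job}"
        stanzas.append(f"update_job: {name}\ncondition: s({name}_pred)")

# Join the stanzas into one script you can feed to your update run.
script = "\n\n".join(stanzas)
print(len(stanzas))
```

Adjust the naming loop to however your boxes and jobs are actually named.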
Dave
Is it possible to dynamically create Control-M jobs?
Here's what I want to do:
I want to create two jobs. First one I call a discovery job, the second one I call a template job.
The discovery job runs against some database and comes back with an array of parameters. I then want to start the template job for each element in the returned array passing in that element as a parameter. So if the discovery job returned [a1,a2,a3] I want to start the template job 3 times, first one with parameter a1, second with parameter a2 and third one with parameter a3.
Only when each of the template jobs finish successfully should the discovery job show as completed successfully. If one of the template job instances fails I should be able to manually retry that one instance and when it succeeds the Discovery job should become successful.
Is this possible? And if so, how should this be done?
Between the various components of Control-M this is possible.
The originating job will have an On/Do tab - this can perform subsequent actions based on the output of the first job. It can be set up in various ways, but it basically works on the principle of "do x if y happens". The 'y' can be the job status (ok or not), the exit code (0 or not), or a text string in standard output (e.g. "system wants you to run 3 more jobs"). The 'x' can be a whole list of things too - demand in a job, add a specific condition, set variables.
You should check out the Auto Edit variables (I think they've changed the name of these in the latest versions) but these are your user defined variables (use the ctmvar utility to define/alter these). The variables can be defined for a specific job only or across your whole system.
If you don't get the degree of control you want then the next step would be to use the ctmcreate utility - this allows full on-the-fly job definition.
You can do it; the way I found that worked was to loop through a create script which plugs in your variable name from your look-up. You can then do the same for the job number by using a counter to generate job names such as adhoc0001, adhoc0002, etc. What I have done is to create the n ad-hoc jobs required by the query, order them into a new group, and then, once the group is complete, send the downstream conditions on. If one fails you can re-run it as normal. I use ctmcreate with -input_file, which works a treat.
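A rough sketch of that looping idea in Python. The parameter list stands in for the discovery query result, and the output fields are illustrative placeholders, not real ctmcreate input syntax - consult the ctmcreate documentation for the actual -input_file format:

```python
# Stand-in for the array returned by the discovery job/query.
params = ["a1", "a2", "a3"]

# Build one input line per parameter, numbering the ad-hoc jobs
# adhoc0001, adhoc0002, ... via a zero-padded counter.
# Field names here are hypothetical placeholders.
lines = []
for i, param in enumerate(params, start=1):
    lines.append(f"jobname=adhoc{i:04d} group=ADHOC_GRP var=%%PARAM={param}")

print(lines[0])
```

Write the lines to a file and hand that file to ctmcreate; the group-complete condition then releases the downstream jobs as described above.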
I'd like to create a random 'staging' directory name as part of a larger workflow. Is it possible to randomise a configuration property's value for use in sub-workflows?
If the workflow ID is 'random' enough, you can use ${wf:id}.
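For illustration (the property name and paths are hypothetical), ${wf:id} can be interpolated wherever EL is evaluated in workflow.xml, e.g. when passing a staging path down to a sub-workflow:

```xml
<sub-workflow>
    <app-path>${appPath}</app-path>
    <configuration>
        <property>
            <!-- hypothetical property name; ${wf:id} is the Oozie EL
                 function returning the current workflow job's ID -->
            <name>stagingDir</name>
            <value>/tmp/staging/${wf:id}</value>
        </property>
    </configuration>
</sub-workflow>
```

Each workflow run gets a distinct ID, so each run resolves to a distinct staging path.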
I am trying to remove all users from a group using the Plone.Api method (within Plone4).
So I wrote this code:
users = api.user.get_users(groupname="The Test Group")
for user in users:
    api.group.remove_user(groupname="The Test Group", username=user.id)
But the api.group.remove_user call does not seem to function. What is the proper way to remove users from a group within plone?
I paused here in ipdb. These are the results of my calls:
ipdb> api.group.get(groupname=group_name)
<GroupData at /Plone/portal_groupdata/groupname:61fbc50d623142d7887384d70f25358b used for /Plone/acl_users/source_groups>
So far so good, I store this in a variable so I can try this again later (for the group argument).
ipdb> grp = api.group.get(groupname=group_name)
ipdb> api.user.get_users(groupname=group_name)
[<MemberData at /Plone/portal_memberdata/stolas#domain.org used for /Plone/acl_users>]
I notice I get my user back from the group, so the user really is in this group.
ipdb> user.id
'stolas#domain.org'
ipdb> api.group.remove_user(group=grp, username=user.id)
I try the remove call again, and check if my member is still within the group.
ipdb> api.user.get_users(groupname=group_name)
[<MemberData at /Plone/portal_memberdata/stolas#domain.org used for /Plone/acl_users>]
I still am..
Should I reindex security or something like that?
PS: I also gave api.env.adopt_roles(['Manager']) a try, as well as the getToolByName(getSite(), 'portal_groups') route with portal_groups.removePrincipalFromGroup; everything returned False.
plone.api uses the group tool to remove group memberships:
portal_groups = portal.get_tool('portal_groups')
portal_groups.removePrincipalFromGroup(user_id, group_id)
I suspect your api.user.get_users(groupname="The Test Group") call returns an empty list, because you should pass the group name (the group ID), while you are passing the group title.
api.group.remove_user also accepts a group object instead of the group name:
Arguments ``groupname`` and ``group`` are mutually exclusive. You can
either set one or the other, but not both.
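Plone aside, the ID-versus-title mix-up can be seen with a plain-Python sketch (hypothetical in-memory data; get_users mimics api.user.get_users in that it matches on the group ID only):

```python
# Hypothetical stand-in for the groups tool: keyed by group ID,
# with the human-readable title stored separately.
groups = {
    "test-group": {"title": "The Test Group", "members": ["stolas#domain.org"]},
}

def get_users(groupname):
    # Like api.user.get_users: 'groupname' must be the group ID.
    group = groups.get(groupname)
    return list(group["members"]) if group else []

print(get_users("The Test Group"))  # the title is not an ID -> empty
print(get_users("test-group"))      # the ID -> the member list
```

The original loop silently did nothing because the lookup by title returned no users, so remove_user was never reached with a matching group.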
As I could not delete it as the user, I tried the following:
with api.env.adopt_roles(['Manager']):
    api.user.delete(user=self.context)
    parent = self.context.getParentNode()
    parent.manage_delObjects([self.context.getId()])
As the user delete might fail, I deleted the object as a Manager. This seemed to work without a hitch.