Autosys Force-run the Sub Boxes

My problem is simple: I have 3 boxes.
-Restart Box (start_times:"23:30")
--Stop Box (start_times:"23:30")
---Stop This Job
---Stop That Job
--Start Box (start_times:"5:00")
---Start This Job
---Start That Job
Now my problem is the force-run. I want to be able to hit force-run on Restart Box and have Stop Box and Start Box go to RUNNING.
However, as you might imagine, the sub boxes instead go into the ACTIVATED state, since they have start_times defined.
You may say: don't define the start_times on the BOX, define them on the JOBS. But I have a lot of jobs under each box, so that isn't very practical.
PS: I don't have access to sendevent.
Any help & ideas are appreciated. Thank you in advance.

It cannot be done with the same job, as far as I understand. You can simply create another box job (ad hoc) with no start times/schedule defined. It can be triggered whenever you need an out-of-schedule run, and the stop box will run immediately after the start box (the stop box has a condition of success(start box)).
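For illustration, a minimal JIL sketch of such an ad hoc wrapper (all names here are placeholders, and the dependency follows the ordering described in this answer; reverse the condition if you want the stop box to run first):

/* Ad hoc wrapper box: no start_times, so a force-run starts it immediately */
insert_job: adhoc_restart_box   job_type: b

insert_job: adhoc_start_box     job_type: b
box_name: adhoc_restart_box

insert_job: adhoc_stop_box      job_type: b
box_name: adhoc_restart_box
condition: s(adhoc_start_box)

The actual stop/start jobs would then be defined (or duplicated) under these sub boxes with no start_times of their own; note that an Autosys job can only belong to one box, so the scheduled and ad hoc boxes cannot share the same job definitions.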

Related

Holistic and simplified view for Airflow job status

Sorry if this is a dumb question. I'm still a somewhat novice dev.
I'm interested in creating a holistic view that shows the current status of every Airflow job my team maintains. The point would be to simplify the view rather than having the user go into the Airflow UI to check the status. I would be interested in something along the lines of a front-end webpage that has a list of each of the DAGs and a kind of progress bar whose length depends on the number of tasks for each DAG. A task would be light green while it is running, solid green on success, and red on failure. Similar to the Airflow UI, but a lot simpler. I would also want the home view to show the current day, with left and right arrows to move through other days if the user is interested. Essentially it would be an Airflow monitoring system for less technical users.
What would be a good way to go about this?
I'm also open to any other solutions anyone may have come up with that could help simplify monitoring a large number of Airflow jobs.
Kind of looking for some folks to help me brainstorm. Not sure if Stack is the right place for it. :)
I'll be the developer of this app so no need to pull punches as far as the technical end goes.
Currently, I'm thinking of a standard web app whose screen is populated from a log I'll keep in a backend database, updated by a function that gets called whenever a task concludes within a DAG. The view will always show the current day and whichever DAGs are scheduled to run that day, along with their progress.
Airflow allows creating plugins that expose web views with FlaskAppBuilder, so you can create a view, put whatever you want in it, and then add it to the Airflow UI.
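A minimal sketch of such a plugin, assuming Airflow 2.x; the view, category, and template names (dag_status.html) are placeholders, and the template itself would live in the plugin's templates folder:

from airflow.plugins_manager import AirflowPlugin
from flask_appbuilder import BaseView, expose

class DagStatusView(BaseView):
    default_view = "status"

    @expose("/")
    def status(self):
        # Collect whatever status data you need here (e.g. by querying the
        # Airflow metadata database) and hand it to your own template.
        return self.render_template("dag_status.html", dag_states={})

class DagStatusPlugin(AirflowPlugin):
    name = "dag_status_plugin"
    appbuilder_views = [
        {"name": "DAG Status", "category": "Monitoring", "view": DagStatusView()}
    ]

Dropping a module like this into the plugins folder adds a "DAG Status" entry to the Airflow menu, and the view can render whatever simplified, per-day progress display you like.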

Updating an Apple Watch's complication's content at midnight

I've added my complication entries and this all seems to work well - each complication entry is scheduled for midnight.
I'm testing the time change by setting my Mac's date to the following day, at which point I expect my complication to update to the next entry.
However, it only updates the entry when I open and close my app. I'm expecting to see it automatically change like the other standard complications do. Is this some behaviour I need to go out of my way to implement? I'd expect an automatic change as per the docs.
I've found the following:
ClockKit begins displaying a timeline entry precisely at the time specified by the entry’s date property.
But surely this is a greater than check too? I tried setting it exactly to midnight but (surprisingly) this doesn't work either.
Any help is appreciated.
Additionally, I found the following regarding updating the timeline, but I would have thought this is for changing the timeline entries, as opposed to just refreshing the complication for the current timeline:
During a background app refresh task. You can schedule background tasks to periodically update your watchOS content. This works best when your data changes at predictable times.
The Watch Simulator appears to have some quirks around handling of times. In particular, it doesn’t seem to obey time changes on the underlying system until you relaunch it.
Relaunch the Simulator after changing the system clock and check on your Complication then. If you want to test the transition to the next day specifically, you can set the system time to 11:58 pm and wait for it to cut over.

Taking an autosys job Off Hold but not wanting it to run immediately

I have 2 jobs in autosys: Job 1 and Job 2. Job2 will only run if Job1 is a success.
Now, in a normal scenario, Job1 and Job2 will run in succession as part of a daily batch.
But sometimes I need to re-run Job1 without kicking off Job2. If I just re-run Job1, its success will automatically kick off Job2.
So, I put Job2 'On Hold' and run Job1. All good so far.
Now, it is my experience that when I put Job 2 'Off Hold', it will immediately start running. I don't want that.
http://autosys-tutorials.blogspot.ca/2011/04/autosys-quick-reference.html
What I want is for Job2 to go into a Runnable state so that it gets picked up in the next batch run.
What status should I set Job2 to?
I am not sure whether you already got an answer, but there is one more option to stop an ON_HOLD job from running immediately when you take it off hold.
Change Job2's status from ON_HOLD to INACTIVE. Job2 will not start immediately, but it will still run once its condition (Job1 reaching the SUCCESS state) is met while it is INACTIVE.
If you mark Job2 ON_ICE instead, any jobs that depend on Job2 will be allowed to start.
Regards,
Kaliraja (HP Autosys Team)
In this situation you should put Job2 'On Ice' before re-running Job1, instead of putting it 'On Hold'. From the link you posted:
The difference between "on hold" and "on ice" is that when an "on hold" job is taken off hold, if its starting conditions are already satisfied, it will be scheduled to run, and it will run. On the other hand, if an "on ice" job is taken "off ice," it will not start, even if its starting conditions are already satisfied. This job will not run until its starting conditions reoccur.
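For anyone who does have command-line access, the approaches above map onto sendevent calls along these lines (Job2 as named in this question; exact syntax may vary slightly between Autosys releases):

sendevent -E JOB_ON_ICE -J Job2
sendevent -E JOB_OFF_ICE -J Job2
sendevent -E CHANGE_STATUS -s INACTIVE -J Job2

The first two put Job2 on ice before re-running Job1 and take it off ice afterwards; the third is the CHANGE_STATUS route to the INACTIVE state suggested earlier.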
You can put Job2 on hold and run Job1. After Job1 completes, change the status of Job2 to SUCCESS, so that it can start next time as per its schedule and conditions.
Put Job_2 as 'On_Ice' before starting Job_1
You can place Job2 ON_HOLD, but then mark the job as SUCCESS/FAILURE (ensure nothing else depends on Job2 being in that state).
You can put Job2 on hold and run Job1. After Job1 completes, change the status of Job2 to INACTIVE so that it can start next time as per its schedule.
You could use the override_job JIL subcommand to bypass the condition for a single run. The next scheduled run will automatically pick up the job with its original definition.
This example shows how to define a one-time job override. The following script runs the job RunData with no conditions (where some had been previously specified) and outputs the results to a different output file:
UNIX:
override_job: RunData
condition: NULL
std_out_file: "/tmp/SpecialRun.out"
If you want to cancel an override you have defined, use the following:
override_job: RunData delete
About your concern that you don't want to put Job2 ON_ICE because that would let the remaining jobs (Job3 - Job10) run: this scenario will not happen unless your main box is running. Since you've mentioned that you are going to run Job1 standalone, the main box should not be running; if it were, Job1 would be triggered automatically. So just make sure the main box is not running when you want to run Job1 alone. Otherwise, what you describe will indeed happen.
Options for this scenario:
1- Place Job2 on "No Execution" each time you want to run only Job1, and remove it afterwards.
2- As a permanent fix, place a look-back success condition such as s(job1,0) on Job2 (see the JIL sketch below); this covers both your ad hoc request and the regular scheduled run if you are placing it on hold and later removing the hold.
3- Place it on ice, but make sure no job depends on Job2.
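For option 2, the look-back condition could be applied with a small JIL update along these lines (job names as used in this question; the second argument to s() is the look-back window):

update_job: job2
condition: s(job1,0)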
Put these jobs in the same box, put job 2 ON_HOLD, FORCESTART the box, wait for job 1 to finish with SUCCESS, KILLJOB the box, take job 2 OFF_HOLD, change status of all jobs in box to SUCCESS.

How can I let the IBM Jazz 'Check-in and Deliver' wizard immediately show my default work item?

In Eclipse, using IBM Jazz for source control, I have a default task with state 'In Progress', that's assigned to me.
In the 'Check-in and Deliver' wizard, in the 2nd wizard pane ('Associate Work Item'), sometimes this task is displayed immediately, and other times it is not. In the latter case it only appears when I type a matching search criterion.
How can I make sure that my default task appears in this 2nd wizard pane immediately, without the need to type a search criterion?
This sounds like a defect - and worth raising on jazz.net. Be sure to specify what version of RTC you are using.
However, there is an easier way to do what you want. At any point you can drag the work item to the status bar at the bottom of the Eclipse UI (you may see "no work item" or similar text in the area you need to drag the work item to). It then becomes the default work item your code is associated with, and you will no longer have to use the 'Associate Work Item' page.
For example, in RTC v4.0.1 this part of the help explains the idea in more detail:
https://pic.dhe.ibm.com/infocenter/clmhelp/v4r0m1/topic/com.ibm.team.scm.doc/topics/t_scm_eclipse_workitems_working.html
This function has been part of RTC since v1 so will work regardless of what version you are using.
Hope that helps

Show workflow while running

I am trying out Workflow Foundation 4. Is it possible to show the workflow while it is running, with some sort of indicator of state (e.g. green box around activity = running)? The workflow would have to be read-only. However, I would also like to right click an activity and bring up info like how long the activity took to run, current logging state, etc.
Edit: I found the following links, but they are not for Workflow Foundation 4. Does anyone know what they have been replaced with?
WorkflowView: http://msdn.microsoft.com/en-us/library/ms617016.aspx
Workflow Monitor Sample: http://msdn.microsoft.com/en-us/library/ms741706.aspx
