Why doesn't a milestone show 100% complete based on predecessor when all tasks are completed for the section? - ms-project

I have four tasks in a section with a milestone. The first three tasks completed by their baseline dates and the fourth was delayed a few days. When the fourth task completed, the milestone that depended on it didn't change to 100% complete. Why?

Milestones, like any other task, must be statused independently. Linking a milestone to its predecessors does not automatically update its % Complete.

Related

Delay between tasks in Airflow or any other option?

We are using Airflow 2.0. I am trying to implement a DAG that does two things:
Trigger Reports via API
Download reports from source to destination.
There needs to be at least a 2-3 hour gap between tasks 1 and 2. From my research I see two options:
Two DAGs for the two tasks, scheduling the 2nd DAG two hours after the 1st DAG
A delay between the two tasks as mentioned here
Is there a preference between the two options? Is there a 3rd option with Airflow 2.0? Please advise.
The other option would be to have a sensor waiting for the report to be present. You can use the sensor's reschedule mode to free up worker slots.
generate_report = GenerateOperator(...)
# poke every 5 minutes; reschedule mode releases the worker slot between pokes
wait_for_report = WaitForReportSensor(mode='reschedule', poke_interval=5 * 60, ...)
download_report = DownloadReportOperator(...)
generate_report >> wait_for_report >> download_report
A third option would be to use a sensor between the two tasks that waits for the report to become ready: an off-the-shelf one if one exists for your source, or a custom one that subclasses the base sensor.
The first two options are different implementations of a fixed waiting time. There are two problems with that approach: 1. What if the report is still not ready after the predefined time? 2. Unnecessary waiting if the report is ready earlier.
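A minimal sketch of such a custom sensor, assuming Airflow 2-style imports; report_is_ready() is a made-up placeholder for whatever check your source supports:
from airflow.sensors.base import BaseSensorOperator

class WaitForReportSensor(BaseSensorOperator):
    # pokes the source until the report is available
    def poke(self, context):
        # report_is_ready() is a placeholder, e.g. an API call or a
        # file-existence check against the source system
        return report_is_ready()
With mode='reschedule' and a poke_interval of a few minutes, the download starts as soon as the report is actually ready instead of after a fixed delay.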

Airflow - Xcoms and parallel jobs - problem (xcoms overwriting themselves)

I have the pipeline below. On my 2nd level of tasks (first parallel tasks) I write an XCom with some ID, then in the 3rd and 4th columns of tasks (2nd and 3rd in parallel) I retrieve the XComs I wrote in column two. The problem is that since 6 tasks are writing at the same time, the XCom that sticks in the end is the last one that was written.
Is there a way for me to make these XComs work in parallel? For now, because of this problem, I have to run all of these tasks linearly so that the XComs work correctly...
Image can be seen here:
https://i.stack.imgur.com/aXFWz.png
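For context, XComs are stored per dag_id, task_id, and execution date, so a downstream task can pull the value pushed by one specific upstream task rather than whichever value was written last. A rough sketch of that pattern (the compute_id() helper and the key name are made up):
def write_id(**context):
    # each parallel task pushes under its own task_id, so values don't collide
    context["ti"].xcom_push(key="report_id", value=compute_id())

def read_id(upstream_task_id, **context):
    # pull explicitly from the matching upstream task in the same branch
    return context["ti"].xcom_pull(task_ids=upstream_task_id, key="report_id")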

Way to designate certain set of airflow tasks to run before others (order invariant)?

Have an airflow (v1.10.5) dag that looks like...
Is there a way to specify that all of the blue tasks should complete before the scheduler moves on to any downstream tasks (currently the scheduler sometimes goes down an entire branch of tasks before doing the next blue task)?
I want to avoid just putting them in sequence (using trigger rule TriggerRule.ALL_DONE) because they do not actually have any logical order in which they need to be done (other than that they all need to be done before any other downstream tasks in any branch).
Anyone know of any way to do this (like some kind of "priority" pool for tasks)? Other workaround suggestions?
Asked this question on the airflow mailing list and this is the result...
# cross_downstream is available from airflow.utils.helpers in Airflow 1.10.x
from airflow.utils.helpers import cross_downstream

# white is the single upstream task; blue, green and yellow are the task groups
blue = [blue_a, blue_b, blue_c]
green = [green_a, green_b, green_c]
yellow = [yellow_a, yellow_b]
cross_downstream(from_tasks=[white], to_tasks=blue)
cross_downstream(from_tasks=blue, to_tasks=green)
cross_downstream(from_tasks=green, to_tasks=yellow)
This should create the required network of dependencies between tasks.
A visualization is available here:
https://imgur.com/a/2jqyqQO
This is the easiest solution and, in my opinion, the correct one.
However, if you don't want the dependencies, then you can create a new scheduling rule by editing the BaseOperator.deps property.
The docs for this DAG-building helper function can be found here: https://airflow.apache.org/docs/stable/concepts.html#relationship-helper
Which was a useful solution, but...
One thing about my case is that the next tasks (greens) in each branch should only run if the blue task in that same branch completes successfully (they should not care about the success/failure status of the other blue tasks, only that they have all been run). Thus I don't think the ALL_DONE trigger rule will help the greens, and ALL_SUCCESS would be too strict.
Any ideas for such a thing?
After some more thought, here is my workaround...

Airflow: Concurrency Depth first, rather than breadth first?

In Airflow, the default configuration seems to be to queue up tasks in parallel across days, from one day to the next.
However, if I spin this process up across, say, two years, then the Airflow DAG will churn through the preliminary processes first, across all days, rather than taking, say, 4 days forward from start to finish concurrently.
How do I toggle Airflow to execute tasks according to a depth-first paradigm rather than a breadth-first paradigm?
I have come across a similar situation. I used the following trick to achieve that depth-first behaviour.
Assign all tasks of your DAG to a single pool (with a limited number of slots, say 20-30)
Set weight_rule=upstream on all of the above tasks
Explanation
The upstream weight_rule reverses the prioritization of tasks based on their position across the breadth of the workflow, so that all downstream tasks have a higher priority than upstream tasks.
This ensures that whatever branches are launched run to completion before the next branch is picked up, thereby achieving the depth-first behaviour.
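A sketch of those two settings together, assuming Airflow 2-style imports and a pre-created pool whose name and slot count are placeholders (e.g. airflow pools set depth_first_pool 20 "limit DAG-wide parallelism"):
from datetime import datetime
from airflow import DAG
from airflow.operators.dummy import DummyOperator

default_args = {
    "pool": "depth_first_pool",   # every task draws from the same limited pool
    "weight_rule": "upstream",    # downstream tasks get a higher priority_weight
}

with DAG("depth_first_example", start_date=datetime(2021, 1, 1),
         schedule_interval="@daily", default_args=default_args) as dag:
    extract = DummyOperator(task_id="extract")
    transform = DummyOperator(task_id="transform")
    load = DummyOperator(task_id="load")
    extract >> transform >> load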
Try toggling the parallelism and max_active_runs parameters in your airflow.cfg and the concurrency parameter on your DAGs.

How to run Airflow DAG for specific number of times?

How do I run an Airflow DAG for a specified number of times?
I tried using TriggerDagRunOperator, and this operator works for me.
In the callable function we can check states and decide whether to continue or not.
However, the current count and states need to be maintained.
Using the above approach I am able to repeat the DAG run.
I need an expert opinion: is there any other, more robust way to run an Airflow DAG for X number of times?
Thanks.
I'm afraid that Airflow is ENTIRELY about time-based scheduling.
You can set a schedule to None and then use the API to trigger runs, but you'd be doing that externally, and thus maintaining the counts and states that determine when and why to trigger externally.
When you say that your DAG may have 5 tasks which you want to run 10 times and a run takes 2 hours and you cannot schedule it based on time, this is confusing. We have no idea what the significance of 2 hours is to you, or why it must be 10 runs, nor why you cannot schedule it to run those 5 tasks once a day. With a simple daily schedule it would run once a day at approximately the same time, and it won't matter that it takes a little longer than 2 hours on any given day. Right?
You could set the start_date to 11 days ago (a fixed date though, don't set it dynamically), and the end_date to today (also fixed), then add a daily schedule_interval and a max_active_runs of 1, and you'll get exactly 10 runs; it'll run them back to back without overlapping while changing the execution_date accordingly, then stop. Or you could just use airflow backfill with an unscheduled DAG (schedule_interval=None) and a range of execution datetimes.
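A sketch of that fixed-window setup (the DAG name and dates are placeholders; per the answer above, a fixed 11-day window with a daily schedule and max_active_runs=1 yields 10 back-to-back runs):
from datetime import datetime
from airflow import DAG

dag = DAG(
    "run_exactly_ten_times",          # placeholder name
    start_date=datetime(2021, 1, 1),  # fixed date, 11 days before end_date
    end_date=datetime(2021, 1, 11),   # fixed date
    schedule_interval="@daily",
    max_active_runs=1,                # runs execute back to back, never overlapping
    catchup=True,
)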
Do you mean that you want it to run every 2 hours continuously, but sometimes it will run longer and you don't want runs to overlap? Well, you definitely can schedule it to run every 2 hours (0 0/2 * * *) and set max_active_runs to 1, so that if the prior run hasn't finished, the next run will wait and then kick off when the prior one has completed. See the last bullet in https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled.
If you want your DAG to run exactly every 2 hours on the dot [give or take some scheduler lag, yes that's a thing] and to leave the prior run going, that's mostly the default behavior, but you could add depends_on_past to some of the important tasks that themselves shouldn't be run concurrently (like creating, inserting to, or dropping a temp table), or use a pool with a single slot.
There isn't any feature to kill the prior run if your next schedule is ready to start. It might be possible to skip the current run if the prior one hasn't completed yet, but I forget how that's done exactly.
That's basically most of your options there. You could also create manual dag_runs for an unscheduled DAG, creating 10 at a time when you feel like it (using the UI or CLI instead of the API, but the API might be easier).
Do any of these suggestions address your concerns? Because it's not clear why you want a fixed number of runs, how frequently, or with what schedule and conditions, it's difficult to provide specific recommendations.
This functionality isn't natively supported by Airflow,
but by exploiting the meta-db we can cook up this functionality ourselves:
write a custom operator / PythonOperator that,
before running the actual computation, checks whether 'n' runs of the task (TaskInstance table) already exist in the meta-db (refer to task_command.py for help),
and if they do, just skips the task (raise AirflowSkipException, reference); a rough sketch follows below.
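A rough sketch of that check, used as a PythonOperator callable (the run limit and function name are assumptions; it counts successful TaskInstances of the current task and skips once the limit is reached):
from airflow import settings
from airflow.exceptions import AirflowSkipException
from airflow.models import TaskInstance
from airflow.utils.state import State

MAX_RUNS = 10  # desired number of runs; illustrative only

def run_at_most_n_times(**context):
    ti = context["ti"]
    session = settings.Session()
    try:
        successful_runs = (
            session.query(TaskInstance)
            .filter(
                TaskInstance.dag_id == ti.dag_id,
                TaskInstance.task_id == ti.task_id,
                TaskInstance.state == State.SUCCESS,
            )
            .count()
        )
    finally:
        session.close()
    if successful_runs >= MAX_RUNS:
        raise AirflowSkipException(f"Already ran {successful_runs} times; skipping")
    # ... the actual computation goes here ...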
This excellent article can be used for inspiration: Use apache airflow to run task exactly once
Note
The downside of this approach is that it assumes the historical runs of the task (TaskInstances) will be preserved forever (and correctly);
in practice, though, I've often found task_instances to be missing (we have catchup set to False).
Furthermore, on large Airflow deployments one might need to set up routine cleanup of the meta-db, which would make this approach impossible.
