Airflow: DAG marked as success although one task failed

A similar question has been asked before, but as far as I can tell it can't be explained by that answer, since I don't have any tasks with an "all_done" trigger rule.
As seen in the screenshot above, a task has failed but the DAG is still marked as a success. The trigger rule for all tasks but one is the default one.
Any idea how to mark the whole DAG as failed in such a case?

Related

Finding out whether a DAG execution is a catchup or a regularly scheduled one

I have an Airflow pipeline that starts with a FileSensor that may perform a number of retries (which makes sense because the producing process sometimes takes longer, and sometimes simply fails).
However, when I restart the pipeline, since it runs in catchup mode, the retries in the FileSensor become spurious: if the file isn't there for a previous day, it won't materialize anymore.
Therefore my question: is it possible to make the behavior of a DAG run contingent on whether it is currently a catch-up run or a regularly scheduled one?
My apologies if this is a duplicate question: it seems a rather basic problem, but I couldn't find previous questions or documentation.
The solution is rather simple:
Set a LatestOnlyOperator upstream of the FileSensor.
Set an operator of whatever type you need downstream of the FileSensor, with its trigger rule set to TriggerRule.ALL_DONE.
Both the skipped and success states count as "done" states, while an error state doesn't. Hence, in a non-catch-up run the FileSensor needs to succeed before the downstream task can run, while in a catch-up run the downstream task starts right away once the FileSensor is skipped.
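For illustration, a minimal sketch of that layout (assuming Airflow 2.x import paths; the dag_id, task ids, file path, and schedule are made up for the example):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.latest_only import LatestOnlyOperator
    from airflow.sensors.filesystem import FileSensor
    from airflow.utils.trigger_rule import TriggerRule

    with DAG(
        dag_id="file_pipeline",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=True,
    ) as dag:
        latest_only = LatestOnlyOperator(task_id="latest_only")

        wait_for_file = FileSensor(
            task_id="wait_for_file",
            filepath="/data/incoming/report.csv",  # hypothetical path
            poke_interval=60,
        )

        process_file = BashOperator(
            task_id="process_file",
            bash_command="echo processing",
            # Runs whether the sensor succeeded (regular run) or was skipped (catch-up run).
            trigger_rule=TriggerRule.ALL_DONE,
        )

        # In a catch-up run, latest_only skips wait_for_file, but process_file still
        # runs because a skipped upstream counts as "done" for ALL_DONE.
        latest_only >> wait_for_file >> process_file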

Airflow upstream task in "none status" status, but downstream tasks executed

As seen in the screenshot:
Any idea why this would occur, and how would I go about troubleshooting it?
--
Update:
It's "none status" not "queued" as I originally interpreted
The DAG run occurred on 3/8 and last relevant commit was on 3/1. But I'm having trouble finding the same DAG run....will keep investigating
It's not Queued status. It's None status.
This can happen in one of the following cases:
The task drop_staging_table_if_exists was added after create_staging_table started to run.
The task drop_staging_table_if_exists used to have a different task_id in the past.
The task drop_staging_table_if_exists was somewhere else in the workflow and you changed the dependencies after the DAG run started.
Note that Airflow currently doesn't support DAG versioning (it will be supported in future versions once AIP-36 DAG Versioning is completed). This means that Airflow constantly reloads the DAG structure, so changes that you make are also reflected in past runs. This is by design, and it's very useful for cases where you want to backfill past runs.
Either way, if you start a new run or clear this specific run, the issue you are facing will be resolved.
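To make the first case concrete, a hypothetical sketch (the operators and callables are invented for illustration; only the task ids come from the run discussed above):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    with DAG(
        dag_id="staging_dag",
        start_date=datetime(2021, 3, 1),
        schedule_interval="@daily",
    ) as dag:
        # Only this task existed when the 3/8 run started.
        create_staging_table = PythonOperator(
            task_id="create_staging_table",
            python_callable=lambda: print("create staging table"),
        )

        # Added (or re-wired) while that run was already in progress: for the old
        # run this task has no state at all ("none status"), even though its
        # downstream task already executed.
        drop_staging_table_if_exists = PythonOperator(
            task_id="drop_staging_table_if_exists",
            python_callable=lambda: print("drop staging table if exists"),
        )

        drop_staging_table_if_exists >> create_staging_table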

Airflow subdag shows success while its tasks are skipped

I have a subdag that uses a sensor operator with soft_fail=True, in order to skip the task instead of failing it.
It works well, except that the status of the whole subdag is shown as "succeeded" instead of "skipped", which could be misleading when monitoring the flow, as I wouldn't know whether the file was found or the task was simply skipped. Any thoughts on how to make the subdag status inherit the subtasks' status?
A "skipped" status isn't a failure, though: you requested that a task not be executed, and Airflow did just that. Also think about what the opposite would be: a user surprised that their run failed just because Airflow did as they asked and skipped all the tasks.
This issue regarding the skipped status has been covered before. For example, it was reported in 1.8.0 and fixed in 1.8.1, but in later versions this fix was not propagated.
You could open an issue and request the change by selecting "Reference in new issue" from the three-dots menu at that link.
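For context, a minimal sketch of the setup described in the question (Airflow 2.x-style imports; the dag/task ids, file path, and timeout are made up):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.subdag import SubDagOperator
    from airflow.sensors.filesystem import FileSensor

    default_args = {"start_date": datetime(2021, 1, 1)}


    def build_subdag(parent_dag_id, child_id, args):
        """Subdag containing a sensor that skips itself instead of failing."""
        subdag = DAG(
            dag_id=f"{parent_dag_id}.{child_id}",
            default_args=args,
            schedule_interval="@daily",  # kept identical to the parent's schedule
        )
        FileSensor(
            task_id="wait_for_file",
            filepath="/data/incoming/report.csv",  # hypothetical path
            timeout=600,
            soft_fail=True,  # on timeout the sensor is marked skipped, not failed
            dag=subdag,
        )
        return subdag


    with DAG("parent_dag", default_args=default_args, schedule_interval="@daily") as dag:
        # Even when the sensor inside is skipped, the SubDagOperator itself
        # reports success, which is the behaviour discussed above.
        wait = SubDagOperator(
            task_id="wait_subdag",
            subdag=build_subdag("parent_dag", "wait_subdag", default_args),
        )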

Triggering an Airflow DAG from terminal always keep running state

I am trying to use airflow trigger_dag dag_id to trigger my DAG, but it just shows the running state and doesn't do anything more.
I have searched many similar questions, but people just say the DAG is paused. The problem is that my DAG is unpaused, yet it stays in the running state.
Note: I can use one DAG to trigger another one in the Web UI. But it doesn't work from the command line.
Please see the snapshot below.
I had the same issue many times. The state of the task is not running, and it is not queued either; it's stuck after we 'clear'. Sometimes I found the task going into the Shutdown state before getting stuck, and after a long time the instance fails while the task status stays white. I have worked around it in several ways; I can't say the exact reason or solution, but try one of these:
Try the trigger_dag command again with the same execution date and time, instead of the clear option.
Try a backfill; it will run only the unsuccessful instances.
Or try a different time within the same interval; it will create another, fresh instance that doesn't have the issue.

How can an Airflow DAG fail if none of the tasks have failed?

We have a long DAG (~60 tasks), and quite frequently we see a DAG run for this DAG in a failed state. When looking at the tasks in the DAG, they are all in a state of either success or null (i.e. not even queued yet). It appears that the DAG has gotten into a failed state prematurely.
Under what circumstances can this happen, and what should people do to protect against it?
If it's helpful for context, we're running Airflow with the Celery executor, currently on version 1.9.0. If we set the state of the DAG in question back to running, then all the tasks (and the DAG as a whole) complete successfully.
The only way a DAG can fail without a task failing is through something not connected to any of the tasks. Besides manual intervention (check that nobody on the team is manually failing the DAGs!), the only thing that fails DAGs without considering task states is the timeout checker.
This runs inside the scheduler while it considers whether it needs to schedule a new dag_run. If it finds another active run that has been running longer than the dagrun_timeout argument of the DAG, that run gets killed. As far as I can see this isn't logged anywhere, so the best way to diagnose it is to look at the time the DAG started and the time the last task finished, and check whether the difference is roughly the length of dagrun_timeout.
You can see the code in action here: https://github.com/apache/incubator-airflow/blob/e9f3fdc52cb53f3ac3e9721e5128d17d1c5c418c/airflow/jobs.py#L800
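For reference, a minimal sketch of where dagrun_timeout is set (using the Airflow 1.x-era import path matching the version in the question; the dag_id, schedule, and timeout value are made up):

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.dummy_operator import DummyOperator

    # When the scheduler considers creating a new run and finds an existing active
    # run older than dagrun_timeout, that run can be marked failed even though no
    # individual task has failed.
    dag = DAG(
        dag_id="long_pipeline",
        start_date=datetime(2018, 1, 1),
        schedule_interval="@daily",
        dagrun_timeout=timedelta(hours=4),
    )

    start = DummyOperator(task_id="start", dag=dag)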
