I followed the docs and created a Slack alert function:
It does work and I get notifications in the channel, but the task name and the link to the log refer to another task, not to the one that failed.
It gets the context of the upstream task, but not the failed task itself:
I tried different operators and hooks, but got the same result.
If anyone could help, I would really appreciate it.
Thank you!
The goal of the on_failure_callback argument at the DAG level is to run the callback once when the DagRun fails, so we provide the context of the DagRun, which is identical between the task instances; for that reason we don't care which task instance's context we provide (I think we provide the context of the last task defined in the DAG, regardless of its state).
If you want to run the callback for each failed task instance, you can remove the on_failure_callback argument from the DAG and add it to the default args instead: default_args=dict(on_failure_callback=task_fail_slack_alert), as in the sketch below.
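A minimal sketch of what that might look like (assuming Airflow 2.x, a task_fail_slack_alert callback already defined as in the question above, and a placeholder dag_id, schedule, and task):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def _work():
    # Deliberately fail so the per-task callback fires
    raise ValueError("boom")


with DAG(
    dag_id="example_per_task_alert",  # placeholder dag_id
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    # Inherited by every task, so the callback runs for each failed task instance
    default_args=dict(on_failure_callback=task_fail_slack_alert),
) as dag:
    PythonOperator(task_id="some_task", python_callable=_work)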
Related
I'm trying to implement a way to get notified when my dag fails.
I tried to use email_on_failure and a webhook method (https://code.mendhak.com/Airflow-MS-Teams-Operator/).
But for both of them, I got a notification for every task that failed.
Is there a way to get notified only if the whole dag doesn't work?
I really appreciate any help you can provide.
You can choose to set on_failure_callback at the operator level or at the DAG level.
On Dag - A function to be called when a DagRun of this dag fails.
On Operator - a function to be called when a task instance of this task fails.
In your case you need to set on_failure_callback in your DAG object:
from airflow import DAG

dag = DAG(
    dag_id=dag_id,
    # called once when a DagRun of this DAG fails
    on_failure_callback=func_to_execute,
)
I have an Airflow DAG that is triggered externally via the CLI.
I have a requirement to change the order of execution of tasks based on a Boolean parameter that I would be getting from the CLI.
How do I achieve this?
I understand dag_run.conf can only be used in a template field of an operator.
Thanks in advance.
You cannot change task dependencies with a runtime parameter.
However, you can pass a runtime parameter (via dag_run.conf) and, depending on its value, have tasks executed or skipped. For that you need to place operators in your workflow that can handle this logic, for example: ShortCircuitOperator, BranchPythonOperator.
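A minimal sketch of the BranchPythonOperator approach (assuming Airflow 2.x; the dag_id, task ids, and the run_path_a conf key are placeholders):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import BranchPythonOperator, PythonOperator


def _choose_branch(**context):
    # Read the Boolean passed on the CLI, e.g.
    #   airflow dags trigger branch_on_conf --conf '{"run_path_a": true}'
    run_path_a = context["dag_run"].conf.get("run_path_a", False)
    return "path_a" if run_path_a else "path_b"


with DAG(
    dag_id="branch_on_conf",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
) as dag:
    branch = BranchPythonOperator(task_id="branch", python_callable=_choose_branch)
    path_a = PythonOperator(task_id="path_a", python_callable=lambda: print("running path A"))
    path_b = PythonOperator(task_id="path_b", python_callable=lambda: print("running path B"))
    # The branch task decides at runtime which downstream task runs; the other is skipped
    branch >> [path_a, path_b]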
I have a task in Airflow that uses the Bash operator and runs continuously.
Is there any way to mark the task instance as success before its next scheduled run? I know how to mark it as success in the API.
Thank you for your replies
If you just want to call the API automatically to mark the previous DagRun as success, you can use SimpleHttpOperator as the first task of your DAG. This operator can call the Airflow REST API to request that the previous DagRun be marked as success.
https://airflow.apache.org/docs/apache-airflow-providers-http/stable/operators.html#simplehttpoperator
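A minimal sketch of that idea (assumptions: Airflow 2.x with the stable REST API enabled, an HTTP connection named airflow_api pointing at the webserver with credentials, and placeholder dag_id and run_id; how you obtain the previous run's run_id depends on your setup):

import json

from airflow.providers.http.operators.http import SimpleHttpOperator

mark_previous_run_success = SimpleHttpOperator(
    task_id="mark_previous_run_success",
    http_conn_id="airflow_api",  # assumed connection to the Airflow webserver
    method="PATCH",
    # PATCH /api/v1/dags/{dag_id}/dagRuns/{dag_run_id} updates the run's state
    endpoint="api/v1/dags/my_dag/dagRuns/{{ dag_run.conf['previous_run_id'] }}",
    headers={"Content-Type": "application/json"},
    data=json.dumps({"state": "success"}),
)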
Is it possible to somehow extract the task instance objects for upstream tasks from the context passed to the python_callable in PythonOperator? The use case is that I would like to check the status of two tasks immediately after branching, to see which one ran and which one was skipped, so that I can query the correct task for its return value via XCom.
Thanks
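A minimal sketch of one way this is often done (an assumption, not confirmed in this thread): the context passed to the python_callable includes the current DagRun, and DagRun.get_task_instance lets you inspect the state of specific upstream tasks; the task ids below are placeholders:

def pick_branch_result(**context):
    # Look up the two candidate upstream task instances on the current DagRun
    dag_run = context["dag_run"]
    ti_a = dag_run.get_task_instance("branch_a_task")
    ti_b = dag_run.get_task_instance("branch_b_task")
    # Pull the XCom return value from whichever task actually ran
    ran = ti_a if ti_a is not None and ti_a.state == "success" else ti_b
    return context["ti"].xcom_pull(task_ids=ran.task_id)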
I have a DAG defined that contains a number of tasks, the last of which is only run if any of the previous tasks fail. This task simply posts to a Slack channel that the DAG run experienced errors.
What I would really like is if the message sent to the Slack channel contained the actual error that is logged in the task logs, to provide immediate context to the error and perhaps save Ops from having to dig through the logs.
Is this at all possible?
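For what it's worth, a minimal sketch of the usual approach (an assumption, not from this thread, and it uses an on_failure_callback rather than a trailing task): the context handed to the callback includes the raised exception and the failing task instance, whose log_url points at its log page; the Slack delivery itself is left to whatever webhook or hook you already use:

def build_failure_message(context):
    ti = context["task_instance"]
    error = context.get("exception")  # the exception that caused the failure
    return (
        f"Task {ti.task_id} in DAG {ti.dag_id} failed.\n"
        f"Error: {error}\n"
        f"Log: {ti.log_url}"
    )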