I'm using the Airflow GUI.
I put these in default args:
'depends_on_past': False,
'retries': 3,
'cathup':True,
'start_date': datetime(2020,1,1),
in DAG args:
default_args=default_args,
start_date=datetime(2020,1,1),
schedule_interval='12 11 * * *'
catchup=True
But still, when 11:12 comes, it only runs for today and nothing else. I'd expect it to start backfilling from Jan 1 automatically, but it does not. What am I doing wrong, or what must I do to get the backfill to trigger by itself?
Yes, you are right; it should start backfilling from datetime(2020, 1, 1).
Maybe you could set catchup in airflow.cfg, so it becomes the default.
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
[scheduler]
catchup_by_default = True
You can also set catchup as a DAG parameter, just like schedule_interval, e.g.:
dag = DAG("test", default_args=default_args, schedule_interval="0 11 * * 1", catchup=True)
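For reference, a minimal sketch of a backfilling DAG built from the settings above (the dag_id and the dummy task are illustrative):

from datetime import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

default_args = {
    'depends_on_past': False,
    'retries': 3,
}

dag = DAG(
    'catchup_example',  # illustrative dag_id
    default_args=default_args,
    start_date=datetime(2020, 1, 1),
    schedule_interval='12 11 * * *',
    catchup=True,  # the scheduler creates one run per missed interval since start_date
)

DummyOperator(task_id='noop', dag=dag)

With catchup=True on the DAG object (and catchup_by_default left at True in airflow.cfg), the scheduler should create a DAG run for every daily interval between 2020-01-01 and the current date, not just the latest one.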
I am trying to use croniter to schedule a monthly job to run on every 2nd Tuesday. I tried the below.
This is my partial script.
from croniter import croniter
cron = croniter("00 21 * * 2#2")
dag = DAG('recurring_job', catchup=False, default_args=default_args, schedule_interval=cron)
But it didn't work. Could you please help me or point me to some reference?
You can just specify the cron string directly:
dag = DAG('recurring_job', schedule_interval="00 21 * * 2#2")
Airflow will translate that using croniter internally.
Airflow also supports a datetime.timedelta object for schedule_interval; see Scheduling & Triggers:
Each DAG may or may not have a schedule, which informs how DAG Runs are created. schedule_interval is defined as a DAG argument, and receives preferably a cron expression as a str, or a datetime.timedelta object.
Together with start_date, this should work without using croniter directly:
dag = DAG('recurring_job', catchup=False, default_args=default_args, schedule_interval=datetime.timedelta(weeks=2), start_date = datetime(2020,11,3,21))
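Spelled out as a self-contained sketch (the imports and the owner value below are assumptions, not part of the original answer):

import datetime
from airflow import DAG

default_args = {'owner': 'airflow'}  # illustrative

dag = DAG(
    'recurring_job',
    catchup=False,
    default_args=default_args,
    # one run every two weeks, measured from start_date
    schedule_interval=datetime.timedelta(weeks=2),
    start_date=datetime.datetime(2020, 11, 3, 21),
)

Note that with a timedelta schedule the cadence is anchored to start_date, so pick a start_date that falls on the weekday and time you want the runs to land on.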
I have a requirement to schedule an Airflow job every alternate Friday. However, the problem is I am not able to figure out how to write a schedule for this.
I don't want to have multiple jobs for this.
I tried this
'0 0 1-7,15-21 * 5'
However, it's not working; it runs on every day from the 1st to the 7th and from the 15th to the 21st.
From shubham's answer I realized that we can have a PythonOperator which can skip the task for us, and I tried to implement that solution. However, it doesn't seem to work.
As testing this over a 2-week period would be too difficult, this is what I did:
I schedule the DAG to run every 5 minutes,
and I am writing a PythonOperator to skip alternate runs (pretty similar to what I am trying to do with alternate Fridays).
DAG:
import airflow.utils.dates
from datetime import datetime
from dateutil.relativedelta import relativedelta

from airflow import DAG
from airflow.exceptions import AirflowSkipException
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator

args = {
    'owner': 'Gaurang Shah',
    'retries': 0,
    'start_date': airflow.utils.dates.days_ago(1),
}

dag = DAG(
    dag_id='test_dag',
    default_args=args,
    catchup=False,
    schedule_interval='*/5 * * * *',
    max_active_runs=1
)

dummy_op = DummyOperator(task_id='dummy', dag=dag)

def _check_date(execution_date, **context):
    # skip this run if its execution_date falls within the last 10 minutes
    min_date = datetime.now() - relativedelta(minutes=10)
    print(context)
    print(context.get("prev_execution_date"))
    print(execution_date)
    print(datetime.now())
    print(min_date)
    if execution_date > min_date:
        raise AirflowSkipException(f"No data available on this execution_date ({execution_date}).")

check_date = PythonOperator(
    task_id="check_if_min_date",
    python_callable=_check_date,
    provide_context=True,
    dag=dag,
)
I doubt that a single crontab expression can solve this.
Using Airflow's tricks, the solution is much more straightforward:
schedule your DAG every Friday: 0 0 * * FRI, and
on alternate Fridays (based on your business logic), skip the DAG by raising AirflowSkipException.
Here you'll have to let your DAG begin with a dedicated skip_decider task that lets your DAG run or skip on alternate Fridays by
conditionally raising AirflowSkipException (to skip the DAG), or
doing nothing, to let the DAG run.
You can also leverage
ShortCircuitOperator
BranchPythonOperator
but IMO, AirflowSkipException is the cleanest solution (see the sketch after the reference below).
Reference: How to define a DAG that schedules a monthly job together with a daily job?
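A minimal sketch of that skip_decider pattern (the ISO-week parity check stands in for your business logic, and the dag_id/task names are illustrative):

from datetime import datetime

from airflow import DAG
from airflow.exceptions import AirflowSkipException
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator

dag = DAG(
    dag_id='alternate_friday_dag',
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 0 * * FRI',  # every Friday
    catchup=False,
)

def _skip_decider(execution_date, **context):
    # run only on even ISO weeks; odd ISO weeks are skipped
    if execution_date.isocalendar()[1] % 2 != 0:
        raise AirflowSkipException("Skipping this Friday (odd ISO week).")

skip_decider = PythonOperator(
    task_id='skip_decider',
    python_callable=_skip_decider,
    provide_context=True,
    dag=dag,
)

actual_work = DummyOperator(task_id='actual_work', dag=dag)

skip_decider >> actual_work

When skip_decider raises AirflowSkipException, all downstream tasks are skipped too (with the default all_success trigger rule), so the whole run shows up as skipped on the alternate Fridays.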
Depending on your implementation, you can use the hash. This worked in my Airflow schedules on version 1.10:
Hash (#)
'#' is allowed for the day-of-week field, and must be followed by a number between one and five. It allows specifying constructs such as "the second Friday" of a given month. For example, entering "5#3" in the day-of-week field corresponds to the third Friday of every month. (Reference)
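As a sketch, assuming your Airflow/croniter version accepts the hash extension, "run on the second Friday of every month" would look like this (the dag_id is illustrative):

from datetime import datetime
from airflow import DAG

dag = DAG(
    dag_id='second_friday_dag',
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 0 * * 5#2',  # second Friday of each month
    catchup=False,
)

Keep in mind that 5#2 means "the second Friday of each month", which is close to, but not exactly the same as, "every alternate Friday".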
You can use a timedelta as shown below; combine it with start_date to schedule your job bi-weekly.
from datetime import datetime, timedelta
from airflow import DAG

dag = DAG(
    dag_id='test_dag',
    default_args=args,  # the default_args dict defined elsewhere in your DAG file
    catchup=False,
    start_date=datetime(2021, 3, 26),
    schedule_interval=timedelta(days=14),
    max_active_runs=1
)
Is there a way to persist an XCom value during re-runs of a DAG step (after clearing the status)?
Below is a simplified version of what I'm trying to accomplish, namely when a DAG step's status is cleared and the step re-run, I would like to be able to load the XCom value pushed on the previous run. However, even though I can see the value in the XCom interface, the value does not get pulled. I've looked through the source code for the xcom_pull() method but can't figure out where it is being filtered out.
The functionality I'm trying to achieve is to maintain some amount of state between failed runs of a DAG. In the example, this would mean that 1 is added to the stored value every time the DAG step is cleared and rerun.
from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def test_step(**kwargs):
    ti = kwargs.get('task_instance')
    value = ti.xcom_pull(key='key', include_prior_dates=True)
    if value is None:
        value = 0
    print(f'BEFORE VALUE: {value}')
    value += 1
    print(f'AFTER VALUE: {value}')
    ti.xcom_push(key='key', value=value)
    # Simulating a failure
    raise Exception

default_args = {
    'owner': 'Testing',
    'depends_on_past': False,
    'email': ['test@test.com'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 0,
}

dag = DAG(
    'test_dag',
    default_args=default_args,
    schedule_interval=None,
    start_date=datetime(2020, 4, 9),
)

t1 = PythonOperator(
    task_id='test_step',
    provide_context=True,
    python_callable=test_step,
    dag=dag,
)

t1
Anytime a task is about to run, its XCom is cleared for the current execution date (https://github.com/apache/airflow/blob/1.10.10/airflow/models/taskinstance.py#L960). This is why you won't ever pull values from previous task tries. Use of include_prior_dates=True only pulls from previous execution dates, but not previous runs of the same execution date.
One possible solution is to put a DummyOperator task upstream of your test_step task, called say xcom_store.test_step. Then use airflow.models.XCom.set() directly in test_step to write your XCom values into the xcom_store.test_step task (see xcom_push() for reference). When you need to pull, just pull as you usually would, but from the dummy task instead, i.e. ti.xcom_pull(task_ids='xcom_store.test_step', key='key'). Definitely not ideal and could lead to some confusion, but if you standardize it and build some helpers around it, it could be alright.
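A rough sketch of that workaround, assuming the Airflow 1.10 signature of airflow.models.XCom.set() and reusing the dag/t1 names from the example above:

from airflow.models import XCom
from airflow.operators.dummy_operator import DummyOperator

# dummy task whose task_id is only used as an XCom "namespace"
xcom_store = DummyOperator(task_id='xcom_store.test_step', dag=dag)
xcom_store >> t1

def test_step(**kwargs):
    ti = kwargs.get('task_instance')
    # pull from the dummy task; its XCom is not cleared when test_step re-runs
    value = ti.xcom_pull(task_ids='xcom_store.test_step', key='key') or 0
    value += 1
    # write back against the dummy task instead of pushing to self
    XCom.set(
        key='key',
        value=value,
        task_id='xcom_store.test_step',
        dag_id=ti.dag_id,
        execution_date=ti.execution_date,
    )
    raise Exception  # still simulating a failure, as in the original example

Because the XCom row belongs to the dummy task, clearing and re-running test_step no longer wipes it, so the counter keeps incrementing across re-runs of the same execution date.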
I am trying to schedule a DAG to run every x seconds. I put the start time as a past date with catchup=False and the end time a few seconds into the future.
Although the DAG starts as expected, it does not end and goes on forever.
The DAG ends if I use an absolute end time like datetime(2019, 9, 26), but not with datetime.now() + timedelta(seconds=100).
from datetime import datetime, timedelta
from airflow import DAG

start_date = datetime(2019, 1, 1)
end_date = datetime.now() + timedelta(seconds=200)

default_args = {
    "owner": "airflow",
    "depends_on_past": True,
    "start_date": start_date,
    "end_date": end_date
}

dag = DAG("file_dag", catchup=False, default_args=default_args, schedule_interval=timedelta(seconds=20), max_active_runs=1)
I expect the DAG to stop executing after maybe 10 or 11 runs, depending on when it started, but it keeps executing even after 20 runs and does not seem to stop.
You cannot / must not use datetime.now() in your start_date and end_date expressions.
The behaviour you are observing follows directly from how Airflow parses DAG files:
Recall that DAG-definition files are parsed continuously in the background. Section "[6] Restrict the number of Airflow variables in your DAG" of "Airflow: Lesser Known Tips, Tricks and Best Practices" says:
Your DAG files are parsed every X seconds
On each parsing cycle of your DAG-definition file, end_date gets re-evaluated to 200 seconds after the current time. Since parsing of DAG-definition files goes on forever, the end_date keeps shifting forward and you get a never-ending DAG.
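A minimal sketch of the fix (the concrete timestamps are illustrative): pin end_date to an absolute wall-clock time instead of deriving it from datetime.now():

from datetime import datetime, timedelta
from airflow import DAG

start_date = datetime(2019, 1, 1)
# absolute end date: evaluates to the same value on every parse of the file
end_date = datetime(2019, 9, 26, 12, 30)

default_args = {
    "owner": "airflow",
    "depends_on_past": True,
    "start_date": start_date,
    "end_date": end_date
}

dag = DAG("file_dag", catchup=False, default_args=default_args, schedule_interval=timedelta(seconds=20), max_active_runs=1)

Since every scheduler parse now evaluates to the same end_date, no new runs are created once that moment has passed.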
I have scheduled the execution of a DAG to run daily.
It works perfectly for one day.
However, each day I would like to re-execute it not only for the current day {{ ds }} but also for the previous n days (let's say n = 7).
For example, in the next execution scheduled to run on "2018-01-30" I would like Airflow not only to run the DAG using as execution date "2018-01-30", but also to re-run the DAGs for all the previous days from "2018-01-23" to "2018-01-30".
Is there an easy way to "invalidate" the previous execution so that a backfill is run automatically?
You can dynamically generate tasks in a loop and pass the offset to your operator.
Here is an example with the PythonOperator.
import airflow
from datetime import timedelta

from airflow.models import DAG
from airflow.operators.python_operator import PythonOperator

args = {
    'owner': 'airflow',
    'start_date': airflow.utils.dates.days_ago(2),
    'schedule_interval': '0 10 * * *'
}

# DAG object the generated tasks attach to (the dag_id is illustrative)
dag = DAG(dag_id='backfill_window', default_args=args)

def check_trigger(execution_date, day_offset, **kwargs):
    # each task re-processes the day `day_offset` days before the execution date
    target_date = execution_date - timedelta(days=day_offset)
    # use target_date

# one task per offset, covering the previous 7 days
for day_offset in range(1, 8):
    PythonOperator(
        task_id='task_offset_' + str(day_offset),
        python_callable=check_trigger,
        provide_context=True,
        dag=dag,
        op_kwargs={'day_offset': day_offset},
    )
Have you considered having the dag that runs once a day just run your task for the last 7 days? I imagine you’ll just have 7 tasks that each spawn a SubDAG with a different day offset from your execution date.
I think that will make debugging easier and history cleaner. I believe trying to backfill already executed tasks will involve deleting task instances or setting their states all to NONE. Then you’ll still have to trigger a backfill on those dag runs. It’ll be harder to track when things fail and just seems a bit messier.