Airflow task randomly exited with return code 1 [Local Executor / PythonOperator]

To give some context, I am using Airflow 2.3.0 on Kubernetes with the Local Executor (which may sound weird, but it works for us for now), with one pod for the webserver and two for the scheduler.
I have a DAG consisting of a single task (PythonOperator) that makes many API calls (200K) using requests.
Every 15 calls, the data is loaded into a DataFrame and stored on AWS S3 (using boto3) to reduce RAM usage.
The problem is that I can't get to the end of this task because it fails randomly (after 1, 10 or 120 minutes).
I have made more than 50 attempts with no success, and the only logs on the task are:
[2022-09-01, 14:45:44 UTC] {taskinstance.py:1159} INFO - Dependencies all met for <TaskInstance: INGESTION-DAILY-dag.extract_task scheduled__2022-08-30T00:00:00+00:00 [queued]>
[2022-09-01, 14:45:44 UTC] {taskinstance.py:1159} INFO - Dependencies all met for <TaskInstance: INGESTION-DAILY-dag.extract_task scheduled__2022-08-30T00:00:00+00:00 [queued]>
[2022-09-01, 14:45:44 UTC] {taskinstance.py:1356} INFO -
--------------------------------------------------------------------------------
[2022-09-01, 14:45:44 UTC] {taskinstance.py:1357} INFO - Starting attempt 23 of 24
[2022-09-01, 14:45:44 UTC] {taskinstance.py:1358} INFO -
--------------------------------------------------------------------------------
[2022-09-01, 14:45:44 UTC] {taskinstance.py:1377} INFO - Executing <Task(_PythonDecoratedOperator): extract_task> on 2022-08-30 00:00:00+00:00
[2022-09-01, 14:45:44 UTC] {standard_task_runner.py:52} INFO - Started process 942 to run task
[2022-09-01, 14:45:44 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'INGESTION-DAILY-dag', 'extract_task', 'scheduled__2022-08-30T00:00:00+00:00', '--job-id', '4390', '--raw', '--subdir', 'DAGS_FOLDER/dags/ingestion/daily_dag/dag.py', '--cfg-path', '/tmp/tmpwxasaq93', '--error-file', '/tmp/tmpl7t_gd8e']
[2022-09-01, 14:45:44 UTC] {standard_task_runner.py:80} INFO - Job 4390: Subtask extract_task
[2022-09-01, 14:45:45 UTC] {task_command.py:369} INFO - Running <TaskInstance: INGESTION-DAILY-dag.extract_task scheduled__2022-08-30T00:00:00+00:00 [running]> on host 10.XX.XXX.XXX
[2022-09-01, 14:48:17 UTC] {local_task_job.py:156} INFO - Task exited with return code 1
[2022-09-01, 14:48:17 UTC] {taskinstance.py:1395} INFO - Marking task as UP_FOR_RETRY. dag_id=INGESTION-DAILY-dag, task_id=extract_task, execution_date=20220830T000000, start_date=20220901T144544, end_date=20220901T144817
[2022-09-01, 14:48:17 UTC] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check
But when I go to the pod logs, I get the following message:
[2022-09-01 14:06:31,624] {local_executor.py:128} ERROR - Failed to execute task an integer is required (got type ChunkedEncodingError).
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/local_executor.py", line 124, in _execute_work_in_fork
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 377, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 183, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 241, in _run_task_by_local_task_job
run_job.run()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/local_task_job.py", line 105, in _execute
self.task_runner.start()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 41, in start
self.process = self._start_by_fork()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 125, in _start_by_fork
os._exit(return_code)
TypeError: an integer is required (got type ChunkedEncodingError)
What I find strange is that I never had this error on other DAGs (where tasks are smaller and faster). I checked during an attempt: CPU and RAM usage are stable and low.
I get the same error locally, and I also tried upgrading to 2.3.4, but nothing works.
Do you have any idea how to fix this?
Thanks a lot!
Nicolas

As #EDG956 said, this is not an error from Airflow but from the code.
I solved it by using a context manager (which by itself was not enough) and by recreating the session when the error occurs:
import requests

s = requests.Session()
while True:
    try:
        with s.get(base_url) as r:  # base_url is built earlier in the task
            response = r
        break  # success: stop retrying
    except requests.exceptions.ChunkedEncodingError:
        # the connection was dropped mid-response: recreate the session and retry
        s.close()
        s = requests.Session()
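For completeness, here is a bounded variant of the same idea (just a sketch: the helper name, attempt count and timeout are illustrative, not the exact code I run):

import requests

def get_with_retry(url, max_attempts=5, timeout=60):
    """Retry a GET with a fresh session when the server drops the
    connection mid-response (requests raises ChunkedEncodingError)."""
    session = requests.Session()
    try:
        for _ in range(max_attempts):
            try:
                response = session.get(url, timeout=timeout)
                response.raise_for_status()
                return response
            except requests.exceptions.ChunkedEncodingError:
                # recreate the session and try again
                session.close()
                session = requests.Session()
        raise RuntimeError(f"Still failing after {max_attempts} attempts: {url}")
    finally:
        session.close()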

Related

Airflow error Command exited with return code 127

I have an Ubuntu server with Airflow installed and a basic DAG deployed there. The problem appeared when I rebooted the server: when I tried to run the DAG again, Airflow shows this error:
[2022-11-29, 00:01:54 UTC] {subprocess.py:75} INFO - Running command: ['/usr/bin/bash', '-c', 'python /opt/***/dags/crypto/create_request.py 3000000 4000000']
[2022-11-29, 00:01:54 UTC] {subprocess.py:86} INFO - Output:
[2022-11-29, 00:01:54 UTC] {subprocess.py:93} INFO - /usr/bin/bash: line 1: python: command not found
[2022-11-29, 00:01:54 UTC] {subprocess.py:97} INFO - Command exited with return code 127
[2022-11-29, 00:01:54 UTC] {taskinstance.py:1851} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/airflow/operators/bash.py", line 196, in execute
raise AirflowException(
airflow.exceptions.AirflowException: Bash command failed. The command returned a non-zero exit code 127.
But before the reboot the DAG was executing normally.
The code is:
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "email": ["pumasemj@hotmail.com"],
    'params': {
        "start_block": "3000000",
        "end_block": "4000000"
    }
}

with DAG("dag_infura", default_args=default_args, description='Dag for infura ingestion', schedule_interval=None, start_date=days_ago(2), tags=['dag_infura']) as dag:
    print('executing...')
    comd = "python /opt/airflow/dags/crypto/create_request.py {{params.start_block}} {{params.end_block}}"
    t1 = BashOperator(task_id='ingest', dag=dag, depends_on_past=False, bash_command=comd)
Something I figured out is that before, when the BashOperator executed, the command looked like this:
[2022-11-10, 00:04:50 UTC] {dagbag.py:525} INFO - Filling up the DagBag from /home/ubuntu/***/dags/pipeline-***/dag_crypto_infura.py
[2022-11-10, 00:04:50 UTC] {subprocess.py:75} INFO - Running command: ['/usr/bin/bash', '-c', 'python3 /home/ubuntu/***/dags/pipeline-***/create_request.py 3000000 4000000']
and now when it is failing:
INFO - Filling up the DagBag from /home/ubuntu/***/dags/crypto/dag_crypto_infura.py
[2022-11-28, 19:34:42 UTC] {subprocess.py:75} INFO - Running command: ['/usr/bin/bash', '-c', 'python /***/dags/crypto/create_request.py 3000000 4000000']
Running command: ['/usr/bin/bash', '-c', 'python /opt/***/dags/crypto/create_request.py 3000000 4000000']
Here is the key difference, but I don't know why the executed command has now changed.
Any advice?
Regards
Edit: it works when I change the command to:
comd = "python3 /home/ubuntu/***/dags/pipeline-***/create_request.py {{params.start_block}} {{params.end_block}}"
The problem now is that I develop in Docker on Windows and deploy to Ubuntu without Docker. Any advice on how to deal with this situation? I don't want to manually replace the path every time I deploy to Ubuntu.
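One approach I'm considering (just a sketch, untested; resolving the script under the configured DAGs folder is my assumption about the layout) is to build the command from the interpreter and DAGs folder that the worker actually uses, instead of hard-coding them:

import os
import sys

from airflow.configuration import conf
from airflow.operators.bash import BashOperator

# Resolve the script relative to the DAGs folder configured on whichever
# machine parses the DAG, and use the same interpreter that runs Airflow.
script_path = os.path.join(conf.get("core", "dags_folder"), "crypto", "create_request.py")
comd = f"{sys.executable} {script_path} {{{{ params.start_block }}}} {{{{ params.end_block }}}}"

t1 = BashOperator(task_id='ingest', dag=dag, depends_on_past=False, bash_command=comd)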

Airflow 2: GoogleSheetsToGCSOperator gives Negsignal.SIGKILL

We're running Airflow in Google Composer, and we're running into difficulties with the GoogleSheetsToGCSOperator. We're using Composer 2, and therefore I understand that we have to make sure to use a connection with the correct scopes. So that's fine: I've set up a connection with those scopes, and we no longer get permission errors. However, the DAG still doesn't work; it now fails in a couple of different ways.
Most of the time, any DAG that tries to upload a Google Sheet to GCS fails with the error Negsignal.SIGKILL. For example:
--------------------------------------------------------------------------------
[2022-10-03, 15:50:55 UTC] {taskinstance.py:1251} INFO - Starting attempt 1 of 1
[2022-10-03, 15:50:55 UTC] {taskinstance.py:1252} INFO -
--------------------------------------------------------------------------------
[2022-10-03, 15:50:55 UTC] {taskinstance.py:1271} INFO - Executing <Task(GoogleSheetsToGCSOperator): upload_sheet_to_gcs_airflow_permission_test_sheet> on 2022-10-03 15:50:38.412899+00:00
[2022-10-03, 15:50:55 UTC] {standard_task_runner.py:52} INFO - Started process 529848 to run task
[2022-10-03, 15:50:55 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'test_brunel_core_2', 'upload_sheet_to_gcs_airflow_permission_test_sheet', 'manual__2022-10-03T15:50:38.412899+00:00', '--job-id', '7342', '--raw', '--subdir', 'DAGS_FOLDER/DAGs/z_airflow_testing_dags/test_brunel_2_functions.py', '--cfg-path', '/tmp/tmpyuhkixqc', '--error-file', '/tmp/tmp7p2delaz']
[2022-10-03, 15:50:55 UTC] {standard_task_runner.py:80} INFO - Job 7342: Subtask upload_sheet_to_gcs_airflow_permission_test_sheet
/opt/python3.8/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py:110: ResourceWarning: unclosed file <_io.TextIOWrapper name='/home/airflow/gcs/logs/test_brunel_core_2/upload_sheet_to_gcs_airflow_permission_test_sheet/2022-10-03T15:50:38.412899+00:00/1.log' mode='a' encoding='utf-8'>
self.handler = NonCachingFileHandler(local_loc, encoding='utf-8')
[2022-10-03, 15:50:56 UTC] {task_command.py:298} INFO - Running <TaskInstance: test_brunel_core_2.upload_sheet_to_gcs_airflow_permission_test_sheet manual__2022-10-03T15:50:38.412899+00:00 [running]> on host airflow-worker-j28mn
[2022-10-03, 15:50:56 UTC] {taskinstance.py:1448} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=process_dev_joe_m
AIRFLOW_CTX_DAG_ID=test_brunel_core_2
AIRFLOW_CTX_TASK_ID=upload_sheet_to_gcs_airflow_permission_test_sheet
AIRFLOW_CTX_EXECUTION_DATE=2022-10-03T15:50:38.412899+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-10-03T15:50:38.412899+00:00
[2022-10-03, 15:51:02 UTC] {local_task_job.py:154} INFO - Task exited with return code Negsignal.SIGKILL
[2022-10-03, 15:51:02 UTC] {taskinstance.py:1279} INFO - Marking task as FAILED. dag_id=test_brunel_core_2, task_id=upload_sheet_to_gcs_airflow_permission_test_sheet, execution_date=20221003T155038, start_date=20221003T155055, end_date=20221003T155102
The rest of the time, some random task in the DAG fails (not necessarily the step with the GoogleSheetsToGCSOperator). Sometimes a step fails with absolutely no log being generated at all; other times a log is generated but it contains no errors. Instead, the only clue is a warning:
/opt/python3.8/lib/python3.8/site-packages/airflow/utils/log/file_task_handler.py:110: ResourceWarning: unclosed file <_io.TextIOWrapper name='/home/airflow/gcs/logs/test_flakiness/create_table_JM_test_table.create/2022-10-04T09:11:58.425115+00:00/1.log' mode='a' encoding='utf-8'>
self.handler = NonCachingFileHandler(local_loc, encoding='utf-8')
The weird thing about that warning is that it is warning about the log file itself. That is, the message is written into the log file gs://europe-west1-process-dev-ai-fd1dc540-bucket/logs/test_flakiness/create_table_JM_test_table.create/2022-10-04T09:11:58.425115+00:00/1.log. Of course the file is open, you're writing to it, so why warn about it being open?
Some other facts that may or may not be relevant:
composer-2.0.25 airflow-2.2.5
When monitoring the environment, all resources (CPU, memory, etc.) seem to be fine; nothing is hitting its limits.
Our environment is configured to use between 1 and 4 workers. Only ever one worker is used, so I don't think it can be a problem with multiple workers all trying to write to the same file at once.
This is all happening in our test environment. The same DAG works absolutely fine in our prod environment. Our prod environment is running composer-1.19.3-airflow-2.2.5, and is therefore set up differently when it comes to things like Google Drive authentication scopes. So that's already two potential reasons why things are different in the prod environment.

Airflow 2 Error sending Celery task: Timeout

I am in the process of migrating our Airflow environment from version 1.10.15 to 2.3.3. I have migrated 1 DAG over to the new environment and intermittently I get an email with this error: Executor reports task instance finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
When looking at the logs, this is what I find in the scheduler logs:
[2022-08-09 07:00:08,621] {dag.py:2968} INFO - Setting next_dagrun for DAGRP-Get_Overrides to 2022-08-09T11:00:00+00:00, run_after=2022-08-09T16:00:00+00:00
[2022-08-09 07:00:08,652] {scheduler_job.py:353} INFO - 1 tasks up for execution:
<TaskInstance: DAGRP-Get_Overrides.Get_override scheduled__2022-08-08T16:00:00+00:00 [scheduled]>
[2022-08-09 07:00:08,652] {scheduler_job.py:418} INFO - DAG DAGRP-Get_Overrides has 0/3 running and queued tasks
[2022-08-09 07:00:08,652] {scheduler_job.py:504} INFO - Setting the following tasks to queued state:
<TaskInstance: DAGRP-Get_Overrides.Get_override scheduled__2022-08-08T16:00:00+00:00 [scheduled]>
[2022-08-09 07:00:08,654] {scheduler_job.py:546} INFO - Sending TaskInstanceKey(dag_id='DAGRP-Get_Overrides', task_id='Get_override', run_id='scheduled__2022-08-08T16:00:00+00:00', try_number=1, map_index=-1) to executor with priority 1 and queue default
[2022-08-09 07:00:08,654] {base_executor.py:91} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'DAGRP-Get_Overrides', 'Get_override', 'scheduled__2022-08-08T16:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/da_group/get_override.py']
[2022-08-09 07:00:12,665] {timeout.py:67} ERROR - Process timed out, PID: 1
[2022-08-09 07:00:12,667] {celery_executor.py:283} INFO - [Try 1 of 3] Task Timeout Error for Task: (TaskInstanceKey(dag_id='DAGRP-Get_Overrides', task_id='Get_override', run_id='scheduled__2022-08-08T16:00:00+00:00', try_number=1, map_index=-1)).
[2022-08-09 07:00:16,701] {timeout.py:67} ERROR - Process timed out, PID: 1
[2022-08-09 07:00:16,702] {celery_executor.py:283} INFO - [Try 2 of 3] Task Timeout Error for Task: (TaskInstanceKey(dag_id='DAGRP-Get_Overrides', task_id='Get_override', run_id='scheduled__2022-08-08T16:00:00+00:00', try_number=1, map_index=-1)).
[2022-08-09 07:00:21,704] {timeout.py:67} ERROR - Process timed out, PID: 1
[2022-08-09 07:00:21,705] {celery_executor.py:283} INFO - [Try 3 of 3] Task Timeout Error for Task: (TaskInstanceKey(dag_id='DAGRP-Get_Overrides', task_id='Get_override', run_id='scheduled__2022-08-08T16:00:00+00:00', try_number=1, map_index=-1)).
[2022-08-09 07:00:26,627] {timeout.py:67} ERROR - Process timed out, PID: 1
[2022-08-09 07:00:26,627] {celery_executor.py:294} ERROR - Error sending Celery task: Timeout, PID: 1
Celery Task ID: TaskInstanceKey(dag_id='DAGRP-Get_Overrides', task_id='Get_override', run_id='scheduled__2022-08-08T16:00:00+00:00', try_number=1, map_index=-1)
Traceback (most recent call last):
File "/opt/airflow/lib/python3.8/site-packages/kombu/utils/functional.py", line 30, in __call__
return self.__value__
AttributeError: 'ChannelPromise' object has no attribute '__value__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/airflow/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 177, in send_task_to_executor
result = task_to_run.apply_async(args=[command], queue=queue)
File "/opt/airflow/lib/python3.8/site-packages/celery/app/task.py", line 575, in apply_async
return app.send_task(
File "/opt/airflow/lib/python3.8/site-packages/celery/app/base.py", line 788, in send_task
amqp.send_task_message(P, name, message, **options)
File "/opt/airflow/lib/python3.8/site-packages/celery/app/amqp.py", line 510, in send_task_message
ret = producer.publish(
File "/opt/airflow/lib/python3.8/site-packages/kombu/messaging.py", line 177, in publish
return _publish(
File "/opt/airflow/lib/python3.8/site-packages/kombu/connection.py", line 523, in _ensured
return fun(*args, **kwargs)
File "/opt/airflow/lib/python3.8/site-packages/kombu/messaging.py", line 186, in _publish
channel = self.channel
File "/opt/airflow/lib/python3.8/site-packages/kombu/messaging.py", line 209, in _get_channel
channel = self._channel = channel()
File "/opt/airflow/lib/python3.8/site-packages/kombu/utils/functional.py", line 32, in __call__
value = self.__value__ = self.__contract__()
File "/opt/airflow/lib/python3.8/site-packages/kombu/messaging.py", line 225, in <lambda>
channel = ChannelPromise(lambda: connection.default_channel)
File "/opt/airflow/lib/python3.8/site-packages/kombu/connection.py", line 895, in default_channel
self._ensure_connection(**conn_opts)
File "/opt/airflow/lib/python3.8/site-packages/kombu/connection.py", line 433, in _ensure_connection
return retry_over_time(
File "/opt/airflow/lib/python3.8/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
return fun(*args, **kwargs)
File "/opt/airflow/lib/python3.8/site-packages/kombu/connection.py", line 877, in _connection_factory
self._connection = self._establish_connection()
File "/opt/airflow/lib/python3.8/site-packages/kombu/connection.py", line 812, in _establish_connection
conn = self.transport.establish_connection()
File "/opt/airflow/lib/python3.8/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
conn.connect()
File "/opt/airflow/lib/python3.8/site-packages/amqp/connection.py", line 323, in connect
self.transport.connect()
File "/opt/airflow/lib/python3.8/site-packages/amqp/transport.py", line 129, in connect
self._connect(self.host, self.port, self.connect_timeout)
File "/opt/airflow/lib/python3.8/site-packages/amqp/transport.py", line 184, in _connect
self.sock.connect(sa)
File "/opt/airflow/lib/python3.8/site-packages/airflow/utils/timeout.py", line 68, in handle_timeout
raise AirflowTaskTimeout(self.error_message)
airflow.exceptions.AirflowTaskTimeout: Timeout, PID: 1
[2022-08-09 07:00:26,627] {scheduler_job.py:599} INFO - Executor reports execution of DAGRP-Get_Overrides.Get_override run_id=scheduled__2022-08-08T16:00:00+00:00 exited with status failed for try_number 1
[2022-08-09 07:00:26,633] {scheduler_job.py:642} INFO - TaskInstance Finished: dag_id=DAGRP-Get_Overrides, task_id=Get_override, run_id=scheduled__2022-08-08T16:00:00+00:00, map_index=-1, run_start_date=None, run_end_date=None, run_duration=None, state=queued, executor_state=failed, try_number=1, max_tries=0, job_id=None, pool=default_pool, queue=default, priority_weight=1, operator=PythonOperator, queued_dttm=2022-08-09 11:00:08.652767+00:00, queued_by_job_id=56, pid=None
[2022-08-09 07:00:26,633] {scheduler_job.py:684} ERROR - Executor reports task instance <TaskInstance: DAGRP-Get_Overrides.Get_override scheduled__2022-08-08T16:00:00+00:00 [queued]> finished (failed) although the task says its queued. (Info: None) Was the task killed externally?
[2022-08-09 07:01:16,687] {processor.py:233} WARNING - Killing DAGFileProcessorProcess (PID=1811)
[2022-08-09 07:04:00,640] {scheduler_job.py:1233} INFO - Resetting orphaned tasks for active dag runs
I am running Airflow on 2 servers with 2 of each service (2 schedulers, 2 workers, 2 webservers). They are running in docker containers. They are configured to use celery executor and I'm using RabbitMQ version 3.10.6 (also 2 instances in docker containers behind a LB). I am using Postgres 13.7 for our database (running one instance in a docker container on the 1st server). Our environment is running on Python 3.8.12.
From my understanding, the timeout is between the scheduler and rabbitmq? From what I can tell we are hitting this timeout: AIRFLOW__CELERY__OPERATION_TIMEOUT (it's currently set to 4).
I would like to track down what is causing the issue before I just increase timeout settings. What can I do to find out what's going on? Anyone else run into this issue? Am I correct in assuming the timeout is between the scheduler and rabbitmq? Is it between the scheduler and database? Why am I seeing this with Airflow 2 when I have the same setup with Airflow 1 and it works with no problems? Any help is greatly appreciated!
Update:
I was able to reproduce the error by shutting down one of the rabbitmq nodes. Even though rabbitmq is behind a LB with a health probe, whenever a job was picked up by scheduler 1, it would fail with this error... but if scheduler 2 picked up the job, it would finish successfully. The odd thing is that I shut down rabbitmq 2.
So I think I've been able to solve this issue. Here is what I did:
I added a custom celery_config.py to the scheduler and worker docker containers, adding this environment variable: AIRFLOW__CELERY__CELERY_CONFIG_OPTIONS=celery_config.CELERY_CONFIG. As part of that celery config, I specified both my rabbitmq brokers under broker_url. This is the full config:
from airflow.config_templates.default_celery import DEFAULT_CELERY_CONFIG
import os

RABBITMQ_PW = os.environ["RABBITMQ_PW"]
CLUSTER_NODE = os.environ["RABBITMQ_CLUSTER_NODE"]
LOCAL_NODE = os.environ["RABBITMQ_NODE"]

CELERY_CONFIG = {
    **DEFAULT_CELERY_CONFIG,
    "worker_send_task_events": True,
    "task_send_sent_event": True,
    "result_extended": True,
    # list both brokers so the worker can fail over if the first is unreachable
    "broker_url": [
        f'amqp://rabbitmq:{RABBITMQ_PW}@{LOCAL_NODE}:5672',
        f'amqp://rabbitmq:{RABBITMQ_PW}@{CLUSTER_NODE}:5672'
    ]
}
What happens now is that if the worker loses its connection to the 1st broker, it will attempt to connect to the 2nd broker:
[2022-08-11 12:00:52,876: ERROR/MainProcess] consumer: Cannot connect to amqp://rabbitmq:**#<LOCAL_NODE>:5672//: [Errno 111] Connection refused.
[2022-08-11 12:00:52,875: INFO/MainProcess] Connected to amqp://rabbitmq:**#<CLUSTER_NODE>:5672//
Another interesting note: I still have the Airflow environment variable AIRFLOW__CELERY__BROKER_URL set to the load balancer URL. That's because Airflow 1 won't allow the worker to start without it, and 2 won't allow you to specify multiple brokers like the celery config does. So when the worker starts, it shows:
- ** ---------- .> transport: amqp://rabbitmq:**#<LOCAL_NODE>:5672//
[2022-08-26 11:37:17,952: INFO/MainProcess] Connected to amqp://rabbitmq:**#<LOCAL_NODE>:5672//
even though I have the LB configured as the AIRFLOW__CELERY__BROKER_URL.

Airflow Execution Timeout not working well

I've set the 'execution_timeout': timedelta(seconds=300) parameter on many tasks. When the execution timeout is set on a task downloading data from Google Analytics, it works properly: after ~300 seconds the task is set to failed. That task downloads some data from an API (Python), then does some transformations (Python) and loads the data into PostgreSQL.
Then I have a task which executes only one PostgreSQL function. Its execution sometimes takes more than 300 seconds, but I get this (the task is marked as finished successfully):
*** Reading local file: /home/airflow/airflow/logs/bulk_replication_p2p_realtime/t1/2020-07-20T00:05:00+00:00/1.log
[2020-07-20 05:05:35,040] {__init__.py:1139} INFO - Dependencies all met for <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [queued]>
[2020-07-20 05:05:35,051] {__init__.py:1139} INFO - Dependencies all met for <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [queued]>
[2020-07-20 05:05:35,051] {__init__.py:1353} INFO -
--------------------------------------------------------------------------------
[2020-07-20 05:05:35,051] {__init__.py:1354} INFO - Starting attempt 1 of 1
[2020-07-20 05:05:35,051] {__init__.py:1355} INFO -
--------------------------------------------------------------------------------
[2020-07-20 05:05:35,098] {__init__.py:1374} INFO - Executing <Task(PostgresOperator): t1> on 2020-07-20T00:05:00+00:00
[2020-07-20 05:05:35,099] {base_task_runner.py:119} INFO - Running: ['airflow', 'run', 'bulk_replication_p2p_realtime', 't1', '2020-07-20T00:05:00+00:00', '--job_id', '958216', '--raw', '-sd', 'DAGS_FOLDER/bulk_replication_p2p_realtime.py', '--cfg_path', '/tmp/tmph11tn6fe']
[2020-07-20 05:05:37,348] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:37,347] {settings.py:182} INFO - settings.configure_orm(): Using pool settings. pool_size=10, pool_recycle=1800, pid=26244
[2020-07-20 05:05:39,503] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,501] {__init__.py:51} INFO - Using executor LocalExecutor
[2020-07-20 05:05:39,857] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,856] {__init__.py:305} INFO - Filling up the DagBag from /home/airflow/airflow/dags/bulk_replication_p2p_realtime.py
[2020-07-20 05:05:39,894] {base_task_runner.py:101} INFO - Job 958216: Subtask t1 [2020-07-20 05:05:39,894] {cli.py:517} INFO - Running <TaskInstance: bulk_replication_p2p_realtime.t1 2020-07-20T00:05:00+00:00 [running]> on host dwh2-airflow-dev
[2020-07-20 05:05:39,938] {postgres_operator.py:62} INFO - Executing: CALL dw_system.bulk_replicate(p_graph_name=>'replication_p2p_realtime',p_group_size=>4 , p_group=>1, p_dag_id=>'bulk_replication_p2p_realtime', p_task_id=>'t1')
[2020-07-20 05:05:39,960] {logging_mixin.py:95} INFO - [2020-07-20 05:05:39,953] {base_hook.py:83} INFO - Using connection to: id: postgres_warehouse. Host: XXX Port: 5432, Schema: XXXX Login: XXX Password: XXXXXXXX, extra: {}
[2020-07-20 05:05:39,973] {logging_mixin.py:95} INFO - [2020-07-20 05:05:39,972] {dbapi_hook.py:171} INFO - CALL dw_system.bulk_replicate(p_graph_name=>'replication_p2p_realtime',p_group_size=>4 , p_group=>1, p_dag_id=>'bulk_replication_p2p_realtime', p_task_id=>'t1')
[2020-07-20 05:23:21,450] {logging_mixin.py:95} INFO - [2020-07-20 05:23:21,449] {timeout.py:42} ERROR - Process timed out, PID: 26244
[2020-07-20 05:23:36,453] {logging_mixin.py:95} INFO - [2020-07-20 05:23:36,452] {jobs.py:2562} INFO - Task exited with return code 0
Does anyone know how to enforce the execution timeout for such long-running functions? It seems that the execution timeout is only evaluated once the PG function finishes.
Airflow uses the signal module from the standard library to effect a timeout. Airflow hooks into these system signals and requests that the calling process be notified in N seconds; should the process still be inside the timeout context (see the __enter__ and __exit__ methods on the class), it raises an AirflowTaskTimeout exception.
Unfortunately for this situation, there are certain classes of system operations that cannot be interrupted. This is actually called out in the signal documentation:
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
To which we say "But I'm not doing a long-running calculation in C!" -- yeah for Airflow this is almost always due to uninterruptable I/O operations.
The last sentence of that quote nicely explains why the handler is still triggered even after the task is (frustratingly!) allowed to finish, well beyond your requested timeout.
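To illustrate the mechanism, here is a simplified sketch of the same pattern (not Airflow's actual timeout class, just the shape of it):

import signal

class Timeout:
    """Raise an exception if the with-block runs for more than `seconds`."""

    def __init__(self, seconds):
        self.seconds = seconds

    def _handler(self, signum, frame):
        raise TimeoutError(f"Timed out after {self.seconds}s")

    def __enter__(self):
        # ask the OS to deliver SIGALRM to this process in `seconds` seconds
        signal.signal(signal.SIGALRM, self._handler)
        signal.alarm(self.seconds)

    def __exit__(self, exc_type, exc_val, exc_tb):
        signal.alarm(0)  # cancel the pending alarm on the way out

# The handler only runs when control returns to the Python interpreter, so a
# blocking call that stays inside C code (like a long database call) is not
# interrupted; the exception fires only after that call returns.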

How to fix the error "AirflowException("Hostname of job runner does not match")"?

I'm running Airflow on my computer (MacBook Air, 1.6 GHz Intel Core i5 and 8 GB 2133 MHz LPDDR3). A DAG with several tasks failed with the error below. I checked several articles online, but with little to no help. There is nothing wrong with the task itself (double-checked).
Any help is much appreciated.
[2019-08-27 13:01:55,372] {sequential_executor.py:45} INFO - Executing command: ['airflow', 'run', 'Makefile_DAG', 'normalize_companies', '2019-08-27T15:38:20.914820+00:00', '--local', '--pool', 'default_pool', '-sd', '/home/airflow/dags/makefileDAG.py']
[2019-08-27 13:01:56,937] {settings.py:213} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=40647
[2019-08-27 13:01:57,285] {__init__.py:51} INFO - Using executor SequentialExecutor
[2019-08-27 13:01:59,423] {dagbag.py:90} INFO - Filling up the DagBag from /home/airflow/dags/makefileDAG.py
[2019-08-27 13:02:01,736] {cli.py:516} INFO - Running <TaskInstance: Makefile_DAG.normalize_companies 2019-08-27T15:38:20.914820+00:00 [queued]> on host ajays-macbook-air.local
Traceback (most recent call last):
File "/anaconda3/envs/airflow/bin/airflow", line 32, in <module>
args.func(args)
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/utils/cli.py", line 74, in wrapper
return f(*args, **kwargs)
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/bin/cli.py", line 522, in run
_run(args, dag, ti)
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/bin/cli.py", line 435, in _run
run_job.run()
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/jobs/base_job.py", line 213, in run
self._execute()
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/jobs/local_task_job.py", line 111, in _execute
self.heartbeat()
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/jobs/base_job.py", line 196, in heartbeat
self.heartbeat_callback(session=session)
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/utils/db.py", line 70, in wrapper
return func(*args, **kwargs)
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/jobs/local_task_job.py", line 159, in heartbeat_callback
raise AirflowException("Hostname of job runner does not match")
airflow.exceptions.AirflowException: Hostname of job runner does not match
[2019-08-27 13:05:05,904] {sequential_executor.py:52} ERROR - Failed to execute task Command '['airflow', 'run', 'Makefile_DAG', 'normalize_companies', '2019-08-27T15:38:20.914820+00:00', '--local', '--pool', 'default_pool', '-sd', '/home/airflow/dags/makefileDAG.py']' returned non-zero exit status 1..
[2019-08-27 13:05:05,905] {scheduler_job.py:1256} INFO - Executor reports execution of Makefile_DAG.normalize_companies execution_date=2019-08-27 15:38:20.914820+00:00 exited with status failed for try_number 2
Logs from the task:
[2019-08-27 13:02:13,616] {bash_operator.py:115} INFO - Running command: python /home/Makefile_Redo/normalize_companies.py
[2019-08-27 13:02:13,628] {bash_operator.py:124} INFO - Output:
[2019-08-27 13:05:02,849] {logging_mixin.py:95} INFO - [2019-08-27 13:05:02,848] {local_task_job.py:158} WARNING - The recorded hostname ajays-macbook-air.local does not match this instance's hostname AJAYs-MacBook-Air.local
[2019-08-27 13:05:02,860] {helpers.py:319} INFO - Sending Signals.SIGTERM to GPID 40649
[2019-08-27 13:05:02,861] {taskinstance.py:897} ERROR - Received SIGTERM. Terminating subprocesses.
[2019-08-27 13:05:02,862] {bash_operator.py:142} INFO - Sending SIGTERM signal to bash process group
[2019-08-27 13:05:03,539] {taskinstance.py:1047} ERROR - Task received SIGTERM signal
Traceback (most recent call last):
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 922, in _run_raw_task
result = task_copy.execute(context=context)
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/operators/bash_operator.py", line 126, in execute
for line in iter(sp.stdout.readline, b''):
File "/anaconda3/envs/airflow/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 899, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
[2019-08-27 13:05:03,550] {taskinstance.py:1076} INFO - All retries failed; marking task as FAILED
A weird thing I noticed from the above log is:
The recorded hostname ajays-macbook-air.local does not match this instance's hostname AJAYs-MacBook-Air.local
How is this possible, and is there any solution to fix this?
I had the same problem on my Mac. The solution that worked for me was updating airflow.cfg with hostname_callable = socket:gethostname. The original getfqdn returns different hostnames from time to time.
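A quick way to see the difference on the affected machine (just a sketch; the exact strings depend on your system and DNS setup):

import socket

# Airflow records the hostname using the callable configured in airflow.cfg.
print(socket.getfqdn())       # Airflow's default; the result can vary with DNS resolution
print(socket.gethostname())   # the local machine name, which was stable in my case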
