XCom variable push is failing - Airflow

In an Airflow DAG, I am writing data to return.json and trying to push it as an XCom. The task fails randomly with the error below.
**{taskinstance.py:1455} ERROR - Unterminated string starting at: line 1**
Once I retry the task, it executes successfully. I cannot understand why some tasks fail randomly in a run; sometimes an entire run completes with no failures at all. Could someone please help me with this?
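A JSON "Unterminated string" error usually means the document was truncated, for example because it was read while still being written. If the task writes return.json itself before Airflow (or a sidecar container) parses it for XCom, one defensive pattern is to serialize the whole payload first and rename the file into place atomically. A minimal sketch only; the path and payload are placeholders:

```python
import json
import os
import tempfile

def write_xcom_file(payload, target_path="/airflow/xcom/return.json"):
    """Serialize the whole payload first, then atomically move it into place,
    so a reader never sees a half-written JSON document."""
    data = json.dumps(payload)  # fails here, not mid-file, if the payload is bad
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(target_path))
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, target_path)  # atomic rename on POSIX filesystems
    except Exception:
        os.remove(tmp_path)
        raise
```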

Related

What is the "Unknown(0x)" error when swapping tokens in web3.js?

I'm swapping tokens using web3.js. Sometimes it succeeds and sometimes it fails, and when it fails I get this error in my console:
VM Exception while processing transaction: revert callBytes failed: Unknown(0x)
I believe it is the same as the error Etherscan shows for the failed transaction (screenshot not included).
Can someone please tell me what this error is and how to solve it?
Thank you for your time.
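For what it's worth, Unknown(0x) revert data usually means an inner call reverted without returning a reason string. One way to investigate is to replay the failed transaction with eth_call against the state just before its block and inspect the revert. A rough diagnostic sketch, written with web3.py here for consistency with the other examples (the same idea works with web3.eth.call in web3.js); the RPC endpoint and transaction hash are placeholders:

```python
from web3 import Web3
from web3.exceptions import ContractLogicError

# Placeholders: point these at your node and the failed transaction.
w3 = Web3(Web3.HTTPProvider("https://your-rpc-endpoint"))
tx_hash = "0x..."

tx = w3.eth.get_transaction(tx_hash)
call = {
    "from": tx["from"],
    "to": tx["to"],
    "data": tx["input"],
    "value": tx["value"],
}

try:
    # Replay the call against the state just before the transaction's block.
    w3.eth.call(call, tx["blockNumber"] - 1)
    print("Call does not revert at that state; the failure may depend on ordering within the block.")
except (ContractLogicError, ValueError) as err:
    # If the contract reverted with a reason it shows up here;
    # an empty reason matches the Unknown(0x) you are seeing.
    print("Revert:", err)
```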

The task is stuck in up_for_retry in Airflow

Hello all, I am working on an Airflow DAG where I have set retries to 1. The task fails on the first run but gets stuck in the up_for_retry state on the second run. The expected result is that it should fail, but it remains stuck.
When I look at the logs, they show Task exited with return code 1, but in the UI the task is still in the up_for_retry state.
I am not sure what is going wrong. Can anyone help me?
Thank you.
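For context, a failed task with retries left sits in up_for_retry until its retry_delay elapses and the scheduler queues the next attempt, so a long delay can look like a hang. A minimal sketch of that setup (the operator, callable, and delay are placeholders) to help check whether the task is simply waiting out its retry_delay or genuinely stuck:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def flaky_task():
    # Placeholder for the real work; raising here exercises the retry logic.
    raise RuntimeError("simulated failure")

with DAG(
    dag_id="retry_demo",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    PythonOperator(
        task_id="flaky",
        python_callable=flaky_task,
        retries=1,
        # The task sits in up_for_retry for this long before the scheduler
        # queues the second attempt; a long delay can look like a hang.
        retry_delay=timedelta(minutes=5),
    )
```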

How to identify common errors at the DAG level

I am trying to set up a DAG (call it Indentify_Common_error_dag) that can read error logs from an S3 bucket and identify the most common errors. Say I run this DAG once a week: it should read the error logs from S3 and give me a report saying how many times each error occurred.
Example: if my other DAGs fail every day due to a SQL compilation error, then when I run Indentify_Common_error_dag once a week, it should report that my other DAGs failed due to a SQL compilation error 4 times that week.
I have 200+ DAGs, and they fail for many reasons: SQL compilation errors, duplicate rows, Bash command failures, and other issues. Indentify_Common_error_dag would help me identify which errors were most common over the last week or month, so I can modify my 200+ DAGs to handle those errors on their own.
I don't know if this really makes sense, or whether a DAG can even be set up like this. I am new to Airflow, so if there is a way of doing this, please let me know.
Any suggestions or feedback would be greatly appreciated.
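A reporting DAG like this is feasible if task logs are shipped to S3 (for example via Airflow remote logging). A rough sketch under that assumption; the bucket, prefix, connection id, and error patterns below are placeholders to adapt:

```python
import re
from collections import Counter
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

# Placeholders: adjust to your remote-logging bucket and the errors you care about.
LOG_BUCKET = "my-airflow-logs"
LOG_PREFIX = "logs/"
ERROR_PATTERNS = {
    "SQL compilation error": re.compile(r"SQL compilation error", re.IGNORECASE),
    "Duplicate row": re.compile(r"duplicate row", re.IGNORECASE),
    "Bash command failed": re.compile(r"Bash command failed", re.IGNORECASE),
}

def summarize_errors():
    hook = S3Hook(aws_conn_id="aws_default")
    counts = Counter()
    for key in hook.list_keys(bucket_name=LOG_BUCKET, prefix=LOG_PREFIX) or []:
        log_text = hook.read_key(key, bucket_name=LOG_BUCKET)
        for label, pattern in ERROR_PATTERNS.items():
            if pattern.search(log_text):
                counts[label] += 1
    # Replace print with writing a report file, a table, or a Slack message.
    for label, count in counts.most_common():
        print(f"{label}: {count} task log(s) this period")

with DAG(
    dag_id="Indentify_Common_error_dag",
    start_date=datetime(2023, 1, 1),
    schedule_interval=timedelta(weeks=1),
    catchup=False,
) as dag:
    PythonOperator(task_id="summarize_errors", python_callable=summarize_errors)
```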

Resource error when connecting Dremio to Alteryx

I have data in Dremio and I'm looking to connect it to Alteryx.
It was working fine until I cancelled an Alteryx workflow in the middle of its execution. Since then, it always gives the error below, and I'm not able to figure out why.
Error: Input Data (1): Error SQLExecute: [Dremio][Connector] (1040) Dremio failed to execute the query: SELECT * FROM "Platform_Reporting"."PlatformActivity" limit 1000
[30039]Query execution error. Details:[
RESOURCE ERROR: Query cancelled by Workload Manager. Query enqueued time of 300.00 seconds exceeded for 'High Cost User Queries' queue.
[Error Id: 3a1e1bb0-18b7-44c0-965a-6933a156ab70 ]
Any help is appreciated!
I got this response from the Alteryx Support team:
Based on the error message, it seems the error sits within Dremio itself. I would advise consulting the admin to check: https://docs.dremio.com/advanced-administration/workload-management/
I would assume the cancellation of the previous pipeline was not sent correctly to the queue management, and hence the error.

Is there any way to pass the error text of a failed Airflow task into another task?

I have a DAG defined that contains a number of tasks, the last of which is only run if any of the previous tasks fail. This task simply posts to a Slack channel that the DAG run experienced errors.
What I would really like is if the message sent to the Slack channel contained the actual error that is logged in the task logs, to provide immediate context to the error and perhaps save Ops from having to dig through the logs.
Is this at all possible?
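One common approach, slightly different from a dedicated final task, is an on_failure_callback: its context includes the exception that failed the task, so the Slack message can carry the real error. A minimal sketch; post_to_slack is a hypothetical placeholder for whatever Slack integration you use:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def post_to_slack(message):
    # Hypothetical helper: swap in the Slack provider hook/operator or a webhook call.
    print(f"Would post to Slack: {message}")

def notify_slack_on_failure(context):
    # The failure-callback context includes the task instance and the exception
    # that caused the failure, so the message can quote the actual error.
    ti = context["task_instance"]
    error = context.get("exception")
    post_to_slack(f"DAG {ti.dag_id}, task {ti.task_id} failed on {ti.execution_date}: {error}")

def boom():
    raise ValueError("something broke")

with DAG(
    dag_id="slack_error_demo",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
    default_args={"on_failure_callback": notify_slack_on_failure},
) as dag:
    PythonOperator(task_id="boom", python_callable=boom)
```

If you want to keep the final task with trigger_rule='one_failed', the callback can instead push the error text to XCom and the Slack task can pull it from there.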
