I have data on Dremio that I'm looking to connect to Alteryx.
It was working fine until I cancelled an Alteryx workflow in the middle of its execution. Since then it always gives the error below, and I'm not able to figure out why.
Error: Input Data (1): Error SQLExecute: [Dremio][Connector] (1040) Dremio failed to execute the query: SELECT * FROM "Platform_Reporting"."PlatformActivity" limit 1000
[30039]Query execution error. Details:[
RESOURCE ERROR: Query cancelled by Workload Manager. Query enqueued time of 300.00 seconds exceeded for 'High Cost User Queries' queue.
[Error Id: 3a1e1bb0-18b7-44c0-965a-6933a156ab70 ]
Any help is appreciated!
I got this response from the Alteryx Support team:
Based on the error message, it seems the error sits within Dremio itself. I would advise consulting the admin and checking: https://docs.dremio.com/advanced-administration/workload-management/
I would assume the cancellation of a previous pipeline was not sent properly to the queue management, hence the error.
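For what it's worth, you can test whether cancellations actually reach Dremio outside of Alteryx. Below is a minimal sketch using pyodbc against a Dremio ODBC DSN (the DSN name "Dremio" and the 300-second timeout are assumptions, not taken from your setup); it sets a client-side query timeout and sends an explicit cursor.cancel() rather than relying on the tool tearing the connection down:

```python
import pyodbc

DSN = "Dremio"  # hypothetical DSN name; replace with your Dremio ODBC data source

conn = pyodbc.connect(f"DSN={DSN}", autocommit=True)
conn.timeout = 300  # seconds; queries exceeding this raise OperationalError

cursor = conn.cursor()
try:
    cursor.execute('SELECT * FROM "Platform_Reporting"."PlatformActivity" limit 1000')
    rows = cursor.fetchall()
    print(f"fetched {len(rows)} rows")
except pyodbc.OperationalError as exc:
    # Best-effort explicit cancel, so a query stuck in the
    # 'High Cost User Queries' queue is told to stop server-side.
    cursor.cancel()
    print(f"query cancelled: {exc}")
finally:
    cursor.close()
    conn.close()
```

If this standalone script runs cleanly while Alteryx still fails, that would support the theory that the earlier mid-run cancellation left something behind on the Alteryx side rather than in Dremio's queue.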
From time to time, I see this error in Application Insights under Failures => failed dependencies:
I've been searching through the documentation but cannot see this mentioned anywhere. Does this status mean that the operation was cancelled through the token, or is it similar to the cross-partition response that used to be a 400 error? (https://github.com/Azure/azure-cosmos-dotnet-v2/issues/606#issuecomment-427909582)
Also, will this action be retried, or is there data loss here?
In an Airflow DAG, I am trying to write data to return.json and push it. This task fails randomly with the error below.
**{taskinstance.py:1455} ERROR - Unterminated string starting at: line 1**
Once I retry the task, it executes successfully. I am not able to understand why some tasks fail randomly; in some runs of the flow, none of the tasks fail at all. Could someone please help me with this?
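One possible cause (an assumption, not confirmed by the log) is that return.json is read while it is still being written, so the reader sees a truncated document and the JSON parser reports "Unterminated string". A minimal sketch of an atomic write inside a PythonOperator callable, with Airflow-level retries as a fallback; the file path, DAG id, and task id are all hypothetical:

```python
import json
import os
import tempfile
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

OUTPUT_PATH = "/tmp/return.json"  # hypothetical path

def write_result(**context):
    data = {"status": "ok"}  # stand-in for whatever the task actually produces
    # Write to a temp file in the same directory, then rename into place.
    # os.replace is atomic on POSIX, so a concurrent reader can never see
    # a half-written (and hence unterminated) JSON document.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(OUTPUT_PATH))
    with os.fdopen(fd, "w") as f:
        json.dump(data, f)
    os.replace(tmp_path, OUTPUT_PATH)
    # Also push the value through XCom so downstream tasks need not read the file.
    context["ti"].xcom_push(key="result", value=data)

with DAG(
    dag_id="return_json_example",  # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    PythonOperator(
        task_id="write_result",
        python_callable=write_result,
        retries=2,                        # let Airflow retry transient failures
        retry_delay=timedelta(minutes=1),
    )
```

The retry settings only mask the symptom (which matches your observation that a retry succeeds); the atomic rename is what addresses the hypothesized partial-read race.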
I am running a query from a follower cluster against a table that exists on the leader cluster, and I get the following error:
Partial query failure: An unexpected error occurred. (message: 'StorageException with HttpStatusCode 503 was thrown.: : : ', details: 'Source: Kusto::CachedStorageObject')
Since the error seems to be related to the cache, I am trying to understand exactly how to interpret it. If something is not found in the follower's cache, shouldn't ADX automatically fetch the data from the leader's storage? I don't quite see why it should fail; it's not clear what the error means.
Judging by the StorageException with HttpStatusCode 503, this appears to be a transient failure in accessing underlying storage objects.
If the issue persists, I would recommend that you open a support ticket for your resource, via the Azure portal.
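In the meantime, since a 503 from the storage layer is transient, a client-side retry usually gets past it. A minimal sketch using the azure-kusto-data Python client; the cluster URI, database, query, retry counts, and the Azure CLI authentication method are all assumptions for illustration:

```python
import time

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
from azure.kusto.data.exceptions import KustoServiceError

# Hypothetical follower cluster URI.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://myfollower.westeurope.kusto.windows.net"
)
client = KustoClient(kcsb)

def execute_with_retry(database, query, attempts=3, backoff=2.0):
    """Retry on service errors such as the transient storage 503."""
    for attempt in range(1, attempts + 1):
        try:
            return client.execute(database, query)
        except KustoServiceError:
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff

response = execute_with_retry("mydatabase", "mytable | take 10")
print(response.primary_results[0])
```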
I see a bunch of 'permanent' failures when I run the following command:
.show ingestion failures | where FailureKind == "Permanent"
For all the entries that are returned the error code is UpdatePolicy_UnknownError.
The Details column for all the entries shows something like this:
Failed to invoke update policy. Target Table = 'mytable', Query = '<some query here>': The remote server returned an error: (409) Conflict.: : :
What does this error mean? How do I find the root cause of these failures? The information I get from this command is not sufficient. I also copied the OperationId for a sample entry and looked it up against the operations info:
.show operations | where OperationId == '<sample operation id>'
But all I found in the Status was the message Failed performing non-transactional update policy. I know it failed, but can we find out the underlying reason?
"(409) Conflict" error usually comes from writing to the Azure storage.
In general, this error should be treated as a transient one.
If it happens while writing the main part of the ingestion, it is retried (****).
In your case, it happens while writing the data for the non-transactional update policy. This write is not retried: the data enters the main table, but not the dependent table.
In the case of a transactional update policy, the whole ingestion fails and is then retried.
(****) For a short period there was a bug where such an error was treated as permanent for the main ingestion data. The bug should be fixed now.
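If missing rows in the dependent table are not acceptable, the policy can be made transactional, so that a failure fails (and retries) the whole ingestion as described above. A sketch of the control command sent through the azure-kusto-data Python client; the cluster URI, database, table, source table, and transformation function are all placeholders, and note the trade-off that a transient 409 will then fail the main-table ingestion too until a retry succeeds:

```python
import json

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net"  # hypothetical cluster
)
client = KustoClient(kcsb)

# Hypothetical source table and transformation function.
policy = [{
    "IsEnabled": True,
    "Source": "mysourcetable",
    "Query": "MyTransformFunction()",
    "IsTransactional": True,  # fail and retry the whole ingestion on error
    "PropagateIngestionProperties": False,
}]

command = ".alter table mytable policy update @'{}'".format(json.dumps(policy))
client.execute_mgmt("mydatabase", command)
```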
I'm facing an issue with a Teradata throttle violation. I am getting this error:
Java::JavaSql::SQLException: [Teradata Database] [TeraJDBC 16.10.00.07] [Error 3153] [SQLState HY000] TDWM Throttle violation for Concurrent Sessions: For Rule Name 'BlockEndUsers', Limit of 0 concurrent requests for User
Any suggestions would be appreciated, and thanks for spending your time.
I was trying to connect to Teradata using my automation script, for which I had made the connection strings. This was working up to last week, but suddenly I'm getting this error and I'm not sure why. I also found the following in the Teradata docs:
3151 TDWM Throttle violation for Concurrent Queries: %VSTR
Explanation: This error occurs if a query request is aborted by Teradata TDWM for a Throttle rule violation. Check currently active Teradata TDWM rules.
Generated By: Dispatcher.
For Whom: End User.
Remedy: Wait for system activity to subside or contact your TDWM Administrator.
I waited for some time and tried running my script again, but it still gives me the same error.
Expected result: the Teradata connection should succeed.
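Since the documented remedy is essentially wait-and-retry, one thing the script can do on its own is back off and reconnect when the throttle error appears. A minimal sketch with the teradatasql driver (the host and credentials are placeholders, and matching the error code as a substring is just a heuristic); note that with "Limit of 0 concurrent requests" the rule blocks the user outright, so only the TDWM administrator changing the 'BlockEndUsers' rule will truly fix it:

```python
import time

import teradatasql

def connect_with_retry(attempts=5, backoff=30):
    """Retry the connection when TDWM rejects it with a throttle violation."""
    for attempt in range(1, attempts + 1):
        try:
            return teradatasql.connect(
                host="tdhost.example.com",  # placeholder
                user="myuser",              # placeholder
                password="mypassword",      # placeholder
            )
        except teradatasql.OperationalError as exc:
            # 3153 = TDWM Throttle violation for Concurrent Sessions.
            if "3153" not in str(exc) or attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # back off before retrying

with connect_with_retry() as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT SESSION")
        print(cur.fetchone())
```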