By default, an update policy on a Kusto table is non-transactional. Let's say I have an update policy defined on a table MyTarget, whose source is defined in the update policy as MySource, and the policy is defined as transactional. Ingestion has been set up on the table MySource, so data is continuously being loaded into MySource.
Now say a certain ingestion data batch is loaded into MySource; right after that, the query defined in the update policy is triggered. If this query fails (due to memory issues, etc.), then, because the update policy is transactional, even the data batch loaded into MySource will not be committed. I have heard that in this case the ingestion is retried automatically. Is that so? I haven't found any documentation regarding this retry.
In any case, my simple question is: how many times is the retry attempted, and how long is the interval between attempts? Are these configurable properties (I am talking about an ADX cluster available through Azure) if I am the owner of the cluster?
Yes, there's an automatic retry for ingestions that fail due to a failure in a transactional update policy.
The full details can be found here: https://learn.microsoft.com/en-us/azure/kusto/management/updatepolicy#failures
Failures are treated as follows:
Non-transactional policy: The failure is ignored by Kusto. Any retry is the responsibility of the data owner.
Transactional policy: The original ingestion operation that triggered the update will fail as well. The source table and the database will not be modified with new data.
If the ingestion method is pull (i.e. Kusto's Data Management service is involved in the ingestion process), there's an automated retry of the entire ingestion operation, orchestrated by the Data Management service, according to the following logic:
Retries are performed until the earlier of the maximum retry period (2 days) and the maximum number of retry attempts (10) is reached.
The backoff period starts at 2 minutes and grows exponentially (2 -> 4 -> 8 -> 16 ... minutes).
In any other case, any retry is the responsibility of the data owner.
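For context, here is a minimal sketch of how a transactional update policy like the one in the question could be defined (the query is just a placeholder for the real transformation):

.alter table MyTarget policy update
@'[{"IsEnabled": true, "Source": "MySource", "Query": "MySource | extend ProcessedAt = now()", "IsTransactional": true}]'

With "IsTransactional": true, a failure of the policy's query fails the source ingestion as well, which is what makes it eligible for the automated retry described above.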
I need to check how the notary prevents double spending in the Obligation CorDapp. I started the web server UI on the localhost ports and performed multiple transactions, and when I checked the notary's log, I found this:
[WARN ] 2020-06-24T08:29:33,484Z [Notary request queue processor] transactions.PersistentUniquenessProvider. - Unable to notarise: One or more input states or referenced states have already been used as input states in other transactions. Conflicting state count: 1, consumption details:
7CF1BCA8EDF25F0602BBEDF8AD41FD60336F65EAC09C5326478A4CB7CD620579(0) -> StateConsumptionDetails(hashOfTransactionId=46552C5CE153712B65585A75C4D165CD4A05304564C8797ACEF317DCD925B72E, type=INPUT_STATE).
To find out if any of the conflicting transactions have been generated by this node you can use the hashLookup Corda shell command. [errorCode=1g4005y, moreInformationAt=https://errors.corda.net/OS/4.5-RC02/1g4005y]
net.corda.core.internal.notary.NotaryInternalException: Unable to notarise: One or more input states or referenced states have already been used as input states in other transactions. Conflicting state count: 1, consumption details:
7CF1BCA8EDF25F0602BBEDF8AD41FD60336F65EAC09C5326478A4CB7CD620579(0) -> StateConsumptionDetails(hashOfTransactionId=46552C5CE153712B65585A75C4D165CD4A05304564C8797ACEF317DCD925B72E, type=INPUT_STATE).
To find out if any of the conflicting transactions have been generated by this node you can use the hashLookup Corda shell command.
I performed hashLookup on the invalid txId and found this:
hashLookup 46552C5CE153712B65585A75C4D165CD4A05304564C8797ACEF317DCD925B72E
Found a matching transaction with Id: A86E3ECE4EC12A487E413E2BDAB9D88BFEBCB418FA0224189DE0C72BBBD34B12
I believe this is how the notary stopped the double spend, but I am unable to recreate it in testing. Can someone tell me what possible input transaction leads to this error? That is, what test case leads to a double spend that is stopped by the notary?
A notary is a network service that provides uniqueness consensus by attesting that, for a given transaction, it has not already signed other transactions that consume any of the proposed transaction's input states.
In other words, the notary keeps track of all the input states used in transactions (it only stores their hashes, not the actual states), so when someone tries to use an already-spent input, the notary rejects the transaction, thereby preventing the double spend.
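To reproduce this deliberately, you need two transactions that consume the same input StateAndRef, e.g. by firing the same settle/transfer request twice so that both transactions are built against the same unconsumed state. Below is a rough sketch of such a test using Corda's MockNetwork; the flow names (IssueObligation, SettleObligation) and the CorDapp package are hypothetical placeholders for whatever your Obligation CorDapp actually exposes:

import net.corda.core.concurrent.CordaFuture;
import net.corda.core.flows.NotaryException;
import net.corda.core.transactions.SignedTransaction;
import net.corda.testing.node.MockNetwork;
import net.corda.testing.node.MockNetworkParameters;
import net.corda.testing.node.StartedMockNode;
import net.corda.testing.node.TestCordapp;
import org.junit.Test;

import java.util.Collections;
import java.util.concurrent.ExecutionException;

public class DoubleSpendTest {
    @Test
    public void secondSpendOfSameInputIsRejectedByNotary() throws Exception {
        MockNetwork network = new MockNetwork(new MockNetworkParameters()
                .withCordappsForAllNodes(Collections.singletonList(
                        TestCordapp.findCordapp("com.example.obligation")))); // hypothetical package
        StartedMockNode borrower = network.createPartyNode(null);
        StartedMockNode lender = network.createPartyNode(null);

        // 1. Issue an obligation; its output becomes an unconsumed input state.
        CordaFuture<SignedTransaction> issue =
                borrower.startFlow(new IssueObligation(/* amount, lender, ... */));
        network.runNetwork();
        issue.get();

        // 2. Settle it once; this consumes the state, and the notary records its hash.
        CordaFuture<SignedTransaction> firstSettle =
                borrower.startFlow(new SettleObligation(/* linearId, ... */));
        network.runNetwork();
        firstSettle.get(); // succeeds

        // 3. Attempt to consume the very same input again, e.g. by building a second
        //    transaction from the StateAndRef recorded before step 2. The notary has
        //    already seen this input's hash, so notarisation fails, and the failure
        //    surfaces as the "Unable to notarise" log shown in the question.
        CordaFuture<SignedTransaction> secondSettle =
                borrower.startFlow(new SettleObligation(/* same input state ... */));
        network.runNetwork();
        try {
            secondSettle.get();
            throw new AssertionError("Expected the notary to reject the double spend");
        } catch (ExecutionException e) {
            if (!(e.getCause() instanceof NotaryException)) { // double spend prevented here
                throw new AssertionError("Expected NotaryException, got: " + e.getCause());
            }
        } finally {
            network.stopNodes();
        }
    }
}

Note that a well-written flow usually queries the vault for an unconsumed state, so to hit the notary check (rather than an earlier vault error) the second transaction must be built against the stale StateAndRef directly, which is exactly what can happen when two requests race through your web UI at the same time.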
I see a bunch of 'permanent' failures when I run the following command:
.show ingestion failures | where FailureKind == "Permanent"
For all the entries that are returned, the error code is UpdatePolicy_UnknownError.
The Details column for all the entries shows something like this:
Failed to invoke update policy. Target Table = 'mytable', Query = '<some query here>': The remote server returned an error: (409) Conflict.: : :
What does this error mean? How do I find the root cause of these failures? The information I get through this command is not sufficient. I also copied the OperationId of a sample entry and looked it up in the operations info:
.show operations | where OperationId == '<sample operation id>'
But all I found in the Status was the message Failed performing non-transactional update policy. I know it failed, but can we find out the underlying reason?
"(409) Conflict" error usually comes from writing to the Azure storage.
In general, this error should be treated as a transient one.
If it happens in the writing of the main part of the ingestion, it should be retried (****).
In your case, it happens in writing the data of the non-transactional update policy - this write is not retried - the data enters the main table, but not the dependent table.
In the case of a transactional update policy, the whole ingestion will be failed and then retried.
(****) There was a bug in treating such an error, it was treated as permanent for a short period for the main ingestion data. The bug should be fixed now.
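If a non-transactional policy failure has left such a gap, one way to backfill the dependent table is to re-run the update policy's query over the affected ingestion window and append the result. This is only a sketch: mytable is the target table from your error message, while MySourceTable, the time window, and the query logic are placeholders you need to fill in:

.set-or-append mytable <|
    MySourceTable
    | where ingestion_time() between (datetime(2020-06-24T08:00:00Z) .. datetime(2020-06-24T09:00:00Z))
    | <the update policy's query logic>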
I'm using IKustoIngestClient.IngestFromStorageAsync(). I see some failures of a transient kind when querying .show ingestion failures. Does Azure Data Explorer recover from these automatically?
Yes, up to a predefined number of retries and a maximum retry period.
For monitoring the "final" statuses of queued ingestions, you should probably use the API described here: https://learn.microsoft.com/en-us/azure/kusto/api/netfx/kusto-ingest-client-status and demonstrated here: https://learn.microsoft.com/en-us/azure/kusto/api/netfx/kusto-ingest-queued-ingest-sample#code
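As a rough C# sketch of that status-tracking pattern (the cluster URI, database, table, and blob URI are placeholders; the key parts are the ReportLevel/ReportMethod properties and polling the returned status collection):

using System;
using System.Linq;
using System.Threading.Tasks;
using Kusto.Data;
using Kusto.Ingest;

class IngestAndTrack
{
    static async Task Main()
    {
        var kcsb = new KustoConnectionStringBuilder("https://ingest-mycluster.kusto.windows.net")
            .WithAadUserPromptAuthentication();

        using (var client = KustoIngestFactory.CreateQueuedIngestClient(kcsb))
        {
            var props = new KustoQueuedIngestionProperties("MyDatabase", "MyTable")
            {
                // Ask the service to persist a status record that we can poll later.
                ReportLevel = IngestionReportLevel.FailuresAndSuccesses,
                ReportMethod = IngestionReportMethod.Table
            };

            var result = await client.IngestFromStorageAsync(
                "https://mystorage.blob.core.windows.net/container/data.csv", props);

            // Poll until the queued ingestion reaches a final status
            // (Succeeded / Failed / PartiallySucceeded / Skipped).
            var status = result.GetIngestionStatusCollection().First();
            while (status.Status == Status.Pending)
            {
                await Task.Delay(TimeSpan.FromSeconds(30));
                status = result.GetIngestionStatusCollection().First();
            }
            Console.WriteLine($"Final ingestion status: {status.Status}");
        }
    }
}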
I am using UFT 12.02 to create UFT API tests; I use the same tests in LoadRunner to check transaction response times.
The challenge I am facing is checking for success and failure during execution. In LoadRunner we can easily check the response for different success indicators (e.g. a response code of '200 OK', or a 'user ID' or 'Success ID' generated by the system), but in a UFT API script we can add Start and End Transaction activities to the flow, yet cannot check the application status based on any indicator.
Please let me know if there is any way to check whether a completed transaction is a pass or a failure.
Currently I am getting all transactions as passed, but the number of records inserted in the DB is far smaller than the number of passed transactions.
Unfortunately, you cannot set pass/fail criteria when using LoadRunner transactions in UFT. All transactions are considered finished with a "passed" status. The core use of the UFT-LoadRunner transaction integration is to measure response times, nothing else.
Within Alfresco, I want to delete a node, but I don't want it to be used by any other user in a cluster environment while I do so.
I know that I can use LockService to lock a node (in a cluster environment), as in the following lines:
lockService.lock(deleteNode);
nodeService.deleteNode(deleteNode);
lockService.unlock(deleteNode);
The last line may throw an exception because the node has already been deleted, and indeed the exception it causes is:
A system error happened during the operation: Node does not exist: workspace://SpacesStore/cb6473ed-1f0c-4fa3-bfdf-8f0bc86f3a12
So how do I ensure concurrency in a cluster environment when deleting a node, so as to prevent two users from accessing the same node at the same time when one of them wants to update it and the other wants to delete it?
Depending on your cluster environment (e.g. the same DB server used by all Alfresco instances), transactions will most likely be enough to ensure no stale content is used:
serverA(readNode)
serverB(deleteNode)
serverA(updateNode) <--- transaction failure
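In code, the transactional variant can look like this (a sketch, assuming nodeService and transactionService are injected Spring beans as usual; RetryingTransactionHelper automatically retries the callback on concurrency failures):

import org.alfresco.repo.transaction.RetryingTransactionHelper;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.transaction.TransactionService;

public class SafeNodeDeleter {
    private TransactionService transactionService; // injected
    private NodeService nodeService;               // injected

    public void deleteSafely(final NodeRef nodeRef) {
        RetryingTransactionHelper txHelper = transactionService.getRetryingTransactionHelper();
        txHelper.doInTransaction(() -> {
            // If another server already deleted the node, treat the work as done
            // instead of failing with "Node does not exist".
            if (nodeService.exists(nodeRef)) {
                nodeService.deleteNode(nodeRef);
            }
            return null;
        }, false /* readOnly */, true /* requiresNew */);
    }

    // setters for Spring injection omitted
}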
The JobLockService allows more control for more complex operations, which might involve multiple, dynamic nodes (or no nodes at all, e.g. sending emails or similar):
serverA(acquireLock)
serverB(acquireLock) <--- wait for the lock to be released
serverA(readNode1)
serverA(if something then updateNode2)
serverA(updateNode1)
serverA(releaseLock)
serverB(readNode2)
serverB(releaseLock)
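A sketch of the JobLockService pattern (the lock QName and the 30-second time-to-live are arbitrary example values; getLock throws LockAcquisitionException if another cluster node currently holds the lock):

import org.alfresco.repo.lock.JobLockService;
import org.alfresco.service.namespace.NamespaceService;
import org.alfresco.service.namespace.QName;

public class ExclusiveOperation {
    private static final QName LOCK_QNAME =
            QName.createQName(NamespaceService.SYSTEM_MODEL_1_0_URI, "myDeleteOperation");
    private static final long LOCK_TTL_MS = 30_000L;

    private JobLockService jobLockService; // injected

    public void runExclusively(Runnable work) {
        // Throws org.alfresco.repo.lock.LockAcquisitionException if the lock is held elsewhere.
        String lockToken = jobLockService.getLock(LOCK_QNAME, LOCK_TTL_MS);
        try {
            work.run(); // e.g. read/update/delete the nodes involved
        } finally {
            jobLockService.releaseLock(lockToken, LOCK_QNAME);
        }
    }
}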