How to use control commands in a Kusto function - azure-data-explorer

I am ingesting JSON data from Event Hub into my ADX (Kusto) database table. Up to this point everything works fine. If ingestion fails due to some corrupt data, I can run the command .show ingestion failures | project <specific columns> to display the ingestion failure results. But when I try to wrap the command inside a function, it throws an error; it turns out that control commands cannot be wrapped inside a function.
Now I want to trigger a function whenever an ingestion failure happens, and send an alert email notification if possible.

One solution I can offer you is to use Microsoft Flow or an Azure Function to perform this task: periodically look for failures and send out mails (see the sketch after the update below).
There is no support for triggering anything on an ingestion failure; at high rates this is simply infeasible.
Going forward we are planning to offer additional options for monitoring ingest operations. Stay tuned.
UPDATE: it is now possible to monitor all ingestion operations using Diagnostic logs. Azure Data Explorer ingestion operations monitoring is now available in preview.
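To make the polling approach concrete, here is a minimal sketch in Python using the azure-kusto-data package. The cluster URI, credentials, database name, and the 15-minute window are placeholder assumptions; the scheduling itself would come from an Azure Function timer trigger or similar.
# Minimal sketch: poll recent ingestion failures and alert on them.
# Cluster URI, credentials and database below are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://mycluster.westeurope.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    cluster, "app-id", "app-key", "tenant-id")
client = KustoClient(kcsb)

# .show ingestion failures is a control command, so use execute_mgmt,
# not execute_query.
response = client.execute_mgmt(
    "MyDatabase",
    ".show ingestion failures | where FailedOn > ago(15m) "
    "| project FailedOn, Table, Details")

for row in response.primary_results[0]:
    # Hook your alerting (SMTP, SendGrid, Teams webhook, ...) in here.
    print(f"Ingestion failure on {row['Table']} at {row['FailedOn']}: {row['Details']}")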

Related

Does Azure Data Explorer automatically take care of transient ingestion failures in queued ingestion?

I'm using IKustoIngestClient.IngestFromStorageAsync(). I see some failures of a transient type when querying .show ingestion failures. Does Azure Data Explorer recover from these automatically?
Yes, up to a predefined number of retries and a predefined retry period.
For monitoring "final" statuses of queued ingestions, you should probably use the API described here: https://learn.microsoft.com/en-us/azure/kusto/api/netfx/kusto-ingest-client-status and demonstrated here: https://learn.microsoft.com/en-us/azure/kusto/api/netfx/kusto-ingest-queued-ingest-sample#code
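If you happen to be on Python rather than .NET, the azure-kusto-ingest package exposes the same status-reporting mechanism. A rough sketch follows; the ingest endpoint, credentials, and blob are all placeholders, and the question itself uses the .NET client, so treat this as illustrative only.
# Sketch: request queued-ingestion status reports, then read them back.
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.ingest import (
    BlobDescriptor, IngestionProperties, QueuedIngestClient,
    ReportLevel, ReportMethod)
from azure.kusto.ingest.status import KustoIngestStatusQueues

kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    "https://ingest-mycluster.westeurope.kusto.windows.net",  # placeholder
    "app-id", "app-key", "tenant-id")
client = QueuedIngestClient(kcsb)

props = IngestionProperties(
    database="MyDatabase", table="MyTable",
    report_level=ReportLevel.FailuresAndSuccesses,  # report final statuses
    report_method=ReportMethod.Queue)
blob = BlobDescriptor("https://myacc.blob.core.windows.net/c/data.json", 1024)
client.ingest_from_blob(blob, ingestion_properties=props)

# Later, drain the status queues to learn each ingestion's final outcome.
status_queues = KustoIngestStatusQueues(client)
while not status_queues.failure.is_empty():
    for msg in status_queues.failure.pop(32):
        print("ingestion failed:", msg)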

How to resolve celery.backends.rpc.BacklogLimitExceeded error

I am using Celery with Flask. After working well for a good long while, my Celery setup is now raising a celery.backends.rpc.BacklogLimitExceeded error.
My config values are below:
CELERY_BROKER_URL = 'amqp://'
CELERY_TRACK_STARTED = True
CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = False
Can anyone explain why the error is appearing and how to resolve it?
I have checked the docs here, which don't provide any resolution for the issue.
Possibly because the process consuming the results is not keeping up with the process producing them? This can result in a large number of unprocessed results building up; this is the "backlog". When the size of the backlog exceeds an arbitrary limit, Celery raises BacklogLimitExceeded.
You could try adding more consumers to process the results, or setting a shorter value for the result_expires setting (see the configuration sketch below).
The discussion on this closed celery issue may help:
Seems like the database backends would be a much better fit for this purpose.
The amqp/RPC result backends need to send one message per state update, while for the database-based backends (redis, sqla, django, mongodb, cache, etc.) every new state update overwrites the old one.
The "amqp" result backend is not recommended at all since it creates one queue per task, which is required to mimic the database based backends where multiple processes can retrieve the result.
The RPC result backend is preferred for RPC-style calls where only the process that initiated the task can retrieve the result.
But if you want persistent, multi-consumer results, you should store them in a database.
Using rabbitmq as a broker and redis for results is a great combination, but using an SQL database for results works well too.
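For illustration, here is a minimal configuration along those lines, keeping the old-style uppercase setting names the question uses; the broker and Redis hosts are placeholders.
# Sketch: RabbitMQ as the broker, Redis for results, so each state update
# overwrites the previous one instead of piling up a backlog.
CELERY_BROKER_URL = 'amqp://guest@localhost//'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
CELERY_TRACK_STARTED = True
CELERY_TASK_RESULT_EXPIRES = 3600  # result_expires: drop results after an hour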

Firebase error: TOO_MANY_TRIGGERS

In our Firebase application there is a list with lots of items in the Realtime Database. Every create, update, and delete operation on a single item is processed by a Firebase Cloud Function with an onWrite trigger (in the simplest case this function just counts items). But sometimes there is a need for a bulk operation on items without individual processing. Let's say we want to remove all items and reset the counters in a single transaction.
Earlier this worked just fine: due to the limit of 1000 Cloud Functions triggered by a single write (https://firebase.google.com/docs/database/usage/limits), no functions were triggered at all, and that was the desired outcome.
Now, without any change to the application code, we get an error:
Error: TOO_MANY_TRIGGERS: This request would cause too many functions to be triggered.
The same error appears in the client application, in the Admin API, and even when importing JSON using the web interface. The only option that works for us is processing the items in batches, but that is not transactional and takes up to tens of minutes instead of milliseconds as before.
What options do we have to bypass this error? Ideally there would be some switch to skip function triggering when the limit is exceeded.
For anybody reading this question post-2018: there is now an option to disable strict enforcement of the trigger limits.
Strict validation is enabled by default for write operations that trigger events. Any write operations that trigger more than 1000 Cloud Functions or a single event greater than 1 MB in size will fail and return an error reporting the limit that was hit. This might mean that some Cloud Functions aren't triggered at all if they fail the pre-validation.
If you're performing a larger write operation (for example, deleting your entire database), you might want to disable this validation, as the errors themselves might block the operation.
To turn off strictTriggerValidation, follow these steps:
Get your Database secret from the Service accounts tab of your Project settings in the Firebase console.
Run the following CURL request from your command line:
curl -X PUT -d "false" https://NAMESPACE.firebaseio.com/.settings/strictTriggerValidation/.json?auth=SECRET
See here for the docs: https://firebase.google.com/docs/database/usage/limits
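Once the bulk write has gone through, you will presumably want to re-enable the validation by repeating the same request with -d "true".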
There is currently no way to prevent triggers from running in special circumstances. The only way around this is to undeploy all your triggers, perform your updates, then deploy all your triggers again.
I would encourage you to file a feature request for this.
I just got this error message in an older Flutter project that I hadn't touched in quite some time.
[firebase_database/unknown] TOO_MANY_TRIGGERS: This request would cause too many functions to be
triggered.
It turned out that here it was caused by the fact that my Cloud Functions were still set to use Node v8, which was retired in early 2021.
Upgrading the Cloud Functions to use Node v12 (no other changes needed) removed the error message for me.
Turning off strictTriggerValidation solved my issue.
If you are using the Firebase CLI tool, you can follow these steps.
Turn off strictTriggerValidation for the entire project (macOS):
sudo firebase database:settings:set strictTriggerValidation false --project *my_project_id*
If you need to turn it off for a particular instance (macOS):
sudo firebase database:settings:set strictTriggerValidation false --project *my_project_id* --instance *my_instance_name*
To check the instances:
sudo firebase database:instances:list --project *my_project_id*
Note: Windows users, please try without sudo.
For your reference:
Limitations: https://firebase.google.com/docs/database/usage/limits
Firebase CLI Commands: https://firebaseopensource.com/projects/firebase/firebase-tools/

Google Dataflow writing insufficient data to datastore

One of my batch jobs failed tonight with a runtime exception. It writes data to Datastore, like the 200 other jobs that were running tonight. This one failed with a very long list of causes; the root of it should be this:
Caused by: com.google.datastore.v1.client.DatastoreException: I/O error, code=UNAVAILABLE
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:126)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:95)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:84)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.flushBatch(DatastoreV1.java:925)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.processElement(DatastoreV1.java:892)
Caused by: java.io.IOException: insufficient data written
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.close(HttpURLConnection.java:3501)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:81)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:87)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:84)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.flushBatch(DatastoreV1.java:925)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.processElement(DatastoreV1.java:892)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:188)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.
How can this happen? It's very similar to all the other jobs I run. I am using Dataflow version 1.9.0 and the standard DatastoreIO.v1().write....
The job IDs with this error message:
2017-08-29_17_05_19-6961364220840664744
2017-08-29_16_40_46-15665765683196208095
Is it possible to retrieve the errors/logs of a job from an outside application (not the Cloud Console), so that jobs can automatically be restarted when they fail for temporary reasons such as quota issues but would usually succeed?
Thanks in advance
This is most likely because DatastoreIO is trying to write more mutations in one RPC call than the Datastore RPC size limit allows. This is data-dependent: presumably the data for this job differs somewhat from the data for the other jobs. In any case, this issue was fixed in 2.1.0, so updating your SDK should help.
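As for the side question about checking on jobs from outside the Cloud Console: the Dataflow REST API (v1b3) exposes job state and job messages, so an external process can poll a job and resubmit it after transient failures. A rough Python sketch follows, assuming the google-api-python-client package and application-default credentials; the project id is a placeholder.
# Sketch: check a Dataflow job's state and error messages via the REST API.
from googleapiclient.discovery import build

dataflow = build('dataflow', 'v1b3')
job = dataflow.projects().jobs().get(
    projectId='my-project',  # placeholder
    jobId='2017-08-29_17_05_19-6961364220840664744').execute()

if job['currentState'] == 'JOB_STATE_FAILED':
    # Inspect the error messages to decide whether a retry makes sense.
    messages = dataflow.projects().jobs().messages().list(
        projectId='my-project', jobId=job['id'],
        minimumImportance='JOB_MESSAGE_ERROR').execute()
    for m in messages.get('jobMessages', []):
        print(m.get('messageText'))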

How can I monitor the icCube server and data via an external tool

I'd like to put icCube under solid monitoring so that we know when a) a cube load fails or b) the cube's last update time exceeds what is expected.
Is there an API I can use to integrate with standard monitoring tools? REST, command line, etc.
Thanks in advance, assaf
Regarding the schema load failure, you can check the notification service (www); you can, for example, receive an email on failure. Note that you can implement (in Java) your own transport service to receive notifications. There is no "notification" for the last update time being exceeded, but you could use an external LOAD command (www) for loading your schema; in that case you will know the last update time and can perform whatever logic is required.
Edit: XMLA commands can be sent via any tool (e.g., Bash); a rough sketch follows below.
Hope that helps.
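To sketch what such an external check could look like: XMLA is SOAP over HTTP, so a monitoring script can POST an Execute request and alert on a fault. The endpoint URL, credentials, and statement below are placeholder assumptions to adapt from the icCube documentation, not confirmed values.
# Rough sketch: POST an XMLA Execute request to icCube and check the response.
import requests

XMLA_URL = "http://localhost:8282/icCube/xmla"  # placeholder endpoint

xmla_body = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <Statement>SELECT FROM [MySchema]</Statement>
      </Command>
      <Properties/>
    </Execute>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(XMLA_URL, data=xmla_body.encode("utf-8"),
                     headers={"Content-Type": "text/xml"},
                     auth=("admin", "admin"), timeout=30)  # placeholder credentials
if resp.status_code != 200 or "Fault" in resp.text:
    print("icCube XMLA check failed:", resp.status_code)  # hook alerting here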
