I have the following snippet
DatastoreV1.Query q = DatastoreV1.Query.newBuilder()
    .addKind(DatastoreV1.KindExpression.newBuilder()
        .setName("Example").build())
    .setLimit(100)
    .build();

PCollection<Example> examples = pipeline
    .apply(DatastoreIO.readFrom("example", q));
and over 1,000 entities are returned from Datastore despite the limit of 100. I am using Dataflow SDK 1.3.0.
This was a bug in the Dataflow SDK; the fix has been pushed to GitHub. Note the following: "when this limit is set the read from Cloud Datastore is performed by a single worker rather than executing in parallel across a cluster." Thank you for your patience!
Update: the fix has since been released in 1.4.0.
I have many v1 Cloud Functions, written in TypeScript and running on Node.js 18, deployed with firebase deploy --only functions. Completely serverless.
If I open Cloud Trace (https://console.cloud.google.com/traces), I find all my functions, but there are no traces, just one single long bar (yes, of course, I am not collecting them...).
Now, my question is: can I add a tracing mechanism to my functions so that I can see detailed traces? How would I go about that?
I found this guide, https://cloud.google.com/trace/docs/setup/nodejs-ot, but it targets either Compute Engine or Google Kubernetes Engine, and Firebase Functions are neither of these...
Also, I know that sometimes the function instance is disposed of and sometimes it is not, as Google tries to reuse resources (indeed, a second call to one of my functions within a minute is far faster than the initial call). In that case, wouldn't this "recycling" be an issue for tracing?
Edit: what I did (thanks to Doug for your comment).
I haven't set up anything at all for tracing; my question is whether it is possible and how to do it.
The only documentation I found, as said before, is https://cloud.google.com/trace/docs/setup/nodejs-ot, and it seems that this doc does not apply to my case (I am on neither Compute Engine nor Google Kubernetes Engine). I have also searched the documentation mentioned by Doug (https://cloud.google.com/trace), but I could not find a practical step-by-step implementation guide.
So, if you want to reproduce my behaviour:
firebase init a project
then write a function definition (a minimal sketch follows this list)
deploy it with firebase deploy --only functions
go to Trace on Google Cloud Platform: https://console.cloud.google.com/traces
observe that the function, however complex it is (it calls different subfunctions, makes multiple other HTTP requests, etc.), has no detailed trace, just one single span as depicted in the picture above
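For step 2, a minimal 1st-gen HTTPS function is enough to reproduce this. A rough sketch, assuming TypeScript; the function name helloTrace and the URL it calls are placeholders, not from the original post:

import * as functions from 'firebase-functions';

// Hypothetical v1 function: does some work and makes an outgoing HTTP call,
// yet Cloud Trace still shows it as one single span with no children.
export const helloTrace = functions.https.onRequest(async (req, res) => {
  const response = await fetch('https://example.com/some-endpoint'); // Node 18 global fetch
  const body = await response.text();
  res.send(`fetched ${body.length} bytes`);
});

Deploying this with firebase deploy --only functions and calling it reproduces the single-span bar described above.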
In our Firebase application there is a list with lots of items in the Realtime Database. Every create, update, and delete operation on a single item is processed by a Firebase Cloud Function with an onWrite trigger (in the simplest case this function just counts items). But sometimes there is a need for a bulk operation on the items without individual processing. Let's say we want to remove all items and reset the counters in a single transaction.
Earlier this worked just fine: due to the limit of 1,000 Cloud Functions triggered by a single write (https://firebase.google.com/docs/database/usage/limits), no functions were triggered at all, which was the desired outcome.
Now, without any change to the application code, we get this error:
Error: TOO_MANY_TRIGGERS: This request would cause too many functions to be triggered.
The same error appears in the client application, in the Admin API, and even when importing JSON using the web interface. The only option that works for us is processing the items in batches (see the sketch after this question), but it is not transactional and takes up to tens of minutes instead of milliseconds as before.
What options do we have to bypass this error? Ideally there would be some switch to skip function triggering when the limit is exceeded.
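For context, the batch-processing workaround mentioned above might look roughly like this with the Admin SDK; the items path and the batch size of 500 are assumptions, not values from the question:

import * as admin from 'firebase-admin';

admin.initializeApp();

// Delete the list in chunks small enough to stay under the 1,000-trigger limit.
async function deleteItemsInBatches(batchSize = 500): Promise<void> {
  const ref = admin.database().ref('items'); // assumed path
  while (true) {
    const snapshot = await ref.limitToFirst(batchSize).once('value');
    if (!snapshot.exists()) break;
    const updates: Record<string, null> = {};
    snapshot.forEach(child => { updates[child.key as string] = null; });
    await ref.update(updates); // each update() triggers at most batchSize functions
  }
}

Each update() is atomic on its own, but the overall deletion is not, which is exactly the drawback described above.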
For anybody reading this question post-2018, there is now an option to disable strict enforcement for trigger limits.
Strict validation is enabled by default for write operations that trigger events. Any write operations that trigger more than 1000 Cloud Functions or a single event greater than 1 MB in size will fail and return an error reporting the limit that was hit. This might mean that some Cloud Functions aren't triggered at all if they fail the pre-validation.
If you're performing a larger write operation (for example, deleting your entire database), you might want to disable this validation, as the errors themselves might block the operation.
To turn off strictTriggerValidation, follow these steps:
Get your Database secret from the Service accounts tab of your Project settings in the Firebase console.
Run the following CURL request from your command line:
curl -X PUT -d "false" "https://NAMESPACE.firebaseio.com/.settings/strictTriggerValidation/.json?auth=SECRET"
See here for the docs: https://firebase.google.com/docs/database/usage/limits
There is currently no way to prevent triggers from running in special circumstances. The only way around this is to undeploy all your triggers, perform your updates, then deploy all your triggers again.
I would encourage you to file a feature request for this.
I just got this error message in an older Flutter project that I hadn't touched in quite some time:
[firebase_database/unknown] TOO_MANY_TRIGGERS: This request would cause too many functions to be triggered.
It turned out that in my case it was caused by my Cloud Functions still being set to use Node v8, which was retired in early 2021.
Upgrading the Cloud Functions to use Node v12 (no other changes needed) removed the error message for me.
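For reference, the Node runtime of 1st-gen functions is declared in the functions' package.json, so (assuming that standard layout) the upgrade amounts to bumping the engines field and redeploying:

"engines": {
  "node": "12"
}

i.e. change this in functions/package.json and run firebase deploy --only functions again.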
Turning off strictTriggerValidation solved my issue.
If you are using the Firebase CLI, you can follow these steps.
Turn off strictTriggerValidation for the entire project:
Mac:
sudo firebase database:settings:set strictTriggerValidation false --project *my_project_id*
If you need to turn it off for a particular instance:
Mac:
sudo firebase database:settings:set strictTriggerValidation false --project *my_project_id* --instance *my_instance_name*
Check your instances:
sudo firebase database:instances:list --project *my_project_id*
Note: Windows users, please try without sudo.
FYR:
Limitations: https://firebase.google.com/docs/database/usage/limits
Firebase CLI Commands: https://firebaseopensource.com/projects/firebase/firebase-tools/
Today, after a simple deploy of some adjustments (without installing new features), my AngularFire application is suffering a lot of delay. When I checked the console log I saw this message on the login screen, without any operation or request having been made.
Can somebody give me a hint?
Firestore (4.9.0) 2018-05-22T22:05:09.569Z: FirebaseError: [code=resource-exhausted]: Quota exceeded.
Firestore (4.9.0) 2018-05-22T22:04:17.606Z: Using maximum backoff delay to prevent overloading the backend.
Problem solved after uninstalling old, unused functions:
npm uninstall firebase-functions
I am also facing the same error in AngularFire 2. It looks like there are two possible options to solve this issue:
1) Use Firebase Cloud Functions: move your code there, call the function from Angular, and retrieve the data (a rough sketch follows this list).
2) Write a Python script and execute it using tools like PyCharm.
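A minimal sketch of option 1, assuming a 1st-gen callable function; the function name getData and the items collection are placeholders, not from the original post:

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Hypothetical callable: reads the data server-side so the client makes a single request.
export const getData = functions.https.onCall(async (data, context) => {
  const snapshot = await admin.firestore().collection('items').limit(50).get();
  return snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
});

On the Angular side you would then invoke it with, for example, AngularFireFunctions' httpsCallable('getData') and subscribe to the result.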
One of my batch jobs failed tonight with a runtime exception. It writes data to Datastore, like 200 other jobs that were running tonight. This one failed with a very long list of causes, the root of which should be this:
Caused by: com.google.datastore.v1.client.DatastoreException: I/O error, code=UNAVAILABLE
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:126)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:95)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:84)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.flushBatch(DatastoreV1.java:925)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.processElement(DatastoreV1.java:892)
Caused by: java.io.IOException: insufficient data written
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.close(HttpURLConnection.java:3501)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:81)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:87)
at com.google.datastore.v1.client.Datastore.commit(Datastore.java:84)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.flushBatch(DatastoreV1.java:925)
at com.google.cloud.dataflow.sdk.io.datastore.DatastoreV1$DatastoreWriterFn.processElement(DatastoreV1.java:892)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:188)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.
How can this happen? It's very similar to all the other jobs I run. I am using Dataflow version 1.9.0 and the standard DatastoreIO.v1().write....
The jobIds with this error message:
2017-08-29_17_05_19-6961364220840664744
2017-08-29_16_40_46-15665765683196208095
Is it possible to retrieve the errors/logs of a job from an outside application (not the Cloud Console), so that jobs can automatically be restarted when they fail because of quota issues or other temporary reasons but would normally succeed?
Thanks in advance
This is most likely because DatastoreIO is trying to write more mutations in one RPC call than the Datastore RPC size limit allows. This is data-dependent; I suppose the data for this job differs somewhat from the data for the other jobs. In any case, this issue was fixed in 2.1.0, so updating your SDK should help.
I'm using Google Cloud Functions to:
1. Watch for a new Firebase entry
2. Download a file that's referenced in the Firebase entry
3. Generate a thumbnail based on that file
4. Upload the thumbnail to the cloud bucket
Unfortunately I'm getting ECONNRESET errors repeatedly on step 4, and the only way to fix it seems to be to redeploy the function. Any ideas how to further debug this?
Edit:
It seems that many times when this happens, when I try to deploy the function again, it errors and I have to run the deploy twice. Is something hanging?
Update (May 9, 2017):
According to this thread, the Google Cloud Node.js API developers have made some changes to the defaults used at initialization that should solve these ECONNRESET socket issues.
From @stephen++ on GitHub, GoogleCloudPlatform/google-cloud-node issue 2254:
We have disabled the forever agent by default in Cloud Function environments. If you un- and re-install @google-cloud/storage, you will pick up the new behavior automatically. Thanks for all of the helpful debugging, everyone!
Older Post Follows:
The solution for me, for similar ECONNRESET issues using Storage on the Cloud Functions platform, was to use npm:promise-retry, but set up your own retry strategy because the default of 10 retries is too many.
I reported an ECONNRESET issue with cloud functions to Google Support (which you might star if you are also getting ECONNRESET in this context but not in other contexts) and they replied with a "won't fix" that the behavior is expected. Google support said the socket that the API client library uses to connect times out after a few minutes, and then when your cloud function tries to use it again you get ECONNRESET. They recommended adding autoRetry:true when initializing the storage API, but that did not help.
The ECONNRESETs happen on the read side too. In both read and write cases promise-retry helps, and most of the time with only 1 retry needed to reset the bad socket.
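A minimal sketch of that retry wrapper around an upload, kept in the same older require-style client as the snippet below; the bucket and file names are placeholders, and the retry count of 3 is just one possible strategy:

const promiseRetry = require('promise-retry');
const storage = require('@google-cloud/storage')();

// Wrap the upload in promise-retry so a stale socket (ECONNRESET) just triggers
// another attempt; 3 retries instead of the default 10.
function uploadWithRetry(localPath, bucketName, destination) {
  return promiseRetry((retry, attempt) => {
    console.log('upload attempt', attempt);
    return storage
      .bucket(bucketName)
      .upload(localPath, { destination })
      .catch(retry); // ECONNRESET and friends land here and schedule a retry
  }, { retries: 3 });
}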
So I wrote npm:pipe-to-storage to return a promise to do the retries, check md5, etc., but I haven't tested it with binary data, only text, so I don't know if you could use it with image files. The calls would look like this:
const fs = require('fs');
const storage = require('@google-cloud/storage')();
const pipeToStorage = require('pipe-to-storage')(storage);
const source = ()=>(fs.createReadStream("/path/to/your/file/to/upload"));
pipeToStorage(source, bucketName, fileNameInBucket).then(/* do next step */);
See also How do I read the contents of a new cloud storage file of type .json from within a cloud function?
You can directly report a bug to the Firebase Support team, or open a support ticket with Firebase to troubleshoot a specific issue.
You may also report a Cloud Functions specific issue in the Google Issue Tracker, which is similar to Stack Overflow in that it is accessible by the public (but specifically used for filing issue reports).