I'm getting an error like this: Cloud Firestore 'Oops, collections failed to load!' [duplicate]

I'm building a small chat app with expo, connected to Firestore. Here is the code to fetch the chat data:
useEffect(() => {
  console.log("Loading snapShots on firebase");
  const unsubscribe = db.collection('chats').onSnapshot(snapshot =>
    setChats(snapshot.docs.map(doc => ({
      id: doc.id,
      data: doc.data()
    })))
  );
  setTimeout(() => {
    unsubscribe();
  }, 1000);
}, []);
If I followed the documentation correctly, this code is supposed to close the snapshot listener after one second. Even if it does, I still get a [FirebaseError: Quota exceeded.] message, yet my app is very small and so is its data.

Firebase quotas are reset daily at midnight (Pacific time). Depending on your timezone, the reset may fall at a different local time; if you're located in Europe, it actually lands in the middle of the day. So if you reach the daily limit, there is nothing you can do but wait until the "next" day, or upgrade to the Blaze plan.
But remember, once you get the quota exceeded message, your project will not be accessible until the quotas are reset.
As @Dharmaraj also mentioned in his comment, you might consider using a get() call instead of listening for real-time changes. That way you read the data a single time, and no listener stays attached that keeps consuming reads.
Please also remember not to keep your Firebase console open, as it is considered another Firestore client that reads data, so you'll also be billed for the reads coming from the console.
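For reference, here is a minimal sketch of that one-time read, assuming the same v8-style web SDK and 'chats' collection as in the question:

// Read the collection once; no listener remains attached afterwards.
db.collection('chats')
  .get()
  .then(snapshot => {
    setChats(snapshot.docs.map(doc => ({
      id: doc.id,
      data: doc.data()
    })));
  })
  .catch(err => console.error(err));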

Related

If I implement onSnapshot real-time listener to Firestore in Cloud Function will it cost more?

I have a listener for Firestore DB changes, and it fetches automatically every time there is a change. However, if I decide to implement it in a Cloud Function and call it from the client app, will it cost more because it will be running 24/7, even when users are not using the app?
This is in Client side:
firestore()
  .collection('Collection').doc().collection('public')
  .where('act', '==', 1)
  .orderBy('time', 'asc')
  .limit(10)
  .onSnapshot({
    error: (e) => this.setState({ errorMessage: e, loading: false }),
    next: (querySnapshot) => { this._calculateLocationDistance(querySnapshot) },
  });
Moreover, is it necessary to do it in a Cloud Function? Is it risky if I leave it on the client side?
You can't really use listeners effectively in Cloud Functions. Cloud Functions are meant to be stateless. They serve a single request at a time, and clean up afterward. If you try to use a listener, it just won't work the way you expect. Cloud Functions also don't keep a socket open to the requester. Once a response is sent, the connection is closed, and there's no way to keep it open.
Given these constraints, functions typically just use get() to fetch data a single time, and return the results to the client. If you want realtime results, that should be implemented on the client.
If you are working with a backend that can keep a socket connection open to a client, it is no less expensive to have a listener on the backend that delivers results to the client. You are still charged a document read for each document read by the listener as it continues to receive results.
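As a sketch of that typical pattern, here is an HTTPS callable function that performs a one-time get() and returns the documents to the caller. The query fields are carried over from the question; the function name is illustrative, and the collectionGroup() query is assumed as a stand-in for the question's subcollection path:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Fetch the ten oldest active documents once and return them to the client.
exports.getPublicDocs = functions.https.onCall(async () => {
  const snap = await admin.firestore()
    .collectionGroup('public')
    .where('act', '==', 1)
    .orderBy('time', 'asc')
    .limit(10)
    .get();
  return snap.docs.map(doc => ({ id: doc.id, ...doc.data() }));
});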

Firestore Deadline Exceeded Node

I would like to load a collection that is ~30k records, i.e. load it via:
const db = admin.firestore();
let documentsArray: Array<{}> = [];
db.collection(collection)
  .get()
  .then(snap => {
    snap.forEach(doc => {
      documentsArray.push(doc);
    });
  })
  .catch(err => console.log(err));
This will always throw a Deadline Exceeded error. I have searched for some sort of mechanism that will allow me to paginate through the collection, but I find it unbelievable not to be able to query for such a modest amount of data in one go.
I was thinking that I was hitting the limit because of my rather slow machine, but then I deployed a simple Express app that would do the fetching to App Engine and still had no luck.
Alternatively I could export the collection with gcloud beta firestore export, but it does not provide JSON data.
I'm not sure about Firestore, but on Datastore I was never able to fetch that much data in one shot; I'd always have to fetch pages of about 1000 records at a time and build the result up in memory before processing it. You said:
I have searched for some sort of mechanism that will allow me to paginate through
Perhaps you missed this page:
https://cloud.google.com/firestore/docs/query-data/query-cursors
In the end the issue was that the machine processing the 30k records from Firestore was not powerful enough to get the data in time. Solved by using a GCE n1-standard-4 instance.
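For anyone who still needs to page through the data, here is a minimal sketch of cursor-based pagination with the Admin SDK, in the spirit of the linked page; the page size of 1000 and the ordering by document ID are assumptions, not from the question:

// Fetch a large collection in pages using query cursors.
async function fetchAll(collectionName, pageSize = 1000) {
  const db = admin.firestore();
  const docs = [];
  let last = null;
  while (true) {
    let query = db.collection(collectionName)
      .orderBy('__name__') // order by document ID for a stable cursor
      .limit(pageSize);
    if (last) {
      query = query.startAfter(last);
    }
    const snap = await query.get();
    if (snap.empty) {
      break;
    }
    docs.push(...snap.docs);
    last = snap.docs[snap.docs.length - 1];
  }
  return docs;
}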

Firebase: First write is slow

I'm currently developing a hybrid mobile app using Ionic. When the app starts up and a user writes to the Realtime Database for the first time, the write is always delayed by around 10 or more seconds, but any subsequent writes are almost instantaneous (less than 1 second).
My estimate of the delay is based on watching the database in the Firebase console.
Is this a known issue, or am I maybe doing something wrong? Please share your views.
EDIT:
The write is happening via a Firebase Cloud Function.
This is the call to the Firebase Cloud Function:
this.http.post(url + "/favouritesAndNotes", obj, this.httpOptions)
  .subscribe((data) => {
    console.log(data);
  }, (error) => {
    console.log(error);
  });
This is the actual function:
app.post('/favouritesAndNotes', async (request, response) => {
  const db = admin.database().ref("users/" + request.body.uid);
  const favourites = request.body.favourites;
  const notes = request.body.notes;
  const writes = [];
  if (favourites !== undefined) {
    writes.push(db.child("favourites").set(favourites));
  }
  if (notes !== undefined) {
    writes.push(db.child("notes").set(notes));
  }
  // Wait for the writes to complete before ending the request; otherwise
  // the function may terminate while the writes are still pending.
  await Promise.all(writes);
  console.log("Write successful");
  response.status(200).end();
});
The first time you interact with the Firebase Database in a client instance, the client/SDK has to do quite a few things:
If you're using authentication, it needs to check if the token that it has is still valid, and if not refresh it.
It needs to find the server that the database is currently hosted on.
It needs to establish a web socket connection.
Each of these may take multiple round trips, so even if you're a few hundred ms from the servers, it adds up.
Subsequent operations from the same client don't have to perform these steps, so are going to be much faster.
If you want to see what's actually happening, I recommend checking the Network tab of your browser. For the realtime database specifically, I recommend checking the WS/Web Socket panel of the Network tab, where you can see the actual data frames.

Firebase: Cloud Functions, How to Cache a Firestore Document Snapshot

I have a Firebase Cloud Function that I call directly from my app. The function fetches a collection of Firestore documents, iterates over each of them, and returns a result.
My question is: would it be best to keep the result of that fetch/get in memory (on the Node server), refreshed with .onSnapshot? It seems this would improve performance, as my cloud function would not have to wait for the Firestore response (it would already have the collection in memory). How would I do this? Is it as simple as populating a global variable? And how do I set up an .onSnapshot realtime listener with Cloud Functions?
It might depend on how large these snapshots are and how many of them may be cached, because the temporary directory is a RAM disk, and without housekeeping the cache might only work for a limited time:
Always delete temporary files
Local disk storage in the temporary directory is an in-memory file-system. Files that you write consume memory available to your function, and sometimes persist between invocations. Failing to explicitly delete these files may eventually lead to an out-of-memory error and a subsequent cold start.
Source: Cloud Functions - Tips & Tricks.
The page does not say what exactly the hard limit would be, and caching elsewhere might not improve access time that much. It mentions 2048 MB per function by default, while one can raise the quotas under IAM & admin. It all depends on whether the quota per function can be raised far enough to handle the cache.
Here's an example of the .onSnapshot() listener:
// for a single document:
var doc = db.collection('cities').doc('SF');
// this also works for multiple documents:
// var docs = db.collection('cities').where('state', '==', 'CA');

var observer = doc.onSnapshot(docSnapshot => {
  console.log(`Received doc snapshot: ${docSnapshot}`);
}, err => {
  console.log(`Encountered error: ${err}`);
});

// unsubscribe, to stop listening for changes:
var unsub = db.collection('cities').onSnapshot(() => {});
unsub();
Source: Get realtime updates with Cloud Firestore.
Cloud Firestore Triggers might be another option.
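As for populating a global variable: module-scope state does survive between invocations on a warm instance, though that is never guaranteed. A minimal sketch, assuming the Admin SDK in a Node function; the 'items' collection and 60-second TTL are illustrative:

const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

let cache = null;          // survives only while the instance stays warm
let cachedAt = 0;          // time of the last refresh
const TTL_MS = 60 * 1000;  // refetch after one minute

async function getItems() {
  if (!cache || Date.now() - cachedAt > TTL_MS) {
    const snap = await db.collection('items').get();
    cache = snap.docs.map(doc => ({ id: doc.id, ...doc.data() }));
    cachedAt = Date.now();
  }
  return cache;
}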

Where to sync redux store and firebase?

I recently started learning React and Redux and am confused about how to sync the data. I'm using create-react-app and redux-thunk.
Suppose I'm making a todo list. Should I add a task to Redux and then upload the entire store to Firebase using store.subscribe in index.js?
Or should I update the Firebase store first and then update the Redux store? I think this would be the better method, as the application would move only in response to the Firebase store. But this approach slows the application re-render, because the data change has to wait for the async request to finish, which isn't good on a slow internet connection and would make the user experience feel slow.
Like this:
export const startAddTaskAction = (task) => {
  return (dispatch, getState) => {
    database.ref('tasks').push(task)
      .then(() => {
        dispatch(addTaskAction(task));
      });
  };
};
Or do I update both simultaneously, dispatching to the Redux store and writing to Firebase? But then what if the user's internet connection fails? Firebase won't be able to complete the write, but Redux would show the task as saved.
Like this:
export const startAddTaskAction = (task) => {
  return (dispatch, getState) => {
    database.ref('tasks').push(task);
    dispatch(addTaskAction(task));
  };
};
Which way should I do this, and why is it better than the others?
Go with the first one: you should wait until the data has been pushed into Firebase. That is the better practice here.
Let me explain why you should do it this way:
If there is no connection at all, the user appears to be able to add all his tasks, but when he reloads the page he will be surprised to find there is no data at all.
For all CRUD operations we rely on an _id that is generated by the database; if you don't hit the server, you don't even get that data back at all.
Rather than giving users unpredictable results, it's better to let them wait: put a time limit on the API call and, if it is exceeded, show a message that the network connection is slow, as in the sketch below.
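Here is a hedged sketch of that pattern applied to the question's first thunk, assuming redux-thunk and the RTDB web SDK; the 10-second limit and the addTaskErrorAction creator are illustrative, not from the question:

export const startAddTaskAction = (task) => {
  return (dispatch) => {
    // Reject if the write takes longer than 10 seconds.
    const timeout = new Promise((_, reject) =>
      setTimeout(() => reject(new Error('Network connection is slow')), 10000)
    );
    return Promise.race([database.ref('tasks').push(task), timeout])
      .then(() => dispatch(addTaskAction(task)))
      .catch((err) => dispatch(addTaskErrorAction(err.message)));
  };
};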
