I'm really interested in the synchronized database paradigm championed by Firebase and others (Couchbase Sync Gateway for example). It really does a great job in replacing 80% of what an API does, which is storing and retrieving data. But usually, that's not all an API does. While we are storing and retrieving data, we are also doing non-data related stuff like sending emails or push notifications. To do those things, I should be able to intercept data changes and do something when a new record is created, when an existing record is changed in a certain way, or when a record is deleted. Parse has a great mechanism for that in its Cloud Code (https://parse.com/docs/cloud_code_guide#functions-aftersave) but I couldn't find something similar in Firebase. Did I miss something or am I thinking of it the wrong way?
firebaseRef.on('child_changed', function(childSnapshot, prevChildKey) {
  // code to handle child data changes.
});
The relevant documentation for the child_changed event is here: https://www.firebase.com/docs/web/api/query/on.html
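If the goal is to react on the server (sending emails or push notifications) rather than in the browser, the same child_* listeners can run in a small Node process with the Admin SDK. A minimal sketch, assuming the firebase-admin package and a placeholder database URL; the notification calls are stand-ins for your own logic:

const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://your-app.firebaseio.com' // placeholder
});

const usersRef = admin.database().ref('users');

// Fires once per existing child, then again whenever a new record is created.
usersRef.on('child_added', (snapshot) => {
  console.log('new user, send welcome email:', snapshot.key); // stand-in for email/push logic
});

// Fires whenever an existing record is changed.
usersRef.on('child_changed', (snapshot) => {
  console.log('user changed:', snapshot.key);
});

// Fires whenever a record is deleted.
usersRef.on('child_removed', (snapshot) => {
  console.log('user removed:', snapshot.key);
});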
Background: I am using Firestore as the main database for my (web) application. I also pre-render the data stored in there, which basically means that I collect all data needed for specific requests so I can later fetch them in a single read access, and I store that pre-rendered data in a separate Firestore collection.
When a user changes some data, I want to know when this background rendering is finished, so I can then show updated data. Until rendering is finished, I want to show a loading indicator ("spinner") so the user knows that what he is currently looking at is outdated data.
Until now, I planned to have the application write the changed data into the database and use a cloud function to propagate the changed data to the collection of pre-rendered data. This poses a problem because the writing application only knows when the original write access is finished, but not when the re-rendering is finished, so it doesn't know when to update its views. I can hook into the collection of rendered views to get an update when the rendering is finished, but that callback won't be notified if nothing visibly changes, so I still do not know when to remove the spinner.
My second idea was to have the renderer function publish to a pubsub topic, but this fails because if the user's request happens to leave the original data unchanged, the onUpdate/renderer is not called, so nothing gets published on the pubsub and again the client does not know when to remove the spinner.
In both cases, I could theoretically first fetch the data and look if something changed, but I feel that this too easily introduces subtle bugs.
My final idea was to disallow direct writes to the database and have all write actions be performed through cloud functions instead, that is, more like a classical backend. These functions could then run the renderer and only send a response (or publish to a pubsub) when the renderer is finished. But this has two new problems: First, these functions have full write access to the whole database and I'm back to checking the user's permissions manually like in a classical backend, not being able to make use of Firestore's rules for permissions. Second, in this approach the renderer won't get before/after snapshots automatically like it would get for onUpdate, so I'm back to fetching each record before updating so the renderer knows what changed and won't re-render huge parts of the database that were not actually affected at all.
Ideally, what (I think) I need is either
(1) a way to know when a write access to the database has finished including the onUpdate trigger, or
(2) a way to have onUpdate called for a write access that didn't actually change the database (all updated fields were updated to the values they already contained).
Is there any way to do this in Firestore / cloud functions?
You could increment a counter in the rendered documents, so that a field always changes even if there is no change to the "meaningful" fields.
For that, the best option is to use FieldValue.increment.
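A minimal sketch of how that could look in the renderer function, assuming it runs as a Cloud Function on a hypothetical posts collection and writes into a rendered collection; buildRenderedView() is a placeholder for your own rendering logic:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.renderPost = functions.firestore
  .document('posts/{postId}')
  .onUpdate(async (change, context) => {
    const rendered = buildRenderedView(change.after.data()); // placeholder

    await admin.firestore()
      .collection('rendered')
      .doc(context.params.postId)
      .set({
        ...rendered,
        // The counter changes on every render, so a listener on the rendered
        // document always fires, even when the visible fields are identical.
        revision: admin.firestore.FieldValue.increment(1),
      }, { merge: true });
  });

The client can then listen to the rendered document and remove the spinner as soon as the revision field increases.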
So I have a Firestore database where I keep all my posts. I push them into the store on page load so I can have fast navigation and only need to fetch once.
What I want to do now is use persisted state so I don't need to refetch it if the user opens a new window or reloads (F5) the page.
The problem is I'm not sure how to check whether there are new posts in Firestore without querying all posts, and I haven't found any method to do it in a healthy, read-efficient way.
There's no super easy way around it - in the end you have some data, the server has other data, and you need to check for the differences.
If you're only trying to figure out whether there are new posts on the backend that are not loaded on your frontend, then just get the date of your last post and ask Firebase for all posts after that date :)
Of course if you don't have posts, ask for everything.
Keep in mind you need to manually check if posts are deleted ;)
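A minimal sketch of that idea, assuming the namespaced web SDK, a posts collection, and a createdAt timestamp field on each post:

function fetchNewPosts(lastKnownCreatedAt) {
  const postsRef = firebase.firestore().collection('posts');

  // No cached posts yet: fetch everything once.
  const query = lastKnownCreatedAt
    ? postsRef.where('createdAt', '>', lastKnownCreatedAt).orderBy('createdAt')
    : postsRef.orderBy('createdAt');

  return query.get().then((snapshot) =>
    snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }))
  );
}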
Realtime updates with the onSnapshot method can be used to keep local data in sync with the server. If you initially load it into Vuex, then subsequent changes on the server side will be reflected automatically.
https://firebase.google.com/docs/firestore/query-data/listen
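A minimal sketch of that approach, assuming the namespaced web SDK and a hypothetical Vuex mutation called applyPostChange:

firebase.firestore().collection('posts').onSnapshot((snapshot) => {
  snapshot.docChanges().forEach((change) => {
    // change.type is 'added', 'modified' or 'removed', so deletions are covered too.
    store.commit('applyPostChange', {
      type: change.type,
      id: change.doc.id,
      data: change.doc.data(),
    });
  });
});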
To share one set of data across tabs/windows you could look at something like this
https://github.com/xanf/vuex-shared-mutations
I'm working on an iOS app which has (whoah surprise!) chat functionality. The whole app heavily uses the Firebase tools; for the database I'm using the new Cloud Firestore solution.
Currently I'm in the process of tightening the security using the database rules, but I'm struggling a bit with my own data model :) This could mean that my data model is poorly chosen, but I'm really happy with it, except for implementing the rules part.
The conversation part of the model looks like this. At the root of my database I have a conversations collection:
/conversations/$conversationId
  - owner      // id of the user that created the conversation
  - ts         // timestamp when the conversation was created
  - members: {
      $user_id_1: true   // usually the same as 'owner'
      $user_id_2: true   // the other person in this conversation
      ...
    }
  - memberInfo: {
      // some extra info about user typing, names, last message etc.
      ...
    }
And then I have a subcollection on each conversation called messages. A message document is very simple and just holds information about each sent message.
/conversations/$conversationId/messages/$messageId
  - body
  - sender
  - ts
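For reference, writing a message under this model looks roughly like this (a sketch using the JavaScript web SDK for brevity; the iOS SDK mirrors these calls, and conversationId is a placeholder):

firebase.firestore()
  .collection('conversations').doc(conversationId)
  .collection('messages')
  .add({
    body: 'Hello!',
    sender: firebase.auth().currentUser.uid,
    ts: firebase.firestore.FieldValue.serverTimestamp(),
  });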
The rules on the conversation documents are fairly straightforward and easy to implement:
match /conversations/{conversationId} {
  allow read, write: if resource.data.members[(request.auth.uid)] == true;

  match /messages/{messageId} {
    allow read, write: if get(/databases/$(database)/documents/conversations/$(conversationId)).data.members[(request.auth.uid)] == true;
  }
}
Problem
My problem is with the messages subcollection in that conversation. The above works, but I don’t like using the get() call in there.
Each get() call performs a read action, and therefore affects my bill at the end of the month, see documentation.
Which might become a problem if the app I'm building becomes a success. The document reads are of course really minimal, but doing one every time a user opens a conversation seems a bit inefficient. I really like the subcollection solution in my model, but I'm not sure how to implement the rules efficiently here.
I'm open to any data model change; my goal is to evaluate the rules without these get() calls. Any idea is very welcome.
Honestly, I think you're okay with your structure and get call as-is. Here's why:
If you're fetching a bunch of documents in a subcollection, Cloud Firestore is usually smart enough to cache values as needed. For example, if you were to ask to fetch all 200 items in "conversations/chat_abc/messages", Cloud Firestore would only perform that get() operation once and re-use it for the entire batch operation. So you'll end up with 201 reads, not 400.
As a general philosophy, I'm not a fan of optimizing for pricing in your security rules. Yes, you can end up with one or two extra reads per operation, but it's probably not going to cause you trouble the same way, say, a poorly written Cloud Function might. Those are the areas where you're better off optimizing.
If you want to save those extra reads, you can actually implement a "cache" based on custom claims.
You can, for example, save the chats the user has access to in the custom claims under a "conversations" object. Keep in mind that custom claims have a 1000-byte limit, as mentioned in the documentation.
One workaround to the limit is to just save the most recent conversations in the custom claims, like the top 50. Then in the security rules you can do this:
allow read, write: if request.auth.token.conversations[conversationId] || get(/databases/$(database)/documents/conversations/$(conversationId)).data.members[(request.auth.uid)] == true;
This is especially great if you're already using Cloud Functions to moderate messages after they're posted; all you need to do is update the custom claims.
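A minimal sketch of such a function, assuming the Admin SDK; fetchRecentConversationIds() is a hypothetical helper that returns the user's ~50 most recent conversation ids:

const admin = require('firebase-admin');

async function updateConversationClaims(uid) {
  const ids = await fetchRecentConversationIds(uid); // hypothetical helper

  const conversations = {};
  ids.forEach((id) => { conversations[id] = true; });

  // Keep the payload small: custom claims are limited to 1000 bytes.
  await admin.auth().setCustomUserClaims(uid, { conversations });
}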
For writing data to the Firebase Database I use setValue() in my Android app.
My question is: can a value change if, at the same time, I change that value using the Admin API?
All writes to the database from all clients are ordered. It doesn't matter if it's from a client app or the Admin SDK. If there are two database clients trying to write different values to the same location in the database, the last writer in the order overwrites the previous value, which is then what all the other clients will eventually see.
If you want to decide what to do in the event of a conflict like this, you can use a transaction to make sure that each client gets to know exactly what the prior data was, and what the new data will be. This is how you make things like a counter safe to increment when there are lots of writers trying to increment it.
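For example, a minimal sketch of a transaction-safe counter with the Node Admin SDK (the Android client has an equivalent runTransaction()); the database URL and the path stats/messageCount are just placeholders:

const admin = require('firebase-admin');
admin.initializeApp({ databaseURL: 'https://your-app.firebaseio.com' }); // placeholder

admin.database().ref('stats/messageCount').transaction((current) => {
  // "current" is the last value this client saw (or null); return the new value to write.
  return (current || 0) + 1;
});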
In the Firebase example (https://gist.github.com/anantn/4323981), to add a user to the game, we attach the transaction method to playerListRef. Now, every time Firebase attempts to update the data, it will call the callback passed to the transaction method with the list of user ids of all players. If my game supports thousands of users joining at a time, then every time this method executes, the entire user list will be downloaded and passed, which will be bad.
If this is true, what is the recommended way to assign users then?
This is specifically what Firebase was designed to handle. If your application needs to actually assign player numbers, this example is the way to go. Otherwise, if the players just need to be in the same "game" or "room" without any notion of ordering, you could remove the transaction code to speed things up a bit. The snippet as well as the backend can handle the number of concurrent connections you've mentioned. If you're seeing any specific problems with your code, or behavior with Firebase that appears to be a bug, please contact us at support@firebase.com and we can dig into it.
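To illustrate the two options, a rough sketch with the JavaScript SDK (gameId and userId are placeholders):

const gameRef = firebase.database().ref('games/' + gameId);

// Players only need to be in the same room, no ordering: a plain write is enough.
gameRef.child('players/' + userId).set(true);

// Players need assigned numbers: a transaction keeps the counter consistent
// when many users join at the same time.
gameRef.child('playerCount').transaction((count) => (count || 0) + 1);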