Streaming updates with RTK-Query mutation and updating cache - redux

I'm using sockets to allow users to send messages in real-time. I read the RTK-Query documentation and saw an example for a query, where I would be fetching data, as opposed to mutating/sending data.
Is there any difference in implementation between fetching and posting when using streaming/sockets? I tried to understand how it works with a query and apply that to a mutation, but it didn't quite work.
I am struggling to get this to work, so any help would be appreciated.
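
For what it's worth, here is a minimal sketch of how this could look, based on the streaming-updates pattern from the RTK Query docs. The socket.io client, the endpoint names, and the idea of sending from a mutation's queryFn are my assumptions, not a documented recipe:

import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react";
import { io } from "socket.io-client";

const socket = io("wss://example.com"); // hypothetical server URL

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: "/api" }),
  endpoints: (build) => ({
    // Query side: fetch the initial list, then stream new messages into the cache.
    getMessages: build.query({
      query: () => "messages",
      async onCacheEntryAdded(
        arg,
        { updateCachedData, cacheDataLoaded, cacheEntryRemoved }
      ) {
        await cacheDataLoaded; // wait until the initial fetch has populated the cache
        const listener = (message) => {
          updateCachedData((draft) => {
            draft.push(message);
          });
        };
        socket.on("message", listener);
        await cacheEntryRemoved; // unsubscribe when the cache entry is dropped
        socket.off("message", listener);
      },
    }),
    // Mutation side: no cache lifecycle needed; just send over the socket.
    sendMessage: build.mutation({
      queryFn: (message) => {
        socket.emit("message", message);
        return { data: message };
      },
    }),
  }),
});

export const { useGetMessagesQuery, useSendMessageMutation } = api;

With this split, the sent message comes back through the socket and lands in the query's cache via the listener, so the mutation itself never has to touch the cache.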

Related

Redux Error: Application state or actions payloads are too large - what is the issue?

Application state or actions payloads are too large making Redux DevTools serialization slow and consuming a lot of memory. See https://git.io/fpcP5 on how to configure it.
I am getting the error above when working with Redux Toolkit RTK Query and Entity Adapters.
I fetch data with the query and use setAll to normalise it. I also receive data from a websocket and update the normalised data with the new data, JSON-parsing it first.
I am looking to find out what the issue is. Is the action payload too large? The list of items is not that big, so it's probably not.
Any help would be appreciated.
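
For reference, a minimal sketch of the setup described above; the adapter name, the /items endpoint, and the websocket URL are hypothetical:

import { createEntityAdapter } from "@reduxjs/toolkit";
import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react";

const itemsAdapter = createEntityAdapter();

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: "/api" }),
  endpoints: (build) => ({
    getItems: build.query({
      query: () => "items",
      // setAll normalises the fetched list into { ids, entities }.
      transformResponse: (response) =>
        itemsAdapter.setAll(itemsAdapter.getInitialState(), response),
      async onCacheEntryAdded(
        arg,
        { updateCachedData, cacheDataLoaded, cacheEntryRemoved }
      ) {
        const ws = new WebSocket("wss://example.com"); // hypothetical URL
        await cacheDataLoaded;
        ws.addEventListener("message", (event) => {
          const item = JSON.parse(event.data); // parse before updating, as described
          updateCachedData((draft) => {
            itemsAdapter.upsertOne(draft, item);
          });
        });
        await cacheEntryRemoved;
        ws.close();
      },
    }),
  }),
});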

Does Firebase Realtime Database guarantee FCFS order when serving requests?

This is a rather straightforward question.
Does Firebase Realtime Database guarantee to follow a 'first come, first served' rule when handling requests?
For instance, when a write request is immediately followed by a read request, will the read request fetch the updated data?
When there is a write-read-write sequence of requests, is the read request guaranteed to fetch the data written by the first write?
Suppose there is a write request that could not be performed (due to some connection issue). As Firebase can work offline, that change will be saved locally. Now, from somewhere else, another write request is made and completes successfully. After this, if the first device comes online, does it modify the values (since its write arrived last), or not (since it was initiated before the latest changes)?
There are a lot of questions in your post, and many of them depend on how you implement the functionality. So it's not nearly as straightforward as you may think.
The best I can do is explain a bit of how the database works in the scenarios you mention. If you run into more questions from there, I recommend implementing the use-case and posting back with an MCVE for each specific question.
Writes from a single client are handled in the order in which that client makes them.
But writes from different clients are handled with last-write-wins logic. If your use-case requires something else, include a client-side timestamp in the write and use security rules to reject writes that are older than the current state (a sketch of this follows below).
Firebase synchronizes state to the listeners, and not necessarily all (write) events that led to this state. So it is possible (and fairly common) for listeners to not get all state changes that happened, for example if multiple changes to the same state happened while they were offline.
A read on a client of data that the client itself has changed will always see the state including its own changes.
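
As a rough sketch of the timestamp idea mentioned above (the paths and the rule are illustrative, not a drop-in ruleset):

// Client side: attach a timestamp the security rules can compare against.
import { getDatabase, ref, set } from "firebase/database";

const db = getDatabase();
set(ref(db, "items/item1"), {
  value: "new value",
  updatedAt: Date.now(), // client-side timestamp, as suggested above
});

// Database security rules (shown here as a comment) could then reject stale writes:
// ".write": "newData.child('updatedAt').val() > data.child('updatedAt').val()"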

Firebase - optimizing read / write

Consider a chat-like implementation where clients write using a transaction on the head and read using an on('child_added') listener.
When a client writes, it will also receive a read of the same version it sent, which means a redundant transfer of that version from the database. In the case of only one connected client typing, for example, all responses to the listener would be redundant.
I tried to optimize this by turning the listener off before writing and turning it back on when the write ended, with a startAt(new head). This way I don't get the redundant read of the location that was sent.
This all works fine, but I now don't know whether the cost of removing and re-adding the listener may be high as well. What is the best strategy here?
Firebase automatically optimizes it for you. This is pretty much the standard use case; it's what Firebase was designed for. The best strategy is to leave the listener on. Let Firebase do its thing.
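
For illustration, a minimal sketch of the pattern the answer endorses, with a single persistent listener (push() stands in for the transaction-on-head write, and the path is hypothetical):

import { getDatabase, ref, push, onChildAdded } from "firebase/database";

const db = getDatabase();
const messagesRef = ref(db, "messages");

// Read: one long-lived listener; fires for each existing child, then for new ones.
onChildAdded(messagesRef, (snapshot) => {
  console.log("message:", snapshot.key, snapshot.val());
});

// Write: leave the listener attached; the local child_added event for your own
// write fires immediately from the pending write (latency compensation).
push(messagesRef, { text: "hello", sentAt: Date.now() });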

Firebase Database Migration

Coming from a SQL background, I'm wondering how one goes about doing database migrations in Firebase?
Assume I have the following data in Firebase: {dateFrom: 2015-11-11, timeFrom: 09:00} .... and now the front-end client will store and expect data in the form {dateTimeFrom: 2015-11-11T09:00:00-07:00}. How do I update Firebase so that all dateFrom: xxxx and timeFrom: yyyy are removed and replaced with dateTimeFrom: xxxxyyyy? Thanks.
You have to create your own script that reads the data, transforms it, and writes it back. You can either read one node at a time or read the whole DB if it is not big. You could also decide to leave the transformation logic in your client, applied when it accesses the data (if it ever does).
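
A rough sketch of such a one-off script using the firebase-admin SDK; the "bookings" path and the fixed -07:00 offset are assumptions:

const admin = require("firebase-admin");

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: "https://<your-project>.firebaseio.com",
});

const db = admin.database();

async function migrate() {
  // Read everything once; fine for a small DB, otherwise page node by node.
  const snapshot = await db.ref("bookings").once("value");
  const updates = {};

  snapshot.forEach((child) => {
    const { dateFrom, timeFrom } = child.val();
    if (dateFrom && timeFrom) {
      updates[`${child.key}/dateTimeFrom`] = `${dateFrom}T${timeFrom}:00-07:00`;
      updates[`${child.key}/dateFrom`] = null; // writing null deletes the field
      updates[`${child.key}/timeFrom`] = null;
    }
  });

  // A single multi-path update applies the whole transform atomically.
  await db.ref("bookings").update(updates);
}

migrate().then(() => process.exit());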
I think you are looking for this: https://github.com/kevlened/fireway
I think it is a bad idea to pollute a project with conditionals that update data on the fly.
It is a shame Firestore doesn't implement a process for this, as it is very common and required to keep the app and DB in sync.
FWIW, since I'm using Swift and there isn't a solution like Fireway (that I know of), I've submitted a feature request to the Firebase team that they've accepted as a potential feature.
You can also submit a DB migration feature request to increase the likelihood that they create the feature.

When a Firebase node syncs, is the full new value sent to the server or just the difference?

And does it work the same way when an object is being updated via a callback like ref.on('value', ...?
I tried to figure it out myself in the Chrome dev tools but wasn't able to.
This makes a difference for me because I'm working on an app where users might store large amounts of text. If only diffs are sent over the wire, it's a lot more lightweight and I can sync much more frequently. If full values are sent, I wouldn't want to do that.
When data is written, the Firebase client currently sends all the data being written to the server. If you write a large object and then rewrite the whole object with the same object again, the entire object will be sent over the wire (this may change in the future, but that's the current implementation).
When sending data from the server back out to other clients, we do perform some optimization and don't transmit some of the duplicate data.
Firebase is designed to allow you to granularly access data. I would strongly suggest you address into the data that is changing and only update the relevant portions. For example:
// Inefficient: rewrites the entire object, so all of it is sent over the wire.
ref.set(HUGE_BLOCK_OF_JSON);

// Efficient: addresses into the changed location, so only that piece is sent.
ref.child("a").child("b").child("c").set(SOME_SMALL_PIECE_OF_DATA);
When you address into a piece of data, only that small piece is transmitted and rebroadcast to other clients.
Firebase is intended for true real-time apps where updates are made as soon as data changes. If you find yourself intentionally caching changes for a while and saving them as big blobs for performance reasons, you should probably be breaking up your data and only writing the relevant portions.