Redux Store: What goes into data, what goes into UI? - redux

It seems to be common practice to have separate stores for data and ui, respectively, e.g.
store
|
+--- data
|    |
|    +--- foo.reducer.ts
|    +--- ...
|
+--- ui
     |
     +--- bar.reducer.ts
     +--- ...
I'm wondering how to tell data from ui; there seems to be no clear distinction. For example, I might want to store the state of my sidebar (open, floating, pinned, etc.), which clearly goes into ui. Then there are things such as the app bar title, which also appears to be a pure UI concern, although it has some relation to the data displayed in the view.
Next, I might have a list of objects (e.g. contacts) to choose from, where I'd like to track which one is selected, etc. While this is still UI state, these objects come from a data store, and I might want to change the selected item (for instance, select a contact, pop up a dialog and edit the name of that contact).
Are there any hard and fast rules or guidelines that I can follow or am I left to my own devices?

I wouldn't even bother making a distinction between the two. It'll be clear from your variable names.
isSidebarOpen
contacts
selectedContact
isLoading
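To illustrate that point, here is a minimal sketch of a flat state shape where the names alone make the data/UI split obvious. The `Contact` type and field names are assumptions for the example, not from the original post:

```typescript
// Hypothetical contact type for the example.
interface Contact {
  id: number;
  name: string;
}

interface AppState {
  contacts: Contact[];              // data: comes from the server
  selectedContactId: number | null; // UI selection, references data by id
  isSidebarOpen: boolean;           // pure UI concern
  isLoading: boolean;               // UI concern tied to data fetching
}

const initialState: AppState = {
  contacts: [],
  selectedContactId: null,
  isSidebarOpen: false,
  isLoading: false,
};
```

Whether `selectedContactId` is "data" or "ui" hardly matters; its name says what it is.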

Placing state in the store vs UI in Redux apps
This is a subjective question and is difficult to answer, but there are some guidelines you can follow; experience is the best teacher on this topic. That said, the best practice is to:
Keep as much data as possible within your Redux store unless it is inconvenient to do so.
Global data that is shared by components in your application, or that mutates in complex ways, belongs in the store. Since the majority of the state in your app fits these criteria, most of it should live in the store.
By following this guideline and endeavouring to keep all application state in your store, you will stay true to the single source of truth principle of Redux. This principle holds that
The state of your whole application is stored in an object tree within
a single store.
Data that matters globally, or that can be mutated in a complex manner, should go in the store. If data is used by more than one part of your UI, then it matters globally.
Ephemeral data that is only relevant to one small, isolated part of the UI and doesn't mutate in a complicated way (such as whether some part of the UI is toggled) can stay in local UI state.
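The data/ui split from the question can be sketched as two reducer slices combined into one root reducer. This is a hand-rolled sketch (no Redux import); the action type strings are illustrative assumptions:

```typescript
type Action = { type: string; payload?: unknown };

interface DataState { contacts: string[] }
interface UiState { isSidebarOpen: boolean }

// data slice: global data shared across components.
function dataReducer(state: DataState = { contacts: [] }, action: Action): DataState {
  switch (action.type) {
    case "contacts/loaded":
      return { ...state, contacts: action.payload as string[] };
    default:
      return state;
  }
}

// ui slice: ephemeral toggles local to the interface.
function uiReducer(state: UiState = { isSidebarOpen: false }, action: Action): UiState {
  switch (action.type) {
    case "ui/toggleSidebar":
      return { ...state, isSidebarOpen: !state.isSidebarOpen };
    default:
      return state;
  }
}

// Root reducer: equivalent to combineReducers({ data, ui }).
function rootReducer(
  state: { data: DataState; ui: UiState } | undefined,
  action: Action
) {
  return {
    data: dataReducer(state?.data, action),
    ui: uiReducer(state?.ui, action),
  };
}
```

Either way, both slices live in the single store, so the single-source-of-truth principle is preserved.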

Related

How to handle Complex Object in the ngrx?

Hello,
I'm working on a project in which the main part of the data has a complex structure as you can see in the above picture.
Now, the object in reality is much more complex than that, but what I showed serves the purpose.
Because in the DB they are linked together in table relationships, the first time the website is launched, after log-in, a list of projects will come back together with some small details of technology and dataObjects.
I created separate action and effects files, but everything is handled by a single reducer. What I mean is: at the start, the list of projects is saved in a state, then any other action (create a project, technology or data object; edit; delete) has to operate over that same "projects-state".
For example, besides technologyAPIS there will be another 3-4 technologies, and inside each technology object there will be another list of objects.
The issue here is that the reducer file keeps getting bigger and bigger as it handles all the actions that operate over specific pieces of that state. It is important that the chain of objects stays together.
My question is: is this a bad approach? Can it be handled in a different way? I know I can create a reducer for each entity (project, technology, data app), but then I would lose the relationship between them, where one belongs to the other.
Thank you so much for your future response.
I've only been doing reactive/NgRx for a few months now, but from my understanding, that's definitely a bad approach. It should still work, but it may be a hassle to debug/maintain.
NgRx seems to promote "normalising" data, just like the usual relational database concept.
You could break down your project-state into smaller states, with relational keys in them.
Example
Projects state (not to be confused with project-state):
id: number
projectDescription: string
createdBy: string
techAPIs: number[] // where each entry is the id of a TechApi
TechApi state:
id: number
otherInfo: any
And then when you need to access the TechApi state for a project, you retrieve it and filter/map it in a selector.
This is a somewhat general example, in case my explanation is not clear.
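The relational-key idea above can be sketched like this: each slice holds its entities in a map keyed by id, and a selector joins them on demand. The field names follow the example; the state shape and selector are assumptions:

```typescript
interface Project {
  id: number;
  projectDescription: string;
  createdBy: string;
  techAPIs: number[]; // ids into the TechApi collection
}

interface TechApi {
  id: number;
  otherInfo: any;
}

interface State {
  projects: { [id: number]: Project };
  techApis: { [id: number]: TechApi };
}

// Selector: resolve a project's TechApi ids to the full objects.
// The relationship survives normalization because the ids are kept.
function selectTechApisForProject(state: State, projectId: number): TechApi[] {
  const project = state.projects[projectId];
  if (!project) return [];
  return project.techAPIs
    .map((id) => state.techApis[id])
    .filter((t): t is TechApi => t !== undefined);
}
```

Each entity type can now have its own small reducer, and updating one TechApi touches only the `techApis` slice.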

something in ngrx (redux pattern) than I still dont get for large applications

I've been building data-driven applications for about 18 years, and for the past two I've been successfully using Angular for my large forms/CRUD-based apps. You know, the classic SQL Server DB with hundreds of tables and millions of records. So far, so good.
Now I'm porting/re-engineering a desktop app with about 50 forms, all complex, all fully functional, "smart". My approach for the last couple years was to simply work tightly with the backend rest API to retrieve, insert or update data as needed and everything works fine.
Then I stumbled across ngrx and I understand exactly how it works, what it does and why it is good for a "reactive" app.
My problem is the following: in the usual lifecycle of the kind of systems I mentioned, you always have to deal with fresh data and always have to tell everything to the server. Almost no data in such apps can be safely "stored" locally, since transactional systems rely on centralized data interactions. There's no such thing as "hey, let's keep this employee's sales here for later use".
So why would it be so important to manage a local 'store' when most of my data is volatile? I understand why it would be useful for global app data like the user profile or general UI-related state, but for the core data itself? I don't get it. You query for data, plug that data into the form, it gets processed by the user and sent back to the server. That data is no longer needed, and if you do need it, you ask for it again, as it could have changed its state since the last time you interacted with it.
I don't understand the great lengths I have to go to to maintain a local store, and all the boilerplate, if that state is so volatile.
They say change detection does not scale, but I've built some really large web apps with a simple "http service" pattern and it works just fine, because most of the component tree is destroyed anyway as you go somewhere else in the app, and any previous subscriptions become useless. Even with large, bulky forms, the inner workings of a form are never such a big problem as to require external "aid" from a store. The way I see it, the state of a form is a concern of that form in that moment alone. Is it to keep the component tree in sync? I never had problems with that before... even for complicated trees with lots of shared data, master-detail is a fairly flat pattern in the end if all the data is there.
For other components, such as grids, charts, reports, etc., the same thing applies. They get the data they need and then "poof", gone.
So now you see my mindset. I AM trying to change it to something better. What am I missing about the redux pattern?
I have a bit of experience here! It's all subjective, so what I've done may not suit you. My system is a complex system that sounds like it's on a similar scale as yours. I battled at first with the same issues of "why build complex logic on the front end and back end", and "why bother keeping stuff in state".
A redux/NGRX approach works for me because there are multiple ways data can be changed - perhaps it's a single user using the front end, perhaps it's another user making a change and I want to respond to that change straight away to avoid concurrency issues down the track. Perhaps there are multiple parts within my front end that can manipulate the same data.
At the back end, I use a CQRS pattern instead of a traditional REST API. Typically, one might suggest re-implementing the commands/queries on the client to "reduce" changes to the state, but I opted for a different approach. I don't just want to send a big object graph back to the server and have it blindly insert, and I don't want to re-implement logic on both the client and the server.
My basic "use case" life cycle looks a bit like:
Load a list of data (limited size, not all attributes).
User selects item from list
Client requests "full" object/view/dto from server
Client stores response in object entity state
User starts modifying data
These changes are stored as "in progress" changes in a different part of state. The system is now responding to the data in the "in progress" part
If another change comes in from server, it doesn't overwrite the "in progress" data, but it does replace what is in the object entity state.
If required, UI shows that the underlying data has changed / is different to what user has entered / whatever.
User clicks on the "perform action" button, or otherwise triggers a command to be sent to server
The server performs the command and returns any errors, or success.
server notifies client that change was successful, the client clears the "in progress" information
server notifies client that Entity X has been updated, client re-requests entity X and puts it into the object entity state. This notification is sent to all connected clients, so they can all behave appropriately.
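Steps 4-7 of that lifecycle hinge on keeping the canonical entity and the "in progress" draft in separate parts of state. Here is a minimal sketch of that split; the `Entity` shape and function names are assumptions, not the poster's actual code:

```typescript
interface Entity { id: number; name: string }

interface EditState {
  entities: { [id: number]: Entity };       // canonical copies from the server
  inProgress: { [id: number]: Partial<Entity> }; // unsaved user edits
}

// A server push replaces the canonical copy only; the draft survives,
// so the UI can flag that the underlying data has changed.
function serverUpdated(state: EditState, entity: Entity): EditState {
  return { ...state, entities: { ...state.entities, [entity.id]: entity } };
}

// Once the server confirms the command, the draft is cleared.
function commandSucceeded(state: EditState, id: number): EditState {
  const { [id]: _, ...rest } = state.inProgress;
  return { ...state, inProgress: rest };
}
```

The point is that a concurrent change from another client never clobbers what the user is typing, yet the fresh data is still in the store to compare against.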

Redux: is state normalization necessary for composition relationship?

As we know, when saving data in a redux store, it's supposed to be transformed into a normalized state. So embedded objects should be replaced by their ids and saved within a dedicated collection in the store.
I am wondering, if that also should be done if the relationship is a composition? That means, the embedded data isn't of any use outside of the parent object.
In my case the embedded objects are registrations, and the parent object is a (real life) event. Normalizing this data structure to me feels like a lot of boilerplate without any benefit.
State normalization is more than just how you access the data by traversing the object tree. It also has to do with how you observe the data.
Part of the reason for normalization is to avoid unnecessary change notifications. Objects are treated as immutable, so when one changes, a new object is created, and a quick reference check can indicate whether something in the object changed. If you nest objects and a child object changes, then you must also change the parent. Any code observing the parent will then get change notifications every time a child changes, even though it might not care. So depending on your scenario, you may end up with a bunch of unnecessary change notifications.
This is also partly why you see lists of entities broken out into an array of identifiers and a map of objects. In relation to change detection, this allows you to observe the list (whether items have been added or removed) without caring about changes to the entities themselves.
So it depends on your usage. Just be aware of the cost of observing and the impact your state shape has on that.
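The ids-plus-map point can be shown with a reference check. In this sketch (state shape and update function are assumptions), editing an entity leaves the `ids` array reference untouched, so an observer of the list alone is not notified:

```typescript
interface NormalizedState {
  ids: number[];
  entities: { [id: number]: { id: number; name: string } };
}

// Immutable update of one entity's name. The ids array is reused,
// so `next.ids === prev.ids` and list observers can skip work.
function updateName(state: NormalizedState, id: number, name: string): NormalizedState {
  return {
    ...state,
    ids: state.ids,
    entities: { ...state.entities, [id]: { ...state.entities[id], name } },
  };
}
```

Had the entity been nested inside a list element, the whole list would have needed a new reference, and every list observer would fire.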
I don't agree that data is "supposed to be" normalized. Normalizing is a useful structure for accessing the data, but you're the architect, and it's your decision to make.
In many cases, the data stored will be an application singleton, and a descriptive key is more useful than forcing some kind of id.
In your case I wouldn't bother unless there is excessive data duplication, especially because you would then have to denormalize for the object to function properly.

How to set up async processes with Vuex in any specific order

First let's understand the app:
The sample app mocks grabbing data from 2 sources: an array of available objects, and an array of objects being used.
The app also displays new objects (available ones not being used).
Finally, it also allows you to use (register) one of the new objects.
Requirements:
The app needs to display the list of New objects first
The app needs to minimize the number of API calls to the bare minimum and only make calls when strictly necessary
Calls to produce changes in API data (register) should be reactive and display changes in UI immediately
The code I have implemented meets these 3 requirements. However, I'm really unhappy with this implementation, and I'm sure that's not the way the Vuex store is supposed to be used.
For starters, my implementation only works for the specific order in which the components are displayed in the screen:
<new name="New" :selected="true"></new>
<available name="Available"></available>
<using name="Using"></using>
If I, for example, want to move <available> to the last tab, the code will break.
This happens because I haven't been able to simply call dispatch('getNews') once and have everything else fall into place, without at the same time duplicating one or two API calls, and thus failing the requirements...
I tried using dispatch('...').then().then() but I haven't been able to make it work and meet the requirements.
I would greatly appreciate anyone with experience in similar situations with Vuex tell me how they'd do this.
Bonus if you can do it without adding extra mutations.
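One way to decouple the call order from component order is to make each fetch action idempotent (it skips the API call when its data is already in state) and have a single entry action await both. This is a framework-free sketch of that pattern; `fetchAvailable`, `fetchUsing`, and the store shape are assumptions, not the poster's actual Vuex code:

```typescript
interface StoreState {
  available: string[] | null; // null = not yet fetched
  using: string[] | null;
}

const state: StoreState = { available: null, using: null };
let apiCalls = 0; // counts real API hits, to verify no duplication

async function fetchAvailable(): Promise<void> {
  if (state.available !== null) return; // already loaded: no extra call
  apiCalls++;
  state.available = await Promise.resolve(["a", "b", "c"]); // mock API
}

async function fetchUsing(): Promise<void> {
  if (state.using !== null) return;
  apiCalls++;
  state.using = await Promise.resolve(["b"]); // mock API
}

// The one entry point: order no longer depends on which component
// mounts first, because both prerequisites are awaited here.
async function getNews(): Promise<string[]> {
  await Promise.all([fetchAvailable(), fetchUsing()]);
  return state.available!.filter((o) => !state.using!.includes(o));
}
```

In Vuex terms, the guards would read from state inside the actions, and `dispatch('getNews')` would await the two prerequisite dispatches; any component can then trigger it safely.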

How to realize persistence of a complex graph with an Object Database?

I have several graphs. The breadth and depth of each graph can vary, and each will undergo changes and alterations during runtime. See the example graph.
There is a root node to get hold of the whole graph (i.e. tree). A node can have several children, and each child serves a special purpose. Furthermore, a node can access all its direct children in order to retrieve certain information. On the other hand, a child node may not be aware of its own parent node, nor of its siblings. Nothing spectacular so far.
Storing each graph and updating it with an object database (in this case DB4O) looks pretty straightforward. I could have used a relational database to accomplish data persistence (including database triggers, etc.) but I wanted to realize it with an object database instead.
There is one peculiar thing with my graphs. See another example graph.
To properly perform calculations, some nodes require information from other nodes. These other nodes may be siblings, children/grandchildren, or related in some other way. In this case the specific node knows the other relevant nodes as well (and thus can get the required information directly from them). For the sake of simplicity, the first image didn't show all potential connections.
If one node has a change of state (e.g. triggered by an internal timer or by some other node), it will inform other nodes (interested observers; see also the observer pattern) about the change. Each informed node will then take appropriate actions to update its own state (and in turn inform other observers as needed). The root node will not know about every change that occurs, since only the involved nodes will know that something has changed. If such a chain of events is triggered by the root node, then of course it's not much of an issue.
The aim is to assure data persistence with an object database. Data in memory should be in sync with data stored within the database. What adds to the complexity is the fact that the graphs don't consist of simple (and stupid) data nodes; lots of functionality is integrated into each node (i.e. events that trigger state changes throughout a graph).
I have several rough ideas on how to cope with the presented issue (e.g. (1) a stronger separation of data and functionality, (2) a stronger integration of the database, or (3) setting an arbitrary time interval to update data and accepting that data may be out of sync for a period of time). I'm looking for some more input and options concerning such a key issue (which will definitely leave significant footprints on a concrete implementation).
(edited)
There is another aspect I forgot to mention. A graph should not reside in memory all the time. Graphs that are not needed will only be present in the database and thus put into a state of suspension. This is another issue that needs consideration: while in suspension, the update mechanisms will probably be put to sleep as well, and this is not intended.
In the case of db4o check out "transparent activation" to automatically load objects on demand as you traverse the graph (this way the graph doesn't have to be all in memory) and check out "transparent persistence" to allow each node to persist itself after a state change.
http://www.gamlor.info/wordpress/2009/12/db4o-transparent-persistence/
Moreover you can use db4o "callbacks" to trigger custom behavior during db4o operations.
HTH
German
What's the exact question? Here are a few comments:
As #German already mentioned: for complex object graphs you probably want to use transparent persistence.
Also as #German mentioned: callbacks can help you do additional stuff when objects are read/written etc. on the database.
On the observer pattern: are you on .NET or Java? Usually you don't want to store the observers in the database, since the observers are typically part of your business logic, GUI, etc. On .NET, events are automatically not stored. On Java, make sure that you mark the field holding the observer references as transient.
In case you actually want to store observers, for example because they are just other elements in your object graph: on .NET, you cannot store delegates/closures, so you need to introduce an interface for calling the observer. On Java, anonymous inner classes are often used as listeners; while db4o can store those, I would NOT recommend it, because an anonymous inner class gets a generated name which can change. Then db4o will not find that class later if you've changed your code.
That's it. Ask more detailed questions if you want to know more.
