Ngrx complex state reducer - ngrx

I struggle to find the right way to mutate my state in an ngrx application, as the state is rather complex and depends on many factors. This question is not about getting one piece of code correct but about how to design such software in general: what are the dos and don'ts when you find yourself reaching for hacky solutions and workarounds.
The app 'evolved' over time and I want to share this process in an abstracted way to make my point clear:
Stage 1
The state contains entities. Those represent nodes in a tree and are linked by ids. Modifying or adding an entity requires a check of the type of nodes the new/modified ones should be connected with. It might also be that, upon modifying a node, other nodes have to be updated.
The solution was: create functions that do the job and call them right in the reducer, so everything is always up to date and synchronous when used (there are services that might modify state).
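As an abstract sketch of what stage 1 looks like (the node shape, the attachNode helper and the action name are placeholders, not the actual app code), a pure helper performs the consistency checks and is called synchronously from the reducer:

interface TreeNode {
  id: string;
  parentId: string | null;
  type: string;
}

interface NodesState {
  entities: { [id: string]: TreeNode };
}

// Pure helper that does the job: validate the link, update related nodes.
function attachNode(state: NodesState, node: TreeNode): NodesState {
  const parent = node.parentId ? state.entities[node.parentId] : null;
  if (node.parentId && !parent) {
    return state; // invalid link - reject the change
  }
  // ...checks on parent.type and updates to sibling nodes would go here
  return { ...state, entities: { ...state.entities, [node.id]: node } };
}

export function nodesReducer(state: NodesState = { entities: {} }, action: any): NodesState {
  switch (action.type) {
    case 'ADD_NODE':
      return attachNode(state, action.payload);
    default:
      return state;
  }
}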
Stage 2
A configuration is added to the state that has an impact on the way the automatically modified nodes are modified/created. This configuration is saved in its own state slice right under the root state.
The solution:
1) Modify the actions to also take the required data from the configuration.
2) Modify the places where the actions are created/dispatched (add some ugly
this.state.select(fromRoot.getX)
  .first()
  .subscribe(element => this.state.dispatch(new Action({ ...old_payload, newPayload: element })));
wrapper around the dispatch-calls)
3) Modify the functions doing the node modification, and
4) Add the argument passing to the function calls inside the reducer.
Stage 3
Now I'm asked to add yet another configuration to the process, also retrieved from the backend and also saved in another state slice right under the root state.
State now looks like:
root
|__nodes
|__config_1
|__config_2
I was just about to repeat the steps from stage 2, but the actions get really big with all the data passed in, and the functions have to carry around a lot of data. This seems wrong, since I dispatch the action against the very state that already contains all the needed info.
How can I handle this correctly?
Some ideas I already had:
Use Effects: they are able to get everything they need from the state, so I only need to dispatch an action with the action's own payload; the effect can then grab everything else from the state (see the sketch below). I don't like this idea because it triggers asynchronous tasks to modify the state and adds actions that don't change state themselves.
Use a service: with a service holding state it would be much like Effects, but without using actions just to create asynchronous calls which then dispatch the actions that really change state.
Do all the stuff in the component: at the moment the components are kept pretty simple when it comes to changing state, as I prefer the idea that actions carry as little data as possible, since reducers can access the state to get their data - but this is where the problem occurs: this time I can't get my hands on the data I need.
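For reference, the Effects idea mentioned above typically looks something like the following sketch. The selector and action names (getConfig1, getConfig2, ModifyNodeWithConfig) are placeholders, and it uses the newer createEffect API; older ngrx versions use the @Effect() decorator instead. The effect enriches the lean action with the configuration slices from the store and emits a 'full' action that the reducer handles:

import { Injectable } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { Store } from '@ngrx/store';
import { map, withLatestFrom } from 'rxjs/operators';
// Placeholders - substitute your own state interface, selectors and actions.
import * as fromRoot from './reducers';
import { MODIFY_NODE, ModifyNode, ModifyNodeWithConfig } from './node.actions';

@Injectable()
export class NodeEffects {
  // Take the lean ModifyNode action, enrich it with both configs from the
  // store, and emit the "full" action that the reducer actually handles.
  modifyNode$ = createEffect(() =>
    this.actions$.pipe(
      ofType<ModifyNode>(MODIFY_NODE),
      withLatestFrom(
        this.store.select(fromRoot.getConfig1),
        this.store.select(fromRoot.getConfig2)
      ),
      map(([action, config1, config2]) =>
        new ModifyNodeWithConfig({ ...action.payload, config1, config2 })
      )
    )
  );

  constructor(private actions$: Actions, private store: Store<fromRoot.State>) {}
}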

Related

Two conflicting long lived process managers

Let's assume we have two long-lived process managers. Both sagas operate over 10 million items, for example. The first saga adds something to each item. The second saga removes it from each item. Given that both process managers need a few minutes to complete their job, if I run them simultaneously I get into trouble.
Some of those items would hold the value while the rest would not. The result is actually close to random and depends on the order of the commands that affect a particular item. I wondered if redispatching the "Remove" command in case of failure would solve the problem: if you try to remove a non-existing value, you should wait for the first saga to add it. But while the process managers are working, someone else may dispatch a "Remove" or "Add" command. In such a case my approach would fail.
How may I solve such problem? :)
It seems that you would want the second saga not to run while the first saga is running (and presumably not until whatever process depends on the values the first saga added has seen them). So the apparent solution would be to have a component (it could be a microservice, or a record in a strongly consistent datastore like zookeeper/etcd/consul) that gives permission for the sagas to start executing. An example protocol might look like:
Saga sends a message to the component identifying the saga and conveying the intention to start
Component validates that no sagas might be running which would prevent this saga from running
Component responds with permission to start running
Subsequent saga attempts result in rejection until the running saga tells the component it's OK to run the other saga
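A minimal in-memory sketch of such a permission component (class and method names are mine; a real one would persist its decision in the consistent datastore mentioned above):

type SagaId = string;

// Hypothetical permission component. In production this state would live in a
// strongly consistent store rather than in memory.
class SagaCoordinator {
  private runningSaga: SagaId | null = null;

  // Steps 1-3: a saga announces its intention; we validate and respond.
  requestStart(saga: SagaId): boolean {
    if (this.runningSaga !== null) {
      return false; // step 4: reject while another saga holds the permission
    }
    this.runningSaga = saga;
    return true;
  }

  // The running saga reports completion, releasing the permission.
  reportFinished(saga: SagaId): void {
    if (this.runningSaga === saga) {
      this.runningSaga = null;
    }
  }
}

// Usage: the second saga keeps being rejected until the first one finishes.
const coordinator = new SagaCoordinator();
coordinator.requestStart('add-saga');     // true - may run
coordinator.requestStart('remove-saga');  // false - must wait
coordinator.reportFinished('add-saga');
coordinator.requestStart('remove-saga');  // true - now it may run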
Assuming that this component is reliably durable, the failure mode to worry about is that permission is granted but this component never processes the message that the saga finished (causes of this could include the permission message not getting delivered/processed or the saga crashing). No amount of acknowledgements or extra messages can solve this (it's basically the Two Generals' Problem).
A mitigation is to have this component (or something watching this component) alert if it seems that too much time has passed without saga completion. Whatever/whoever is responsible for ensuring liveness would then investigate to see if the saga is still running and if none is running, inform the component that it's OK to run the other saga. Note that this is not foolproof: it's quite possible for the decider in question to make what turns out to be the wrong decision.
I feel like I need more context. Whilst you don't say it explicitly, is the problem that the second saga tries to remove values that haven't been added by the first?
If that was true, a simple solution would be to just use a third state.
What I mean by that is to just define and declare item state more explicitly. You currently seem to have two states, with value and without value, but nothing to indicate whether an item is ready to be processed by the second saga because the first saga has already done its work on the item in question.
So all that needs to happen is that the second saga keeps looking for items where:
(with_value == true & ready_for_saga2 == true)
Call it ready_for_saga2 or "saga 1 processing complete", whatever seems more appropriate in your context.
I'd say that the solution would vary based on which actual problem we're trying to solve.
Say it's an inventory, where "Add" means items being added to the inventory and "Remove" means items being requested for delivery. Then the order of commands does not matter that much, because you can simply process the request for delivery when new items are added to the inventory.
This would lead to an aggregate root with two collections: Items and PendingOrders.
One process manager adds new inventory to Items - if any orders are pending, it will complete these orders in the same transaction and remove both the item and the order from the collections.
If the other process manager adds an order (tries to remove an item), it will either do it right away, if there are any items left, or it will add the order to the pending orders to be processed when new items arrive (and maybe notify someone about the delay, while we're at it).
This way we end up with the same state regardless of the order of commands, but the actual real-world-problem has great influence on the model chosen.
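A rough sketch of that aggregate (class and method names are placeholders, not from the question); the order-independence comes from parking unfulfillable removals as pending orders:

// Hypothetical aggregate root: the exact shape depends on your domain.
class InventoryAggregate {
  private items: string[] = [];          // item ids in stock
  private pendingOrders: string[] = [];  // order ids waiting for stock

  // "Add" command: add stock and immediately fulfil any pending order.
  addItem(itemId: string): void {
    const pending = this.pendingOrders.shift();
    if (pending !== undefined) {
      // complete the order in the same transaction; item never enters stock
      this.completeOrder(pending, itemId);
    } else {
      this.items.push(itemId);
    }
  }

  // "Remove" command: fulfil right away if stock exists, otherwise park it.
  requestDelivery(orderId: string): void {
    const item = this.items.shift();
    if (item !== undefined) {
      this.completeOrder(orderId, item);
    } else {
      this.pendingOrders.push(orderId); // processed when new items arrive
    }
  }

  private completeOrder(orderId: string, itemId: string): void {
    // ...publish an event, notify about fulfilment, etc.
  }
}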
If we have other real-world problems, we can model those too.
Let's say you have two users that each starts a process that bulk updates titles on inventory items. In this case you - and the users - have to decide how best to resolve this conflict - what will lead to the best real world outcome.
If you want consistency across all the items - all or no items should be updated by a single bulk update - I would embed this knowledge in a new model. Let's call it UpdateTitlesProcesses. We have only one instance of this model in the system. The state is shared between processes. This model is effectively a command queue, and when a user initiates the bulk operation, it adds all the commands to the queue and starts processing each item one at a time.
When the second user initiates another title update, the business logic in our models will reject this, as there's already another update started. Or if the experts say that the last write should win, then we ditch the remaining commands from the first process and add the new ones (and similarly we should decide what should happen if a user issues a single title update, not bulk - should it be rejected, prioritized or put on hold?).
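Sketched very roughly (again, the names are placeholders), such a single-instance process model could be a guarded command queue:

interface UpdateTitleCommand {
  itemId: string;
  newTitle: string;
}

// Single instance in the whole system; all bulk title updates go through it.
class UpdateTitlesProcesses {
  private queue: UpdateTitleCommand[] = [];
  private activeUser: string | null = null;

  startBulkUpdate(userId: string, commands: UpdateTitleCommand[]): boolean {
    if (this.activeUser !== null) {
      // Business rule: reject while another bulk update runs. If the experts
      // prefer "last write wins", replace the queue here instead of rejecting.
      return false;
    }
    this.activeUser = userId;
    this.queue = [...commands];
    return true;
  }

  // Process one item at a time; called by whatever drives the process manager.
  processNext(applyTitle: (cmd: UpdateTitleCommand) => void): void {
    const cmd = this.queue.shift();
    if (cmd) {
      applyTitle(cmd);
    }
    if (this.queue.length === 0) {
      this.activeUser = null; // done - the next bulk update may start
    }
  }
}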
So in short I'd say:
Make it clear which real world problem we are solving - and thus which conflict resolution outcome is best (probably a trade off, often also something that requires user interaction or notification).
Model this explicitly (where processes, actions and conflict handling are also part of the model).

Why reducer must return new state?

Why must a reducer return new state, and what is the reason for that? Why can't we return the updated (mutated) state? Is that just a pattern we must follow, or what? Also, please let me know: are ngrx and redux completely different?
I think it is because the view layer needs to compare the current state and the previous state, so they should be different objects. It also supports other features like debugging and time travel.
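That comparison is typically a cheap reference check rather than a deep compare, which is why a mutated object would go unnoticed. A small illustration (not library code):

// If the reducer mutated the existing object, this check could never detect
// the change, because previous and next would be the very same reference.
function hasStateChanged<T>(previous: T, next: T): boolean {
  return previous !== next; // reference comparison, not a deep compare
}

const previous = { counter: 0 };

// Mutating: same reference, change is invisible to the check above.
const mutated = previous;
mutated.counter = 1;
console.log(hasStateChanged(previous, mutated)); // false - view won't update

// Returning a new object: the change is detected in O(1).
const next = { ...previous, counter: 2 };
console.log(hasStateChanged(previous, next)); // true - view re-renders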
In both libraries, reducers return either a newly modified state or the original state.
Just going through the official docs of both the NgRx reducer and the Redux reducer:
NGRX Reducer
Reducers in NgRx are responsible for handling transitions from one state to the next state in your application.
Reducer functions are pure functions in that they produce the same output for a given input. They are without side effects and handle each state transition synchronously. Each reducer function takes the latest Action dispatched, the current state, and determines whether to return a newly modified state or the original state
Redux Reducer
Reducers specify how the application's state changes in response to actions sent to the store.
Regardless of the state management pattern, you need to change the state through reducers, as actions are the source of information for the store. They are the entry points for interacting with the store in both NgRx and Redux, and in Vuex too.
As far as the library implementations go, I guess they both follow the same principle of actions and reducers to update the state. There might possibly be some features where they differ.
Hope this helps!
Both libraries aim to manage a state which is only manipulated in particular, predefined ways; reducers are the access they provide to the state.
By limiting the ability to manipulate the state directly, they make it easier to understand how a particular state was reached; it is always possible to reach a particular state by dispatching the same actions again, and a given state can only be reached as a result of the actions dispatched to state (at least, this is the ideal - impure* reducers would potentially lead to different states being reached from the same actions).
If we imagine a state manager which allowed functions to manipulate state, which is what would be required to return a mutated version of the original state, then it would be far more difficult to understand how a given state was reached, as the store could have been manipulated at any point by any function.
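To make that concrete, here is a minimal pure reducer in the Redux style (the counter state is made up for illustration, not taken from either library's docs):

interface CounterState {
  count: number;
}

interface Action {
  type: string;
}

const initialState: CounterState = { count: 0 };

// Pure reducer: same input always yields the same output, no side effects,
// and the previous state object is never touched.
function counterReducer(state: CounterState = initialState, action: Action): CounterState {
  switch (action.type) {
    case 'INCREMENT':
      return { ...state, count: state.count + 1 }; // new object
    default:
      return state; // unchanged: return the original reference
  }
}

// An impure alternative would do `state.count += 1; return state;` - the
// store could then no longer tell, by reference, that anything changed.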
This article gives a good overview of the key ideas behind redux and explains why redux does the things it does. Here are the relevant parts for your question:
State is read-only
The only way to change the state is to emit an action, an object describing what happened.
This ensures that neither the views nor the network callbacks will ever write directly to the state. Instead, they express an intent to transform the state. Because all changes are centralized and happen one by one in a strict order, there are no subtle race conditions to watch out for. As actions are just plain objects, they can be logged, serialized, stored, and later replayed for debugging or testing purposes.
Changes are made with pure functions
To specify how the state tree is transformed by actions, you write pure reducers.
Reducers are just pure functions that take the previous state and an action, and return the next state. Remember to return new state objects, instead of mutating the previous state. You can start with a single reducer, and as your app grows, split it off into smaller reducers that manage specific parts of the state tree. Because reducers are just functions, you can control the order in which they are called, pass additional data, or even make reusable reducers for common tasks such as pagination.
I have far less experience with ngrx, though as it seems like a redux-inspired store, I'll presume it follows more or less the same principles. I'd be happy to be proven wrong, in which case I can update this answer.
*An impure function would do one or more of the following:
Access state other than the arguments it was passed
Manipulate the arguments it was passed
Contain a side effect - something which affects state outside of itself
Mutating state is the most common cause of bugs in Redux applications, including components failing to re-render properly, and will also break time-travel debugging in the Redux DevTools. Actual mutation of state values should always be avoided, both inside reducers and in all other application code.
Use tools such as redux-immutable-state-invariant to catch mutations during development, and Immer to avoid accidental mutations in state updates.
Note: it is okay to modify copies of existing values - that is a normal part of writing immutable update logic. Also, if you are using the Immer library for immutable updates, writing "mutating" logic is acceptable because the real data isn't being mutated - Immer safely tracks changes and generates immutably-updated values internally.
From Redux Doc.
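As a small illustration of the Immer point (produce is Immer's documented API; the state shape here is made up):

import { produce } from 'immer';

interface TodoState {
  todos: { id: number; done: boolean }[];
}

const state: TodoState = { todos: [{ id: 1, done: false }] };

// "Mutating" syntax on the draft is fine: Immer records the changes and
// returns a new, immutably-updated state; `state` itself is untouched.
const next = produce(state, draft => {
  draft.todos[0].done = true;
});

console.log(state.todos[0].done); // false - original not mutated
console.log(next.todos[0].done);  // true  - new state carries the change
console.log(state === next);      // false - a new object was produced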

How to set up async processes with Vuex in any specific order

First let's understand the app:
The sample app mocks grabbing data from 2 sources: an array of available objects, and an array of objects being used.
The app also displays new objects (available ones not being used).
Finally, it also allows you to use (register) one of the new objects
Requirements:
The app needs to display the list of New objects first
The app needs to minimize the number of API calls to the bare minimum and only make calls when strictly necessary
Calls to produce changes in API data (register) should be reactive and display changes in UI immediately
The code I have implemented meets these 3 requirements. However, I'm really unhappy with this implementation, and I'm sure that's not the way the Vuex store is supposed to be used.
For starters, my implementation only works for the specific order in which the components are displayed in the screen:
<new name="New" :selected="true"></new>
<available name="Available"></available>
<using name="Using"></using>
If I, for example, want to move <available> to the last tab, the code will break.
This happens because I haven't been able to simply call dispatch('getNews') once and have everything else fall into place without at the same time duplicating one or two API calls, and thus not meeting the requirements...
I tried using dispatch('...').then().then() but I haven't been able to make it work and meet the requirements.
I would greatly appreciate anyone with experience in similar situations with Vuex tell me how they'd do this.
Bonus if you can do it without adding extra mutations.
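For context, this kind of ordering usually leans on the fact that store.dispatch returns a promise in Vuex, so actions can be awaited and chained explicitly. A rough sketch follows; every name except getNews is a placeholder assumption, including the guards that skip already-loaded data to avoid duplicate API calls:

// Hypothetical API module.
declare const api: {
  fetchAvailable(): Promise<any[]>;
  fetchUsing(): Promise<any[]>;
};

const actions = {
  async getAvailable({ state, commit }: any) {
    if (state.available.length) { return; }   // data already loaded - skip the API call
    commit('setAvailable', await api.fetchAvailable());
  },

  async getUsing({ state, commit }: any) {
    if (state.using.length) { return; }
    commit('setUsing', await api.fetchUsing());
  },

  // Vuex actions return promises, so the order no longer depends on which
  // component happens to be rendered first.
  async getNews({ dispatch, commit, state }: any) {
    await Promise.all([dispatch('getAvailable'), dispatch('getUsing')]);
    commit('setNews', state.available.filter(
      (a: any) => !state.using.some((u: any) => u.id === a.id)
    ));
  },
};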

Responsibility of store in flux

In flux I'm wondering, is it okay to
make async operations
change multiple values (by different keys) in state
trigger actions
in a single store? If I need to update 2 keys of the store, should I create another store to separate concerns and make each store responsible for a single first-level property in the state?
E.g. in Redux a reducer is responsible for a single first-level key on the resulting object, afaik.
Make async operations:
Typically it is better to keep your stores synchronous... they should be dumb and just receive data. Makes everything easier and testable! The action creator should dispatch the appropriate action once it has resolved.
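A small sketch of that separation (the dispatcher shape and action names are illustrative, not tied to any particular flux implementation):

// Hypothetical dispatcher with the usual flux dispatch(action) shape.
declare const dispatcher: { dispatch(action: { type: string; payload?: unknown }): void };

// The action creator owns the async work...
async function loadUser(userId: string): Promise<void> {
  dispatcher.dispatch({ type: 'USER_LOAD_STARTED' });
  try {
    const response = await fetch(`/api/users/${userId}`);
    const user = await response.json();
    // ...and only dispatches plain, synchronous actions once it has resolved.
    dispatcher.dispatch({ type: 'USER_LOAD_SUCCEEDED', payload: user });
  } catch (err) {
    dispatcher.dispatch({ type: 'USER_LOAD_FAILED', payload: err });
  }
}

// The store stays dumb: it just receives the resulting data synchronously.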
Change multiple values (by different keys) in state:
This isn't that bad, but as you alluded to, perhaps you need to rethink how your app state is structured. It depends on the action though... hard to say without knowing the context.
Trigger actions:
Your views are responsible for triggering actions... So stores should not trigger actions!
Some links:
Async requests with React.js and Flux, revisited.
Using a Redux store in your React.js application

How to realize persistence of a complex graph with an Object Database?

I have several graphs. The breadth and depth of each graph can vary and will undergo changes and alterations during runtime. See example graph.
There is a root node to get a hold on the whole graph (i.e. tree). A node can have several children and each child serves a special purpose. Furthermore a node can access all its direct children in order to retrieve certain information. On the other hand, a child node may not be aware of its own parent node, nor of its siblings. Nothing spectacular so far.
Storing each graph and updating it with an object database (in this case DB4O) looks pretty straightforward. I could have used a relational database to accomplish data persistence (including database triggers, etc.) but I wanted to realize it with an object database instead.
There is one peculiar thing with my graphs. See another example graph.
To properly perform calculations, some nodes require information from other nodes. These other nodes may be siblings, children/grandchildren or related in some other way. In this case a specific node knows the other relevant nodes as well (and thus can get the required information directly from them). For the sake of simplicity the first image didn't show all potential connections.
If one node has a change of state (e.g. triggered by an internal timer or triggered by some other node) it will inform other nodes (interested observers, see also the observer pattern) about the change. Each informed node will then take appropriate actions to update its own state (and in turn inform other observers as needed). A root node will not know about every change that occurs, since only the involved nodes will know that something has changed. If such a chain of events is triggered by the root node then of course it's not much of an issue.
The aim is to assure data persistence with an object database. Data in memory should be in sync with data stored within the database. What adds to the complexity is the fact that the graphs don't consist of simple (and stupid) data nodes, but that lots of functionality is integrated in each node (i.e. events that trigger state changes throughout a graph).
I have several rough ideas on how to cope with the presented issue (e.g. (1) stronger separation of data and functionality, (2) stronger integration of the database, or (3) set an arbitrary time interval to update data and accept that data may be out of sync for a period of time). I'm looking for some more input and options concerning such a key issue (which will definitely leave significant footprints on a concrete implementation).
(edited)
There is another aspect I forgot to mention. A graph should not reside in memory all the time. Graphs that are not needed will only be present in the database and thus put in a state of suspension. This is another issue which needs consideration. While in suspension, the update mechanisms will probably be put to sleep as well, and this is not intended.
In the case of db4o check out "transparent activation" to automatically load objects on demand as you traverse the graph (this way the graph doesn't have to be all in memory) and check out "transparent persistence" to allow each node to persist itself after a state change.
http://www.gamlor.info/wordpress/2009/12/db4o-transparent-persistence/
Moreover you can use db4o "callbacks" to trigger custom behavior during db4o operations.
HTH
German
What's the exact question? Here are a few comments:
As #German already mentioned: For complex object graphs you probably want to use transparent persistence.
Also as #German mentioned: callbacks can help you to do additional stuff when objects are read/written etc. on the database.
Regarding the observer pattern: are you on .NET or Java? Usually you don't want to store the observers in the database, since the observers are usually part of your business logic, GUI etc. On .NET, events are not stored automatically. On Java, make sure that you mark the field holding the observer references as transient.
In case you actually want to store observers, for example because they are just other elements in your object graph: on .NET, you cannot store delegates / closures, so you need to introduce an interface for calling the observer. On Java we often use anonymous inner classes as listeners; while db4o can store those, I would NOT recommend it, because an anonymous inner class gets a generated name which can change. db4o will then not find that class later if you've changed your code.
That's it. Ask more detailed questions if you want to know more.
