I could not understand what the lines below, from the first page of the Redux docs, mean: https://redux.js.org/introduction/motivation
This complexity is difficult to handle as we're mixing two concepts
that are very hard for the human mind to reason about: mutation and
asynchronicity. I call them Mentos and Coke. Both can be great in
separation, but together they create a mess. Libraries like React
attempt to solve this problem in the view layer by removing both
asynchrony and direct DOM manipulation. However, managing the state of
your data is left up to you. This is where Redux enters.
Note: The lines marked bold are the ones I was unable to understand.
Mutation simply means you need to be able to change the state of things (variables, a global store, etc.), and also to react when those things change.
Asynchronicity means that events may occur at different times - you can't predict precisely when they will occur or when they will complete.
Therefore, in an app that has to be able to change data (mutation) and can have that data change asynchronously, things get difficult.
I'd suggest you read up more on Redux (and, in general, on libraries that promote a specific "flow" of data mutations). At the heart of the issue is that if data can be mutated directly at any time, and it can also be changed asynchronously (for instance via API calls to external services), then without careful thought, understanding, and judicious use of libraries, your app can turn into an unholy mess.
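A hedged, made-up sketch of that mess (the endpoints and the render function are placeholders, not anything Redux provides): two async responses mutate the same shared object directly, so the final state depends on which one happens to arrive last.

// Shared, directly-mutated state plus asynchronous updates (hypothetical).
const state = { user: null, orders: [] };

function render(s) { console.log('render with', s); } // placeholder view

// Two requests fire; we cannot know which finishes first (asynchronicity).
fetch('/api/user')            // hypothetical endpoint
  .then(res => res.json())
  .then(user => {
    state.user = user;        // mutation at an unpredictable time
    render(state);
  });

fetch('/api/orders')          // hypothetical endpoint
  .then(res => res.json())
  .then(orders => {
    state.orders = orders;               // another mutation
    state.user.lastOrder = orders[0];    // breaks if the user hasn't loaded yet
    render(state);
  });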
Related
I would like to store a function / component inside the Redux state to pass it to another part of the application.
I've read that this is not recommended, but what exactly will happen?
Will it affect the performance of other data stored on Redux to the point it becomes visibly slow to the user?
Will it cause components that depend on this component or any data stored on Redux to continuously re-render causing the app to slow to a crawl?
Will Redux just break randomly all the time?
What are the actual, practical downsides to doing this, putting aside the intentions behind the creation of the library?
Do not put functions or components (non-serializable values) in Redux state or actions.
Putting non-serializable values in the store can break things such as:
debugging
server-side rendering
state persistence
and more; it can also create performance issues.
This is described more in depth here in the official documentation
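One common workaround (a sketch only; the registry, the ui slice, and the panel components below are invented for illustration) is to keep the components themselves outside the store and put only a serializable key in Redux state:

import React from 'react';
import { useSelector } from 'react-redux';

// Plain module-level registry; it never goes into the Redux store.
const SettingsPanel = () => <div>Settings</div>;
const ProfilePanel = () => <div>Profile</div>;
const panelRegistry = { settings: SettingsPanel, profile: ProfilePanel };

// Redux only ever holds the string 'settings' or 'profile'.
function ActivePanel() {
  const panelKey = useSelector(state => state.ui.activePanelKey); // hypothetical slice
  const Panel = panelRegistry[panelKey];
  return Panel ? <Panel /> : null;
}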
I've decided to just store the component I needed on Redux and haven't seen any downsides, which led me to believe there is no real issue in doing this, as long as you don't care about serialization.
According to Redux Style Guide, it is strongly recommended to connect more components to read data from the store.
For example, rather than just connecting a <UserList> component and reading the entire array of users, have <UserList> retrieve a list of all user IDs, render list items as <UserListItem userId={userId}>, and have <UserListItem> be connected and extract its own user entry from the store.
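A rough sketch of that pattern using React-Redux hooks (the normalized users slice shape, state.users.ids and state.users.entities, is an assumption here, not something Redux mandates):

import React from 'react';
import { useSelector } from 'react-redux';

// The parent only reads the list of IDs.
function UserList() {
  const userIds = useSelector(state => state.users.ids);
  return (
    <ul>
      {userIds.map(userId => (
        <UserListItem key={userId} userId={userId} />
      ))}
    </ul>
  );
}

// Each item is connected itself and extracts only its own user entry.
function UserListItem({ userId }) {
  const user = useSelector(state => state.users.entities[userId]);
  return <li>{user.name}</li>;
}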
This, though, sounds a bit contradictory to what was encouraged earlier in the "Usage with React" section: separating presentational components from container components, where the presentational components read data from props, not from the store.
Does this mean that:
It is best practice to keep the number of presentational components to a minimum, hence increasing the number of stateful components?
Or that connected components can actually also be stateless components?
I'm a Redux maintainer, and I wrote the Style Guide page.
The short answer is that the Redux docs have been written over time, and thus some of the older docs pages are out of date.
The Style Guide is our latest and current advice on how you should write your app.
We're in the process of rewriting the Redux core docs. That exact "Usage with React" page is something I intend to rewrite very soon, and when I do, I'll be dropping the terms "presentational" and "container" entirely.
I'd also encourage you to read my post Thoughts on React Hooks, Redux, and Separation of Concerns and watch my React Boston 2019 talk on Hooks, HOCs, and Tradeoffs to get some more thoughts on how hooks change the way we think about writing components.
Like everything in programming, there is a balance.
On the one hand, you have separation of concerns, making sure each block of code is focusing on one task. This can help reduce the complexity of a given component.
On the other hand, you have reduction of parameters, reducing the brittleness of your code by keeping track of fewer parameters at any given moment.
The first point is typically required when your state management is complex, or you have to manage server connections, and you want to keep that work separate from the presentation to reduce confusion.
Redux takes care of that for you, by putting that code into the reducer. If you use the connect() higher-order component, that's exactly what you're doing: creating a component to translate state for your base presentation component. The useSelector() and useDispatch() hooks are another way of reducing the state management code in your component.
Redux stresses the second point, because Redux's purpose is to reduce the clutter to the point that you don't need to separate your code into presentation and business-logic components. Instead of passing several props back and forth, you can pass a single key, write a simple function inside your component to retrieve the data, and get on with the presentation directly.
The folks who wrote Redux also want to reassure you that Redux is quite fast, and that you shouldn't be afraid to use it generously.
My own experience is that Redux manages the business logic side of things well enough that I rarely need to create a separate wrapper component for business logic. The state code is a few lines calling hooks at the top, and that's it.
If I do have complex business logic, typically it involves deciding what state to display. That involves determining which key to use in my Redux state. So I might put all that logic into a wrapper, but the end result of the wrapper is a single key that my presentation component uses to pull the appropriate state from Redux.
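As a small, hypothetical illustration of that last point (all names invented): the wrapper's only real output is a key, and the presentational component uses that key to pull its own state from Redux.

import React from 'react';
import { useSelector } from 'react-redux';

// "Business logic" wrapper: its whole job is deciding which key to use.
function ReportPanelContainer() {
  const isAdmin = useSelector(state => state.session.isAdmin);
  const reportKey = isAdmin ? 'adminReport' : 'userReport';
  return <ReportPanel reportKey={reportKey} />;
}

// Presentational component: pulls its data directly with that single key.
function ReportPanel({ reportKey }) {
  const report = useSelector(state => state.reports[reportKey]);
  return <section>{report.title}</section>;
}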
I'm learning Redux and am struggling to understand why state has to be immutable. Could you provide me with an example, preferably in code, where breaking the immutability contract results in a not-so-obvious side effect?
Redux was originally invented to demonstrate the idea of "time-travel debugging" - being able to step back and forth through the history of dispatched actions, and see what the UI looks like at each step. Another aspect is being able to live-edit the code, reload it, and see what the output looks like given the new reducer logic.
In order to be able to properly step back and forth between states, we need to make sure that reducer functions have no side effects. That means data updates need to be applied immutably. If a reducer function actually directly modifies its data, then stepping back and forth between states will cause the application to behave in an unexpected fashion, and the debugging effort will be wasted.
Also, the React-Redux library relies on shallow equality checks to see if the incoming data for a component has changed. If the data references are the same, then the wrapper components generated by connect assume that the data has not changed, and that the component does not need to re-render. Immutable data updates means that new object references are created, and thus connect will see that the data has changed and the UI needs to update.
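A small sketch of the failure mode (plain reducers, nothing library-specific): a reducer that mutates returns the same object reference, so a shallow reference check cannot see that anything changed.

// Broken: mutates in place, so the returned object is the SAME reference.
function todosMutating(state = { items: [] }, action) {
  if (action.type === 'ADD_TODO') {
    state.items.push(action.payload);
    return state; // oldState === newState -> connect sees "no change"
  }
  return state;
}

// Correct: every update produces new references for what changed.
function todosImmutable(state = { items: [] }, action) {
  if (action.type === 'ADD_TODO') {
    return { ...state, items: [...state.items, action.payload] };
  }
  return state;
}

const a = { items: [] };
console.log(a === todosMutating(a, { type: 'ADD_TODO', payload: 'x' }));  // true  -> update invisible
const b = { items: [] };
console.log(b === todosImmutable(b, { type: 'ADD_TODO', payload: 'x' })); // false -> update detected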
Two Ideas
There are two ideas about immutability that you need to understand:
Mutate the state only in the reducers
Using an immutable data structure
Mutate the state only in the reducers
Redux tries to ensure that you only mutate the state in the reducers. This is important because it makes it easier to think about your application's data flow.
Let's say that a value is not displayed in the UI as you expected, or that a value that should have changed is still showing its original value.
In that case, you could just think about which reducer is causing the mutation and see what went wrong.
This makes thinking about Redux issues very simple and makes developers highly productive.
Sometimes you can mutate the state in a view or in an action by mistake. If you think about the life-cycle:
Action -> Reducer -> Views-> Action
If the state changes in a view and an action is then triggered, the state change could be overridden. This would make it very hard for developers to find out what is going on. We solve this by mutating state only in the reducers.
Note: another nice thing is that reducers are pure functions and all the async stuff takes place in the actions/middleware. So all side effects happen in the actions/middleware layer. Actions are our connection to the external world (HTTP, local storage, etc.).
Using an immutable data structure
As I have already mentioned, sometimes you can mutate the state in a view or in an action by mistake. To reduce the chances of this happening, we can make state mutations explicit by using an immutable data structure.
Using immutable data structures also has performance benefits, because we don't need to perform deep equality checks to detect changes.
The most commonly used provider of immutable data structures is immutable.js.
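For example (a sketch using Immutable.js's Map), change detection becomes a cheap reference comparison instead of a deep walk of the data:

import { Map } from 'immutable';

const state1 = Map({ count: 1 });
const state2 = state1.set('count', 2); // new Map; state1 is untouched
const state3 = state2.set('count', 2); // no actual change, the same collection comes back

console.log(state1 === state2);   // false -> something changed, re-render
console.log(state2 === state3);   // true  -> nothing changed, skip re-render
console.log(state1.get('count')); // 1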
I think the key ideas would be
Easy to replay any given situation/test flow - you can replay with the same actions starting from the same initial state
Fast component re-rendering - because only the reference changes (rather than the values inside it), it is much faster to test whether anything changed
Of course, there are other aspects and I suggest this nice article which explains in more details.
From my experience, keeping backend DB and the frontend state tree in sync becomes a non-trivial task as the app grows more complicated.
For example, when you create a new Post in a blog, you have to create an object in the DB as well as attach a post object in your state tree (ex. inside posts reducer). It gets more complicated if the state tree is nested. If you update a comment that belongs to a post, you have to find the right post in the state tree, and find the right comment and update.
I understand why having an App State Tree is beneficial but this syncing causes too much overhead for me to truly appreciate Redux.
Q. Is there a way to do this syncing more easily?
Meteor is specifically designed to solve the problem you're mentioning.
Redux is only there to manage state on the client. More precisely, its job is to hold the state needed for the ui itself.
It's often used with React where it functions as a smart place to stash anything and everything needed to render the ui. This often includes a complicated state and may include a lot of app data. It may then start to look more like a database, but databases have a number of properties that Redux lacks. Persistence comes to mind...
Meteor is a framework and Redux is not. As such, Meteor comes with an enormous amount of buy-in where Meteor decides on how to deal with a large number of issues in your app. Redux on the other hand is very permissive. It doesn't decide on what your state looks like, or how you talk to your backend, or how you render your ui.
So inasmuch as Redux allows you the freedom to decide for yourself how to deal with these issues, it also leaves you with the responsibility to do so. Redux is super tiny and you should only expect rock solid state management, and nothing more.
How much of your global app state you move to the client is entirely up to you, and how you connect it to your backend, and if this backend runs node.js+Mongo or php+mysql or lisp+text files is entirely up to you. The same can not be said when using Meteor.
With great power comes great responsibility.
I have the same question and I'm sure there's a tried-and-true correct response to this. However, I brainstormed two possible solutions:
1) Add the post to the app state 'manually', then the next time a GET is requested, reconcile the state and the response (your strategy). This is a lot easier if your client state is normalized (see the sketch below).
2) Do a GET immediately after the POST returns 200, and cover the latency with an activity indicator.
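A hedged sketch of why normalization helps with option 1 (this state shape is just one convention, not something Redux requires): comments live in their own flat map keyed by id, so updating one never means digging through a nested post tree.

// Hypothetical normalized client state for the comments slice.
const initialComments = {
  byId: { c1: { id: 'c1', postId: 'p1', body: 'First!' } },
  allIds: ['c1'],
};

// Updating a comment is a direct lookup by id, regardless of which post owns it.
function commentsReducer(comments = initialComments, action) {
  switch (action.type) {
    case 'COMMENT_UPDATED':
      return {
        ...comments,
        byId: {
          ...comments.byId,
          [action.payload.id]: { ...comments.byId[action.payload.id], ...action.payload },
        },
      };
    default:
      return comments;
  }
}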
To me, the accepted answer is not a fundamental explanation.
So I've just started playing with Meteor and I'm trying to get my head around the security model. It seems there are two ways to modify data.
The Meteor.call way, which seems pretty standard - pretty much just a call to the server with its own set of business rules implemented.
Then there is the Collection.allow method, which seems very different from anything I've done before. It seems that if you define a Collection.allow, you're saying that the client can make any write operation to that collection as long as it can get past the validations in its allow function.
That makes me feel uneasy because it feels like a lot of freedom, and my allow function would need to be pretty long to make sure it's locked down securely enough.
For instance, MongoDB has no schema, so you'd basically have to have a rule that defines which fields are accepted and the format those fields must be in.
Wouldn't you also have to put in the business logic for every type of update that might be made to your system?
Say I had a SoccerTeam collection. There may be several situations in which I need to make a change, like adding or removing a player, changing the team name, the team status changing, etc.
It seems to me that you'd have to put everything into this one massive function. It just sounds like a radical idea, but it seems Meteor.call methods would just be a lot simpler.
Am I thinking about this in the wrong manner (or for the wrong use case)? Does anyone have an example of how to structure an allow or deny function, with a list of what I may need to check in my allow function to make my collection secure?
You are following the same line of reasoning I used in deciding how to handle data mutations when building Edthena. Out of the box, meteor provides you with the tools to make a simple tradeoff:
Do I trust the client and get a more responsive UI (latency compensation)? Or do I require strict control over data validation, but force the client to wait for an update?
I went with the latter, and exclusively used method calls for a few reasons:
I sleep better a night knowing there exists exactly one way to update each of my collections.
I found that some of my updates required side effects that only made sense to execute on the server (e.g. making denormalized updates to other collections).
At present, there isn't a clear benefit to latency compensation for our app. We found the delay for most writes was inconsequential to the user experience.
allow and deny rules are weak tools. They are essentially only good for validating ownership and other simple checks.
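For what it's worth, a typical "simple check" of that kind looks roughly like this (the Posts collection and the ownerId field are made up):

// Ownership-style rule: only the document's creator may change or remove it.
Posts.allow({
  update(userId, doc) {
    return doc.ownerId === userId;
  },
  remove(userId, doc) {
    return doc.ownerId === userId;
  },
});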
At the time when we first released to production (August 2013) this seemed like a radical conclusion. The meteor docs, the API, and the demos highlight the use of client-side writes, so I wasn't entirely sure I had made the right decision. A couple of months later I had my first opportunity to sit down with several of the meteor core devs - this is a summary of their reaction to my design choices:
This seems like a rational approach. Latency compensation is really useful in some contexts like mobile apps, and games, but may not be worth it for all web apps. It also makes for cool demos.
So there you have it. As of this writing, my advice for production apps would be to use client-side updates where you really need the speed, but you shouldn't feel like you are doing something wrong by making heavy use of methods.
As for the future, I'd imagine that post-1.0 we'll start to see things like built-in schema enforcement on both the client and server which will go a long way towards resolving my concerns. I see Collection2 as a significant first step in that direction, but I haven't tried it yet in any meaningful way.
Stubs
A logical follow-up question is "Why not use stubs?". I spent some time investigating this but reached the conclusion that method stubbing wasn't useful to our project for the following reasons:
I like to keep my server code on the server. Stubbing requires that I either ship all of my model code to the client or selectively repeat parts of it again. In a large app, I don't see that as practical.
I found the overhead required to separate out what may or may not run on the client to be a maintenance challenge.
In order for the stub to do anything other than reject a database mutation, you'd need to have an allow rule in place - otherwise you'd end up with a lot of UI flicker (the client allows the write but the server immediately invalidates it). But having an allow rule defeats the whole point, because a user could still write to the db from the console.
The usual allow methods I have are these:
MyCollection.allow({
  insert() { return false; },
  update() { return false; },
  remove() { return false; }
});
And then I have methods which take care of all insertions. These methods perform the type checks and permission assessments. I have found that to be a much more maintainable approach: completely decoupling the data layer from the code which runs on the client.
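A minimal sketch of one such method (the Players collection and its fields are invented for illustration):

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';

Meteor.methods({
  // Every insert into the Players collection goes through this one method.
  'players.insert'(name, teamId) {
    check(name, String);    // type checks
    check(teamId, String);

    if (!this.userId) {     // permission assessment
      throw new Meteor.Error('not-authorized');
    }

    return Players.insert({
      name,
      teamId,
      createdBy: this.userId,
      createdAt: new Date(),
    });
  },
});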
For instance, MongoDB has no schema, so you'd basically have to have a rule that defines which fields are accepted and the format those fields must be in.
Take a look at Collection2. It supports schema checking at run time, before documents are inserted into the collection.
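A hedged sketch of what that can look like with Collection2 plus SimpleSchema (the SoccerTeams collection and its fields are invented):

import { Mongo } from 'meteor/mongo';
import SimpleSchema from 'simpl-schema';

const SoccerTeams = new Mongo.Collection('soccerTeams');

SoccerTeams.attachSchema(new SimpleSchema({
  name:    { type: String, max: 100 },
  status:  { type: String, allowedValues: ['active', 'inactive'] },
  players: { type: Array, optional: true },
  'players.$': { type: String }, // player ids
}));

// Inserts and updates that don't match the schema are rejected before they reach Mongo.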