Is it bad to commit mutations without using actions in Vuex? - redux

I have been using Vuex for a while now, and I have always followed the pattern: components use actions to commit mutations, and mutations mutate the store. I thought this was the proper way to do things, considering the diagram in the docs.
I came across code where people were committing mutations directly in components, without even creating simple pass-through actions whose only purpose is to trigger a mutation. I even found several examples of this in the Vuex docs.
I figured that since it's used in the docs this must be an acceptable pattern, and I was wondering whether skipping actions and triggering mutations directly is a pattern endorsed by other state management libraries such as Redux or Flux itself.
TLDR: Is it OK to commit mutations directly in Vuex, and do other state management libraries such as Redux allow a pattern like this? If not, why not?

Just keep in mind that mutations have to be synchronous. You can commit directly from components if you (and your team) are sure nothing asynchronous can creep in. In other words, use it for simple, direct operations.
Committing only in actions, as a rule, adds some clarity and reliability to the application's code.
I haven't used Redux, but as far as I know, some state managers have middleware.
Using both mutations and actions (the Vuex way) can make large applications harder to maintain.
In a future version of Vuex, mutations and actions are expected to be merged into a single concept.

Similar discussion: https://github.com/vuejs/vuex/issues/587
Good topic here!
Actions are for more complex logic, especially anything asynchronous,
but mutations, on the other hand, are for changing the state.
And it's totally fine to commit mutations from within your component!
(best practices become outdated over time, most of the time anyway)
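To make the distinction concrete, here is a minimal sketch; the store shape and names are made up for illustration:

// store.js - a minimal Vuex store
import Vue from 'vue'
import Vuex from 'vuex'

Vue.use(Vuex)

export default new Vuex.Store({
  state: { count: 0 },
  mutations: {
    // mutations must stay synchronous
    increment (state) {
      state.count++
    }
  },
  actions: {
    // anything asynchronous belongs in an action, which then commits
    incrementLater ({ commit }) {
      setTimeout(() => commit('increment'), 1000)
    }
  }
})

// SomeComponent.vue (script section) - both calls are valid
export default {
  methods: {
    incrementNow () {
      // committing directly is fine for a simple, synchronous change
      this.$store.commit('increment')
    },
    incrementSoon () {
      // dispatch an action once async work is involved
      this.$store.dispatch('incrementLater')
    }
  }
}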

Related

Redux: Style Guide confusion on connecting more components to the store

According to the Redux Style Guide, it is strongly recommended to connect more components to read data from the store.
For example, rather than just connecting a <UserList> component and reading the entire array of users, have <UserList> retrieve a list of all user IDs, render list items as <UserListItem userId={userId}>, and have <UserListItem> be connected and extract its own user entry from the store.
This, though, sounds a bit contradictory to what was encouraged earlier in the "Usage with React" section: separating presentational components from container components, where the presentational components read data from props, not from the store.
Does this mean that:
It is best practice to keep the number of presentational components to a minimum, hence increasing the number of stateful components?
Or can connected components actually be stateless components as well?
I'm a Redux maintainer, and I wrote the Style Guide page.
The short answer is that the Redux docs have been written over time, and thus some of the older docs pages are out of date.
The Style Guide is our latest and current advice on how you should write your app.
We're in the process of rewriting the Redux core docs. That exact "Usage with React" page is something I intend to rewrite very soon, and when I do, I'll be dropping the terms "presentational" and "container" entirely.
I'd also encourage you to read my post Thoughts on React Hooks, Redux, and Separation of Concerns and watch my React Boston 2019 talk on Hooks, HOCs, and Tradeoffs to get some more thoughts on how hooks change the way we think about writing components.
Like everything in programming, there is a balance.
On the one hand, you have separation of concerns, making sure each block of code is focusing on one task. This can help reduce the complexity of a given component.
On the other hand, you have reduction of parameters, reducing the brittleness of your code by keeping track of fewer parameters at any given moment.
The first bullet is typically required when your state management is complex, or you have to manage server connections, and want to keep that work separate from the presentation to reduce confusion.
Redux takes care of that for you, by putting that code into the reducer. If you use the connect() higher-order component, that's exactly what you're doing: creating a component to translate state for your base presentation component. The useSelector() and useDispatch() hooks are another way of reducing the state management code in your component.
Redux stresses the second bullet because Redux's purpose is to reduce the clutter to the point that you don't need to separate your code into presentation and business logic components. Instead of passing several props back and forth, you can pass a single key, make a simple function inside your component to retrieve the data, and get on with the presentation directly.
The folks who wrote Redux also want to reassure you that Redux is quite fast, so don't be afraid to use it generously.
My own experience is that Redux manages the business logic side of things well enough that I rarely need to create a separate wrapper component for business logic. The state code is a few lines calling hooks at the top, and that's it.
If I do have complex business logic, typically it involves deciding what state to display. That involves determining which key to use in my Redux state. So I might put all that logic into a wrapper, but the end result of the wrapper is a single key that my presentation component uses to pull the appropriate state from Redux.
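As a rough sketch of what that can look like with the hooks (the state shape and component names here are my own assumptions, not something from the Style Guide):

import React from 'react'
import { useSelector } from 'react-redux'

// The parent reads only a list of IDs from the store...
export function UserList () {
  const userIds = useSelector(state => state.users.ids)
  return (
    <ul>
      {userIds.map(id => <UserListItem key={id} userId={id} />)}
    </ul>
  )
}

// ...and each item is itself "connected", extracting its own entry from the store
export function UserListItem ({ userId }) {
  const user = useSelector(state => state.users.entities[userId])
  return <li>{user.name}</li>
}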

Explain Redux: mutation and asynchronicity

I could not understand what the lines below, from the first page of the Redux docs, mean: https://redux.js.org/introduction/motivation
This complexity is difficult to handle as we're mixing two concepts
that are very hard for the human mind to reason about: mutation and
asynchronicity. I call them Mentos and Coke. Both can be great in
separation, but together they create a mess. Libraries like React
attempt to solve this problem in the view layer by removing both
asynchrony and direct DOM manipulation. However, managing the state of
your data is left up to you. This is where Redux enters.
Note: the lines marked in bold are the ones I was unable to understand.
Mutation simply means you will need to be able to change the state of things (variables, global store etc) and also you will need to be able to react to when those things change.
Asynchronicity means that events may occur at different times - you can't predict precisely when they will occur or when they will complete.
Therefore, in an app that has to be able to change data (mutation) and can have that data change asynchronously, things get difficult.
I'd suggest you read up more on Redux (and, in general, libraries that promote a specific "flow" of data mutations). At the heart of the issue is that if data can mutate at any time, with you changing it directly, and it can also be changed asynchronously (for instance via API calls to external services), then without careful thought, use of libraries, and understanding, your app can turn into an unholy mess.
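A tiny sketch of the contrast (the endpoints and action type are invented for illustration):

import { createStore } from 'redux'

// Mutation mixed with asynchronicity: whichever request resolves last silently
// wins, and nothing records what changed or why.
let profile = {}
fetch('/api/user').then(r => r.json()).then(data => { profile = data })
fetch('/api/user/settings').then(r => r.json()).then(data => { profile.settings = data })

// The Redux approach: state changes only through dispatched actions handled by
// a pure, synchronous reducer; asynchronous code merely decides when to dispatch.
function profileReducer (state = {}, action) {
  switch (action.type) {
    case 'PROFILE_LOADED':
      return { ...state, ...action.payload }
    default:
      return state
  }
}

const store = createStore(profileReducer)

fetch('/api/user')
  .then(r => r.json())
  .then(data => store.dispatch({ type: 'PROFILE_LOADED', payload: data }))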

Redux: Actions and reducers for every resource?

I'm currently building an application that uses a lot of 'resources' and performs the same operations on them. Every resource (customers, projects, products, invoices) has a list, edit-form, CRUD operations and more.
You can imagine a lot of code repetition. I'm thinking of generalizing into a single 'Resource' with actions like FETCH_RESOURCE, RECEIVE_RESOURCES, etc. where the resource type is a parameter. The same can be done for components.
As I am new to Redux, I'm trying to find out if this is a good or a bad idea, and if it fits with the Redux philosophy. I've looked at Redux-CRUD, but it is still generating actions and reducers for every resource.
Yes, it's very common to generalize repetitive/reusable CRUD logic like that. However, the "copies" of the logic do need some way to distinguish between dispatched actions to know which "copy" is supposed to respond.
There are examples of this kind of pattern in the Structuring Reducers - Reusing Reducer Logic section of the Redux docs, and my Redux addons catalog has large sections for existing libraries covering entity management and action/reducer generation.
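A rough sketch of that idea, loosely following the "Reusing Reducer Logic" approach; the action shape and names below are my own, not a library API:

import { combineReducers } from 'redux'

// One generic reducer implementation, parameterized by resource type.
// Each copy only responds to actions tagged with its own resource name.
function createResourceReducer (resourceType) {
  const initialState = { items: [], loading: false }
  return function resourceReducer (state = initialState, action) {
    if (!action.meta || action.meta.resource !== resourceType) {
      return state
    }
    switch (action.type) {
      case 'FETCH_RESOURCES':
        return { ...state, loading: true }
      case 'RECEIVE_RESOURCES':
        return { ...state, items: action.payload, loading: false }
      default:
        return state
    }
  }
}

// One slice per resource, all sharing the same logic
const rootReducer = combineReducers({
  customers: createResourceReducer('customers'),
  projects: createResourceReducer('projects'),
  invoices: createResourceReducer('invoices')
})

// Action creators tag the resource so the right copy responds
const receiveCustomers = customers => ({
  type: 'RECEIVE_RESOURCES',
  payload: customers,
  meta: { resource: 'customers' }
})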

Redux: Syncing backend DB with frontend app state tree

From my experience, keeping backend DB and the frontend state tree in sync becomes a non-trivial task as the app grows more complicated.
For example, when you create a new Post in a blog, you have to create an object in the DB as well as attach a post object in your state tree (e.g. inside a posts reducer). It gets more complicated if the state tree is nested. If you update a comment that belongs to a post, you have to find the right post in the state tree, then find the right comment and update it.
I understand why having an App State Tree is beneficial but this syncing causes too much overhead for me to truly appreciate Redux.
Q. Is there a way to do this syncing more easily?
Meteor is specifically designed to solve the problem you're mentioning.
Redux only manages state on the client. More precisely, its job is to hold the state needed for the UI itself.
It's often used with React, where it functions as a smart place to stash anything and everything needed to render the UI. This often includes complicated state and may include a lot of app data. It may then start to look more like a database, but databases have a number of properties that Redux lacks. Persistence comes to mind...
Meteor is a framework and Redux is not. As such, Meteor comes with an enormous amount of buy-in where Meteor decides on how to deal with a large number of issues in your app. Redux on the other hand is very permissive. It doesn't decide on what your state looks like, or how you talk to your backend, or how you render your ui.
So inasmuch as Redux allows you the freedom to decide for yourself how to deal with these issues, it also leaves you with the responsibility to do so. Redux is super tiny and you should only expect rock solid state management, and nothing more.
How much of your global app state you move to the client is entirely up to you, and how you connect it to your backend, and if this backend runs node.js+Mongo or php+mysql or lisp+text files is entirely up to you. The same can not be said when using Meteor.
With great power comes great responsibility.
I have the same question and I'm sure there's a tried-and-true correct response to this. However, I brainstormed two possible solutions:
1) Add the post to the app state 'manually', then the next time a GET is requested, reconcile the state and the response (your strategy). This is a lot easier if your client state is normalized.
2) Do a GET immediately after the POST returns 200, and cover the latency with an activity indicator.
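For what it's worth, a rough sketch of option 2 as a redux-thunk action creator; the endpoint and action type are placeholders:

// Requires the redux-thunk middleware.
function createPost (post) {
  return async function (dispatch) {
    const res = await fetch('/api/posts', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(post)
    })
    if (res.ok) {
      // Re-fetch the canonical list instead of patching the state tree by hand
      const posts = await fetch('/api/posts').then(r => r.json())
      dispatch({ type: 'POSTS_RECEIVED', payload: posts })
    }
  }
}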
The accepted answer, to me, is not a fundamental explanation.

What logic must I cover in Collection.allow and Collection.deny to ensure it's secure?

So I just started playing with Meteor and I'm trying to get my head around the security model. It seems there are two ways to modify data.
The Meteor.call way, which seems pretty standard - pretty much just a call to the server with its own set of business rules implemented.
Then there is the Collection.allow method, which seems quite different from anything I've done before. It seems that if you define a collection.allow, you're saying that the client can make any write operation to that collection as long as it gets past the validations in its allow function.
That makes me feel uneasy because it feels like a lot of freedom, and my allow function would need to be pretty long to make sure it's locked down securely enough.
For instance, MongoDB has no schema, so you'd basically have to have a rule that defines which fields are accepted and the format those fields must be in.
Wouldn't you also have to put in the business logic for every type of update that might be made to your system?
So say, I had a SoccerTeam collection. There may be several situations I may need to make a change, like if I'm adding or removing a player, changing the team name, team status has changed etc.
It seems to me that you'd have to put everything into this one massive function. It just sounds like a radical idea, but it seems Meteor.call methods would just be a lot simpler.
Am I thinking about this in the wrong manner (or for the wrong use case)? Does anyone have an example of how to structure an allow or deny function, with a list of what I may need to check in my allow function to make my collection secure?
You are following the same line of reasoning I used in deciding how to handle data mutations when building Edthena. Out of the box, meteor provides you with the tools to make a simple tradeoff:
Do I trust the client and get a more responsive UI (latency compensation)? Or do I require strict control over data validation, but force the client to wait for an update?
I went with the latter, and exclusively used method calls for a few reasons:
I sleep better at night knowing there exists exactly one way to update each of my collections.
I found that some of my updates required side effects that only made sense to execute on the server (e.g. making denormalized updates to other collections).
At present, there isn't a clear benefit to latency compensation for our app. We found the delay for most writes was inconsequential to the user experience.
allow and deny rules are weak tools. They are essentially only good for validating ownership and other simple checks.
At the time when we first released to production (August 2013) this seemed like a radical conclusion. The meteor docs, the API, and the demos highlight the use of client-side writes, so I wasn't entirely sure I had made the right decision. A couple of months later I had my first opportunity to sit down with several of the meteor core devs - this is a summary of their reaction to my design choices:
This seems like a rational approach. Latency compensation is really useful in some contexts like mobile apps, and games, but may not be worth it for all web apps. It also makes for cool demos.
So there you have it. As of this writing, my advice for production apps would be to use client-side updates where you really need the speed, but you shouldn't feel like you are doing something wrong by making heavy use of methods.
As for the future, I'd imagine that post-1.0 we'll start to see things like built-in schema enforcement on both the client and server which will go a long way towards resolving my concerns. I see Collection2 as a significant first step in that direction, but I haven't tried it yet in any meaningful way.
stubs
A logical follow-up question is "Why not use stubs?". I spent some time investigating this but reached the conclusion that method stubbing wasn't useful to our project for the following reasons:
I like to keep my server code on the server. Stubbing requires that I either ship all of my model code to the client or selectively repeat parts of it again. In a large app, I don't see that as practical.
I found the overhead required to separate out what may or may not run on the client to be a maintenance challenge.
In order for the stub to do anything other than reject a database mutation, you'd need to have an allow rule in place - otherwise you'd end up with a lot of UI flicker (the client allows the write but the server immediately invalidates it). But having an allow rule defeats the whole point, because a user could still write to the db from the console.
The usual allow methods I have are these:
MyCollection.allow({
  // validators must be functions; always returning false blocks all client-side writes
  insert: function () { return false; },
  update: function () { return false; },
  remove: function () { return false; }
});
And then, I have methods which take care of all insertions. These methods perform the type checks and permission assessment. I have found that to be a much more maintainable approach: completely decoupling the data layer from the code which runs on the client.
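For example, here is a sketch of one such method for the SoccerTeam case from the question above; the collection and field names are illustrative:

import { Meteor } from 'meteor/meteor'
import { Mongo } from 'meteor/mongo'
import { check } from 'meteor/check'

const SoccerTeams = new Mongo.Collection('soccerTeams')

Meteor.methods({
  'soccerTeams.addPlayer' (teamId, playerName) {
    // type checks
    check(teamId, String)
    check(playerName, String)

    // permission assessment
    if (!this.userId) {
      throw new Meteor.Error('not-authorized')
    }

    return SoccerTeams.update(teamId, { $push: { players: playerName } })
  }
})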
For instance, MongoDB has no schema, so you'd basically have to have a rule that defines which fields are accepted and the format those fields must be in.
Take a look at Collection2. It supports schema checking at run time before inserting documents into the collection.
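As a brief sketch of what that can look like with Collection2 and SimpleSchema (reusing the SoccerTeams collection from the sketch above; field names are just examples):

import SimpleSchema from 'simpl-schema'

// With aldeed:collection2 installed, attach a schema to the collection;
// writes that don't match it are rejected at run time.
SoccerTeams.attachSchema(new SimpleSchema({
  name: { type: String },
  status: { type: String, allowedValues: ['active', 'inactive'] },
  players: { type: Array },
  'players.$': { type: String }
}))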

Resources