Redux provides the subscribe function for being notified of store changes, but it doesn't contain detailed info about what changes were made... Is there any way to subscribe only to the changes I'm interested in and get detailed info about them (something like the object we pass into the dispatch method)?
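To illustrate the limitation: subscribe() listeners receive no arguments, so change details have to be derived by diffing state manually. A small hypothetical helper (the name `watch` and the store shape are illustrative, not part of Redux) can layer "watch a slice" on top of plain subscribe:

```javascript
// Hypothetical helper: re-selects a slice of state on every store
// notification and only invokes the callback when the slice changed.
function watch(store, select, onChange) {
  let current = select(store.getState());
  return store.subscribe(() => {
    const next = select(store.getState());
    if (next !== current) {
      const prev = current;
      current = next;
      onChange(next, prev);
    }
  });
}

// Usage sketch: watch(store, state => state.todos, (next, prev) => { ... });
```

Note this still only tells you *that* a slice changed, not *which action* caused it; the action itself never reaches subscribe listeners.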
I have a list of items which I consume from my own custom-built API (in the example I'll use typicode) and want to display them. Additionally, I want to add client-side search functionality. It is exactly like this REPL from this question.
But that list is hardcoded, and I can't seem to build a fetch call that gets those items first and then displays them. Only then can the user search and filter them.
Here is my REPL.
response.json() returns a promise, so you should await it.
Working example: https://svelte.dev/repl/a93ac2dcff584b2f8d11e430c6a96fa9?version=3.31.2
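A minimal sketch of the fix, assuming the typicode placeholder API from the question (the `loadItems` name and the `fetchImpl` parameter are illustrative): both fetch() and response.json() return promises, so each must be awaited.

```javascript
// Sketch only: the typicode URL stands in for the custom API from the
// question; fetchImpl defaults to the global fetch.
async function loadItems(fetchImpl = fetch) {
  const response = await fetchImpl('https://jsonplaceholder.typicode.com/todos');
  return await response.json(); // json() also returns a promise
}
```

In a Svelte component you would typically call something like this from onMount and assign the result to the variable that the search input filters.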
My Firebase Cloud Function for my Realtime Database (NOT CloudStore) listens onWrite and provides a change object with before and after.
This documentation here states:
If fieldMask is set, then only fields that changed are present in before.
How do I set this fieldMask? And when I set this fieldMask, will the resulting before object have the JSON structure of only the changed fields?
I don't think the class you linked, ChangeJson, is supposed to be part of the public documentation. When using an onWrite trigger, you actually get a Change object, which is different. Pay attention to that instead, not ChangeJson.
Feel free to use the "Send feedback" link at the top right of any page of the Firebase documentation to indicate what your confusion was on that page.
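For reference, change.before and change.after on a Realtime Database onWrite trigger are DataSnapshots (with the documented exists() and val() methods). A small hypothetical helper shows how a handler might classify a write; the snapshot objects are mocked for illustration, not the real firebase-functions types:

```javascript
// Hypothetical helper: classify an onWrite Change by inspecting its
// before/after snapshots. A missing "before" means the node was created;
// a missing "after" means it was deleted.
function describeChange(change) {
  const before = change.before.exists() ? change.before.val() : null;
  const after = change.after.exists() ? change.after.val() : null;
  if (before === null) return 'created';
  if (after === null) return 'deleted';
  return 'updated';
}
```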
I've tried it with a simple demonstration, and it seems to work. But just to make sure, is there anything in redux or redux middlewares that require an action to be an object with a 'type' property? I've read some tutorials that the 'type' property thing was emphasized to be a must-have.
Before:
dispatch({ type: 'DO_SOMETHING', info: 1 });
After:
dispatch(['DO_SOMETHING', 1]);
@TomW is correct - you can't dispatch arrays with a standard Redux store. It's possible to write middleware that looks for arrays being dispatched and intercepts them in some way. However, anything that actually reaches the reducers must be a plain object with a type field.
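A minimal sketch of such middleware (the name and the `{ type, info }` mapping are illustrative): it converts array-style actions into plain objects before they reach the reducers.

```javascript
// Hypothetical middleware: rewrites ['DO_SOMETHING', 1] style arrays into
// plain { type, info } actions; everything else passes through untouched.
const arrayActionMiddleware = (store) => (next) => (action) =>
  Array.isArray(action)
    ? next({ type: action[0], info: action[1] })
    : next(action);
```

With this applied via applyMiddleware, the reducers still only ever see plain objects with a type field, which is what Redux requires.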
You may want to read my blog post Idiomatic Redux: The Tao of Redux, Part 1 - Implementation and Intent, where I discuss the actual technical limitations that Redux requires and why those exist. There's also a somewhat related discussion in redux#1813, where someone submitted a pull request trying to add the ability to dispatch multiple actions at once without actually understanding how all the pieces fit together.
The Redux documentation explicitly requires that you provide a type property:
http://redux.js.org/docs/basics/Actions.html
Actions are plain JavaScript objects. Actions must have a type property that indicates the type of action being performed. Types should typically be defined as string constants.
Furthermore, Redux throws an exception in dispatch if you:
Don't pass a plain object: https://github.com/reactjs/redux/blob/v3.7.0/src/createStore.js#L150
Don't include a type property: https://github.com/reactjs/redux/blob/v3.7.0/src/createStore.js#L158
Do you have some middleware that is transforming the dispatched payload in some way?
Namely, what are the advantages and disadvantages of the following approaches to building a server-side database API in Meteor?
Method-based
import Db from 'Db';
Meteor.methods({insert: (data) => { Db.insert(data); }});
Subclass-based
import {Mongo} from "meteor/mongo";
class MyCollection extends Mongo.Collection {
  insert(data) {
    return super.insert(data);
  }
}
This problem has been solved below; there is a similar question for further reading: Meteor method vs. deny/allow rules
This is mainly a matter of ease vs control. Subclassing may be easier for simple things, and methods are more powerful.
This can also be affected by your state of mind (or affect it): CRUD vs. action-based mutation.
insert/update/remove go well with a CRUD state-of-mind, while you can associate methods with action-centric RPC mutators.
Eventually, this is a matter of personal preference, so I will try to give a short factual description and let the readers decide based on their taste.
Subclassing
By default, Meteor automatically generates mutation methods (insert, update, remove) when a collection is instantiated.
Those methods are called behind the scenes when calling MyCollection.insert(mutator, cb) on the client side (outside client-side method code). When the data arrive at the server, they are first passed through the allow/deny rules and then executed.
When subclassing, you override those methods and get a 'hook' into the process.
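A pattern sketch of that hook (a plain JS stand-in for Mongo.Collection, since the real class needs a Meteor runtime; all names here are illustrative):

```javascript
// Stand-in base class playing the role of Mongo.Collection.
class BaseCollection {
  constructor() { this.docs = []; }
  insert(doc) { this.docs.push(doc); return this.docs.length; }
}

// The subclass overrides insert to hook into the mutation, then
// delegates to the default behavior via super.
class AuditedCollection extends BaseCollection {
  insert(doc) {
    return super.insert({ ...doc, createdAt: new Date() });
  }
}
```

With the real Mongo.Collection, the override runs for both client-side simulations and the server-side execution of the generated mutation methods.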
Using methods
When defining a Meteor method you get full control of the process.
You set the parameters and the name of the method and you can perform the validation and authorization as you wish.
You can also create a method stub for client-side use, which generates optimistic UI until the results of the method server execution are received.
You can use something like a ValidatedMethod to add extra validation logic and modularity to your method.
You can also prevent the creation of the default mutation methods by setting a false value for the defineMutationMethods option when instantiating the collection. You can also forbid direct mutation from the client by supplying the appropriate deny rules.
While subclassing allows you to use MyCollection.insert(...), etc. on the client, with methods you need to call the method by name with the arguments that you defined in order to mutate data.
I have a tree-like structure with a couple of entities: a process is composed of steps and a step may have sub-processes. Let's say I have 2 failure modes: abort and re-do. I have tree traversal logic implemented that cascades the fail signal up and down the tree. In the case of abort, all is well; abort cascades correctly up and down, notifying its parent and its children. In the case of re-do, the same happens, EXCEPT a new process is created to replace the one that failed. Because I'm using the DataMapper pattern, the new object can't save itself, nor is there a way to pass the new object to the EntityManager for persistence, given that entities have no knowledge of persistence or even services in general.
So, if I don't pass the EntityManager to the domain layer, how can I pick up on the creation of new objects before they go out of scope?
Would this be a good case for implementing AOP, such as with the JMSAopBundle? This is something I've read about, but haven't really found a valid use case for.
If I understand your problem correctly (your description seems to have been written a bit in a hurry), I would do the following:
Mark your failed nodes and your new nodes with some kind of flag (e.g. a dirty flag)
Have your tree iterator count the number of failed and new nodes
Repeat the tree iteration / re-do process as often as needed, until no failed or new nodes remain to be handled
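The counting step above can be sketched as follows (the node shape `{ status, children }` is an assumption for illustration; the original entities are PHP, this is a language-neutral JS sketch):

```javascript
// Count nodes still flagged as 'failed' or 'new' anywhere in the tree.
// The re-do pass would be repeated until this returns 0.
function countPending(node) {
  const self = node.status === 'failed' || node.status === 'new' ? 1 : 0;
  return self + (node.children || []).reduce(
    (total, child) => total + countPending(child),
    0
  );
}
```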
I just found a contribution from Benjamin Eberlei, regarding business logic changes in the domain layer on a more abstract level: Doctrine and Domain Events
Brief quote and summary from the blog post:
The Domain Event Pattern allows to attach events to entities and dispatch them to event listeners only when the transaction of the entity was successfully executed. This has several benefits over traditional event dispatching approaches:
Puts focus on the behavior in the domain and what changes the domain triggers.
Promotes decoupling in a very simple way
No reference to the event dispatcher and all the listeners required except in the Doctrine UnitOfWork.
No need to use unexplicit Doctrine Lifecycle events that are triggered on all update operations.
Each method requiring action should:
Call a "raise" method with the event name and properties.
The "raise" method should create a new DomainEvent object and set it into an events array stored in the entity in memory.
An event listener should listen to Doctrine lifecycle events (e.g. postPersist), keeping entities in memory that (a) implement events, and (b) have events to process.
This event listener should dispatch a new (custom) event in the preFlush/postFlush callback containing the entity of interest and any relevant information.
A second event listener should listen for these custom events and trigger the logic necessary (e.g. onNewEntityAddedToTree)
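The raise/collect part of the steps above can be sketched as follows. The original is PHP/Doctrine; this is a language-neutral JS sketch, and all names (`raise`, `TreeNode`, the event name) are illustrative:

```javascript
// Entities record domain events in memory instead of touching persistence.
class Entity {
  constructor() { this.events = []; }
  raise(name, payload) {
    this.events.push({ name, payload });
  }
}

// Domain behavior raises an event; a listener hooked into the persistence
// layer's flush cycle would later drain entity.events and dispatch each one.
class TreeNode extends Entity {
  addChild(child) {
    this.raise('newEntityAddedToTree', { child });
  }
}
```

The key property is that the entity never references the event dispatcher or the EntityManager; the events sit in memory until the persistence layer's own lifecycle callbacks pick them up.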
I have not implemented this yet, but it sounds like it should accomplish exactly what I'm looking for in a more automated fashion than the method I actually implemented.