I'm using ngrx/component-store and loving it so far. Having prior store knowledge from building my own simple ones, the only real headache I've had is updating an array: I figured out I always have to create a new array so the internal compare() pipe realizes the array changed.
Anyway, the documentation talks about updater methods and patchState. To me they do exactly the same thing, but they are created slightly differently: you call patchState inside a method, while this.updater() returns a function you can expose from your service. Any time I update my state it's after a network call. I assume there are plenty of scenarios where you'd want to update state without a network call, which is why you'd want an updater available for your component to call. So the question is: if an updater and patchState really do the same thing, is it better practice to call an updater from an effect or to use patchState, or am I putting too much logic in my effect?
On a side note, the docs say an updater method is supposed to be a pure function. If you're using it to push an object onto an array, is it really pure?
// adding the selectors so people know what components are subscribing to
readonly approvals$ = this.select(state => state.requestApprovals);
readonly registration$ = this.select(state => state);
readonly updateAssessment = this.effect(($judgement: Observable<{id: string, isApproved: boolean}>) => {
  return $judgement.pipe(
    switchMap((evaluation) => {
      const state = this.get();
      return this.requestApproval.patch(state.id, state.companyName, evaluation.id, evaluation.isApproved).pipe(
        tapResponse(
          (result) => {
            // is it better to call patchState()?
            this.patchState((state) => {
              for (let i = 0; i < state.requestApprovals.length; i++) {
                if (state.requestApprovals[i].id == result.id) {
                  state.requestApprovals[i].isApproved = result.isApproved;
                }
              }
              // the take away is you must assign a whole new array object when you update it.
              state.requestApprovals = Object.assign([], state.requestApprovals);
              return state;
            });
            // or this updater?
            // this.applyDecisionPatch(evaluation);
          },
          // oh look! another updater reassigning my array to the state so
          // it propagates to subscribers to reset the UI
          () => { this.reverseDecision(); }
        )
      );
    })
  );
});
// this is private to make sure this can only be called after a network request
private readonly applyDecisionPatch = this.updater((state, value: {id: string, isApproved: boolean}) => {
  for (let i = 0; i < state.requestApprovals.length; i++) {
    if (state.requestApprovals[i].id == value.id) {
      state.requestApprovals[i].isApproved = value.isApproved;
    }
  }
  state.requestApprovals = Object.assign([], state.requestApprovals);
  return state;
});
Note: There's no tag for ngrx-component-store so couldn't tag it.
An updater can be compared to a reducer.
All the options to modify the state should change it in an immutable way.
A library like ngrx-immer can be used to make this easier.
The main difference is that an updater receives the current state, so you can change the state based on it, e.g. a conditional update, or use it with @ngrx/entity.
While with setState and patchState, you just set state properties.
setState updates the whole state object, whereas patchState only sets the given properties and doesn't touch the rest of the state object.
These two methods are also easier to use when you just want to set the state, because you don't have to create an updater function.
To answer the side question: push is not immutable. Instead of creating a new array, it mutates the existing array instance.
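For illustration, here is a minimal sketch of both styles applied to the approval scenario from the question (the state shape and names are assumed from the question's code; both are class members of the same component store). Each one produces a new array with map() instead of pushing into or mutating the existing one:

interface RegistrationState {
  id: string;
  companyName: string;
  requestApprovals: { id: string; isApproved: boolean }[];
}

// Updater: a reusable, pure state transition exposed on the store.
readonly applyDecision = this.updater(
  (state: RegistrationState, value: { id: string; isApproved: boolean }) => ({
    ...state,
    // map() returns a brand new array, so subscribers are notified
    requestApprovals: state.requestApprovals.map(approval =>
      approval.id === value.id ? { ...approval, isApproved: value.isApproved } : approval
    ),
  })
);

// patchState: the same transition done inline, e.g. inside tapResponse in an effect.
private applyDecisionInline(value: { id: string; isApproved: boolean }): void {
  this.patchState(state => ({
    requestApprovals: state.requestApprovals.map(approval =>
      approval.id === value.id ? { ...approval, isApproved: value.isApproved } : approval
    ),
  }));
}

Either way the state transition is the same; the updater just gives it a name you can call from multiple places.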
Let's say that two users make changes to the same document while offline, but in different sections of the document. If user 2 goes back online after user 1, will the changes made by user 1 be lost?
In my database, each row contains a JS object, and one property of this object is an array. This array is bound to a series of check-boxes on the interface. What I would like is that if two users change those check-boxes, the latest change is kept for each check-box individually, based on the time when the change was made, not the time when the syncing occurred. Is GroundDB the appropriate tool to achieve this? Is there any means to add an event handler in which I can add some logic that would be triggered when syncing occurs and that would take care of the merging?
The short answer is "yes": none of the Ground DB versions has conflict resolution, since the logic is custom and depends on the desired behaviour, e.g. whether you want to automate it or involve the user.
The old Ground DB simply relied on Meteor's conflict resolution (latest data to the server wins), and I'm guessing you can see some issues with that, depending on the order in which clients come back online.
Ground DB II doesn't have method resume; it's more or less just a way to cache data offline. It observes an observable source.
I guess you could create a middleware observer for GDB II, one that checks the local data before doing the update and updates the client and/or calls the server to update the server data. This way you would have a way to handle conflicts.
I seem to remember writing some code that supported "deletedAt"/"updatedAt" for some types of conflict handling, but again, a conflict handler should be custom for the most part. (Opening the door for reusable conflict handlers might be useful.)
Knowing when data has been removed is especially tricky if you don't "soft" delete via something like a "deletedAt" entity.
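For example, a small sketch of what such a soft delete could look like (the collection and field names are made up here; this is not part of Ground DB itself):

// Instead of removing the document, stamp it, so an offline client can
// still see that it was deleted and resolve conflicts against updatedAt later.
Bars.update(barId, { $set: { deletedAt: new Date(), updatedAt: new Date() } });

// Regular queries then exclude soft-deleted documents:
Bars.find({ deletedAt: { $exists: false } });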
The "rc" branch is currently grounddb-caching-2016 version "2.0.0-rc.4",
I was thinking about something like:
(mind it's not tested, written directly in SO)
// Create the grounded collection
foo = new Ground.Collection('test');
// Make it observe a source (it's aware of createdAt/updatedAt and
// removedAt entities)
foo.observeSource(bar.find());
bar.find() returns a cursor with an observe function; our middleware should expose the same. Let's create a createMiddleware helper for it:
function createMiddleware(source, middleware) {
  // Accept either a cursor or a collection
  const cursor = (typeof (source || {}).observe === 'function') ? source : source.find();
  return {
    observe: function(observerHandle) {
      // Note: Meteor cursor.observe callbacks are named added/changed/removed
      const sourceObserverHandle = cursor.observe({
        added: doc => {
          middleware.added.call(observerHandle, doc);
        },
        changed: (doc, oldDoc) => {
          middleware.changed.call(observerHandle, doc, oldDoc);
        },
        removed: doc => {
          middleware.removed.call(observerHandle, doc);
        },
      });
      // Return the stop handle
      return sourceObserverHandle;
    }
  };
}
Usage:
foo = new Ground.Collection('test');

foo.observeSource(createMiddleware(bar.find(), {
  added: function(doc) {
    // just pass it through
    this.added(doc);
  },
  changed: function(doc, oldDoc) {
    const fooDoc = foo.findOne(doc._id);
    // Example of a simple conflict handler:
    if (fooDoc && doc.updatedAt < fooDoc.updatedAt) {
      // Seems like the foo doc is newer? Let's update the server...
      // (we'll just use the regular bar, since that's the Meteor
      // collection and foo is the grounded data)
      bar.update(doc._id, fooDoc);
    } else {
      // pass through
      this.changed(doc, oldDoc);
    }
  },
  removed: function(doc) {
    // again just pass through for now
    this.removed(doc);
  }
}));
When using Immutable.js with Redux, we get a regular JavaScript object back from combineReducers, meaning it won't be an immutable data structure even if everything within it is. Doesn't this mean that using Immutable.js is in vain, since a whole new state object will be created on every action anyhow?
Example:
const firstReducer = (state = Immutable.Map({greeting : 'Hey!'})) => {
  return state
}

const secondReducer = (state = Immutable.Map({foo : 'bar'})) => {
  return state
}

const rootReducer = combineReducers({
  firstReducer, secondReducer
})
a whole new state object will be created on every action
Yes but the slices of that state that are assigned by combineReducers are not recreated. It's sort of analogous to doing this:
const person = { name: 'Rob' };
const prevState = { person };
const nextState = { person };
I created a new state object (nextState), but its person key is still set to the same exact object as was prevState's person key. There is only one instance of the string 'Rob' in memory.
The problem is when I mutate the person object, I'm changing it for multiple states:
const person = { name: 'Rob' };
const prevState = { person };
person.name = 'Dan'; // mutation
const nextState = { person };
console.log(prevState.person.name); // 'Dan'
Coming back to Redux, once all reducers have been called for the first time, they will have initialized their slices of the application's state, and your application's entire overall state would basically be equal to this:
{
  firstReducer: Immutable.Map({greeting : 'Hey!'}),
  secondReducer: Immutable.Map({foo : 'bar'}),
}
Note that this is a normal object. It has properties that hold Immutable objects.
When an action is dispatched and goes through each reducer, the way you've got it, the reducer simply returns the existing Immutable object back again; it doesn't create a new one. Then the new state is set to an object with a property firstReducer that simply points back to the same Immutable object the previous state was pointing to.
Now, what if we didn't use Immutable for firstReducer:
const firstReducer = (state = {greeting : 'Hey!'}) => {
  return state
}
Same idea, that object that is used as the default for state when the reducer is first called is just passed from the previous state to the next state. There is only ever one object in memory that has the key greeting and value Hey!. There are many state objects, but they simply have a key firstReducer that points to the same object.
This is why we need to make sure we don't accidentally mutate it, but rather replace it when we change anything about it. You can accomplish this without Immutable of course by just being careful, but using Immutable makes it more foolproof. Without Immutable, it's possible to screw up and do this:
const firstReducer = (state = {greeting : 'Hey!'}, action) => {
  switch (action.type) {
    case 'CAPITALIZE_GREETING': {
      const capitalized = state.greeting.toUpperCase();
      state.greeting = capitalized; // BAD!!!
      return state;
    }
    default: {
      return state;
    }
  }
}
The proper way would be to create a new state slice:
const firstReducer = (state = {greeting : 'Hey!'}, action) => {
  switch (action.type) {
    case 'CAPITALIZE_GREETING': {
      const capitalized = state.greeting.toUpperCase();
      const nextState = Object.assign({}, state, {
        greeting: capitalized,
      });
      return nextState;
    }
    default: {
      return state;
    }
  }
}
The other benefit Immutable gives us is that if our reducer's slice of the state happened to have a lot of other data besides just greeting, Immutable can potentially do some optimizations under the hood so that it doesn't have to recreate every piece of data if all we did was change a single value, and yet still ensure immutability at the same time. That's useful because it can help cut down on the amount of stuff being put in memory every time an action is dispatched.
combineReducers() that comes with Redux indeed gives you a plain root object.
This isn’t a problem per se—you can mix plain objects with Immutable objects as long as you don’t mutate them. So you can live with this just fine.
You may also use an alternative combineReducers() that returns an Immutable Map instead. This is completely up to you and doesn’t make any big difference except that it lets you use Immutable everywhere.
Don’t forget that combineReducers() is actually easy to implement on your own.
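For example, here is a minimal sketch of such an alternative (the name combineImmutableReducers is made up for this example); it keeps the root state in an Immutable Map instead of a plain object:

import Immutable from 'immutable';

function combineImmutableReducers(reducers) {
  return function rootReducer(state = Immutable.Map(), action) {
    // Feed each slice reducer its own slice and collect the results
    // back into an Immutable Map.
    return Object.keys(reducers).reduce(
      (nextState, key) => nextState.set(key, reducers[key](state.get(key), action)),
      state
    );
  };
}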
Doesn't this mean that using immutablejs will be in vain since a whole new state object will be created on every action anyhow?
I’m not sure what you mean by “in vain”. Immutable doesn’t magically prevent objects from being created. When you set() in an Immutable Map, you still create a new object, just (in some cases) more efficiently. This doesn’t make a difference for the case when you have two keys in it anyway.
So not using Immutable here doesn’t really have any downsides besides your app being slightly less consistent in its choice of data structures.
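As a small illustration of that point (assuming Immutable is in scope as in the snippets above): set() hands you back a new Map while leaving the old one, and any unchanged entries, alone.

const before = Immutable.Map({ greeting: 'Hey!', nested: Immutable.Map({ deep: true }) });
const after = before.set('greeting', 'HEY!');

console.log(before === after);                              // false: set() returned a new Map
console.log(before.get('greeting'));                        // 'Hey!': the old Map is untouched
console.log(before.get('nested') === after.get('nested'));  // true: unchanged entries are shared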
Meteor Collections have a transform ability that allows behavior to be attached to the objects returned from mongo.
We want to have autopublish turned off so the client does not have access to the database collections, but we still want the transform functionality.
We are sending data to the client with a more explicit Meteor.publish/Meteor.subscribe or the RPC mechanism (Meteor.call()/Meteor.methods()).
How can we have the Meteor client automatically apply a transform like it will when retrieving data directly with the Meteor.Collection methods?
While you can't directly use transforms, there is a way to transform the result of a database query before publishing it. This is what the "publish the current size of a collection" example describes here.
It took me a while to figure out a really simple application of that, so maybe my code will help you, too:
Meteor.publish("publicationsWithHTML", function (data) {
var self = this;
Publications
.find()
.forEach(function(entry) {
addSomeHTML(entry); // this function changes the content of entry
self.added("publications", entry._id, entry);
});
self.ready();
});
On the client you subscribe to this:
Meteor.subscribe("publicationsWithHTML");
But your model still needs to create a collection (on both sides) that is called 'publications':
Publications = new Meteor.Collection('publications');
Mind you, this is not a very good example, as it doesn't maintain the reactivity. But I found the count example a bit confusing at first, so maybe you'll find it helpful.
(Meteor 0.7.0.1) Meteor does allow behavior to be attached to the objects returned via pub/sub.
This is from a pull request I submitted to the meteor project.
Todos = new Meteor.Collection('todos', {
  // transform allows behavior to be attached to the objects returned via the pub/sub communication.
  transform : function(todo) {
    todo.update = function(change) {
      Meteor.call('Todos_update', this._id, change);
    };
    todo.remove = function() {
      Meteor.call('Todos_remove', this._id);
    };
    return todo;
  }
});

todosHandle = Meteor.subscribe('todos');
todosHandle = Meteor.subscribe('todos');
Any objects returned via the 'todos' topic will have the update() and the remove() function - which is exactly what I want: I now attach behavior to the returned data.
Try:
let transformTodo = (fields) => {
  fields._pubType = 'todos';
  return fields;
};

Meteor.publish('todos', function() {
  let subHandle = Todos
    .find()
    .observeChanges({
      added: (id, fields) => {
        fields = transformTodo(fields);
        this.added('todos', id, fields);
      },
      changed: (id, fields) => {
        fields = transformTodo(fields);
        this.changed('todos', id, fields);
      },
      removed: (id) => {
        this.removed('todos', id);
      }
    });
  this.ready();

  this.onStop(() => {
    subHandle.stop();
  });
});
Currently, you can't apply transforms on the server to published collections. See this question for more details. That leaves you with either transforming the data on the client, or using a meteor method. In a method, you can have the server do whatever you want to the data.
In one of my projects, we perform our most expensive query (it joins several collections, denormalizes the documents, and trims unnecessary fields) via a method call. It isn't reactive, but it greatly simplifies our code because all of the transformation happens on the server.
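As a rough sketch of that method-call approach (the collection, field, and method names here are made up for illustration, not taken from the question):

// Server: join, denormalize, and trim the documents inside a method.
Meteor.methods({
  denormalizedTodos: function () {
    return Todos.find({ ownerId: this.userId }).map(function (todo) {
      var owner = Meteor.users.findOne(todo.ownerId);
      return {
        _id: todo._id,
        text: todo.text,
        ownerName: owner && owner.username // joined and trimmed on the server
      };
    });
  }
});

// Client: not reactive, but all transformation logic stays on the server.
Meteor.call('denormalizedTodos', function (error, todos) {
  if (!error) {
    console.log(todos);
  }
});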
To extend Christian Fritz's answer, here is a reactive solution using peerlibrary:reactive-publish:
Meteor.publish("todos", function() {
const self = this;
return this.autorun(function(computation) {
// Loop over each document in collection
todo.find().forEach(function(entry) {
// Add function to transform / modify each document here
self.added("todos", entry._id, entry);
});
});
});
In my client UI I have a form with different search criteria, and I'd like to reactively update the results list. The search query is transformed into a classical minimongo selector, saved in a Session variable, and then I have observers that do things with the results:
// Think of an Airbnb-like application
// The session variable `search-query` is updated via a form
// example: Session.set('search-query', {price: {$lt: 100}});
Offers = new Meteor.Collection('offers');
Session.setDefault('search-query', {});

resultsCursor = Offers.find(Session.get('search-query'));

// I want to add and remove pins on a map
resultsCursor.observe({
  added: Map.addPin,
  removed: Map.removePin
});

Deps.autorun(function() {
  // I want to modify the cursor selector and keep the observers
  // so that I would only have the diff between the old search and
  // the new one
  // This `modifySelector` method doesn't exist
  resultsCursor.modifySelector(Session.get('search-query'));
});
How could I implement this modifySelector method on the cursor object?
Basically I think this method needs to update the compiled version of the cursor, i.e. the selector_f attribute, and then rerun the observers (without losing the cache of the previous results). Or is there any better solution?
Edit: Some of you have misunderstood what I'm trying to do. Let me provide a complete example:
Offers = new Meteor.Collection('offers');

if (Meteor.isServer && Offers.find().count() === 0) {
  for (var i = 1; i < 4; i++) {
    // Inserting documents {price: 1}, {price: 2} and {price: 3}
    Offers.insert({price: i})
  }
}

if (Meteor.isClient) {
  Session.setDefault('search-query', {price: 1});

  resultsCursor = Offers.find(Session.get('search-query'));
  resultsCursor.observe({
    added: function (doc) {
      // First, this added observer is fired once with the document
      // matching the default query {price: 1}
      console.log('added:', doc);
    }
  });

  setTimeout(function() {
    console.log('new search query');
    // Then one second later, I'd like to have my "added observer" fired
    // twice with docs {price: 2} and {price: 3}.
    Session.set('search-query', {});
  }, 1000);
}
This doesn't solve the problem in the way you seem to be wanting to, but I think the result is still the same. If this is a solution you explicitly don't want, let me know and I can remove the answer. I just didn't want to put code in a comment.
Offers = new Meteor.Collection('offers');
Session.setDefault('search-query', {});

Template.map.pins = function() {
  return Offers.find(Session.get('search-query'));
}

Template.map.placepins = function(pins) {
  // use d3 or whatever to clear the map and then place all pins on the map
}
Assuming your template is something like this:
<template name="map">
{{placepins pins}}
</template>
One solution is to manually diff the old and the new cursors:
# Every time the query changes, do a diff to add, move and remove pins on the screen
# Assuming that the pin order is always the same, this uses a single loop of complexity
# O(n) rather than the naive loop-in-loop of complexity O(n^2)
Deps.autorun =>
  old_pins = @pins
  new_pins = []
  position = 0
  old_pin = undefined # This variable needs to be in the Deps.autorun scope
  # This is a simple algo to implement a kind of "reactive cursor"
  # Sorting is done on the server, it's important to keep the order
  collection.find(Session.get('search-query'), sort: [['mark', 'desc']]).forEach (product) =>
    if not old_pin?
      old_pin = old_pins.shift()
    while old_pin?.mark > product.mark
      @removePin(old_pin)
      old_pin = old_pins.shift()
    if old_pin?._id == product._id
      @movePin(old_pin, position++)
      new_pins.push(old_pin)
      old_pin = old_pins.shift()
    else
      newPin = @render(product, position++)
      new_pins.push(newPin)
  # Finish the job
  if old_pin?
    @removePin(old_pin)
  for old_pin in old_pins
    @removePin(old_pin)
  @pins = new_pins
But it's a bit hacky and not so efficient. Moreover the diff logic is already implemented in minimongo so it's better to reuse it.
Perhaps an acceptable solution would be to keep track of old pins in a local collection? Something like this:
Session.setDefault('search-query', {});

var Offers = new Meteor.Collection('offers');
var OldOffers = new Meteor.Collection(null);

var addNewPin = function(offer) {
  // Add a pin only if it's a new offer, and then mark it as an old offer
  if (!OldOffers.findOne({_id: offer._id})) {
    Map.addPin(offer);
    OldOffers.insert(offer);
  }
};

var removePinsExcept = function(ids) {
  // Clean out the pins that no longer exist in the updated query,
  // and remove them from the OldOffers collection
  OldOffers.find({_id: {$nin: ids}}).forEach(function(offer) {
    Map.removePin(offer);
    OldOffers.remove({_id: offer._id});
  });
};

Deps.autorun(function() {
  var offers = Offers.find(Session.get('search-query'));
  removePinsExcept(offers.map(function(offer) {
    return offer._id;
  }));
  offers.observe({
    added: addNewPin,
    removed: Map.removePin
  });
});
I'm not sure how much faster this is than your array answer, though I think it's much more readable. The thing you need to consider is whether diffing the results as the query changes is really much faster than removing all the pins and redrawing them each time. I would suspect that this might be a case of premature optimization. How often do you expect a user to change the search query, such that there will be a significant amount of overlap between the results of the old and new queries?
I have the same problem in my own hobby Meteor project.
There is a filter session variable where the selector is stored. Triggering any checkbox or button changes the filter, and all of the UI rerenders.
That solution has some cons, the main one being that you can't share app state with other users.
So I realized that a better way is to store the app state in the URL.
Maybe that is also better in your case?
Clicking a button now changes the URL, and the UI renders based on it. I implemented this with FlowRouter, as sketched below.
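A minimal sketch of that URL-as-state idea with kadira:flow-router (the template and field names are made up for this example):

// Writing the state: a form handler pushes the filter into the URL.
Template.searchForm.events({
  'change input[name=maxPrice]': function (event) {
    FlowRouter.setQueryParams({ maxPrice: event.target.value });
  }
});

// Reading the state: helpers rebuild the selector reactively from the URL,
// so the result list rerenders whenever the query params change.
Template.map.helpers({
  pins: function () {
    var maxPrice = Number(FlowRouter.getQueryParam('maxPrice'));
    var selector = maxPrice ? { price: { $lt: maxPrice } } : {};
    return Offers.find(selector);
  }
});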
Helpful reading: Keeping App State on the URL
I have two collections
Offers (relevant fields: _id)
ShareRelations (relevant fields: receiverId and offerId)
and I'd like to publish only Offers to the logged in user which have been shared to him.
Actually, I'm doing this by using a helper array (visibleOffers), which I fill by looping over each ShareRelation and then use in Offers.find as an $in selector.
I wonder if this is the Meteor way to do this, or if I could do it with less and/or prettier code?
My actual code to publish the Offers is the following:
Meteor.publish('offersShared', function () {
  // check if the user is logged in
  if (this.userId) {
    // initialize helper array
    var visibleOffers = [];
    // initialize all shareRelations of which the actual user is the receiver
    var shareRelations = ShareRelations.find({receiverId: this.userId});
    // check if such relations exist
    if (shareRelations.count()) {
      // loop through all shareRelations and push the offerId to the array if the value isn't in the array already
      shareRelations.forEach(function (shareRelation) {
        if (visibleOffers.indexOf(shareRelation.offerId) === -1) {
          visibleOffers.push(shareRelation.offerId);
        }
      });
    }
    // return offers whose _id is in the array visibleOffers
    return Offers.find({_id: { $in: visibleOffers } });
  } else {
    // return no offers if the user is not logged in
    return Offers.find(null);
  }
});
Furthermore, the actual solution has the downside that if a new share relation is created, the Offers collection on the client doesn't get updated with the newly visible offer instantly (read: a page reload is required; but I'm not sure whether this is because of this publish method or because of some other code, and this question isn't primarily about that issue).
What you are looking for is a reactive join. You can accomplish this by directly using an observe in the publish function, or by using a library to do it for you. Meteor core is expected to have a join library at some point, but until then I'd recommend using publish-with-relations. Have a look at the docs, but I think the publish function you want looks something like this:
Meteor.publish('offersShared', function() {
  return Meteor.publishWithRelations({
    handle: this,
    collection: ShareRelations,
    filter: {receiverId: this.userId},
    mappings: [{collection: Offers, key: 'offerId'}]
  });
});
This should reactively publish all of the ShareRelations for the user, and all associated Offers. Hopefully publishing both won't be a problem.
PWR is a pretty legit package - several of us use it in production, and Tom Coleman contributes to it. The only thing I'll caution you about is that as of this writing, the current version in atmosphere (v0.1.5) has a bug which will result in a fairly serious memory leak. Until it gets bumped, see my blog post about how to run an updated local copy.
update 2/5/14:
The discover meteor blog has an excellent post on reactive joins which I highly recommend reading.
The way to do this is along the lines of this question, using observeChanges(). I'm still trying to figure out how to get it all working for my example; see Meteor, One to Many Relationship & add field only to client side collection in Publish?
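For what it's worth, here is a rough, untested sketch of that observeChanges() approach, simplified to additions only (a complete solution would also handle changed/removed relations and stop publishing offers that are no longer shared):

Meteor.publish('offersShared', function () {
  var self = this;
  if (!this.userId) {
    return self.ready();
  }
  var publishedOfferIds = {};
  var handle = ShareRelations.find({receiverId: this.userId}).observeChanges({
    added: function (id, fields) {
      var offer = Offers.findOne(fields.offerId);
      // publish each shared offer once as it becomes visible
      if (offer && !publishedOfferIds[offer._id]) {
        publishedOfferIds[offer._id] = true;
        self.added('offers', offer._id, offer);
      }
    }
  });
  self.ready();
  self.onStop(function () {
    handle.stop();
  });
});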
You can use the reactive-publish package (I am one of authors):
Meteor.publish('offersShared', function () {
  // check if the user is logged in
  if (this.userId) {
    this.autorun(function (computation) {
      // initialize helper array
      var visibleOffers = [];
      // initialize all shareRelations of which the actual user is the receiver
      var shareRelations = ShareRelations.find({receiverId: this.userId}, {fields: {offerId: 1}});
      // loop through all shareRelations and push the offerId to the array if the value isn't in the array already
      shareRelations.forEach(function (shareRelation) {
        if (visibleOffers.indexOf(shareRelation.offerId) === -1) {
          visibleOffers.push(shareRelation.offerId);
        }
      });
      // return offers whose _id is in the array visibleOffers
      return Offers.find({_id: { $in: visibleOffers } });
    });
  } else {
    // return no offers if the user is not logged in
    return Offers.find(null);
  }
});
You can simply wrap your existing non-reactive code into an autorun and it will start to work. Just be careful to be precise about which fields you query on, because if you query all fields, the autorun will rerun on any field change of ShareRelations, not just offerId.