I've been struggling with the following issue in Meteor + Iron Router:
I have a page (route) that has a subscription to a Mongo collection.
On that page, I have some logic which relies on a cursor querying the collection, also utilizing an observeChanges handler (namely, I'm running a search on that collection).
The problem is that the collection is preserved on the client across route changes, which causes two unwanted effects:
1) The collection isn't necessarily needed outside that route, meaning I'm wasting client RAM (the collection, or even a subset of it, is likely to be quite big).
2) Whenever I go back to that route, I want to start off with an empty subset for the observeChanges handler to work properly.
Any advice on how to clear the mirrored collection? (Using the Collection._collection.remove({}) hack is bad practice, and doesn't even solve the problem.)
Thanks!
Solved this by storing the subscription handles and using them to unsubscribe the data (i.e. subscription_handle.stop()) in template.destroyed().
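A minimal sketch of that handle-tracking idea, with the Meteor-specific parts reduced to plain JavaScript (in Meteor, a handle comes from Meteor.subscribe(...) and stopAll() would be called from the template's destroyed callback; the tracker itself and its names are invented here):

```javascript
// Keep every subscription handle so they can all be stopped on teardown.
function makeSubscriptionTracker() {
  const handles = [];
  return {
    // Remember a subscription handle for later.
    track(handle) {
      handles.push(handle);
      return handle;
    },
    // Call from template.destroyed(): stopping the subscriptions tells the
    // server to clear the mirrored documents out of the client collection.
    stopAll() {
      handles.forEach(h => h.stop());
      handles.length = 0;
    },
  };
}
```

Stopping the handles (rather than calling remove() on the client collection) lets Meteor's own bookkeeping reclaim the documents, which also gives the observeChanges handler a clean slate on the next visit.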
Background: I am using Firestore as the main database for my (web) application. I also pre-render the data stored in there, which basically means that I collect all data needed for specific requests so I can later fetch them in a single read access, and I store that pre-rendered data in a separate Firestore collection.
When a user changes some data, I want to know when this background rendering is finished, so I can then show updated data. Until rendering is finished, I want to show a loading indicator ("spinner") so the user knows that what he is currently looking at is outdated data.
Until now, I planned to have the application write the changed data into the database and use a cloud function to propagate the changed data to the collection of pre-rendered data. This poses a problem because the writing application only knows when the original write access is finished, but not when the re-rendering is finished, so it doesn't know when to update its views. I can hook into the table of rendered views to get an update when the rendering finishes, but that callback won't be notified if nothing visibly changes, so I still do not know when to remove the spinner.
My second idea was to have the renderer function publish to a pubsub topic, but this fails because if the user's request happens to leave the original data unchanged, the onUpdate/renderer is not called, so nothing gets published on the pubsub and again the client does not know when to remove the spinner.
In both cases, I could theoretically first fetch the data and look if something changed, but I feel that this too easily introduces subtle bugs.
My final idea was to disallow direct writes to the database and have all write actions be performed through cloud functions instead, that is, more like a classical backend. These functions could then run the renderer and only send a response (or publish to a pubsub) when the renderer is finished. But this has two new problems: First, these functions have full write access to the whole database and I'm back to checking the user's permissions manually like in a classical backend, not being able to make use of Firestore's rules for permissions. Second, in this approach the renderer won't get before/after snapshots automatically like it would get for onUpdate, so I'm back to fetching each record before updating so the renderer knows what changed and won't re-render huge parts of the database that were not actually affected at all.
Ideally, what (I think) I need is either
(1) a way to know when a write access to the database has finished including the onUpdate trigger, or
(2) a way to have onUpdate called for a write access that didn't actually change the database (all updated fields were updated to the values they already contained).
Is there any way to do this in Firestore / cloud functions?
You could increment a counter in the rendered documents, in such a way that a field always changes even if there is no change to the "meaningful" fields.
For that, the best option is to use FieldValue.increment.
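A plain-JavaScript model of the counter trick (the field name renderVersion is invented; in a real Cloud Function the bump would be admin.firestore.FieldValue.increment(1) inside a set(..., { merge: true }) so the write stays atomic server-side):

```javascript
// Every render pass bumps a version field, so a listener on the rendered
// document fires even when the "meaningful" fields come out identical.
function renderDocument(existingDoc, renderedFields) {
  return {
    ...existingDoc,
    ...renderedFields,
    renderVersion: (existingDoc.renderVersion || 0) + 1,
  };
}

const v1 = renderDocument({}, { title: 'Hello' });
const v2 = renderDocument(v1, { title: 'Hello' }); // no visible change...
// ...but v2.renderVersion !== v1.renderVersion, so onSnapshot still fires
```

The client can then remove the spinner as soon as it sees any renderVersion newer than the one it had when it issued the write, regardless of whether the visible fields changed.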
I have two components. In the first one I have an array of objects I get by calling the API (in my useEffect, only if the array in the store is empty, to avoid unnecessary calls). In the second one, the same array but with buttons to call the API and DELETE or POST a new object to that array on the server. My question is: is it better to create two actions in my second component, for making the API updates, and then filter or push that object in my first component's reducer? Or just make the first component always call the API and update itself?
I don't know if it's better to update the store as much as you can without relying on an API call, or whether always updating through an API call will be smoother.
I would recommend you always update the first component with an API call. That way, you can be sure the data has been updated properly in the DB via the service call, and you will always get up-to-date data from the DB.
Doing more and more updates in the UI and keeping them in the store will leave you with dirty data after a while.
It also makes your app buggy (you would have to keep reverting/clearing the data in the store in failure scenarios, which is easy to miss).
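A hedged sketch of that recommendation in Redux-thunk style (the api object, action type, and function names are all invented for illustration): after the server mutation succeeds, re-fetch the list instead of patching the store copy by hand.

```javascript
// After a DELETE succeeds, re-read the whole list from the server and replace
// the store slice, rather than filtering the local array. The store then
// always mirrors the DB. `api` is an injected object exposing
// deleteItem(id) and fetchItems(), both returning promises.
const deleteAndRefresh = (id, api) => async (dispatch) => {
  await api.deleteItem(id);               // mutate on the server first
  const items = await api.fetchItems();   // then re-read the source of truth
  dispatch({ type: 'items/loaded', payload: items });
};
```

The trade-off is one extra network round-trip per mutation, in exchange for never having to hand-maintain consistency (or roll back optimistic updates) in the reducer.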
Proponents of ngrx claim (here, for example) that you can and should keep all your application state in a single Store. That suggests @ngrx/store can be used for caching, because the contents of a cache are a type of application state.
A cache in a web-application is a thing that returns data when it has it, and encapsulates the requesting of the data from the server when it doesn’t. The Wikipedia article on caching calls it a cache hit when the data is available and a cache miss when it isn't.
From a functional programming perspective we can immediately see that reading data from a cache is functionally-impure - it has a side-effect which is that data may be requested from the server and retained in the cache. I don't know how to do this with ngrx which, for example, requires its Selectors to be functionally-pure.
It might help to consider this Caching With RxJs Observables in Angular tutorial (rxjs is said to be extremely complementary to ngrx). We don't have to scroll far to find the getFriends() function complete with side-effect:
getFriends() {
  if (!this._friends) {
    this._friends = this._http.get('./components/rxjs-caching/friends.json')
      .map((res: Response) => res.json().friends)
      .publishReplay(1)
      .refCount();
  }
  return this._friends;
}
Also the contents of the Store seem to be universally available to the entire application. The only controls are on how the state can be updated, but it is foolish to go poking around in a cache's raw data unmediated because there are no guarantees about which cache items are available and which are not available.
Hopefully these concerns can be addressed and there's a way of doing this that I've missed. Can you please show me a good way to go about using @ngrx/store as a cache?
You can do something like this:
friends$ = createEffect(() =>
  this.actions$.pipe(
    ofType(ListActions.LOAD),
    withLatestFrom(this.store.select(fromSelectors.loadedList)),
    switchMap(([action, friends]) => {
      if (friends) {
        // Cache hit: re-emit the list already held in the store.
        return of(new ListActions.Loaded(friends));
      }
      // Cache miss: fetch from the server; the reducer stores the result.
      return this.http.get<Friend[]>('/friendApi').pipe(
        map(result => new ListActions.Loaded(result)),
        catchError(() => of(new ListActions.Failed()))
      );
    })
  )
);
With the ‘withLatestFrom’ operator, we pull the store (loaded list) state into our effect alongside the dispatched action and check whether the list is already filled; if so, we dispatch the existing list, otherwise we make the REST call and update our store in the reducer.
For a detailed answer refer to this Medium post
In ngrx there are reducers (pure functions) which change the state (or cache, if you will). Those reducers are triggered by actions which you dispatch on the store.
From the store you request slices of data by selectors and subscribe to their changes.
To implement cache logic, you check whether the data is available, and if not, you dispatch an action like "LoadDataSliceAction", which triggers a side effect that loads the data into the store.
Does this help?
It’s worth noting that browsers come with a cache as standard, and it’s the real deal, encapsulating requests and encapsulating data, because it possesses none of the limitations of ngrx discussed in the question.
We’re using the browser's cache for storing entities these days. Borrowing my definition from here:
An entity is an object with long-lived data values that you read from the server (and possibly write back to it).
Entity data is only one kind of application data. Configuration data, user roles, the user’s prior selections, the current state of screen layouts… these are all important data that may benefit from coding with standard ngrx techniques, but they’re not entities.
This overturns one of Redux's fundamental principles (ngrx being an implementation of Redux), namely that the ngrx store is the single source of truth for your whole application. Here we are saying something different: the browser cache is the single source of truth for our entities, and the ngrx store is the single source of truth for everything else (actually it's slightly worse than that: if an entity is editable, the Store has to assume responsibility for it while it is being edited).
The basics are trivial. We set a version number and a cache age on the http response of our entities:
X-Version: 23
Cache-Control: max-age=86400
When we receive the HTTP response we process its body exactly as we might have done before, but we do not store it anywhere. We leave that to the browser, and if we need the data a second time we simply re-request it, and it arrives almost instantaneously because the browser serves it from its cache.
And if we see an entity with an out-of-date version, we immediately re-request it, being sure to override the cache by setting Cache-Control: no-cache in the request headers. This ensures we are always working with the current version of the entity.
this.http.get<T>(
  url,
  { ...options, headers: { 'Cache-Control': 'no-cache' } }
);
But how do we know a version is out of date? We are fortunate in that our versioning system is very granular, so we don’t get updates very often. Updates to current version numbers come via a web socket which I wasn’t involved in programming. So bear in mind that we are blessed, and this approach may not work for you (at least without putting some serious thought into it), and there are only two hard things in Computer Science: cache invalidation and naming things ;-)
Also, some vigilance is required, as there are a couple of ways I am aware of that the browser cache can be accidentally disabled, which now has terrible potential as a performance drain:
Inside the debugger, e.g. there's a "Disable cache" check-box in Chrome DevTools.
A bad SSL certificate. If the browser doesn't trust the server then it's right not to be retaining the data received. This is only a problem in a test environment where it's tempting to cheat and respond to "your connection is not private" by clicking "advanced" and "proceed".
So how's that for a solution (it's not going to make everyone happy is it)? Ngrx cannot be used as a bona-fide cache, but the browser can!
There are two main approaches I have found. They both involve adding a timestamp to any sections of your store that need to expire. The first is to check the age in the selector function, and if it exceeds the limit you dispatch the retrieval action. This one is fairly straightforward using the createSelector syntax and having access to a store for dispatch. The other requires you to dispatch the retrieval action as normal, then check the age of the data in the effect prior to retrieval. This can be achieved by combining the store observable with something like combineLatest (https://medium.com/@viestursv/how-to-get-store-state-in-ngrx-effect-fab9e9c8f087).
Example for the first option:
createSelector(previousSelector, (item) => {
  const now = new Date();
  const limit = new Date(now.getTime() - (5 * 60000));
  if (item.age < limit) {
    this.store.dispatch(new getFreshDataAction());
  }
  return item;
});
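The core check behind the second option can be reduced to a pure helper like this (the function and parameter names are invented); the effect would run it against the slice's timestamp, pulled in via withLatestFrom, before deciding whether to call the API:

```javascript
// Returns true when the cached slice should be re-fetched: either it has
// never been loaded (no timestamp) or it is older than maxAgeMs.
function isStale(fetchedAtMs, maxAgeMs, nowMs = Date.now()) {
  return fetchedAtMs == null || (nowMs - fetchedAtMs) > maxAgeMs;
}

// isStale(undefined, 5 * 60000)  -> true (never fetched, so always load)
```

Keeping the check pure like this makes it trivially unit-testable, and leaves the impure parts (dispatching, the HTTP call) to the effect where ngrx expects them.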
I'm starting to read up on Redux and I like what I see so far. There is one area of concern that I have and it's async actions. I've seen how sagas, thunk, and others help in this area but I feel like I'm missing something.
I'm a fan of using FSMs to handle application logic and prevent things from getting out of sync, so coming to redux and not seeing something like this is a bit jarring to me.
I have an example in my mind that I keep coming back to that I want redux (or some plugin) to solve:
I have a login form. When the user presses login, an async call is made to a server. This is fine. But if the user presses login again, then another call is made to the server and the application state is now out of sync. I could add a state property that defines loggingIn, but this only works in this example.
Let's say I have a much bigger dependency. Let's say that when a user logs in, an action is done to preload some data, and that data contains a preflight of more data to load. This now introduces a lot of if/else conditional logic and adds more information to the state. In an FSM, I would define multiple concurrent states for these, such as user:loggedIn, manifest:fetched, pageData:fetched. And each state would be a child of the previous: user:loggedIn -> manifest:fetched -> pageData:fetched. So if a request was made to log in, or to refetch data, an error would be thrown because it's not a valid handler for the current state. How does one accomplish such complexity in Redux? Is it even possible?
Another example, stemming from the Reddit API example: let's say a user goes to a Reddit post. I don't want the user to be able to submit a comment on the post before it has even loaded. I know this could be handled on the UI side with conditionals and such, but that seems really archaic.
I really appreciate any guidance.
Thanks!
I want to write a web app order system using the REST methodology for the first time. I understand the concept of the "message id" when things get posted to a page, but this scenario comes up: once a user posts to the web app, you can keep track of their state with an id attached to the URI, but what happens if they hit the back button of the browser to the entry point of the app, when they didn't have any id? They then lose their state in the transaction.
I know you can always give them a cookie but you can't do that if they have cookies turned off and, worst case thinking here, they also have javascript turned off.
Now, I understand the answer may be "Yes, that's what will happen", that's the end of the story, and I can live with that but, being new to this, is there something I'm missing?
REST doesn't really have states server-side; you simply point to resources. User sessions aren't tracked; instead cookies are used to track application state. However, if you find that you really do need session state, then you are going to have to break REST and track it on the server.
A few things to consider:
How many of your users have cookies disabled anyway? How many even know how to do that?
Is it really likely that your users will have JS turned off? If so, how will you accomplish PUT (edit) and DELETE (delete) without AJAX?
EDIT: Since you do not want to force cookies and JavaScript, then you cannot have a truly RESTful system. But you can somewhat fake it. You are going to need to track a user server-side. You could do this with a session object, as found in your language/framework of choice or by adding a field to the database for whatever you want to know. Of course, when the user hits the back button, they will likely be going to a cached page. If that's not OK, then you will need to modify the headers to disallow caching. Basically, it gets more complicated if you don't use cookies, but not unmanageable.
What about the missing PUT and DELETE HTTP methods? You can fake those with POSTs and a hidden parameter specifying whether or not you are making something new, editing something, or deleting a record. The GET shouldn't really change.
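The hidden-parameter trick can be sketched as a tiny server-side helper (plain JavaScript with invented names; frameworks typically ship this same idea as "method override" middleware):

```javascript
// Given an incoming request, decide which HTTP method the client *meant*:
// a POST carrying a hidden `_method` field of PUT or DELETE is treated as
// that method; everything else passes through unchanged.
function effectiveMethod(req) {
  if (req.method === 'POST' && req.body && req.body._method) {
    const m = String(req.body._method).toUpperCase();
    if (m === 'PUT' || m === 'DELETE') return m;
  }
  return req.method;
}
```

On the HTML side, the form simply POSTs with an extra hidden input named _method, so no JavaScript is required on the client.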
The answer is that your application (in a REST scenario) simply doesn't keep track of what happens. All state is managed by the client, and state transitions are effected through URI navigation. The "State Transfer" part of REST refers to client navigation to new URIs which are new states.
A URI accessed with GET is effectively a read-only operation as per both the HTTP spec and the REST methodology. That means if the client "backs up" to some previous URI, "the worst" that happens is another GET is made and more data is loaded.
Let's say the client does this (using highly simplified pseudo-HTTP)...
GET //site.com/product/123
This retrieves information (or maybe a page) about product ID 123, which presumably includes a reference to a URI which can be used to POST that item into the user's shopping cart. So the user decides to buy the item. Again, it's oversimplified but:
POST //site.com/shoppingcart/
{productid = 123}
The return from this might be a representation of the shopping cart, or a reference to the added item (which could be used on the shoppingcart URI with DELETE to remove the item again), or a variety of other things (such as deeper XML describing the cart contents with other URIs pointing to the cart items and back to the original products). It's all up to you.
But the "state" is defined by whatever the client is doing. It isn't tracked on the server at all, although you will certainly keep a copy of his shopping cart contents in your database for some period of time. (I once returned to a website two years later and my shopping cart items were still there...) But it's up to him to keep track of the ID. To your server app it's just another record, presumably with some kind of expiration.
In this way you don't need cookies, and javascript is entirely dependent on the client implementation. It's difficult to build a decent REST client without script -- you could probably build something with XSLT and only return XML from the server, but that's probably more pain than anyone needs.
The starting point is to really understand REST, then design the system, then build it. It definitely doesn't lend itself to building it on the fly like most other systems do (right or wrong).
This is an excellent article that gives you a fairly "pure" look at REST without getting too abstract and without bogging you down with code:
http://www.infoq.com/articles/subbu-allamaraju-rest
It is true that the "S" in REST stands for "state" and the "T" for "transfer". But the state is kept on the client, not on the server. The client always has all the information necessary to decide for itself in which direction it wants to change the state.
The way you describe it, your system is not RESTful.