I am working with RTK Query and I have 2 different sources of data:
an ongoing websocket connection that updates the cache with updateCachedData whenever a relevant notification is received,
HTTP requests to the API to get existing data from the DB.
The cached data is stored inside of api > queries > endpointname > data.
When I make a particular API call I want to update the cached data with that result. updateCachedData is not available for mutations, so I am not sure how this can be achieved.
Also, should I keep a copy of that cached data inside of a normal slice?
Thanks
I've tried researching the subject but it's unclear. Some people state it's a good idea to keep a copy of that data inside of a normal slice and update it there, but then this data will be there indefinitely.
Hmm, it sounds like you are trying to use a mutation to keep data around for longer, and mutations are just not meant for that. A mutation is meant to make a change on the server, nothing more.
So I would assume that you are just using a mutation for something that should rather be a query here. If you want to trigger it a little bit later, maybe also a lazy query, or a query with a skip option.
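For illustration, a minimal sketch of the lazy-query route; the endpoint and type names here are made up, not from the question:

```ts
// Minimal sketch: a query endpoint plus its auto-generated lazy hook.
// The request only fires when the trigger is called, but the result
// still lands in the normal query cache (api > queries > getItem),
// where updateCachedData from a websocket handler can also reach it.
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'

interface Item {
  id: string
  value: string
}

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (builder) => ({
    getItem: builder.query<Item, string>({
      query: (id) => `items/${id}`,
    }),
  }),
})

// RTK Query generates useLazyGetItemQuery for the getItem endpoint.
export const { useLazyGetItemQuery } = api

// In a component:
//   const [fetchItem, { data }] = useLazyGetItemQuery()
//   ...later, when you actually want the call: fetchItem('42')
```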
I have a use case wherein I save an entity for the first time, and a second after saving it I fetch it, update it a bit, and save it in a batch along with two other entities (different 'kinds'). In a few cases (10 out of 50K), the update to the datastore is ignored.
I mean, it's there in the Objectify cache, but the change didn't happen in the datastore.
The way I can justify the above is that after the save, I fetch it again after a second and I'm able to see it.
PS: I also use .now() while saving. This shouldn't happen when now() is used, right?
Sounds like you are seeing eventual consistency in the datastore. There is quite a bit of Google documentation available, but this looks to be the most comprehensive:
https://cloud.google.com/datastore/docs/articles/balancing-strong-and-eventual-consistency-with-google-cloud-datastore/
There are a lot of ways to deal with eventual consistency, either by avoiding it (using get-by-key operations), changing the structure of your data (using #Parent relationships), or masking it with UI behavior (say, add the new thing to the list in UI code instead of just refreshing the whole list).
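To illustrate the first two of those options: the question uses Objectify, but with the Node.js @google-cloud/datastore client the difference between the consistency modes looks roughly like this sketch (kind and key names are made up):

```ts
import { Datastore } from '@google-cloud/datastore'

const datastore = new Datastore()

async function demo() {
  // Get-by-key is strongly consistent: a read immediately after a
  // save will see the saved entity.
  const key = datastore.key(['Entity', 12345]) // hypothetical kind/id
  const [entity] = await datastore.get(key)

  // A broad (non-ancestor) query was only eventually consistent at
  // the time of this answer: a just-saved entity may not show up yet.
  const [recent] = await datastore.runQuery(
    datastore.createQuery('Entity').limit(10)
  )

  // An ancestor query is strongly consistent within its entity group,
  // which is what the parent relationship mentioned above buys you.
  const parentKey = datastore.key(['Parent', 'root']) // hypothetical
  const [children] = await datastore.runQuery(
    datastore.createQuery('Entity').hasAncestor(parentKey)
  )
}
```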
Most examples of Flux use a todo or chat example. In all those examples, the data set you are storing is somewhat small and can be kept locally, so I'm not exactly sure if my planned use of stores falls in line with the Flux "way".
The way I intend to use stores is somewhat like ORM repositories: a way to access data in multiple ways and persist data to the data service, whatever that might be.
Let's say I am building a project management system. I would probably have methods like these for data retrieval:
getIssueById
getIssuesByProject
getIssuesByAssignedUser
getIssueComments
getIssueCommentById
etc...
I would also have methods like this for persisting data to the data service:
addIssue
updateIssue
removeIssue
addIssueComment
etc...
The one main thing I would not do is locally store any issue data (and for that matter, most store data that relates to a data store). Most of the data is important to have fresh, because maybe the issue status has been updated since I last retrieved the issue. All my data retrieval methods would probably always make an API request to get the latest data.
Is this against the Flux "way"? Are there any issues with going about Flux in this way?
I wouldn't get too hung up on the term "store". You need to create application state in some way if you want your components to render something. If you need to clear that state every time a different request is made, no problem. Here's how things would flow with getIssueById(), as an example:
component calls store.getIssueById(id)
the store returns an empty object since the issue isn't in its cache
the store calls action.fetchIssue(id)
component renders empty state
server responds with issue data and calls action.receiveIssue(data)
store caches that data and dispatches a change event
component responds to event by calling store.getIssueById(id)
the issue data is returned
component renders data
Persisting changes would be similar, with only the most recent server response being held in the store.
user interaction in component triggers action.updateIssue(modifiedIssue)
store handles action, sending changes to server
server responds with updated issue and calls action.receiveIssue(data)
...and so on with the last 4 steps from above.
As you can see, it's not really about modeling your data, just controlling how it comes and goes.
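To make those steps concrete, here is a rough sketch of such a store in plain TypeScript; the names are invented to match the steps above, and a real Flux app would route the actions through a dispatcher rather than calling functions directly:

```ts
import { EventEmitter } from 'events'

interface Issue {
  id: number
  title: string
  status: string
}

const cache = new Map<number, Issue>()
const emitter = new EventEmitter()

export const IssueStore = {
  // Synchronous read: answer from cache, or kick off a fetch and
  // return an empty object so the component renders an empty state.
  getIssueById(id: number): Issue | Record<string, never> {
    const cached = cache.get(id)
    if (cached) return cached
    void fetchIssue(id) // action: fire the API request
    return {}
  },
  onChange(listener: () => void) {
    emitter.on('change', listener)
  },
}

// Action creator: fetch from the server, then "receive" the response.
async function fetchIssue(id: number) {
  const response = await fetch(`/api/issues/${id}`)
  receiveIssue(await response.json())
}

// Store's handler for receiveIssue: cache only the most recent server
// response and emit a change event; listening components re-query the
// store and render the data.
function receiveIssue(issue: Issue) {
  cache.set(issue.id, issue)
  emitter.emit('change')
}
```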
From what I know, it seems that the Meteor framework stores part of the data on the client. It's clear how to do this for a personal todo list, because it's small and you can just copy everything.
But how does it work in the case of, let's say, a Q&A site similar to this one? The collection of questions is huge; you can't possibly copy it to the client. And you need to have filtering by tags and sorting by date and popularity.
How does Meteor handle such a case? How does it partition the data? Does it make sense to use Meteor for such a use case?
Have a look at the Meteor docs, in particular the publish and subscribe section. Here's a short example:
Imagine your database contains one million posts. But your client only needs something like:
the top 10 posts by popularity
the posts your friends made in the last hour
the posts for the group you are in
In other words, some subset of the larger collection. In order to get that subset, the client starts a subscription. For example: Meteor.subscribe('popularPosts'). Then on the server, there will be a corresponding publish function like: Meteor.publish('popularPosts', function(){...}).
As the client moves around the app (changes routes), different subscriptions may be started and stopped.
That subset of documents is sent to the client and cached in memory in a MongoDB-like store called minimongo. The client can then retrieve the documents as needed in order to render the page.
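A minimal sketch of that flow, with collection and field names assumed:

```ts
import { Meteor } from 'meteor/meteor'
import { Mongo } from 'meteor/mongo'

// Shared: the collection is declared on both client and server.
const Posts = new Mongo.Collection('posts')

if (Meteor.isServer) {
  // Publish only the top 10 posts by popularity, i.e. the subset the
  // client actually needs, not the whole million-post collection.
  Meteor.publish('popularPosts', function () {
    return Posts.find({}, { sort: { popularity: -1 }, limit: 10 })
  })
}

if (Meteor.isClient) {
  // Subscribing copies that subset into minimongo; find() then reads
  // from the local cache, not the server.
  Meteor.subscribe('popularPosts')
  const topPosts = Posts.find({}, { sort: { popularity: -1 } }).fetch()
}
```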
The amount of coding that goes into the making of a DataSet is often significant. I'm not sure what the industry standard or best practice is when dealing with data requests from multiple ASP.NET pages. Should I use the cache/session to pass the DataSet from page to page, or should I fetch directly from the database for each page?
What's the most common approach here?
Here are my thoughts:
It depends on the database and the type of data that you're trying to get, as well as what may modify the data. Do you have backend processes that run concurrently with the data you're going to want? Is this data only updated by the current page, or does it update at all? How many people are going to use said page?
I personally almost always call to the database, simply because there are so many what-ifs when it comes to this kind of thing. At any time the data can change; it's never as static as people think it will be. I would personally choose correct data over performance any day.
But that's just me personally. This question is so open ended that it's impossible to take every single thing into consideration since I don't know your database structure, nor how expensive it is to retrieve it, nor what you're using it for.
Sorry I couldn't be of more help.
It depends upon your need. If the data size is very large then don't save it in Session or Cache, because both are stored in server memory. Session is user specific and will store the data for each user on the server, so avoid it. I think you should directly fetch the data each time you need it rather than saving it in Session. If the data is very small/limited then you can save it in Session (for example UserName or UserId, etc.). If you are using a GridView to show data, then use paging and fetch the data from the database on each page request.
I have a method in my BLL that interacts with the database and retrieves data based on the defined criteria.
The returned data is a collection of FAQ objects which is defined as follows:
FAQID,
FAQContent,
AnswerContent
I would like to cache the returned data to minimize the DB interaction.
Now, based on the user-selected option, I have to return one of the below:
ShowAll: all data.
ShowAnsweredOnly: faqList.Where(f => f.AnswerContent != null)
ShowUnansweredOnly: faqList.Where(f => f.AnswerContent == null)
My Question:
Should I cache all the data returned from the DB as a single item (e.g. FAQ_ALL) and filter the other faqList modes from the cache (= interacting with the DB just once and filtering the data from that cache item for the other two modes)? Or should I have 3 cache items, FAQ_ALL, FAQ_ANSWERED and FAQ_UNANSWERED (= interacting with the database for each mode [3 times]), and return the cache item for each mode?
I'd be pleased if anyone tells me about pros/cons of each approach.
Food for thought.
How many records are you caching, how big are the tables?
How much mid-tier resources can be reserved for caching?
How much data of each type exists?
How fast will filtering on the client side be?
How often does the data change?
how often is it changed by the same application instance?
how often is it changed by other applications or server side jobs?
What is your cache invalidation policy?
What happens if you return stale data?
Can you/Should you leverage active cache invalidation, like SqlDependency or LinqToCache?
If the dataset is large then filtering on the client side will be slow, and you'll need to cache two separate results (no need for a third if ALL is the union of the other two). If the data changes often then caching will frequently return stale items unless proactive cache invalidation is in place. Active cache invalidation is achievable in the mid-tier if you control all the update paths and there is only one mid-tier application instance, but it becomes really hard if either of those prerequisites is not satisfied.
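For the small-dataset case, the single-cache-item option from the question might look roughly like this sketch (the FAQ_ALL key comes from the question; the TTL and the DB call are assumptions):

```ts
interface Faq {
  faqId: number
  faqContent: string
  answerContent: string | null
}

const cache = new Map<string, { data: Faq[]; expires: number }>()
const TTL_MS = 5 * 60 * 1000 // assumed invalidation policy: 5-minute expiry

async function getFaqs(
  mode: 'All' | 'AnsweredOnly' | 'UnansweredOnly'
): Promise<Faq[]> {
  let entry = cache.get('FAQ_ALL')
  if (!entry || entry.expires < Date.now()) {
    const data = await loadFaqsFromDb() // hypothetical BLL/DB call
    entry = { data, expires: Date.now() + TTL_MS }
    cache.set('FAQ_ALL', entry) // one DB hit serves all three modes
  }
  switch (mode) {
    case 'AnsweredOnly':
      return entry.data.filter((f) => f.answerContent !== null)
    case 'UnansweredOnly':
      return entry.data.filter((f) => f.answerContent === null)
    default:
      return entry.data
  }
}

// Hypothetical DB call; in the real BLL this would query the FAQ table.
async function loadFaqsFromDb(): Promise<Faq[]> {
  return []
}
```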
It basically depends how volatile the data is, how much of it there is, and how often it's accessed.
For example, if the answered data didn't change much then you'd be safe caching it for a while; but if the unanswered data changed a lot (and more often) then your caching needs might be different. If this were the case, it's unlikely that caching everything as one dataset would be the best option.
It's not all bad though: if the discrepancy isn't too huge then you might be OK caching the lot.
The other point to think about is how the data is related. If the FAQ items toggle between answered and unanswered, then it'd make sense to cache the base data as one; otherwise the items would be split when you wanted them together.
Alternatively, work with the data in-memory and treat the database as an add-on...
What do I mean? Well, typically the user hits "save", which invokes code that saves to the DB; when the next user comes along, they invoke a call which gets the data out of the DB. In terms of design, the DB is a first-class citizen: everything has to go through it before anyone else gets a look-in. The alternative is to base the design around data held in memory (by the BLL) and then saved (perhaps asynchronously) to the DB. This removes the DB as a bottleneck but gives you a new set of problems, like what happens if the database connection goes down or the server dies with data only in memory?
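Here's a rough sketch of that in-memory-first idea, with every name hypothetical: writes return immediately, and a background loop flushes changes to the DB:

```ts
// In-memory-first sketch: writes return immediately; a background loop
// flushes dirty records to the DB. The trade-off noted above: if the
// process dies before a flush, those changes are lost.
const records = new Map<string, unknown>() // in-memory "source of truth"
const dirty = new Set<string>()            // ids changed since the last flush

function save(id: string, data: unknown) {
  records.set(id, data) // caller never waits on the DB
  dirty.add(id)
}

async function flush() {
  for (const id of dirty) {
    await writeToDb(id, records.get(id)) // hypothetical DB write
    dirty.delete(id)
  }
}

setInterval(() => { void flush() }, 1000) // persist asynchronously

// Hypothetical persistence call, e.g. an UPDATE ... WHERE id = ?
async function writeToDb(id: string, data: unknown) {}
```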
Pros and Cons
Getting all the data in one call might be faster (by making fewer calls).
Getting all the data at once if it's related makes sense.
Granularity: data that is related and has a similar "cachability" can be cached together, otherwise you might want to keep them in separate cache partitions.