When using createSlice, I can give the slice a name, e.g. 'person'. Within reducer code, store obviously refers to the 'person' part of the whole redux store (provided that I used the proper configureStore setup). So I can say store.firstName = 'Bernhard'; That's fine.
In selectors, things seem to be slightly different: store is bound to the (whole?) redux store. So I have to code 'firstNameSelector = (store) => store.person.firstName;'
I think this is quite asymmetric - in one place I have to use store.person, in the other just store - and I wonder: WHY IS THIS SO?
Any explanation highly appreciated!
Bernhard
A slice is just that: a "part" of your state that works on its own. It does not know that anything exists outside of it. You could mount it at different positions in your store, or even in other stores, and it would still behave exactly the same.
Selectors are a different thing. With a selector you look at the whole store from the outside and pick out the values you need. Within that selector you might want to access multiple slices to derive a value, so you always work against "the whole".
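A minimal sketch of that asymmetry, using the slice and field names from the question:

import { createSlice, configureStore } from '@reduxjs/toolkit'

const personSlice = createSlice({
  name: 'person',
  initialState: { firstName: '' },
  reducers: {
    // inside a reducer, `state` is only the 'person' part of the store
    firstNameChanged(state, action) {
      state.firstName = action.payload
    },
  },
})

const store = configureStore({ reducer: { person: personSlice.reducer } })

// a selector receives the whole root state, so it has to reach into .person
const firstNameSelector = (state) => state.person.firstName

store.dispatch(personSlice.actions.firstNameChanged('Bernhard'))
console.log(firstNameSelector(store.getState())) // 'Bernhard'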
Very simple:
Let’s say we use RTK-Query to retrieve the current weather.
For that, I pass to the endpoint the arg ‘Paris’ as city.
It will serve the current weather of my « game ».
Then later, in a Redux selector, I need to compute some derived state based on that current weather.
How to read the state without having to pass the cache key « Paris »?
Indeed, that knowledge of « Paris » was only necessary at the beginning of the app.
It seems that with RTK-Query we’re stuck since you have to pass the argument that was used (the cache key) to the endpoint#select method.
Am I right in saying that RTK-Query does not currently allow that kind of state reading:
« select that current (and single) store entry X whatever the argument that was needed at loading time is ».
No, since that's an edge case.
Usually, there are multiple cache entries per endpoint, and there is also no concept of a "latest entry" or something, since multiple different components can render at the same time, displaying different entries for the same endpoint - the concept of a "latest" there would come down to pretty random React rendering order.
The most common solution would be to just save "Paris" somewhere in global state so it is readily available, or to write your selector against RTKQ store internals by hand (although there might be changes to the state internals in the future).
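A sketch of the first option, assuming a weatherApi created with createApi that has a getWeather endpoint (the weatherApi/getWeather/citySlice names are all assumptions):

import { createSlice } from '@reduxjs/toolkit'
import { weatherApi } from './weatherApi' // hypothetical api slice with a getWeather endpoint

// keep the last-requested city in ordinary global state
const citySlice = createSlice({
  name: 'city',
  initialState: { current: null },
  reducers: {
    citySelected(state, action) {
      state.current = action.payload // e.g. 'Paris'
    },
  },
})

// derived-state selector that no longer needs 'Paris' passed in by the caller
const selectCurrentWeather = (state) => {
  const city = state.city.current
  if (city == null) return undefined
  // endpoint.select(arg) returns a selector for that cache entry
  // (building it inline on every call skips memoization; fine for a sketch)
  return weatherApi.endpoints.getWeather.select(city)(state).data
}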
My table is (device, type, value, timestamp), where (device, type, timestamp) makes a unique combination (a candidate for a composite key in a non-DynamoDB DBMS).
My queries can range between any of these three attributes, such as
GET (value)s from (device) with (type) having (timestamp) greater than <some-timestamp>
I'm using dynamoosejs/dynamoose. From most of my searches, I believe I'm supposed to use a combination of the three fields (as a single field; device-type-timestamp) as the id. However, the set function of Schema doesn't let me use the object's properties (such as this.device), and for certain reasons I cannot do it externally.
The closest I got was (id:uuidv4:hashKey, device:string:GlobalSecIndex, type:string:LocalSecIndex, timestamp:Date:LocalSecIndex)
and
(id:uuidv4:rangeKey, device:string:hashKey, type:string:LocalSecIndex, timestamp:Date:LocalSecIndex)
and so on..
However, while using a Query, it becomes difficult to fetch results for a particular device and type, as the id (hashKey or rangeKey) keeps going missing from the scene.
So the question. How would you do it for such kind of table?
And a point to be noted: this table is meant to gather content from IoT devices, each of which generates a record every 5 minutes on average.
I'm curious why you are choosing DynamoDB for this task. Advanced queries like this seem to be much better suited for a SQL based database as opposed to a NoSQL database. Due to the advanced nature of SQL queries, this task in my experience is a lot easier in SQL databases. So I would encourage you to think about if DynamoDB is truly the right system for what you are trying to do here.
If you determine it is, you might have to restructure your data a little bit. You could do something like having a property that is device-type and that will be the device and type values combined. Then set that as an index, and query based on that and sort by the timestamp, and filter out the results that are not greater than the value you want.
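A hedged sketch of that combined-attribute approach with the AWS SDK v2 DocumentClient (the table, index, and attribute names are all assumptions):

const AWS = require('aws-sdk')
const doc = new AWS.DynamoDB.DocumentClient()

doc.query({
  TableName: 'Readings', // assumed table name
  // assumed GSI: partition key = deviceType, sort key = timestamp
  IndexName: 'deviceType-timestamp-index',
  KeyConditionExpression: 'deviceType = :dt AND #ts > :since',
  ExpressionAttributeNames: { '#ts': 'timestamp' }, // timestamp is a reserved word
  ExpressionAttributeValues: { ':dt': 'dev1#temperature', ':since': 1600000000 },
}, function (err, data) {
  if (err) console.error(err)
  else console.log(data.Items) // only rows for that device+type, newer than :since
})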
You are correct that currently, Dynamoose does not pass in the entire object into the set function. This is something that personally I'm open to exploring. I'm a member on the GitHub project, and if you would like to submit a PR adding that feature I would be more than happy to help explore that option with you and get that into the codebase.
The other thing you might want to explore is having a DynamoDB stream, that will set that device-type property whenever it gets added to your DynamoDB table. That would abstract that logic out of DynamoDB and your application. I'm not sure if it's necessary for what you are doing to decouple it to that level, but it might be something you want to explore.
Finally, depending on your setup, you could figure out which attribute will be more unique, device or type, and set up an index on that property. Then just query based on that, and filter out the results of the other property that you don't want. I'm not sure if that is what you are looking for; it will of course work, but I'm not sure how many items you will have in your table, and there get to be questions about scalability at a certain level. One way to solve some of those scalability questions might be to set the TTL of your items, if you know that the timestamp you are querying for is constant or predictable ahead of time.
Overall there are a lot of ways to achieve what you are looking to do. Without more detail about how many items, what exactly those properties will be doing, the amount of scalability you require, which of those properties will be most unique, etc. it's hard to give a good solution. I would highly encourage you to think about if NoSQL is truly the best way to go. That query you are looking to do seems a LOT more like a SQL query. Not saying it's impossible in DynamoDB, but it will require some thought about how you want to structure your data model, and such.
Considering the opinion of @charlie-fish, I decided to jump into Dynamoose and modify the code to pass the model into the set function of the attribute. However, I discovered that the model is already being passed to the default parameter of the attribute. So I changed my Schema to the following:
id:hashKey;default: function(model){ return model.device + "" + model.type; }
timestamp:rangeKey
For anyone landing here on this answer, please note that the default & set functions can access attribute options & the schema instance using this. However, both of those functions should be regular functions rather than arrow functions.
Keeping this here as an answer, but I won't accept it as an answer to my question for some time, as I want to wait for someone else to come up with a better approach.
I also want to make sure that if a value is passed for the id field, it shouldn't be set. For this I can use set to ignore the actual incoming value, though I don't yet know how.
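For reference, a rough sketch of what the schema above could look like in full, written against the dynamoose v1-era API (the model name, attribute shorthand, and the '-' separator are my assumptions, and this does not solve the ignore-incoming-value part):

const dynamoose = require('dynamoose')

const ReadingSchema = new dynamoose.Schema({
  id: {
    type: String,
    hashKey: true,
    // a regular function (not an arrow function), which receives the model
    // instance as its argument, per the discovery above
    default: function (model) {
      return model.device + '-' + model.type // separator is an assumption
    },
  },
  timestamp: { type: Date, rangeKey: true },
  device: String,
  type: String,
  value: Number,
})

const Reading = dynamoose.model('Reading', ReadingSchema)

// e.g. all values for one device+type newer than some timestamp:
// Reading.query('id').eq('dev1-temperature').where('timestamp').gt(someTs).exec()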
As we know, when saving data in a redux store, it's supposed to be transformed into a normalized state. So embedded objects should be replaced by their ids and saved within a dedicated collection in the store.
I am wondering if that should also be done when the relationship is a composition, that is, when the embedded data isn't of any use outside of its parent object.
In my case the embedded objects are registrations, and the parent object is a (real-life) event. Normalizing this data structure feels to me like a lot of boilerplate without any benefit.
State normalization is more than just how you access the data by traversing the object tree. It also has to do with how you observe the data.
Part of the reason for normalization is to avoid unnecessary change notifications. Objects are treated as immutable: when one changes, a new object is created, so a quick reference check can indicate whether something in the object changed. If you nest objects and a child object changes, then you must also change the parent. If some code is observing the parent, it will get change notifications every time a child changes, even though it might not care. So depending on your scenario you may end up with a bunch of unnecessary change notifications.
This is also partly why you see lists of entities broken out into an array of identifiers and a map of objects. In relation to change detection, this allows you to observe the list (whether items have been added or removed) without caring about changes to the entities themselves.
So it depends on your usage. Just be aware of the cost of observing and the impact your state shape has on that.
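For illustration, a sketch of the normalized shape being described, using the event/registration example from the question (all ids and fields invented):

const state = {
  events: {
    ids: ['e1'],
    entities: {
      e1: { id: 'e1', title: 'Conference', registrationIds: ['r1', 'r2'] },
    },
  },
  registrations: {
    ids: ['r1', 'r2'],
    entities: {
      r1: { id: 'r1', attendee: 'Ada' },
      r2: { id: 'r2', attendee: 'Grace' },
    },
  },
}

// observing registrations.ids reveals additions/removals without triggering
// on edits to a single registration's fields (those only replace one entity)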
I don't agree that data is "supposed to be [normalized]". Normalizing is a useful structure for accessing the data, but you're the architect to make that decision.
In many cases, the data stored will be an application singleton and a descriptive key is more useful than forcing some kind of id.
In your case I wouldn't bother, unless there is excessive data duplication, especially because you would then have to denormalize for the object to function properly.
As far as I can tell, reducers change the state of the tree, mapStateToProps transform the state tree for the UI. However, the boundary is not clear. For instance, consider the example from the "Computing Derived Data" docs (http://redux.js.org/docs/recipes/ComputingDerivedData.html). See below.
My first instinct is to put the calculation of the visible todos in the reducer (and not mapStateToProps), that is, whenever a todo or the visibility filter changes, update the list of completed or active todos. This has several advantages:
1) No need for Reselect.
2) Having all the logic in one place helps reduce the learning curve (when onboarding) and probably also makes it easier to test (since the integration tests for mapStateToProps become simpler, if not unnecessary).
On the other hand, 2) is subjective. So guidance on mapStateToProps would be helpful.
const getVisibleTodos = (todos, filter) => {
  switch (filter) {
    case 'SHOW_ALL':
      return todos
    case 'SHOW_COMPLETED':
      return todos.filter(t => t.completed)
    case 'SHOW_ACTIVE':
      return todos.filter(t => !t.completed)
    default:
      // avoid returning undefined for an unknown filter
      return todos
  }
}
const mapStateToProps = (state) => ({
todos: getVisibleTodos(state.todos, state.visibilityFilter)
})
const mapDispatchToProps = (dispatch) => ({
onTodoClick: (id) => dispatch(toggleTodo(id))
})
const VisibleTodoList = connect(
mapStateToProps,
mapDispatchToProps
)(TodoList)
Update in response to @DDS:
To update multiple interrelated states based on one action means that these states can become out of sync... This could mean the visibleTodoList acquires items that don't exist in the original.
If by multiple interrelated states you mean visibilityFilter and todos, then as per the redux docs, one idiomatic solution is to refactor the state tree so that they are one state. There are other approaches mentioned in the docs as well. Certainly, as you allude, you now have the burden of ensuring that the code to compute the derived state (visible todos) is always called. Once the code base gets large, a custom combineReducers (another idiomatic solution) that does additional transforms before returning the state makes sense, as sketched below.
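A rough sketch of that custom-combineReducers idea, reusing getVisibleTodos from the snippet above (the ./reducers import is hypothetical, and whether to store derived state like this is exactly what's being debated here):

import { combineReducers } from 'redux'
import { todos, visibilityFilter } from './reducers' // hypothetical slice reducers

const plainReducer = combineReducers({ todos, visibilityFilter })

const rootReducer = (state = {}, action) => {
  // strip the derived key so combineReducers doesn't warn about it
  const { visibleTodos: _ignored, ...plain } = state
  const next = plainReducer(plain, action)
  // recompute the derived list in one place, on every action
  return {
    ...next,
    visibleTodos: getVisibleTodos(next.todos, next.visibilityFilter),
  }
}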
Note that the code lives in separate files and the execution order of reducers is not guaranteed. Also, reducers don't normally have access to sibling state, meaning they cannot derive data from it.
See my comments above.
The example above may be contrived but the point is that to make the data stable, it's best to have a single source every component can rely on, and this requires the data to be normalized.
Yea, it all comes down to normalized vs denormalized state. I'm not yet convinced that normalized state is always the way to go...for the same reason NoSQL databases are sometimes the way to go.
Reasoning about more complex state [normalized and denormalized state] becomes difficult very quickly. This is why it is better to not put derived data in state.
Ah I see your point. Six months from now I may not see that visibleTodos is derived state (and so should be treated as readonly). Unexpected things will result.
NOTE: These are my two cents based on my personal experience, and not necessarily in line with best practices.
Redux state
This should be normalised, primarily because it makes writes (inserts/updates/deletes) easy to reason about.
Normalising redux state would mean that you should not be storing derived data in the redux state.
Derived data:
My personal experience of using react/redux (before the nice docs on http://redux.js.org/docs/recipes/ComputingDerivedData.html emerged) made me try to follow something that you (OP) are striving for: simplifying the places where code is written.
After accepting the normalization principle, the natural place for me to start writing derivation or 'state view' logic was in the react component's render function (which now sounds a little ugly). As the code evolves, the render function becomes neater by pulling out derivation functions and keeping them outside the react component.
This creates an easy mental model for people working with the codebase:
redux-state: Normalized store
mapStateToProps: Just maps state to prop values in a dumb way
ReactComponent: Contains all the logic to 'view' the right pieces of the state and then render them. Further modularisation is done as the author deems necessary.
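A small sketch of that split, assuming the todos example from the question (getVisibleTodos as defined there; TodoList and the component name are placeholders):

// mapStateToProps stays dumb: it only picks raw values off the state
const mapStateToProps = (state) => ({
  todos: state.todos,
  visibilityFilter: state.visibilityFilter,
})

// the component (not mapStateToProps) decides how to 'view' the state
const VisibleTodos = ({ todos, visibilityFilter }) => (
  <TodoList todos={getVisibleTodos(todos, visibilityFilter)} />
)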
The reason Reselect is used over putting this in a reducer and in the state is analogous to why React is used instead of jQuery.
To update multiple interrelated states based on one action means that these states can become out of sync. Maybe one reducer interprets ADD_ITEM to mean "upsert" when another, coded in a different file months later by someone else, takes it to mean "insert with duplicates allowed". This could mean the visibleTodoList acquires items that don't exist in the original.
Note that the code lives in separate files and the execution order of reducers is not guaranteed. Also, reducers don't normally have access to sibling state meaning they cannot derive data from it.
The example above may be contrived, but the point is that to make the data stable, it's best to have a single source every component can rely on, and this requires the data to be normalized. Storing derived data means storing the same data in multiple places and forms that are interdependent.
Having a single source and having data flow unidirectionally prevents disagreement on which data is authoritative.
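For contrast, a minimal sketch of the Reselect approach: the derived list is computed (and memoized) outside the store, so nothing derived ever has to be written back into state:

import { createSelector } from 'reselect'

const selectTodos = (state) => state.todos
const selectVisibilityFilter = (state) => state.visibilityFilter

// recomputed only when todos or the filter actually change
const selectVisibleTodos = createSelector(
  [selectTodos, selectVisibilityFilter],
  (todos, filter) => {
    switch (filter) {
      case 'SHOW_COMPLETED':
        return todos.filter(t => t.completed)
      case 'SHOW_ACTIVE':
        return todos.filter(t => !t.completed)
      default:
        return todos
    }
  }
)

const mapStateToProps = (state) => ({ todos: selectVisibleTodos(state) })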
The state should be thought of as clean-room data. It has a number of properties that make it dependable and easy to reason about:
All state is immutable, which means it can be stored anywhere and treated like a value without fear it will later be modified by external code.
All state is serializable which guarantees that it contains no code or cycles and can be shipped and stored easily.
It is normalized so it contains only one copy of each datum. There can be no desynchronisation or disagreement between different parts of the state. This makes the state harder to make internally inconsistent.
Reasoning about more complex state becomes difficult very quickly. This is why it is better to not put derived data in state.
Note that what may seem like a great and simple idea when you code it may trip you up years later, when that little side project has blossomed into a highly sought-after essential tool. It's not hard to hack up working code on the first round, so doing things the Redux Way is very much a future-proofing strategy.
In doing a bit more programming with Firebase today, I found myself wishing for a couple of features:
1) Merge set:
Say I have a firebase ref that has the value {a:1,b:2,c:3}.
If I do something like ref.set({a:-1,b:-2}) the new value will (unsurprisingly) be {a:-1,b:-2}.
Instead, imagine ref.mergeSet({a:-1,b:-2}) which would have a result in the value of the ref being {a:-1,b:-2,c:3}.
Now, I realize that I could do something like ref.child("a").set(-1) and ref.child("b").set(-2) to achieve this result, but in at least some cases, I'd prefer to get only a single call to my .on() handler.
This segues into my second idea.
2) Batch set:
In my application I'd like a way to force an arbitrary number of calls to .set to only result in one call to .on in other clients. Something like:
ref.startBatch()
ref.child("a").set(1)
ref.child("b").set(2)
....
ref.endBatch()
In batch mode, .set wouldn't result in a call to .on, instead, the minimal number of calls to .on would all result from calling .endBatch.
I readily admit that these ideas are pretty nascent, and I wouldn't be surprised if they conflict with existing architectural features of Firebase, but I thought I'd share them anyway. I find that I'm having to spend more time ensuring consistency across clients when using Firebase than I expected to.
Thanks again, and keep up the great work.
UPDATE: We've added a new update() method to the Firebase web client and PATCH support to the REST API, which allow you to atomically modify multiple siblings at a particular location while leaving the other siblings unmodified. This is what you described as "mergeSet", and it can be used as follows:
ref.update({a: -1, b: -2});
which will update 'a' and 'b', but leave 'c' unmodified.
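A small sketch of the difference, using the web client API of that era (the URL is a placeholder):

var ref = new Firebase('https://example.firebaseio.com/demo') // placeholder URL
ref.set({a: 1, b: 2, c: 3})

ref.on('value', function (snap) {
  console.log(snap.val())
})

// set() would replace the whole object; update() only touches the named
// children, and value listeners fire once for the whole write
ref.update({a: -1, b: -2}) // -> {a: -1, b: -2, c: 3}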
OLD ANSWER
Thanks for the detailed feature request! We'd love to hear more about your use case and how these primitives would help you. If you're willing to share more details, email support@firebase.com and we can dig into your scenario.
To answer your question though, the primary reason we don't have these features is related to our architecture and the performance / consistency guarantees that we're trying to maintain. Not to go too deep, but if you imagine that your Firebase data is spread across many servers, it's easier for us to have stronger guarantees (atomicity, ordering, etc.) when modifying data that's close together in the tree than when modifying data that's far apart. So by limiting these guarantees to data that you can replace with a single set() call, we push you in a direction that will perform well with the Firebase architecture.
In some cases, you may be able to get roughly what you want by just reorganizing your tree. For instance, if you know you always want to set 'a' and 'b' together, you could put them under a common 'ab' parent and do ref.child('ab').set({a:-1, b:-2}), which won't affect the 'c' child.
Like I said, we'd love to hear more about your scenario. We're in beta so that we can learn from developers about how they're using the API and where it's falling short! support@firebase.com :-)