Suppose I have the following object in DynamoDB. It is a map of maps, where each nested map represents an issue and each key/value pair is issueId: issueDetails. Now suppose there are hundreds of issues (a set that grows and shrinks over time), and I only want to grab the status and criticality of each issue. How do I go about doing so?
{
  "issues": {
    "ISU000000000": {
      "impactOn": "Balance Sheets",
      "issueName": "TestIssue",
      "criticality": "High",
      "status": "open"
    },
    "ISU000000001": {
      "impactOn": "Balance Sheets",
      "issueName": "TestIssue",
      "criticality": "High",
      "status": "open"
    },
    "ISU000000002": {
      "impactOn": "Balance Sheets",
      "issueName": "TestIssue",
      "criticality": "High",
      "status": "open"
    }
  }
}
I understand that you can write a projection expression on DynamoDB with expression attribute names, like this:
ProjectionExpression = "issues.#IssueID.criticality, issues.#IssueID.#status"
ExpressionAttributeNames = {'#IssueID': 'ISU000000000', '#status': 'status'}
(status is a DynamoDB reserved word, so it needs a placeholder as well.)
The problem with this approach is that I've declared the issue ID in the ExpressionAttributeNames parameter, which means I'd have to declare every issue ID beforehand. Is there a way to write the expression so that I don't have to declare the issue IDs? The end goal is to grab the above-mentioned information from an object that dynamically changes in size (issues will be added and deleted).
Note: I'm using the Boto3 SDK.
Does my question make sense? Please let me know your thoughts.
My gut feeling tells me this is a suboptimal design. What worries me is that you say the map can grow indefinitely, but DynamoDB limits a single item to 400 KB. I would try to flatten the items out and avoid deep nesting if you have frequently mutating data.
Are you able to redesign so that each issue is its own item? There are other ways to achieve multi-tenancy if that is your goal.
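To make that concrete, here is a minimal sketch of the flattened design, shown with the AWS SDK for JavaScript's DocumentClient (the parameter shape carries over directly to Boto3). The table name Issues and the key names reportId/issueId are hypothetical:

const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

// Hypothetical flattened schema: partition key "reportId", sort key "issueId".
// Each issue is its own item, so the collection can grow without ever
// approaching the 400 KB single-item limit.
const params = {
  TableName: "Issues",
  KeyConditionExpression: "reportId = :r",
  ProjectionExpression: "issueId, criticality, #status", // "status" is reserved
  ExpressionAttributeNames: { "#status": "status" },
  ExpressionAttributeValues: { ":r": "REPORT-001" },
};

// A single Query now returns just the two attributes for every issue,
// no matter how many issue IDs exist or how often they change.
docClient.query(params, (err, data) => {
  if (err) return console.error(err);
  console.log(data.Items); // [{ issueId, criticality, status }, ...]
});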
I need to update a field in a nested object with a dynamic key.
The path could look like this: level1.level2.DYNAMIC_KEY : updatedValue
The update method deletes everything else on level1 instead of only updating the field in the nested object; update() acts more like set(). What am I doing wrong?
I tried the following already:
I read the documentation https://firebase.google.com/docs/firestore/manage-data/add-data#update-data
but that way it is a) static and b) still deletes the other fields.
Update fields in nested objects
If your document contains nested objects, you can use "dot notation" to reference nested fields within the document when you call update()
This would be static and result in
update({
  'level1.level2.STATIC_KEY': 'updatedValue'
});
Then I found this answer https://stackoverflow.com/a/47296152/5552695 which helped me make the update path dynamic.
The desired solution after this could look like:
const field = {};
field[`level1.level2.${DYNAMIC_KEY}`] = updatedValue;
update(field);
But still: it deletes the other fields on this path.
UPDATE:
The structure of my doc is as follows:
Inside this structure I want to update only complexArray > 0 > innerObject > age.
Writing the above path into the update() method deletes everything else on the complexArray level.
A simple update on first-level fields works fine and leaves the other first-level fields untouched.
Is it possible that Firestore functions like update() can only act on the lowest field level of a document, and that as soon as I put complex objects into a document it's not possible to select such inner fields?
I know I could extract those "complex" objects into separate collections and documents and reference them from my current lowest document level. I think this would be a more accurate way to stick to the Firestore principles. But on the application side it is easier to work with a complex object than to always dig deeper into the Firestore collection/document structure.
So my current solution is to send the whole complex object into the update() method, even though only one field changed on the application side.
Have you tried using the { merge: true } option in your request?
db
  .collection("myCollection")
  .doc("myDoc")
  .set(
    {
      level1: { level2: { myField: "myValue" } }
    },
    { merge: true }
  )
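To combine this with a dynamic key, a computed property name works here as well. A minimal sketch, with the collection, document, and key names made up:

// Merge-set only the one nested field, leaving its siblings intact.
const DYNAMIC_KEY = "someKey"; // whatever the application computed at runtime

db.collection("myCollection")
  .doc("myDoc")
  .set(
    { level1: { level2: { [DYNAMIC_KEY]: "updatedValue" } } },
    { merge: true }
  );

Note that merging works for nested maps; it still won't let you address a single array element such as complexArray[0] individually.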
In the Firebase Realtime Database, it's a pretty common transactional pattern that you have
"table" A - think of it as "pending"
"table" B - think of it as "results"
Some event happens, and you need to "move" an item from A to B.
To be clear, this would most likely be a Cloud Function doing the move.
Obviously, this operation has to be atomic, and you have to be guarded against race conditions and so on.
So, for item 123456, you have to do three things
read A/123456/
delete A/123456/
write the value to B/123456
all atomically, with a lock.
In short, what is the Firebase way to achieve this?
There's already the awesome ref.transaction system, but I don't think it's relevant here.
Perhaps using triggers in some unorthodox manner?
IDK
Just for anyone googling here: it's worth noting that the mind-boggling new Firestore (it's hard to imagine anything more mind-boggling than traditional Firebase, but there you have it...) has built-in .......
This question is about good old traditional Firebase Realtime Database.
Gustavo's answer allows the update to happen with a single API call, which either completely succeeds or fails. And since it doesn't have to use a transaction, it has far fewer contention issues. It just loads the value from the key it wants to move, and then writes a single update.
The problem is that somebody might have modified the data in the meantime. So you need to use security rules to catch that situation and reject it. So the recipe becomes:
read the value of the source node
write the value to its new location while deleting the old location in a single update() call
the security rules validate the operation, either accepting or rejecting it
if rejected, the client retries from #1
Doing so essentially reimplements Firebase Database transactions with client-side code and (some admittedly tricky) security rules.
To be able to do this, the update becomes a bit more tricky. Say that we have this structure:
"key1": "value1",
"key2": "value2"
If we want to move value1 from key1 to key3, Gustavo's approach would send this JSON:
ref.update({
  "key1": null,
  "key3": "value1"
})
We can easily validate this operation with these rules:
".validate": "
!data.child("key3").exists() &&
!newData.child("key1").exists() &&
newData.child("key3").val() === data.child("key1").val()
"
In words:
There is currently no value in key3.
There is no value in key1 after the update.
The new value of key3 is the current value of key1.
This works great, but unfortunately means that we're hardcoding key1 and key3 in our rules. To prevent hardcoding them, we can add the keys to our update statement:
ref.update({
  _fromKey: "key1",
  _toKey: "key3",
  key1: null,
  key3: "value1"
})
The difference is that we added two keys with known names to indicate the source and destination of the move. With this structure we have all the information we need, and we can validate the move with:
".validate": "
!data.child(newData.child('_toKey').val()).exists() &&
!newData.child(newData.child('_fromKey').val()).exists() &&
newData.child(newData.child('_toKey').val()).val() === data.child(newData.child('_fromKey').val()).val()
"
It's a bit longer to read, but each line still means the same as before.
And in the client code we'd do:
function move(from, to) {
  ref.child(from).once("value").then(function(snapshot) {
    var value = snapshot.val();
    var updates = {
      _fromKey: from,
      _toKey: to
    };
    updates[from] = null;
    updates[to] = value;
    ref.update(updates).catch(function() {
      // the update failed, wait half a second and try again
      setTimeout(function() {
        move(from, to);
      }, 500);
    });
  });
}

move("key1", "key3");
If you feel like playing around with the code for these rules, have a look at: https://jsbin.com/munosih/edit?js,console
There are no "tables" in Realtime Database, so I'll use the term "location" instead to refer to a path that contains some child nodes.
The Realtime Database provides no way to run a transaction across two different locations. When you perform a transaction, you have to choose a single location, and you may only make changes under that single location.
You might think that you could just run the transaction at the root of the database. This is possible, but such a transaction may fail in the face of concurrent non-transactional writes anywhere in the database. It is a requirement that there be no non-transactional writes anywhere under the location where a transaction takes place. In other words, if you want to transact at a location, all clients must transact there, and no client may write there without a transaction.
This rule is certainly going to be problematic if you transact at the root of your database, where clients are probably writing data all over the place without transactions. So if you want to perform an atomic "move", you'll either have to make all your clients use transactions all the time at the common root location for the move, or accept that you can't do this truly atomically.
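For completeness, here is a minimal sketch of what transacting on the lowest common ancestor of the two locations could look like, with all the caveats above still applying. The parent path queue, the child names A and B, and the item ID are placeholders:

// Every other writer to this subtree must also use transactions,
// or this transaction can keep failing and retrying.
var parentRef = firebase.database().ref("queue"); // hypothetical common parent of A and B

parentRef.transaction(function(current) {
  // current may be null on the first local run; the callback re-runs
  // against the server's actual data before the result is committed.
  if (current && current.A && current.A["123456"] !== undefined) {
    current.B = current.B || {};
    current.B["123456"] = current.A["123456"]; // write to the new location
    current.A["123456"] = null;                // writing null deletes the old one
  }
  return current; // return the (possibly unchanged) state to commit it
});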
Firebase works with dictionaries, a.k.a. key/value pairs, and to change data in more than one "table" in the same operation you can take the base reference and give it a dictionary containing "all the instructions". For instance, in Swift:
let reference = Database.database().reference() // base reference
let tableADict: [String: Any] = ["TableA/SomeID": NSNull()] // NSNull() deletes this value from table A
let tableBDict: [String: Any] = ["TableB/SomeID": true] // value appended to table B; instead of true you can put another dictionary containing your values
let finalDict = tableADict.merging(tableBDict) { _, new in new } // both instructions in a single dictionary
You should then merge both dictionaries into one, as done above with finalDict (for other ways to combine dictionaries, see: How do you add a Dictionary of items into another Dictionary).
Then you can update those values, and both tables will be updated at once, deleting from A and "moving" to B:
reference.updateChildValues(finalDict) // updates everything at the same time with a single call, without having to wait for one callback to update another table
I'm working on a game. Originally, the user was in a single dungeon, with properties:
// state
{
  health: 95,
  creatures: [ {}, {} ],
  bigBoss: {},
  lightIsOn: true,
  goldReward: 54,
  // ... you get the idea
}
Now there are many kingdoms, and many dungeons, and we may want to fetch this data asynchronously.
Is it better to represent that deeply nested structure in the user's state, effectively caching all the other possible dungeons as they are loaded, so that every time we want to update a property (e.g. the action TURN_ON_LIGHT) we need to find exactly which dungeon we're talking about? Or is it better to copy the current dungeon's properties to the top level each time we move to a new dungeon?
The state below shows the nesting. Most of the information is irrelevant to my presentational objects and actions; they only care about the one dungeon the user is currently in.
// state with nesting
{
  health: 95,
  kingdom: 0,
  dungeon: 1,
  kingdoms: [
    {
      dungeons: [
        {
          creatures: [ {}, {} ],
          bigBoss: {},
          lightIsOn: true,
          goldReward: 54
        },
        {
          creatures: [ {}, {}, {} ],
          bigBoss: {},
          lightIsOn: false,
          goldReward: 79
        },
        {
          // ...
        }
      ]
    },
    {
      // ...
    }
  ]
}
One of the things holding me back is that all the clean reducers, which previously could just take an action like TURN_ON_LIGHT and update the top-level property lightIsOn (allowing for very straightforward reducer composition), now have to reach into the state and update the correct property depending on the kingdom and dungeon the user is currently in. Is there a nice way of composing the reducers that would keep this clean?
The recommended approach for dealing with nested or relational data in Redux is to normalize it, similar to how you would structure a database. Use objects with IDs as keys and the items as values to allow direct lookup by IDs, use arrays of IDs to indicate ordering, and any other part of your state that needs to reference an item should just store the ID, not the item itself. This keeps your state flatter and makes it more straightforward to update a given item.
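As a rough sketch, a normalized version of the dungeon state above might look like the following; the byId/allIds layout is one common convention rather than a requirement, and all IDs here are invented:

// Entities keyed by ID; the "current dungeon" is stored as an ID reference only.
const state = {
  health: 95,
  currentDungeonId: "d2",
  kingdoms: {
    byId: { k1: { dungeonIds: ["d1", "d2"] } },
    allIds: ["k1"]
  },
  dungeons: {
    byId: {
      d1: { lightIsOn: true,  goldReward: 54, creatureIds: ["c1", "c2"] },
      d2: { lightIsOn: false, goldReward: 79, creatureIds: ["c3"] }
    },
    allIds: ["d1", "d2"]
  }
};

// A reducer slice can now update one dungeon directly by its ID:
function dungeonsById(state = {}, action) {
  switch (action.type) {
    case "TURN_ON_LIGHT":
      return {
        ...state,
        [action.dungeonId]: { ...state[action.dungeonId], lightIsOn: true }
      };
    default:
      return state;
  }
}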
As part of this, you can use multiple levels of connected components in your UI. One typical technique with Redux is to have a connected parent component that retrieves the IDs of multiple items, and renders <SomeConnectedChild itemID={itemID} /> for each ID. That connected child would then look up its own data using that ID, and pass the data to any presentational children below it. Actions dispatched from that subtree would reference the item's ID, and the reducers would be able to update the correct normalized item entry based on that.
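A compressed sketch of that parent/child pattern, with all component and prop names invented and the normalized shape from the previous sketch assumed:

import React from "react";
import { connect } from "react-redux";

// Child: looks up its own data using the ID it was given.
const Dungeon = connect((state, props) => ({
  dungeon: state.dungeons.byId[props.dungeonId]
}))(({ dungeon }) => <div>{dungeon.lightIsOn ? "lit" : "dark"}</div>);

// Parent: only retrieves the list of IDs and renders one connected child per ID.
const DungeonList = connect(state => ({ ids: state.dungeons.allIds }))(
  ({ ids }) => (
    <div>
      {ids.map(id => <Dungeon key={id} dungeonId={id} />)}
    </div>
  )
);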
The Redux FAQ has further discussion on this topic: http://redux.js.org/docs/FAQ.html#organizing-state-nested-data. Some of the articles on Redux performance at https://github.com/markerikson/react-redux-links/blob/master/react-performance.md#redux-performance describe the "pass an ID" approach, and https://medium.com/@adamrackis/querying-a-redux-store-37db8c7f3b0f is a good reference as well. Finally, I just gave an example of what a normalized state might look like over at https://github.com/reactjs/redux/issues/1824#issuecomment-228609501.
edit:
As a follow-up, I recently added a new section to the Redux docs, on the topic of "Structuring Reducers". In particular, this section includes chapters on "Normalizing State Shape" and "Updating Normalized Data".
I'm interested in a one-way-many association. To explain:
// Dog.js
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    favorateFoods: {
      collection: 'food',
      dominant: true
    }
  }
};
and
// Food.js
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    cost: {
      type: 'integer'
    }
  }
};
In other words, I want a Dog to be associated with many Food entries; as for a Food, I don't care which Dogs are associated with it.
If I actually implement the above, believe it or not, it works. However, the table for the association is named in a very confusing manner, even more confusing than normal ;)
dog_favoritefoods__food_favoritefoods_food, with columns id, dog_favoritefoods, and food_favoritefoods_food.
The REST blueprints work with the Dog model just fine, and I don't see anything that "looks bad" except for the funky table name.
So the question is: is it supposed to work this way, and does anyone see something that might potentially go haywire?
I think you should be OK.
However, there does not really seem to be any reason not to complete the association as a full many-to-many. Everything is already being created for that single collection: the join table and its attributes are already there. The only thing missing from this equation is the reference back on Food.
I could understand if putting the association on Food were to create another table or another weird join, but that has already been done. There really is no overhead to creating the other association.
So in theory you might as well create it, as sketched below, thus avoiding any potential conflicts, unless you have a really compelling reason not to.
Edit: based on the comments below, we should note that one could see overhead at lift time from the blueprints and dynamic finders that get created.
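For reference, completing the association on the Food side could look like this sketch; the dogs attribute name is my own choice, and via points back at the question's favorateFoods collection:

// Food.js - sketch of the completed two-way many-to-many
module.exports = {
  attributes: {
    name: {
      type: 'string'
    },
    cost: {
      type: 'integer'
    },
    // reference back to Dog; `via` names the attribute on the other side
    dogs: {
      collection: 'dog',
      via: 'favorateFoods'
    }
  }
};

The dominant: true already declared on the Dog side simply tells Waterline which side's connection should host the join table.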
When first building my app, I had manually created data and it looked similar to this:
{
  "data": {
    "0": {
      "author": "gracehop",
      "title": "Announcing COBOL, a New Programming Language"
    },
    "1": {
      "author": "alanisawesome",
      "title": "The Turing Machine"
    }
  }
}
I could retrieve the data using core-ajax and iterate over it in a custom component without problems, like this:
<template is="auto-binding">
<core-ajax id="ds" auto url="https://mysite.firebaseio.com/data.json" response="{{data}}"></core-ajax>
<my-items items="{{data}}"></my-items>
</template>
However, now I'm attempting to create new data in my app using push(). The problem is that the new data looks like this:
{
  "data": {
    "-JRHTHaIs-jNPLXOQivY": {
      "author": "gracehop",
      "title": "Announcing COBOL, a New Programming Language"
    },
    "-JRHTHaKuITFIhnj02kE": {
      "author": "alanisawesome",
      "title": "The Turing Machine"
    }
  }
}
This lines up with their documentation, which states:
Your first instinct might be to use set() to store children with auto-incrementing integer indexes ... Firebase provides a push() function that generates a unique ID every time a new child is added to the specified Firebase reference. By using unique child names for each new element in the list, several clients can add children to the same location at the same time without worrying about write conflicts. The unique ID generated by push() is based on a timestamp, so list items will automatically be ordered chronologically.
After pushing an item or two, I no longer see any items in my list. If I delete everything created using push(), the other items show up again.
Instead of the core-ajax element, you should use the firebase-element. You can connect it to your data by assigning the location attribute your unique Firebase URL, then access the data through the data attribute. Likewise, if you want to loop through your data, you will need to use the keys attribute, since Polymer currently only iterates over arrays.
Your firebase element would look something like this:
<firebase-element id="base" location="https://YOUR.firebaseio.com/"
data="{{data}}" keys="{{keys}}"></firebase-element>
You would then access it like so:
<template repeat="key in keys">
<p>This is your uniqueId generated keys: {{keys}}</p>
<p>This is the data in the keys: {{data[key]['author']}}</p>
</template>
Here is the ref: http://polymer.github.io/firebase-element/components/firebase-element/
The items are not missing or deleted. Indeed, if you check your account dashboard, you'll see both the numeric indices and the push IDs. What happens is related to Firebase's handling of array-like data. (You'll want to give that a serious read to understand this.)
Essentially, since you originally used sequential numeric IDs, the data was treated as an array and returned as an array. But when you add a string as a key, Firebase decides the data is now a hash of key/value pairs and treats it as a plain JSON object (rightly so).
I don't know much (anything) about Polymer, but I'm guessing its repeat mechanism accepts an array and does not iterate object keys. Thus, you'll need to iterate that data as an object and not as an array.
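A minimal sketch of that conversion in plain JavaScript; the field names follow the question's data, and the helper name toArray is made up:

// Turn Firebase's key/value object into an array a template can iterate.
function toArray(data) {
  return Object.keys(data || {}).map(function(key) {
    var item = data[key];
    return { key: key, author: item.author, title: item.title };
  });
}

var items = toArray({
  "-JRHTHaIs-jNPLXOQivY": { author: "gracehop", title: "Announcing COBOL, a New Programming Language" },
  "-JRHTHaKuITFIhnj02kE": { author: "alanisawesome", title: "The Turing Machine" }
});
// items: [{ key: "-JRHTHaIs-jNPLXOQivY", author: "gracehop", ... }, ...]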