Firebase insert with generated key

I have a node in Firebase, and I am trying to let the user delete an item from a list. But the user can give up on that decision, and when they do, I want to insert the deleted item into the database again. I want to insert it with the old Firebase-generated key, because I am using Firebase push keys. Is that bad practice? How does Firebase generate these keys? Does it check every key in the database and generate a new one? Is there any possibility that a key marked as removed is generated later for another item? Sorry for the language; it has been hard to express.
EDIT: I want to use the old key because I am retrieving the data with orderByKey, and I don't want to lose the ordering.

How does Firebase generate these keys? Does it check every key in the database and generate a new one?
Whenever you call push() on a database reference, a new child node is generated with a unique key that begins with a timestamp. These keys look like -KiGh_31GA20KabpZBfa.
Because of the timestamp (plus the random data that follows it), you can be sure that the generated key will be unique, without having to check the other keys in your database.
Is there any possibility that a key marked as removed is generated later for another item?
No, it is not possible for two keys to collide, regardless of whether one has been removed or not.
But I want to insert with the old Firebase-generated key, because I am using Firebase push keys. Is that bad practice?
Unfortunately, you can't generate the same key twice just by using push. So it is not possible to delete a node with a given key and then use push to insert it again at the same path with the same key, because push would generate a different, unique key.
Instead, if ordering by key is that important to you and there's a possibility that a deleted node can be reinserted, I would recommend one of the following:
Either save the key on the client side when the item is deleted from the database, and use it when you need to reinsert.
Or have a "deleted-keys" path in your database and save the deleted keys there (see the sketch below). Of course, with this approach you'd need to store additional information to identify the data that each key corresponds to.
It all really depends on your use case.
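For example, here is a minimal sketch of the "deleted-keys" idea using the Firebase Admin SDK for Python; the /items and /deleted-keys paths, the credentials file, and the stored structure are all assumptions:

import firebase_admin
from firebase_admin import credentials, db

# Assumed setup; substitute your own service account and database URL.
firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"),
    {"databaseURL": "https://your-project.firebaseio.com"},
)

items = db.reference("items")
deleted = db.reference("deleted-keys")

def delete_item(key):
    # Stash the key and its data so the delete can be undone later.
    deleted.child(key).set(items.child(key).get())
    items.child(key).delete()

def undo_delete(key):
    # Reinsert under the original push key (instead of calling push(),
    # which would mint a new key), so orderByKey ordering is preserved.
    items.child(key).set(deleted.child(key).get())
    deleted.child(key).delete()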

Calling push() will generate a key for you.
If instead you use child(), you can determine the key/path yourself.
ref.child("yourvalue").setValue("setting custom key when pushing new data to firebase database");
https://firebase.googleblog.com/2015/02/the-2120-ways-to-ensure-unique_68.html

Related

How to create a unique key for a column in a Cosmos DB collection?

I read this article, but it only covers unique keys per partition: https://learn.microsoft.com/en-us/azure/cosmos-db/unique-keys.
The link above states that unique keys cannot be created for existing collections in a container.
Can someone please suggest a way to create a unique key for existing collections in a container?
A unique key logically partitions the values in the container. As the linked document mentions, a unique key can only be created when the Azure Cosmos container is created, because the unique key policy defines part of the container's structure (its schema, so to speak) and prevents any duplicate entries afterwards.
Now, suppose somebody wants to add a new unique key to an existing container: it might conflict with the data already present, and the complete structure would need to be redesigned. That is why changing or adding unique keys in an existing container isn't supported.
The possible workaround to achieve the requirement is:
To set a unique key for an existing container, create a new container with the unique key constraint. Use the appropriate data migration tool to move the data from the existing container to the new container. For SQL containers, use the Data Migration tool to move data. For MongoDB containers, use mongoimport.exe or mongorestore.exe to move data.
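A minimal sketch of that workaround using the azure-cosmos Python SDK; the account URL, database and container names, partition key path, and the /email unique key path are all assumptions:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(url="https://your-account.documents.azure.com", credential="your-key")
database = client.get_database_client("mydb")

# The unique key policy can only be set here, at container creation time.
new_container = database.create_container(
    id="users_v2",
    partition_key=PartitionKey(path="/tenantId"),
    unique_key_policy={"uniqueKeys": [{"paths": ["/email"]}]},
)

# Then copy documents over from the old container (or use a migration tool).
old_container = database.get_container_client("users")
for doc in old_container.read_all_items():
    new_container.create_item(body=doc)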
Another possible way is to programmatically check the uniqueness of an element before inserting it into Cosmos DB. You can also try pre-triggers to implement the checks.
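A sketch of such a programmatic check with the Python SDK; note that check-then-insert is not atomic, so two concurrent writers could still race (the container handle and the /email property are assumptions):

# Query for an existing document with the same value before inserting.
matches = list(container.query_items(
    query="SELECT c.id FROM c WHERE c.email = @email",
    parameters=[{"name": "@email", "value": new_doc["email"]}],
    enable_cross_partition_query=True,
))
if not matches:
    container.create_item(body=new_doc)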

How to introduce a new column in dynamo DB running in production?

I have a use case where DynamoDB is running in production and I need to add a new column IDUpdatedAt which will also be serving as a sort key for one of the GSIs.
I tried this in a test environment: my application adds new rows with IDUpdatedAt and it works fine, but what about the existing rows? How do I add the values for those?
Also, new rows will always include IDUpdatedAt, but how will search be impacted for the older rows?
PS: IDUpdatedAt is being used as a filter in the application, i.e., user can search for specific ID and can get results sorted by date. That's why IDUpdatedAt is also a part of GSI (sort key).
Please help.
You've got the right idea by adding the field to new items. After all, DynamoDB does not enforce a particular schema outside of the primary key.
This also happens to be a very useful feature, especially when defining a GSI on that attribute: if the attribute exists on the item, it ends up in the index! For example, imagine modeling an email inbox in DDB where each item represents an email. You could include an attribute 'is_read' and define a GSI using that attribute.
If the 'is_read' attribute exists on the item, it's in the index; otherwise, it's not. A cool way to use GSIs to implement filtering.
Pretty neat stuff!
However, there is no way to retroactively update all items with a new attribute other than manually updating each item (or in batches). The equivalent in SQL databases is defining a new column. Unfortunately, an analogous operation in DDB does not exist.
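If you do need a backfill, the usual approach is a one-off script that scans the table and updates items missing the attribute. A hedged boto3 sketch; the table name, key attribute ("pk"), and the derived value are assumptions to adapt to your schema:

import boto3

table = boto3.resource("dynamodb").Table("my-table")

scan_kwargs = {}
while True:
    page = table.scan(**scan_kwargs)
    for item in page["Items"]:
        if "IDUpdatedAt" not in item:
            table.update_item(
                Key={"pk": item["pk"]},  # assumed key schema
                UpdateExpression="SET IDUpdatedAt = :v",
                # Placeholder value; derive the real one from whatever
                # ID/timestamp attributes your existing items carry.
                ExpressionAttributeValues={":v": "unknown#1970-01-01"},
            )
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]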

Conditional insert in Dynamodb

I am creating a leave tracker app where I want to store the user ID along with the from date and to date. I am using Amazon's DynamoDB as the database, and the user enters a leave through a custom command.
Eg: apply-leave from-date to-date
I want to avoid duplicate entries in the database. For example, if a user has already applied for a leave between 06-10-2019 to 10-10-2019 and applies for a leave between the same dates again, they should get a message saying that this already exists and a new record should not be created for the same.
However, a user can apply for multiple leaves and two users can take a leave between the same dates.
I tried using a conditional statement as follows:
table.put_item(
    Item={
        'leave_id': leave_id,
        'user_id': user_id,
        'from_date': from_date,
        'to_date': to_date,
    },
    ConditionExpression='attribute_not_exists(user_id) AND attribute_not_exists(from_date) AND attribute_not_exists(to_date)'
)
where leave_id is the partition key. However, this does not work: a new row is added every time, even when the dates are the same. I have looked through similar questions but haven't been able to understand how to configure this correctly.
Any ideas on how I should go about this, or if there is a different design that I should follow?
If you call your code with a leave_id that doesn't yet exist in the table, the item will always be inserted. If you call your code with a leave_id that already exists in your table, you should get the error An error occurred (ConditionalCheckFailedException) when calling the PutItem operation: The conditional request failed.
I have two suggestions:
If you don't want to change your table, you can create a secondary index with user_id as the partition key and then query the index for all items where the given user has from_date and to_date attributes, like this:
from boto3.dynamodb.conditions import Attr, Key

table.query(
    IndexName='user_id-index',
    KeyConditionExpression=Key('user_id').eq(user_id),
    FilterExpression=Attr('from_date').exists() & Attr('to_date').exists()
)
Then you will need to check for overlapping leave requests, etc. (e.g. a leave request that starts before an existing one finishes); a simple interval test like the sketch below is enough. After deciding that the leave request is valid, you call put_item.
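For the overlap test itself, a small helper (a sketch; inclusive date ranges assumed):

from datetime import date

def overlaps(a_start, a_end, b_start, b_end):
    # Two inclusive ranges overlap iff each starts before the other ends.
    return a_start <= b_end and b_start <= a_end

# Example: 06-10-2019..10-10-2019 vs 10-10-2019..12-10-2019 -> True
print(overlaps(date(2019, 10, 6), date(2019, 10, 10),
               date(2019, 10, 10), date(2019, 10, 12)))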
Another suggestion, and probably a better one, would be to create a composite primary key on your table, with user_id as the partition key and leave_id as the sort key. That way you could query for all leave requests from a particular user without the need to create a secondary index.
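A sketch of that layout, with one extra assumption not in the answer above: leave_id is derived deterministically from the dates, so an exact duplicate request fails the conditional write:

leave_id = f"{from_date}#{to_date}"  # assumed: sort key derived from the dates

table.put_item(
    Item={
        'user_id': user_id,    # partition key
        'leave_id': leave_id,  # sort key
        'from_date': from_date,
        'to_date': to_date,
    },
    # With a composite primary key, this rejects a second item with the
    # same user_id + leave_id pair (ConditionalCheckFailedException).
    ConditionExpression='attribute_not_exists(user_id) AND attribute_not_exists(leave_id)',
)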

Removing one-to-many relations using splice

I'm having some trouble getting relation deletion to work exactly how I would expect it to.
For example I have two simple tables, users and permissions with a one-to-many relation between users and permissions (or it could be many-to-many in this example as well).
I first tried deleting one of the related permissions using userDatasource.deleteItem() or userDatasource.item.permissions[index]._delete(), but either of those functions marks the record as deleted client-side, so you run into trouble when you need to insert again.
I then found a related question that said to use item.relation.splice(startIndex, 1) to just break the relation, and that worked as expected, but now I have a bunch of extra rows in my database with the user foreign key set to null. I would much rather have the same behavior as .splice but also have it delete those records from the database. Is there any way to do that, or is App Maker supposed to detect the broken relation and automatically delete the row from the table?
Just do a check after the splice like this:
if (item.relation.length === 0) {
  item._delete();
}

how to generate unique id per user?

I have a webpage, Default.aspx, which generates the ID for each new user; the ID is then submitted to the database on a button click on Default.aspx.
If another user is entering at the same time, the ID will be the same until they press the button on Default.aspx.
How do I get rid of this issue, so that each user is allotted a unique ID?
I am using read/write code to generate the unique ID.
You could use a Guid as the ID. To generate a unique ID:
Guid id = Guid.NewGuid();
Another possibility is to use an auto-incremented primary key column in the database, so that it is the database that generates the unique identifiers.
Three options
Use a GUID: Guid.NewGuid() will generate unique GUIDs. GUIDs are, of course, much longer than an integer.
Use interlocked operations to increment a shared counter: Interlocked.Increment is thread-safe. This will only work if all the requests happen in the same AppDomain; process recycling or a refresh of the code will create a new AppDomain and restart the count.
Use an IDENTITY column in the database. The database is designed to handle this; within the request that inserts the new row, use SCOPE_IDENTITY to select the value of the identity and update in-memory data (ORMs should handle this for you). (This is SQL Server; other databases have equivalent functionality.)
Of these, #3 is almost certainly the best.
You could generate a Guid:
Guid.NewGuid()
Or you could let the database generate it for you upon insert. One way to do this is via a sequence. See the Wikipedia article on surrogate keys.
From the article:
A surrogate key in a database is a unique identifier for either an entity in the modeled world or an object in the database. The surrogate key is not derived from application data.
The Sequence/auto-incremented column option is going to be simpler, and easier to remember when manually querying your DB (during debugging), but the DBA at my work says he's gotten 20% increases in performance by switching to Guids. He was using Oracle, and his database was huge, though :)
I use a static utility method to generate IDs: basically, take the full datetime (including seconds), append a random string of 3 or 4 characters, and return the whole thing; then you can save it to the database.
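A sketch of that approach in Python, for illustration only; note that a timestamp plus a few random characters can still collide under concurrent load, unlike a GUID:

import random
import string
from datetime import datetime

def generate_id():
    # Timestamp down to the second, plus 4 random characters.
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    suffix = "".join(random.choices(string.ascii_uppercase + string.digits, k=4))
    return stamp + suffix

print(generate_id())  # e.g. 20191006134501X7QD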
