Move points from one user to another using transactions in the Firebase Realtime Database

Using the Firebase Realtime Database, I want to move points from one user to another. To avoid conflicts (a user may receive coins from multiple other users at the same time), I have to use transactions.
My data structure :
{
  "uid-1": {
    "points": 30
  },
  "uid-2": {
    "points": 60
  }
}
So I need two transactions: one that subtracts from uid-1 and a second that adds to uid-2.
But I'm afraid that one transaction might succeed while the other fails. Is there any way to revert the operation, or to update both at the same time?

There is no secure way to implement conditionality between multiple transactions.
If both operations depend on each other, they should be run as a single transaction. That means you take an optimistic lock on the entire users node, but with your current data structure that is required.
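For example, a single transaction run over the users node can apply both balance changes in one atomic update. A minimal sketch, assuming the structure above and a hard-coded transfer of 20 points:
firebase.database().ref("users").transaction(function (users) {
  if (users !== null) {
    if (users["uid-1"].points < 20) {
      return; // returning undefined aborts the transaction: insufficient balance
    }
    users["uid-1"].points -= 20; // subtract from the sender
    users["uid-2"].points += 20; // add to the receiver
  }
  return users; // both balances commit together; re-run if the server value changed
});
The trade-off, as noted above, is that every transfer contends on the whole users node.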
An alternative is to not update the balance, but just keep a list of transactions. In that case you can ensure both the addition for the first user and subtraction for the second user are written atomically by using a multi-location update. In JavaScript this would look something like:
const ref = firebase.database().ref("users");
const updates = {};
// A push key gives each transfer a unique ID.
const transactionID = ref.push().key;
updates["uid1/transactions/" + transactionID] = 20;
updates["uid2/transactions/" + transactionID] = -20;
// Multi-location update: all paths are written atomically.
ref.update(updates);
The above write operation will either succeed completely, or fail completely. This ensures your database is always correct.

Related

Specify format for autogenerated Firestore document Ids

I'm creating a small game where people are able to join a room using a six-digit pin. Every room is represented by a document in a Firestore collection, where the room pin is the id of a room document.
My initial idea was to randomly generate a six-digit pin and check whether a document with that id already exists. If it does not, create the room document with the generated pin; if it does, generate a new pin and check whether that one is free. This method will work; however, with a bit of bad luck it might cause a lot of unnecessary requests to the database.
My question is, therefore: is it possible to specify a format for the autogenerated ids? Or, if that is not possible, is there a way to fetch all document ids to check locally whether an id already exists?
You cannot specify a format for the auto-generated IDs, but you can check whether a room with the same ID already exists. If it does, try a new ID; otherwise create the room with that ID.
async function addRoom(roomId) {
  const roomRef = admin.firestore().collection("rooms").doc(roomId)
  if ((await roomRef.get()).exists) {
    // ID is taken; try again with a new pin
    return addRoom(generatePin())
  } else {
    await roomRef.set({ ... })
  }
  return `${roomId} created`
}

function generatePin() {
  // 100000..999999, always six digits
  return String(Math.floor(Math.random() * 900000) + 100000)
}

return addRoom(generatePin())
  .then(() => console.log("Room created"))
  .catch((e) => console.error(e))
PS: This might end up recursing repeatedly, so I'd recommend using Firestore's own IDs, or uuid-int if you need numerical IDs only.
There is no way for you to specify the format of the automatic IDs generated by the Firestore SDKs. They are purely random, with an amount of entropy that statistically guarantees there won't be collisions (the chance that two clients generate the same ID is infinitesimally small), and that minimizes the chance of hotspots (created when lots of documents or index values are written to the same area on a disk).
You can, however, generate whatever ID format you want yourself. You'll just have to accept a higher chance of collisions, as you already do, and the fact that you may experience hotspots when documents end up in the same area on a disk.
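For illustration, a sketch of rolling your own ID format; the alphabet and length here are arbitrary assumptions, and the shorter you make the ID, the higher the collision chance you accept:
function customId(length) {
  // Arbitrary alphabet with ambiguous characters removed; nothing prescribed by the SDK.
  const alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
  let id = "";
  for (let i = 0; i < length; i++) {
    id += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return id;
}

const roomRef = admin.firestore().collection("rooms").doc(customId(6));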

How to guarantee a business rule in DynamoDB?

We have a business rule that one account can only have 3 projects at any given time.
In order to keep it efficient, we track the number of projects in a "userData" item instead of doing a COUNT query.
Consider the following example objects already in DynamoDB:
userData : { createdProjects : 2 }
project1 : { id : 1 }
project2 : { id : 2 }
In order to enforce this rule, we've done the following when creating a project (pseudo code):
in transaction:
  putItem(key = "project3", object = { id : 3 })
  updateItem(
    key = "userData",
    expression = "createdProjects = createdProjects + 1",
    condition = "createdProjects < 3"
  )
Now, if the user tries to create a project at the same time from two computers, say, will DynamoDB guarantee that they won't be able to create more than 3?
I know there are similar questions, like this one, but I wanted to know whether this also works in a transaction, because my condition is on another object.
Also, is my pseudo code the best approach? I'm open to other ways.
You can use a transaction for this.
Just include the PutItem request and the UpdateItem request with its condition in a single transaction, and either both will complete or neither will.
Transactions are the way to get this all-or-nothing behavior.
With the transaction write API, you can group multiple Put, Update, Delete, and ConditionCheck actions. You can then submit the actions as a single TransactWriteItems operation that either succeeds or fails as a unit. The same is true for multiple Get actions, which you can group and submit as a single TransactGetItems operation.
— docs
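As a concrete sketch, this is roughly what the transaction could look like with the AWS SDK for JavaScript v3; the table name and key attribute are made-up placeholders:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, TransactWriteCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Either both writes commit, or the whole transaction is cancelled,
// e.g. when the condition on createdProjects fails.
await ddb.send(new TransactWriteCommand({
  TransactItems: [
    { Put: { TableName: "app-table", Item: { pk: "project3", id: 3 } } },
    {
      Update: {
        TableName: "app-table",
        Key: { pk: "userData" },
        UpdateExpression: "SET createdProjects = createdProjects + :one",
        ConditionExpression: "createdProjects < :max",
        ExpressionAttributeValues: { ":one": 1, ":max": 3 },
      },
    },
  ],
}));
If the condition fails, the whole call throws a TransactionCanceledException, so the project item is never written on its own.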

DynamoDB thread-safe update

A Lambda function gets triggered by SQS messages. The reserved concurrency is set to the maximum, which means I can have concurrent Lambda executions. Each Lambda reads an SQS message and needs to update a DynamoDB table that holds the sum of message lengths; it's a numeric value that only increases.
Although I have implemented optimistic locking, I still see that the final value doesn't match the actual correct sum. Any thoughts?
Here is the code that does the update:
public async Task Update(T item)
{
    using (IDynamoDBContext dbContext = _dataContextFactory.Create())
    {
        T savedItem = await dbContext.LoadAsync(item);
        if (savedItem == null)
        {
            throw new AmazonDynamoDBException("DynamoService.Update: The item does not exist in the Table");
        }
        await dbContext.SaveAsync(item);
    }
}
Best to use DynamoDB Streams here, and batched writes. Otherwise you will unavoidably have transaction conflicts; a bunch of errors are probably sitting in some logs somewhere. You can also check the TransactionConflict CloudWatch metric for your table.
DynamoDB Streams
To perform aggregation, you will need a table which has a stream enabled on it. Set MaximumBatchingWindowInSeconds and BatchSize to values which suit your requirements. That is, say you need the table to be accurate to within 10 seconds: you would set MaximumBatchingWindowInSeconds to no more than 10. And you might not want more than 100 items waiting to be aggregated, so set BatchSize=100. You will create a Lambda function which will process the items coming into your table in the form of:
"TransactItems": [{
"Put": {
"TableName": "protect-your-table",
"Item": {
"id": "123",
"length": 4,
....
You would then iterate over the stream records and sum up the length attribute, and do an update with an ADD expression against a summary item in another table which holds statistics calculated from the stream; a sketch of such a handler follows. Note that you may receive duplicate messages, which may cause errors. You could handle this in DynamoDB by making sure you don't write an item that already exists, or by using a message deduplication id.
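A minimal sketch of such a stream handler in Node.js; the statistics table name and key are assumptions, and the length attribute matches the snippet above:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event) => {
  // Sum the lengths of the newly inserted items in this stream batch.
  let total = 0;
  for (const record of event.Records) {
    if (record.eventName === "INSERT") {
      total += Number(record.dynamodb.NewImage.length.N);
    }
  }
  if (total === 0) return;
  // One atomic ADD per batch instead of one write per message.
  await ddb.send(new UpdateCommand({
    TableName: "statistics",
    Key: { id: "summary" },
    UpdateExpression: "ADD totalLength :t",
    ExpressionAttributeValues: { ":t": total },
  }));
};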
Batching
Make sure you are not processing many tiny messages one at a time; instead batch them together, for example by letting the Lambda function which reads from SQS receive up to 100 messages at a time and do a single batched write. Also set a low concurrency limit on it, so that messages can bank up a little over a couple of seconds.
The reason you want to do this is that you can't actually increment a value in DynamoDB many times a second; it will give you errors and actually slow your processing. You'll find your system as a whole will perform at a fraction of the cost, be more accurate, and the near-real-time accuracy should be close enough to what you need.

Cloud Firestore - ensuring data consistency

My database uses redundant data to speed up fetches and minimise the number of documents that need to be read for certain queries. For example, I'd store the names of followed users in a map in a user's document, so I don't have to read another document to retrieve the name of each followed user.
User: (Collection) {
  userID: (Document) {
    // user state
    name: ...
    followingUsers: (Map) {
      followingUserID: nameOfUser,
      followingUserID: nameOfUser
    }
  }
}
If a user was to change their name, what is the best way to propagate these changes to all places with the redundant data?
Good question!
For starters, I'd recommend doing this kind of administrative task in a server SDK or cloud function, since you don't want a client to necessarily have the ability to start mucking with every single User doc.
The good news is that, once you start using the server SDKs, you can then put a query into a transaction. So let's say user_123 changes their name from "Jenny" to "Jen". Your transaction would look something like this in pseudo-code:
Start transaction
  transaction.get(usersRef.where("followingUsers.user_123", ">=", ""))
  Loop through the query results. Grab the doc_id from each doc and use it to build up the writes in your transaction:
    transaction.update("/users/<doc_id>/", {"followingUsers.user_123": "Jen"})
  Also make sure you add transaction.update("/users/user_123", {"name": "Jen"})
End transaction
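In real code with the Node Admin SDK this could look something like the following; the collection name comes from the structure in the question, and user_123 / "Jen" are the example values:
const admin = require("firebase-admin");
const db = admin.firestore();

async function renameUser(userId, newName) {
  await db.runTransaction(async (t) => {
    const usersRef = db.collection("User");
    // All reads must happen before any writes inside a Firestore transaction.
    const followers = await t.get(
      usersRef.where(`followingUsers.${userId}`, ">=", "")
    );
    followers.forEach((doc) => {
      t.update(doc.ref, { [`followingUsers.${userId}`]: newName });
    });
    t.update(usersRef.doc(userId), { name: newName });
  });
}

renameUser("user_123", "Jen");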
This general approach would also work on the client-side, but you just wouldn't be able to do this in a transaction. (You could still put all of these changes into a batch write, though.)

Do document IDs in Meteor need to be random or just unique?

I'm migrating data from a Rails system, and it would be really convenient to assign the migrated objects IDs like post0000000000001, etc.
I've read here
Creating Meteor-friendly id's in Mongo?
that Meteor creates random 17-character strings from
23456789ABCDEFGHJKLMNPQRSTWXYZabcdefghijkmnopqrstuvwxyz
which looks to be chosen to avoid possibly ambiguous characters (it omits 1 and I, etc.).
Do the IDs need to be random for some reason? Are there security implications to being able to guess a Meteor document's ID? Or is it just an easy way of generating unique IDs?
Mongo seems fine with sequential ids:
http://docs.mongodb.org/manual/core/document/#the-id-field
http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
so I would guess this would have to be a Meteor constraint, if it exists.
The IDs just need to be unique.
Typically there is an element of order, such as using integers, timestamps, or something else sequential.
This can't work in Meteor: inserts can come from the client, clients may be disconnected for a period, and clients' clocks may be off or have varying latency. It's also not possible to know the previous _id (in the case of a sequential _id) at the time an _id is written, owing to latency compensation (instant inserts).
The consequence of the lack of order in the DDP protocol is the decision to use entirely random ids. That is not to say you can't use your own _ids.
While there is a risk of a collision with this strategy, it is minimal: on the order of [number of docs in your collection]/(55^17) * 100%, i.e. nearly impossible. In the event a collision does occur, the client will temporarily insert the document and then remove it once the server reports a Mongo duplicate-key error.
As for the security concern raised in the other answer: it is not too much of an issue if the _id of a user is known. It is not possible to log in without a valid hashed login token, or to retrieve any information with it. This applies to the users collection only, of course. For your own collections, an easily guessable URL containing an id as a reference, without publish-method checks on the eligibility to read the data, is a risk that the high-entropy random ids generated by Meteor can mitigate.
As long as they are unique it should be ok to use your own ids.
I am not an expert, but I suppose Mongo needs a unique ID so that when it updates a document, it in fact creates a new version of the document with that same ID.
The real question is (I too wish to know): can we change the ID without breaking Mongo's mechanism and reliability, or do we need to create a secondary attribute? (That could make for a smaller index too, I suppose.)
But I too can imagine that, security-wise, it is better if document IDs are difficult to guess, especially user IDs! Otherwise, could it be easy or possible to impersonate a user, knowing the ID? Anybody, correct me if I am wrong.
I don't think it's possible or desirable to change the IDs Mongo generates.
But you can easily create an auto-incrementing ID with http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
function getNextSequence(name) {
  var ret = db.counters.findAndModify({
    query: { _id: name },
    update: { $inc: { seq: 1 } },
    new: true
  });
  return ret.seq;
}
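Hypothetical usage, zero-padding the counter to produce IDs in the post0000000000001 style the question mentions:
var seq = getNextSequence("posts");
var paddedId = "post" + ("0000000000000" + seq).slice(-13); // 13-digit, zero-padded
db.posts.insert({ _id: paddedId, title: "migrated from Rails" });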
I have created a package that does just that and that is configurable.
https://atmospherejs.com/stivaugoin/fluid-refno
var refNo = generateRefNo({
  name: 'invoices', // default: 'counter'
  prefix: 'I-',     // default: ''
  size: 5,          // default: 5
  filling: '0'      // default: '0'
});

console.log(refNo); // output: "I-00001"
You can now use refNo to add to your document on insert.
Maybe it will help you.
