Is it necessary to use the "save data as transactions" technique if a value only ever increases? In the social blogging app example from the Firebase docs, starCount can go up or down, so using a transaction there makes sense. But if a value only increases, I suppose a transaction is not needed, right?
The scenario is multiple users increasing the same value at the same time.
Edit: Aug 17th, 2021
Now it's also possible to solve this problem without the use of a transaction. We can simply increment a value using:
rootRef.child("score").setValue(ServerValue.increment(1));
And for a decrement, the following line of code is required:
rootRef.child("score").setValue(ServerValue.increment(-1));
The counter can go up or down: a user can click the counter to increase the value, and click again to decrease it. We don't use transactions only because a counter can move in both directions; we use them whenever we know there is a possibility that two users can perform the same action at the same time. Without a transaction, if two users act simultaneously, the counter may be incremented/decremented only once instead of twice (a lost update).
If we use transactions, both actions are applied correctly even though they take place in different threads of execution, so there is no way the counter ends up incremented/decremented only once when both users take the same action at the same time.
In conclusion, use a transaction whenever there is a possibility that two or more users will change the same value in your Firebase database at the same time.
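The lost-update problem described above can be sketched in plain Java (this is not Firebase code; AtomicInteger stands in for an atomic server-side increment such as ServerValue.increment(1)):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java sketch of the race described above: two "users" read the same
// counter snapshot, add one, and write it back, so one increment is lost.
// An atomic increment never loses updates.
public class CounterRace {

    // Naive read-modify-write: both writers read the same snapshot.
    static int lostUpdate() {
        int counter = 0;
        int readByUserA = counter;   // both users read 0
        int readByUserB = counter;
        counter = readByUserA + 1;   // user A writes 1
        counter = readByUserB + 1;   // user B also writes 1 -> lost update
        return counter;              // 1, not 2
    }

    // Atomic increment: each update is applied to the latest value.
    static int atomicIncrement(int writers) {
        AtomicInteger counter = new AtomicInteger(0);
        for (int i = 0; i < writers; i++) {
            counter.incrementAndGet();
        }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println("naive: " + lostUpdate());        // prints 1
        System.out.println("atomic: " + atomicIncrement(2)); // prints 2
    }
}
```

Both a transaction and ServerValue.increment give you the second behaviour on the server.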
Related
I'm writing a small game for Android in Unity. Basically, the player has to guess what's in the photo. Now my boss wants me to add an additional feature: after a successful/unsuccessful guess, the player gets a panel to rate the photo (basically like or dislike), because we want to track which photos are not good and remove them after a couple of successful guesses.
My understanding is that to add +1 to a value in Firebase, I first have to make a call to read it, and then make a separate call that writes the value we got plus one. I was wondering if there is a more efficient way to do it?
Thanks for any suggestions!
Instead of requesting data from Firebase at the moment you want to add to it, you can request it up front (in an onCreate-like method), keep the object, and use it when you want to update it.
thanks
Well, one thing you can do is store your data temporarily in some object and NOT send it to Firebase right away. Instead, send the data to Firebase when the app/game is about to be paused/minimized, reducing potential lag and increasing player satisfaction. OnApplicationPause(bool) is one such callback; it is invoked when the game is minimized.
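A minimal sketch of that buffering idea, with made-up class and method names (a plain Map stands in for the Firebase backend; in Unity you would call flush() from OnApplicationPause(true)):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: gameplay code writes into a local buffer, and the
// buffered values are only pushed to the backend when the app pauses.
public class BufferedScores {
    private final Map<String, Integer> pending = new HashMap<>();
    private final Map<String, Integer> backend; // stands in for Firebase

    public BufferedScores(Map<String, Integer> backend) {
        this.backend = backend;
    }

    // Called freely during gameplay; no network traffic happens here.
    public void set(String key, int value) {
        pending.put(key, value);
    }

    // Called when the game is paused/minimized, e.g. OnApplicationPause(true).
    public void flush() {
        backend.putAll(pending);
        pending.clear();
    }

    public static void main(String[] args) {
        Map<String, Integer> backend = new HashMap<>();
        BufferedScores scores = new BufferedScores(backend);
        scores.set("photoLikes", 10);  // buffered only
        scores.flush();                // now written to the "backend"
        System.out.println(backend);   // {photoLikes=10}
    }
}
```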
To do what you want, I would recommend using a Transaction instead of just doing a SetValueAsync. This lets you change values in your large shared database atomically, by first running your transaction against the local cache and later against the server data if it differs (see this question/answer).
This gets into some larger interesting bits of the Firebase Unity plugin. Reads/writes will run against your local cache, so you can do things like attach a listener to the "likes" node of a picture. As your cache syncs online and your transaction runs, this callback will be asynchronously triggered letting you keep the value up to date without worrying about syncing during app launch/shutdown/doing your own caching logic. This also means that generally, you don't have to worry too much about your online/offline state throughout your game.
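The retry behaviour of a transaction can be sketched in plain Java (no Firebase SDK involved; compareAndSet plays the role of the server rejecting a commit made against stale data):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntUnaryOperator;

// Sketch of the optimistic retry loop behind a transaction: read the current
// value, compute the update, and commit only if nobody changed the value in
// between; otherwise re-read and try again.
public class OptimisticRetry {

    static int runTransaction(AtomicInteger node, IntUnaryOperator update) {
        while (true) {
            int snapshot = node.get();               // local read (like the cache)
            int proposed = update.applyAsInt(snapshot);
            if (node.compareAndSet(snapshot, proposed)) {
                return proposed;                     // commit succeeded
            }
            // Another writer got in first; loop and rerun against fresh data.
        }
    }

    public static void main(String[] args) {
        AtomicInteger likes = new AtomicInteger(0);
        runTransaction(likes, v -> v + 1);
        runTransaction(likes, v -> v + 1);
        System.out.println(likes.get()); // 2
    }
}
```

The Firebase Unity plugin's RunTransaction follows the same shape: your update function may be invoked more than once, first against the cached value and again against server data if they differ.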
I have an entity in Datastore that looks something like this:
public class UserEntry {
    @Parent
    private Ref<User> parent;
    @Id
    private String id;
    private String seqNumber;
    private String name;
}
I am trying to maintain a sequence number for each user, i.e. the first entry for a user should have seqNumber 1, the next 2, and so on. What is the best way to achieve this?
i.e:
1) How can I get the seqNumber of the last entry for a user?
2) How do I ensure, while writing, that another process has not already written an entry for the user with the same seqNumber? I cannot make seqNumber the id of the entry.
I am afraid that the only way to achieve this is to use Datastore's support for transactions. Note, however, that this solution comes with a considerable contention risk, and with a risk of skipping some values in the sequence when done incorrectly. Let me start with a naive approach to illustrate the basic idea.
A straightforward solution (naive approach):
You could create a dedicated entity, let's call it Sequence, which would have a single property, let's call it value. At the beginning, the value property would contain 0 (or 1, depending where you want the sequence to start). Then, prior to creating any new UserEntry you would have to execute a transaction which would:
obtain the current value,
increment value by one (within the same transaction).
The fact that you would be using transactions would prevent concurrent requests from obtaining the same sequential id. Note, however, that there would have to be exactly one "instance" of the Sequence entity kind stored in the datastore. Updating this entity too rapidly could lead to contention issues. Also, this approach uses non-idempotent transactions which could lead to skipping some values from the sequence.
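A toy model of this naive approach (these are not Datastore API calls; a synchronized method stands in for the transaction, so two concurrent callers can never obtain the same value):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy model: one shared Sequence "entity" whose value is read and
// incremented inside a "transaction" (here, a synchronized method).
public class NaiveSequence {
    private long value = 0; // the single Sequence entity's "value" property

    // "Transaction": read the current value, increment, return the new number.
    public synchronized long nextSeqNumber() {
        value = value + 1;
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        NaiveSequence seq = new NaiveSequence();
        List<Long> assigned = Collections.synchronizedList(new ArrayList<>());
        Thread a = new Thread(() -> assigned.add(seq.nextSeqNumber()));
        Thread b = new Thread(() -> assigned.add(seq.nextSeqNumber()));
        a.start(); b.start(); a.join(); b.join();
        System.out.println(assigned); // 1 and 2 in some order, never duplicated
    }
}
```

Note that in real Datastore, every such update hits the single Sequence entity, which is exactly where the contention and idempotency problems below come from.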
Contention risk:
Beware that the straightforward solution described above would limit the throughput of your application. Your application wouldn't be able to handle creating more than one UserEntry per second for an extended period of time. This is because creating a UserEntry would require updating the Sequence entity, and "one write per second" is an approximate limit for writing to a single entity; see https://cloud.google.com/datastore/docs/concepts/limits
Danger of non-idempotent transactions:
Datastore can occasionally throw an error claiming that a transaction failed even though it did not; see https://cloud.google.com/datastore/docs/concepts/transactions. If you retried the transaction after such a "non-error", you would end up executing it twice. In your scenario, you would increment value twice for the creation of a single UserEntry, thus skipping one value from the sequence (or more, if you were extremely unlucky and got the "non-error" several times in a row).
This is why Google suggests making your transactions idempotent, meaning that executing the transaction a thousand times should have the same effect on the resulting state of the underlying data as executing it once. A good example of an idempotent transaction is renaming a user: if you tell someone to be renamed to "Carl" a thousand times, he will end up being called... well, "Carl". If, on the other hand, you tell our value counter to be incremented a thousand times... you get the picture.
Better solutions:
If you are OK with the above-mentioned risks of the straightforward solution, you are good to go. But here are some tips on how to avoid these issues:
Avoiding contention:
You could use a task queue to postpone the assignment of seqNumber. By making sure that the queue won't send requests more than once per second, you would easily avoid possible contention issues. The obvious downside of this solution is that there would be some delay before the seqNumber property is assigned to the newly created UserEntry. I don't know if that is acceptable for you.
Design transactions to be idempotent:
Here is a simple modification which would make the transactions idempotent: instead of using the value property to hold the actual counter value, use it to store the id of the most recently created UserEntry. Then, when deciding what the seqNumber of the next UserEntry should be, retrieve that most recently added UserEntry, use its seqNumber to calculate the next value, and then update the Sequence entity (as many times as you want) telling it "your value property is now equal to this particular id".
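A sketch of that idempotent variant, with illustrative names only (an in-memory map stands in for the datastore): the Sequence stores the id of the most recently created UserEntry, so replaying the same transaction after a spurious failure cannot skip a value:

```java
import java.util.HashMap;
import java.util.Map;

// Idempotent sketch: the "Sequence" stores the id of the most recently
// created entry, not a counter. Re-running the same "transaction" with the
// same entry id leaves the state unchanged.
public class IdempotentSequence {
    private String lastEntryId = null;
    private final Map<String, Long> seqNumberByEntryId = new HashMap<>();

    // Assign the next seqNumber to entryId; safe to call twice for one entry.
    public synchronized long assign(String entryId) {
        if (entryId.equals(lastEntryId)) {
            return seqNumberByEntryId.get(entryId); // retry: same result
        }
        long next = (lastEntryId == null)
                ? 1
                : seqNumberByEntryId.get(lastEntryId) + 1;
        seqNumberByEntryId.put(entryId, next);
        lastEntryId = entryId;
        return next;
    }
}
```

Calling assign with the same entry id a thousand times has the same effect as calling it once, which is exactly the idempotency property described above.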
Final note:
You are very correct in NOT using seqNumber as the id of the entity. Using monotonically increasing values as entity ids is another well-known contention trap; see https://cloud.google.com/datastore/docs/best-practices
Hope this helps.
The problem
I have a Firebase application in combination with Ionic. I want the user to create a group and define a time at which the group will be deleted automatically. My first idea was to create a setTimeout(), save it, and override it whenever the user changes the time. But as I have read, setTimeout() is a bad solution for long durations (because of the Firebase billing service). Later I heard about cron, but as far as I have seen, cron only allows calling functions at a specific time, not relative to a given time (e.g. 1 hour from now). Ideally, the user can define any given time with a datetime picker.
My idea
So my idea is as follows:
User defines the date via native datepicker and the hour via some spinner
The client writes the time into a separate Firebase database node with a reference of the following form: /scheduledJobs/{date}/{hour}/{groupId}
Every hour, the Cron task will check all the groups at the given location and delete them
If a user wants to change the time, they just delete the old value in scheduledJobs and create a new one
My question
What is the best way to schedule the automatic deletion of the group? I am not sure my approach is a good fit, since querying by date may create a very flat and long list in my database. Also, my approach is limited in that only full hours can be used as the deletion time, not an arbitrary time. Additionally, I need two inputs (date + hour) from the user instead of a single datetime (which would also give me the minutes).
I believe what you're looking for is node-schedule. Basically, it allows you to run server-side cron jobs, and it can take date-time objects and schedule a job at that exact time. Since I'm assuming you're running a server for this, it would allow you to schedule the deletion at whatever time you wish, based on the user input.
An alternative to TheCog's answer (which relies on running a node server) is to use Cloud Functions for Firebase in combination with a third party server (e.g. cron-jobs.org) to schedule their execution. See this video for more or this blog post for an alternative trigger.
In either of these approaches I recommend keeping only upcoming triggers in your database, so delete jobs after you've processed them. That way you know the list won't grow forever, but will instead stay at a bounded size. In fact, you can query it quite efficiently, because you know you only need to read jobs that are scheduled before the next trigger time.
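A sketch of that bookkeeping, using made-up names and a plain sorted map in place of the database: jobs are keyed by their scheduled hour, each hourly run reads only the keys at or before "now", and processed jobs are deleted so the store stays bounded:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical sketch mirroring /scheduledJobs/{date}/{hour}/{groupId}:
// a sorted map keyed by the scheduled hour (epoch hours for simplicity).
public class ScheduledJobs {
    private final TreeMap<Long, List<String>> jobsByHour = new TreeMap<>();

    public void schedule(long hour, String groupId) {
        jobsByHour.computeIfAbsent(hour, h -> new ArrayList<>()).add(groupId);
    }

    // Return every group id due at or before nowHour and delete those jobs,
    // so only upcoming triggers remain in the store.
    public List<String> drainDue(long nowHour) {
        List<String> due = new ArrayList<>();
        NavigableMap<Long, List<String>> head = jobsByHour.headMap(nowHour, true);
        for (List<String> groups : head.values()) {
            due.addAll(groups);
        }
        head.clear(); // removes the processed entries from the backing map
        return due;
    }
}
```

The range read (everything up to "now") is cheap precisely because the keys are ordered by trigger time, which is the same property the /scheduledJobs/{date}/{hour} layout gives you.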
If you're having problems implementing your approach, I recommend sharing the minimum code that reproduces where you're stuck as it will be easier to give concrete help that way.
My site allows users to post items for sale. Each item has an expiration date and time, at which point I plan to mark it as expired and remove it from view. Right now, the client has a helper function that determines the time remaining and marks the item as expired once the time remaining reaches 0. The issue is that the item still appears in the user's view until they reload the page.
I have considered running a cron job to mark expired items, but was concerned this may be too costly as it would have to run very often to be an efficient method.
Is there a more efficient way to handle this? I was hoping to get each item reactively remove itself from the list once the time expires.
I had a similar requirement in an app. I ended up using the remcoder:chronos package to make time reactive, which removed the need for an expiration key as well as any cron jobs. I used reactive time in the Collection.find() query that returns the cursor of documents to display; at the expiration time they disappear automatically.
I'm implementing a leaderboard which is backed up by DynamoDB, and their Global Secondary Index, as described in their developer guide, http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
But two of the things that are essential for a leaderboard system are your position within it and the total count, so you can show #1 of 2000, or similar.
Using the index, the rows are sorted the correct way, and I'd assume these calls would be cheap enough to make, but I haven't been able to find a way, as of yet, how to do it via their docs. I really hope I don't have to get the entire table every single time to know where a person is positioned in it, or the count of the entire table (although if that's not available, that could be delayed, calculated and stored outside of the table at scheduled periods).
I know DescribeTable gives you information about the entire table, but I would be applying filters to the range key, so that wouldn't suit this purpose.
I am not aware of any efficient way to get the ranking of a player. The dumb way is to do a query starting from the player with the highest points and move downward, incrementing a counter until you reach the target player. So for the user with the lowest points, you might end up scanning the whole range.
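That "dumb way" can be sketched against an already-sorted score list, standing in for the index query that returns players highest-first:

```java
// Sketch only: scoresDescending stands in for the rows a descending index
// query would return. Rank is found by counting from the top, which costs
// O(n) reads, exactly why it gets expensive for low-ranked players.
public class LinearRank {
    static int rankOf(int[] scoresDescending, int playerScore) {
        int rank = 1;
        for (int score : scoresDescending) {
            if (score == playerScore) {
                return rank;
            }
            rank++;
        }
        return -1; // player's score not present
    }

    public static void main(String[] args) {
        int[] scores = {900, 750, 600, 400};
        System.out.println(rankOf(scores, 600)); // 3
    }
}
```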
That being said, you can still get the top 100 players with no problem (the leaders). Just do a query starting from the player with the highest points, and set the query limit to 100.
Also, for a given player, you can get the 100 players around him with similar points. You just need to do two queries:
query with hashkey="" and rangekey <= his point, limit 50
query with hashkey="" and rangekey >= his point, limit 50
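Both query patterns can be sketched against a pre-sorted score list standing in for the index (this assumes the player's score appears in the list; the names are illustrative, not DynamoDB API calls):

```java
import java.util.List;

// Sketch of the two patterns above: "top N" and "players around me",
// run against a list sorted highest-first, standing in for the GSI.
public class LeaderboardQueries {

    // Top N: query from the highest score with a limit.
    static List<Integer> top(List<Integer> scoresDescending, int limit) {
        return scoresDescending.subList(0, Math.min(limit, scoresDescending.size()));
    }

    // Around me: up to `half` entries above and below the player's score,
    // the in-memory analogue of the two range-key queries above.
    static List<Integer> around(List<Integer> scoresDescending, int playerScore, int half) {
        int i = scoresDescending.indexOf(playerScore); // assumed present
        int from = Math.max(0, i - half);
        int to = Math.min(scoresDescending.size(), i + half + 1);
        return scoresDescending.subList(from, to);
    }
}
```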
This was the exact same problem we faced when developing our app. Here are the two solutions we came up with to deal with it:
1) Query your index with scanIndexForward=false, which gives you the top players (assuming your score/points attribute is the range key), with a limit of, say, 1000. Then apply the linear formula y = mx + b, taking two samples from the results (typically the first and the last value) to solve for m and b, with x = points and y = rank. Given a user's points, this gives you their rank. It will not be an exact rank, only an approximation; Google does the same when searching mail, showing an approximate result count on the first call rather than an exact value.
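The interpolation step can be written out directly; this is a sketch of the y = mx + b estimate, with the two samples taken from the first and last rows of the top-N query:

```java
// Fit a line through two known (points, rank) samples and use it to
// estimate the rank for any score. Approximate by design.
public class ApproxRank {
    static double estimateRank(double points1, double rank1,
                               double points2, double rank2,
                               double playerPoints) {
        double m = (rank2 - rank1) / (points2 - points1); // slope
        double b = rank1 - m * points1;                   // intercept
        return m * playerPoints + b;                      // y = mx + b
    }

    public static void main(String[] args) {
        // Samples: 1000 points -> rank 1, 0 points -> rank 1001.
        System.out.println(estimateRank(1000, 1, 0, 1001, 500)); // 501.0
    }
}
```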
2) Get all the records and store them in a cache until the next update. This is by far the best and least expensive approach, and it is what we are using.
The beauty of DynamoDB is that it is highly optimized for very specific (and common) use cases. The cost of this optimization is that many other use cases cannot be achieved as easily as with other databases. Unfortunately yours is one of them. That being said, there are perfectly valid and good ways to do this with DynamoDB. I happen to have built an application that has the same requirement as yours.
What you can do is enable DynamoDB Streams on your table and process item update events with a Lambda function. Every time the number of points for a user changes you re-compute their rank and update your item. Even if you use the same scan operation to re-compute the rank, this is still much better, because it moves the bulk of the cost from your read operation to your write operation, which is kind of the point of NoSQL in the first place. This approach also keeps your point updates fast and eventually consistent (the rank will not update immediately, but is guaranteed to update properly unless there's an issue with your Lambda function).
I recommend going with this approach, and once you reach scale, optimizing by caching your users by rank in something like Redis, unless you have prior experience with it and can set it up quickly. Pick whatever is simplest first. If you are concerned about your leaderboard changing too often, you can reduce the cost by re-computing the ranks of only the first, say, 100 users, and scheduling another Lambda function to run every few minutes, scan all users, and update their ranks all at once.