I have a problem regarding the organization of my data.
What I want to achieve
TL/DR: One data point updated in real time in many different groups, how to organize?
Each user sets a daily goal (goal) he wants to achieve.
Upon working, each user increases his time spent (daily_time_spent) to get closer to his daily goal, say from 1 minute spent to 2 minutes spent.
Each user can also be in a group with other users.
If there is a group of users, you can see each other's progress (goal/daily_time_spent) in real time (real time being every 2-5 minutes, for cost reasons).
It will later also be possible to set a daily goal for a specific group. Your own daily goal would contribute to each of the groups.
Say you are part of three groups with the goals 10m/20m/30m and you have already done 10m: you would have completed the first group's goal, 50% of the second, and roughly 33% of the third. Your own progress (daily_time_spent) contributes to all groups, regardless of their individual goals (group_daily_goal).
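For illustration, here is a minimal sketch of that per-group percentage calculation (the names dailyTimeSpent and groupDailyGoals are placeholders, not part of any actual schema):

```typescript
// Progress of one user toward each group's daily goal, capped at 100%.
// All values are minutes; the names are hypothetical.
function groupProgress(dailyTimeSpent: number, groupDailyGoals: number[]): number[] {
  return groupDailyGoals.map((goal) =>
    Math.min(100, Math.round((dailyTimeSpent / goal) * 100)),
  );
}

// 10 minutes against goals of 10/20/30 minutes -> [100, 50, 33]
console.log(groupProgress(10, [10, 20, 30]));
```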
My ideas
How would I organize that? One idea: whenever a user increments his/her time, the new value gets written into each group the user is part of. But this seems pretty inefficient, because I would potentially write the same data in many different places (coming from a SQL-developer background, it might also be expensive?).
Another option: Each user tracks his time, say under userTimes/{user} and then there are the groups: groups/{groupname} with links to userTimes. But then I don't know how to get realtime updates.
Thanks a lot for your help!
Both approaches can work fine, and there is no single best approach here - as Alex said, it all depends on the use cases of your app and your comfort level with the code that is required for each of them.
Duplicating the data under each relevant group will complicate the code that performs the write operation, and store more data. But in return for that, reading the data will be really simple and scale very well to many users.
Reading the data from under all group members will complicate the code that performs the read operation, and slow it down a bit (though not nearly as much as you may expect, as Firebase can pipeline the requests). But it does keep your data minimal and your write operations simple.
If you choose to duplicate the data, that is an operation that you can usually do well in a (RTDB-triggered) Cloud Function, but it's also possible to do it through a multi-path write operation from the client.
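For what it's worth, here is a minimal sketch of such a multi-path (fan-out) write with the Firebase JS SDK; the paths under userTimes/ and groups/ are assumptions based on the structure described in the question, not a prescription:

```typescript
import { getDatabase, ref, update } from "firebase/database";

// Write the user's new daily time both under the user and under every
// group they belong to, in one atomic multi-path update.
// The paths below are hypothetical and only illustrate the fan-out idea.
async function fanOutDailyTime(
  uid: string,
  groupIds: string[],
  dailyTimeSpent: number,
): Promise<void> {
  const db = getDatabase();
  const updates: Record<string, number> = {
    [`userTimes/${uid}/daily_time_spent`]: dailyTimeSpent,
  };
  for (const groupId of groupIds) {
    updates[`groups/${groupId}/members/${uid}/daily_time_spent`] = dailyTimeSpent;
  }
  await update(ref(db), updates); // all paths succeed or fail together
}
```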
I'm creating my first-ever project with Firebase, and I've come to the point where I need some statistics based on user input. I know Firebase (or NoSQL databases in general) isn't ideal for statistics, but it works for me in every other case, so I would like to give it a try.
What I have:
I'm working on an application where people can invite a friend to work for their company, so I have a collection of "referrals" where the ID of each referral document is basically the user ID of the user the referral belongs to, and then there is a subcollection named "items" where the data is stored.
What my data looks like:
Each item has these fields:
applicant
appliedDate
position (which includes a positionId and the department the position comes from)
status
What I want is to let users generate statistics based on:
date range
status
department
What I was thinking about:
It's probably not the best idea to let Firebase iterate over all referrals whenever a user makes a request, as that may get really expensive. What I was thinking of is using Cloud Functions to update the statistics whenever something changes, e.g. when a new applicant applies I increase a counter by one, and do the same for a counter for the specific department. However, I feel like this works for total numbers or for predefined queries (e.g. "LAST MONTH"), but since I won't know which dates the user will select, it starts to get tricky.
Any idea how can I design something like this?
Thanks a lot!
What you're considering is the idiomatic approach to calculating aggregates in Firestore, and in most NoSQL databases. If you follow this pattern, Firestore is quite well suited to storing statistics.
It's ad-hoc statistics, like your unknown date range, that are trickier. Usually this comes down to storing the right values so that you remove the need to read an unknown number of documents to calculate a value.
For example, if you store counters for the statistics per month, week, day and hour, you can satisfy a wide range of date ranges with a limited number of read operations. You may need to read multiple documents, but the number of documents to read depends on the range, and not on the total number of documents in the database.
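As a sketch of that idea, a Cloud Function could bump per-month and per-day counters whenever a referral item is created. The stats collection, its document IDs, and the field names below are assumptions for illustration only:

```typescript
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

// Hypothetical trigger: whenever a referral item is created, increment
// counters on the month and day documents it falls into.
export const onReferralItemCreated = functions.firestore
  .document("referrals/{userId}/items/{itemId}")
  .onCreate(async (snap) => {
    const item = snap.data();
    const applied: Date = item.appliedDate.toDate(); // assumes a Firestore Timestamp
    const month = applied.toISOString().slice(0, 7); // e.g. "2024-05"
    const day = applied.toISOString().slice(0, 10);  // e.g. "2024-05-17"
    const department = item.position?.department ?? "unknown";

    const inc = admin.firestore.FieldValue.increment(1);
    const data = { total: inc, byDepartment: { [department]: inc } };

    const batch = db.batch();
    batch.set(db.doc(`stats/${month}`), data, { merge: true });
    batch.set(db.doc(`stats/${day}`), data, { merge: true });
    await batch.commit();
  });
```

With documents like these in place, a query over an arbitrary date range becomes a handful of reads over the month and day documents that cover it, rather than a read of every item.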
Of course, for the most flexible ad-hoc querying, you may still want to consider another solution, such as BigQuery, which was made precisely for this use-case.
I am making a Moneymanagement-App where the user can create Transfers for each day.
I am currently listing all the data on the main screen. At the moment that doesn't matter because there isn't much data, but imagine a user who has used the app for several years, tracking all of his spending.
My first thought was to cache all the available data for that user, but that would cause too many unnecessary reads, because the user most likely won't need the data from, let's say, 5 years ago.
So I thought the solution would be to just implement pagination for that screen.
But:
The user can get statistics about his spending history on another screen by selecting a category and a time period.
Currently I am running a query on those parameters each time they change, but this will obviously also lead to a lot of unnecessary reads.
So the problem is: if the user chooses to get statistics from 5 years ago, that data wouldn't exist in the cache, so I would still have to run a query for that time period and then end up with an incomplete cache of that period, because I only got some of the data back from the query.
Would love to hear your thoughts on this. How would you handle it ?
In general: don't run aggregation queries from the client on demand. Instead store aggregated data in the database, and update it as data is written.
So say that you keep some annual totals, such as their balance at the start and end of the year, their total income and spend for that year, probably broken down by categories. That is all information that you could put in a document for each year.
You'd have a structure /users/$uid/totals/$year and you then have the totals in fields in that document. Every time you write a new transaction, you update the totals document for that user for the current year.
If you do this, you'll only need to read the totals document to show totals, and you'll only need to read individual transactions if you want to show individual transactions.
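A minimal sketch of that pattern with the Firestore client SDK, assuming the /users/$uid/totals/$year structure above plus a transfers subcollection; the field names (amount, category, spent, byCategory) are made up for illustration:

```typescript
import {
  getFirestore, doc, collection, writeBatch, increment, serverTimestamp,
} from "firebase/firestore";

// Write a new transfer and update the yearly totals document in one batch,
// so the totals never drift from the underlying data.
// Collection and field names here are hypothetical.
async function addTransfer(uid: string, amount: number, category: string): Promise<void> {
  const db = getFirestore();
  const year = new Date().getFullYear().toString();

  const transferRef = doc(collection(db, `users/${uid}/transfers`)); // auto ID
  const totalsRef = doc(db, `users/${uid}/totals/${year}`);

  const batch = writeBatch(db);
  batch.set(transferRef, { amount, category, createdAt: serverTimestamp() });
  batch.set(
    totalsRef,
    { spent: increment(amount), byCategory: { [category]: increment(amount) } },
    { merge: true },
  );
  await batch.commit();
}
```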
Also see: Is it possible to run aggregation queries in Firestore?
I'm implementing a leaderboard which is backed up by DynamoDB, and their Global Secondary Index, as described in their developer guide, http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
But two of the things that are very necessary for a leaderboard system are your position within it and the total number of entries, so you can show #1 of 2000, or similar.
Using the index, the rows are sorted the correct way, and I'd assume these calls would be cheap enough to make, but so far I haven't been able to find a way to do it in their docs. I really hope I don't have to fetch the entire table every single time to know where a person is positioned in it, or to get the count of the entire table (although if that's not available, the count could be delayed, calculated and stored outside the table at scheduled intervals).
I know DescribeTable gives you information about the entire table, but I would be applying filters to the range key, so that wouldn't suit this purpose.
I am not aware of any efficient way to get the ranking of a player. The dumb way is to do a query starting from the player with the highest points, move downward, and keep incrementing your counter until you reach the target player. So for the user with the lowest points, you might end up scanning the whole range.
That being said, you can still get the top 100 players with no problem (Leaders). Just do a query starting from the player with the highest points and set the query limit to 100.
Also, for a given player, you can get the 100 players around him with similar points. You just need to do two queries like:
query with hashkey="" and rangekey <= his point, limit 50
query with hashkey="" and rangekey >= his point, limit 50
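A rough sketch of those two queries with the AWS SDK for JavaScript v3. The table, index, and attribute names are placeholders, and "GLOBAL" stands in for whatever constant partition-key value the leaderboard uses (the example above left it blank):

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, QueryCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// 50 players at or below the given score and 50 at or above it, queried
// against a hypothetical GSI whose range key is the score.
async function playersAround(score: number) {
  const base = {
    TableName: "Leaderboard",   // placeholder
    IndexName: "score-index",   // placeholder GSI
    Limit: 50,
  };

  const atOrBelow = await ddb.send(new QueryCommand({
    ...base,
    KeyConditionExpression: "leaderboardId = :h AND score <= :s",
    ExpressionAttributeValues: { ":h": "GLOBAL", ":s": score },
    ScanIndexForward: false, // highest scores first
  }));

  const atOrAbove = await ddb.send(new QueryCommand({
    ...base,
    KeyConditionExpression: "leaderboardId = :h AND score >= :s",
    ExpressionAttributeValues: { ":h": "GLOBAL", ":s": score },
    ScanIndexForward: true,  // lowest scores first
  }));

  return [...(atOrBelow.Items ?? []), ...(atOrAbove.Items ?? [])];
}
```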
This was the exact same problem we were facing when we were developing our app. Following are two solutions we came up with to deal with this problem:
1. Query your index with ScanIndexForward = false, which will give you the top players (assuming your score/points attribute is the range key), with a limit of, say, 1000. Then apply the linear formula y = mx + b, taking two of those entries (typically the first and the last) to solve for m and b, with x being points and y being rank. Based on this you can estimate a rank from a user's points (see the sketch after this list). It will not be the exact rank, only an approximation; Google does the same if you search something in your mail - it shows an approximate count, not the exact value, on the first call.
2. Get all the records and store them in a cache until the next update. This is by far the best and least expensive approach, and the one we are using.
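For completeness, a minimal sketch of the interpolation idea from option 1 (plain arithmetic, nothing DynamoDB-specific assumed):

```typescript
// Estimate a player's rank from their points by fitting a line y = m*x + b
// through two known (points, rank) samples, e.g. the first and last entries
// of a top-1000 query. This gives an approximate rank, not an exact one.
function estimateRank(
  points: number,
  a: { points: number; rank: number },
  b: { points: number; rank: number },
): number {
  if (a.points === b.points) return a.rank; // degenerate case
  const m = (b.rank - a.rank) / (b.points - a.points);
  const intercept = a.rank - m * a.points;
  return Math.max(1, Math.round(m * points + intercept));
}

// e.g. rank 1 at 5000 points and rank 1000 at 100 points:
console.log(estimateRank(2500, { points: 5000, rank: 1 }, { points: 100, rank: 1000 }));
```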
The beauty of DynamoDB is that it is highly optimized for very specific (and common) use cases. The cost of this optimization is that many other use cases cannot be achieved as easily as with other databases. Unfortunately yours is one of them. That being said, there are perfectly valid and good ways to do this with DynamoDB. I happen to have built an application that has the same requirement as yours.
What you can do is enable DynamoDB Streams on your table and process item update events with a Lambda function. Every time the number of points for a user changes you re-compute their rank and update your item. Even if you use the same scan operation to re-compute the rank, this is still much better, because it moves the bulk of the cost from your read operation to your write operation, which is kind of the point of NoSQL in the first place. This approach also keeps your point updates fast and eventually consistent (the rank will not update immediately, but is guaranteed to update properly unless there's an issue with your Lambda function).
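A rough sketch of that stream-processing idea as a Node Lambda. The table and attribute names are placeholders, and the actual rank computation is stubbed out because it depends entirely on your data model:

```typescript
import { DynamoDBStreamEvent } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Triggered by DynamoDB Streams on the leaderboard table. Whenever a user's
// points change, recompute and store their rank on the same item.
export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    if (record.eventName !== "MODIFY" && record.eventName !== "INSERT") continue;

    const newImage = record.dynamodb?.NewImage;
    const oldImage = record.dynamodb?.OldImage;
    const userId = newImage?.userId?.S;
    const newPoints = Number(newImage?.points?.N ?? 0);
    const oldPoints = Number(oldImage?.points?.N ?? NaN);
    if (!userId || newPoints === oldPoints) continue; // only act on point changes

    const rank = await computeRank(newPoints); // however you choose to do this
    await ddb.send(new UpdateCommand({
      TableName: "Leaderboard",                // placeholder
      Key: { userId },
      UpdateExpression: "SET #rank = :rank",
      ExpressionAttributeNames: { "#rank": "rank" },
      ExpressionAttributeValues: { ":rank": rank },
    }));
  }
};

// Stub: count how many players have more points (via a query, a scan, or a
// cached copy of the leaderboard), then rank = that count + 1.
async function computeRank(points: number): Promise<number> {
  return 1; // placeholder
}
```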
I recommend going with this approach, and once you reach scale, optimizing by caching your users by rank in something like Redis - unless you already have experience with it and can set it up quickly. Pick whatever is simplest first. If you are concerned about your leaderboard changing too often, you can reduce the cost by only re-computing the ranks of the first, say, 100 users, and scheduling another Lambda function to run every few minutes, scan all users, and update their ranks all at the same time.
I have an Application that allows people to bet on the result of soccer games.
The score of each single bet (= entity) is calculated by comparing the predicted scores of the bet with the actual result of the game (= entity). Bets are placed within betrounds. Betrounds are organisations where groups bet on gamegroups (groups of games, e.g. single matchdays). A single usergroup can have several betrounds.
To summarize the relational model:
UserGroup 1:N BetRounds 1:N Bets N:1 Game
Within each betround I create a resulttable where I show every user with their result points and position.
In order to calculate the position of one user I need to calculate the points of every user within a betround.
These points from the single betrounds are aggregated into groups and within the group there is again a resulttable.
Example
A Usergroup with: 20 users
One Season has 34 matchdays
One matchday has 9 games
In order to calculate the points for this usergroup I would need to calculate the points from 20*34*9 = 6120 bets.
Since this is a lot to calculate, I don't want to do it every time I show the resulttable.
I currently see two options in order to save some calculation time:
Cache
Save interim results (e.g. on the bet entity) in the database
Maybe a mix of both.
1. Cache
If caching is the correct way, I am not sure at which level to cache and how to invalidate.
There are several options what to cache:
- pointresult of single bets
- pointresults of single users within a betround
- whole result table of a betround (points & position)
- pointresult of single user within usergroup
- whole resulttable of usergroup
I am unsure in which form to cache that data:
- just the integer values for positions and points
- whole entities (e.g. bets)
- temporary, non-persistent entities (e.g. to represent the resulttables)
- the html output of the table
Then, depending on the format, how to cache it:
- html views could be cached via reverse proxies
- values / entities probably via redis / memcache etc.
In the future we might change to a single-page app where data is only served via a REST API; then caching of HTML output is not an option.
Depending on the caching strategy, the question arises of how to invalidate the cache and optionally warm it, so that the result is never calculated inside the application on demand, but only recalculated when the cache is invalidated and immediately replaced by the new result.
I have read very often that cache invalidation is evil. I am not sure if this applies to my use case since all points/results/tables etc. only change when my interface updates the result of the games. This is the only time when points change.
2. Save interim results (e.g. on the bet entity) in the database
I am not sure if this scenario is applicable at all levels. I first thought about saving the actual result on a bet instead of always comparing the bet's scores with the actual scores. This would make my data model a little redundant, and I get increased complexity if a wrong result is fetched by my interface, the correct one comes in later, and my points are not recalculated.
On all other levels I would need to create new interim entities to store table results persistently.
3. Mix of both
I am not sure what mixing both would look like and whether it makes sense at all, but I thought it might be an option.
Any advice, Input or experience would be highly appreciated.
I only mildly understand betting, so hopefully this helps.
It sounds like you are asking two questions:
When do I calculate results?
How much caching should I use?
To me it sounds like there are very clear events that happen, after which you can successfully calculate your results. Your design should take advantage of this and be evented in nature. You should have background processes that can detect when a game is complete. The results of the game should be written, and additional background jobs should be triggered to calculate the results of any bets that depend on that game.
This would also be the point at which any caches that involve that game, results from that game, or results from any bets on that game, should be invalidated and/or refreshed.
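As an illustration of that flow (every name and the scoring rule below are made up; your real entities live in your relational model), a background handler could look roughly like this:

```typescript
// Hypothetical shapes; only the names from the question are reused.
interface Game { id: string; homeScore: number; awayScore: number; }
interface Bet {
  id: string; gameId: string; userId: string; betRoundId: string;
  homeScore: number; awayScore: number; points?: number;
}

// Called by a background worker once a game is detected as finished.
// It scores every bet on that game, persists the points, and invalidates
// any cached result tables that depend on them.
async function onGameFinished(game: Game, deps: {
  loadBetsForGame(gameId: string): Promise<Bet[]>;
  saveBet(bet: Bet): Promise<void>;
  invalidateResultTable(betRoundId: string): Promise<void>;
}): Promise<void> {
  const bets = await deps.loadBetsForGame(game.id);
  const touchedRounds = new Set<string>();

  for (const bet of bets) {
    bet.points = scoreBet(bet, game);
    await deps.saveBet(bet);
    touchedRounds.add(bet.betRoundId);
  }
  // Only the result tables of affected betrounds need to be refreshed.
  await Promise.all([...touchedRounds].map((id) => deps.invalidateResultTable(id)));
}

// Example scoring rule (invented): 3 points for the exact score,
// 1 point for the correct tendency, 0 otherwise.
function scoreBet(bet: Bet, game: Game): number {
  if (bet.homeScore === game.homeScore && bet.awayScore === game.awayScore) return 3;
  const betTendency = Math.sign(bet.homeScore - bet.awayScore);
  const gameTendency = Math.sign(game.homeScore - game.awayScore);
  return betTendency === gameTendency ? 1 : 0;
}
```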
How much you should cache should be based on how much you need to. Caching should be considered separately from computing results: storing calculated results in the database is not caching, it is computing results and persisting them. You should definitely not be calculating results during a page-view request; that work should be done ahead of time, when the corresponding event (a game ending) has triggered the calculation.
Your database should pretty much always represent the latest information you have on everything. You should avoid doing any calculations on-the-fly if possible.
I would get all the events and background stuff working first, then see what kind of performance you get. At that point your app should be doing little more than taking the results and sticking them into a view for each page view. If that part is going too slow, then you should start looking at caching your views/templates/html. As mentioned before, these caches could be invalidated by your background workers when they encounter new results.
I'm developing a statistics module for my website that will help me measure conversion rates, and other interesting data.
The mechanism I use is to store an entry in a statistics table in my DB each time a user enters a specific zone (I avoid duplicate records with the help of cookies).
For example, I have the following zones:
Website - a general zone used to count unique users as I stopped trusting Google Analytics lately.
Category - self descriptive.
Minisite - self descriptive.
Product Image - whenever user sees a product and the lead submission form.
The problem is that after a month my statistics table is packed with a lot of rows, and the ASP.NET pages I wrote to parse the data load really slowly.
I thought about writing a service that would somehow pre-parse the data, but I can't see any way to do that without losing flexibility.
My questions:
How do large-scale data-parsing applications like Google Analytics load their data so fast?
What is the best way for me to do it?
Maybe my DB design is wrong and I should store the data in only one table?
Thanks to anyone who helps,
Eytan.
The basic approach you're looking for is called aggregation.
You are interested in certain functions calculated over your data, and instead of calculating them "online" when the displaying website starts up, you calculate them offline, either via a nightly batch process or incrementally when the log record is written.
A simple enhancement would be to store counts per user/session, instead of storing every hit and counting them. That would reduce your analytic processing requirements by a factor on the order of the number of hits per session. Of course, it would increase processing costs when inserting log entries.
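A minimal, storage-agnostic sketch of that per-session aggregation; the event shape and zone names are invented, and in practice the result would be flushed into your statistics table rather than logged:

```typescript
// One raw hit as it comes in from the website.
interface Hit { sessionId: string; zone: string; }

// Collapse raw hits into per-session, per-zone counts so that only one
// row per (session, zone) needs to be stored instead of one row per hit.
function aggregateBySession(hits: Hit[]): Map<string, Map<string, number>> {
  const perSession = new Map<string, Map<string, number>>();
  for (const hit of hits) {
    const zones = perSession.get(hit.sessionId) ?? new Map<string, number>();
    zones.set(hit.zone, (zones.get(hit.zone) ?? 0) + 1);
    perSession.set(hit.sessionId, zones);
  }
  return perSession;
}

// Example: three hits collapse into two counter rows for session "s1".
const counts = aggregateBySession([
  { sessionId: "s1", zone: "Category" },
  { sessionId: "s1", zone: "Category" },
  { sessionId: "s1", zone: "Minisite" },
]);
console.log(counts.get("s1")); // Map { "Category" => 2, "Minisite" => 1 }
```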
Another kind of aggregation is called online analytical processing, which only aggregates along some dimensions of your data and lets users aggregate the other dimensions in a browsing mode. This trades off performance, storage and flexibility.
It seems like you could do well by using two databases. One is for transactional data and it handles all of the INSERT statements. The other is for reporting and handles all of your query requests.
You can index the snot out of the reporting database, and/or denormalize the data so fewer joins are used in the queries. Periodically export data from the transaction database to the reporting database. This act will improve the reporting response time along with the aggregation ideas mentioned earlier.
Another trick to know is partitioning. Look up how that's done in the database of your choice - but basically the idea is that you tell your database to keep a table partitioned into several subtables, each with an identical definition, based on some value.
In your case, what is very useful is "range partitioning" -- choosing the partition based on the range a value falls into. If you partition by date range, you can create separate sub-tables for each week (or each day, or each month -- it depends on how you use your data and how much of it there is).
This means that if you specify a date range when you issue a query, the data that is outside that range will not even be considered; that can lead to very significant time savings, even better than an index (an index has to cover every row, so it grows with your data; a partition stays bounded to, say, one day's worth).
This makes both online queries (ones issued when you hit your ASP page), and the aggregation queries you use to pre-calculate necessary statistics, much faster.