I've been struggling for a couple of days trying to figure out the best way to design a database for a large data set on Firebase; I even wrote a question on the Database Administrators site.
I came up with a design, but I don't know whether it's what's called denormalized data or not. I want to minimize query time while also not making inserts and updates too hard.
Here's my design:
Is that the right database design for this kind of data?
(Please check my question on the Database Administrators site for more details about the nature of the data.)
But here's also a short description of the nature of the data:
So I have an affiliator_category, which may be banks, clubs, or organisations. Each category contains a number of affiliators, each affiliator contains a number of stores divided into store_category, and each store has a number of offers.
And on the user side (the one who does the shopping): a user has a number of memberships in several affiliators, and a number of spendings he or she makes.
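To make the description concrete, here is a minimal sketch of one possible denormalized tree for this data, written as a Python dict mirroring the Realtime Database JSON. Every node and field name below is an assumption for illustration, not the actual design: relationships are flattened into top-level nodes with ID references, so each branch can be fetched with a single shallow query.

```python
# A hypothetical, denormalized Realtime Database tree, shown as a Python dict.
# All node names, IDs, and values below are illustrative assumptions.
denormalized_tree = {
    "affiliator_categories": {
        "banks": {"name": "Banks", "affiliators": {"aff_1": True}},
        "clubs": {"name": "Clubs", "affiliators": {}},
    },
    "affiliators": {
        "aff_1": {"name": "Example Bank", "category": "banks",
                  "stores": {"store_1": True}},
    },
    "stores": {
        "store_1": {"name": "Example Store", "store_category": "electronics",
                    "affiliator": "aff_1",
                    "offers": {"offer_1": {"title": "10% off"}}},
    },
    "users": {
        "user_1": {
            "memberships": {"aff_1": True},
            "spendings": {"spend_1": {"store": "store_1", "amount": 25.0}},
        },
    },
}
```

The trade-off is the usual one for this kind of flattening: reads stay shallow and fast, but a write such as renaming a store may have to touch several branches.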
I am working on an asset tracking system that also manages the concept of "projects". The users of this application perform maintenance activities on their customers' assets, so they need an action log where actions on an asset start life as a task in a project. For example, "Fix broken frame" might be a task, where an action would be something like "Used parts a, b, and c to fix the frame" with a completion time and the employee who performed the action.
The conceptual data model for the application starts with a Customer that has multiple locations and each location has multiple assets. Each asset should have an associated action log so it is easy to view previous actions applied to that asset.
To me, that should all go in one table based upon the logical ownership of that data. Customer owns Locations which own Assets which own Actions.
I believe I should have a second table for projects, as this data is tangential to the Customer/Location/Asset data. However, because I've read so much about how it should all be one table, I'm not sure whether this delineation is real or whether it only exists because I've modeled the data incorrectly, unable to get past the 3NF modeling I've used for my entire career.
Single-table design doesn't forbid you from creating multiple tables. Rather, it encourages you to use only a single table per microservice (meaning: store correlated data that you want to access together in the same table).
Let's look at some anecdotes from experts:
Rick Houlihan tweeted over a year ago:
Using a table per entity in DynamoDB is like deploying a new server for each table in RDBMS. Nobody does that. As soon as you segregate items across tables you can no longer group them on a GSI. Instead you must query each table to get related items. This is slow and expensive.
Alex DeBrie responded to a tweet last August:
Think of it as one table per service, not across your whole architecture. Each service should own its own table, just like with other databases. The key around single table is more about not requiring a table per entity like in an RDBMS.
Based on this, you should ask yourself ...
How related is the data?
If you were building this on a relational database, would you store it in separate databases?
Are those actually two separate microservices, or are they part of the same microservice?
...
Based on the answers to those (and similar) questions, you can argue either for keeping everything in one table or for splitting it across two tables.
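To illustrate what a single table per service can look like for the Customer → Location → Asset → Action hierarchy from the question, here is a minimal sketch using boto3. The table name, key attributes, and ID formats are assumptions made up for the example, not a prescribed design; the point is only that one composite key lets related items be read together.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical single table; the name and key schema are illustrative assumptions.
table = boto3.resource("dynamodb").Table("asset-tracking")

# One partition per customer; the sort key encodes the item type and hierarchy,
# so related items can be read together with a single Query.
table.put_item(Item={"PK": "CUSTOMER#c1", "SK": "CUSTOMER#c1", "name": "Example Co"})
table.put_item(Item={"PK": "CUSTOMER#c1", "SK": "LOCATION#l1", "city": "Example City"})
table.put_item(Item={"PK": "CUSTOMER#c1", "SK": "LOCATION#l1#ASSET#a1", "model": "Frame X"})
table.put_item(Item={"PK": "CUSTOMER#c1", "SK": "LOCATION#l1#ASSET#a1#ACTION#2020-01-15",
                     "note": "Used parts a, b, and c to fix the frame"})

# Fetch an asset together with its action log in one request.
resp = table.query(
    KeyConditionExpression=Key("PK").eq("CUSTOMER#c1")
    & Key("SK").begins_with("LOCATION#l1#ASSET#a1")
)
for item in resp["Items"]:
    print(item["SK"])
```

A second table for projects would then be a deliberate choice about service boundaries, which is exactly the question the answer above asks you to settle, rather than a reflex from relational modeling.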
This is a typical 1:m relationship situation: in a chat-group iOS app, there is a group table recording all the group-chat-related information, like group id, create time, thread title, etc.
To record the participants, I would of course assume there is another 1:m table. So I was rather surprised to see that the app just added another column called "participants" to record them, with each participant separated by a delimiter (':' to be exact). The problems with that are quite obvious: it mixes application logic with SQL (e.g. there is no way to find out how many groups a specific user is in with SQL alone), it violates 1NF/2NF, etc.
But they said they understood all my points. However:
as this is a mobile app, you always access the SQLite tables through Objective-C code; you never use SQL alone, so it's not a "big deal" to mix them together.
participants don't change often and are normally set when a group is created. If we have 100 participants, we would rather insert 1 record into the group table than insert 100 records into another group-participants table.
The participant data will only be used when someone wants to see who is in the chat group (via several taps in the menu) and when someone joins or leaves the group, which we assume won't happen often.
So my question is: in this particular situation, what advantage would I gain by using another 1:m table?
----- update -----
Besides the answer I got, Renzo kindly pointed me to this discussion, which is also very helpful!
It's hard to respond to "is this design better/worse" style questions without understanding the full context. I'm going to make some assumptions based on your question.
You appear to be building a mobile application, supporting "many to many" user chat. I'm picturing something like Slack.
Your application design is using the SQLite database for local storage.
Your local SQLite database on the phone is some kind of subset of the overall application data - like a cache, only showing the data for the current user.
If all that is true, the question really is down to style/maintainability on the one hand, and performance and scalability on the other.
From a "style" point of view, storing the data in a comma-separated value in a column is ugly. A new developer who joins the project, with a background in "regular" database design will consider it at best a hack. On the other hand, iOS developers may consider it perfectly normal.
From a performance point of view, it's probably not worth arguing about - parsing the CSV is probably just as slow as reading/writing from the database.
From a scalability point of view, you may have a problem. If the application design needs to capture the order in which users joined the chat, or capture some kind of status (active/asleep, for instance), or provide a bit of history (user x exited at 21:20), you will almost certainly end up redesigning the database.
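For comparison, here is a minimal sketch of the normalized 1:m alternative, using Python's sqlite3 module only to keep it self-contained; the table and column names are assumptions. The final query is exactly the kind of question ("how many groups is this user in?") that the delimiter-in-a-column approach cannot answer in SQL alone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustrative; a real app would use a file
conn.executescript("""
CREATE TABLE chat_group (
    group_id     INTEGER PRIMARY KEY,
    create_time  TEXT,
    thread_title TEXT
);
CREATE TABLE group_participant (
    group_id INTEGER NOT NULL REFERENCES chat_group(group_id),
    user_id  TEXT    NOT NULL,
    PRIMARY KEY (group_id, user_id)
);
""")

conn.execute("INSERT INTO chat_group VALUES (1, '2015-06-01', 'Lunch plans')")
conn.executemany(
    "INSERT INTO group_participant VALUES (?, ?)",
    [(1, "alice"), (1, "bob"), (1, "carol")],
)

# Answerable in plain SQL, unlike the delimiter-in-a-column approach:
row = conn.execute(
    "SELECT COUNT(*) FROM group_participant WHERE user_id = ?", ("bob",)
).fetchone()
print(row[0])  # number of groups 'bob' belongs to
```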
I am developing an app which presents a feed of posts and allows users to vote on these posts.
I want to prevent users from voting multiple times on a single post. To do that, I want to store a list of IDs of the posts already voted on, so that I can check it each time the user tries to vote.
What's the most efficient way of storing these post IDs if there's a chance of the user voting on up to thousands of posts within a year?
SQLite, Core Data, a plist, or NSUserDefaults?
Since you would also like to know how many people voted (I think), I would save it to a server (using SQLite to store it).
Saving this on a user device seems redundant.
If you do want to store it on the device, I would advise Core Data.
It is too much information for NSUserDefaults or plists… I don't know why, but that just doesn't seem like a good idea, and Core Data is essentially a more convenient layer on top of SQLite (for Swift usage).
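If the votes do end up server-side in SQLite (or any SQL database), a composite primary key is the simplest way to enforce one vote per user per post, and it also makes the "how many people voted" count trivial. A minimal sketch, with assumed table, column, and file names:

```python
import sqlite3

conn = sqlite3.connect("votes.db")  # illustrative file name
conn.execute("""
CREATE TABLE IF NOT EXISTS vote (
    user_id  INTEGER NOT NULL,
    post_id  INTEGER NOT NULL,
    voted_at TEXT DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (user_id, post_id)   -- one vote per user per post
)
""")

def cast_vote(user_id, post_id):
    """Returns True if the vote was recorded, False if it was a duplicate."""
    try:
        with conn:
            conn.execute("INSERT INTO vote (user_id, post_id) VALUES (?, ?)",
                         (user_id, post_id))
        return True
    except sqlite3.IntegrityError:
        return False

print(cast_vote(1, 42))  # True on the first attempt
print(cast_vote(1, 42))  # False: duplicate vote rejected
```

Checking "has this user already voted on post X?" is then a single indexed lookup, no matter how many thousands of votes accumulate.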
I'm developing a quick side project that needs a users table, and I want them to be able to store profile data. I was already reaching for the ASP.NET profile provider when I realized that users will only ever have one profile.
I realize that frequently changing data will impact the performance of things like indexes, but how frequent is too frequent?
If I have one profile change per month per user happening for say 1000 users, is that a lot?
Or are we talking more like users changing profile data on an hourly basis?
I realize this isn't an exact science, but I'm trying to gauge at what point it starts to hurt, and, since my users' profile data will probably rarely change, whether I should bother with the extra work or just wait a few decades for it to become a problem.
One thing to consider is how adding a large text column to a table will affect the layout of the rows. Some databases will store the large columns inlined with the other fixed size columns; this will make the rows variable sized and that means more work for the database when it needs to pull a row off the disk. Other databases (such as PostgreSQL) store large text columns away from the fixed size columns; this leads to fixed sized rows with quick access during table scans and the like but an extra bit of work is needed to pull out the text columns.
1000 users isn't that much in database terms so there's probably nothing to worry about one way or the other. OTOH, little one-off side projects have a nasty habit of turning into real mission critical projects when you're not looking so doing it right from the beginning is a good idea.
I think Justin Cave has covered the index issue well enough.
As long as you structure your data access properly (i.e. all access to your user table goes through one isolated pile of code) then changing your data schema for users won't be much work anyway.
Does the profile information actually need to be indexed? Or are you just going to be retrieving it based on the USER_ID of the table or some other indexed USER column? If the profile data isn't indexed, which seems likely to me, then there are no performance impacts to other indexes on the table.
The only reason I can think of to be concerned about putting profile information in the table is if there is a lot of data compared to the necessary information to define a user and if the USER table needs to be full scanned for some reason. In that case, increasing the size of the table would adversely affect the performance of a table scan. Assuming that you don't have a use case where it's regularly going to make sense to do a full scan on the USERS table, and given that the table will only have 1000 rows, that's probably not a big deal.
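A minimal sketch of that point, in generic SQL driven from Python's sqlite3 (table and column names are assumptions): the profile columns sit in the same row but are not part of any index, so updating them leaves the index on the login column untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    user_id   INTEGER PRIMARY KEY,
    username  TEXT NOT NULL,
    -- profile data lives in the same row but is not indexed
    bio       TEXT,
    location  TEXT
);
CREATE UNIQUE INDEX idx_users_username ON users (username);
""")

conn.execute("INSERT INTO users (user_id, username, bio) VALUES (1, 'alice', '')")

# Lookups go through the indexed column...
conn.execute("SELECT bio FROM users WHERE username = ?", ("alice",))

# ...while profile updates only rewrite the row, not idx_users_username.
conn.execute("UPDATE users SET bio = ? WHERE user_id = ?", ("New bio text", 1))
```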
I'm developing a statistics module for my website that will help me measure conversion rates, and other interesting data.
The mechanism I use is to store a row in a statistics table in my DB each time a user enters a specific zone (I avoid duplicate records with the help of cookies).
For example, I have the following zones:
Website - a general zone used to count unique users, as I've stopped trusting Google Analytics lately.
Category - self descriptive.
Minisite - self descriptive.
Product Image - whenever a user sees a product image and the lead submission form.
The problem is that after a month, my statistics table is packed with a huge number of rows, and the ASP.NET pages I wrote to parse the data load really slowly.
I thought about writing a service that would somehow pre-process the data, but I can't see any way to do that without losing flexibility.
My questions:
How do large-scale data-crunching applications like Google Analytics load their data so fast?
What is the best way for me to do it?
Maybe my DB design is wrong and I should store the data in only one table?
Thanks to anyone who helps,
Eytan.
The basic approach you're looking for is called aggregation.
You are interested in certain functions calculated over your data, and instead of computing them "online" when the reporting page loads, you compute them offline, either via a nightly batch process or incrementally whenever a log record is written.
A simple enhancement would be to store counts per user/session instead of storing every hit and counting them later. That would reduce your analytic processing requirements by a factor on the order of the number of hits per session. Of course, it would increase the processing cost when inserting log entries.
Another kind of aggregation is online analytical processing (OLAP), which aggregates only along some dimensions of your data and lets users aggregate the remaining dimensions interactively. This trades off performance, storage, and flexibility.
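Here is a minimal sketch of the "counts per day and zone" flavour of aggregation, using Python's sqlite3 only to keep it self-contained; the raw table's shape and names are assumptions based on the question. The rollup can run as a nightly batch (as below) or be maintained incrementally on insert.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hits (              -- the existing raw statistics table (assumed shape)
    zone     TEXT,               -- 'Website', 'Category', 'Minisite', 'Product Image'
    user_id  TEXT,
    hit_time TEXT
);
CREATE TABLE daily_zone_counts ( -- pre-aggregated table the report pages read from
    day   TEXT,
    zone  TEXT,
    hits  INTEGER,
    users INTEGER,
    PRIMARY KEY (day, zone)
);
""")

# Nightly batch: fold yesterday's raw rows into one row per zone.
conn.execute("""
INSERT OR REPLACE INTO daily_zone_counts (day, zone, hits, users)
SELECT date(hit_time), zone, COUNT(*), COUNT(DISTINCT user_id)
FROM hits
WHERE date(hit_time) = date('now', '-1 day')
GROUP BY date(hit_time), zone
""")
conn.commit()
```

The report pages then read from daily_zone_counts, whose size grows with days × zones rather than with the number of raw hits.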
It seems like you could do well by using two databases. One is for transactional data and it handles all of the INSERT statements. The other is for reporting and handles all of your query requests.
You can index the snot out of the reporting database, and/or denormalize the data so fewer joins are used in the queries. Periodically export data from the transactional database to the reporting database. This, along with the aggregation ideas mentioned earlier, will improve the reporting response time.
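In SQLite terms, just to illustrate the shape of that export job (file names and schema are assumptions, and a real setup would more likely use the server database's own replication or ETL tooling):

```python
import sqlite3

# The transactional database takes the INSERTs; a separate reporting copy serves
# the report queries. Table shape and file names are illustrative assumptions.
conn = sqlite3.connect("transactional.db")
conn.execute("CREATE TABLE IF NOT EXISTS hits (zone TEXT, user_id TEXT, hit_time TEXT)")

conn.execute("ATTACH DATABASE 'reporting.db' AS reporting")
conn.execute("CREATE TABLE IF NOT EXISTS reporting.hits (zone TEXT, user_id TEXT, hit_time TEXT)")

# Periodic export: copy only the rows written since the last export; the reporting
# copy can then be indexed and denormalized as heavily as the reports need.
conn.execute("""
INSERT INTO reporting.hits
SELECT * FROM hits
WHERE hit_time > (SELECT COALESCE(MAX(hit_time), '') FROM reporting.hits)
""")
conn.commit()
```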
Another trick to know is partitioning. Look up how that's done in the database of your choice - but basically the idea is that you tell your database to keep a table partitioned into several subtables, each with an identical definition, based on some value.
In your case, what is very useful is "range partitioning" -- choosing the partition based on the range into which a value falls. If you partition by date range, you can create separate sub-tables for each week (or each day, or each month -- depends on how you use your data and how much of it there is).
This means that if you specify a date range when you issue a query, the data that is outside that range will not even be considered; that can lead to very significant time savings, often better than an index (an index still grows with your data and has to be traversed, whereas partition pruning skips whole sub-tables outright).
This makes both online queries (ones issued when you hit your ASP page), and the aggregation queries you use to pre-calculate necessary statistics, much faster.
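As a sketch of what range partitioning by date looks like, here is PostgreSQL-flavoured DDL driven from Python via psycopg2; the table name, column names, and monthly granularity are assumptions for illustration, and the exact syntax differs per database:

```python
import psycopg2  # assuming a PostgreSQL reporting database for this sketch

conn = psycopg2.connect("dbname=stats")
cur = conn.cursor()

# Parent table partitioned by hit date: queries that filter on hit_time only
# touch the sub-tables whose range overlaps the filter (partition pruning).
cur.execute("""
CREATE TABLE hits (
    zone     text,
    user_id  text,
    hit_time timestamptz NOT NULL
) PARTITION BY RANGE (hit_time)
""")

# One sub-table per month; new ones get created ahead of time by a scheduled job.
cur.execute("""
CREATE TABLE hits_2012_01 PARTITION OF hits
    FOR VALUES FROM ('2012-01-01') TO ('2012-02-01')
""")

# A query with a date range now skips every partition outside that range.
cur.execute(
    "SELECT zone, count(*) FROM hits "
    "WHERE hit_time >= %s AND hit_time < %s GROUP BY zone",
    ("2012-01-01", "2012-01-08"),
)
conn.commit()
```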