I have a difficult database design decision to make regarding multi-tenancy for the growing number of branches of my client's web-based CRM, which I actively maintain.
I made the decision early on to use separate applications with separate databases for each branch, because it was the simplest way to cater for three different branches with disparate data and code requirements. I also wanted to avoid managing Tenant IDs in every query, like I had to with the legacy Classic ASP (cringe) application I built in 2007...the horror.
But now the data requirements for branches are converging and as the business expands, I need to be able to roll out new branches quickly and share global product SKUs.
Since tables and views are the same for all branches and better ORM tools are now available to manage multi-tenant applications, I wonder if it would be better to have a shared database for multiple branches.
Considerations for a centralised database:
Global product SKUs
Simplified inventory requisitions
Easier to back up
Deploy once instead of for every branch
Considerations against a centralised database:
Easier to differentiate branch requirements with separate DBs
Modular deployment (one downed branch doesn't break all)
Harder to manage and develop for shared DB
I have to re-design invoice numbering (sequence generated by seed)
Fewer WHERE clauses everywhere (with separate DBs)
Restoring one broken branch has plenty of implications for other branches
It is unlikely there will ever be as many as 10 branches. Right now there are 3.
Developers with real-world experience in this area, what would you do in my situation? Keep apps & DBs separate, or combine into one giant system?
Edit: Great Microsoft article on multi-tenancy pros and cons. I should note that data isolation between branches is not a major issue.
Bite the bullet and merge them. Add your tenant ID where it needs to be, and change your queries.
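As a rough sketch of what that can look like with a modern ORM (EF Core assumed here; the Order/BranchId names are illustrative, not from your schema), a global query filter keeps the tenant ID out of your hand-written queries:

```csharp
// Minimal multi-tenant DbContext sketch (EF Core assumed; entity and
// property names are illustrative, not from the original post).
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public int BranchId { get; set; }   // tenant/branch discriminator
    public string InvoiceNumber { get; set; }
}

public class CrmContext : DbContext
{
    private readonly int _branchId;

    public CrmContext(DbContextOptions<CrmContext> options, int branchId)
        : base(options)
    {
        _branchId = branchId;   // resolved per request, e.g. from the login or URL
    }

    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Global query filter: every query against Orders is automatically
        // scoped to the current branch, so you don't hand-write the
        // WHERE TenantId = @id clause in every query.
        modelBuilder.Entity<Order>()
                    .HasQueryFilter(o => o.BranchId == _branchId);
    }
}
```

With a filter like this, the per-query tenant bookkeeping largely disappears, which takes most of the sting out of the "WHERE clauses everywhere" objection.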
For customizations, look into a plugin type architecture that would allow you to deploy specific screens for particular clients.
We have a software product that is built in just such a fashion. Sometimes it's deployed on a client site, sometimes we host it. For all intents and purposes it is an order of magnitude easier to deal with a single code base that has client specific extensions than dealing with multiple branches of the code.
For one, when we fix a problem, we fix it for everyone. Sure, if we break it, we break it for everyone but that's what unit tests are for. And it is a heck of a lot easier to maintain a set of unit tests against one code base than it is to maintain them for multiple branches.
We've been doing multi-tenant for over 10 years and not once have I looked back. Generally speaking, queries aren't that different if you are already security conscious in verifying that the person retrieving the record is actually allowed to get it.
I disagree with the issues brought up by Corbin. The one around versioning should already be handled by having an attribute based security structure in place. That way you can turn things on/off via user or tenant configuration. Also, I find it very rare that client A doesn't want the same new feature that client B asked for.
The second one about data mingling is also a non-issue. Just look at salesforce.com or any of the other large scale sites. They absolutely use a multi-tenant architecture and, judging by the sheer number of clients that use them, this doesn't seem to be a problem. The main thing here is being able to assure your clients that their data is secure.
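On the versioning point, here is a hedged sketch of what tenant-level feature toggles might look like; the class names and the feature key are purely illustrative:

```csharp
// Hedged sketch of per-tenant feature toggles (names are hypothetical).
using System.Collections.Generic;

public class TenantConfiguration
{
    public int TenantId { get; set; }
    // Feature keys enabled for this tenant, loaded from a config table.
    public HashSet<string> EnabledFeatures { get; set; } = new HashSet<string>();

    public bool HasFeature(string featureKey) => EnabledFeatures.Contains(featureKey);
}

public class InvoiceController
{
    private readonly TenantConfiguration _tenant;

    public InvoiceController(TenantConfiguration tenant) => _tenant = tenant;

    public string Export()
    {
        // Client X gets the new export format, clients A-C keep the old one,
        // all from the same code base and the same schema.
        return _tenant.HasFeature("NewInvoiceExport")
            ? "exported with the new formatter"
            : "exported with the legacy formatter";
    }
}
```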
If you're talking about 10 branches, multi-tenancy seems like a big cost with little benefit.
There are complications with multi-tenancy you don't mention:
Versioning becomes difficult. Clients X, Y, and Z may want a new feature while clients A, B, and C don't. A multi-tenant app makes accommodating everyone difficult, especially if a new feature requires database schema changes. It's not impossible, it's just more difficult.
Some clients are very uncomfortable with their data mingling in the same tables as other clients. Even though we know better, it feels like a security risk to them. Legal departments hate it. In addition, if you ever dump raw data for a client, a shared database requires caution.
You can eliminate a few of your pain points with better practices:
Automate deployment. This should make it easier to add a new client or upgrade/downgrade an existing client. Database maintenance (backups, rebuilding indexes) should be set up automatically as well.
Store shared data (SKUs, inventory) in a central database and have every application instance access it either directly or through a service (a rough sketch of the service route follows after this list).
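For the service route mentioned in the last point, something along these lines could work; the endpoint, route, and DTO shape are assumptions for illustration, not an existing API:

```csharp
// Hypothetical client for a central product/SKU service shared by all
// branch instances. The base URL, route, and ProductSku shape are
// illustrative assumptions.
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class ProductSku
{
    public string Sku { get; set; }
    public string Description { get; set; }
    public decimal UnitPrice { get; set; }
}

public class SkuServiceClient
{
    private readonly HttpClient _http;

    public SkuServiceClient(HttpClient http) => _http = http;

    public Task<ProductSku> GetSkuAsync(string sku) =>
        // Each branch deployment calls the one central SKU catalogue
        // instead of keeping its own copy of the product table.
        _http.GetFromJsonAsync<ProductSku>($"api/skus/{Uri.EscapeDataString(sku)}");
}
```

Register the HttpClient with the central service's base address; each branch deployment then reads SKUs from the one shared catalogue instead of carrying its own product table.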
Don't get me wrong, one of the more interesting apps I worked on was multi-tenant. There can be huge benefits, but you'll more likely see them with thousands of clients versus ten.
Honestly, this is a business question, and it comes down to one of two scenarios. In the first, you deliver more customized features to smaller user groups in a multi-tenant setup, but with more IT overhead. That is, you will need more people and hardware (management reads this as: money) but deliver greater flexibility.
In the second, the one GIANT Borg system, you lower your IT overhead (again, people and things; to management, money) but your end users have to accept less flexibility in their software. All bugs are problems for all users, so big ones get whacked fast. However, new features impact all users as well, so they happen more slowly.
If you personally have the juice to make this call and the business just has to listen to what you say, or you can nudge management one way or another, I'd suggest asking YOURSELF a series of questions about which scenario you prefer:
A) Do you want to have more people managing this and share the salary/responsibility?
B) To the best of your knowledge, is there going to be a 4th user group soon?
C) How long do you want to stay at this company?
If you answer yes to the first two, then you probably want multi-tenant.
I work in a situation where, for regulatory/legal reasons, we have to keep each client's data in a separate database. However, there is certain information that must be shared, mostly related to things like a lookup table for which client's URL corresponds to which database. Also, a client can choose to have multiple databases if they wish to separate their data in some logical way. So, for each of our products, we really have three types of databases:
ApplicationData, which has just a few tables that contain information about the clients themselves, like which MasterData database (see below) to use when reached by a certain URL and which features are available to that client. Each product has just one ApplicationData, no matter how many different clients are using that product.
MasterData, which contains client-specific information such as users, roles, and permissions (in our case, the tables that aspnet_regsql creates are here). Among the permissions specified here are which ClientData databases are available to a given user (see below). The schema for all MasterData databases (for the same product) is the same.
ClientData, which contains the data with which the user interacts. In one product, this is data that the client can search based on a large number of criteria, create reports about, etc. In another product, this contains the dynamic data that a client can upload so that other users can contact people to take surveys over the phone, etc. The schema for all ClientData databases for the same product is the same.
Now, one caveat: We actually use the same schema, and often the same actual database, for MasterData and ClientData. This is for historical reasons, as the ability to allow a client to have one authentication database (MasterData) corresponding to a number of ClientData databases is a relatively new feature that only applies to one of our products. Also, this structure simplifies deployment, since most clients only use one ClientData database. However, MasterData and ClientData have separate entity models under Entity Framework in our projects, and we have to ensure that there are no direct relationships between MasterData and ClientData such as foreign keys.
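To make the "separate entity models, no cross-database foreign keys" part concrete, here is a much-simplified sketch (Entity Framework 6 assumed; the entity and property names are illustrative, not our real schema):

```csharp
// Simplified sketch: two independent contexts, related only by stored
// identifiers, never by foreign keys. Names are illustrative.
using System.Data.Entity;   // Entity Framework 6

public class MasterDataContext : DbContext
{
    public MasterDataContext(string connectionString) : base(connectionString) { }

    // Users, roles, permissions, and the list of ClientData databases
    // (server + database name) a given user may access.
    public DbSet<ClientDatabaseInfo> ClientDatabases { get; set; }
}

public class ClientDatabaseInfo
{
    public int Id { get; set; }
    public string ServerName { get; set; }
    public string DatabaseName { get; set; }
}

public class ClientDataContext : DbContext
{
    public ClientDataContext(string connectionString) : base(connectionString) { }

    public DbSet<SurveyRecord> SurveyRecords { get; set; }
}

public class SurveyRecord
{
    public int Id { get; set; }
    // Only a plain value ties this back to MasterData; there is no FK,
    // because the two databases can live on different servers.
    public int OwningUserId { get; set; }
    public string Payload { get; set; }
}
```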
This setup works pretty well for us. One major advantage is that there is no problem with putting different ClientData databases on different servers. This helps greatly with load balancing, and it provides a natural way to partition data. We can essentially offer a client with a huge amount of data a dedicated database server if they are willing to pay for it.
One more thing that has really helped us in this situation is Red Gate's tool set, specifically tools like Multi-Script, SQL Source Control, and Schema Compare. When we upgrade something and the schema changes, we have to deploy the changes to all the relevant databases. These tools have more than paid for themselves in time saved. Note that I have no affiliation with Red Gate other than as a satisfied user.
Edit: (in response to comment)
ApplicationData is one database per product. The three web-based products we have use the same schema for ApplicationData, since they record basically the same types of information. However, there is no reason it would have to stay that way. The ApplicationData databases are all on the same server. One of the tables in ApplicationData points to the correct server and database name for the client's MasterData, so MasterData for a given client can reside on any server.
MasterData has server and database name information for each ClientData database, so again, the databases can reside on any server. In practice, for now, we only have two production database servers total for these products. The MasterData schema is similar per product, but I do not think they are exactly the same (I would have to check). Each client has its own MasterData. If a client purchases multiple products, there is a MasterData for each product for that client; the products interact in other ways (through web services, basically) if a client has purchased that feature (or requests custom development of such a feature). ClientData for a given product always has the same schema.
So, in summary:
ApplicationData is per product and happens to have the same schema in each product.
MasterData is per client for a product.
There are one or more ClientData instances for a client within a product.
I did oversimplify slightly in that only one of our products supports multiple ClientData instances per client. For a second product, that will probably be implemented eventually. For a third product, it would make no sense at all as a feature and will likely just remain as is.
I hope that answers your question!
Well, if the tendency is towards sharing information and data among different branches, you're probably better off having one central database.
Otherwise the hoops you'll have to go through to achieve the ability to share data will be far worse than the extra WHERE clauses needed for a shared DB.
You could, of course, have a DB per branch and an extra database (a fourth database as of now) as a centralized storage for the information that needs sharing. Although you'd have to see if the over-complication makes this a best or worst of both worlds solution :)
If we're talking about CRM, then what are the chances of one customer being in multiple databases? If there's even the slightest chance of you being asked to combine customer details across branches then I'd definitely go with one centralised database.
IMO decentralization is becoming a tenet of maintainable and scalable design. The only centralized database I use is for security, specifically authentication, and I'm currently growing it into a decentralized database for authorization. That way all authorization can stay at the same edge where the application physically sits, with no network traversals, since authorization is not a great candidate for caching.
Reading that you're specifically interested in multiple branches of the same application as opposed to truly disparate applications, it sounds like a great option would be to build your database around a seeding process (Entity Framework supports this). That would let you simply deploy your new branch code to ASP.NET, and then, during the initial build-up of the database when the tables are physically created, poll the "blessed" server and dump all the needed data down to the edge server.
After this you would need some replication setup if new products are being added to the primary data store and those are expected to make it to each edge store. You could accomplish this with direct replication of your database or look at tools like the Microsoft Sync framework.
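A rough sketch of the seeding idea under EF6 migrations; the "blessed" connection string, the entities, and the query are all illustrative assumptions:

```csharp
// Sketch of seeding a new branch database from the central ("blessed")
// server during its first build-up. EF6 migrations assumed; the
// connection string, entities and query are illustrative.
using System.Collections.Generic;
using System.Data.Entity.Migrations;
using System.Data.SqlClient;

public class Product
{
    public int Id { get; set; }
    public string Sku { get; set; }
    public string Name { get; set; }
}

public class BranchContext : System.Data.Entity.DbContext
{
    public System.Data.Entity.DbSet<Product> Products { get; set; }
}

internal sealed class Configuration : DbMigrationsConfiguration<BranchContext>
{
    protected override void Seed(BranchContext context)
    {
        var products = new List<Product>();

        // Pull the current product catalogue from the central server once,
        // when the branch database is first created.
        using (var conn = new SqlConnection("server=BLESSED;database=Central;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT Sku, Name FROM dbo.Products", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    products.Add(new Product
                    {
                        Sku = reader.GetString(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }

        // AddOrUpdate keeps the seed idempotent if it runs again.
        context.Products.AddOrUpdate(p => p.Sku, products.ToArray());
    }
}
```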
You may think today that you will only have a few customers, but a few years from now you may realize that the product has the potential to be sold to hundreds of customers. If that happens you will regret that you used a single-tenant approach.
Compare the costs of:
Converting a production system from single-tenant to multi-tenant where databases are populated with customer data
Developing a multi-tenant system despite thinking you won't need the benefits
Converting a production system is a daunting and very expensive task.
Using the second approach may cost you more initially, but it does give you a very valuable option: the ability to add more customers in the future at low cost. The price of that option could be worth paying.
Microservices - multiple DBs/tables
When I first read about Microservices (MS), one of the most striking things was that each MS has its own DB. I think I understand this concept now and I am embracing it.
NoSQL DBs - single table
I then started researching NoSQL DBs, namely DynamoDB. I watched this deep dive video where the presenter discusses the idea of taking a relational model, say 4 tables, and representing the data in one table. He then uses various techniques to make the data super fast to query, even at scale.
Again, I think I understand this concept.
Combining the two is where I get confused. MSs want me to split things out into separate services and therefore separate DBs (or tables) but NoSQL patterns want me to have one table....
Do these 2 design patterns/architectures not work together or am I missing something?
If you combine the two ideas, then you end up with each microservice having its own database, and each database having only one table.
If you have multiple microservices running in the same AWS account, I can see why you might be confused, because you would end up having multiple tables in DynamoDB. There are some questions I will address to try to clear things up for you.
How can I have separate databases in DynamoDB?
In DynamoDB, the notion of “separate databases” isn’t a very meaningful idea. From DynamoDB’s perspective, each table is independent of every other table (unlike a relational database). There’s no hardware you need to manage, so you can’t see whether your tables are on the same servers or not, and there’s definitely no concept of database instances.
How can I have separate databases if DynamoDB doesn’t have “separate databases”?
The goal is not necessarily to have a separate database for each microservice. The goal is to make sure that the only coupling between microservices happens through APIs provided by the microservices. Having separate databases is one way to help enforce that (so that the microservices aren't tied to a shared internal data model), but it's not the only way.
So what should I do?
Each microservice should have whatever table(s) are necessary in order for it to function. Any given table should be read and written by only one microservice. In order to achieve the isolation between microservices which are running in the same AWS account, you should use IAM policies to make sure that each microservice accesses only its own DynamoDB table. In some cases, you might be better off putting each microservice into its own AWS account to provide an even higher level of separation between them. (An added benefit of this approach is that if one of the accounts ever gets compromised, the attacker has access to only one of the microservices.)
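As a sketch, the data access inside one such microservice might look like the following (AWS SDK for .NET; the table name and key schema are illustrative). The IAM policy attached to this service's role would then grant access to only that table's ARN:

```csharp
// Sketch: one microservice reading only its own DynamoDB table.
// Table name and key schema are illustrative; the IAM policy attached
// to this service's role would list only this table's ARN.
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public class OrderService
{
    private const string TableName = "orders-service-table";
    private readonly IAmazonDynamoDB _dynamo = new AmazonDynamoDBClient();

    public async Task<Dictionary<string, AttributeValue>> GetOrderAsync(string orderId)
    {
        var response = await _dynamo.GetItemAsync(new GetItemRequest
        {
            TableName = TableName,
            Key = new Dictionary<string, AttributeValue>
            {
                ["PK"] = new AttributeValue { S = $"ORDER#{orderId}" }
            }
        });
        return response.Item;
    }
}
```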
I'm new to ASP.Net, MVC and the Entity Framework.
I'd like to understand the best practice for small databases. For example, say at Contoso University we know there are only going to be a few hundred or a few thousand students and courses. So all the data would comfortably fit in memory. So then is it better to use an in-memory collection and avoid potentially high-latency database operations?
I am thinking of small-scale production web sites deployed to Windows Azure.
To be more specific, the particular scenario I am thinking of has a few thousand records that are read-only, although users can create their own items too. Think of a collection of movies, albums or song lyrics that has been assembled offline from a list of a few thousand popular titles. The user can browse the collection (read-only), and most of the time they find what they are looking for there. However the user can also add their own records.
Since the popular titles fit in memory, and these are read-only, is it maybe better not to use a database for the popular titles? How would you organize data and code for this scenario?
Thanks for any thoughts and pointers.
I think a database is a good place to store your information.
However, you are concerned about database latency.
You can mitigate that with caching, where the data is kept in memory.
In short, it isn't an either/or scenario...
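A minimal sketch of that caching approach for the read-only popular titles (System.Runtime.Caching assumed; the class and key names are illustrative):

```csharp
// Sketch: keep the read-only popular titles cached in memory, but the
// database remains the source of truth. Names are illustrative.
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class MovieCatalog
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    private readonly Func<List<string>> _loadFromDatabase;

    public MovieCatalog(Func<List<string>> loadFromDatabase)
    {
        _loadFromDatabase = loadFromDatabase;
    }

    public List<string> GetPopularTitles()
    {
        const string key = "popular-titles";

        if (Cache.Get(key) is List<string> cached)
            return cached;

        // One database hit per hour per web server, shared by all users,
        // instead of an in-memory copy per user session.
        var titles = _loadFromDatabase();
        Cache.Set(key, titles,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddHours(1) });
        return titles;
    }
}
```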
You should definitely store your data in some persistent storage medium (SQL, Azure Tables, XML file, etc). The issues with storing items in memory are:
You have to find a way to store them once for the application and not once per user. Else, you will have potentially several copies of a 2-5 MB dataset floating around your memory space.
Users can add records; are these for everyone to see, or just them? How would you handle user-specific data?
If your app pool recycles, server gets moved by the Azure engineers, etc, you have to repopulate that data.
As said above, caching can really help to alleviate any SQL Azure latency (which, btw, is not that high; we use SQL Azure and web roles and have not had any issues).
Complex queries. Sure, you can use LINQ to process in memory lists, but SQL is literally built to perform relational queries in a fast, efficient, data-safe manner.
Thread safe operations on an in-memory collection could be troublesome.
Edit/Addendum
The key, we have found, to working with SQL Azure is to not issue tons of little tiny queries, but rather, get the data you need in as few queries as possible. This is something all web applications should do, but it becomes much more apparent when using SQL Azure rather than a locally hosted database. Lastly, as far as performance/caching/etc, don't prematurely optimize! Get your application working, then identify bottlenecks. More often than not, it will be a code solution to fix the bottleneck and not necessarily a hardware/infrastructure issue.
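For example, instead of one query per parent row plus one per child, something like the following (EF6 assumed; the entities are illustrative) pulls everything in a single round trip:

```csharp
// Sketch: fetch related data in one round trip instead of many tiny
// queries. EF6 assumed; entity names are illustrative.
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual System.Collections.Generic.ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}

public static class CustomerQueries
{
    public static Customer GetCustomerWithOrders(ShopContext db, int customerId)
    {
        // One query with a join, not 1 + N separate round trips to SQL Azure.
        return db.Customers
                 .Include(c => c.Orders)
                 .SingleOrDefault(c => c.Id == customerId);
    }
}
```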
I have an ASP.NET web application hosted in a web-farm environment, and I need a way to be able to indicate how much a user is using my database.
There are several reasons for this, and I'll mention a couple. First, because I pay for the database space per month, I want to have a reasonable way to charge my users. Second, it would be nice to know (again, on a per-user basis) when to inform the user to upgrade his subscription.
I don't have enough experience in RDBMS, I come from a different background (windows applications, graphics), and so I can't figure out if this is possible, and if it is, how this can be handled: through SQL or ASP.NET (some tool, library, etc.).
If you have some other idea as well, I'd like to hear what you suggest.
Any other advice on this subject, including good places to learn, would also be appreciated.
It depends on your schema. If you use a database-per-user multi-tenant schema then it is very easy: the size of the database is the size consumed, and it is really easy to measure and, more importantly, enforce. If you use a shared database schema then you'll need to keep track in each table of which rows belong to which user and do the accounting yourself. Both measurement and enforcement are more difficult, and there is no general answer; you will have to write code to account for the bytes used and to enforce any max-size-per-user constraint.
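For the database-per-user case, a hedged sketch of the measurement step (SQL Server assumed; the connection string is a placeholder):

```csharp
// Sketch: measure the size of one tenant's database with sp_spaceused
// (SQL Server). The connection string is a placeholder.
using System;
using System.Data.SqlClient;

public static class TenantUsage
{
    public static string GetDatabaseSize(string tenantConnectionString)
    {
        using (var conn = new SqlConnection(tenantConnectionString))
        using (var cmd = new SqlCommand("EXEC sp_spaceused", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // First result set: database_name, database_size, unallocated space.
                if (reader.Read())
                    return Convert.ToString(reader["database_size"]); // e.g. "250.00 MB"
            }
        }
        return null;
    }
}
```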
I have a requirement for a set of asp.net MVC websites as follows:
Multiple sites, using the same codebase, but each site will have a separate database (this is a requirement), and users will log in and enter data.
A single site for super users where they log in and work on data aggregated from each of the individual sites.
The number of sites in point one is liable to expand as we roll it out to more clients.
My question is about the architecture of the above: how to manage the data aggregation, given that it needs to be real-time. Do we maintain this at the database level (e.g. a view that is essentially a union across the individual site databases), or at the application level?
A few infrastructure points:
We have complete control over the database server and naming of databases.
All these websites are deployed onto a server that we manage.
I'd appreciate any input/ideas from folks that may have done this before.
Does the data aggregation have to be completely real-time, or can you get away with almost real-time? If "almost real-time" is acceptable, then you can write a service application that harvests the data from the sites' databases into your single central database. As long as the process runs continuously and you don't have too many sites to gather data from, the delay should be more or less invisible to the user.
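A rough sketch of such a harvesting service; the table, column, and connection string names are illustrative, and it assumes each source row carries a LastModified timestamp for incremental pickup:

```csharp
// Sketch of a harvester that copies new/changed rows from each site
// database into the central database. Table, column, and connection
// string names are illustrative; it assumes each row carries a
// LastModified timestamp for incremental pickup.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;

public static class Harvester
{
    public static void Run(IEnumerable<string> siteConnectionStrings,
                           string centralConnectionString)
    {
        var lastRun = DateTime.MinValue;

        while (true)
        {
            var thisRun = DateTime.UtcNow;

            foreach (var siteCs in siteConnectionStrings)
            {
                using (var site = new SqlConnection(siteCs))
                using (var central = new SqlConnection(centralConnectionString))
                {
                    site.Open();
                    central.Open();

                    var select = new SqlCommand(
                        "SELECT Id, SiteCode, Payload FROM dbo.Entries WHERE LastModified > @since",
                        site);
                    select.Parameters.AddWithValue("@since", lastRun);

                    using (var reader = select.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // MERGE keeps the copy idempotent if a row is picked up twice.
                            var upsert = new SqlCommand(
                                @"MERGE dbo.AggregatedEntries AS t
                                  USING (SELECT @Id AS Id, @SiteCode AS SiteCode, @Payload AS Payload) AS s
                                  ON t.Id = s.Id AND t.SiteCode = s.SiteCode
                                  WHEN MATCHED THEN UPDATE SET Payload = s.Payload
                                  WHEN NOT MATCHED THEN INSERT (Id, SiteCode, Payload)
                                  VALUES (s.Id, s.SiteCode, s.Payload);",
                                central);
                            upsert.Parameters.AddWithValue("@Id", reader.GetInt32(0));
                            upsert.Parameters.AddWithValue("@SiteCode", reader.GetString(1));
                            upsert.Parameters.AddWithValue("@Payload", reader.GetString(2));
                            upsert.ExecuteNonQuery();
                        }
                    }
                }
            }

            lastRun = thisRun;
            Thread.Sleep(TimeSpan.FromSeconds(30));   // "almost real-time"
        }
    }
}
```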
Having a view that accumulates the data from all the databases doesn't sound like a good solution. Not only will it probably be very slow, but you will also have to update the view whenever you add a new site.
What is the intention of the super user site, btw? Is it only for reporting or should super users edit the data across all sites as well? That may affect which solution you choose.
I've used asp.net profiles (using the AspNetSqlProfileProvider) for holding small bits of information about my users. I started to wonder how it would handle a robust profile for a large number of users. Does anyone have experience using this on a large website with large numbers of simultaneous users? What are the performance implications? How about maintenance?
Running against this via SQL, I have found, is a bit tricky, but I have worked with clients that have scaled it up to a few hundred properties and 10K+ users without difficulty. Granted, that's not a lot of users, but it is working thus far.
I think it really depends on the specific project and your exact needs when it comes to working with the profile information. Do you need to query it regularly via SQL? Do you just need it for user display only? These kinds of things would help provide a more solid answer for your needs.
The SQL provider's performance is closely correlated to big-iron throughput. Performance is more or less directly proportional to a single SQL Server's ability to handle the number of queries. Scale-up is the only option, so as such it's not really five-nines robust out of the box.
You'll have to figure out if you need scale-out performance and availability, e.g. through partitioning, replication, redundancy, etc., and at what cost to performance. Some of the capabilities are possible as-is; the current implementation is aimed more at the middle market and enterprise.
The good thing is you can plug in your own implementation of the profile provider and then attach it to services and systems with the capabilities outlined above.
We wrote a custom authn, authz and profile provider and strapped it to a large AD/LDS LDAP cluster across 3 datacenters. We're in the Comscore Top 10, so you could say that we deal with a good slice of the internet every day. Thousands of profile queries per second and hundreds of millions of profiles: it can scale with good planning, engineering and operations.