I am designing a multi-tenant system and am considering sharding by tenant at the application layer instead of at the database layer.
The way this should work is that, for an incoming request, a router process consults a global collection of tenants whose primary attributes determine both the tenant for the request and its virtual shard id. The virtual shard id is then mapped to an actual shard.
The actual shard contains both the application code and all of the data for that tenant. These shards would be LNMP (Linux, Nginx, MySQL/MongoDB, PHP) servers.
The router process should act as a proxy. It should be able to run some code to determine the target shard for each incoming request, based on the collection stored in some local db or files. To scale this better, I am considering making the shards themselves act as routers too, so that each can run a reverse proxy that forwards the request to the appropriate shard. Maybe the nginx instance running on a shard could also act as that reverse proxy. But how would it execute the application logic needed to match the request with the appropriate shard?
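To illustrate the idea, here is a minimal sketch of the two-level lookup such a router would perform; all names and the map contents are hypothetical, and in practice the tenant collection would live in a local DB or file rather than in code:

```python
# Tenant attribute (here, the subdomain) -> virtual shard id.
TENANT_TO_VSHARD = {"acme": 17, "globex": 42}

# Virtual shard id -> physical shard (one of the LNMP boxes).
VSHARD_TO_HOST = {17: "shard-a.internal:80", 42: "shard-b.internal:80"}

def resolve_shard(host_header: str) -> str:
    """Map an incoming Host header to the upstream shard address."""
    tenant = host_header.split(".")[0]   # "acme.example.com" -> "acme"
    vshard = TENANT_TO_VSHARD[tenant]    # raises KeyError for unknown tenants
    return VSHARD_TO_HOST[vshard]

print(resolve_shard("acme.example.com"))  # -> shard-a.internal:80
```

With nginx specifically, one option is to keep nginx dumb and have it proxy_pass to a small lookup service like this; another is to embed the lookup in nginx itself via its Lua scripting support (OpenResty). Either way, the virtual-to-physical indirection means resharding becomes a map update rather than a code change.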
I would appreciate any ideas and suggestions for this router implementation.
Thanks
Another option would be to use a product such as dbShards. dbShards is the only sharding product that shards at the application level. This way you can use any RDBMS (Postgres, MySQL, etc.) and still be able to shard your database without having to put some kind of proxy in between. A lot of the other sharding products rely on a proxy to point the transactions to the correct shard, but dbShards knows where to go without having to "ask" anyone else.
Unless you expect your tenants to generate approximately equal data volume, sharding by tenant will not be very efficient.
As to application-level sharding in general, let me share my own experience:
Version 1 of our high-volume SaaS product sharded at the application level. You will find that resharding as you grow will be a major headache if you shard against a SQL-type solution at the application level, or you will have to write significant tooling to automate the process.
We switched to MongoDB (after considering multiple alternatives including Cassandra) in no small part because of all of the built-in support for resharding / rebalancing as data grows.
If your application does not need the relational capabilities of MySQL, I would suggest concentrating your efforts on MongoDB (since you have already identified that as a possible data platform) if you expect more than modest data growth. Allow MongoDB to handle the data sharding.
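For reference, handing the sharding to MongoDB comes down to a couple of admin commands against the cluster's mongos router; the sketch below uses pymongo, and the database, collection, and host names are made up:

```python
from pymongo import MongoClient

# Connect to the cluster's mongos router (address is hypothetical).
client = MongoClient("mongodb://mongos.internal:27017")

# Enable sharding on the database, then shard a collection by tenant.
client.admin.command("enableSharding", "appdb")
client.admin.command(
    "shardCollection", "appdb.events",
    key={"tenant_id": "hashed"},  # a hashed key spreads tenants across chunks
)
```

From then on the balancer migrates chunks between shards automatically as data grows, which is exactly the rebalancing work described above.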
Currently I am trying to design an application where we have a Cosmos DB account representing a group of customers:
One container is used as an overall metadata store that contains all customers.
Other containers will contain data specific to one customer, partitioned according to different categories of customer history, etc.
When we onboard a new customer (which will not happen often, and only once per customer), we'd like to make sure that we create a row in the overall customer metadata container and then provision the customer-specific container, rolling back the transaction if either step fails. (In the future we'd like to be able to remove customers as well.)
Unfortunately, the Cosmos DB NoSQL API only supports transactions within a single logical partition of one container, and does not support multi-container transactions. Our own POC indicates the MongoDB API does support this, but unfortunately MongoDB does not fit our use case, as we need support for Azure Functions.
The heart of the problem here isn't whether Cosmos DB supports distributed transactions; it's that you can't enlist an Azure control-plane action (in this case, creating a container resource) in a transaction.
Since you're building in the cloud, my recommendation would be to employ the outbox pattern to manage the provisioning state for your customers. There's an easy-to-understand example here you can read.
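As a rough sketch of what that could look like with the Python SDK (container names, field names, and the partition key path are all assumptions, and the retry/sweeper logic is elided):

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-uri>", "<key>")        # placeholders
db = client.get_database_client("tenants")
# Assumes the metadata container is partitioned on /customerId.
metadata = db.get_container_client("customer-metadata")

def onboard_customer(customer_id: str) -> None:
    # 1. Record intent first (the "outbox" record). A single-item write,
    #    so this step is atomic on its own.
    metadata.upsert_item({"id": customer_id,
                          "customerId": customer_id,
                          "provisioningState": "pending"})

    # 2. Perform the control-plane action that cannot be in a transaction.
    db.create_container_if_not_exists(
        id=f"customer-{customer_id}",
        partition_key=PartitionKey(path="/category"),
    )

    # 3. Mark the record done. If a crash happens before this line, a
    #    background sweeper finds "pending" rows and retries or cleans up;
    #    that sweeper is what replaces the rollback.
    metadata.upsert_item({"id": customer_id,
                          "customerId": customer_id,
                          "provisioningState": "ready"})
```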
Given you are building a multi-tenant application for Cosmos DB and using containers as your tenant boundary, please note that the maximum number of databases and/or containers in an account is 500. Please see Service Quotas for more information.
Microservices - multiple DBs/tables
When I first read about Microservices (MS) one of the most striking things was that each MS has its own DB. I think I understand this concept now and I am embracing it.
NoSQL DBs - single table
I then started researching NoSQL DBs, namely DynamoDB. I watched this deep dive video where the presenter discusses the idea of taking a relational model - say, 4 tables - and representing the data in one table. He then uses various techniques to make the data super fast to query, even at scale.
Again, I think I understand this concept.
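To make the single-table idea concrete, here is the kind of item layout that style implies, sketched as plain dictionaries (entity and key names are made up):

```python
# Two former "tables" (customers and orders) collapsed into one, using
# generic partition/sort keys. One query on PK = "CUSTOMER#42" returns the
# customer profile and all of its orders together.
items = [
    {"PK": "CUSTOMER#42", "SK": "PROFILE",        "name": "Acme Corp"},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2023-001", "total": 125.00},
    {"PK": "CUSTOMER#42", "SK": "ORDER#2023-002", "total": 80.50},
]

orders = [i for i in items if i["SK"].startswith("ORDER#")]
```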
Combining the two is where I get confused. MSs want me to split things out into separate services, and therefore separate DBs (or tables), but NoSQL patterns want me to have one table...
Do these 2 design patterns/architectures not work together or am I missing something?
If you combine the two ideas, then you end up with each microservice having its own database, and each database having only one table.
If you have multiple microservices running in the same AWS account, I can see why you might be confused, because you would end up having multiple tables in DynamoDB. There are some questions I will address to try to clear things up for you.
How can I have separate databases in DynamoDB?
In DynamoDB, the notion of “separate databases” isn’t a very meaningful idea. From DynamoDB’s perspective, each table is independent of every other table (unlike a relational database). There’s no hardware you need to manage, so you can’t see whether your tables are on the same servers or not, and there’s definitely no concept of database instances.
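You can see this in the API itself: a table is addressed purely by name within an account and region, with no database level in between. A one-liner with boto3, for example:

```python
import boto3

# No "database" to select; just the table name in this account/region.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("orders")
```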
How can I have separate databases if DynamoDB doesn’t have “separate databases”?
The goal is not necessarily to have a separate database for each microservice. The goal is to make sure that the only coupling between microservices happens through the APIs they provide. Having separate databases is one way to help enforce that (so that the microservices aren't tied to a shared internal data model), but it's not the only way.
So what should I do?
Each microservice should have whatever table(s) are necessary for it to function. Any given table should be read and written by only one microservice. To achieve isolation between microservices running in the same AWS account, you should use IAM policies to make sure that each microservice accesses only its own DynamoDB table. In some cases, you might be better off putting each microservice into its own AWS account for an even higher level of separation. (An added benefit of this approach is that if one of the accounts ever gets compromised, the attacker has access to only one of the microservices.)
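A sketch of such a table-scoped policy, created with boto3; the account id, table name, and policy name are placeholders:

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                   "dynamodb:Query", "dynamodb:UpdateItem"],
        # Only this microservice's own table, nothing else in the account.
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders-service",
    }],
}

boto3.client("iam").create_policy(
    PolicyName="orders-service-table-access",
    PolicyDocument=json.dumps(policy),
)
```

Attached to the role the orders service runs under, this makes reading another service's table impossible rather than merely discouraged.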
I am starting to port an old single-tenant desktop application to the cloud and would like to hear your recommendations about databases for my cloud-based multi-tenant application.
My basic requirements are simple:
For each tenant, its data is separate from any other tenant's data. I can easily back up, restore, or export the data for a single tenant without affecting other tenants.
I don't really want to care about multi-tenancy in the business logic code. It should look like a single-tenant application behind the security layer: no tenant ID passed around, etc.
Easy to query using some mature technology like LINQ.
Availability and scalability, of course: it should be easy to set up replicas, failover, scaling up and down, etc.
I have done some investigation into multi-tenant application development. I have noticed that SQL databases from Azure and AWS are both very expensive (the cost of just one SQL database instance is close to the license fee of the original application), so I definitely can't use a separate SQL database instance per tenant.
Now I'm reading the book Developing Multi-tenant Applications for the Cloud, 3rd Edition, which uses the Azure Storage Service to implement multi-tenancy. I haven't finished the book yet, but it seems you still have to handle multi-tenancy yourself, and the sample code is already out of date.
I have seen lots of SO questions comparing Azure Table Storage with MongoDB. MongoDB is very new to me; I'm not sure whether it could easily fulfill my requirements.
I have seen RavenDB as well; it supports multi-tenancy out of the box, but I haven't seen any good sample code for using it in Azure app development.
Hope to hear some good advice from the awesome SO folks.
I would opt for RavenDB over MongoDB. Even though Raven is a newcomer to the game, it supports most of the features that traditional SQL databases support.
The volume of data you are dealing with is also a key decision point, as is the amount of traffic you are expecting.
Also keep in mind operational costs and development effort. HA and DR scenarios can be problematic with Raven or Mongo because you need to host them yourself, whereas Azure Storage protects you by default by maintaining three copies of your data.
So I would suggest you weigh the trade-offs carefully and choose based on your business needs, cost optimization, and development and operational effort.
Having a single instance of your application for each tenant is a very expensive way to implement an application; however, I realise that if an application was developed with a single tenant in mind, the cost of changing over can be high.
First, let's start with why you have a desktop application connecting to a database at another location: the latency can really slow down an application. Ideally you would want a locally installed database that syncs with the cloud DB, or appropriate caching added to your application.
However, the DB would still need to differentiate between the clients.
Why do you need this to go to a cloud database? Is it for backup purposes, to avoid installing a DB locally on a client's machine, to access the same data from many machines, or something else?
Unless your application is extremely large, I would recommend rewriting it as a multi-tenant application against one SQL Azure database. The architecture chosen at the beginning of the project no longer suits your requirements, and as you expand you will run into further issues.
I am planning to develop a fairly small SaaS service. Every business client will have an associated database (same schema among clients' databases, different data). In addition, they will have a unique domain pointing to the web app, and here I see these 2 options:
1. The domains will point to a single web app, which will switch the connection string to the proper client's database depending on the domain. (That is, I will need to deploy only one web app.)
2. The domains will each point to their own web app, which is really the same web app replicated for every client but with the proper connection string to that client's database. (That is, I will need to deploy many web apps.)
This is for an ASP.NET 4.0 MVC 3.0 web app that will run on IIS 7.0. It will be fairly small, but I do require it to be scalable. Should I go with 1 or 2?
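For what it's worth, option 1 boils down to a per-request lookup like the following (a sketch in Python rather than ASP.NET; the host-to-connection-string table is hypothetical):

```python
# Option 1 in a nutshell: one deployed app, connection string chosen per request.
TENANT_DB = {
    "clienta.example.com": "Server=db;Database=ClientA;Trusted_Connection=True;",
    "clientb.example.com": "Server=db;Database=ClientB;Trusted_Connection=True;",
}

def connection_string_for(request_host: str) -> str:
    """Pick the tenant's database from the request's Host header."""
    return TENANT_DB[request_host]
```

Option 2 moves this mapping out of the code and into the deployment process, at the price of one deployment per client.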
This MSDN article is a great resource that goes into detail about the advantages of three patterns:
Separated DB. Each app instance has its own DB instance. Easier, but can be difficult to administer from a database infrastructure standpoint.
Separated schema. Each app instance shares a DB but is partitioned via schemas. Requires a little more coding work, and mitigates some of the challenges of a totally separate database, but still has difficulties if you need individual site backup/restore and things like that.
Shared schema. Your app is responsible for partitioning the data based on the app instance. This requires the most work, but is most flexible in terms of management of the data.
In terms of how your app handles it, the DB design will probably determine that. I have in the past done both separated DB and shared schema. In the separated DB approach, I usually separate the app instances as well. In the shared schema approach, it's the same app with logic to modify what data is available based on login and/or hostname.
I'm not sure this is the answer you're looking for, but there is a third option:
Using a multi-tenant database design: a single database that supports all clients, with tables that use composite primary keys (the tenant ID plus the entity's own ID).
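For example, a shared-schema table keyed by tenant could look like this; sketched with SQLite for brevity, and the column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        tenant_id INTEGER NOT NULL,
        order_id  INTEGER NOT NULL,
        total     REAL,
        PRIMARY KEY (tenant_id, order_id)  -- composite key, tenant first
    )
""")

# Every query is then scoped to one tenant:
rows = conn.execute(
    "SELECT order_id, total FROM orders WHERE tenant_id = ?", (42,)
).fetchall()
```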
Scale out when you need to. If your service is small, I don't see any benefit to multiple databases except assured data security - meaning a guarantee that you'll only ever bring back query results for the correct client. The costs will be much higher running multiple databases if you're planning to host with a cloud service.
If Salesforce can host their SaaS using a multi-tenant design, I would at least consider this a viable option for a small service.
We are preparing to scale the API side of an API-heavy web application. My (technically savvy) client proposes a rather unconventional approach: instead of balancing the load across several app servers, which would talk to a sharded database, he wants us to:
“shard the app servers”, putting both app server code and db on each physical server, so that the app server only connects to its own db shard;
have the app servers talk to each other when they need to access other shards (instead of talking to another shard's DB directly);
have the API client pick an app shard itself (on the client side, based on some stable hash; see the sketch below) and talk directly to it.
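For concreteness, point 3 would mean something like this on the client side (a sketch; the shard list and hashing choice are assumptions):

```python
import hashlib

SHARDS = ["api-1.example.com", "api-2.example.com", "api-3.example.com"]

def shard_for(api_key: str) -> str:
    """Stable hash: the same client always lands on the same app shard."""
    digest = hashlib.sha256(api_key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Note that naive modulo hashing reassigns almost every client whenever the shard count changes, which foreshadows the resharding concern raised in the answer below.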
The underlying reasoning is that this is the most natural thing to do, and that it would allow us to move to a multisite distributed system in the future.
(The stack is PHP + Node.js on MySQL, although at this point a transition to MongoDB is being considered too.)
Now, I don't see huge problems with it offhand. It might get somewhat cumbersome to code these server-to-server interactions, but then it will surely have its own benefits. Basically, I'm at a loss as to whether this is a good idea or not.
What pros and cons come to your mind? I'm looking for technical issues and advantages here. Thanks!
This is just plain bad for many reasons.
The API client should not know which app shard to talk to. This will limit you in ways you probably can't foresee now but that may (or will) become a problem in the future. The API client should play dumb so you can route requests appropriately if an app server dies, changes, or gets sharded again.
What happens if your app code or your database is slow? (Not both at the same time, just one.) Now you have a db shard slowing down its co-located app shard, or vice versa.
Your combined db+app shards will need to keep both the app server's code and working set and the database's code and working set in RAM, which means the CPUs will spend more time swapping things in and out to serve both sets of tasks.
I'm finding it hard to put into words, but this type of architecture screams 'tight coupling' and 'no separation of concerns' (probably not the right terminology, but I hope you understand what I mean). You are putting two distinctly different types of applications (app server and database) onto one box. Updating them and routing around failed instances will be a management nightmare.
I hate to argue my point this way, but a lot of very smart people have dealt with these problems before, and I've never heard of this type of architecture; there's probably a reason for that. Not to mention there's a lot of technology and tooling out there that can help you handle traditional sharding and load balancing of app and database servers. If you go with your client's suggested architecture, you're on your own.