Guys, I have a question.
Let's say I want to upload hundreds of thousands of posts/products to WordPress. That will slow down the website's performance, and the database will also keep getting bigger.
What if I split the WordPress site into several separate installations in different subdirectories, based on the product or post category? Each site would only contain 25-30k posts/products, but there would be around 10 of them as separate installations, so each database would be a lot smaller.
Do you think that would perform better than putting everything in a single website?
My server has around 16 GB of RAM and 8 CPU cores.
I don't think it will make any difference, given that you will run everything on the same hardware. With multiple machines and one ingress node/load balancer you could route requests to different backend servers based on the product requested, but if you have only one server hosting everything (web server, database, etc.) you will hit the limits of CPU/RAM/etc. much sooner than the size of a database table becomes a problem (provided it's properly designed, has indexes and so on).
However, you can measure the performance in both cases using a load testing tool and see what the response times, resource usage and database slow query log look like in both deployment scenarios.
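For example, a very crude load-test sketch (a dedicated tool such as JMeter will give you far better numbers; the URL and request count here are invented) that fires concurrent requests and reports response times:

    // Crude concurrent load test - illustration only, not a real benchmarking tool.
    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class CrudeLoadTest
    {
        static async Task Main()
        {
            var url = "https://example.com/some-product-page"; // hypothetical target page
            using var client = new HttpClient();

            // Fire 50 requests concurrently and record how long each one takes.
            var timings = await Task.WhenAll(
                Enumerable.Range(0, 50).Select(async _ =>
                {
                    var sw = Stopwatch.StartNew();
                    using var response = await client.GetAsync(url);
                    sw.Stop();
                    return sw.ElapsedMilliseconds;
                }));

            Console.WriteLine($"avg {timings.Average():F0} ms, max {timings.Max()} ms");
        }
    }

Run it against both deployments (single site vs. split sites) under the same conditions and compare.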
Data size doesn't have to slow the site. It becomes a matter of how fast you can get the data out of the database. A few things to consider:
Place the database on a dedicated host. If it's hosted locally, dedicate a crossover cable from the web tier to the DB tier, with a second IP for administration on the database host. You might also consider a managed database instance with a cloud provider.
Indexes are your friend. Larger datasets mean longer indexes, but you can make deliberately shortened indexes. Choose a database that supports partitioned indexes. Combine partitioned indexes with the higher I/O operations per second of SSDs for your index partitions, make sure every lookup goes through an index, and your performance won't suffer even for large data sets. How does a partitioned index increase access speed? Instead of having to traverse an index from A to S for an index-supported query with an S-based WHERE clause, with a partitioned index you might have 26 partitions: one for A, then B, then C, and so on. You jump straight to the S partition for the lookup.
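To make that concrete, here is a minimal sketch assuming MySQL (the usual WordPress database) and the MySql.Data client; the products table, its columns and the connection string are all invented for illustration, and MySQL requires the partitioning column to be part of every unique key, which is why the primary key below includes name:

    using MySql.Data.MySqlClient;

    class PartitionDemo
    {
        static void Main()
        {
            // Hypothetical connection string and table - adapt to your own schema.
            const string connectionString = "Server=localhost;Database=shop;Uid=app;Pwd=secret;";

            // A range-partitioned table: a lookup constrained on name can be pruned to a
            // single partition instead of scanning one big A-to-Z index.
            const string ddl = @"
                CREATE TABLE products (
                    id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
                    name VARCHAR(190)    NOT NULL,
                    PRIMARY KEY (id, name)  -- partition column must be part of every unique key
                )
                PARTITION BY RANGE COLUMNS (name) (
                    PARTITION p_a_to_f VALUES LESS THAN ('g'),
                    PARTITION p_g_to_m VALUES LESS THAN ('n'),
                    PARTITION p_n_to_s VALUES LESS THAN ('t'),
                    PARTITION p_t_to_z VALUES LESS THAN (MAXVALUE)
                );";

            using (var connection = new MySqlConnection(connectionString))
            {
                connection.Open();
                using (var command = new MySqlCommand(ddl, connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }

The four partitions above could of course be split further, one per letter, as described.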
Shape your pool size on the PHP/web tier. You have already increased the effective pool size by pulling the database onto its own host. The next thing to do is to effectively manage your cache of fixed assets: the items that do not change across user sessions. Commonly these are style sheets, images, fonts, JavaScript files, and so on. At minimum, look at putting a cache node in front of your WordPress site. Take a look at Varnish or Nginx for this; I am partial to Varnish, but either should do the trick. If you pair this with a CDN for a multi-generational cache, all the better. If you are in the cloud, each cloud provider has built-in CDN options. You can also widen your bandwidth by placing these fixed assets on a dedicated host and caching that one host, but this would require a lot of modification to your base WordPress image.
There is no reason why you cannot have multiple web front ends with a common database back end. You would need a load balancer to distribute the load, and your first-generation cache would sit in front of the load balancer. Realistically, if all of your queries are index-supported and your cache is effectively managed, you can easily scale to hundreds of concurrent users on moderate hardware. Your most taxing item is going to be the PHP execution that pulls dynamic data for user sessions. Make the queries respond as fast as possible and you have only a small lock window on PHP for each session.
Watch your locks per session! You may be at the mercy of a template and how it manages your finite resource pool, but in general: (a) unless 50%+1 of sessions use something, do not allocate it early; (b) be merciless in cutting sessions to release the session-based locks on memory; (c) pinch your assets until they bleed - no 45 MB images on the front page when a colour-optimized 120 KB compressed image will do the job; (d) watch the repeat-access problem - this applies to subqueries in the database as well as to building a web page that has to resolve hundreds of assets.
Have you considered other options, such as Drupal? The setup is a bit more complex, but I can validate running a dozen distinct websites out of a single Drupal instance with no degradation in performance, with the dedicated database and cache nodes described above and hundreds of concurrent users on fairly moderate hardware (mini-ITX Atom-based PCs).
Related
I have a customer that runs 36 websites (many thousands of pages) on a round-robin, sticky-affinity, load-balanced set of IIS servers - the infrastructure is entirely AWS-based (r3.2xl - 8 vCPU, 60.5 GiB RAM).
To get straight to the point, the site is configured to 'cache on access' using standard in-memory caching with ASP.NET 4.6, and static assets are served through CloudFront. On a 'cold start' the site makes both SQL Server queries for content and separate Elasticsearch queries at runtime to determine hreflang alternate-language tags - basically querying which versions of the URL are available in different languages, for SEO reasons. This query has been optimised from a cross-index wildcard query down to a lookup on a single index. As mentioned, the entire result is cached for 24h once all this has executed.
Under normal conditions the site works perfectly. As there are 36 sites running on a single box, the private set space gets allocated up to the maximum (99%) of physical RAM over time, as more and more content gets cached in memory. I can end up with app pools in excess of 1.5 GiB, which isn't ideal. After that point, presumably, the .NET LRU cache eviction algorithm is working overtime.
The problem I have, after some post-mortem review of the IIS logs, is that the customer is using an SEO bot tool, SEMrush, which essentially triggers a denial-of-service attack against the sites (thundering herd?) because of simultaneous requests for the 'long tail' of pages which are never viewed by a user and hence aren't stored in the cache.
The net result is a server brought to its knees, app pool CPU usage all over the place, an Elasticsearch queue length > 1000, huge ES heap growth and a rising rejection rate - and eventually a crash.
The solutions I've thought about but haven't implemented:
Put CloudFront in front of all the sites and use a warm-up script (a rough warm-up sketch follows this list) - although I don't think this will actually help, as it's a cold-start problem when all the pages expire, unless I could have a MOST-recently-used cache invalidation mechanism which invalidated pages based on number of requests - say > 100 - and left everything else persistent.
AWS Shield/WAF to provide some sort of rate limiting
Remove the runtime ES lookup altogether and move to an eventually-consistent model which computes the hreflang lookup table elsewhere, in a separate process. However, the ES cluster, whilst on an old v1.3.1 version, is a 3-node cluster with a lot of CPU power and each node set to a 16 GiB min/max heap, so it should be able to take that level of throughput?
Or all 3!
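For the warm-up idea, a rough sketch of what such a script could look like, assuming each site exposes a sitemap.xml (the URL and concurrency limit are invented); it simply requests every page so it lands in the cache before a crawler does:

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Xml.Linq;

    class CacheWarmer
    {
        static async Task Main()
        {
            using var client = new HttpClient();

            // Hypothetical sitemap location - one per site in the real setup.
            var sitemap = XDocument.Parse(
                await client.GetStringAsync("https://www.example.com/sitemap.xml"));

            var urls = sitemap.Descendants()
                .Where(e => e.Name.LocalName == "loc")
                .Select(e => e.Value)
                .ToList();

            // Throttle so the warm-up itself doesn't become the thundering herd.
            using var throttle = new SemaphoreSlim(5);

            await Task.WhenAll(urls.Select(async url =>
            {
                await throttle.WaitAsync();
                try
                {
                    using var response = await client.GetAsync(url);
                    Console.WriteLine($"{(int)response.StatusCode} {url}");
                }
                finally
                {
                    throttle.Release();
                }
            }));
        }
    }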
Has anyone come across this problem before, and what was your solution? It must be fairly common, especially for large sites that are hammered by SEO/DQM web crawlers.
I've traditionally followed the 'one app, one server' architecture for most of the ASP.NET/SQL Server applications I've worked on. I mean that loosely, having used multiple servers with a load balancer, etc. But they have all been in the same datacenter.
However, recently, a requirement has come up to scope an application which will support users in the US, China, and Russia. Performance will be fairly critical, so what is the most sensible way to architect such an application so it performs well in all these areas?
The options I've come up with are:
Use one single data center (i.e. don't host in multiple places around the world). Deliver static content over a CDN, but the database and ASP.NET site would be hosted in one place (e.g. the US). It seems like performance may still be an issue, though.
Use multiple data centers and have multiple versions of the application, e.g. ru.myapp.com, us.myapp.com, ch.myapp.com, each with their own code/databases/etc. This will work, but things like reporting, management, etc. would need to be done in each application, which seems like the least efficient approach.
Use a different architecture - but I'm not familiar with the alternatives. Is it possible to architect it in such a way that you have one single application and database that work across multiple data centers (like a load-balanced environment, but on a larger scale)?
Does anyone have any experience in the best way to handle this?
Your front-end servers can work across data centers the same way they work within a single data center. There are some differences, though - a load balancer does not usually work cross-colo, so you would have to use geo DNS to route people to the nearest data center and then use a load balancer within that data center.
The main issue that comes up is the use of shared resources, such as the DB or a web service such as an authentication web service. If you really need a single DB, then one architecture is to have a single master with multiple read slaves spread across the different data centers. Reads then do NOT pay any penalty for going cross-colo; writes do have to go cross-colo and thus pay the latency penalty. This works for most sites where writes are much less numerous than reads and where write performance can be 1-2 seconds slower than reads and still be counted as acceptable. Take a movie ticket booking site, for example: reads overwhelmingly outnumber writes.
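A minimal sketch of what that read/write split can look like in application code, assuming SQL Server-style connection strings (the server names, database and table are invented): reads go to a replica in the local data center, writes go to the single master.

    using System.Data.SqlClient;

    class GeoSplitDataAccess
    {
        // Hypothetical connection strings: the replica is in the local data center,
        // the master may be on the other side of the world.
        const string LocalReplica = "Server=replica.local-dc.example.com;Database=App;Integrated Security=true;";
        const string RemoteMaster = "Server=master.us-dc.example.com;Database=App;Integrated Security=true;";

        public static string GetUserName(int userId)
        {
            // Read path: no cross-colo round trip.
            using (var connection = new SqlConnection(LocalReplica))
            using (var command = new SqlCommand("SELECT Name FROM Users WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", userId);
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }

        public static void RenameUser(int userId, string newName)
        {
            // Write path: accepts the cross-colo latency, which is fine while writes are rare.
            using (var connection = new SqlConnection(RemoteMaster))
            using (var command = new SqlCommand("UPDATE Users SET Name = @name WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", userId);
                command.Parameters.AddWithValue("@name", newName);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }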
Cross-colo performance can be dramatically improved through the following choices:
1. Minimize the number of round trips, e.g. do all writes in a single transaction rather than through multiple separate calls to the DB; i.e. use batch queries, stored procedures, batched remote calls, etc. (a rough sketch follows this list).
2. Use optimistic writes/eventual consistency where possible. Say you are recording the time a user logged in: you can very well make that asynchronous, so the time is eventually recorded. There are scenarios where eventual consistency is not acceptable, though.
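As a sketch of point 1, assuming SQL Server and plain ADO.NET (the table and column names are invented): both statements travel as one batch inside one transaction, so the cross-colo latency is paid once rather than per statement. Point 2 would simply push a write like this onto a background queue instead of calling it inline.

    using System.Data.SqlClient;

    class BatchedWrite
    {
        public static void PlaceOrder(string connectionString, int userId, int productId)
        {
            // Two statements sent as a single batch: one round trip to the remote master.
            const string sql = @"
                BEGIN TRANSACTION;
                INSERT INTO Orders (UserId, ProductId, CreatedAt)
                VALUES (@userId, @productId, SYSUTCDATETIME());
                UPDATE Products SET Stock = Stock - 1 WHERE Id = @productId;
                COMMIT TRANSACTION;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@userId", userId);
                command.Parameters.AddWithValue("@productId", productId);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }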
I'm about to begin designing the architecture of a personal project that has the following characteristics:
Essentially a "game" based on a sport, with several concurrent users.
Matches in this sport are simulated on a regular basis and their results stored in a database.
Users can view the details of a simulated match "live" when it is occurring as well as see results after they have occurred.
I developed a similar web application with a much smaller scope as the previous iteration of this project. In that case, however, I chose SQLite as my DB provider, since I also had a redistributable desktop application that could be used to manually simulate matches (and in fact it ran as a standalone simulator outside of the web application). My constraints have now shifted to a web application only, so I don't have to worry about that additional level of complexity.
My main problem with my previous implementation was handling concurrent requests. I made the mistake of using one database (represented by a single file on disk) to power both the simulation aspect (which ran in a separate process on the server) and the web application. Hence, when users were accessing the website concurrently with a live simulation happening, there were all sorts of database access issues, since the file was being locked by one process at a time. I fixed this by implementing a cross-process mutex around database operations, but that drastically slowed down the performance of the website.
The tools I will be using are:
ASP.NET for the web application.
SQL Server 2008 R2 for the database... probably with an NHibernate layer for object relational mapping.
My question is: how do I design this to achieve optimal efficiency as well as concurrent access? Obviously shifting from a file to an actual DB server will have its positives, but do I need two separate servers - one for the simulation process and one for the web server process?
Any suggestions would be appreciated!
Thanks.
You should be fine doing both on the same database. Concurrent access is what modern database engines are designed for. Concurrent reads are usually no problem at all; concurrent writes lock the minimum possible amount of data (a table, or even just a number of rows), not the entire database.
A few things you should keep in mind though:
Use transactions wisely. On the one hand, a transaction is an important tool in making sure your database is always consistent - in short, a transaction either happens completely, or not at all. On the other hand, two concurrent transactions can cause deadlocks, and those buggers can be extremely hard to debug.
Normalize, and use constraints to protect your data integrity. Enforcing foreign keys can save the day, even though it often leads to more cumbersome administration.
Minimize the amount of time spent on data access: don't keep connections around when you don't need them, make absolutely sure you're not leaking any connections, don't fetch data you know you don't need, and do as much data-related processing as possible (especially things that can be solved with joins, subqueries, groupings, views, etc.) in SQL instead of in code.
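A small sketch of those last points, assuming plain ADO.NET against SQL Server (the schema is invented): the using blocks guarantee connections go back to the pool, and the counting is done in SQL instead of fetching every row into code.

    using System.Data.SqlClient;

    class MatchStats
    {
        // Let the database do the aggregation instead of fetching every result row.
        public static int CountWins(string connectionString, int teamId)
        {
            const string sql = "SELECT COUNT(*) FROM MatchResults WHERE WinnerTeamId = @teamId";

            using (var connection = new SqlConnection(connectionString))   // disposed = no leaked connections
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@teamId", teamId);
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }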
I am working with an application where we store our client data in separate SQL databases for each client. So far this has worked great; there was even a case where some bad code selected the wrong customer IDs from the database, and since the only data in the database belonged to that client, the damage was not as bad as it could have been. My concern is about the number of databases you can realistically have on a SQL Server instance.
Is there any additional overhead for each new database you create? Will we eventually hit a wall where we have just too many databases on one server? The SQL Server specs say you can have something like 32,000 databases, but is that realistic? Does anyone have a large number of databases on one server, and what problems did you encounter?
Thanks,
Frank
The upper limits are
disk space
memory
maintenance
Examples:
Rebuilding indexes for 32k databases? When?
If 10% of 32k databases each have an active set of 100 MB of data in memory at any one time, you're already at 320 GB of target server memory
Knowing what DB you're connected to
...
The effective limit depends on load, usage, database size etc.
Edit: and bandwidth, as Wyatt Barnett mentioned. I forgot about the network - the bottleneck everyone forgets about...
The biggest problem with multiple databases is keeping them all in sync as you make schema changes. As for the realistic number of databases you can have and still have the system work well - as usual, it depends. It depends on how powerful the server is and how large the databases are. Likely you would want multiple servers at some point, not just because it will be faster for your clients but because it puts fewer clients at risk at any one time if something happens to a server. At what point that is, only your company can decide. Certainly if you start getting a lot of time-outs, another server might be indicated (or fixing your poor queries might also do it). Big clients will often pay a premium to be on a separate server, so consider that in your pricing. We had one client so paranoid about their data that we had to set up a separate server that was not even co-located with the other servers. They paid big bucks for that, as we had to rent extra space.
ISPs routinely have one database server that is shared by hundreds or thousands of databases.
Architecturally, this is the right call in general. You've seen the first huge advantage - oftentimes damage can be limited to a single client, and you have near-zero risk of one client getting into another client's data. But you are missing the other big advantage - you don't have to keep all the clients on the same database server. When you do get big enough that your server is suffering, you can offload clients onto another box entirely with minimal effort.
I'd also bet you'll run out of bandwidth to manage the databases before your server runs out of steam to handle more databases . . .
What you are really asking about is scalability. Setting up 32,000 databases on one server is probably not advantageous, but it is possible (though not recommended).
Read - http://www.sql-server-performance.com/articles/clustering/massive_scalability_p1.aspx
I know this is an old thread, but it's the same structure we've had in place for the past 2 years, and we currently run 1,768 databases over 3 servers.
We have the following setup (not including mirrors and so on):
2 web farm servers and 4 content servers
A SQL instance just for a master database of customers, which is queried by ID when they access their webpage to get the server/instance and database name their data resides on (a rough lookup sketch follows this list). This is then stored in the authentication ticket.
3 SQL servers to host the customer databases, with load spread at creation time based on the current total number of learners across all databases on each server (quickly calculated from the license number field in the master database).
On each SQL server there is a smaller master database which contains shared static data used by all clients, allowing smaller client databases and quicker updating of the content.
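A rough sketch of that customer lookup, with an invented master-database schema (a Customers table with ServerName and DatabaseName columns); the resulting connection string is what would then be carried around in the authentication ticket:

    using System.Data.SqlClient;

    class TenantRouter
    {
        // Hypothetical master database holding one row per customer.
        const string MasterConnectionString =
            "Server=master-sql.example.local;Database=CustomerMaster;Integrated Security=true;";

        // Look up which server/database a customer lives on and build its connection string.
        public static string GetCustomerConnectionString(int customerId)
        {
            const string sql =
                "SELECT ServerName, DatabaseName FROM Customers WHERE CustomerId = @id";

            using (var connection = new SqlConnection(MasterConnectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@id", customerId);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    reader.Read();
                    return $"Server={reader.GetString(0)};Database={reader.GetString(1)};Integrated Security=true;";
                }
            }
        }
    }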
The biggest thing, as mentioned above, is keeping the database structures synchronised! For this I ended up writing a small .NET Windows Forms tool that looks up all customers in the master database; you paste in the SQL to execute and it loops through, getting each database's location and running the script you pasted against it.
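At its core such a tool boils down to something like the following sketch (same invented master schema as above; note that ExecuteNonQuery does not understand GO batch separators, so real scripts may need splitting):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class SchemaSync
    {
        // Run the same schema-change script against every customer database.
        public static void RunEverywhere(string masterConnectionString, string script)
        {
            var targets = new List<string>();

            using (var connection = new SqlConnection(masterConnectionString))
            using (var command = new SqlCommand(
                "SELECT ServerName, DatabaseName FROM Customers", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        targets.Add($"Server={reader.GetString(0)};Database={reader.GetString(1)};Integrated Security=true;");
                    }
                }
            }

            foreach (var target in targets)
            {
                using (var connection = new SqlConnection(target))
                using (var command = new SqlCommand(script, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();   // a real tool would log failures per database
                }
            }
        }
    }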
Creating new customers also caused some issues for us, so I ended up writing a management system for our sales people. It creates a new database from a backup of an inactive "blank" database, so we always have the latest DB without needing to re-run the entire database creation script. It then inserts the customer details into the master database, along with the location where the database was created, and migrates any old data from an old version of our software. All of this is done on a separate instance before moving, thereby reducing SQL locks.
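And a sketch of the 'new database from a blank backup' step, assuming SQL Server with invented backup paths and logical file names; the RESTORE has to run from a connection to master (or any database other than the one being created):

    using System.Data.SqlClient;

    class TenantProvisioner
    {
        // Restore a copy of the blank template backup under the new customer's name.
        // databaseName must come from trusted input - it is spliced into DDL.
        public static void CreateCustomerDatabase(string masterServerConnectionString, string databaseName)
        {
            string sql = $@"
                RESTORE DATABASE [{databaseName}]
                FROM DISK = N'D:\Backups\BlankTemplate.bak'
                WITH MOVE N'BlankTemplate_Data' TO N'D:\Data\{databaseName}.mdf',
                     MOVE N'BlankTemplate_Log'  TO N'D:\Data\{databaseName}_log.ldf',
                     RECOVERY;";

            using (var connection = new SqlConnection(masterServerConnectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.CommandTimeout = 600;   // restores can take a while
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }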
We are now moving to a single database for our next version of the software, as database redundancy is near impossible with so many databases! This is a huge thing to consider, as SQL creates a couple of waiting tasks per database to mirror your data; once you start multiplying the databases it gets out of hand, and the system ends up almost solely tasked with synchronising and can lock up due to the sheer number of threads. See page 30 of the Microsoft document below:
SQLCAT's Guide to High Availability Disaster Recovery.pdf
I do, however, have doubts about moving to a single database, due to some of the concerns mentioned above, such as having to check in every single procedure that the current customer has access only to their own data, and the fact that one little issue will now affect every single customer - things like table indexing and so on. Also, at the minute our customers are spread over 3 servers; with the single database, yes, we have redundancy, but if the error is within the database rather than a server going down, then that's every single customer down, not just 1 customer's database.
All in all, it depends what you're doing and whether you want the redundancy. For me, the redundancy is now key, and everything else in a perfect world shouldn't happen (such as an error which breaks the database for everyone). We only started off expecting a hundred or so customers to move to the system from the old self-hosted software, and that quickly turned into 200, 500, 1,000, 1,500... We now have over 750,000 users using our system each year, and in August/September we have over 15,000 concurrent users online (expecting to hit 20,000 this year).
Hope that this is of help to someone along the line :-)
Regards
Liam
I run a medium-sized website on an ASP.NET platform, using MS SQL Server to store the data.
My current site stats are:
~ 6000 Page Views a day
~ 10 tables in the SQL server with around 1000 rows per table
~ 4 queries per page served
The hosting machine has 1GB RAM
I expect by the end of 2009 to hit around:
~ 20,000 page views
~ 10 tables and around 4000 rows per table
~ 5 queries per page served
My question is: should I plan for scalability right now? Will the machine hold up until the end of the year with the expected stats?
I know my description is very high-level and does not provide insight into the kinds of queries involved, etc. But I just wanted to know what your gut instinct tells you.
Thanks!
You should always plan for scalability. When to put resources into doing the actual scaling is usually the tough guess.
Will the machine hold up until the end of the year
Way too little information to answer this. If a page request takes 30 CPU-seconds to process due to massive interaction with a legacy enterprise application through the four queries per page, then there's no way. If it takes minuscule fractions of a second to serve some static content stored in the cache, and your queries are only executed every half hour to refresh that content, then you're good until 2020 at the traffic growth rate you describe.
My guess is that you're somewhere closer to the latter scenario. 20,000 page hits a day is not really a ton of traffic, but you'll need to benchmark your page and server performance at some point so that you can make the calculations you need.
Things to look at for scaling your site when it is time:
Output caching (see the sketch after this list)
Optimizing Viewstate
Using Ajax where appropriate
Session optimization
Request, script, CSS and HTML minification
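For the first item, a minimal output-caching sketch assuming ASP.NET MVC (Web Forms has an equivalent OutputCache page directive); the controller, action and durations are invented:

    using System.Web.Mvc;

    public class ProductController : Controller
    {
        // The rendered HTML is cached for 60 seconds per product id,
        // so repeat views skip both the database query and the view rendering.
        [OutputCache(Duration = 60, VaryByParam = "id")]
        public ActionResult Details(int id)
        {
            var product = LoadProductFromDatabase(id);   // hypothetical data access
            return View(product);
        }

        private object LoadProductFromDatabase(int id)
        {
            // Placeholder for the real query.
            return new { Id = id, Name = "Sample product" };
        }
    }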
Two years ago I saw a relatively new (for two years ago) laptop running IIS and serving up 1100 to 1200 simple dynamic page requests per second. It had been set up by a consulting firm whose business was optimizing ASP.Net websites, but it goes to show you how much you can do.
Essentially, by the end of 2009 you expect to do 100,000 SQL queries per day (20,000 page views x 5 queries per page). Spread over 86,400 seconds, that is about 1.157 queries per second.
I am assuming that your configuration is "normal" (i.e. you're not doing anything funky and these are pretty straightforward SELECTs, UPDATEs, INSERTs, etc.) and that your server is running RAID disks.
At 4,000 rows per table this is nothing to SQL Server. You should be just fine. If you want to be proactive about it, put another stick of RAM in the server and bring it up to at least 2GB; that way IIS and SQL both have plenty of memory (SQL will certainly take advantage of it).
The hosting machine? Does this mean you have IIS and SQL installed on the same box, or IIS on your host machine with a dedicated SQL Server provided by your hosting company? Either way, I would suggest starting to look at how you might implement a caching layer to minimize the hits to the database (where possible). Once this is PLANNED (not necessarily implemented), I would then start to look at how you might build a caching layer around your output (things built in ASP.NET). If you see a clear and easy path to building caching layers, then this is a quick and easy way to start minimizing requests to the database and the work on your web server.

I suggest that this cache layer be flexible - read: not tied to anything provided by .NET. Currently I still suggest using memcached Win32. You can install it on your one hosted box easily and configure your cache layer to use local resources (add memory... 1 GB is not enough). Then, if you find that you really need to squeeze every little bit of performance out of your system, splurge for a second box. Split your cache between your current box and the new box (allowing you to keep more in cache). This will give you some room (and time) to grow. Offloading more to cache should help absorb any future spikes, and with the second box you can also start making your site work in a farmed environment. If you are using local session state, push that into your cache layer so that it won't matter whether a request lands on one box or the other (standard session state is local to the box it is managed on).
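The answer above recommends memcached; purely to illustrate the cache-aside flow it describes, here is a minimal sketch using the framework's in-process MemoryCache (the key format, lifetime and loader delegate are invented) - swapping in a memcached client gives the same pattern shared across boxes:

    using System;
    using System.Runtime.Caching;

    class CachedCatalog
    {
        static readonly MemoryCache Cache = MemoryCache.Default;

        // Cache-aside: try the cache first, fall back to the database, then populate the cache.
        public static string GetProductName(int productId, Func<int, string> loadFromDatabase)
        {
            string key = "product-name:" + productId;

            var cached = Cache.Get(key) as string;
            if (cached != null)
            {
                return cached;
            }

            string name = loadFromDatabase(productId);
            Cache.Set(key, name, DateTimeOffset.UtcNow.AddMinutes(10));
            return name;
        }
    }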
This is a huge subject, so without real details this is all speculation, of course! You might be just fine adding better and more hardware to the existing installation.
Have you tried setting up a quick performance test using sample data? 20,000 page views is less than one per second (assuming even distribution over 8 hours), which is pretty minimal given your small tables. Assuming you're not sending a ton of data with each page view (e.g. a data table with all 1,000 rows from one of your tables), you are likely OK.
You may need to increase RAM, but other than running a performance test I wouldn't worry too much about performance right now.
I don't think the load you are describing would be too much of a problem for most machines. Of course it doesn't just depend on the few metrics you outlined but also on query complexity, page size, and a heap of other things.
If you're worried about scalability, do some load testing and see how your site handles, say, 10,000 page views per hour (about 3 views per second). It's almost always good to plan ahead, as long as you plan for probable scenarios.
Gut says: given 10 tables with 4,000 rows each, and assuming about 2 KB of data per row, that's only 80 MB for the entire database - easily cached within the available memory. Assuming everything else about the application is equally simple, you should be able to easily serve hundreds of pages per second.
Engineers say: If you want to know, stress test your application.