I have a very large site with 60k orders, most of them subscription orders: 11k subscriptions accumulated over 6-7 years, but only 600 are active. This weight, paired with an LMS, makes the site very slow and hard to manage.
Are there any best practices around pruning or purging older orders for inactive subscriptions in an effort to improve database speed?
Misc config notes: DigitalOcean with 32 GB RAM, DigitalOcean database cluster with 2 GB RAM dedicated to the DB. DB size, uncompressed, is around 2 GB.
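There is no single canonical answer here, but the pruning policy being asked about can be sketched as plain logic: keep any order that is recent or tied to an active subscription, and treat everything else as a candidate for archiving. A hedged sketch in Python follows; the field names (`id`, `date`, `subscription_id`) are illustrative, not the actual WooCommerce schema, so always test against a staging copy with backups first.

```python
# Hedged sketch of a pruning policy: an order is a deletion/archive candidate
# only if it is older than the retention window AND not tied to an active
# subscription. Field names are illustrative, not the real WooCommerce schema.
from datetime import datetime, timedelta

def prune_candidates(orders, active_subscription_ids, keep_days=3 * 365):
    """Return IDs of orders that are old and not tied to an active subscription."""
    cutoff = datetime.now() - timedelta(days=keep_days)
    return [
        o["id"]
        for o in orders
        if o["date"] < cutoff and o["subscription_id"] not in active_subscription_ids
    ]
```

In practice you would feed this from the database and remove the returned IDs in small batches with the store's own tooling (for example WP-CLI's `wp post delete <ID> --force`), never with raw bulk deletes on a live site.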
Started a GCP free trial, migrated two WordPress sites with almost zero traffic to test the service. Here's what I'm running for each of the two sites:
VM: g1-small (1 vCPU, 1.7 GB memory), 10 GB SSD
Package: bitnami-wordpress-5-2-4-1-linux-debian-9-x86-64
After about 1-2 months, the console shows that $46 has been deducted from the $300 free-trial credit. Is this accurate/typical? Am I looking at paying $20+ per month to serve perhaps 100 hits to the site from myself, plus whatever normal bot crawling happens? This is roughly ten times more expensive than a shared multi-domain hosting account available from other web hosts.
Overall, how can I tell how much it will actually cost? GCP reports about $2 of resource consumption per month, a $2 credit, and somehow a $254 balance out of $300. One billing page also says the average monthly cost is 17 cents, which matches neither the $2 nor the $46 figure. I can't find any entry that would explain all the other resources that were paid/credited.
Does anyone else have experience with how much it should cost to run the Bitnami WordPress package provided on the GCP marketplace?
Current Usage:
Running 2x g1-small (1 vCPU, 1.7 GB memory, 10 GB SSD) instances 24x7 should have deducted around ~$26* USD from your free trial.
I presume you also need MySQL, which would cost you a minimum of $7.67* per instance:
Assuming you used 2x MySQL instances, that would have cost you ~$15.
So $26 compute + $15 DB + $5 (other network, DNS costs, etc.) comes to about $46. Please note that the price would go up if you used compute for less than a full month.
*
1. You get a sustained-use discount automatically if you run an instance for a full month.
2. If you are planning to use it even longer, you can get a bigger discount with committed-use pricing.
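The arithmetic behind the ~$46 estimate can be reproduced in a few lines; the unit prices below are this answer's approximations, not official GCP rates:

```python
# Rough reconstruction of the ~$46 free-trial deduction described above.
# All unit prices are approximations from this answer, not official GCP rates.
compute = 2 * 13.00   # 2x g1-small, roughly $13/month each -> ~$26
mysql = 2 * 7.67      # 2x smallest Cloud SQL (MySQL) instance -> ~$15
other = 5.00          # network egress, DNS, etc.
total = compute + mysql + other
print(round(total))   # prints 46
```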
Optimise for Cost
Have a look at the cost calculator link to plan your usage.
https://cloud.google.com/products/calculator/
Compute and relational storage are the most cost-prohibitive factors for you. If you are tech-savvy and open to experimentation, you can try Cloud Run, which should reduce your cost significantly but might add extra latency in serving your requests. The link below shows how to set this up:
https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Currently there is no way around running a database. A serverless database could help bring down your cost, but GCP does not offer this at this point. AWS has such an offering, so GCP might come up with one in the future.
Scalability
When your user base grows, you might want to use:
1. A CDN, which would help with your network cost.
2. Cloud Storage for images, which would also help bring down your cost, since disks are more expensive, less scalable, and need more maintenance.
Hope this helps.
I have a WordPress site hosted on a dedicated server with the configuration below.
CPU (8 cores): Intel Xeon CPU E3-1265L v3 @ 2.50GHz,
Memory: 24GB,
Currently Used Storage: 350GB,
MySQL size: 3GB,
I have a maximum of 20,000 daily visitors, and the maximum number of concurrent users at any point would be 400.
I would like to know which Google Cloud Compute Engine machine type I should choose to serve this many requests without compromising performance. Also, what other resources do I need to buy?
Is AWS better for this than GCP in this case?
Nobody can give you an answer to this type of question; it depends entirely on the application setup and the pattern of usage. The best way to determine this is to create an automated test system that simulates usage in a pattern similar to how your website will be used, then to monitor the system (CPU, RAM usage, etc.) and measure performance.
Alternatively, you can choose to oversize the system and monitor real-life metrics, then scale the system down so that the metrics stay within acceptable ranges. It is relatively cheap to oversize systems in the cloud, given that it might only be for a week.
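As a rough stand-in for a dedicated load-testing tool (ab, wrk, k6, and the like), the simulated-usage idea above can be sketched in a few lines of Python; the staging URL is a placeholder:

```python
# Minimal load-test sketch: fire concurrent GET requests at a URL and
# summarise latency. A stand-in for proper tools such as ab, wrk, or k6.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url):
    """Time a single GET request, in seconds."""
    start = time.monotonic()
    with urlopen(url) as response:
        response.read()
    return time.monotonic() - start

def summarize(latencies):
    """Median, 95th percentile, and worst-case latency."""
    xs = sorted(latencies)
    return {"p50": xs[len(xs) // 2], "p95": xs[int(len(xs) * 0.95)], "max": xs[-1]}

def load_test(url, requests=100, concurrency=10):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return summarize(list(pool.map(fetch, [url] * requests)))

# Example (run against your own staging site, never someone else's server):
# print(load_test("https://staging.example.com/", requests=50))
```

Watch CPU, RAM, and disk metrics on the server while this runs; the latency percentiles tell you when the instance size stops keeping up.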
We have an asp.net application currently running on a shared hosting provider.
The application specifications are
.NET 4.5 Web Forms website
1 MSSQL database of size - 100 MB
Web Application size including images - 1.2GB
Page visits/day - ~2500
Bandwidth consumed/day - ~200MB
We are not happy with the hosting provider and have faced a lot of issues recently. Our application requires the trust level to be set to Full for some of its services, and the hosting provider is unwilling to do so.
Statistics obtained (1 week) from the hosting provider:
(We have around 2500 page visits a day on week days and 500 page visits on weekends)
Based on the above information, we are planning to move to AWS. I was thinking of going with a t2.medium reserved Windows instance (2 cores and 4 GB RAM) and EBS storage, which will cost us around 50 USD a month.
Since we have a small database compared to enterprise standards, I was thinking of hosting it on the same instance. We are also planning to go with SQL Server Express, as this does not need licensing.
I have read a lot of documentation, and I am not able to reach a conclusion about the instance size to go for. Will a t2.medium EC2 instance be able to serve my purpose?
1. It would be great if anyone could tell me whether the above t2.medium EC2 infrastructure will suit our needs.
2. Do I need a t2.medium instance, or can I go for a lower-level configuration?
I am open to suggestions on changing the above-mentioned infrastructure as well.
Instead of quoting page visits, etc, you are better off showing RAM and CPU utilisation.
A couple of things to consider:
1 - The t series uses burstable-throughput CPUs. That means you gain 'credits' while your CPU sits mostly idle and spend them when you use it (I can't remember exactly, but possibly above 10% utilisation?). So if your average workload is either very low or very bursty, the t series is a good choice; but if your workload is constant and high, it is a very bad choice.
2 - How static is the website? You might see benefit in decoupling the database onto an RDS instance that matches your needs, and then having the website on a spot instance to reduce cost, for example.
3 - With respect to configuration requirements, only you can answer this.
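The credit mechanics in point 1 can be modelled as a toy simulation. The constants below (roughly 24 credits earned per hour on a t2.medium, 60 launch credits, 1 credit = 1 vCPU-minute at full speed) are my recollection of the AWS docs, so verify them before relying on this:

```python
# Toy model of t2 CPU-credit accounting. Constants are assumed, not
# authoritative: ~24 credits/hour earned on a t2.medium, 60 launch credits,
# and 1 credit = 1 vCPU-minute at 100%.
def credits_after(hours, utilization, earn_rate=24, vcpus=2, start=60):
    """Credit balance after `hours` at constant utilization (0..1 of total capacity)."""
    balance = start
    for _ in range(hours):
        spent = utilization * vcpus * 60          # vCPU-minutes burned this hour
        balance = max(0.0, balance + earn_rate - spent)
    return balance

# Break-even sits near 20% of total capacity, where spend matches the 24
# credits earned each hour; a constant 50% load drains the balance quickly.
print(credits_after(10, 0.20))   # balance holds steady
print(credits_after(10, 0.50))   # balance exhausted
```

Once the balance hits zero, a t2 is throttled to its baseline, which is why a constant high load makes the t series a bad fit.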
I am not able to find the recommended RAM, disk, and CPU core capacity for MariaDB. We are setting up at an initial level with a very minimal data volume, so I just need MariaDB's recommended minimum capacity.
Appreciate your help!!!
Seeing that micro-service architecture has grown rapidly over the last few years, and each micro-service usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and I was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL DB would be...
Anyway, I got this helpful answer from here that I thought I could also share in case someone else is looking into it...
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400MB of RAM, which isn't noticeable on a database server with 64GB of RAM, but it is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128MB, you're well over your 512MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512MB of RAM is tight, but probably adequate if only MariaDB is running. But you would need to aggressively shrink various settings in my.cnf. Even 1GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between Operating system, and between versions of MariaDB.
Turn off most of the performance_schema; if all of its flags are turned on, it consumes a lot of RAM.
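As an illustration of that kind of shrinking, a low-memory my.cnf fragment might look like the following; the values are illustrative starting points for a tiny VM, not recommendations, so tune them against your own workload:

```ini
# Illustrative low-memory settings for a small MariaDB instance.
[mysqld]
performance_schema = OFF        # the biggest easy RAM saving
innodb_buffer_pool_size = 32M   # default is 128M
key_buffer_size = 4M
max_connections = 20            # each connection costs per-thread buffers
tmp_table_size = 8M
max_heap_table_size = 8M
```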
20 years ago I had MySQL running on my personal 256MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk. If you have only a few MB of data, then disk is not an issue.
Look at it this way -- What is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
MariaDB and MySQL both actually use very little memory; about 50 MB to 150 MB is the range I found on some of my servers. These servers run a few databases with a handful of tables each and limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they said 50 MB, a lot of folks would want to disagree; if they said 1 GB, they would be unnecessarily inflating the minimum. Come to think of it, more memory means a better cache and better performance; however, a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same servers) consistently use about double the memory the database does.
I currently have two systems with nginx, with the following CPUs/RAM:
1x Intel® C2750 (Avoton), 8 cores / 8 threads @ 2.4 GHz, 8 GB RAM, 1 TB SATA3
1x Intel® Xeon® E3-1220, 4 cores / 4 threads @ 3.1 GHz, 16 GB RAM, 420 GB 10K RAID 1
Basically I need it to host 6 WordPress sites (with a cache plugin) and serve a few thousand files per day.
I'm using free CloudFlare service...
My question is...
Which server is better for my needs?
Less CPU performance but more cores, or
More CPU performance but less cores?
Best regards,
Well, I think both of them will give you the same performance for your needs, for a few basic reasons:
You serve a few thousand users per day; let's say 10k. That is not massive traffic for your server unless they all arrive in the same second (see DDoS), and in that situation neither of them will help you.
The CPU is, in most cases, not the bottleneck of a setup like this. You didn't mention what disks those servers have; if they have regular hard disks rather than SSDs, for example, both of them will give you more or less the same performance.
Bottom line: I would choose the cheaper of the two, unless money is not an issue.
Hope this clears things up.
I think you should choose:
1x Intel® Xeon® E3-1220, 4 cores / 4 threads @ 3.1 GHz, 16 GB RAM, 420 GB 10K RAID 1
The 16 GB of RAM is very important for your WordPress cache, because more data can be kept cached in RAM.
A faster disk array also means higher throughput and better cache performance.
You will not see a CPU difference on WordPress.
I'm going to go with the second option:
1x Intel® Xeon® E3-1220, 4 cores / 4 threads @ 3.1 GHz, 16 GB RAM, 420 GB 10K RAID 1
Why?
Faster hard drives lead to better website performance, and RAID 1 can help deliver this. RAID 1 will also protect you against hard drive failure in case one drive fails.
RAM is essential in hosting environments; it is where you will notice the biggest improvement if your server comes under load. As your WordPress site will not do a lot of data processing, extra CPU isn't essential: if your server can't keep up, CPU processes are just backlogged. That said, if you reach 75% CPU load, you need to start thinking about upgrading that too.
The Cloud Computing Rant
Of course I will say that old-fashioned dedicated servers are the way of the past. CloudFlare in front of a dedicated webserver and a dedicated MySQL server would be the best combo (with potentially a load balancer in front of your Nginx servers if you ever want to scale them up). Digital Ocean and AWS offer some great cloud computing technology (using more reliable SSDs). Or, even better, use a WordPress PaaS service like WPEngine behind CloudFlare!
The software
I'm glad you're using Nginx over Apache; that will help a bit, but make sure your WordPress site is optimised. You could even consider using HHVM to speed the site up further if you're expecting a lot of load. In short, keep the number of plugins you use down (for security if nothing else). Prevent brute-force attacks with Fail2Ban, and potentially enable NAXSI on Nginx with the dedicated WordPress rules for extra security. Think about enabling CSS/HTML/JS minification at the CloudFlare level with aggressive caching, provided it doesn't break your site. Oh, and also think about doing some OPcaching at the PHP level.
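For the OPcaching suggestion, a minimal php.ini sketch might look like the following; the values are illustrative starting points, not tuned recommendations:

```ini
; Illustrative OPcache settings for php.ini.
opcache.enable=1
opcache.memory_consumption=128       ; MB of shared opcode cache
opcache.max_accelerated_files=10000  ; WordPress plus plugins is many PHP files
opcache.revalidate_freq=60           ; recheck file changes at most once a minute
```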