Nginx - more cores or better CPU? - WordPress

I currently have two systems running Nginx, with the following CPU/RAM:
1x Intel® C2750 (Avoton), 8 cores / 8 threads @ 2.4 GHz, 8 GB RAM, 1 TB SATA3
1x Intel® Xeon® E3-1220, 4 cores / 4 threads @ 3.1 GHz, 16 GB RAM, 420 GB 10K RAID 1
Basically I need it to host 6 WordPress sites (with a cache plugin) and serve a few thousand files per day.
I'm using the free CloudFlare service.
My question is: which server is better for my needs?
Less CPU performance but more cores, or
more CPU performance but fewer cores?
Best regards,

Well, I think for your needs both of them will deliver the same performance, for a couple of basic reasons:
You serve a few thousand users per day; let's say 10k. That is not massive traffic for either server unless they all arrive in the same second (see: DDoS), and in that situation neither machine will help you.
In most cases the CPU is not the bottleneck of a setup like this; the disks are. Neither machine lists an SSD, just regular hard disks, so both will give you more or less the same performance.
Bottom line: I would choose the cheaper of the two, unless money is no issue.
Hope that answers your question.

I think you should choose:
1x Intel® Xeon® E3-1220, 4 cores / 4 threads @ 3.1 GHz, 16 GB RAM, 420 GB 10K RAID 1
16 GB of RAM is very important for your WordPress cache, because more data can be kept in RAM.
The faster disks (10K RPM in RAID 1) also mean higher throughput for the cache.
You will not see a CPU difference with WordPress.

I'm going to go with the second option:
1x Intel® Xeon® E3-1220, 4 cores / 4 threads @ 3.1 GHz, 16 GB RAM, 420 GB 10K RAID 1
Why?
Faster hard drives lead to better website performance, and RAID 1 can help deliver this. RAID 1 also protects against hard drive failure: if one drive dies, the other keeps the server running.
RAM is essential in hosting environments; it is where you will notice the biggest improvement if your server comes under load. As your WordPress sites will not do a lot of data processing, extra CPU isn't essential: if the server can't keep up, CPU work is simply backlogged. That said, if you regularly reach 75% CPU load, you need to start thinking about upgrading that too.
The Cloud Computing Rant
Of course, I will say that old-fashioned dedicated servers are a thing of the past. CloudFlare in front of a dedicated web server plus a dedicated MySQL server would be the best combo (with potentially a load balancer in front of your Nginx servers if you ever want to scale out). DigitalOcean and AWS offer some great cloud computing options (using more reliable SSDs). Or, even better, use a WordPress PaaS such as WPEngine behind CloudFlare!
The software
I'm glad you're using Nginx over Apache; that will help a bit, but make sure your WordPress sites are optimised. You could even consider using HHVM to speed WordPress up further if you're expecting a lot of load. In short: keep the number of plugins down (for security if nothing else). Prevent brute-force attacks with Fail2Ban, and consider enabling NAXSI on Nginx with the dedicated WordPress rules for extra security (a sketch of the Fail2Ban part follows below). Think about enabling CSS/HTML/JS minification at the CloudFlare level with aggressive caching, provided it doesn't break your site. Oh, and also think about doing some OPcache tuning at the PHP level.
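To make the Fail2Ban suggestion concrete, here is a minimal sketch, assuming Nginx writes a combined-format log to /var/log/nginx/access.log; the filter name, paths, and thresholds are all illustrative and need adapting to your setup:

/etc/fail2ban/filter.d/wordpress-login.conf:
[Definition]
# count every POST to the WordPress login page; a legitimate user logs in
# once, so the maxretry threshold below separates them from brute-forcers
failregex = ^<HOST> .* "POST /wp-login\.php

/etc/fail2ban/jail.local:
[wordpress-login]
enabled  = true
port     = http,https
filter   = wordpress-login
logpath  = /var/log/nginx/access.log
maxretry = 5
bantime  = 3600

The OPcache part is usually just a php.ini toggle (opcache.enable = 1) plus enough opcache.memory_consumption to hold your compiled scripts.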

Related

MariaDB recommended RAM, disk, core capacity?

I am not able to find MariaDB's recommended RAM, disk, and number of cores. We are setting up an initial deployment with very minimal data volume, so I just need MariaDB's recommended capacity.
Appreciate your help!
Seeing that microservice architectures have grown rapidly over the last few years, and each microservice usually needs its own database, I think this type of question is actually becoming more relevant.
I was looking for this answer because we were exploring the possibility of creating small databases on many servers, and I was wondering, for interest's sake, what the minimum requirements for a MariaDB/MySQL database would be.
Anyway, I got the helpful answer quoted below, which I thought I would share here in case someone else is looking into it:
When starting up, it (the database) allocates all the RAM it needs. By default, it will use around 400 MB of RAM, which isn't noticeable on a database server with 64 GB of RAM, but is quite significant for a small virtual machine. If you add in the default InnoDB buffer pool setting of 128 MB, you're well over your 512 MB RAM allotment, and that doesn't include anything from the operating system.
1 CPU core is more than enough for most MySQL/MariaDB installations.
512 MB of RAM is tight, but probably adequate if only MariaDB is running; you would, however, need to aggressively shrink various settings in my.cnf (see the sketch below). Even 1 GB is tiny.
1GB of disk is more than enough for the code and minimal data (I think).
Please experiment and report back.
There are minor differences in requirements between operating systems, and between versions of MariaDB.
Turn off most of the Performance_schema. If all the flags are turned on, lots of RAM is consumed.
20 years ago I had MySQL running on my personal 256 MB (RAM) Windows box. I suspect today's MariaDB might be too big to work on such a tiny machine. Today, the OS is the biggest occupant of any basic machine's disk; if you have only a few MB of data, then disk is not an issue.
Look at it this way -- What is the smallest smartphone you can get? A few GB of RAM and a few GB of "storage". If you cut either of those numbers in half, the phone probably cannot work, even before you add apps.
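To make the "shrink various settings" advice concrete, here is a minimal my.cnf sketch for a small VM; the values are illustrative starting points, not recommendations, and need testing against your workload:

[mysqld]
innodb_buffer_pool_size = 32M   # default is 128M; the single biggest allocation
key_buffer_size         = 8M    # MyISAM index cache; rarely needs much today
max_connections         = 25    # each connection reserves per-thread buffers
tmp_table_size          = 8M
max_heap_table_size     = 8M
performance_schema      = OFF   # avoids the large instrumentation allocations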
MariaDB and MySQL both actually use very little memory. About 50 MB to 150 MB is the range I found on some of my servers; these servers run a few databases, each with a handful of tables and limited user load. The MySQL documentation claims it needs 2 GB, which is very confusing to me. I understand why MariaDB does not specify any minimum requirements: if they say 50 MB, a lot of folks will want to disagree; if they say 1 GB, they are unnecessarily inflating the minimum. Come to think of it, more memory means better caching and performance; however, a well-designed database can do disk reads every time without any performance issues. My Apache installs (on the same servers) consistently use more memory (about double) than the database.
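If you want to verify figures like those on your own (Linux) server, one quick way is to read the resident set size of the mysqld process:

ps -o rss= -C mysqld | awk '{ printf "%.0f MB\n", $1 / 1024 }'

ps reports rss in kilobytes, hence the division by 1024.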

How many users per second can a 1 MB page be served to through the 100 Mbps (12.5 MB/s) uplink port of a dedicated server?

I am planning to increase the capacity of my dedicated server, as my current server is not able to manage the load of my application.
Hence, I need to understand the uplink port connections offered by the various dedicated server providers.
In Amazon EC2 this is listed as Network Performance, which only provisions 10 Gigabit on its largest instances.
Please guide me.
Simply put, a 12.5MB/s connection is going to be able to serve a 1MB page to 12.5 users every second.
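The back-of-the-envelope arithmetic, ignoring TCP and HTTP overhead (which will eat a little of the headline figure):

100 Mbps / 8 bits per byte = 12.5 MB/s
12.5 MB/s / 1 MB per page = 12.5 pages per second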
That said, are you absolutely sure it's the network throughput that's causing the problem, rather than a CPU or memory limit? In my experience, the network link is very rarely the bottleneck.
Bear in mind that a 1 MB page will often compress to far less than that, assuming the server's compression is configured correctly (a sketch follows below). And unless you're genuinely seeing 12.5 new users every second, they will likely have a lot of the static assets (images, scripts, etc.) cached either in their browser or by an upstream proxy, so those won't be requested every time.
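For illustration, on an Nginx front end like the ones discussed elsewhere in this thread, compression is a handful of directives; the values shown are common starting points, not tuning advice:

gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/css application/javascript application/json image/svg+xml;

text/html is always compressed once gzip is on, and a 1 MB HTML page will often shrink to a tenth of its size.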
If you really are just serving a 1 MB page to a very high number of users, rather than being bound by CPU, then you might have more luck investigating a CDN (like CloudFlare or CloudFront) than simply upgrading to a faster link.

Does an increase in the number of requests to the server make the website slow?

On my office website, a webpage has 3 CSS files, 2 JavaScript files, 11 images, and 1 page request: 17 requests to the server in total. If 10,000 people visit my office site...
Could this slow the website down due to the extra requests?
And could the huge traffic cause any issues on the server?
My tiny office server has:
Intel i3 processor
Nvidia 2 GB graphics card
Microsoft Windows Server 2008
8 GB DDR3 RAM and
500 GB hard disk.
The website is developed in ASP.NET.
The net connection is 10 Mbps download and 2 Mbps upload, using a static IP address.
There are many reasons a website may be slow.
A huge spike in traffic.
Extremely large or unoptimized graphics.
A large number of external calls.
Server issues.
All websites should have optimized images, Flash files, and videos; large media slows down the overall loading of each page, so optimize every image. PNG images often offer better-looking results at smaller file sizes. You could also run a traceroute to your site (see the example below).
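Since the server in question is a Windows machine, the traceroute equivalent is tracert; the hostname here is hypothetical:

tracert www.your-office-site.com

Each hop prints three round-trip times, so a sudden jump shows where latency is being added on the path to your site.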
Hope this helps.
This question is impossible to answer because there are so many variables. It sounds like you're hypothesising that you will have 10,000 simultaneous users; do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache. You can then simulate 10,000 requests to your server and see how the response time holds up; at the same time, run Performance Monitor in Windows to diagnose any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests concurrently, to the specified URL, in this example "http://www.yahoo.com".
I don't think ab downloads your page dependencies (JS, CSS, images), but there are probably other tools you can use to simulate that.
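For the scenario in the question, something like the following would approximate 10,000 visitors arriving 100 at a time; the hostname is hypothetical:

ab -n 10000 -c 100 http://www.your-office-site.com/

Watch the "Requests per second" and "Time per request" lines in the output while Performance Monitor runs on the server itself.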
I'd also recommend that you enable compression on your site and set up caching, as this will significantly reduce the load and the number of requests for very little effort (see the sketch below).
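A Windows Server 2008 box means IIS 7, where both can be switched on from the command line with appcmd. A sketch, assuming the Dynamic Content Compression feature is installed; the one-week cache lifetime is illustrative:

%windir%\system32\inetsrv\appcmd.exe set config /section:urlCompression /doStaticCompression:True /doDynamicCompression:True
%windir%\system32\inetsrv\appcmd.exe set config /section:staticContent /clientCache.cacheControlMode:UseMaxAge /clientCache.cacheControlMaxAge:"7.00:00:00"

The first command compresses both static files and dynamic ASP.NET responses; the second tells browsers to cache static assets for seven days.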
Rather than hardware, you should think about your server's upload capacity. A 2 Mbps upload link is only 0.25 MB/s, and that will become a problem long before the hardware does.
The most likely reason is that one session is locking all the other requests: ASP.NET takes an exclusive lock on the session, so requests that share a session are served one at a time.
If you don't use session state, turn it off and check again (a web.config sketch follows after the links below).
Related:
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous
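A sketch of the web.config change, if (and only if) nothing on the site actually uses Session:

<configuration>
  <system.web>
    <!-- "Off" disables session state entirely; pages that only read the
         session can instead declare EnableSessionState="ReadOnly" in their
         @ Page directive, which avoids the exclusive per-session lock -->
    <sessionState mode="Off" />
  </system.web>
</configuration>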

IIS 6 RAM Allocation on Windows Server 2003

I have IIS 6 running my website on a Windows Server 2003 machine with 4 GB of RAM. I run SQL-intensive code after the user submits a form (math/statistics stuff). This process is not threaded (should it be, especially if 2 or more users run the same thing?). My process seems to consume only a couple of GB of memory, yet the server crawls. How do I get my IIS process to use nearly all the memory?
I see on other sites that 2 GB or 3 GB can be allocated using boot.ini (see the sketch below). But is there another way for the process to use more memory? If I make it multithreaded, will there be a process for each thread?
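For reference, the boot.ini tweak usually meant here is the /3GB switch, which moves the 32-bit user/kernel address-space split from 2 GB/2 GB to 3 GB/1 GB for large-address-aware processes such as w3wp.exe. A sketch; the ARC path and description must match your existing entry:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB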
If there is still memory free for IIS, it does not need more; even if you give it more memory, it will not perform any better. It is good to see some memory left unused and available for other processes, such as IIS. As for making it multithreaded: whether more memory is used, and whether you gain any performance, depends on what you actually run in parallel.
The basic approach here is to start with your requirements and see what peak use you can expect. Then run a performance test to see if your machine can handle that load. To be sure you have some headroom, run another test to find the peak load your machine can handle. Then you will know whether you have to invest any more time.
Also check your database server to see whether the bottleneck is on that machine; most developers forget about optimizing and maintaining their databases.

w3wp.exe has high CPU usage on every request

I'm running a Windows 2008 server (a VPS with 1 GB of RAM) with SQL Server Express and IIS 7 installed. On it I'm hosting a NopCommerce 1.7 website with a database of around 26,000 products.
Right now I'm the only user of the website (it's in development) and I'm getting rather bad performance from it. To be more specific, every time I make a request, the worker process goes to 90-100% CPU usage for a few seconds. Is it me, or is this a lot for a one-user NopCommerce website? Any ideas why this happens and what I can do to rectify it or investigate further?
PS: the worker process uses between 100 MB and 400 MB of memory (private working set), and SQL Server, with this database, around 160 MB. Do you have any suggestions other than the obvious one to get more RAM? I intend to add one more GB, but I fear this will not solve the CPU usage problem.
You've already stated you're going to get more RAM, but don't be surprised by how much a lack of RAM can impact the CPU. If your RAM cannot hold large objects because of lack of space (and I'd say using 40% of available RAM qualifies), then the CPU has to work harder paging things in and out of virtual memory. 90% is a little extreme for this, but with the server specs you give it's not impossible.
The most likely problem is that there is a hole in your code somewhere. My guess is that you have either an infinite loop or a direct memory leak (resources opened during requests that aren't closed, perhaps?). Your best bet would be to get the IIS Debug Diagnostics tool, install it, and set up reports to find out what is going on directly on the server.
