Does a CDN help the server in terms of performance and RAM? - drupal

I'm planning to move my website files to a CDN. I'm running 4 Drupal websites and 1 WordPress site, and I was thinking of using Amazon CloudFront.
I have some questions:
Does a CDN help my server in terms of performance and RAM?
I'm using http://www.webpagetest.org to check the performance of the website, and 83% of the requests come from images. The rest is split between HTML, CSS, JS and other. These are the other results:
F = First Byte Time
A = Keep-alive Enabled
F = Compress Text
C = Compress Images
A = Cache static content
X = CDN detected
Is it possible, using Amazon CloudFront, to serve a website that lives inside a sub-folder?
Basically I want to test it on a non-production site.
My server is an R310 quad-core Xeon 2.66 GHz with 4 GB of RAM.
Thanks in advance

A full answer would be much longer, but in simple terms, a well-managed CDN can help make your site faster.
4 GB of RAM is not bad for a normal website.
There are 3 main reasons to use a CDN that I can think of:
1. To deliver static content faster using nearby servers.
2. To stop the browser from sending cookies with every GET request.
3. To take some of the load off Apache.
1 - I haven't used CloudFront, but I have used some Akamai servers, and they do make a difference. Content is simply served from a different, nearby server, so files load relatively fast. But don't forget that this adds extra DNS lookups the first time a user loads the site after a DNS cache clean-up.
2 - I think you know about the cookie-less domain problem. If you host your site on example.com and your images at URLs like example.com/image.png, the browser sends the cookie data with every asset request. Cookies are usually ~100 bytes, but with many assets this is worth considering. If you move the assets to an example-data.com domain, the browser will not send cookies for assets in that location (see the sketch after this list). Faster pages.
3 - Reduced web server load is the other benefit. Your server will get fewer requests (mainly HTML requests), while images and other assets are served from another server.
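For point 2, a minimal nginx sketch of a cookie-less asset vhost, assuming a separate example-data.com domain and a /var/www/static document root (both are placeholders):

server {
    listen 80;
    server_name example-data.com;
    root /var/www/static;

    # long client-side cache lifetime for static assets; no application code runs here,
    # and no cookies are ever set on this domain
    location ~* \.(png|jpe?g|gif|css|js)$ {
        expires 30d;
        add_header Cache-Control "public";
    }
}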

Related

nginx: protecting against Screaming Frog and too-greedy crawlers (so no real DDoS, but close)

We have seen several occasions where a simple Screaming Frog crawl would almost take down our server (it does not go down, but it slows to a near halt and PHP processes go crazy). We run Magento ;)
We have now applied this nginx ruleset: https://gist.github.com/denji/8359866
But I was wondering if there is a stricter or better way to kick out too-greedy crawlers and Screaming Frog crawl episodes. Say after 2 minutes of intense requesting we should already know someone is running some automated system making too many requests (not blocking the Googlebot, of course). Something like the sketch below is what I have in mind.
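A rough nginx rate-limit sketch, assuming the zone name and the numbers are placeholders that would still need tuning (and that good bots are whitelisted separately):

# http {} context: one shared zone keyed by client IP, ~2 requests/second on average
limit_req_zone $binary_remote_addr zone=perip:10m rate=2r/s;

server {
    location / {
        # allow a short burst, then answer 429 to anything faster
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        # ... pass to Magento as usual ...
    }
}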
Help and ideas appreciated
Seeing how a simple SEO utility scan can bring your server to a crawl, you should realize that blocking spiders isn't the real solution. Suppose you managed to block every spider in the world, or created a sophisticated ruleset defining which number of requests per second is legit and which is not.
It would still be obvious that your server can't handle even a few simultaneous visitors. A few more visitors and they will bring your server down, whenever your store receives more traffic.
You should address the main performance bottleneck, which is PHP.
PHP is slow. With Magento it's slower. That's it.
Imagine that every request to your Magento store causes dozens and dozens of PHP files to be scanned and parsed. This hits the CPU hard.
If your PHP-FPM configuration is unoptimized, it hits your RAM hard as well.
These are the things that should be done, in order of priority, to ease the PHP strain:
Make use of Full Page Cache
Really, it's a must with Magento. You don't lose anything, you only gain performance. Common choices are:
Lesti FPC. This is the easiest to install and configure. It works most of the time even if your theme is badly coded. The payoff: your server will no longer go down and it will be able to serve more visitors. It can even store its cache in Redis if you have enough RAM and are willing to configure it. It will cache, and it will cache things fast.
Varnish. This is the fastest caching server, but it's tricky to configure if you're running Magento 1.9: you will need the Turpentine Magento plugin, which is quite picky to get working if your theme is not well coded. If you're running Magento 2, it's compatible with Varnish out of the box and quite easy to configure.
Adjust PHP-FPM pool settings
Make sure that your PHP-FPM pool is configured properly. A value of pm.max_children that is too small will make for slow page requests; a value that is too high might hang your server because it will run out of RAM. Set it to (50% of total RAM divided by 128 MB) for starters.
Also adjust pm.max_requests and set it to a sane number, e.g. 500. Leaving it at 0 (the default) often leads to "fat" PHP processes that eventually eat all of the RAM on the server.
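A minimal pool sketch following those rules, assuming a 4 GB server (so roughly 2048 MB / 128 MB = 16 children); the numbers are starting points to tune, not recommendations:

; pool file, e.g. /etc/php-fpm.d/www.conf (path varies by distro)
pm = dynamic
; ~50% of RAM divided by ~128 MB per worker on a 4 GB box
pm.max_children = 16
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
; recycle workers so they don't grow "fat"
pm.max_requests = 500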
PHP 7
If you're running Magento 2, you really should be using PHP 7 since it's twice as fast as PHP 5.5 or PHP 5.6.
The same suggestions, with configs, are in my blog post "Magento 1.9 performance checklist".

Does an increase in the number of requests to the server make the website slow?

On my office website, a webpage has 3 CSS files, 2 JavaScript files, 11 images and 1 page request, for a total of 17 requests to the server. If 10000 people visit my office site ...
Could this slow the website down because of the extra requests?
And could huge traffic cause any issues for the server?
My tiny office server has:
Intel i3 processor
Nvidia 2 GB graphics card
Microsoft Windows Server 2008
8 GB DDR3 RAM and
500 GB hard disk.
The website is developed in ASP.NET.
Net speed is 10 Mbps download and 2 Mbps upload, using a static IP address.
There are many reasons a website may be slow:
A huge spike in traffic.
Extremely large or non-optimized graphics.
A large number of external calls.
Server issues.
All websites should have optimized images, Flash files, and videos. Large media slows down the overall loading of each page. Optimize each image; PNG images can offer better-looking images at smaller file sizes. You could also run a traceroute to your site.
Hope this helps.
This question is impossible to answer because there are so many variables. It sounds like you're hypothesising that you will have 10000 simultaneous users; do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache. You can simulate 10000 requests to your server and see how the response time holds up; at the same time you can run Performance Monitor in Windows to see whether there are any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests
concurrently, to the specified URL, in this example,
"http://www.yahoo.com".
I don't think it downloads your page dependencies (JS, CSS, images), but there are probably other tools you can use to simulate that.
I'd also recommend that you enable compression on your site and set up caching, as this will significantly reduce the load and the number of requests for very little effort.
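A minimal web.config sketch for an IIS 7.x / ASP.NET site, assuming the compression and static-content modules are installed; treat it as a starting point rather than a drop-in config:

<configuration>
  <system.webServer>
    <!-- gzip static and dynamic responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    <staticContent>
      <!-- let browsers cache static files for a week -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>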
Rather than hardware, you should think about your server's upload capacity. If your upload bandwidth is low, that will of course be a problem.
The most likely reason is that one session locks all the other requests.
If you don't use session state, turn it off and check again, for example:
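A quick sketch of how that might look in Web.config (or per page via the EnableSessionState page directive); only do this if nothing on the site relies on session state:

<configuration>
  <system.web>
    <!-- disable session state site-wide -->
    <sessionState mode="Off" />
    <!-- or, per page, mark it read-only so requests aren't serialized:
         <%@ Page EnableSessionState="ReadOnly" %> -->
  </system.web>
</configuration>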
Related:
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous

Enable dynamic compression in an app on a 1 Gbps LAN?

I have a LAN of 1000 clients with 1 Gbps speeds.
One application hosted in IIS 7.5.
Fact: A one-megabyte response is transferred between the server and the client in no more than 30 milliseconds. The connection is very fast.
Fact: Some clients have older PCs (Windows XP, IE7, Pentium 4).
I think that dynamic compression is not needed in this case, because the problem is not the bandwidth but the clients' computer performance.
Do you recommend to disable compression?
My pages have a lot of JavaScript. On every post I refresh the page with JavaScript, Ajax and JSON. In some cases, when the HTML is too big, the browser becomes a little unresponsive. I think compression is contributing to this problem.
Any comments?
A useful scenario for compression is when you have to pay for bandwidth and would like to speed up the download of large pages, but it creates a bit of work for the client, which has to decompress the data before rendering it.
Turn it off.
You don't need it for serving pages over a high-speed LAN.
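If the site runs on IIS 7.5, a minimal web.config sketch for switching dynamic compression off (it can also be toggled at the server level in IIS Manager):

<configuration>
  <system.webServer>
    <!-- keep static compression, skip compressing dynamic responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="false" />
  </system.webServer>
</configuration>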
I definitely don't think you need the compression. But you are shooting in the dark here -- get yourself an HTTP debugger, such as the one included in Google Chrome, and see which parts of the pages are slow.

Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags....) and for the purpose of this question, please assume that I have maxed out the opportunities to reduce load.
I am thinking of doing a brute force traversal of all URLs in the system to prime a cache and then copying the cache contents to geodispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night.
Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution that is a configuration of open source technologies.
Thanks
I've used Squid before to reduce load on dynamically-created RSS feeds, and it worked quite well. It just takes some careful configuration and tuning to get it working the way you want.
Using a primed cache server is an excellent idea (I've done the same thing using wget and Squid). However, it is probably unnecessary in this scenario.
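For reference, a minimal sketch of the priming step, assuming Squid listens on localhost:3128 and the site can be crawled from a seed URL (host and port are placeholders):

# crawl the site through the Squid proxy so every response lands in its cache;
# --delete-after discards the local copies once they have been fetched
wget --mirror --no-parent --delete-after \
     -e use_proxy=on -e http_proxy=http://localhost:3128/ \
     http://www.example.com/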
It sounds like your data is fairly static and the problem is server load, not network bandwidth. Generally, the problem exists in one of two areas:
Database query load on your DB server.
Business logic load on your web/application server.
Here is a JSP-specific overview of caching options.
I have seen huge performance increases by simply caching query results. Even adding a cache with a duration of 60 seconds can dramatically reduce load on a database server. JSP has several options for in-memory cache.
Another area available to you is output caching. This means that the content of a page is created once, but the output is used multiple times. This reduces the CPU load of a web server dramatically.
My experience is with ASP, but the exact same mechanisms are available on JSP pages. In my experience, with even a small amount of caching you can expect a 5-10x increase in max requests per sec.
I would use tiered caching here; deploy Squid as a reverse proxy server in front of your app server as you suggest, but then deploy a Squid at each client site that points to your origin cache.
If geographic latency isn't a big deal, then you can probably get away with just priming the origin cache like you were planning to do and then letting the remote caches prime themselves off that one based on client requests. In other words, just deploying caches out at the clients might be all you need to do beyond priming the origin cache.
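A rough squid.conf fragment for one of the remote (client-site) caches, assuming the origin cache answers on port 3128 at origin-cache.example.com (both names are placeholders):

# forward cache misses to the primed origin cache instead of the app server
cache_peer origin-cache.example.com parent 3128 0 no-query default
never_direct allow all

# give this box a reasonably large disk cache (10 GB here)
cache_dir ufs /var/spool/squid 10240 16 256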

Using EC2 Load Balancing with Existing Wordpress Blog

I currently have a virtual dedicated server through Media Temple that I use to run several high-traffic WordPress blogs. They tend to receive sudden StumbleUpon traffic surges that (I'm assuming) cause the server CPU to run at 100% and slow everything down. I'm currently using WP Super Cache, S3, and CloudFront for most static files, but high traffic still causes CPU slowdowns.
From what I'm reading, it seems like I might want to use EC2 to help the existing server when traffic spikes occur. Since I'm currently using the top tier of virtual dedicated servers on Media Temple, I'd like to avoid jumping to a dedicated server if possible. I get the sense that AWS might help boost the existing server's power. How would I go about doing this?
I apologize if I'm using any of these terms incorrectly -- I'm relatively amateur when it comes to server administration. If this isn't the best way to improve performance, what is the recommended course of action?
The first thing I would do is move your database server to another Media Temple VPS. After that, look to see which one is hitting 100% CPU. If it's the web server, you can create a second instance, and use a proxy to balance the load. If it's the database, you may be able to create some indexes.
Alternatively, setting up a Squid caching server in front of your web server can take a lot of load off for anonymous users. This is the approach Wikipedia takes, since pages don't need to be re-rendered for each user.
In either case, there isn't an easy way to spin up extra capacity on EC2 unless your site is on EC2 to begin with.
There are just 3 types of instances you can have; other than that, they can't give you any more "server power". You will need to do some load balancing. There are software load balancers, such as HAProxy and nginx, which are not bad. If you don't want to deal with that, you can do DNS round robin after setting up the high-load blogs on different machines.
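A minimal nginx load-balancing sketch, assuming two backend machines at 10.0.0.11 and 10.0.0.12 (addresses and names are placeholders):

upstream blog_backends {
    server 10.0.0.11;
    server 10.0.0.12;
}

server {
    listen 80;
    server_name blog.example.com;

    location / {
        # round-robin requests across the backends
        proxy_pass http://blog_backends;
        proxy_set_header Host $host;
    }
}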
You should be able to scale them; that's the beauty of AWS: scaling.
