I am developing a couple of websites, but I have only paid for an EC2 nano instance on AWS. How many websites could I possibly host there, assuming the websites will only get minimal traffic? Most of the websites are for personal use only.
Only one way to find out ;)
No definite answer possible because it depends on a lot of factors.
But if traffic is really low, you will only be limited by the amount of disk space, and since the t2.nano runs on EBS storage, that can be as big as you want. So you could fit a lot of websites!
The t2.nano has only 512MB of memory, so it's best to pick a web server that isn't memory-hungry, such as nginx.
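As an illustration only (the values below are my assumptions, not part of this answer), a lean nginx setup for a 512MB instance might look like this in nginx.conf:
    worker_processes 1;              # the t2.nano has a single vCPU
    events {
        worker_connections 512;      # plenty for a handful of low-traffic sites
    }
    http {
        sendfile on;                 # let the kernel serve static files
        keepalive_timeout 30;
        gzip on;                     # a little CPU in exchange for less bandwidth
    }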
I run five very low-traffic websites on my t2.nano - four of them Wordpress, one custom PHP. I run Nginx, PHP 5.6, and MySQL 5.6 on the same instance. Traffic is extremely light, in the region of 2000 pages a day, which averages out to roughly a page every 45 seconds. If you include static resources it'll be higher. CloudFlare runs as the CDN, which reduces the load from static resources significantly, but doesn't cache pages.
I have MySQL on the instance configured to use very little memory; it currently uses 141MB of physical RAM. Nginx takes around 10MB of RAM. I have four PHP workers, each taking 150MB of RAM, but of that 130MB is shared, so it's really 20MB per worker after the first.
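For illustration, a similarly low-memory MySQL setup could be sketched in /etc/my.cnf like this; the exact numbers are assumptions to tune, not the settings from this answer:
    [mysqld]
    # the InnoDB buffer pool is the biggest consumer; keep it small with 512MB of RAM
    innodb_buffer_pool_size = 32M
    # fewer connections means fewer per-thread buffers
    max_connections = 20
    # keep per-thread and temporary-table buffers modest
    sort_buffer_size = 256K
    read_buffer_size = 128K
    tmp_table_size = 8M
    max_heap_table_size = 8M
    # the performance schema alone can cost tens of MB on 5.6
    performance_schema = OFF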
Here's the output of a quick performance test on the t2.nano. Note that the Nginx page cache will be serving all of the pages (a sketch of that cache configuration follows the test results below).
siege -c 50 -t10s https://www.example.com -i -q -b
Lifting the server siege... done.
Transactions: 2399 hits
Availability: 100.00 %
Elapsed time: 9.60 secs
Data transferred: 14.82 MB
Response time: 0.20 secs
Transaction rate: 249.90 trans/sec ***
Throughput: 1.54 MB/sec
Concurrency: 49.42
Successful transactions: 2399
Failed transactions: 0
Longest transaction: 0.36
Shortest transaction: 0.14
Here it is with nginx page caching turned off
siege -c 5 -t10s https://www.example.com -i -q -b
Lifting the server siege... done.
Transactions: 113 hits
Availability: 100.00 %
Elapsed time: 9.99 secs
Data transferred: 0.70 MB
Response time: 0.44 secs
Transaction rate: 11.31 trans/sec ***
Throughput: 0.07 MB/sec
Concurrency: 4.95
Successful transactions: 113
Failed transactions: 0
Longest transaction: 0.70
Shortest transaction: 0.33
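For reference, the "nginx page cache" in the first test is fastcgi_cache sitting in front of PHP-FPM. A minimal sketch, where the cache path, zone name, socket path, and cache times are assumptions rather than my exact configuration:
    http {
        fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PAGES:10m inactive=60m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        server {
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php-fpm.sock;
                fastcgi_cache PAGES;          # repeat hits never reach PHP
                fastcgi_cache_valid 200 10m;  # cache successful responses for 10 minutes
            }
        }
    }
A real Wordpress setup would also need rules to bypass the cache for logged-in users and POST requests.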
After taking a GitLab backup every day, GitLab throws a 502 error.
I checked the nginx logs but did not find much information.
After gitlab-ctl restart it starts working again.
System configuration:
OS: Ubuntu 16.04 LTS
4 GB RAM
200 GB disk space
Can anyone give a permanent solution for this?
There is a high possibility that it ran out of shared memory, since you get the 502 error each time after the backup.
Check it with gitlab-ctl tail.
It will show something like:
2019-04-12_12:37:17.27154 FATAL: could not map anonymous shared memory: Cannot allocate memory
2019-04-12_12:37:17.27157 HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 4345470976 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
2019-04-12_12:37:17.27171 LOG: database system is shut down
Then check it with free -m, which shows there is no available shared memory.
total used free shared buffers cached
Mem: 16081 13715 2365 0 104 753
-/+ buffers/cache: 12857 3223
Then check whether some process is taking too much shared memory, or whether there are too many zombie processes, and kill them with a command like ps -aef | grep ffmpeg | awk '{print $2}' | xargs kill -9
Check again with free -h; there is about 112M of shared memory now.
total used free shared buffers cached
Mem: 15G 4.4G 11G 112M 46M 416M
-/+ buffers/cache: 3.9G 11G
Swap: 0B 0B 0B
Finally, restart your GitLab with gitlab-ctl restart. After some time GitLab boots and the 502 is gone.
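If PostgreSQL's shared memory request itself is too big for the box (the FATAL line above shows it asking for roughly 4GB), an alternative to hunting down other processes is to shrink that request in /etc/gitlab/gitlab.rb and reconfigure. A sketch; the values are assumptions to adapt to your RAM:
    # /etc/gitlab/gitlab.rb
    postgresql['shared_buffers'] = "256MB"   # much smaller than the ~4GB being requested above
    postgresql['max_connections'] = 100
Then apply it with sudo gitlab-ctl reconfigure.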
After a long search I found something about it. After the backup runs, my gitlab-workhorse goes idle and gitlab.socket refuses the connection. As a temporary solution I have installed a new cron job that restarts the GitLab service after the completion of the GitLab backup cron job.
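A sketch of that workaround in root's crontab, assuming the backup runs at 02:00 and normally finishes well before 03:30 (both times are placeholders):
    # run the GitLab backup, then restart GitLab once the backup window has passed
    0 2 * * *  /opt/gitlab/bin/gitlab-rake gitlab:backup:create CRON=1
    30 3 * * * /opt/gitlab/bin/gitlab-ctl restart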
If GitLab is installed in VirtualBox on Ubuntu Server 18.04 or 20.04, please increase the RAM to 4 GB and provide at least 3 processors.
Originally nginx always buffered the proxied request body (only response buffering was configurable); according to the mailing list (Nov 2013):
The proxy_buffering directive disables response buffering, not request
buffering.
As of now, there is no way to prevent request body buffering in nginx.
It's always fully read by nginx before a request is passed to an
upstream server. It's basically a part of nginx being a web
accelerator - it handles slow communication with clients by itself and
only asks a backend to process a request when everything is ready.
Since version 1.7.11 (Mar 2015), nginx has the proxy_request_buffering directive; see details below:
Update: in nginx 1.7.11 the proxy_request_buffering directive is
available, which allows disabling buffering of the request body. It
should be used with care though, see the docs.
See docs for more details:
Syntax: proxy_request_buffering on | off;
Default: proxy_request_buffering on;
Context: http, server, location
When buffering is enabled, the entire request body is read from the
client before sending the request to a proxied server.
When buffering is disabled, the request body is sent to the proxied
server immediately as it is received. In this case, the request cannot
be passed to the next server if nginx already started sending the
request body.
When HTTP/1.1 chunked transfer encoding is used to send the original
request body, the request body will be buffered regardless of the
directive value unless HTTP/1.1 is enabled for proxying.
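In practice the directive is usually disabled only for the specific locations that need streaming uploads. A sketch, with the upstream name and path as placeholders:
    location /upload {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # otherwise chunked request bodies are buffered anyway
        proxy_request_buffering off;     # stream the body to the upstream as it arrives
    }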
The question is about buffering for Nginx (Load Balancer). For instance, we have the following scheme:
Nginx (LB) -> Nginx (APP) -> Backend
Nginx (APP) buffers the request from the load balancer and also buffers the response from the Backend. But does it make sense to buffer the request and response on the Nginx (LB) side if both Nginx nodes are physically located close to each other (less than 2ms ping) with a pretty fast network connection in between?
I measured the following benchmarks. Please note that this is for illustration only; production loads would be significantly higher:
siege --concurrent=50 --reps=50 https://domain.com
proxy_request_buffering on;
Transactions: 2500 hits
Availability: 100.00 %
Elapsed time: 58.57 secs
Data transferred: 577.31 MB
Response time: 0.55 secs
Transaction rate: 42.68 trans/sec
Throughput: 9.86 MB/sec
Concurrency: 23.47
Successful transactions: 2500
Failed transactions: 0
Longest transaction: 2.12
Shortest transaction: 0.10
proxy_request_buffering off;
Transactions: 2500 hits
Availability: 100.00 %
Elapsed time: 57.80 secs
Data transferred: 577.31 MB
Response time: 0.53 secs
Transaction rate: 43.25 trans/sec
Throughput: 9.99 MB/sec
Concurrency: 22.75
Successful transactions: 2500
Failed transactions: 0
Longest transaction: 2.01
Shortest transaction: 0.09
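For context, the load-balancer side of the scheme that these two runs toggle would look roughly like this (the upstream address is a placeholder, and TLS is omitted for brevity):
    # Nginx (LB)
    upstream nginx_app {
        server 10.0.0.2:80;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://nginx_app;
            proxy_request_buffering off;   # "on" (the default) in the first run, "off" in the second
            proxy_buffering on;            # response buffering left at its default
        }
    }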
My Hudson jobs are crashing on each run with this error:
Caused by: java.io.IOException: error=12, Not enough space
at java.lang.UNIXProcess.forkAndExec(Native Method)
I found documentation on Stack Overflow and on the Jenkins website regarding this error, which indicates a swap space problem (https://wiki.jenkins-ci.org/display/JENKINS/IOException+Not+enough+space).
However, maybe my problem is different, because if I launch the process manually it works fine.
A weird thing is that I see different results from top and from prstat:
Specs:
Hudson processes run under their own Unix user
OS: SunOS dc5c00-d12 5.10 Generic_147440-19 sun4v sparc sun4v
Memory:
from top:
32G phys mem, 6255M free mem, 16G total swap, 16G free swap
from prstat
NPROC USERNAME SWAP RSS MEMORY TIME CPU
50 user1 12G 12G 39% 89:02:31 0.3%
36 user2 11G 6779M 21% 155:17:41 0.0%
26 user3 10G 8509M 26% 4787:37:4 8.0%
6 hudson 572M 556M 1.7% 0:08:25 0.0%
57 root 280M 285M 0.9% 138:46:05 0.0%
Can anyone confirm whether I have a swap issue? top shows 16GB of free swap...
EDIT:
Results from swap -s (after the problem was temporarily resolved):
total: 19940168k bytes allocated + 12578048k reserved = 32518216k used, 4118208k available
It is certainly a swap issue.
top reports as free the swap blocks that do not contain paged-out data. However, even while unused, some of these blocks can be reserved (i.e. virtual memory that is allocated but still untouched). When you have no more blocks to back memory reservations, you get this "Not enough space" exception.
swap -s shows your applications are reserving more than 12 GB while your swap area is just 16 GB. I would double the size of your swap to prevent a virtual memory shortage in your case.
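On Solaris 10 you can grow swap without repartitioning by adding a swap file; a sketch, with the path and size as assumptions:
    # create a 16 GB swap file and add it to the swap pool
    mkfile 16g /export/swapfile
    swap -a /export/swapfile
    # make it permanent with a line in /etc/vfstab:
    #   /export/swapfile  -  -  swap  -  no  -
    # verify the new capacity
    swap -l
    swap -s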
(I googled and searched this forum for hours, found some topics, but none of them worked for me)
I'm using Wordpress with: Varnish + Nginx + PHP-FPM + APC + W3 Total Cache + PageSpeed.
As I'm using Varnish, the first time I request www.mysite.com it uses just 10% of CPU. On the second request, it is served from the cache. The problem is when passing a request parameter in the URL.
For just one request (www.mysite.com?1=1), top shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7609 nginx 20 0 438m 41m 28m S 11.6 7.0 0:00.35 php-fpm
7606 nginx 20 0 437m 39m 26m S 10.3 6.7 0:00.31 php-fpm
After the page is fully loaded, these processes above are still active. And after 2 seconds, they are replaced by another 2 php-fpm processes (below), which stay active for 3 seconds.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7665 nginx 20 0 444m 47m 28m S 20.9 7.9 0:00.69 php-fpm
7668 nginx 20 0 444m 46m 28m R 20.9 7.9 0:00.63 php-fpm
40% CPU usage for just one uncached request!
Strange things:
CPU usage is higher after the page has loaded
When I purge the cache (W3 and Varnish), it takes just 10% of CPU to load an uncached page
This high CPU usage only happens when passing a request parameter or in the Wordpress admin
When I try to do 10 requests (pressing F5 ten times), the server stops serving and the php-fpm log shows:
WARNING: [pool www] server reached max_children setting (10), consider raising it
I raised that value to 20, same problem.
I'm using pm=ondemand (pm.max_children=10 and pm.max_requests=500).
Initially I was using pm=dynamic (pm.max_children=10, pm.start_servers=1, pm.min_spare_servers=1, pm.max_spare_servers=2, pm.max_requests=500) and the same problem happened.
Could anyone help? Any help would be appreciated!
PS:
APC is ON (98% Hits, 2% Misses)
Server is Amazon Micro (613MB RAM)
PHP 5.3.26 (fpm-fcgi)
Linux version 3.4.48-45.46.amzn1.x86_64 Red Hat 4.6.3-2 (I think it's based on CentOS 5)
First, reduce the stack of caches. Why use Varnish, which serves pages from memory, when you're already using W3 Total Cache, which serves from memory as well?
W3 Total Cache is CPU-intensive! It does not just cache items but also compresses, minifies, and merges files on the fly.
You have only about 600MB of memory on your machine, which is not a lot, and your CPU power is less than a modern smartphone has. Memory access is extremely slow compared to a dedicated server because of the Xen virtualization layer - that's why less is more.
Make sure W3 Total Cache is properly set up so it actually caches items, then warm up your cache and you should be fine.
Have a look at Google's nginx PageSpeed module (https://github.com/pagespeed/ngx_pagespeed); it can do the same thing W3 Total Cache does, just much more efficiently, because it happens in the web server, not in PHP.
Nginx can also serve pages directly from memcached: http://www.kingletas.com/2012/08/full-page-cache-with-nginx-and-memcache.html (example article, might need some more investigation).
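A minimal sketch of that memcached approach (the key scheme and fallback backend are assumptions, not taken from the article):
    location / {
        set $memcached_key "$scheme$request_method$host$request_uri";
        memcached_pass 127.0.0.1:11211;     # try the pre-warmed page cache first
        default_type text/html;
        error_page 404 502 504 = @app;      # on a miss, fall back to the application
    }
    location @app {
        proxy_pass http://127.0.0.1:8080;   # placeholder backend that renders the page
    }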
Problem solved!
For those who are having the same problem:
Check Varnish configuration;
Check your Wordpress plugins;
1) In my case, TTL was not configured in Varnish, so nothing was being cached.
This config worked for me:
sub vcl_fetch {
    if (!(req.url ~ "wp-(login|admin)")) {
        unset beresp.http.set-cookie;
        set beresp.ttl = 48h;
    }
}
2) The high CPU usage AFTER the page loads was caused by a Wordpress plugin called "Scroll Triggered Box".
It was doing some AJAX after the page had loaded. I disabled that plugin and the high load stopped.
There are two factors at play here:
You are using a micro instance, which has a burstable CPU profile. It can burst up to 2 ECUs, then be throttled to much less than 1 (some estimates put this at around 0.1 - 0.2 ECUs).
While you are logged in as an admin, Wordpress caching plugins often bypass or reduce caching. W3 Total Cache should allow you to change this if you want caching on all the time.
I'm working on a project where we need to serve a small static XML file at ~40k requests/s.
All incoming requests are sent to the server from HAProxy. However, none of the requests will be persistent.
The issue is that when benchmarking with non-persistent requests, the nginx instance caps out at 19,114 req/s. When persistent connections are enabled, performance increases by nearly an order of magnitude, to 168,867 req/s. The results are similar with G-WAN.
When benchmarking non-persistent requests, CPU usage is minimal.
What can I do to increase performance with non-persistent connections and nginx?
[root@spare01 lighttpd-weighttp-c24b505]# ./weighttp -n 1000000 -c 100 -t 16 "http://192.168.1.40/feed.txt"
finished in 52 sec, 315 millisec and 603 microsec, 19114 req/s, 5413 kbyte/s
requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored
status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 290000000 bytes total, 231000000 bytes http, 59000000 bytes data
[root@spare01 lighttpd-weighttp-c24b505]# ./weighttp -n 1000000 -c 100 -t 16 -k "http://192.168.1.40/feed.txt"
finished in 5 sec, 921 millisec and 791 microsec, 168867 req/s, 48640 kbyte/s
requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored
status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 294950245 bytes total, 235950245 bytes http, 59000000 bytes data
Your two tests are identical except for HTTP Keep-Alives:
./weighttp -n 1000000 -c 100 -t 16 "http://192.168.1.40/feed.txt"
./weighttp -n 1000000 -c 100 -t 16 -k "http://192.168.1.40/feed.txt"
And the one with HTTP Keep-Alives is 10x faster:
finished in 52 sec, 19114 req/s, 5413 kbyte/s
finished in 5 sec, 168867 req/s, 48640 kbyte/s
First, HTTP Keep-Alives (persistent connections) make HTTP requests run faster because:
Without HTTP Keep-Alives, the client must establish a new CONNECTION for EACH request (this is slow because of the TCP handshake).
With HTTP Keep-Alives, the client can send all requests over the SAME CONNECTION. This is faster because there are fewer things to do.
Second, you say that the static XML file is "small".
Is "small" nearer to 1 KB or 1 MB? We don't know. But that makes a huge difference in terms of the available options to speed things up.
Huge files are usually served through sendfile() because it works in the kernel, freeing the usermode server from the burden of reading from disk and buffering.
Small files can use more flexible options available for application developers in usermode, but here also, file size matters (bytes and kilobytes are different animals).
Third, you are using 16 threads with your test. Are you really enjoying 16 PHYSICAL CPU Cores on BOTH the client and the server machines?
If that's not the case, then you are simply slowing down the test to the point that you are no longer testing the web servers.
As you can see, many factors influence performance. And there are more with OS tuning (TCP stack options, available file handles, system buffers, etc.).
To get the most out of a system, you need to examine all those parameters and pick the best ones for your particular exercise.
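As an example of those OS-level knobs on Linux (the values are starting points to benchmark, not recommendations for your hardware):
    # /etc/sysctl.conf - apply with: sysctl -p
    net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports for short-lived connections
    net.ipv4.tcp_tw_reuse = 1                   # reuse sockets stuck in TIME_WAIT
    net.core.somaxconn = 65535                  # larger accept queue
    net.ipv4.tcp_max_syn_backlog = 65535        # larger SYN backlog for connection bursts
    net.core.netdev_max_backlog = 65535         # more packets queued per NIC before dropping
    fs.file-max = 2097152                       # plenty of file handles for 40k sockets/s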