I'm trying to block some specific countries in Apache 2.4.x.
I downloaded the list of IPs from https://www.ip2location.com/free/visitor-blocker, put them in a separate file, and included that file from httpd.conf.
The file is 8.5 MB, and it seems to significantly slow down Apache 2.4's startup: the startup time increased from a few seconds (without the block list) to a few minutes (with it). Sometimes the server fails to start altogether.
Is there a way to speed up the server startup time?
Thank you
If the IP address list is huge, you can consider blocking it at a different layer:
Firewall - such as iptables/ipset (see the sketch below)
Web server - such as Apache or nginx. You are working on this now.
Application - such as a PHP page that checks the client IP against a database
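Since the list is large, the firewall layer is usually the fastest option. A minimal sketch using ipset on Linux (the file name and set name are hypothetical, and it assumes the downloaded list has been converted to one network per line):

ipset create countryblock hash:net
while read net; do ipset add countryblock "$net"; done < blocklist.txt
iptables -I INPUT -m set --match-set countryblock src -j DROP

Lookups in an ipset are hash-based, so a large list does not slow down packet filtering the way a multi-megabyte httpd.conf slows down Apache's startup.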
I want to set up a server (hosted on aws/or a running system in some part of the world) as an NTP server that can be queried globally.
Currently, I have modified the ntp.conf file on the node that is to act as the server. But the problem is that when I try to query time from this server with an NTP client, or rather when using sudo ntpdate, it says no suitable server found.
However, if I replicate the same on my local network (the server, as well as the querying node, are all on the local network) then this works perfectly fine.
I think the problem might lie in the ntp.conf file. Do I need to put in some specific restrict lines for this to work publicly as well? And no, I cannot list the server on the public NTP server lists. Is it at all possible?
Solved. This was a port issue. I was testing on AWS and had to manually open the relevant UDP port.
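For anyone hitting the same issue: NTP uses UDP port 123, so that port has to be reachable from outside (in the AWS security group and in any host firewall), and ntp.conf needs restrict lines that do not block remote clients. A minimal sketch (the rules and the placeholder address are illustrative, not the original configuration):

# open UDP 123 on the host firewall (the security group rule is added in the AWS console/CLI)
iptables -I INPUT -p udp --dport 123 -j ACCEPT
# /etc/ntp.conf: serve time to anyone, but forbid modification and status queries
restrict default kod nomodify notrap nopeer noquery
# test from a remote client (the address is a placeholder)
ntpdate -q <server-public-ip>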
This might be a very silly question but I'll still ask it.
Nginx reads the nginx.conf file and keeps that information in memory until you do an 'nginx -s reload'.
Is there a way to modify the Nginx configuration directly in memory? I need to reload multiple times per minute and the config file can be huge.
Basically, the problem I'm trying to solve is that I have multiple Docker containers coming up and down dynamically on a set of host machines. Every time a container comes up, it has a different IP and open port (application design constraint), and I'm thinking of using Nginx as a reverse proxy. What should I do to solve this, considering that the final product might have 3000-5000 containers running on a cluster of hosts, with containers launched/destroyed at a rate of around 100 per second? I need a fast way to make sure routing happens properly.
Hmm, probably not. Nginx loads its config into multiple worker processes, so trying to change it on the fly does not look like a good idea.
What is your goal? You seem to need to do some dynamic routing or some other sort of treatment. You should instead look at:
nginx directives and modules such as eval
Lua scripting
nginx module dev (in C/C++)
This would allow you to do more or less whatever you want: you can read some config from a store like Redis and change the behavior of your code according to the value found there.
For example, you could do a lot just by reading a value from Redis and then using an if directive in your nginx config file. See How can I get the value from Redis and put it in a variable in NGiNX? for getting a Redis value in nginx with the eval module.
UPDATE:
For dynamic IPs in nginx, you should look at Dynamic proxy_pass to $var with nginx 1.0.
So I would suggest that you:
have a process that writes the IP addresses of your Docker containers into Redis
read them in nginx with the eval and redis modules
use the value in proxy_pass (see the sketch below)
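A rough sketch of the "proxy to a variable" pattern those links describe (the $backend value and addresses are hypothetical; in practice it would be filled in per request from Redis via the eval module or Lua):

server {
    listen 80;
    resolver 127.0.0.1;                  # only needed if $backend contains a hostname
    location / {
        set $backend "172.17.0.5:8080";  # would be looked up in Redis for each request
        proxy_pass http://$backend;
    }
}

Because proxy_pass uses a variable, Nginx evaluates the target per request instead of baking it into the loaded configuration, so you do not have to reload for every container change.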
I have a physical server running Nginx and MySQL and serving my PHP website. The server has a multi-core processor and 16 GB of RAM, and it can handle a certain amount of web traffic.
Now, instead of this single server, if I run multiple Docker containers with individual instances of Nginx (app server) and MySQL (DB server) in them, and load balance between the application and database containers, will it be able to handle the same amount of traffic as the single server did, or will it be less (performance-wise)?
How will the performance be if I use a virtual server like an EC2 instance or a DigitalOcean Droplet with the same hardware configuration instead of a physical server?
Since all processes run on the native host (you can run ps aux on the host, outside the containers, and see them), there should be very little overhead. The network bridging and iptables entries that forward packets to the containers will add some CPU overhead, but I can't imagine that being too onerous.
If the question is several nginx instances + 1 MySQL versus several containers each with its own nginx + MySQL, performance-wise it would probably be better not to use containers for MySQL, mainly because of how much memory a single MySQL instance can have versus multiple separate instances. You can still run nginx in separate containers but use one central MySQL for all sites, in a container or not.
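As an illustration of that layout, a minimal docker-compose sketch with two Nginx app containers sharing one central MySQL (the image tags, names and password are placeholders, not a tested setup):

version: "3"
services:
  app1:
    image: nginx:stable
    ports:
      - "8081:80"
  app2:
    image: nginx:stable
    ports:
      - "8082:80"
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example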
I've found that Instagram shares their technology implementation with other developers through their blog. They have some great solutions for the problems they run into. One of those solutions is an Elastic Load Balancer on Amazon with 3 nginx instances behind it. What is the task of those nginx servers? What is the task of the Elastic Load Balancer, and what is the relation between them?
Disclaimer: I am no expert on this in any way and am in the process of learning about the AWS ecosystem myself.
The ELB (Elastic Load Balancer) has no functionality on its own except receiving requests and routing them to the right server. The servers can run Nginx, IIS, Apache, lighttpd, you name it.
I will give you a real use case.
I had one Nginx server running one WordPress blog. This server was, like I said, powered by Nginx serving static content and "upstreaming" .php requests to php-fpm running on the same server. Everything was going fine until, one day, the blog was featured on a TV show. I had a ton of users and the server could not keep up with that much traffic.
My first reaction was to just use the AMI (Amazon Machine Image) to spin up a copy of my server on a more powerful instance like m1.heavy. The problem was that I knew traffic would keep increasing over the next couple of days. Soon I would have to spin up an even more powerful machine, which would mean more downtime and trouble.
Instead, I launched an ELB (Elastic Load Balancer) and updated my DNS to point website traffic at the ELB instead of directly at the server. The user doesn't know the server IP or anything; he only sees the ELB, and everything else goes on inside Amazon's cloud.
The ELB decides which server the traffic goes to. You can have an ELB and only one server behind it at a time (if your traffic is low at the moment), or hundreds. Servers can be created and added to the server array (server group) at any time, or you can configure auto scaling to spawn new servers and add them to the ELB server group using the Amazon command line, all automatically.
Amazon CloudWatch (another product and an important part of the AWS ecosystem) is always watching your servers' health. It knows when all the servers are becoming too loaded and is the agent that gives the order to spawn another server (using your AMI). When the servers are not under heavy load anymore, they are automatically terminated (or stopped, I don't recall).
This way I was able to serve all users at all times. When the load was light I would have the ELB and only one Nginx server; when the load was high I would let it decide how many servers I needed (according to server load). Minimal downtime. Of course, you can set limits on how many servers you can afford at the same time and so on, so you don't get billed more than you can pay.
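As mentioned, the ELB and the auto scaling group can be wired up from the command line. A rough sketch with the AWS CLI (the names, AMI ID, zone and sizes are placeholders, not from the original setup; check the current docs before relying on it):

aws elb create-load-balancer --load-balancer-name my-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a
aws autoscaling create-launch-configuration --launch-configuration-name my-lc \
    --image-id ami-12345678 --instance-type m1.large
aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc --min-size 1 --max-size 10 \
    --load-balancer-names my-lb --availability-zones us-east-1a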
You see, the Instagram guys said the following: "we used to run 2 Nginx machines and DNS Round-Robin between them". This is inefficient, IMO, compared to an ELB. DNS round robin is DNS routing each request to a different server: the first goes to server one, the second goes to server two, and so on.
The ELB actually watches the servers' health (CPU usage, network usage) and decides which server the traffic goes to based on that. Do you see the difference?
And they say: "The downside of this approach is the time it takes for DNS to update in case one of the machines needs to get decommissioned."
DNS round robin is a form of load balancing, but if one server goes kaput and you need to update DNS to remove it from the server group, you will have downtime (DNS takes time to propagate to the whole world), and some users will still get routed to the bad server. With an ELB this is automatic: if a server is in bad health, it does not receive any more traffic - unless, of course, the whole group of servers is in bad health and you do not have any kind of auto scaling set up.
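For reference, DNS round robin is nothing more than several A records for the same name, which resolvers rotate through (hypothetical zone and documentation IPs):

www.example.com.  300  IN  A  203.0.113.10
www.example.com.  300  IN  A  203.0.113.11

The 300-second TTL is exactly the problem described above: even after you remove a dead server's record, clients that have it cached keep hitting that server until the TTL expires.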
And now the guys at Instagram: "Recently, we moved to using Amazon’s Elastic Load Balancer, with 3 NGINX instances behind it that can be swapped in and out (and are automatically taken out of rotation if they fail a health check).".
The scenario I illustrated is fictional, and reality is actually more complex than that, but nothing that cannot be solved. For instance, if users upload pictures to your application, how can you keep consistency between all the machines in the server group? You would need to store the images on an external service like Amazon S3. From another post on Instagram engineering: "The photos themselves go straight to Amazon S3, which currently stores several terabytes of photo data for us." If they have 3 Nginx servers on the load balancer and all servers serve HTML pages in which the image links point to S3, there is no problem. If the images were stored locally on each instance, there would be no way to do it.
All servers on the ELB would also need an external database. For that Amazon has RDS: all machines can point to the same database and data consistency is guaranteed.
RDS also has "read replicas" - that is RDS's way of load balancing reads. I don't know much about that at this time, sorry.
Try and read this: http://awsadvent.tumblr.com/post/38043683444/using-elb-and-auto-scaling
Can you please point out the blog entry?
Load balancers balance load. They monitor the web servers' health (response time, etc.) and distribute the load between the web servers. In more complex implementations it is possible to have new servers spawn automatically if there is a traffic spike. Of course, you need to make sure there is consistency between the servers; they can share the same databases, for instance.
So I believe the load balancer gets hit first and decides which server to route the traffic to according to server health.
Nginx is a web server that is extremely good at serving a lot of static content to simultaneous users.
Requests for dynamic pages can be offloaded to a different server using CGI/FastCGI, or the same servers that run Nginx can also run php-fpm.
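That setup is just a fastcgi_pass in the Nginx config. A minimal sketch (the paths and the php-fpm socket are assumptions, not from the post):

server {
    listen 80;
    root /var/www/html;
    location / {
        try_files $uri $uri/ =404;                 # static files served directly by Nginx
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # or a remote host:port to offload PHP
    }
}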
A lot of possibilities. I am on my cell phone right now; tomorrow I can write a little more.
Best regards.
I am aware that I am late to the party, but I think the use of NGINX instances behind the ELB in the Instagram blog post is to provide a highly available load balancer, as described here.
The NGINX instances do not seem to be used as web servers in the blog post.
For that role they mention:
Next up comes the application servers that handle our requests. We run Django on Amazon High-CPU Extra-Large machines
So the ELB is used just as a replacement for their older solution of DNS Round-Robin between NGINX instances, which did not provide high availability.
We are using Windows Server 2003 with dual CPUs. IIS gets flooded with requests and is not able to handle them, but at the same time it uses less than 20% of the CPU and less than 40% of the RAM. When the server is not able to serve any requests, not only is it impossible to browse the site, it also stops serving the images used on our other sites.
We are thinking of installing VMware to run 2 servers on this machine, using one server to serve ASP.NET pages and the other one to serve images and simple HTML pages.
Do you guys know how we can route requests for images and HTML pages to one server and requests for .aspx pages to the other?
Any ideas are appreciated.
Thank you,
Denis
What is the state of the network? 100 Mb at 100%?
In IIS, are you limiting the number of connections? Are you limiting bandwidth?
Are there server errors in the server event log?
What is your database activity? Is the database bottlenecking your web server?
Is the DB network utilization very high as well? Do the DB and web server talk over the same network? Some web servers have two network cards; the DB and web server should not share the same bandwidth with external traffic. Put external traffic on one network and internal communication on a "backend" network.
Do you have ANY caching enabled? Output, data?
You should be sure that proper data caching is used.
http://msdn.microsoft.com/en-us/library/ms972379.aspx
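For example, in ASP.NET, output caching for a whole page can be enabled with a single page directive (the 60-second duration is just an illustration):

<%@ OutputCache Duration="60" VaryByParam="None" %>

This lets IIS serve the cached output instead of re-executing the page (and its database queries) for every request during the cache window.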
You should try using a CDN (content delivery network) or deploy your own CR (content repository) server, with different URLs than your website:
www.yoursite.com/index.aspx
your images / CSS / JS can all be served from a CR server:
www.yourcdn.com/images/bigImage.jpg
or
cdn.yoursite.com/images/bigImage.jpg
or
cr.yoursite.com/images/bigImage.jpg
Since your web server CPU utilization is so low, try adding HTTP compression to lower some of the network utilization, as per David's good comment.
If your network is at 100% and your CPU is at 40%, then adding more processing power and/or virtualizing machines isn't going to help. You can either add more bandwidth (how depends on your hosting situation), use a CDN as BigBlondeViking suggests, or reduce bandwidth usage in your app (exactly how depends on the app). The easiest option is really a CDN in most cases.
Now, once you solve this bandwidth bottleneck, you might start having CPU usage problems, as the number of requests you can handle will increase dramatically.
BigBlondeViking has a few good points.
But I want to add that putting 2 VMs on the machine probably won't help you much. What we do (and what I would recommend to anyone) is have 2 layers of servers:
Web server(s) running Apache in the DMZ
these serve your images, CSS, JS and other static content
they handle SSL
they also act as a reverse proxy (using mod_proxy)
Application server(s) running IIS
these serve your ASP.NET pages
This helps add a level of scalability and security to your site.
Sample Apache mod_proxy config:
<VirtualHost 555.55.555.555:80>
ServerName domain.com
DocumentRoot c:/docroot
ProxyPass /img !
ProxyPass /js !
ProxyPass /css !
ProxyPass / http://serverA/vdir
ProxyPassReverse / http://serverA/vdir
</VirtualHost>
This will proxy all requests for / and its subdirectories, except /img, /js, and /css, to http://serverA/vdir; those three paths are served by Apache itself from the DocumentRoot.