Efficiently using multiple docker containers in a single host - nginx

I have a physical server running Nginx and MySQL and serving my PHP website. The server has a multi-core processor and 16 GB of RAM, and it can handle a certain amount of web traffic.
Now suppose that, instead of this single server, I run multiple Docker containers on the same machine, each running an individual instance of Nginx (app server) or MySQL (DB server), and load balance between the application and database containers. Will this handle the same amount of traffic as the single server did, or less (performance-wise)?
How will the performance compare if I use a virtual server, such as an EC2 instance or a DigitalOcean Droplet with the same hardware configuration, instead of a physical server?

Since all processes run on the native host (you can run ps aux on the host, outside the containers, and see them), there should be very little overhead. The network bridging and iptables entries needed to forward packets to the virtual hosts will add some CPU overhead, but I can't imagine that being too onerous.
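As a quick check (a minimal sketch; the container name is arbitrary), you can confirm from the host that containerized processes are ordinary host processes:

```sh
# Start an nginx container
docker run -d --name web nginx

# From the host, outside any container, the nginx master and worker
# processes appear as normal entries in the process table
ps aux | grep '[n]ginx'
```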

If the question is several nginx instances plus one MySQL versus several containers each bundling nginx plus MySQL, performance would probably be better without splitting the database, mainly because of MySQL and how much memory it can use as a single instance versus several separate instances. You can still run the nginx instances in separate containers but use one central MySQL for all sites, whether it runs in a container or not.
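A minimal sketch of that layout (the container names, network name, and password are placeholders):

```sh
# One user-defined network so every site container can reach the DB
docker network create appnet

# A single central MySQL instance shared by all sites
docker run -d --name db --network appnet \
    -e MYSQL_ROOT_PASSWORD=changeme mysql:5.7

# Separate nginx containers per site, all resolving the DB as "db"
docker run -d --name site1 --network appnet -p 8081:80 nginx
docker run -d --name site2 --network appnet -p 8082:80 nginx
```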

Related

Apache 2.4.x with IP blacklist slow to start

I'm trying to block some specific countries in Apache 2.4.x.
I downloaded the list of IPs from https://www.ip2location.com/free/visitor-blocker, put them in a separate file, and included it in the httpd.conf file.
The size of this file is 8.5 MB and it seems to significantly slow down Apache 2.4's startup time. In particular, startup increased from a few seconds (without the block list) to a few minutes (with the block list). Sometimes the server fails to start at all.
Is there a way to speed up the server startup time?
Thank you
If the IP address list is huge, you can consider blocking it at a different layer of the stack:
Firewall - such as iptables (ideally with ipset for large lists)
Web server - such as Apache 2.4 or nginx; this is what you are working on now
Application - such as a PHP page that queries a database
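For a list this size the firewall layer is usually the fastest, since the kernel drops the packets before Apache ever sees them. A minimal sketch using ipset plus iptables (the set name and file path are placeholders; the file is assumed to hold one CIDR range per line):

```sh
# A hash:net set handles large lists of CIDR ranges efficiently
ipset create blocked_countries hash:net

# Load the downloaded ranges into the set
while read -r cidr; do
    ipset add blocked_countries "$cidr"
done < /etc/blocklists/country-ranges.txt

# Drop traffic from those ranges before it reaches Apache
iptables -I INPUT -m set --match-set blocked_countries src -j DROP
```

For bulk loading, `ipset restore` with a pre-generated dump file is considerably faster than adding entries one at a time.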

How to have a scalable docker cluster with a reverse proxy and load balancer serving several sites from one VM

I am trying to set up a scalable platform that will not only scale itself but also any other applications running on it, all on one host (VM).
(Diagram of the intended setup omitted.)
https://github.com/jwilder/nginx-proxy
This provides a reverse proxy that routes each requested site (e.g. hello.example.com) to the container whose VIRTUAL_HOST environment variable carries that value.
The only thing missing in this setup is the ability to spin up load-balancer containers for each host (web application), so that the web apps themselves can also be scaled.
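For reference, the basic nginx-proxy pattern looks like this (the app image and hostname are placeholders); the proxy also round-robins across containers that share the same VIRTUAL_HOST value, which covers the simple scaling case:

```sh
# The proxy watches the Docker socket and regenerates its nginx
# config whenever containers start or stop
docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# Two instances of the same app behind one hostname; nginx-proxy
# balances between them automatically
docker run -d -e VIRTUAL_HOST=hello.example.com my-app-image
docker run -d -e VIRTUAL_HOST=hello.example.com my-app-image
```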
Docker clusters normally span more than one machine running the docker engine.
I suspect what you're looking for is a solution that supports deploying multiple applications on a single server. For the neatest PaaS you'll ever reverse engineer, check out Dokku:
http://dokku.viewdocs.io/dokku/

How to encrypt docker images or source code in docker images?

Say I have a docker image and I have deployed it on some server, but I don't want other users to access this image. Is there a good way to encrypt the docker image?
Realistically, no: if a user has permission to run the Docker daemon, then they are going to have access to all of the images. This is due to the elevated permissions Docker requires in order to run.
See the extract from the Docker security guide below for more on why this is.
Docker daemon attack surface
Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.

First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory will be the / directory on your host; and the container will be able to alter your host filesystem without any restriction. This is similar to how virtualization systems allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem (or even your root block device) with a virtual machine.

This has a strong security implication: for example, if you instrument Docker from a web server to provision containers through an API, you should be even more careful than usual with parameter checking, to make sure that a malicious user cannot pass crafted parameters causing Docker to create arbitrary containers.

For this reason, the REST API endpoint (used by the Docker CLI to communicate with the Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you happen to run Docker directly on your local machine, outside of a VM). You can then use traditional UNIX permission checks to limit access to the control socket.

You can also expose the REST API over HTTP if you explicitly decide to do so. However, if you do that, being aware of the above mentioned security implication, you should ensure that it will be reachable only from a trusted network or VPN; or protected with e.g., stunnel and client SSL certificates. You can also secure them with HTTPS and certificates.

The daemon is also potentially vulnerable to other inputs, such as image loading from either disk with ‘docker load’, or from the network with ‘docker pull’. This has been a focus of improvement in the community, especially for ‘pull’ security. While these overlap, it should be noted that ‘docker load’ is a mechanism for backup and restore and is not currently considered a secure mechanism for loading images. As of Docker 1.3.2, images are now extracted in a chrooted subprocess on Linux/Unix platforms, being the first step in a wider effort toward privilege separation.

Eventually, it is expected that the Docker daemon will run with restricted privileges, delegating operations to well-audited sub-processes, each with its own (very limited) scope of Linux capabilities, virtual network setup, filesystem management, etc. That is, most likely, pieces of the Docker engine itself will run inside of containers.

Finally, if you run Docker on a server, it is recommended to run exclusively Docker on the server, and move all other services within containers controlled by Docker. Of course, it is fine to keep your favorite admin tools (probably at least an SSH server), as well as existing monitoring/supervision processes (e.g., NRPE, collectd, etc).
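For reference, the "traditional UNIX permission checks" mentioned above come down to the ownership and mode of the daemon's control socket; checking who can reach it tells you who effectively has root on the host (paths shown are the defaults on most Linux installs):

```sh
# Typically owned root:docker with mode 660, so every member of the
# "docker" group can drive the daemon (and read any image)
ls -l /var/run/docker.sock
stat -c '%U:%G %a' /var/run/docker.sock
```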
Say only some strings need to be encrypted. You could encrypt this data using openssl or an alternative solution. The encryption would be set up inside the Docker container: when the image is built, the data is encrypted; when the container is run, the data is decrypted (possibly by an entrypoint script using a passphrase passed from a .env file). This way the image can be stored safely.
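A minimal sketch of that approach (file names, paths, and the SECRET_PASSPHRASE variable are hypothetical; -pbkdf2 needs OpenSSL 1.1.1 or later):

```sh
# At image build time: encrypt the sensitive strings into the image
openssl enc -aes-256-cbc -salt -pbkdf2 \
    -in secrets.txt -out secrets.enc \
    -pass env:SECRET_PASSPHRASE

# At container start (e.g. from an entrypoint script): decrypt,
# reading the passphrase from the environment, e.g. supplied with
# `docker run --env-file .env ...`
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in secrets.enc -out /run/secrets.txt \
    -pass env:SECRET_PASSPHRASE
```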
I am going to play with it this week as time permits, as I am pretty curious myself.

Latency between two Azure VMs

I have a chatty ASP.NET 4.5 web application hosted on one Large (4 cores, 7 GB) Azure VM. The web application is loosely coupled to the data tier via a dedicated WCF service. The application database is hosted by a dedicated SQL Server instance on another Large (4 cores, 7 GB) Azure VM. The WCF endpoint communicates with the DB VM via an ASP.NET connection string that uses the DB VM's public DNS name, e.g. xyz.cloudapp.net.
The WEB and DB VMs appear to be in different subnets (their addresses differ from the second octet onwards), but both are in the same Azure location.
When running the exact same solution on a single Medium (2 cores, 3.5 GB) Azure VM, the latency issues are much less pronounced.
I am looking for suggestions on how to reduce the WEB-to-DB latency as much as possible.
If you have two VMs in the same data center that need to communicate with each other, don't use their public DNS. Create an Affinity Group, create a virtual network in that affinity group, and then place both VMs in the virtual network (you might need to shut them down, delete them without deleting their VHDs, and then create them from the data disks in the new vnet).
Accessing VMs through their public DNS names (and thus through the Azure load balancer) adds about 0.5 ms of latency to each request - not recommended for a chatty app.
It almost sounds like you have the two VMs running in two separate cloud services. Might I suggest placing both machines in the same cloud service? This should allow you to access the database server from the web tier via its short DNS name (i.e. the server name). This should not only help secure the database server by allowing you to remove any input endpoints you have declared on it, but also reduce latency, since calls will be made directly from one VM to another and not pass through the Azure Fabric load balancer (which fronts all calls coming to the cloud service URL).
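To illustrate both answers above (server and database names here are placeholders, not the asker's actual values), the fix amounts to swapping the public DNS name in the connection string for the VM's internal host name, so traffic no longer passes through the load balancer:

```
Before (through the Azure load balancer):
Server=xyz.cloudapp.net;Database=AppDb;User Id=appuser;Password=<secret>;

After (direct VM-to-VM via the internal host name):
Server=DBVM01;Database=AppDb;User Id=appuser;Password=<secret>;
```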
In this blog post I have measured latency in various network configurations: https://nicolgit.github.io/azure-measuring-latency-across-availability-zones-in-we/
I think it is useful to understand the impact on latency of different "typical" network architectures.

Configuring an nginx web server with multiple app servers in an AWS stack

I am a DevOps guy and I currently run my Ruby on Rails application on an Ubuntu EC2 instance, where the app and the web server live in the same box, while the database is a MySQL RDS cluster. I can see a lot of spikes when traffic to the site increases, so I am planning to change the setup: nginx in one instance and the web app in a separate instance. This needs a load balancer, which would live in the nginx box, and once traffic goes up, the nginx instance can be configured to auto scale. But what about the app server instance? It can also be configured to auto scale, but each new app server needs to attach itself to the web server, and the web server needs to discover the newly created app server. How can I achieve this? Kindly help me out to get this done.
Since you are using one single web server at the moment, a transition to nginx as a static web server and proxy for a backend app server on another instance really makes sense and will give you a performance boost.
However, I am not sure you really need autoscaling. Autoscaling mostly makes sense if you need to react to fast traffic spikes. If you have a more or less continuous workload that may increase over time, it is easier to manually launch another backend server and add it to the nginx config, as in the sketch below. If this does not work for you, you can still have a look at Amazon's Elastic Load Balancing and Auto Scaling afterwards.
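A minimal sketch of that nginx side (the upstream name, backend IPs, ports, and paths are all placeholders; written as a shell heredoc for brevity):

```sh
# Hypothetical proxy config: nginx serves static files itself and
# forwards everything else to the Rails backend(s). Add one "server"
# line per app instance you launch.
cat > /etc/nginx/conf.d/app.conf <<'EOF'
upstream rails_backend {
    server 10.0.1.10:3000;
    # server 10.0.1.11:3000;   # uncomment as you add instances
}
server {
    listen 80;
    root /var/www/app/public;      # static assets served by nginx
    location / {
        try_files $uri @backend;   # fall through to the app server
    }
    location @backend {
        proxy_pass http://rails_backend;
        proxy_set_header Host $host;
    }
}
EOF
nginx -s reload
```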
