Receiving and serving static files in Kubernetes - nginx

In the pre-k8s, pre-container world, I had a cloud VM that ran nginx and let an authorized user scp new content into the webroot.
I'd like to build a similar setup in a k8s cluster to host static files, with the goal that:
An authorized user can scp new files in
These files are statically served on the web
These files are kept in a persistent volume so they don't disappear when things restart
I can't seem to figure out a viable combination of storage class + containers to make this work. I'd definitely appreciate any advice!
Update
What I didn't realize is that two containers running in the same pod can both mount the same gcePersistentDisk read/write. So my solution ended up being one nginx container running in the same pod as an sshd container that can write to the nginx webroot. It's been working great so far.
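A minimal sketch of what such a pod looks like (the disk name web-content and the sshd image are placeholders; in a real setup you would also wrap this in a replication controller or deployment plus a service):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  volumes:
  - name: webroot
    gcePersistentDisk:
      pdName: web-content      # pre-created GCE disk (placeholder name)
      fsType: ext4
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: webroot
      mountPath: /usr/share/nginx/html
  - name: sshd
    image: my-sshd-image       # placeholder: any image that runs an ssh daemon
    ports:
    - containerPort: 22
    volumeMounts:
    - name: webroot
      mountPath: /home/uploader/webroot
EOF

Both containers mount the same volume, so anything scp'd in through the sshd container is immediately served by nginx.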

I think you're trying to fit a square peg into a round hole here.
Essentially, you're building an FTP server (albeit with scp rather than FTP).
Kubernetes is designed to orchestrate containers.
The two don't really overlap at all.
Now, if you're really intent on doing this, you could hack something together by building a Docker image that runs an ssh daemon plus nginx under supervisord. The layer you need to concentrate on is replicating your existing VM setup inside a Docker container. You can then run it on Kubernetes and attach a persistent volume.
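A rough sketch of that hack, assuming an Ubuntu base image (you would still need to add SSH users/keys and your own nginx config; names here are illustrative):

# supervisord runs both sshd and nginx in the foreground.
cat > supervisord.conf <<'EOF'
[program:sshd]
command=/usr/sbin/sshd -D

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
EOF

cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y nginx openssh-server supervisor && \
    mkdir -p /var/run/sshd
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
EOF

docker build -t nginx-sshd .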

Related

Is it safe to run logrotate inside k8s pods against a shared log file, or will it cause trouble?

I'm using Kubernetes to deploy nginx across several pods. Each pod mounts its access.log onto a hostPath so that Filebeat can read it and ship it to another output.
If logrotate runs at the same cron time in every pod, they all rotate the same shared access.log file, and this works.
I tested it with a small amount of data in a simple cluster. With large volumes of data in production, is this a reasonable plan, or will something go wrong given how logrotate is designed?
This will not usually work well, since logrotate can't see the other nginx processes to send them a SIGHUP. If nginx in particular can detect a rotation without a HUP or other external poke then maybe, but most software cannot.
In general container logs should go to stdout or stderr and be handled by your container layer, which generally handles rotation itself or will include logrotate at the system level.
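For example, the official nginx image simply symlinks its log files to the container's stdout/stderr, which avoids in-pod rotation entirely:

# From the official nginx image: send logs to the container runtime
# instead of writing files that need rotating.
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log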

How can I use nginx as a dynamic load balancing proxy server on Bluemix?

I am using docker-compose to run an application on the Bluemix container service, with nginx as a proxy web server and load balancer.
I have found an image that uses docker events to automatically detect new web servers and adds those to the nginx configuration dynamically:
https://github.com/jwilder/nginx-proxy
But for this to work, I think the container needs access to the Docker socket. I am not very familiar with Docker and I don't know exactly what this does, but essentially it is necessary so that the image can listen for Docker events.
The run command from the image documentation is the following:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I have not been able to run this in the container service, as it does not find the /var/run/docker.sock file on the host.
The Bluemix documentation has a tutorial explaining how to do load balancing with nginx, but it requires a hard-coded list of web servers in the nginx configuration.
I was wondering how I could run the nginx-proxy image so that web instances are detected automatically?
The containers service on Bluemix doesn't expose that docker socket (not surprising, it would be a security risk to the compute host). A couple of alternate ways to accomplish what you want:
something like amalgam8 or consul, which is basically doing just that
something similar but self-written - have a shared volume, and each container on startup adds a file to that shared volume saying what it is, plus its private IP. The nginx container watches the shared volume and reloads when those files change (more work than amalgam8 or consul, but perhaps more control); a rough sketch of this follows below.
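Sketch of the self-written option (paths, the port, and the use of inotify-tools are all illustrative):

# In each web container's startup script: register this instance by dropping
# a file with its private IP into the shared volume.
echo "server $(hostname -i):8080;" > /shared/upstreams/$(hostname).conf

# In the nginx container: include those files from an upstream block, e.g.
#   upstream web { include /shared/upstreams/*.conf; }
# then watch the shared volume and reload nginx whenever it changes.
while inotifywait -e create -e delete -e modify /shared/upstreams; do
    nginx -s reload
done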

How to encrypt docker images or source code in docker images?

Say I have a Docker image, and I deployed it on some server, but I don't want other users to access this image. Is there a good way to encrypt the Docker image?
Realistically, no: if a user has permission to run the Docker daemon then they are going to have access to all of the images, due to the elevated permissions Docker requires in order to run.
See the extract from the docker security guide for more info on why this is.
Docker daemon attack surface
Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.

First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory will be the / directory on your host; and the container will be able to alter your host filesystem without any restriction. This is similar to how virtualization systems allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem (or even your root block device) with a virtual machine.

This has a strong security implication: for example, if you instrument Docker from a web server to provision containers through an API, you should be even more careful than usual with parameter checking, to make sure that a malicious user cannot pass crafted parameters causing Docker to create arbitrary containers.

For this reason, the REST API endpoint (used by the Docker CLI to communicate with the Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you happen to run Docker directly on your local machine, outside of a VM). You can then use traditional UNIX permission checks to limit access to the control socket.

You can also expose the REST API over HTTP if you explicitly decide to do so. However, if you do that, being aware of the above mentioned security implication, you should ensure that it will be reachable only from a trusted network or VPN; or protected with e.g., stunnel and client SSL certificates. You can also secure them with HTTPS and certificates.

The daemon is also potentially vulnerable to other inputs, such as image loading from either disk with ‘docker load’, or from the network with ‘docker pull’. This has been a focus of improvement in the community, especially for ‘pull’ security. While these overlap, it should be noted that ‘docker load’ is a mechanism for backup and restore and is not currently considered a secure mechanism for loading images. As of Docker 1.3.2, images are now extracted in a chrooted subprocess on Linux/Unix platforms, being the first-step in a wider effort toward privilege separation.

Eventually, it is expected that the Docker daemon will run restricted privileges, delegating operations well-audited sub-processes, each with its own (very limited) scope of Linux capabilities, virtual network setup, filesystem management, etc. That is, most likely, pieces of the Docker engine itself will run inside of containers.

Finally, if you run Docker on a server, it is recommended to run exclusively Docker in the server, and move all other services within containers controlled by Docker. Of course, it is fine to keep your favorite admin tools (probably at least an SSH server), as well as existing monitoring/supervision processes (e.g., NRPE, collectd, etc).
If only some strings need to be encrypted, you could encrypt that data using openssl or an alternative tool. The encryption would be set up inside the Docker image: when the image is built, the data is encrypted; when the container is run, an entrypoint decrypts it (possibly with a passphrase passed in from a .env file). That way the image can be stored safely.
I am going to play with it this week as time permits, as I am pretty curious myself.
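Something along these lines (file names, cipher, and variable names are illustrative; note that anyone who has both the image and the passphrase can still recover the data):

# At build time: encrypt the sensitive file before it is baked into the image.
openssl enc -aes-256-cbc -salt \
    -in config-secrets.txt -out config-secrets.txt.enc \
    -pass env:SECRET_PASSPHRASE

# At run time (e.g. in the entrypoint), with the same passphrase supplied via
# `docker run --env-file .env ...`: decrypt to a location only the app reads.
openssl enc -d -aes-256-cbc \
    -in config-secrets.txt.enc -out /tmp/config-secrets.txt \
    -pass env:SECRET_PASSPHRASE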

Automatically append docker container to upstream config of nginx load balancer

I'm running Docker Compose (v2) and have a Node service (the website) and a Python-based API deployed with nginx sitting in front of them.
One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs Docker assigns to the containers. However, I want the upstream nginx config to be dynamic: if I add another Docker container, its address should simply be appended to the list of IPs in the upstream block.
My idea was to create a script which will automatically append the upstream servers using env variables when the containers change but I'm unsure where to start and can't find a good example.
There are a couple ways to achieve this. What you are referring to is usually called service discovery and comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
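As a rough illustration of that second flow (the service name, paths, and polling loop are placeholders; confd or consul-template handle the watching and templating far more robustly):

# Render an nginx upstream file from Consul's catalog and reload on change.
# Assumes nginx.conf contains:  upstream web { include /etc/nginx/upstreams/web.conf; }
while true; do
    curl -s http://consul:8500/v1/catalog/service/web \
        | jq -r '.[] | "server \(.ServiceAddress):\(.ServicePort);"' \
        > /etc/nginx/upstreams/web.conf.new
    if ! cmp -s /etc/nginx/upstreams/web.conf.new /etc/nginx/upstreams/web.conf; then
        mv /etc/nginx/upstreams/web.conf.new /etc/nginx/upstreams/web.conf
        nginx -s reload
    fi
    sleep 10
done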

How to set up a group of docker containers with the same addresses?

I am going to install distributed software inside docker containers. It can be something like:
container1: 172.0.0.10 - management node
container2: 172.0.0.20 - database node
container3: 172.0.0.30 - UI node
I know how to manage containers as a group and how to link them to each other; however, the problem is that IP information is stored in many places (the database, etc.), so when you deploy containers from such an image the IPs change and the infrastructure breaks.
The easiest way I can see is to use several virtual networks on the host, so the containers keep the same addresses without affecting each other. However, as I understand it, this is not currently possible with Docker, as you cannot start the Docker daemon with several bridges connected to one physical interface.
The question is: could you advise how to create such an infrastructure? Thanks.
Don't do it this way.
Containers are ephemeral, they come and go and will be assigned new IPs. Fighting against this is a bad idea. Instead, you need to figure out how to deal with changing IPs. There are a few solutions, which one you should use is entirely dependent on your use case.
Some suggestions:
You may be able to get away with just forwarding through ports on your host. So your DB is always HOST_IP:88888 or similar.
If you can put environment variables in your config files, or dynamically generate config files when the container starts, you can use Docker links, which will put the IP of the linked container into an environment variable (see the sketch after this list).
If those don't work for you, you need to start looking at more complete solutions such as the ambassador pattern and consul. In general, this issue is known as Service Discovery.
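For instance, the links suggestion looks roughly like this (image names and the Postgres port are just examples):

# Start the database, then link it into the UI container.
docker run -d --name db my-db-image
docker run -d --name ui --link db:db my-ui-image

# Inside the ui container, Docker injects variables such as
#   DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT
# (assuming the db image EXPOSEs 5432), which a startup script can
# substitute into config files instead of a hard-coded IP.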
Adrian gave a good answer, but if you cannot use that approach you could do the following:
create IP aliases on the hosts running Docker (there could be many Docker hosts), then run each container with its ports mapped to one of those addresses:
docker run --name management --restart=always -d -p 172.0.0.10:NNNN:NNNN management
docker run --name db --restart=always -d -p 172.0.0.20:NNNN:NNNN db
docker run --name ui --restart=always -d -p 172.0.0.30:NNNN:NNNN ui
Now you can access your containers at fixed addresses, and you can move them to different hosts (together with the IP aliases) and everything will continue to work.
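For reference, the first step (creating the aliases on the host, as root; eth0 is just an example interface):

# Add the extra addresses to the host's interface before starting the containers.
ip addr add 172.0.0.10/32 dev eth0
ip addr add 172.0.0.20/32 dev eth0
ip addr add 172.0.0.30/32 dev eth0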
