Docker Data Container Encryption

I am not sure whether this is feasible with current Docker/Linux, hence this question. I am looking for advice, tools and how-tos to achieve the following.
I want to add optional encryption to my existing Docker containers. My preferred approach is to expose volumes through a data container that manages the encryption and is provided with the encryption credentials:
The data container has a volume /data.
The data container holds the encryption credentials (a password or key).
Data written to /data is stored in a host directory, encrypted using the data container's credentials.
Data in /data can only be read through the data container.
The application container:
The application container mounts the data container's volume (e.g. with the Docker -v or --volumes-from parameters).
The application container writes and reads to and from the data container without any regard to encryption or decryption.
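For concreteness, a hedged sketch of what such a setup could look like using a FUSE-based tool such as gocryptfs inside the data container. The image name (my-gocryptfs-image), the paths, the password file and the mount-propagation details are all assumptions, not something Docker provides out of the box:

    # One-time setup on the host: ciphertext and cleartext mount points.
    mkdir -p /srv/cipher /srv/plain

    # One-time: initialise the encrypted directory (prompts for a passphrase).
    docker run --rm -it -v /srv/cipher:/cipher my-gocryptfs-image gocryptfs -init /cipher

    # Data container: holds the credentials and does all encryption/decryption.
    # FUSE inside a container needs /dev/fuse and CAP_SYS_ADMIN; the decrypted
    # view reaches the host (and other containers) via shared mount propagation.
    docker run -d --name datastore \
      --device /dev/fuse --cap-add SYS_ADMIN \
      -v /srv/cipher:/cipher \
      -v /srv/plain:/data:shared \
      my-gocryptfs-image \
      gocryptfs -fg -extpass "cat /run/volume-password" /cipher /data

    # Application container: reads and writes plain files under /data and never
    # sees a key; only ciphertext is ever written to /srv/cipher on the host.
    docker run -d --name app -v /srv/plain:/data:slave my-app-image

Whether the FUSE mount is actually visible to the second container depends on the host's mount-propagation settings, so treat this as a starting point rather than a finished solution.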

Related

How to use OpenResty (nginx) to reduce data access time

I need nginx to proxy to different backend servers according to configuration information stored in a database. One way is to have another program write that data into Redis and use OpenResty to read it from Redis.
To reduce access time, is there a better way, such as storing the data in local memory with OpenResty and reading it from there?
OpenResty has built-in key-value storage (shared dictionaries). The data is shared between nginx workers via shared memory, so it is notably faster than accessing Redis.
It is possible to load all required values in init_by_lua*.
You will probably need a cosocket-based library to access the database; the cosocket API is disabled in init_worker_by_lua*, but you can fire a timer with zero delay as a workaround.
To avoid redundant database polling by multiple nginx workers, you can start the timer in the first worker only, when ngx.worker.id() == 0.
This approach, of course, works only with static configuration data. I use it in a number of projects.
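For concreteness, a hedged sketch of that pattern. The shared dict name (backend_map), the Redis key (backend:example) and the 5-second polling interval are illustrative assumptions:

    # nginx.conf (OpenResty) -- sketch, not a drop-in configuration
    http {
        lua_shared_dict backend_map 10m;   # shared memory visible to all workers

        init_worker_by_lua_block {
            -- Only the first worker polls Redis; all workers read the shared dict.
            if ngx.worker.id() ~= 0 then return end

            local function refresh(premature)
                if premature then return end
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(1000)
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "redis connect failed: ", err)
                else
                    -- illustrative key: one "host:port" string per route name
                    local backend = red:get("backend:example")
                    if backend and backend ~= ngx.null then
                        ngx.shared.backend_map:set("example", backend)
                    end
                    red:set_keepalive(10000, 10)
                end
                ngx.timer.at(5, refresh)   -- re-poll every 5 seconds
            end

            -- cosockets are unavailable in init_worker_by_lua*, so defer the
            -- first run to a zero-delay timer.
            ngx.timer.at(0, refresh)
        }

        server {
            listen 8080;
            location / {
                set $upstream "";
                access_by_lua_block {
                    ngx.var.upstream = ngx.shared.backend_map:get("example") or "127.0.0.1:9000"
                }
                proxy_pass http://$upstream;
            }
        }
    }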

How can I set up a Docker network with restricted communication?

I'm trying to create something like this:
The server containers each have port 8080 exposed, and accept requests from the client, but crucially, they are not allowed to communicate with each other.
The problem here is that the server containers are launched after the client container, so I can't pass container link flags to the client like I used to, since the containers it's supposed to link to don't exist yet.
I've been looking at the newer Docker networking stuff, but I can't use a bridge because I don't want server cross-communication to be possible. It also seems to me like one bridge per server doesn't scale well, and would be difficult to manage within the client container.
Is there some kind of switch-like docker construct that can do this?
It seems like you will need to create multiple bridge networks, one per server container. To simplify that, you may want to use docker-compose to specify how the networks and containers should be provisioned, and have the docker-compose tool wire it all up correctly.
Resources:
https://docs.docker.com/engine/userguide/networking/dockernetworks/
https://docs.docker.com/compose/
https://docs.docker.com/compose/compose-file/#version-2
One more side note: I think that exposed ports are accessible to all networks. If that's right, you may be able to set all of the server networking to none and rely on the exposed ports to reach the servers.
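For concreteness, a hedged sketch of the one-bridge-per-server idea with plain docker commands (network, container and image names are illustrative):

    # One isolated bridge per server; only the client joins all of them.
    docker network create net_server1
    docker network create net_server2

    docker run -d --name server1 --network net_server1 my-server-image
    docker run -d --name server2 --network net_server2 my-server-image

    # The client can reach server1:8080 and server2:8080 by name, while the
    # servers have no network in common and cannot see each other.
    docker run -d --name client --network net_server1 my-client-image
    docker network connect net_server2 client

With docker-compose this reduces to listing the per-server networks under each service.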
Hope this is relevant to your use case; I'm trying to infer context about your actual application from the diagram and comments. I'd recommend you go the service-discovery route. It may involve a little bit of simple API work over a central store (say Redis, or SkyDNS), but it would keep things simple in the long run.
Kubernetes, for instance, uses SkyDNS to do this with DNS. At the end of the day, any orchestration tool of your choice would most likely do something like this out of the box: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
The idea is simple:
Use a DNS container that keeps entries of newly spawned servers
Allow the client container to query it for a list of servers, e.g. picture a DNS response with a bunch of server-<<ISO Timestamp of Server Creation>> entries.
Disallow the server containers read access to this DNS (how to manage this permission configuration without indirection, i.e. without proxying through an endpoint that allows writing into the DNS container but not reading it, is going to be exotic).
Bonus Edit: I just realised you can use a simpler Redis-like setup to do this, and that DNS might just be overengineering :)
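A hedged sketch of that simpler Redis-style registry (the registry host name, key and ports are illustrative); each server only ever writes its own entry, and the client reads the list when it needs one:

    # Run on each server at startup (write-only usage of the registry):
    redis-cli -h registry SADD servers "server1:8080"

    # Run by the client whenever it needs the current server list:
    redis-cli -h registry SMEMBERS servers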

mounting a truecrypt file across network

If I have a TrueCrypt file on a shared drive and I mount it using the shared path, does my password get sent in plain text across the network? Basically, my question is: is it safe to mount a TrueCrypt file across a network without copying the file to your local machine first?
Your password is not sent across the network, because the cryptographic operations take place on your computer, in the TrueCrypt driver. The password is used to derive a key that is used on your computer to decrypt the encrypted sectors sent across the network.
The TrueCrypt FAQ has a section on this; I believe item 2 is what you want to achieve. Their warning is that someone looking at the encrypted traffic could get some side-channel information, such as the amount of data read and written and the offsets within the encrypted file.
Unless you need protection from your government or another well-funded attacker, I believe you should be OK, password-wise. You might test what happens when a network failure occurs while writing a large file; it might corrupt the file system you mounted.
What I did:
mounted the TrueCrypt drive and a TrueCrypt container with VeraCrypt (which is newer)
created Windows (Samba) and Mac (AFP) shares of the drive and container, with a password set in the share settings (whatever software you use)
Mounting the container prevented it from being overwritten by someone else opening the container directly.
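For reference, a hedged command-line sketch of the same idea on Linux. The paths are illustrative and the exact VeraCrypt options can vary between versions:

    # Mount the container that sits on the shared drive (prompts for the password):
    veracrypt --text --mount /srv/share/vault.hc /mnt/vault

    # ... work on the files under /mnt/vault, then dismount when done:
    veracrypt --text --dismount /srv/share/vault.hc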

filesystem encryption over the wires

Let's assume the following scenario: I need to open an encrypted filesystem (as I can do with TrueCrypt locally) over a network, but
I want the encryption/decryption to happen strictly on the client, so no magic tokens leave my machine;
I want to read/write the filesystem on demand: my encrypted filesystem might contain 3 GB of files, but if I only need to edit a 1 MB file, my bandwidth consumption should not be a significant fraction of the total.
It seems to me the only way to satisfy both requirements is block-level encryption, so the client decrypts the filesystem structure, requests specific blocks over the network, edits some of the requested blocks, and sends back the updated (already encrypted) blocks.
What tools exist for that? I've heard that eCryptfs does block-level encryption, but I'm not sure whether there is a nice frontend for it as there is with TrueCrypt.
My understanding is that with TrueCrypt I would need to download the full 3 GB partition, open it, edit some files, unmount, and then resend the whole 3 GB. Is this correct?
You can use a protocol that allows you to connect to a raw disk over the network, then run a standard partition-encryption tool (like TrueCrypt) on top of that.
Examples of such protocols are NBD (Network Block Device) and iSCSI (SCSI over IP).
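As a concrete, hedged illustration of that layering, here is a sketch using NBD with LUKS/cryptsetup in place of TrueCrypt; the host name, export name, device and size are assumptions:

    # --- on the server: export a plain file as a network block device ---
    # /etc/nbd-server/config (the export name "cryptvol" is an assumption):
    #   [generic]
    #   [cryptvol]
    #       exportname = /srv/cryptvol.img
    truncate -s 3G /srv/cryptvol.img
    nbd-server

    # --- on the client: attach the export, then layer block encryption on it ---
    modprobe nbd
    nbd-client -N cryptvol server.example.com /dev/nbd0

    # One-time: create the encrypted container and a filesystem inside it.
    cryptsetup luksFormat /dev/nbd0
    cryptsetup open /dev/nbd0 cryptvol    # all crypto happens here, on the client
    mkfs.ext4 /dev/mapper/cryptvol

    # Normal use: open, mount, work on individual files, tear down.
    mount /dev/mapper/cryptvol /mnt/secure
    # ... only the blocks you actually touch cross the network ...
    umount /mnt/secure
    cryptsetup close cryptvol
    nbd-client -d /dev/nbd0

TrueCrypt/VeraCrypt in device mode could be layered on /dev/nbd0 in the same way; LUKS is simply easier to show in a short sketch.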
If you are looking for a file system library, then our SolFS offers exactly what you need. You can keep the storage on the server (encrypted) and open it from the client. When opening, only some pages are downloaded, and they are decrypted on the client side (and re-encrypted and uploaded back upon change).
Network block devices should make this possible. Not sure how stable that protocol is or whether it even supports multiple clients.

Managing authorized_keys on a large number of hosts

What is the easiest way to manage the authorized_keys file for OpenSSH across a large number of hosts? If I need to add or revoke a key for an account on, say, 10 hosts, I have to log in and add the public key manually, or through a clumsy shell script, which is time-consuming.
Ideally there would be a central database linking keys to accounts#machines with some sort of grouping support (i.e., add this key to username X on all servers in the web category). There's a fork of SSH with LDAP support, but I'd rather use the mainline SSH packages.
I'd check out the Monkeysphere project. It uses OpenPGP's web-of-trust concepts to manage ssh's authorized_keys and known_hosts files, without requiring changes to the ssh client or server.
I use Puppet for lots of things, including this (using the ssh_authorized_key resource type).
I've always done this by maintaining a "master" tree of the different servers' keys, and using rsync to update the remote machines. This lets you edit things in one location, push the changes out efficiently, and keeps things "up to date" -- everyone edits the master files, no one edits the files on random hosts.
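A hedged sketch of that master-tree-plus-rsync approach; the keys/<host>/<user>/authorized_keys layout, the hosts.txt file and home directories under /home are assumptions about the setup:

    # Push the centrally edited key files out to every host in hosts.txt.
    # Keep the master copies at mode 600 so rsync -a preserves sane permissions.
    while read -r host; do
      for user_dir in keys/"$host"/*/; do
        user=$(basename "$user_dir")
        rsync -a "${user_dir}authorized_keys" \
          "root@${host}:/home/${user}/.ssh/authorized_keys"
      done
    done < hosts.txt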
You may want to look at projects which are made for running commands across groups of machines, such as Func at https://fedorahosted.org/func or other server configuration management packages.
Have you considered using clusterssh (or similar) to automate the file transfer? Another option is one of the centralized configuration systems.
/Allan
