Docker trust: how to sign image without pushing it to repository - notary

We run a private Docker registry, and I am trying to use Notary to add image signing. I have Notary set up, and the Docker client can sign images as it pushes to the registry.
My problem is that we do not push to the same registry name we pull from. We use Nexus to host our Docker images, and this is an artifact of the Nexus way of running a private registry.
We push to docker-registry-publish.example.com and pull from docker-registry.example.com
We cannot push to docker-registry.example.com
I cannot find a way to have docker, or notary, sign an image without pushing it first. What I would like to be able to do is something like this:
docker push docker-registry-publish.example.com/team1/app1:1.0
docker tag docker-registry-publish.example.com/team1/app1:1.0 docker-registry.example.com/team1/app1:1.0
$some_way_to_sign docker-registry.example.com/team1/app1:1.0
There is no 'docker sign' command; as far as I know, the only way for Docker to sign an image is to push it. 'notary sign' needs a file, and I do not know how to feed it a Docker image.
Is there a way to have either docker, or notary, just do the signing?
Thank you!

These are the steps I would take.
First, generate a key pair with docker trust key generate ${key}. This loads the private key into your local trust store and writes ${key}.pub to the current directory. (If the key pair was generated elsewhere, import the private key first with docker trust key load --name ${key} ${key}.pem.)
Then add yourself as a signer for the repository:
docker trust signer add --key ${key}.pub ${key} docker-registry-publish.example.com/team1/app1
Next, build the image:
docker build --disable-content-trust=true -t docker-registry-publish.example.com/team1/app1:1.0 .
And then, sign the image:
docker trust sign docker-registry-publish.example.com/team1/app1:1.0
Verify the keys and signature:
docker trust inspect --pretty docker-registry-publish.example.com/team1/app1:1.0
And, finally, push it:
docker push docker-registry-publish.example.com/team1/app1:1.0
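Coming back to the original question (signing under the pull-side name without pushing there), a hedged sketch using the notary CLI directly could look like the following. The notary server URL, the digest, and the size below are illustrative placeholders; docker push prints the manifest digest and size of what it pushed, which is exactly what notary needs.

# Push only to the publish registry; the push output ends with something like
#   1.0: digest: sha256:<manifest-digest> size: 1523
docker push docker-registry-publish.example.com/team1/app1:1.0

# If the pull-side repository has no trust data yet, initialize it once.
notary -s https://notary.example.com -d ~/.docker/trust init docker-registry.example.com/team1/app1

# Record that manifest as a signed target under the pull-side name and publish
# the trust metadata, without pushing the image itself to that registry.
notary -s https://notary.example.com -d ~/.docker/trust addhash \
  docker-registry.example.com/team1/app1 1.0 1523 \
  --sha256 <manifest-digest> --publish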

Related

Jfrog Xray Certificate Issue

Summary:
Xray helm chart needs the capability to receive a custom certificate used for Artifactory and apply that certificate to the router container.
Detail:
We have successfully installed Artifactory via helm. After installing, we configured TLS for the Artifactory web application using a custom certificate. When trying to deploy Xray via helm, the xray server pod’s router container will continually fail to connect to Artifactory with an error message of
Error: Get https://[url redacted]/access/api/v1/system/ping: x509: certificate signed by unknown authority
There does not appear to be any way to pass in a secret containing the custom certificate. It looks like, at this time, the only option is to customize the helm chart to install the certificate in the container, but that would put us out of sync with JFrog for vulnerability database updates or any other updates Xray requests from JFrog.
Edit - Someone is having the same issue. Can Xray even do what we are trying to do then?
Update from Jfrog Support:
With regard to the query on adding/importing the CA cert chain to Xray, we already have a Jira for this and our team is currently working on it. As a workaround, I would request you to mount a custom volume with the SSL cert, then run the command to import the SSL cert into the cacerts file from the init container.
Workaround:
Create a Kubernetes ConfigMap containing the root and subordinate CA certificates, and mount it into the xray-server container at /usr/local/share/ca-certificates. Then log into the node, run docker exec -it -u root into the xray-server container (the container runs as a non-root user), and run update-ca-certificates to import the CA certs. Once that is done, Xray is able to talk to Artifactory.
The drawback of this workaround is that these steps have to be repeated every time the container restarts.
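A hedged sketch of that workaround (the namespace, ConfigMap name, and certificate file names are illustrative, not taken from the chart):

# 1. Create a ConfigMap holding the root and subordinate CA certificates.
kubectl -n xray create configmap ca-certs --from-file=rootCA.crt --from-file=subCA.crt

# 2. Mount the ConfigMap into the xray-server container at
#    /usr/local/share/ca-certificates (for example via the chart's custom
#    volume/volumeMount values) and redeploy.

# 3. On the node running the pod, exec into the container as root
#    (kubectl exec cannot switch users, hence docker exec) and import the certs.
docker exec -it -u root <xray-server-container-id> update-ca-certificates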

Easy way to read docker logs without SSH access

Is there a way to read the logs of a Docker container if I don't have SSH access to the host machine? Could I, for example, map the docker logs command to an HTTP port,
so I could read the docker logs simply by making a GET request to
http://[dockerhost]:5234/logs
A Docker container's log is located under /var/lib/docker/containers.
E.g.
If your container's id is ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774, then the log of the container is /var/lib/docker/containers/ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774/ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774-json.log.
So you can simply expose the /var/lib/docker/containers folder through Apache, and users can then view the logs from a browser.
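A minimal sketch of that setup, assuming Apache 2.4 on a Debian-style layout (the port, site name, and paths are illustrative; note that /var/lib/docker/containers is normally readable only by root, so you will have to adjust permissions, and anyone who can reach the port can read every container's logs):

# Serve the container log directory read-only over HTTP on port 5234.
cat <<'EOF' | sudo tee /etc/apache2/sites-available/docker-logs.conf
Listen 5234
<VirtualHost *:5234>
    Alias /logs /var/lib/docker/containers
    <Directory /var/lib/docker/containers>
        Options +Indexes
        Require all granted
    </Directory>
</VirtualHost>
EOF
sudo a2ensite docker-logs && sudo systemctl reload apache2

# The log of a given container is then readable at
#   http://<dockerhost>:5234/logs/<container-id>/<container-id>-json.log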

How to create an overlay network with Docker?

When I try to create an overlay network with Docker I get the following error:
docker@boot2docker:/vagrant$ docker network create --driver overlay somenetwork
Error response from daemon: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)
I found this bug report on GitHub: https://github.com/docker/docker/issues/18770
I checked my Boot2Docker image: it uses sysvinit and not systemd, so that shouldn't be the problem, and the kernel version also seems fine:
docker@boot2docker:/vagrant$ uname -r
4.1.19-boot2docker
Is it possible that trying to run this on only one host is a misuse of the overlay network concept? Maybe that is what causes this strange error?
Update:
I think it was a mistake to execute the network creation command against the locally running Docker daemon. I should have run it against my Swarm manager instead; in that case the error message is different:
docker@boot2docker:~$ docker -H tcp://0.0.0.0:3375 network create --driver overlay network
Error response from daemon: No healthy node available in the cluster
When I check the status of the swarm cluster, there are indeed no nodes. Maybe the original problem is that my swarm join command was not quite correct?
docker run -d swarm join consul://127.0.0.1:8500/
If you read through the documentation on overlay networks, you see that in order to create an overlay network you first need to configure a key/value store (Docker currently supports etcd, consul, and zookeeper) that Docker uses to coordinate things between multiple hosts.
From the docs:
To create an overlay network, you configure options on the daemon on each Docker Engine for use with overlay network. There are three options to set:
--cluster-store=PROVIDER://URL
    Describes the location of the KV service.
--cluster-advertise=HOST_IP|HOST_IFACE:PORT
    The IP address or interface of the HOST used for clustering.
--cluster-store-opt=KEY-VALUE OPTIONS
    Options such as TLS certificate or tuning discovery timers.
From your question, it doesn't sound like you have performed the necessary configuration.
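A hedged sketch of what that configuration could look like, assuming a Consul key/value store reachable at consul-host:8500 and eth1 as the interface the Docker hosts use to reach each other (both names are illustrative):

# On every Docker host, start the daemon with the cluster options set
# (on the older releases that boot2docker ships, the binary is "docker daemon"
# rather than "dockerd"):
dockerd \
  --cluster-store=consul://consul-host:8500 \
  --cluster-advertise=eth1:2376

# Once the daemons are registered in the KV store, the overlay network can be
# created on any one host and becomes visible on all of them:
docker network create --driver overlay somenetwork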
As your update suggests, the problem is your swarm join command. A solution could be:
docker run swarm join --addr=192.168.196.16:2375 token://`cat swarm_id`
This assumes you created the swarm using a token; personally, I would rather use a static file.
You'll find everything you need in this answer.
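If you would rather use static-file discovery instead of a token, a rough sketch with the standalone Swarm image (IPs, ports, and paths are illustrative):

# List each node as <ip>:<port>, one per line, in a plain file on the manager host.
echo "192.168.196.16:2375" >> /tmp/cluster

# Start the Swarm manager against that file (mounted into the container).
docker run -d -p 3375:2375 -v /tmp/cluster:/tmp/cluster swarm manage file:///tmp/cluster

# The node should now show up when you query the manager.
docker -H tcp://0.0.0.0:3375 info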

Can I expose a Docker port to another Docker only (and not the host)?

Is it possible to expose a port from one Docker container to another one (or several other ones), without exposing it to the host?
Yes, you can link containers together, and the ports are then only exposed to those linked containers, without having to publish them to the host.
For example, if you have a Docker container running a PostgreSQL DB:
$ docker run -d --name db training/postgres
You can then link a container running your web application to it:
$ docker run -d --name web --link db training/webapp python app.py
The container running your web application will have a set of environment variables with the ports exposed in the db container, for example:
DB_PORT_5432_TCP_PORT=5432
The environment variables are named after the linked container; in this case the container name is db, so the variables start with DB_.
You can find more details in docker documentation here:
https://docs.docker.com/v1.8/userguide/dockerlinks/
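For illustration, the linked web container then sees roughly this environment (the address is, of course, illustrative):

$ docker exec web env | grep DB_
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_PROTO=tcp
DB_NAME=/web/db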
I found an alternative to container linking: You can define custom "networks" and tell the container to use them using the --net option.
For example, if your containers are intended to be deployed together as a unit anyway, you can have them all share the same network stack (using --net container:oneOfThem). That way you don't even need to configure host names for them to find each other; they can just share the same 127.0.0.1, and nothing gets exposed to the outside.
Of course, that way they expose all their ports to each other, and you must be careful not to have conflicts (they cannot both listen on 8080, for example). If that is a concern, you can still use --net, just not to share the same network stack but to set up a more complex overlay network.
Finally, the --net option can also be used to have a container run directly on the host's network.
Very flexible tool.
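If a shared stack is more than you need, here is a minimal sketch of the user-defined network variant (the network name is illustrative):

# Create a user-defined network and attach both containers to it.
docker network create backend
docker run -d --name db --net backend training/postgres
docker run -d --name web --net backend -p 80:5000 training/webapp python app.py

# "web" can reach "db" by container name on any port the database listens on;
# nothing of "db" is published on the host, because it was started without -p.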

How to setup group of docker containers with the same addresses?

I am going to install distributed software inside docker containers. It can be something like:
container1: 172.0.0.10 - management node
container2: 172.0.0.20 - database node
container3: 172.0.0.30 - UI node
I know how to manage containers as a group and how to link them to each other, but the problem is that the IP information is stored in many places (the database, etc.), so when you deploy containers from such an image the IPs change and the infrastructure breaks.
The easiest way I see is to use several virtual networks on the host, so the containers keep the same addresses but do not affect each other. However, as I understand it, this is currently not possible with Docker, as you cannot start the Docker daemon with several bridges connected to one physical interface.
The question is: could you advise how to create such an infrastructure? Thanks.
Don't do it this way.
Containers are ephemeral, they come and go and will be assigned new IPs. Fighting against this is a bad idea. Instead, you need to figure out how to deal with changing IPs. There are a few solutions, which one you should use is entirely dependent on your use case.
Some suggestions:
You may be able to get away with just forwarding ports through your host, so your DB is always reachable at HOST_IP:8888 or similar (see the sketch after this list).
If you can put environment variables in your config files, or dynamically generate config files when the container starts, you can use Docker links which will put the IP of the linked container into an environment variable.
If those don't work for you, you need to start looking at more complete solutions such as the ambassador pattern and consul. In general, this issue is known as Service Discovery.
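A minimal sketch of the first suggestion (the port and image name are illustrative):

# Publish the database on a fixed host port; clients always use <HOST_IP>:8888,
# no matter which internal IP the container gets.
docker run -d --name db -p 8888:5432 postgres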
Adrian gave a good answer. But if you cannot use that approach, you could do the following:
Create IP aliases on the Docker hosts (there can be many of them), then run the containers and map their ports to those addresses:
docker run --name management --restart=always -d -p 172.0.0.10:NNNN:NNNN management
docker run --name db --restart=always -d -p 172.0.0.20:NNNN:NNNN db
docker run --name ui --restart=always -d -p 172.0.0.30:NNNN:NNNN ui
Now you can access your containers at fixed addresses, and you can move them to different hosts (together with the IP alias) and everything will continue to work.
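A hedged sketch of that approach (the interface name, address, port, and image are illustrative):

# Add an IP alias on the host's interface.
sudo ip addr add 172.0.0.20/32 dev eth0 label eth0:db

# Bind the container's published port to that alias only.
docker run --name db --restart=always -d -p 172.0.0.20:5432:5432 postgres

# Clients keep connecting to 172.0.0.20:5432; if the container (and the alias)
# moves to another host, nothing on the client side has to change.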
