notary returns "Error: unknown shorthand flag: 'r' in -r"

I want to use delegations with Docker Content Trust. I generated delegate.crt/delegate.key on the collaborator's machine and now I am trying to rotate the snapshot key with:
notary key rotate localhost:5000/ubuntu snapshot -r
=> Error: unknown shorthand flag: 'r' in -r
Usage:
notary key rotate [ GUN ] [flags]
Why am I getting this error?

I had the same "unknown shorthand flag: 'r' in -rm" error, but it was due to a bad argument, not because I wanted delegation.
Was:
$ docker run -rm busybox echo hello world
Causing:
unknown shorthand flag: 'r' in -rm
The correct flag is --rm, with two dashes:
$ docker run --rm busybox echo hello world
--rm is a flag that can be passed to docker run; it automatically deletes the container once it has exited.
Source: https://github.com/prakhar1989/docker-curriculum#11-docker-run

(Disclaimer: I know zilch about Docker Notary, so this might be completely bogus)
According to the Notary documentation:
The root and targets key must be locally managed - to rotate either
the root or targets key, for instance in case of compromise, use the
notary key rotate command without the -r flag. The timestamp key must
be remotely managed - to rotate the timestamp key use the notary key
rotate timestamp -r command.
So I'd guess you're trying to use -r on a key that must be locally managed, which apparently is not supported.
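Based on that documentation excerpt, the supported invocations would look something like this (a sketch only; the GUN localhost:5000/ubuntu is taken from the question, and I haven't verified this against a live notary server):

```shell
# Timestamp key is remotely managed: rotate it server-side with -r
notary key rotate localhost:5000/ubuntu timestamp -r

# Root and targets keys are locally managed: rotate them without -r
notary key rotate localhost:5000/ubuntu root
notary key rotate localhost:5000/ubuntu targets
```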


docker push to registry gets error: received unexpected HTTP status: 500 Internal Server Error

I hope someone can help me :))
I have a registry (registry:2) running on Linux/Ubuntu.
When I build and push Linux-based images, everything works.
But when I build a Windows image/container, like this:
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build
....... build my app
I get this error when I try to push the image to the registry (it's running on Ubuntu):
The push refers to repository [my registry url]
20682f4ff7bf: Layer already exists
364de9d9769f: Layer already exists
459cecac74e0: Layer already exists
d75d3f2676c0: Layer already exists
1ed8b25f5f56: Layer already exists
67814da28689: Layer already exists
a57c2fa5ac49: Layer already exists
b037a641a3b8: Layer already exists
90b1395e7b4f: Layer already exists
0a59c2d473fc: Layer already exists
1612da0ebc3b: Layer already exists
94ce506bdcf2: Skipped foreign layer
f358be10862c: Skipped foreign layer
received unexpected HTTP status: 500 Internal Server Error
registry logs:
.10.16 go/go1.17.10 git-commit/f756502 os/windows arch/amd64 UpstreamClient(Docker-Client/20.10.16 \(windows\))" vars.name="xxxxxx" vars.reference=des
time="2022-07-19T09:49:15.579829667Z" level=error msg="response completed with error" auth.user.name=admin err.code=unknown err.message="unknown error" go.version=go1.11.2 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host=xxxxxx http.request.id=85ace457-2985-402e-a930-5a2e79f0b6ba http.request.method=PUT http.request.remoteaddr=xxxxx http.request.uri="xxxxxx" http.request.useragent="docker/20.10.16 go/go1.17.10 git-commit/f756502 os/windows arch/amd64 UpstreamClient(Docker-Client/20.10.16 \(windows\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=4.974929ms http.response.status=500 http.response.written=523 vars.name="xxxxxx" vars.reference=des
Thank you for any help.
I've yet to sort out the magic string that validates the URLs of Windows foreign layers (pretty sure I tried a lot of combinations), so I eventually just disabled validation completely to work around this error:
docker run -d --restart=unless-stopped --name registry \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
-e "REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR=inmemory" \
-e "REGISTRY_STORAGE_DELETE_ENABLED=true" \
-e "REGISTRY_VALIDATION_DISABLED=true" \
-v "registry-data:/var/lib/registry" \
-p "127.0.0.1:5000:5000" \
registry:2
Note the -e "REGISTRY_VALIDATION_DISABLED=true" in the command above.
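For the record, if you drive the registry from a config file rather than environment variables, REGISTRY_VALIDATION_DISABLED maps to this fragment (the path /etc/docker/registry/config.yml is the image's default; verify against your registry version, as this option has moved around between releases):

```yaml
# config.yml fragment: skip manifest validation entirely
validation:
  disabled: true
```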

What is RENV_PATHS_CACHE_HOST? -- docker documentation

In the renv docker vignette/documentation, they give an example with a Shiny app, but don't exactly specify what their parameters mean. Some are self-explanatory, but others aren't. More specifically:
https://rstudio.github.io/renv/articles/docker.html
RENV_PATHS_CACHE_HOST=/opt/local/renv/cache
RENV_PATHS_CACHE_CONTAINER=/renv/cache
docker run --rm \
-e "RENV_PATHS_CACHE=${RENV_PATHS_CACHE_CONTAINER}" \
-v "${RENV_PATHS_CACHE_HOST}:${RENV_PATHS_CACHE_CONTAINER}" \
-p 14618:14618 \
R -s -e 'renv::restore(); shiny::runApp(host = "0.0.0.0", port = 14618)'
What is RENV_PATHS_CACHE_HOST?
And is RENV_PATHS_CACHE_CONTAINER the location of where my cache will be upon running the image instance/container?
I'm not entirely sure how to use this example, but feel I'll need it.
The example here demonstrates how one might mount an renv cache from the host filesystem onto a Docker container.
In this case, RENV_PATHS_CACHE_HOST points to a (theoretical) cache directory on the host filesystem, at /opt/local/renv/cache, whereas RENV_PATHS_CACHE_CONTAINER points to the location in the container where the host cache will be visible.
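To make that concrete, here is a hedged sketch of the flow (my-shiny-image is a hypothetical image name; the two paths are the ones from the vignette):

```shell
# Host-side directory: survives across containers; create it up front
# so Docker doesn't create it owned by root.
mkdir -p /opt/local/renv/cache

# Container-side path: what renv sees, because RENV_PATHS_CACHE is set
# to the container end of the bind mount. Packages restored in one
# container are then reused by every later container with this mount.
docker run --rm \
  -e "RENV_PATHS_CACHE=/renv/cache" \
  -v "/opt/local/renv/cache:/renv/cache" \
  my-shiny-image
```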

Can not get flyway-docker to recognize local files in volumes

I am trying to use Flyway to set up a DB2 test/demo environment in a Docker container. I have an image of DB2 running in a docker container and now am trying to get flyway to create the database environment. I can connect to the DB2 docker container and create DB2 objects and load them with data, but am looking for a way for non-technical users to do this (i.e. clone a GitHub repo and issue a single docker run command).
The Flyway Docker site (https://github.com/flyway/flyway-docker) indicates that it supports the following volumes:
| Volume | Description |
|-------------------|--------------------------------------------------------|
| `/flyway/conf` | Directory containing a flyway.conf |
| `/flyway/drivers` | Directory containing the JDBC driver for your database |
| `/flyway/sql` | The SQL files that you want Flyway to use |
I created the conf, drivers, and sql directories. In the conf directory, I placed the file flyway.conf that contained my flyway Url, user name, and password:
flyway.url=jdbc:db2://localhost:50000/apidemo
flyway.user=DB2INST1
flyway.password=mY%tEst%pAsSwOrD
In the drivers directory, I added the DB2 JDBC Type 4 drivers (e.g. db2jcc4.jar, db2jcc_license_cisuz.jar),
And in the sql directory I put in a simple table creation statement (file name: V1__make_temp_table.sql):
CREATE TABLE EDS.REFT_TEMP_DIM (
  TEMP_ID INTEGER NOT NULL
, TEMP_CD CHAR (8)
, TEMP_NM VARCHAR (255)
)
DATA CAPTURE NONE
COMPRESS NO;
When I attempt the docker run with the flyway/flyway image as described in the GitHub README, it does not recognize the flyway.conf file, since it does not know the url, user, and password:
docker run --rm -v sql:/flyway/sql -v conf:/flyway/conf -v drivers:/flyway/drivers flyway/flyway migrate
Flyway Community Edition 6.5.5 by Redgate
ERROR: Unable to connect to the database. Configure the url, user and password!
I then put the url, user, and password inline, and it could not find the JDBC driver:
docker run --rm -v sql:/flyway/sql -v drivers:/flyway/drivers flyway/flyway -url=jdbc:db2://localhost:50000/apidemo -user=DB2INST1 -password=mY%tEst%pAsSwOrD migrate
ERROR: Unable to instantiate JDBC driver: com.ibm.db2.jcc.DB2Driver => Check whether the jar file is present
Caused by: Unable to instantiate class com.ibm.db2.jcc.DB2Driver : com.ibm.db2.jcc.DB2Driver
Caused by: java.lang.ClassNotFoundException: com.ibm.db2.jcc.DB2Driver
Therefore, I believe it is the way that I am setting up the local file system or associating to local files with the flyway volumes that is causing the issue. Does anyone have an idea of what I am doing wrong?
You need to supply absolute paths to your volumes for Docker to bind-mount them; a path without a leading / (like sql) is interpreted as a named volume rather than a host directory.
Changing the relative paths to absolute paths fixed the volume mount issue:
docker run --rm \
-v /Users/steve/github-ibm/flyway-db-migration/sql:/flyway/sql \
-v /Users/steve/github-ibm/flyway-db-migration/conf:/flyway/conf \
-v /Users/steve/github-ibm/flyway-db-migration/drivers:/flyway/drivers \
flyway/flyway migrate
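If the command is run from the repository root, a variant with $(pwd) avoids hardcoding a user-specific path while still giving Docker the absolute path it needs for a bind mount (untested sketch; same image and mount targets as above):

```shell
docker run --rm \
  -v "$(pwd)/sql:/flyway/sql" \
  -v "$(pwd)/conf:/flyway/conf" \
  -v "$(pwd)/drivers:/flyway/drivers" \
  flyway/flyway migrate
```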

Firestore authorization for Google Compute engine for app on a docker container

I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found this documentation on how to inject your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same commands can be used to inject your credentials into the container.
Since links and their content can change, I will copy the steps needed to inject the credentials.
Refer to Getting Started with Authentication for instructions
on generating, retrieving, and configuring your Service Account
credentials.
The following Docker run flags inject the credentials and
configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the
container (assumes you have already set your
GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --environment (-e) flag to set the
GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e K_SERVICE=dev \
-e K_CONFIGURATION=dev \
-e K_REVISION=dev-00001 \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
gcr.io/PROJECT_ID/IMAGE
Note that the path
/tmp/keys/FILE_NAME.json
shown in the example above is a reasonable location to place your
credentials inside the container. However, other directory locations
will also work. The crucial requirement is that the
GOOGLE_APPLICATION_CREDENTIALS environment variable must match the
bind mount location inside the container.
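Once the container is up, you can sanity-check that the variable and the mount line up (CONTAINER and FILE_NAME.json are placeholders; this assumes you have docker exec access to the running container):

```shell
# The env var inside the container should print the mounted path
docker exec CONTAINER printenv GOOGLE_APPLICATION_CREDENTIALS

# ...and that path should exist as the read-only key file
docker exec CONTAINER ls -l /tmp/keys/FILE_NAME.json
```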
Hope this works for you.

How do you use an identity file with rsync?

How do you use an identity file with rsync?
This is the syntax I think I should be using with rsync to use an identity file to connect:
rsync -avz -e 'ssh -p1234 -i ~/.ssh/1234-identity' \
"/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"
But it's giving me an error:
Warning: Identity file ~/.ssh/1234-identity not accessible: No such file or directory.
The file is fine, permissions are set correctly, it works when doing ssh - just not with rsync - at least in my syntax. What am I doing wrong? Is it trying to look for the identity file on the remote machine? If so, how do I specify that I want to use an identity file on my local machine?
Use either $HOME
rsync -avz -e "ssh -p1234 -i \"$HOME/.ssh/1234-identity\"" dir remoteUser@server:
or full path to the key:
rsync -avz -e "ssh -p1234 -i /home/username/.ssh/1234-identity" dir user@server:
Tested with rsync 3.0.9 on Ubuntu
You may want to use ssh-agent and ssh-add to load the key into memory. ssh will try identities from ssh-agent automatically if it can find them. Commands would be
eval $(ssh-agent) # Create agent and environment variables
ssh-add ~/.ssh/1234-identity
ssh-agent is a user daemon which holds unencrypted ssh keys in memory. ssh finds it via the environment variables that ssh-agent outputs when run; using eval on that output creates the variables. ssh-add is the command which manages the keys held in memory. The agent can be locked using ssh-add, and a default lifetime for a key can be specified when ssh-agent is started and/or per key when it is added.
You might also want to set up a ~/.ssh/config file to supply the port and key definition. (See `man ssh_config` for more options.)
Host 22.33.44.55
    IdentityFile ~/.ssh/1234-identity
    Port 1234
Single-quoting the ssh command will prevent the shell expansion which is needed for ~ or $HOME. You could, however, use a full or relative path to the key inside single quotes.
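The quoting difference is easy to demonstrate in the shell itself (paths are illustrative):

```shell
# Single quotes: the tilde is passed through literally, so ssh would
# look for a file literally named "~/.ssh/1234-identity" and fail.
quoted='~/.ssh/1234-identity'
echo "$quoted"    # prints: ~/.ssh/1234-identity

# Unquoted tilde (or $HOME in double quotes): the shell expands it
# before ssh ever sees it.
expanded=~/.ssh/1234-identity
echo "$expanded"  # prints e.g.: /home/username/.ssh/1234-identity
```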
You have to specify the absolute path to your identity key file. This is probably some sort of quirk in rsync (it can't be perfect, after all).
I ran into this issue just a few days ago :-)
This works for me
rsync -avz --rsh="ssh -p1234 -i ~/.ssh/1234-identity" \
"/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"
Use a key file with rsync:
rsync -rave "ssh -i /home/test/pkey_new.pem" /var/www/test/ ubuntu@231.210.24.48:/var/www/test
Are you executing the command in bash or sh? This might make a difference. Try replacing ~ with $HOME. Try double-quoting the string for the -e option.
