JFrog CLI Docker image is not persistent - Artifactory

While running JFrog CLI as a Docker container using docker run releases-docker.jfrog.io/jfrog/jfrog-cli-full-v2-jf jf -v,
the container exits immediately, so I am not able to configure it.

JFrog CLI stores its configuration inside its home directory, named .jfrog on the local system.
By default, the .jfrog directory is located under the user home directory. You have the option of changing the home directory location by setting the JFROG_CLI_HOME_DIR environment variable.
So if you'd like JFrog CLI to keep its home directory across container runs, you'll need to mount a directory on the host machine into the container's JFrog CLI home directory. You can do this using Docker's volume feature, by adding the -v option to the command as follows:
docker run -v ~/.jfrog:/home/frogger/.jfrog releases-docker.jfrog.io/jfrog/jfrog-cli-full-v2-jf jf c show
In the above command, we're mapping the host machine's ~/.jfrog directory to the container's /home/frogger/.jfrog directory. The releases-docker.jfrog.io/jfrog/jfrog-cli-full-v2-jf container's user home is /home/frogger.
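With the volume in place, configuration created in one run persists to the next. A minimal sketch (the server ID and URL below are placeholders for your own instance):
```console
$ docker run -it -v ~/.jfrog:/home/frogger/.jfrog releases-docker.jfrog.io/jfrog/jfrog-cli-full-v2-jf jf c add my-server --url=https://myinstance.jfrog.io --interactive=true
$ docker run -v ~/.jfrog:/home/frogger/.jfrog releases-docker.jfrog.io/jfrog/jfrog-cli-full-v2-jf jf c show
```
The second run reads the server configuration that the first one wrote, because both share the host's ~/.jfrog directory.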

Enable SSH on Azure AppService - Wordpress

I can't SSH to the Azure App Service WordPress site, and it seems SSH has been disabled for it.
I referred to the following URL to set up the site:
https://learn.microsoft.com/en-us/azure/app-service/quickstart-wordpress
Any idea on how I can enable this?
To enable SSH for WordPress, you first need to create a normal web app with a Docker container, and then deploy the WordPress image into that container.
After creating the Docker container, find the command for deploying the WordPress Docker image.
Check this document for more information on Docker deployment.
For Docker image deployment, check the official website:
```console
$ docker run --name some-wordpress --network some-network -d wordpress
```
Here is the command for checking the SSH config file:
```console
$ cat sshd_config
```
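For reference, a sketch of what enabling SSH in a custom App Service container usually involves, following Azure's documented pattern for custom containers (hedged; the package manager and paths depend on your base image):
```Dockerfile
# Install an SSH server and set the root password App Service expects ("Docker!").
RUN apt-get update && apt-get install -y openssh-server \
 && echo "root:Docker!" | chpasswd
# The sshd_config you COPY in must listen on port 2222, which App Service's SSH proxy uses.
COPY sshd_config /etc/ssh/
EXPOSE 2222
```
The container's entrypoint also needs to start sshd alongside the web server for the SSH session to connect.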

Setting Dokku environment variables

I'm trying to set some variables on Dokku for deployment. As far as I can see from the dev files, one should create a .env file in the directory and put the variables in there, but this is not updating anything.
.env file:
DOKKU_NGINX_PORT=3000
MYSQL_URL=http://blabla
MYSQL_USER=mysqluser
I'm trying to map the port of the app to port 3000, and inject the mysql vars into the runtime environment.
I know I can set it with dokku config:set on the server, but I want to be able to automate it during deployment.
Any ideas? Or an example?
You'll need to install a Dokku client, or CLI, in order to interact locally with the remote application on your Dokku instance.
Here are a few options:
(node.js) dokku-toolbelt
Dokku toolbelt is a node-based CLI wrapper that proxies requests to
the Dokku command running on remote hosts.
You can install it via the following shell command (assuming you have node and npm installed):
$ npm install -g dokku-toolbelt
See documentation here for more information.
(python) dokku-client
Dokku client is an extensible Python-based CLI wrapper for remote
Dokku hosts.
You can install it via the following shell command (assuming you have python and pip installed):
$ pip install dokku-client
See documentation here for more information.
(ruby) Dokku CLI
Dokku CLI is a rubygem that acts as a client for your Dokku
installation.
You can install it via the following shell command (assuming you have ruby and rubygems installed):
$ gem install dokku-cli
See documentation here for more information.
After the Dokku client is installed locally, make sure that the dokku app remote is set inside the repository directory.
You can verify this by running $ git remote -v.
If the output doesn't show your dokku application instance, set it with the following command:
$ git remote add dokku dokku@example.com:your-app-name
Here's an example from my terminal with some information redacted for security purposes.
seth@linuxmint ~/repos/Adopt-a-Pet $ git remote -v
dokku dokku@example.com:adopt-a-pet (fetch)
dokku dokku@example.com:adopt-a-pet (push)
origin https://github.com/sethbergman/Adopt-a-Pet.git (fetch)
origin https://github.com/sethbergman/Adopt-a-Pet.git (push)
Then you can set environment variables with the following command:
$ dokku config:set DOKKU_NGINX_PORT=3000
You can optionally set environment variables with the .env file:
$ dokku config:set:file <path/to/.env>
If the .env file is in the root directory of the repository, then the command would be:
$ dokku config:set:file .env
If you're using ruby, you can use the gem 'dokku-cli'. With that, you can set config from any file by issuing the command
dokku config:set:file <path/to/file>
See ruby doc
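Whichever client you pick, the deployment step itself can then be scripted. A minimal sketch, assuming the dokku git remote shown above is configured and the .env file sits in the repository root:
```console
$ dokku config:set:file .env   # push the variables into the app's environment
$ git push dokku master        # deploy; the new release sees the variables
```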

Send a file via SFTP to a Docker Container

I have a Docker container running an app on Linux. The container is hosted on a Mac (development) or AWS (production). I want to be able to send a file to this container remotely. How can I achieve that?
Thank you.
You need to install an SSH server in the image you are running, or make sure one is already installed. Then you need to map the SSH port (default 22) of your container to a host port so you can reach the container from outside the host. For example:
docker run -p 10022:22 app_container
If running on AWS, check the security group of the EC2 instance that the container runs on, and allow the host port (10022 in the example above) to be accessible from outside.
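Once the port mapping is in place, the transfer itself is a plain SFTP session against the host. A sketch (the user account and target path are placeholders, and an SSH server must already be running inside the container):
```console
$ sftp -P 10022 appuser@your-host.example.com
sftp> put localfile.txt /app/data/localfile.txt
```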
You may also use docker cp to copy files between the container and the local drive.
Be aware of the syntax: wildcards (*) are not supported, but docker cp is recursive and copies directories.
So e.g.
docker cp c867cee9451f:/var/www/html/themes/ .
copies the whole themes folder with subdirectories to your local drive while
docker cp c867cee9451f:/var/www/html/themes/* .
won't work, because the wildcard is not expanded.
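The same works in the other direction, host to container, by swapping the arguments (the local folder name here is illustrative):
```console
docker cp ./mytheme c867cee9451f:/var/www/html/themes/
```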

Hostname resolution fails when running docker build from a docker container

We are running a Jenkins CI server from a Docker container, started with docker-compose. The Jenkins server runs jobs which pull projects from git and build Docker images the standard way, by executing docker build . on them. To be able to use Docker inside the Jenkins container, we are mounting the host's /var/run/docker.sock into it with docker-compose.
Some of the Dockerfiles we are trying to build there download files from our fileserver (3rd-party installation images, for example). Such a Dockerfile command looks like RUN curl -o xx.zip http://fileserver/xx-1.2.3.zip.
The fileserver hostname gets resolved through the /etc/hosts file to the public IP of the host that runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes the extra_hosts parameter pointing fileserver to the host's public IP.
The problem is that building the Docker container, with Jenkins running in its own container, fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id>, I can execute the same curl command and it resolves the host, but if I run docker build . there, which tries to run the same curl command, it fails to resolve the host.
Our host is RHEL, and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it's some RedHat-specific issue (again).
Add --network=host, so that the build environment will use the host machine's domain resolution:
docker build --network=host foo/bar:latest .
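Alternatively, the host entry can be injected into the build explicitly with --add-host (a sketch; the IP below is a placeholder for your fileserver's actual address):
```console
$ docker build --add-host fileserver:192.0.2.10 -t foo/bar:latest .
```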
Docker builds don't happen on the machine issuing the command (your jenkins container, in this case) - they happen on the machine with the Docker Engine. This means that your Jenkins machine tars up the source directory and ships it back to the parent machine for the build to happen. So, check if the curl command works from the parent machine, not the Jenkins container.

How to mount a directory in the docker container to the host?

It's quite easy to mount a host directory in the docker container.
But I need the other way around.
I use a docker container as a development environment for developing WordPress plugins. This docker container contains everything needed to run WordPress (MySQL, Apache, PHP and WordPress). I mount my plugin src folder from the host in the docker container, so that I can test my plugin during development.
For debugging it would be helpful if my IDE running on the host has read access to the WordPress files in the docker container.
I found two ways to solve the problem but both seem really hacky.
Adding a data volume to the docker container, with the path to the WordPress files
docker run ... -v /usr/share/wordpress/ ...
Docker creates this directory on the host under /var/lib/docker/vfs/dir..., but you need to look up the actual path with docker inspect, and you need root access rights to see the files.
Mounting a host directory to the docker container and copying the WordPress files in the container to that mounted host directory. A symlink doesn't seem to work.
Is there a better way to do that? Without copying files or changing access rights?
Thank you!
Copying the WordPress files to the mounted folder was the solution.
I moved the files in the container from the original folder to the mounted folder and used symbolic links to link them back to the original folder, as sketched below.
The important part is that the container can follow symbolic links inside the container, but the host can't. So just symlinking from the original folder to the mounted folder doesn't work, because the host cannot follow symbolic links that point into the container!
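Sketched with illustrative paths (run inside the container, where /mounted is the host-mounted folder):
```console
$ mv /usr/share/wordpress /mounted/wordpress
$ ln -s /mounted/wordpress /usr/share/wordpress
```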
You can share the files using SMB with svendowideit's samba container like this:
docker run --rm -v $(which docker):/docker -v /var/run/docker.sock:/docker.sock svendowideit/samba <container name>
It's possible to do if you use a named volume instead of a filesystem path. The volume is created for you automatically if it doesn't already exist.
docker run -d -v usr_share_wordpress:/usr/share/wordpress --name your_container ... image
After you stop or remove your container, your volume will be stored on your filesystem with the files from the container.
You can inspect volume content during lifetime of your_container with busybox image. Something like:
docker run -it --rm --volumes-from your_container busybox sh
After shutdown of your_container you can still check volume with:
docker run -it --rm -v usr_share_wordpress:/usr/share/wordpress busybox sh
List volumes with docker volume ls.
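To find where a named volume lives on the host, inspect its mountpoint (the path shown is Docker's default and may differ on your installation):
```console
$ docker volume inspect --format '{{ .Mountpoint }}' usr_share_wordpress
/var/lib/docker/volumes/usr_share_wordpress/_data
```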
I had a similar need of exposing the files from the container to the host. There is an open issue on this as of today. One of the workarounds mentioned, using bindfs, is pretty neat; it works while the container is up and running:
container_root=/proc/$(docker inspect --format '{{.State.Pid}}' "$container_name")/root
sudo bindfs --map=root/"$USER" "$container_root/$app_folder" "$host_folder"
PS: I am not sure this is good for production, but it should work in development scenarios!
Why not just do: docker run ... -v /usr/share/wordpress/:/usr/share/wordpress. Now your local /usr/share/wordpress/ is mapped to /usr/share/wordpress in the Docker container and both have the same files. You could also mount elsewhere in the container this way. The syntax is host_path:container_path, so if you wanted to mount /usr/share/wordpress from your host to /my/new/path on the container, you'd just do: docker run ... -v /usr/share/wordpress/:/my/new/path.
