I have successfully deployed my meteor application using mup deploy. As stated in the documentation, to access the database we need to run this command:
docker exec -it mongodb mongo <appName>
How can I use the mongodump command with this setup? I have tried running
docker exec -it mongodb mongodump --db appName --archive=baza.gz --gzip
The command runs successfully, but I cannot find the baza.gz archive.
As I found out, the dump gets saved inside the Docker container. To access the backup from the local filesystem, we need to copy it out of the container.
To dump:
docker exec -it mongodb mongodump --db appName --archive=/root/baza.gz --gzip
To copy from the docker container to the local filesystem:
docker cp mongodb:/root/baza.gz /home/local_user
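To restore later, the process can be reversed. A minimal sketch, assuming the archive was copied to /home/local_user/baza.gz and the container is still named mongodb (depending on your mongorestore version you may need --nsInclude instead of --db):
# Copy the archive back into the container
docker cp /home/local_user/baza.gz mongodb:/root/baza.gz
# Restore the appName database from the gzipped archive
docker exec -it mongodb mongorestore --db appName --archive=/root/baza.gz --gzip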
I am using the latest Docker MS-SQL Server image and trying to restore a database (.bak) file using Azure Data Studio, but I am not able to find the physical location on my Mac.
Azure Data Studio points the data to /var/opt/mssql/data; however, we can't find that location on macOS.
The Docker command I used to run the SQL Server image:
docker run --name MsSql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=xxxxxxx' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest
Using the command below solved my issue; I had to mount a host volume onto the container's data volume:
docker run --name MsSql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=FeteBird#sql' -p 1433:1433 -v /var/opt/mssql/data:/var/opt/mssql/data -d mcr.microsoft.com/mssql/server:2019-latest
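With the volume mounted, you can also copy the .bak file straight into the running container rather than hunting for the path on the host. A sketch, assuming a backup file named mybackup.bak (hypothetical) and the MsSql container name from above:
# Copy the backup into the container's data directory
docker cp mybackup.bak MsSql:/var/opt/mssql/data/
# Verify it arrived
docker exec -it MsSql ls /var/opt/mssql/data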
I'm trying to set some variables on Dokku for deployment. As far as I can see from the dev files, one should create a .env file in the directory and put the variables in there, but this is not updating anything.
.env file
DOKKU_NGINX_PORT=3000
MYSQL_URL=http://blabla
MYSQL_USER=mysqluser
I'm trying to map the port of the app to port 3000, and inject the mysql vars into the runtime environment.
I know I can set it with dokku config:set on the server, but I want to be able to automate it during deployment.
Any ideas? Or an example?
You'll need to install a Dokku client (CLI) in order to interact locally with the remote application on your Dokku instance.
Here are a few options:
(node.js) dokku-toolbelt
Dokku toolbelt is a node-based CLI wrapper that proxies requests to the Dokku command running on remote hosts.
You can install it via the following shell command (assuming you have node and npm installed):
$ npm install -g dokku-toolbelt
See documentation here for more information.
(python) dokku-client
Dokku client is an extensible python-based CLI wrapper for remote Dokku hosts.
You can install it via the following shell command (assuming you have python and pip installed):
$ pip install dokku-client
See documentation here for more information.
(ruby) Dokku CLI
Dokku CLI is a rubygem that acts as a client for your Dokku installation.
You can install it via the following shell command (assuming you have ruby and rubygems installed):
$ gem install dokku-cli
See documentation here for more information.
After the Dokku client is installed locally, make sure that the dokku app remote is set inside the repository directory.
You can verify this by running $ git remote -v.
If the output doesn't show your dokku application instance, set it with the following command:
$ git remote add dokku dokku@example.com:your-app-name
Here's an example from my terminal with some information redacted for security purposes.
seth#linuxmint ~/repos/Adopt-a-Pet $ git remote -v
dokku dokku@example.com:adopt-a-pet (fetch)
dokku dokku@example.com:adopt-a-pet (push)
origin https://github.com/sethbergman/Adopt-a-Pet.git (fetch)
origin https://github.com/sethbergman/Adopt-a-Pet.git (push)
Then you can set environment variables with the following command:
$ dokku config:set DOKKU_NGINX_PORT=3000
You can optionally set environment variables with the .env file:
$ dokku config:set:file <path/to/.env>
If the .env file is in the root directory of the repository, then the command would be:
$ dokku config:set:file .env
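To automate this during deployment, the push and the config sync can be chained in a small script. A minimal sketch, assuming the dokku git remote and one of the clients above are set up (the script name deploy.sh is hypothetical):
# deploy.sh: push the code, then sync the env file to the app
git push dokku master
dokku config:set:file .env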
If you're using Ruby, you can use the gem dokku-cli. With that, you can set config from any file by issuing the command:
dokku config:set:file <path/to/file>
See the Ruby documentation for more information.
FROM golang:1.8
ADD . /go/src/beginnerapp
RUN go get -u github.com/gorilla/mux
RUN go get github.com/mattn/go-sqlite3
RUN go install beginnerapp/
VOLUME /go/src/beginnerapp/local-db
WORKDIR /go/src/beginnerapp
ENTRYPOINT /go/bin/beginnerapp
EXPOSE 8080
The sqlite db file is in the local-db directory, but I don't seem to be using the VOLUME command correctly. Any ideas how I can have changes to the sqlite db file persisted?
I don't mind if the volume is mounted before or after the build.
I also tried running the following command
user#cardboardlaptop:~/go/src/beginnerapp$ docker run -p 8080:8080 -v ./local-db:/go/src/beginnerapp/local-db beginnerapp
docker: Error response from daemon: create ./local-db: "./local-db" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
EDIT: Works with using /absolutepath/local-db instead of relative path ./local-db
You don't mount volumes in a Dockerfile.
VOLUME tells docker that content in those directories can be mounted via docker run --volumes-from.
You're right. Docker doesn't allow relative paths on volumes on command line.
Run your docker using absolute path:
docker run -p 8080:8080 -v /host/db/local-db:/go/src/beginnerapp/local-db beginnerapp
Your db will be persisted in the host directory /host/db/local-db.
If you want to use relative paths, you can make it work with docker-compose with "volumes" tag:
volumes:
  - ./local-db:/go/src/beginnerapp/local-db
You can try this configuration:
Put the Dockerfile in a directory, (e.g. /opt/docker/myproject)
create a docker-compose.yml file in the same path like this:
version: "2.0"
services:
  myproject:
    build: .
    volumes:
      - "./local-db:/go/src/beginnerapp/local-db"
Execute docker-compose up -d myproject in the same path.
Your db should be stored in /opt/docker/myproject/local-db
Just a comment: the content of local-db in the image (if any) will be hidden by the content of the ./local-db host path, which starts out empty. If the image already contains any information (an initialized database), it's a good idea to copy it out with docker cp, or to include init logic in an entrypoint or command shell script.
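For example, the host directory could be seeded once from the image before mounting over it. A sketch, assuming the built image is tagged myproject (adjust to your actual image name; the temporary container name tmp is hypothetical):
# Create a stopped container from the image and copy the db out of it
docker create --name tmp myproject
docker cp tmp:/go/src/beginnerapp/local-db/. ./local-db/
docker rm tmp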
I need nginx-openresty and redis in a single docker container. I have written a Dockerfile and it works fine, but I need to start my redis service after logging into the docker bash. To automate this I have written a .sh file with instructions to start and stop the redis server and nginx, and set ENTRYPOINT ["./startup.sh"].
The .sh file is:
cd /etc/redis-installation/utils
echo -n | ./install_server.sh
service redis_6379 stop
cd /
cp ./dump.rdb /var/lib/redis/6379/
service redis_6379 start
openresty
My problem is that the docker container starts and exits as soon as the shell script completes. How can I keep the container running with nginx and redis both in a running state?
Try using docker-compose with a link between your app container and your redis container, as sketched below. I suggest using the official redis image.
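A minimal docker-compose.yml sketch along those lines (the service names web and redis, and the exposed port, are assumptions):
version: "2.0"
services:
  web:
    build: .
    ports:
      - "80:80"
    links:
      - redis
  redis:
    image: redis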
We are running a Jenkins CI server from a docker container, started with docker-compose. The Jenkins server is running some jobs which pull projects from git and build docker images the standard way, executing docker build . on them. To be able to use docker inside the Jenkins container, we are mounting the host's /var/run/docker.sock into it with docker-compose.
Some of the Dockerfile-s we are trying to build there are downloading files from our fileserver (3rd party installation images for example). Such a Dockerfile command looks like RUN curl -o xx.zip http://fileserver/xx-1.2.3.zip.
The fileserver hostname gets resolved through the /etc/hosts file and it resolves to the host's public IP which runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes the extra_hosts parameter pointing the fileserver to the host's public IP.
The problem is that building the docker image with Jenkins running in its own container fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id>, I can execute the same curl command and it resolves the host, but if I run docker build . there, which tries to run the same curl command, it fails to resolve the host.
Our host is RHEL and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it's a RedHat-specific issue (again).
Add --network=host so that the build environment uses the host machine's domain resolution:
docker build --network=host -t foo/bar:latest .
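Alternatively, if you'd rather not put the build on the host network, newer Docker versions let you inject the mapping explicitly with --add-host (the IP below is a placeholder for your fileserver's actual address):
docker build --add-host fileserver:10.0.0.1 -t foo/bar:latest .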
Docker builds don't happen on the machine issuing the command (your Jenkins container, in this case); they happen on the machine with the Docker Engine. This means that your Jenkins container tars up the source directory and ships it to the host machine for the build to happen. So, check whether the curl command works from the host machine, not the Jenkins container.
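A quick way to check, run on the host itself rather than inside Jenkins (the URL is the one from the question):
getent hosts fileserver
curl -o xx.zip http://fileserver/xx-1.2.3.zip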