Although mup deploy works perfectly otherwise, I have a folder called ".uploads" that users can upload files into, and each deploy deletes the files in that directory. I would like to exclude or protect the folder so the deploy doesn't delete its contents. Any ideas?
I have filed an issue: https://github.com/arunoda/meteor-up/issues/1022
I'm not sure if it's a problem with mup or with my system setup. I use tomitrescak:meteor-uploads and also cfs:file-collection; both have the same issue.
From what I can see this should be easy to do: you need to modify the script at https://github.com/arunoda/meteor-up/blob/mupx/templates/linux/start.sh#L26 and add a new volume mapping. You can map multiple volumes, as described in "Mounting multiple volumes on a docker container?".
Your script would then look like this (mounting the folder /opt/uploads/myapp on the host machine to /opt/uploads in the container):
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--volume=/opt/uploads/myapp:/opt/uploads/ \
--env-file=$ENV_FILE \
--link=mongodb:mongodb \
--hostname="$HOSTNAME-$APPNAME" \
--env=MONGO_URL=mongodb://mongodb:27017/$APPNAME \
--name=$APPNAME \
meteorhacks/meteord:base
The --volume=/opt/uploads/myapp:/opt/uploads/ line is the new mapping. You can make this change in "start.sh" in MUP until the issue is resolved and volumes can be mounted via the options.
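Until then, it also helps to make sure the host-side uploads directory exists and is writable before the container starts. A minimal sketch, assuming SSH access to the server and the paths used above (host name and user are placeholders):

ssh root@your-server 'mkdir -p /opt/uploads/myapp && chmod 775 /opt/uploads/myapp'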
Also see discussion here: https://github.com/tomitrescak/meteor-uploads/issues/235#issuecomment-228618130
Related
I have been struggling to use the vi editor in a WordPress container (on Kubernetes) to edit the wp-config.php file.
I am currently using this WordPress Helm chart from Artifact Hub: https://artifacthub.io/packages/helm/bitnami/wordpress
Image: docker.io/bitnami/wordpress:6.1.1-debian-11-r1
These are the errors I'm getting when trying to edit wp-config.php inside the pod with either vi or vim:
# vi wp-config.php
bash: vi: command not found
When I tried installing vi, I got this error:
apt-get install vi
# Error
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
Then I tried first SSH-ing into the node hosting the WordPress pod and then exec-ing into the container using docker with sudo privileges, as shown below:
docker exec -it -u root <containerID> /bin/bash
I then tried installing the vi editor in the container, but I still got the same error.
The content I want to add to wp-config.php is the following. It's a plugin requirement so that I can store media files directly in my AWS S3 bucket:
define('SSU_PROVIDER', 'aws');
define('SSU_BUCKET', 'my-bucket');
define('SSU_FOLDER', 'my-folder');
Can I run the command like this:
helm install my-wordpress bitnami/wordpress \
--set mariadb.enabled=false \
--set externalDatabase.host=my-host \
--set externalDatabase.user=my-user \
--set externalDatabase.password=my-password \
--set externalDatabase.database=mydb \
--set wordpressExtraConfigContent="define('SSU_PROVIDER', 'aws');define('SSU_BUCKET', 'my-bucket');define('SSU_FOLDER', 'my-folder');"
In the chart documentation there are 2 possible ways to do it:
In your values file you can use the wordpressExtraConfigContent variable to append extra content, or use the wordpressConfiguration variable to provide a whole new wp-config.php.
EDIT: You seem to be trying to define environment variables with PHP define(); in that case you can pass environment variables to the pods instead.
Use --set extraEnvVars, or (better) create a ConfigMap with the variables that you want and pass --set extraEnvVarsCM=<your-configmap>, which will expose the ConfigMap entries as environment variables inside the WordPress container.
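A rough sketch of the ConfigMap route (the ConfigMap name wp-ssu-config and the release name are hypothetical; extraEnvVarsCM is the chart parameter mentioned above):

# Create a ConfigMap holding the plugin settings, then point the chart at it.
kubectl create configmap wp-ssu-config \
  --from-literal=SSU_PROVIDER=aws \
  --from-literal=SSU_BUCKET=my-bucket \
  --from-literal=SSU_FOLDER=my-folder

helm upgrade --install my-wordpress bitnami/wordpress \
  --set extraEnvVarsCM=wp-ssu-config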
The fix for me, after a marathon of different options, was simply to use a plugin to sync my media files with my AWS S3 bucket. There was literally no way for me to modify anything in the Bitnami WordPress container: I couldn't edit files or install any editor (vi/vim/nano). It was locked down, and I didn't want to modify and rebuild their base image because we had WordPress applications already running on a k8s cluster.
This is the plugin that I used: Media Cloud.
In the renv Docker vignette/documentation, they give an example with a Shiny app but don't exactly specify what the parameters mean. Some of them are self-explanatory, but others aren't. More specifically:
https://rstudio.github.io/renv/articles/docker.html
RENV_PATHS_CACHE_HOST=/opt/local/renv/cache
RENV_PATHS_CACHE_CONTAINER=/renv/cache
docker run --rm \
-e "RENV_PATHS_CACHE=${RENV_PATHS_CACHE_CONTAINER}" \
-v "${RENV_PATHS_CACHE_HOST}:${RENV_PATHS_CACHE_CONTAINER}" \
-p 14618:14618 \
R -s -e 'renv::restore(); shiny::runApp(host = "0.0.0.0", port = 14618)'
What is RENV_PATHS_CACHE_HOST?
And is RENV_PATHS_CACHE_CONTAINER the location where my cache will be when I run the image instance/container?
I'm not entirely sure how to use this example, but I feel I'll need it.
The example here demonstrates how one might mount an renv cache from the host filesystem into a Docker container.
In this case, RENV_PATHS_CACHE_HOST points to a (theoretical) cache directory on the host filesystem, at /opt/local/renv/cache, whereas RENV_PATHS_CACHE_CONTAINER points to the location in the container where the host cache will be visible.
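To make that concrete, here is a sketch of how the snippet might be used end to end; the image name my-shiny-app is hypothetical, and the paths are the ones from the vignette excerpt above:

# Create the cache directory on the host, then share it with the container.
mkdir -p /opt/local/renv/cache

docker run --rm \
  -e "RENV_PATHS_CACHE=/renv/cache" \
  -v "/opt/local/renv/cache:/renv/cache" \
  -p 14618:14618 \
  my-shiny-app \
  R -s -e 'renv::restore(); shiny::runApp(host = "0.0.0.0", port = 14618)'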
I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found this documentation on how to inject your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same commands can be used to inject your credentials into the container.
Since links and their content can change, I'll copy the steps needed to inject the credentials here.
Refer to Getting Started with Authentication for instructions
on generating, retrieving, and configuring your Service Account
credentials.
The following Docker run flags inject the credentials and
configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the
container (assumes you have already set your
GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --environment (-e) flag to set the
GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e K_SERVICE=dev \
-e K_CONFIGURATION=dev \
-e K_REVISION=dev-00001 \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
gcr.io/PROJECT_ID/IMAGE
Note that the path
/tmp/keys/FILE_NAME.json
shown in the example above is a reasonable location to place your
credentials inside the container. However, other directory locations
will also work. The crucial requirement is that the
GOOGLE_APPLICATION_CREDENTIALS environment variable must match the
bind mount location inside the container.
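For completeness, one way such a key file might be generated in the first place; a sketch with placeholder service account and project names:

# Create and download a key for an existing service account, then point ADC at it locally.
gcloud iam service-accounts keys create key.json \
  --iam-account=my-service-account@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/key.json"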
Hope this works for you.
I'm using a local repository as a staging repo and would like to be able to clear the whole staging repo via REST. How can I delete the contents of the repo without deleting the repo itself?
Since I have a similar requirement in one of my environments, I'd like to provide a possible solution approach.
It is assumed the JFrog Artifactory instance has a local repository called JFROG-ARTIFACTORY which holds the latest JFrog Artifactory Pro installation RPM(s). For listing and deleting its contents I've created the following script:
#!/bin/bash
# The logged-in user will also be the admin account for the Artifactory REST API
A_ACCOUNT=$(who am i | cut -d " " -f 1)
LOCAL_REPO=$1
PASSWORD=$2
STAGE=$3
URL="example.com"
# Check if a stage was provided; if not, set it to PROD
if [ -z "$STAGE" ]; then
STAGE="repository-prod"
fi
# Going to list all files within the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-FileList
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X GET "https://${STAGE}.${URL}/artifactory/api/storage/${LOCAL_REPO}/?list&deep=1" \
-w "\n\n%{http_code}\n"
echo
# Going to delete all files in the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteItem
curl --silent \
-u"${A_ACCOUNT}:${PASSWORD}" \
-i \
-X DELETE "https://${STAGE}.${URL}/artifactory/${LOCAL_REPO}/" \
-w "\n\n%{http_code}\n"
echo
So after calling
./Scripts/deleteRepository.sh JFROG-ARTIFACTORY Pa\$\$w0rd! repository-dev
for the development instance, it listed all files in the local repository called JFROG-ARTIFACTORY (the JFrog Artifactory Pro installation RPM(s)), deleted them, but left the local repository itself in place.
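If you only want a quick manual test, the deletion part boils down to a single DELETE on the repository root path, which removes the repository's contents but leaves its definition intact (host, account and repository names below are placeholders):

curl -u admin:password -X DELETE "https://artifactory.example.com/artifactory/my-staging-repo/"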
You may change and enhance the script for your needs, and also have a look at "How can I completely remove artifacts from Artifactory?"
I have a multi-container Symfony application that uses docker-compose to handle the relationships between the containers. To simplify a little, I have 4 main services:
code:
  image: mycode
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The "mycode" image contains the code of my application and is built from the following Dockerfile :
FROM composer/composer
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libmcrypt-dev \
libxml2-dev \
libicu-dev \
libcurl4-openssl-dev \
libssl-dev \
pkg-config
RUN docker-php-ext-install iconv mcrypt mbstring bcmath json ctype iconv posix intl
RUN pecl install mongo \
&& echo extension=mongo.so >> /usr/local/etc/php/conf.d/mongo.ini
COPY . /code
WORKDIR /code
RUN rm -rf /code/app/cache/* \
&& rm -rf /code/app/logs/* \
&& chown -R root /code/app/cache \
&& chown -R root /code/app/logs \
&& chmod -R 777 /code/app/cache \
&& chmod -R 777 /code/app/logs \
&& composer install \
&& rm -f /code/web/app_dev.php \
&& rm -f /code/web/config.php
VOLUME ["/code", "/code/app/logs", "/code/app/cache"]
At first, deploying this application was easy. I just had to run a simple docker-compose up -d and it created all the containers and ran them without any issue. But then I had to deploy a new version.
This configuration uses volumes to store data:
the source code is mounted on the /code volume and shared between 3 containers (code, web, php-fpm); it has to be replaced by a new version when deploying.
the MongoDB data is on another volume, mounted only by the mongo container; I have to keep this data between deployments.
When I deploy an update to my code, I publish the new version of the mycode image and re-create the container. But since the /code volume is still used by the web and php-fpm containers, the old volume can't be replaced by the new one. I have to stop all the running services to delete the old volume, and if I use the docker-compose rm -v command, it will delete the MongoDB data too!
Can't I replace only one volume with a new version, without any downtime?
So I'm kind of stuck here. I'm thinking of having a permanent volume to store the code and updating it through SSH with Capistrano, old style. That would also allow me to run Doctrine migration scripts after deployment. But I have other issues with it, as Capistrano uses symlinks to handle versions, so I can't just mount the /current folder to /code.
Do you have a solution to handle the deployment of a Docker application without losing data and without downtime ?
Should I use manual scripts instead of docker-compose?
the source code is mounted on the /code volume
This is the problem: it is not what you want.
Code never goes into a volume, it should change when the image changes. Volumes are for things that you want to preserve between changes to the image (data, logs, state, etc).
Code is the immutable thing that you want to replace when you change a container. So remove the /code volume from the Dockerfile entirely, and instead do an ADD . /code in the mynginx and myphpfpm Dockerfiles.
With that change, you can deploy with just docker-compose up -d. It will recreate any containers that have changed, and your volumes will be copied over. You don't need rm anymore.
If you have your Dockerfile for myphpfpm and mynginx in a different directory, you can build using docker build -f path/to/dockerfile .
Using a host volume (as suggested in another answer) is another option; however, that's not usually what you want outside of development. With a host volume you would still remove the /code VOLUME from the Dockerfile.
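As a sketch of the redeploy flow once the code is baked into the images instead of a volume (the Dockerfile paths below are hypothetical):

# Rebuild the images that now contain the code, then let compose recreate only what changed.
docker build -t mynginx -f docker/nginx/Dockerfile .
docker build -t myphpfpm -f docker/php-fpm/Dockerfile .
docker-compose up -d   # the mongo container and its data are left untouched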
Do not copy the code via the Dockerfile; just attach volumes to the 'code' container.
A few edits:
code:
  image: mycode
  volumes:
    - .:/code
    - /code
web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm
php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo
mongo:
  image: mongo
The same thing applies to mongo: mount it to an external volume so the data persists when the container shuts down. Actually, there is also another method; they mention it on their Docker Hub page: https://hub.docker.com/_/mongo/
Where to Store Data
Important note: There are several ways to store data used by
applications that run in Docker containers. We encourage users of the
mongo images to familiarize themselves with the options available,
including:
Let Docker manage the storage of your database data by writing the
database files to disk on the host system using its own internal
volume management. This is the default and is easy and fairly
transparent to the user. The downside is that the files may be hard to
locate for tools and applications that run directly on the host
system, i.e. outside containers.
Create a data directory on the host system (outside the container) and
mount this to a directory visible from inside the container. This
places the database files in a known location on the host system, and
makes it easy for tools and applications on the host system to access
the files. The downside is that the user needs to make sure that the
directory exists, and that e.g. directory permissions and other
security mechanisms on the host system are set up correctly.
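To illustrate the second option from the quote, a minimal sketch of bind-mounting a host directory into the mongo container (the host path /opt/data/mongo is just an example; /data/db is where the mongo image keeps its database files):

# Create a data directory on the host and mount it at mongo's data path.
mkdir -p /opt/data/mongo
docker run -d --name mongo -v /opt/data/mongo:/data/db mongo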