Encrypt a file with sops in a GitHub Actions workflow

I'm trying to encrypt a file with sops in a GitHub Actions workflow. My workflow code is:

name: Encrypt application secrets
on:
  workflow_dispatch:
jobs:
  encrypt:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 1
      - name: sops install
        run: |
          curl -O -L -C - https://github.com/mozilla/sops/releases/download/v3.7.1/sops-v3.7.1.darwin
          sudo mv sops-v3.7.1.darwin /usr/bin/sops
          sudo chmod +x /usr/bin/sops
      - name: upload keystore
        run: gpg --import .github/.gpg
      - name: encrypt file
        run: |
          sudo chmod +x /usr/bin/sops
          sudo sops --encrypt --in-place .github/application.secrets.yaml
But I get this error:

Run sudo chmod +x /usr/bin/sops
  sudo chmod +x /usr/bin/sops
  sudo sops --encrypt --in-place .github/application.secrets.yaml
  shell: /usr/bin/bash -e {0}
/usr/bin/sops: 1: ����
�: not found
/usr/bin/sops: 8: Syntax error: word unexpected (expecting ")")

Can someone help, please?

The following worked for my GitHub pipeline (though for decryption purposes):

# main.yaml
...
jobs:
  build-publish-deploy:
    name: Build, Publish and Deploy
    runs-on: ubuntu-latest
    steps:
      ...
      - name: Decrypt secret
        run: |-
          curl -O -L -C - https://github.com/mozilla/sops/releases/download/v3.7.3/sops-v3.7.3.linux
          sudo mv sops-v3.7.3.linux /usr/bin/sops
          sudo chmod +x /usr/bin/sops
          export SOPS_AGE_KEY=${{ secrets.GKE_DWK_SOPS_AGE_KEY }}
          sops --decrypt manifests/secret.enc.yaml > manifests/secret.yaml
...
Darwin binaries are built for macOS, and you are requesting to run on ubuntu-latest.
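Applying the same fix to the encryption workflow in the question, the install step only needs the Linux build instead of the Darwin one. A sketch, keeping the v3.7.1 release the question uses (the `sops-v3.7.1.linux` asset name follows the release's naming pattern):

```yaml
- name: sops install
  run: |
    curl -O -L -C - https://github.com/mozilla/sops/releases/download/v3.7.1/sops-v3.7.1.linux
    sudo mv sops-v3.7.1.linux /usr/bin/sops
    sudo chmod +x /usr/bin/sops
```

The "Syntax error: word unexpected" output happens because the shell falls back to interpreting the non-ELF Mach-O binary as a script.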


Add shiny server with ADD=shiny with rocker tidyverse image

Documentation for rocker/rstudio docker container.
I am able to get up and running in rstudio using Docker with the following set up in a directory:
Dockerfile:
FROM rocker/tidyverse:latest
docker-compose:
version: "3.5"
services:
  ide-rstudio:
    build:
      context: .
    ports:
      - 8787:8787
    environment:
      ROOT: "TRUE"
      PASSWORD: test
Now, if I enter this dir in the terminal and type: docker-compose build followed by docker-compose up -d and then navigate to localhost:8787 I see the rstudio login screen. So far so good.
I would like to add shiny to the same container per the documentation (as opposed to using a separate shiny image).
On the documentation I link to at the top it says:
Add shiny server on start up with -e ADD=shiny
docker run -d -p 3838:3838 -p 8787:8787 -e ADD=shiny -e PASSWORD=yourpasswordhere rocker/rstudio
shiny server is now running on localhost:3838 and RStudio on localhost:8787.
Since I'm using docker-compose I updated my docker-compose file to this:
version: "3.5"
services:
  ide-rstudio:
    build:
      context: .
    ports:
      - 8787:8787
      - 3838:3838
    environment:
      ROOT: "TRUE"
      ADD: "shiny"
      PASSWORD: test
Now, when I go to the terminal like before and type docker-compose build followed by docker-compose up -d, I again see the rstudio login page at localhost:8787. However, if I go to localhost:3838, I see Firefox's 'connection was reset' page. It looks like nothing is there.
How can I add shiny to my container per the instructions?
It seems the image is missing the shiny installer. If you run the same compose file without -d, using the rocker/rstudio:3.2.0 image, you will see in the logs that shiny is being installed. It failed to install for me (there was a problem with the missing file /usr/local/lib/R/site-library/littler/examples/install2.r), but I found the script which installs the thing. For some reason the script does not exist in rocker/tidyverse:latest (I have no idea why, you'd better ask the maintainer), so ADD=shiny has no effect.
I managed to get things working by injecting that script into rocker/tidyverse:latest and here is how you can do it. Save the following as a file named add:
#!/usr/bin/with-contenv bash
ADD=${ADD:=none}
## A script to add shiny to an rstudio-based rocker image.
if [ "$ADD" == "shiny" ]; then
  echo "Adding shiny server to container..."
  apt-get update && apt-get -y install \
    gdebi-core \
    libxt-dev && \
  wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
  VERSION=$(cat version.txt) && \
  wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
  gdebi -n ss-latest.deb && \
  rm -f version.txt ss-latest.deb && \
  install2.r -e --skipinstalled shiny rmarkdown && \
  cp -R /usr/local/lib/R/site-library/shiny/examples/* /srv/shiny-server/ && \
  rm -rf /var/lib/apt/lists/* && \
  mkdir -p /var/log/shiny-server && \
  chown shiny.shiny /var/log/shiny-server && \
  mkdir -p /etc/services.d/shiny-server && \
  cd /etc/services.d/shiny-server && \
  echo '#!/bin/bash' > run && echo 'exec shiny-server > /var/log/shiny-server.log' >> run && \
  chmod +x run && \
  adduser rstudio shiny && \
  cd /
fi
if [ "$ADD" == "none" ]; then
  echo "Nothing additional to add"
fi
Then either add the following to your Dockerfile:
COPY add /etc/cont-init.d/add
RUN chmod +x /etc/cont-init.d/add
or apply execution permission locally and mount it during runtime. To do this run the following locally:
chmod +x add
and add this to docker-compose.yml:
services:
  ide-rstudio:
    volumes: # this line and below
      - ./add:/etc/cont-init.d/add

Using Symfony Doctrine Migrations with Gitlab CI: GitLab CI interprets "Nothing to migrate" as error

We are using the Doctrine migrations bundle to update the database in our deployment process. Currently, we are switching to GitLab CI.
The problem: the CI aborts the deployment process because the command php sf doctrine:migrations:diff writes to stderr.
The relevant part of our .gitlab-ci.yml:

deploy_live:
  type: deploy
  environment:
    name: live
    url: 1.2.3.4
  script:
    - ssh root@1.2.3.4 "cd /var/www/html/ && git pull origin master && exit"
    - ssh root@1.2.3.4 "cd /var/www/html/ && composer install -n && exit"
    - ssh root@1.2.3.4 "cd /var/www/html/ && php sf doctrine:migrations:diff --env=prod && exit"
    - ssh root@1.2.3.4 "cd /var/www/html/ && php sf doctrine:migrations:migrate -n --env=prod && exit"
    - 'ssh root@1.2.3.4 "cd /var/www/html/ && chown www-data:www-data . -R && exit"'
  only:
    - master
Output of GitLab CI:

$ ssh root@1.2.3.4 "cd /var/www/html/ && php sf doctrine:migrations:diff --env=prod && exit"
#!/usr/bin/env php
In NoChangesDetected.php line 13:
No changes detected in your mapping information.
doctrine:migrations:diff [--configuration [CONFIGURATION]] [--db-configuration [DB-CONFIGURATION]] [--editor-cmd [EDITOR-CMD]] [--filter-expression [FILTER-EXPRESSION]] [--formatted] [--line-length [LINE-LENGTH]] [--check-database-platform [CHECK-DATABASE-PLATFORM]] [--db DB] [--em [EM]] [--shard SHARD] [-h|--help] [-q|--quiet] [-v|vv|vvv|--verbose] [-V|--version] [--ansi] [--no-ansi] [-n|--no-interaction] [-e|--env ENV] [--no-debug] [--] <command>
ERROR: Job failed: exit code 1
This may be a bug, but maybe it can be circumvented?
FYI: sf is a symlink to bin/console.
I just found a solution:

move the commands executed directly from the .gitlab-ci.yml file under script to a shell script deploy.sh
move this script via scp to the server
gitlab-ci.yml
deploy_live:
  type: deploy
  environment:
    name: live
    url: 1.2.3.4
  script:
    - scp deploy.sh root@1.2.3.4:/var/www/html/
    - ssh root@1.2.3.4 "cd /var/www/html/ && chmod +x deploy.sh && ./deploy.sh && exit"
  only:
    - master
deploy.sh

#!/bin/sh
cd /var/www/html/
git add --all
git commit -m "changes"
git pull origin master
composer install -n
php sf doctrine:cache:clear-metadata --env=prod
php sf doctrine:migrations:diff --env=prod
php sf doctrine:migrations:migrate -n --env=prod
php sf cache:clear --env=prod
exit
They added an option for such a case: --allow-no-migration. Can you try it?
see: https://github.com/doctrine/migrations/blob/ebd2551c7767375fcbc762b48d7dee4c18ceae97/lib/Doctrine/Migrations/Tools/Console/Command/MigrateCommand.php#L64
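A sketch of how that could look in the deploy.sh above (untested; --allow-no-migration is the flag on the migrate command from the linked source, and the `|| true` for diff is a generic shell workaround to keep an empty changeset from failing the job):

```shell
# Don't fail the job when diff detects no changes (generic workaround)
php sf doctrine:migrations:diff --env=prod || true
# Migrate, tolerating the case where there is nothing to migrate
php sf doctrine:migrations:migrate -n --allow-no-migration --env=prod
```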

How do I use JFrog CLI with CircleCI 2.0?

I'm trying to use the JFrog CLI with CircleCI 2.0 to publish my Docker image to my JFrog Artifactory. After some research I found this tutorial: https://circleci.com/docs/1.0/Artifactory/ but it's based on the CircleCI 1.0 specification.
My config.yml file currently is:
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
            chmod +x jfrog
            ./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
But I'm getting the following error:
#!/bin/sh -eo pipefail
wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
chmod +x jfrog
./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Connecting to dl.bintray.com (35.162.24.14:80)
Connecting to akamai.bintray.com (23.46.57.209:80)
jfrog 100% |*******************************| 9543k 0:00:00 ETA
/bin/sh: ./jfrog: not found
Exited with code 127
Does anyone know the correct way to use the JFrog CLI with CircleCI 2.0?
I've fixed this by installing the JFrog CLI through npm:
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 \
              openssl \
              nodejs
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            npm install -g jfrog-cli-go
            jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Now it's working.
As an alternative to installing with Node.js (which is perfectly possible too, especially if you're running a Node.js build in CircleCI), you can use a cURL command to install it for you.
curl -fL https://getcli.jfrog.io | sh
This script downloads the latest released version of the JFrog CLI for your operating system and architecture (32- or 64-bit).
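In a CircleCI 2.0 config, that approach could look like the sketch below. This assumes curl is available in the build image; on Alpine-based images such as docker:17.05.0-ce-git you may need to `apk add --no-cache curl` first:

```yaml
- run:
    name: Setup JFrog
    command: |
      curl -fL https://getcli.jfrog.io | sh
      ./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
```

The install script drops the binary as ./jfrog in the current directory, so it is invoked with the ./ prefix here.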

Installing rpy2 with Docker is unable to find R path

A Django application using Docker needs rpy2 as a dependency. Although I build an r-base container and specify it as a dependency, when installing the Django requirements I keep getting:
Collecting rpy2==2.8.3 (from -r /requirements/base.txt (line 55))
Downloading rpy2-2.8.3.tar.gz (186kB)
Complete output from command python setup.py egg_info:
Error: Tried to guess R's HOME but no command 'R' in the PATH.
How can specify inside Docker where the R path is?
My server.yml looks like this:
version: '2'
services:
  r:
    build: ./services/r
  django:
    build:
      context: ./myproject/
      dockerfile: ./compose/django/Dockerfile
    env_file:
      - .env
      - .env-server
    environment:
      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_USER}
    depends_on:
      - postgres
      - r
    command: /gunicorn.sh
    volumes:
      - ./myproject:/app
The Dockerfile for django is:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
COPY ./requirements /requirements
RUN pip install -r /requirements/production.txt \
    && pip install -r /requirements/test.txt \
    && groupadd -r django \
    && useradd -r -g django django
COPY . /app
RUN chown -R django /app
COPY ./compose/django/gunicorn.sh /gunicorn.sh
COPY ./compose/django/entrypoint.sh /entrypoint.sh
RUN sed -i 's/\r//' /entrypoint.sh \
    && sed -i 's/\r//' /gunicorn.sh \
    && chmod +x /entrypoint.sh \
    && chown django /entrypoint.sh \
    && chmod +x /gunicorn.sh \
    && chown django /gunicorn.sh
WORKDIR /app
ENTRYPOINT ["/entrypoint.sh"]
The Dockerfile for R is:
FROM r-base
It was easier to just install R inside the django container. So removing the r container and modifying the django Dockerfile by adding these lines worked:

RUN apt-get --force-yes update \
    && apt-get --assume-yes install r-base-core
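Putting it together, the top of the django Dockerfile would become something like the sketch below. This is based on the Dockerfile shown above (python:2.7 is Debian-based, so apt-get is available); the apt cache cleanup line is an addition, and R must be installed before pip builds rpy2 so it can find R on the PATH:

```dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1

# Install R first so rpy2 can find the R command when pip builds it
RUN apt-get --force-yes update \
    && apt-get --assume-yes install r-base-core \
    && rm -rf /var/lib/apt/lists/*

COPY ./requirements /requirements
RUN pip install -r /requirements/production.txt \
    && pip install -r /requirements/test.txt
```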

Symfony cache permissions with docker with nginx rsync

Following @sveneisenschmidt's workaround, which uses rsync in a container to speed up Symfony on OS X:
https://forums.docker.com/t/how-to-speed-up-shared-folders/9322/15
I seem to have Symfony running this way, but I'm running into permissions issues with the web server that I'm not sure how to resolve in Docker.
I'm able to clear the cache via the CLI in my php-fpm container (cache:clear --env=prod --no-debug).
But the problem is that when I view Symfony via app_dev.php, nginx cannot write to the cache/logs directories:
Unable to write in the cache directory (/app/app/cache/dev)
I'm confused about how rsync fits into the permissions picture, but it seems that nginx needs more permissions than it has. Any ideas on how to resolve this?
docker-compose.yml
# Web server
nginx:
  container_name: insight_nginx
  build: docker/nginx
  ports:
    - "80:80"
  links:
    - php
    - sync:sync
  volumes_from:
    - sync

# Data alias
data:
  container_name: insight_data
  build: docker/data/.

# Database
db:
  container_name: insight_db
  build: docker/db
  ports:
    - 3306:3306
  volumes:
    - "./.data/db:/var/lib/mysql"
    - ./db-dump:/docker-entrypoint-initdb.d
  environment:
    MYSQL_ROOT_PASSWORD: root

# Application server
php:
  container_name: insight_php
  build: docker/php-fpm
  external_links:
    - insight_db:docker-mysql
  environment:
    DB_HOST: docker-mysql
  # Syncing
  volumes_from:
    - sync
  links:
    - sync:sync

# Synchronization
### Symfony rsync workaround from here: https://forums.docker.com/t/how-to-speed-up-shared-folders/9322/15
sync:
  container_name: insight_sync
  build: docker/sync
  command: "lsyncd -delay 1 -nodaemon -rsync /src /app"
  volumes:
    - /app
    - "./:/src"
  working_dir: /src
  stdin_open: true
  tty: true
nginx/Dockerfile
FROM nginx:latest
COPY symfony3.conf /etc/nginx/conf.d/symfony3.conf
#RUN usermod -u 1000 www-data
#RUN chown -R www-data:www-data /app/cache
#RUN chown -R www-data:www-data /app/logs
php-fpm/Dockerfile
FROM pvlltvk/ubuntu-trusty-php-fpm-5.6
RUN apt-get install -y \
    php5-curl \
    php5-sybase \
    freetds-dev \
    libxml2-dev
ADD freetds.conf /etc/freetds/freetds.conf
RUN echo 'alias sf="php /app/app/console"' >> ~/.bashrc
#RUN chmod -R 0777 /tmp/symfony/logs
#RUN chmod -R 0777 /tmp/symfony/cache
#ADD start.sh /start.sh
#RUN chmod +x /start.sh
WORKDIR /app
sync/Dockerfile
FROM ubuntu:16.04
RUN PACKAGES="\
    rsync \
    lsyncd \
    " && \
    apt-get update && \
    apt-get install -y $PACKAGES && \
    apt-get autoremove --purge -y && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
#RUN rm -rf /src/app/cache/* \
# rm -rf /src/app/logs/* \
# sudo chmod +R 777 /src/app/cache /src/app/logs
#RUN chmod -R 0777 ./app/logs
#RUN chmod -R 0777 ./app/cache
The CMD instruction allows you to set a default command, which is executed only when you run the container without specifying a command.*
RUN executes the command(s) you give in a new layer and creates a new image.**
Try:
CMD chown -R www-data:www-data /var/www && nginx
*http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
**https://til.codes/docker-run-vs-cmd-vs-entrypoint/
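With the docker-compose file above (v1 format), that CMD suggestion could instead be applied as a command override on the nginx service. A sketch, assuming the cache and log directories are /app/app/cache and /app/app/logs as in the error message, and that the stock nginx image's default command is nginx -g 'daemon off;':

```yaml
# docker-compose.yml (v1 format) -- override the nginx command to fix
# ownership of the synced cache/log directories before starting nginx
nginx:
  container_name: insight_nginx
  build: docker/nginx
  command: sh -c "chown -R www-data:www-data /app/app/cache /app/app/logs && nginx -g 'daemon off;'"
```

The chown must run at container start rather than at build time, because the /app volume from the sync container is only mounted at runtime.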
