How do I use JFrog CLI with CircleCI 2.0? - jfrog-cli

I'm trying to use JFrog CLI with CircleCI 2.0 to publish my docker image to my JFrog Artifactory. After some research I found this tutorial: https://circleci.com/docs/1.0/Artifactory/ but it's based on the CircleCI 1.0 specification.
My config.yml file currently is:
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
            chmod +x jfrog
            ./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
But I'm getting the following error:
#!/bin/sh -eo pipefail
wget http://dl.bintray.com/jfrog/jfrog-cli-go/1.7.1/jfrog-cli-linux-amd64/jfrog
chmod +x jfrog
./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
docker login -e $ARTIFACTORY_EMAIL -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Connecting to dl.bintray.com (35.162.24.14:80)
Connecting to akamai.bintray.com (23.46.57.209:80)
jfrog 100% |*******************************| 9543k 0:00:00 ETA
/bin/sh: ./jfrog: not found
Exited with code 127
Does anyone know the correct way to use JFrog CLI with CircleCI 2.0?

I've fixed this by installing JFrog CLI through npm. (The downloaded binary appears to be linked against glibc, which the Alpine-based docker:17.05.0-ce-git image doesn't ship, hence the misleading ./jfrog: not found error despite a successful download.)
version: 2
jobs:
  build:
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 \
              openssl \
              nodejs
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76
      - run:
          name: Setup JFrog
          command: |
            npm install -g jfrog-cli-go
            jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD
            docker login -u $ARTIFACTORY_USER -p $ARTIFACTORY_PASSWORD $ARTIFACTORY_DOCKER_REPOSITORY
Now it's working.
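For completeness, the publish step itself can then be a plain docker build and docker push against the Artifactory registry. A minimal sketch (the myapp image name and the $CIRCLE_SHA1 tagging scheme are illustrative, not from the original config):

docker build -t $ARTIFACTORY_DOCKER_REPOSITORY/myapp:$CIRCLE_SHA1 .
docker push $ARTIFACTORY_DOCKER_REPOSITORY/myapp:$CIRCLE_SHA1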

As an alternative to installing with Node.js (which is perfectly possible too, especially if you're running a Node.js build in CircleCI), you can use a cURL command to install it for you:
curl -fL https://getcli.jfrog.io | sh
This script downloads the latest released version of JFrog CLI for your operating system and architecture (32- vs 64-bit).
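As a CircleCI 2.0 step, that could look roughly like this (a sketch; on the Alpine image above you would need apk add curl first, and the downloaded glibc binary may hit the same not found problem there, so this approach fits best on a Debian/Ubuntu-based build image):

- run:
    name: Install JFrog CLI
    command: |
      curl -fL https://getcli.jfrog.io | sh
      ./jfrog rt config --url $ARTIFACTORY_URL --user $ARTIFACTORY_USER --apikey $ARTIFACTORY_PASSWORD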

Related

Command pabot not found in docker version of Robotframework browser

I'm trying to run a Pabot / robotframework-browser script in Docker.
I have tried to use the command:
docker run --rm -v "$(pwd):/test" --ipc=host --user pwuser --security-opt seccomp=Docker/seccomp_profile.json -e "enviroment=***" -e "ROBOT_THREADS=10" -e PABOT_OPTIONS="--testlevelsplit" marketsquare/robotframework-browser:latest bash -c "pabot . -i smoke --outputdir /test/output /test"
Result: bash: pabot: command not found
What's wrong with that syntax?
If I use robot -i Smoke --outputdir /test/output /test instead, the execution works fine (no errors).
I checked and confirmed that the image does not have pabot installed. You can build an image locally with docker build -f Dockerfile ..
Here is an example of the execution with that image, installing the missing dependencies on the fly:
sudo docker run --rm -v $(pwd)/atest:/atest -v /tmp:/tmp -e "ROBOT_THREADS=10" -e PABOT_OPTIONS="--testlevelsplit" --ipc=host --user pwuser --security-opt seccomp=seccomp_profile.json marketsquare/robotframework-browser:latest bash -c "pip install robotframework-pabot psutil && pabot --outputdir /tmp/test/output atest/test"
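If you would rather not reinstall the dependencies on every run, the same fix can be baked into a local image. A minimal sketch of such a Dockerfile (the rfbrowser-pabot tag below is a placeholder):

FROM marketsquare/robotframework-browser:latest
# Add the parallel executor and its dependency, which the upstream image does not ship
RUN pip install robotframework-pabot psutil

Build it with docker build -t rfbrowser-pabot . and use rfbrowser-pabot in place of marketsquare/robotframework-browser:latest in the docker run command above, dropping the pip install from the bash -c.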

Impossible to start Symfony 5 server on Docker container (symfony serve -d)

I'm trying to create a Docker container to containerize my Symfony 5 application.
First I created a Dockerfile:
FROM php:7.4-fpm-alpine
# Update
RUN apk --no-cache update
RUN apk --no-cache add bash git
# Install Node
RUN apk --no-cache add --update nodejs npm
RUN apk --no-cache add --update python3
RUN apk --no-cache add --update make
RUN apk --no-cache add --update g++
# Install pdo
RUN docker-php-ext-install pdo_mysql
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Symfony CLI
RUN curl -sS https://get.symfony.com/cli/installer | bash && mv /root/.symfony/bin/symfony /usr/local/bin/symfony
# WORK DIR
COPY . /var/www/html
WORKDIR /var/www/html
RUN composer update
RUN composer install
RUN npm install
# Start Symfony server on Port 8000
EXPOSE 8000
RUN symfony serve -d
Then I created a docker-compose.yml file (where I simply redirect port 8000 of the container to port 8080 on my machine).
version: '3.8'
services:
  php-fpm:
    container_name: infolea
    build: ./
    ports:
      - 8080:8000
    volumes:
      - ./:/var/www/html
Then I build my image with docker-compose build and run it with docker-compose up -d.
In my browser, localhost:8080 doesn't display anything.
If I then start the Symfony server manually by typing symfony serve -d in my container's terminal, I can see my application working on localhost:8080.
What's weird is that when I check in the container's terminal whether the server has started, I get this:
[screenshot: docker container terminal]
What I want is to start my Symfony server directly, without retyping symfony serve -d.
How can I do it?
Try using CMD instead of RUN. RUN executes at image build time, so a server started there doesn't survive into the running container, whereas CMD defines the command that runs each time a container starts:
CMD ["/usr/local/bin/symfony", "local:server:start" , "--port=8000", "--no-tls"]
see https://docs.docker.com/engine/reference/builder/#cmd
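With that change, the tail of the Dockerfile would look like this (a sketch; the binary path matches the install location used in the Dockerfile above, and --no-tls serves plain HTTP, avoiding the need for the Symfony local CA inside the container):

# Start Symfony server on port 8000 when the container starts
EXPOSE 8000
CMD ["/usr/local/bin/symfony", "local:server:start", "--port=8000", "--no-tls"]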

Generate Dev Certificate inside a docker image

I have a .NET application and I wish, in production, to generate a dev certificate (self-signed).
Locally, to do this I use the following commands:
dotnet dev-certs https --clean
dotnet dev-certs https
dotnet dev-certs https --trust
So I tried 2 methods, but neither seems to work.
I have searched for the .pfx file in "/root/.aspnet/https" (actually /data/socloze/web/keys/https, because of the volume mapping), but this folder does not exist.
Method 1 : Create the certificate at image build
In the Dockerfile, I have the following:
### >>> GLOBALS
ARG ENVIRONMENT="Production"
ARG PROJECT="Mycorp.MyApp.Web"
### <<<
#--------------------------------------------------
# Build / Publish
#--------------------------------------------------
# debian buster - AMD64
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
### >>> IMPORTS
ARG ENVIRONMENT
ARG PROJECT
### <<<
ARG NUGET_CACHE=https://api.nuget.org/v3/index.json
ARG NUGET_FEED=https://api.nuget.org/v3/index.json
# Copy sources
COPY src/ /app/src
ADD common.props /app
WORKDIR /app
# Installs NodeJS to build typescripts
#RUN apt-get update -yq && apt-get upgrade -yq && apt-get install -yq curl git nano
#RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -yq nodejs build-essential
#RUN npm install -g npm
#RUN npm install
RUN apt-get update
RUN apt-get install curl
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN npm install /app/src/MyCorp.Core.Blazor/
#RUN npm install -g parcel-bundler
# Installs the required dependencies on top of the base image
# Publish a self-contained image
RUN apt-get update && apt-get install -y libgdiplus libc6-dev && dotnet dev-certs https --clean && dotnet dev-certs https && dotnet dev-certs https --trust &&\
dotnet publish --self-contained --runtime linux-x64 -c Debug -o out src/${PROJECT};
#--------------------------------------------------
# Execute
#--------------------------------------------------
# Start a new image from aspnet runtime image
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS runtime
### >>> IMPORTS
ARG ENVIRONMENT
ARG PROJECT
### <<<
#ENV DOTNET_GENERATE_ASPNET_CERTIFICATE=true
ENV ASPNETCORE_ENVIRONMENT=${ENVIRONMENT}
ENV ASPNETCORE_URLS="http://+:80;https://+:443;https://+:44390"
#ENV ASPNETCORE_URLS="http://+:80"
ENV PROJECT="${PROJECT}.dll"
# Make logs a volume for persistence
VOLUME /app/Logs
# App directory
WORKDIR /app
# Copy our build from the previous stage in /app
COPY --from=build /app/out ./
RUN apt-get update && apt-get install -y ffmpeg libgdiplus libc6-dev
# Ports
EXPOSE 80
EXPOSE 443
EXPOSE 44390
# Execute
ENTRYPOINT dotnet ${PROJECT}
Method 2 : Create the certificate by using docker-compose
The other way is to generate the certificate when the container starts; in my myappstack.yaml I have the following:
version: '3.3'
services:
  web:
    image: registry.gitlab.com/mycorp/myapp/socloze.web:1.1.1040
    command:
      - sh -c "dotnet dev-certs https --clean"
      - sh -c "dotnet dev-certs https"
      - sh -c "dotnet dev-certs https --trust"
      - sh -c "echo MYPASS | sudo -S -k update-ca-certificates"
    volumes:
      - keys-vol:/root/.aspnet
      - logs-vol:/app/Logs
      - sitemap-vol:/data/sitemap/
    networks:
      - haproxy-net
      - socloze-net
    configs:
      - source: socloze-web-conf
        target: /app/appsettings.json
    logging:
      driver: json-file
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  haproxy-net:
    external: true
  socloze-net:
    external: true
volumes:
  keys-vol:
    driver: local
    driver_opts:
      device: /data/socloze/web/keys
      o: bind
      type: none
  logs-vol:
    driver: local
    driver_opts:
      device: /data/socloze/web/logs
      o: bind
      type: none
  sitemap-vol:
    driver: local
    driver_opts:
      device: /data/sitemap
      o: bind
      type: none
configs:
  socloze-web-conf:
    external: true
But neither seems to work. I know the first method has worked before, but I can't make it work again.
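One thing worth checking in method 2: in a compose file, command is a single command to run (a string or one argv list), not a list of separate commands executed in sequence, and it only overrides the image's CMD; with a shell-form ENTRYPOINT like the one above (ENTRYPOINT dotnet ${PROJECT}), CMD arguments are ignored, so the override never runs at all. A minimal sketch that overrides the entrypoint instead, chains the steps, and then starts the app (assumptions: the container runs as root so sudo is unnecessary, the DLL name follows the Dockerfile's default PROJECT arg, and --trust is dropped because it is only supported on Windows/macOS):

services:
  web:
    image: registry.gitlab.com/mycorp/myapp/socloze.web:1.1.1040
    entrypoint: >
      sh -c "dotnet dev-certs https --clean
             && dotnet dev-certs https
             && update-ca-certificates
             && exec dotnet Mycorp.MyApp.Web.dll"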

How to run Gcloud datastore emulator in Travis-ci?

I'm having some problems running Gcloud's Datastore emulator in Travis CI.
I'm currently running it like this:
script:
- export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
- echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
- curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
- sudo apt-get update && sudo apt-get install google-cloud-sdk
- nohup gcloud beta emulators datastore start &
But this seems less than ideal.
Not sure what is wrong with this setup; as you say, it is 'less than ideal', which suggests that it works. If you want the setup steps to be cleaner, you can install google-cloud-sdk directly through the apt addon, because it's whitelisted by Travis:
dist: trusty
addons:
  apt:
    packages:
      - google-cloud-sdk
before_script:
  - gcloud beta emulators datastore start &
  - $(gcloud beta emulators datastore env-init)
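The env-init line is what points clients at the emulator: $(gcloud beta emulators datastore env-init) evaluates the export statements the emulator generates, which typically look something like this (values are illustrative):

export DATASTORE_EMULATOR_HOST=localhost:8081
export DATASTORE_PROJECT_ID=my-project

With DATASTORE_EMULATOR_HOST set, the Google Cloud client libraries send Datastore calls to the emulator instead of the live service.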

Installing rpy2 with Docker is unable to find R path

A Django application using Docker needs to install rpy2 as a dependency. Although I build an r-base container and specify it as a dependency, when installing the Django requirements I keep getting:
Collecting rpy2==2.8.3 (from -r /requirements/base.txt (line 55))
Downloading rpy2-2.8.3.tar.gz (186kB)
Complete output from command python setup.py egg_info:
Error: Tried to guess R's HOME but no command 'R' in the PATH.
How can I specify, inside Docker, where the R path is?
My server.yml looks like this:
version: '2'
services:
r:
build: ./services/r
django:
build:
context: ./myproject/
dockerfile: ./compose/django/Dockerfile
env_file:
- .env
- .env-server
environment:
- DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}#postgres:5432/${POSTGRES_USER}
depends_on:
- postgres
- r
command: /gunicorn.sh
volumes:
- ./myproject:/app
The Dockerfile for django is:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
COPY ./requirements /requirements
RUN pip install -r /requirements/production.txt \
&& pip install -r /requirements/test.txt \
&& groupadd -r django \
&& useradd -r -g django django
COPY . /app
RUN chown -R django /app
COPY ./compose/django/gunicorn.sh /gunicorn.sh
COPY ./compose/django/entrypoint.sh /entrypoint.sh
RUN sed -i 's/\r//' /entrypoint.sh \
&& sed -i 's/\r//' /gunicorn.sh \
&& chmod +x /entrypoint.sh \
&& chown django /entrypoint.sh \
&& chmod +x /gunicorn.sh \
&& chown django /gunicorn.sh
WORKDIR /app
ENTRYPOINT ["/entrypoint.sh"]
The Dockerfile for R is:
FROM r-base
It was easier to just install R inside the django container. Removing the r service and modifying the django Dockerfile by adding these lines worked:
RUN apt-get --force-yes update \
&& apt-get --assume-yes install r-base-core
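For context, a minimal sketch of where those lines fit in the django Dockerfile above (R only has to be installed before pip builds rpy2, so that setup.py finds the R executable on the PATH):

FROM python:2.7
ENV PYTHONUNBUFFERED 1
# Install R before the Python requirements so rpy2's build can find it
RUN apt-get --force-yes update \
    && apt-get --assume-yes install r-base-core
COPY ./requirements /requirements
RUN pip install -r /requirements/production.txt \
    && pip install -r /requirements/test.txt
# ... rest of the Dockerfile unchanged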
