Spinnaker Nexus Integration

I'm facing an issue while integrating Spinnaker with Nexus.
Here is my process: I build a Docker image using Jenkins and upload it to Nexus. Then I want to trigger Spinnaker pipelines when a new image becomes available on Nexus, to deploy apps on Kubernetes.
I've used these two commands:
hal config provider docker-registry enable
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--username <userName> \
--password
I'm getting the error below:
+ Get current deployment
Success
- Add the my-docker-registry account
Failure
Problems in default.provider.dockerRegistry.my-docker-registry:
! ERROR Unable to fetch tags from the docker repository:
repository/test-docker-snapshots/, Unrecognized SSL message, plaintext
connection?
? Can the provided user access this repository?
- WARNING None of your supplied repositories contain any tags.
Spinnaker will not be able to deploy any docker images.
? Push some images to your registry.
- Failed to add account my-docker-registry for provider
dockerRegistry.
Is it mandatory to have Nexus on HTTPS? I'm running it on HTTP, on an internal network only.
Please advise. Thanks.

If your Nexus repo is running on HTTP, you should set the --insecure-registry flag in your command. The final command would be as follows:
hal config provider docker-registry account add my-docker-registry \
--address <pvtIP>:9082 \
--repositories repository/<repoName> \
--insecure-registry true \
--username <userName> \
--password
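Once the account has been added successfully, remember to apply the new configuration so Spinnaker picks it up; with a standard Halyard-managed installation that is:
hal deploy apply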

Related

How to test Firestore Security Rules with Jenkins?

I'm developing some Firestore security rules locally. I use mocha to test the rules, and locally everything works. I have a Jenkins pipeline that, every time I merge a PR into develop, publishes the rules to Firebase in the cloud. What I want to do is run my unit tests within Jenkins. However, every time Jenkins calls yarn test from the pipeline, I get an error that says:
@firebase/firestore: Firestore (7.18.0): Could not reach Cloud Firestore backend. Connection failed 1 times. Most recent error: FirebaseError: [code=internal]: 13 INTERNAL: Received RST_STREAM with code 2 triggered by internal client error: Protocol error
This typically indicates that your device does not have a healthy Internet connection at the moment. The client will operate in offline mode until it is able to successfully connect to the backend.
Is there a way to run the firebase emulators from Jenkins?
Thanks!
I found a way to do that: using firebase-tools-docker, I can easily run my tests inside a Docker container that brings up the emulator suite.
The Jenkinsfile goes like this:
def jenkinsUser = 1001
def firebaseDocker = 'andreysenov/firebase-tools:9.14.0'

stage('Pull docker image') {
    sh "docker pull $firebaseDocker"
}

stage('Unit tests') {
    sh "docker run -d --rm \
        --user $jenkinsUser:$jenkinsUser \
        -p 8080:8080 \
        -v ${pwd()}:/home/node \
        --name firebase-emulators \
        $firebaseDocker \
        firebase emulators:start"
    sleep(5)
    sh "docker exec firebase-emulators /bin/bash -c 'cd tests && yarn test'"
    sh "docker stop firebase-emulators"
}
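One note: the fixed sleep(5) can be flaky on a slow agent. Assuming the Firestore emulator is the service mapped to port 8080 (it answers a plain HTTP GET once it's up) and that curl and timeout are available on the agent, you could poll for readiness instead:
    // Wait up to 60 s for the emulator to answer instead of sleeping a fixed 5 s
    sh "timeout 60 sh -c 'until curl -sf http://localhost:8080/ > /dev/null; do sleep 1; done'"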
Hope this helps 😉

Firestore authorization for Google Compute Engine for an app in a Docker container

I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found this documentation on injecting your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same command can be used to inject your credentials into the container.
Since links and their content can change over time, I will copy the steps needed to inject the credentials here.
Refer to Getting Started with Authentication for instructions
on generating, retrieving, and configuring your Service Account
credentials.
The following Docker run flags inject the credentials and
configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the
container (assumes you have already set your
GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --env (-e) flag to set the
GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e K_SERVICE=dev \
-e K_CONFIGURATION=dev \
-e K_REVISION=dev-00001 \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
gcr.io/PROJECT_ID/IMAGE
Note that the path
/tmp/keys/FILE_NAME.json
shown in the example above is a reasonable location to place your
credentials inside the container. However, other directory locations
will also work. The crucial requirement is that the
GOOGLE_APPLICATION_CREDENTIALS environment variable must match the
bind mount location inside the container.
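If you are not emulating Cloud Run, the same idea reduces to the two credential flags alone. A minimal sketch for the GCE case (the image name and key paths are placeholders, not from the question):
# Placeholders throughout; the key file must already exist on the VM
docker run \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/sa-key.json \
-v /path/on/vm/sa-key.json:/tmp/keys/sa-key.json:ro \
my-node-app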
Hope this works for you.

Does Bintray implement the Artifactory API?

I am working on a delivery pipeline within Spinnaker. Spinnaker has support for searching Artifactory for artifacts and then triggering a pipeline. I have been publishing my Maven artifacts to bintray.com and assumed that this would work for triggering my pipelines.
I've configured Spinnaker with this information:
hal config repository artifactory enable
hal config repository artifactory search add bintray \
--base-url https://dl.bintray.com/$USERNAME \
--repo maven-repo \
--groupId $GROUP_ID \
--username $USERNAME \
--password $PASSWORD
However, I'm getting errors in the igor service log saying:
2019-08-15 14:20:00.262 WARN 1 --- [RxIoScheduler-3] c.n.s.i.a.ArtifactoryBuildMonitor : Unable to query Artifactory for artifacts (HTTP 405):
I'm wondering if I am falsely assuming that Bintray implements the Artifactory API.
Does bintray.com implement the Artifactory API?
Bintray's API isn't the same as Artifactory's.
It has its own API, documented here.
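For illustration, Bintray's own REST API is rooted at api.bintray.com rather than at an Artifactory-style /api path. Something like the following (all names are placeholders, and the endpoint is quoted from memory of those docs, so verify against them) fetches a package's metadata, including its versions:
# Authenticate with your Bintray username and API key (placeholders throughout)
curl -u "$USERNAME:$API_KEY" \
"https://api.bintray.com/packages/$USERNAME/maven-repo/my-package"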

Symfony 4 app works with Docker Compose but breaks with Docker Swarm (no login, profiler broken)

I'm using Docker Compose locally with:
app container: Nginx & PHP-FPM with a Symfony 4 app
PostgreSQL container
Redis container
It works great locally, but when deployed to the development Docker Swarm cluster, I can't log in to the Symfony app.
The Swarm stack is the same as local, except for PostgreSQL which is installed on its own server (not a Docker container).
Using the profiler, I nearly always get the following error:
Token not found
Token "2df1bb" was not found in the database.
When I display the content of the var/log/dev.log file, I get these lines about my login attempts:
[2019-07-22 10:11:14] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"http://dev.ip/public/login","method":"GET"} []
[2019-07-22 10:11:14] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} []
[2019-07-22 10:11:14] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.DEBUG: Guard authenticator does not support the request. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
The only thing that looks useful here is the "Guard authenticator does not support the request." message, but I have no idea what to search for from there.
UPDATE:
Here is my docker-compose.dev.yml (removed redis container and changed app environment variables):
version: "3.7"
networks:
web:
driver: overlay
services:
# Symfony + Nginx
app:
image: "registry.gitlab.com/my-image"
deploy:
replicas: 2
restart_policy:
condition: on-failure
networks:
- web
ports:
- 80:80
environment:
APP_ENV: dev
DATABASE_URL: pgsql://user:pass#0.0.0.0/my-db
MAILER_URL: gmail://user#gmail.com:pass#localhost
Here is the Dockerfile.dev used to build the app image on development servers:
# Base image
FROM php:7.3-fpm-alpine

# Source code goes into:
WORKDIR /var/www/html

# Import Symfony + Composer
COPY --chown=www-data:www-data ./symfony .
COPY --from=composer /usr/bin/composer /usr/bin/composer

# Alpine Linux packages + PHP extensions
RUN apk update && apk add \
        supervisor \
        nginx \
        bash \
        postgresql-dev \
        wget \
        libzip-dev zip \
        yarn \
        npm \
    && apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
    && pecl install redis \
    && docker-php-ext-enable redis \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo_pgsql \
    && docker-php-ext-configure zip --with-libzip \
    && docker-php-ext-install zip \
    && composer install \
        --prefer-dist \
        --no-interaction \
        --no-progress \
    && yarn install \
    && npm rebuild node-sass \
    && yarn encore dev \
    && mkdir -p /run/nginx

# Nginx conf + Supervisor entrypoint
COPY ./dev.conf /etc/nginx/conf.d/default.conf
COPY ./.htpasswd /etc/nginx/.htpasswd
COPY ./supervisord.conf /etc/supervisord.conf

EXPOSE 80 443
ENTRYPOINT /usr/bin/supervisord -c /etc/supervisord.conf
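For completeness, the supervisord.conf referenced above typically just keeps nginx and PHP-FPM running in the foreground; a minimal sketch of such a file (an assumption, not the poster's actual config):
[supervisord]
nodaemon=true

[program:php-fpm]
; -F keeps php-fpm in the foreground so supervisord can manage it
command=php-fpm -F

[program:nginx]
command=nginx -g 'daemon off;'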
UPDATE 2:
I pulled my Docker images and ran the application using only docker-compose.dev.yml (without the docker-compose.local.yml that I'd also use locally). I was able to log in; everything was okay.
So... it works with Docker Compose locally, but not in Docker Swarm on a remote server.
UPDATE 3:
I made the dev server leave the Swarm cluster and started the services using Docker Compose. It works.
The issue is about going from Compose to Swarm. I created an issue: docker/swarm #2956
Maybe it's not your specific case, but it could help users who hit problems in Docker Swarm that are not present in Docker Compose.
I've been fighting this issue for over a week. I found that the default network for Docker Compose uses the bridge driver, while Docker Swarm uses overlay.
Later, I read in the Caveats section of the Postgres Docker image repo that there's a problem with IPVS connection timeouts in overlay networks, and it refers to this blog for solutions.
I tried the first option and changed the endpoint_mode setting to dnsrr in my docker-compose.yml file:
db:
  image: postgres:12
  # Other settings ...
  deploy:
    endpoint_mode: dnsrr
Keep in mind that there are some caveats (mentioned in the blog) to consider. However, you could try the other options.
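For reference, another of the options from that blog is to keep TCP keepalives shorter than the IPVS idle timeout (around 900 seconds) instead of switching to dnsrr. A sketch of that variant (requires an engine and compose file format with sysctls support for Swarm services; the values are illustrative):
db:
  image: postgres:12
  sysctls:
    # Send keepalives well before IPVS drops idle connections
    - net.ipv4.tcp_keepalive_time=600
    - net.ipv4.tcp_keepalive_intvl=30
    - net.ipv4.tcp_keepalive_probes=3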
Also, maybe you'll find something useful in this issue, as they faced the same problem.

Register Designate with Keystone

I have followed this guide to set up Designate:
http://docs.openstack.org/developer/designate/install/ubuntu.html
The guide has the exact workflow I was looking for. I need to set up Designate using the PowerDNS backend, and the guide provides a way of doing that, but it lacks detail when it comes to registering Designate with Keystone.
Could someone please help me with this?
Now I am trying to access http://IP.Address:9001/v2/command, and it gives the following error:
Authentication required
Error log from designate-api:
2015-10-20 03:58:36.917 20993 WARNING keystoneclient.middleware.auth_token [-] Unable to find authentication token in headers
2015-10-20 03:58:36.917 20993 INFO keystoneclient.middleware.auth_token [-] Invalid user token - rejecting request
2015-10-20 03:58:36.917 20993 INFO eventlet.wsgi [-] 61.12.45.30 - - [20/Oct/2015 03:58:36] "GET /v1/ HTTP/1.1" 401 217 0.000681
I found a way to do this. Here are the detailed steps.
Registering Designate with Keystone:
Keystone setup:
apt-get install keystone
Edit /etc/keystone/keystone.conf and change the [database] section:
connection = mysql://keystone:keystone@localhost/keystone
rm /var/lib/keystone/keystone.db
$ mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'keystone';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'keystone';
mysql> exit
pip install mysql-python
su -s /bin/sh -c "keystone-manage db_sync" keystone
Execute the following command and note down the value:
openssl rand -hex 10
Edit /etc/keystone/keystone.conf and change the [DEFAULT] section, replacing ADMIN_TOKEN with the results of the command:
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = ADMIN_TOKEN
Configure the log directory. Edit the /etc/keystone/keystone.conf file and update the [DEFAULT] section:
[DEFAULT]
...
log_dir = /var/log/keystone
service keystone restart
User, tenant, service, and endpoint creation:
export OS_SERVICE_TOKEN=token_value
(please edit the token value generated above)
export OS_SERVICE_ENDPOINT=http://localhost:35357/v2.0
keystone tenant-create --name service --description "Service Tenant" --enabled true
keystone service-create --type dns --name designate --description="Designate"
keystone endpoint-create --service designate --publicurl http://127.0.0.1:9001/v1 --adminurl http://127.0.0.1:9001/v1 --internalurl http://127.0.0.1:9001/v1
keystone user-create --name dnsaas --tenant service --pass dnsaas --enabled true
keystone role-create --name=admin
keystone user-role-add --user dnsaas --tenant service --role admin
apt-get install python-designateclient
Create an openrc file:
$ vi openrc
export OS_USERNAME=dnsaas
export OS_PASSWORD=dnsaas
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://localhost:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
export OS_REGION_NAME=RegionOne
Source the openrc file:
. openrc
Note: start or restart the designate-central and designate-api services.
Run the designate domain-list command:
designate domain-list
If the above command does not return any errors, you are fine to go.
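As an extra sanity check against the API itself, you can request a token and call the v1 endpoint directly (a sketch using the same old keystone CLI as above; adjust the host to your setup):
# Extract the token id from the keystone CLI table output
TOKEN=$(keystone token-get | awk '/ id / {print $4}')
curl -H "X-Auth-Token: $TOKEN" http://127.0.0.1:9001/v1/domains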
