Push a package built in the GitLab CI Runner to a Nexus repository

In GitLab issue #19095 it was decided to make GitLab itself a package repository, but until that is done, what should I do for the task "try GitLab instead of Jenkins+Nexus"? From where can I push a package to Nexus?
from .gitlab-ci.yml
from the Runner, using a package uploaded via the artifacts parameter of .gitlab-ci.yml (https://about.gitlab.com/2015/11/22/gitlab-8-2-released/)
from a Docker image, perhaps using Maven
via a webhook
on a release tag?

The best answer I think you're going to find is that you need to script it in your .gitlab-ci.yml:
NEXUS_USERNAME=admin
NEXUS_PASSWORD=admin123
NEXUS_SERVER=server.com/yourserver
NEXUS_REPOSITORY=raw
UPLOAD_FILE=yourpackage.tar.gz   # assumed: the package produced by your build
echo "Uploading ${UPLOAD_FILE} to ${NEXUS_SERVER}"
curl -v -u ${NEXUS_USERNAME}:${NEXUS_PASSWORD} --upload-file ${UPLOAD_FILE} http://${NEXUS_SERVER}/repository/${NEXUS_REPOSITORY}/${UPLOAD_FILE}
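For completeness, here is a minimal .gitlab-ci.yml sketch wiring that upload into a deploy job; the job names, the mypackage.tar.gz artifact path, and the NEXUS_* CI/CD variables are all illustrative assumptions, not a confirmed setup:

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - make package   # assumed to produce mypackage.tar.gz
  artifacts:
    paths:
      - mypackage.tar.gz

push-to-nexus:
  stage: deploy
  script:
    # NEXUS_* values come from GitLab CI/CD variables (assumed)
    - curl -v -u ${NEXUS_USERNAME}:${NEXUS_PASSWORD} --upload-file mypackage.tar.gz "http://${NEXUS_SERVER}/repository/${NEXUS_REPOSITORY}/mypackage.tar.gz"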

Related

How can I create a Dockerfile that installs the dotnet SDK for an existing Jenkins container?

I'm absolutely new to Docker and Jenkins, and on top of that I'm a sophomore in the software world. First, I would like to describe our system. We are using CentOS 7, and I installed Jenkins on Docker as a container. After that I tried using dotnet commands such as dotnet build in Jenkins, but I got errors ("dotnet: not found"). I guess I must install the dotnet SDK for Jenkins on Docker using a Dockerfile, but I could not create the Dockerfile properly; I always got errors. Can you share a Dockerfile or a similar issue with me?
After a quick search I found this Dockerfile and also this post.
This could work for you; I think you should be able to salvage the useful bits for yourself.
But if there is an option, I'd recommend that your Jenkins agent just run the docker build of the dedicated .NET Core container through the Docker socket (as an idea, of course). For reference
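A minimal sketch of such a Dockerfile, assuming the official Debian-based jenkins/jenkins:lts image and the .NET 6 SDK (adjust the Debian release and SDK version to your setup):

FROM jenkins/jenkins:lts

# Switch to root to install system packages
USER root

# Register Microsoft's package feed and install the .NET SDK
RUN apt-get update \
    && apt-get install -y wget \
    && wget https://packages.microsoft.com/config/debian/11/packages-microsoft-prod.deb \
    && dpkg -i packages-microsoft-prod.deb \
    && apt-get update \
    && apt-get install -y dotnet-sdk-6.0 \
    && rm -rf /var/lib/apt/lists/*

# Drop back to the unprivileged Jenkins user
USER jenkins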

How can I automatically accept the Artifactory EULA?

I have been using Artifactory OSS and set it up with a deploy script. The deploy script also uploads some images with curl to a generic repo immediately after setup. Now I need to upload Docker images as well, so I made the switch to Artifactory JCR. JCR won't accept my curl push until I have accepted the EULA. Is it possible to accept it automatically? I have been looking for a EULA flag in the files and the database, but without success.
My environment is a docker container with artifactory-jcr:6.17.0 in Kubernetes.
One option is to use this curl in the script, after JFrog Container Registry is installed:
curl -XPOST -vu username:password http://${ArtifactoryURL}/artifactory/ui/jcr/eula/accept
For scripted deployments, you can accept the JCR EULA in a YAML configuration file prepared ahead of time. As JCR is based on Artifactory, the configuration files are generally similar.
Create a YAML file at $JCR_HOME/etc/artifactory.config.import.yml
Add the below:
GeneralConfiguration:
  eula:
    accepted: true
OnboardingConfiguration:
  repoTypes:
    - docker
    - helm
Make sure the content is valid YAML (indentation matters) before writing it to the file.
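A hedged sketch of feeding that file to the JCR container from the question's setup; the /var/opt/jfrog/artifactory/etc path and the image name are assumptions based on the Artifactory 6.x Docker layout:

# Mount the prepared config so JCR picks it up on first startup
docker run -d --name artifactory-jcr \
  -p 8081:8081 \
  -v "$(pwd)/artifactory.config.import.yml:/var/opt/jfrog/artifactory/etc/artifactory.config.import.yml" \
  docker.bintray.io/jfrog/artifactory-jcr:6.17.0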

How to utilize .ebextensions while using CodePipeline

I'm using CodePipeline to deploy whatever is on the master branch of the Git repository to Elastic Beanstalk.
I followed this tutorial to extend the default nginx configuration (specifically the max body size): https://medium.com/swlh/using-ebextensions-to-extend-nginx-default-configuration-in-aws-elastic-beanstalk-189b844ab6ad
However, because I'm not using the standard eb deploy command, I don't think the CodePipeline flow is going into the .ebextensions directory and doing the things it's supposed to do.
Is there a way to use CodePipeline (so I can have CI/CD from master) and still get the benefits of .ebextensions?
Does this work if you use the eb deploy command directly? If yes, then I would use the pipeline execution history to find a recent artifact, download it, and test it with eb deploy.
If CodePipeline's Elastic Beanstalk job worker did not play well with .ebextensions, I would consider it completely useless for deploying to Elastic Beanstalk.
I believe there is some problem with the .ebextensions themselves. You can check these log files to see if something goes wrong during deployment:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
All the config files under .ebextensions are executed in order of precedence when deploying to Elastic Beanstalk. It doesn't matter whether you use CodePipeline or eb deploy; every file in the .ebextensions directory will be executed, so you don't have to worry about that.
Be careful about the platform you're using: since "64bit Amazon Linux 2 v5.0.2", you have to use .platform instead of .ebextensions for the nginx configuration:
Create a .platform directory instead of .ebextensions
Create the subfolders and the proxy.conf file, like in this path: .platform/nginx/conf.d/proxy.conf
In proxy.conf write what you need; in the case of the request body size, it's just client_max_body_size 20M; (see the minimal file below)
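For the body-size case described in those steps, the whole file can be as small as:

# .platform/nginx/conf.d/proxy.conf  (path as described above)
client_max_body_size 20M;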
I resolved the problem. You need to include the .ebextensions folder in your deployment artifact.
I was only copying the dist files, so I needed to include this too:
- .ebextensions/**/*
Example:
## Required mapping. Represents the buildspec version. We recommend that you use 0.2.
version: 0.2

phases:
  ## install: install dependencies you may need for your build
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - echo Installing Nest...
      - npm install -g @nestjs/cli
  ## pre_build: final commands to execute before build
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  ## build: actual build commands
  build:
    commands:
      # Build your app
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - npm run build
      ## Clean up node_modules to keep only production dependencies
      # - npm prune --production
  ## post_build: finishing touches
  post_build:
    commands:
      - echo Build completed on `date`

# Include only the files required for your application to run.
artifacts:
  files:
    - dist/**/*
    - package.json
    - node_modules/**/*
    - .ebextensions/**/*

And the config file /.ebextensions/.node-settings.config:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm run start:prod"

Artifactory: Converting remote repo to local repo

My employer has been misusing Bintray as our binary repository for some time. We are finally moving to Artifactory instead and closing down Bintray. But this seems to be an almost impossible task. There is no way of exporting Bintray repos to a zip. Downloading the repos means manually downloading each file from the UI or through their API. I have tried two approaches for automation:
1) Using wget to crawl our Bintray, like this:
wget -e robots=off -o ~/wget.log -w 1 -m -np --user --password "https://.bintray.com"
which yielded all of the files in the repos. But this only solves half the problem: I couldn't find out how to import the files into a repository in Artifactory (all the repos are over 100 MB each and therefore can't be uploaded, for some reason).
2) I set the Bintray repos up as remote repositories and enabled active replication. That seems to have worked for now, but I don't know whether the artifacts will be removed when the Bintray account goes away, or even whether they are actually stored in Artifactory. I would therefore like to convert the remote repo to a local repo, to make sure the content is permanently stored in Artifactory. Is there a way of doing this? If so, how?
I'll try to address both of your questions below.
What do you mean you can't upload more than 100 MB? Which version of Artifactory are you using, an on-prem or a SaaS-based installation? How are you trying to upload your files to Artifactory? Have you tried importing the content using Artifactory's import feature (Admin --> Import & Export --> Repository Import)?
It sounds like you are using the UI for the upload; if so, you can configure the maximum upload size on the Admin --> General Configuration page.
If you mean that you have all of the content from Bintray cached in your remote repository cache in Artifactory just use the "Copy" or "Move" option and move the content to a local repository. This will ensure that all of the content is stored locally.
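The same copy can also be scripted against Artifactory's REST copy API; a hedged sketch, assuming a remote repository named bintray-remote (whose cache is bintray-remote-cache) and a local repository named libs-local, both names illustrative:

# Copy everything from the remote's cache into a local repository
curl -u admin:password -X POST \
  "http://${ArtifactoryURL}/artifactory/api/copy/bintray-remote-cache/?to=/libs-local/"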

How to deploy a Java project using a war file on cloudControl

I have my Java project in an Hg repository. How can I use this code, or convert it to a Git repo, to be able to deploy on cloudControl?
Is there an option for me to deploy a .war file instead of uploading and compiling the entire file structure? If so, does it matter whether the war file was built using a Tomcat server or a GlassFish server?
Your help in answering these questions will be valuable information for me. I appreciate your help in advance.
Best regards.
If you want to keep all Mercurial branches, tags and history, use fast-export. Otherwise just create a Git repository from scratch:
git init; git add .; git commit -am "Initial commit"
Deploying a prebuilt war file is not the recommended way. You should push to the repository (via Git or cctrlapp) and let the cloudControl platform take care of the build process. Anyway, you can still use Maven to download the war file; you can find some examples here.
Apart from this, you have to provide an embedded Jetty or Tomcat runner and specify the start command in the Procfile:
Jetty:
web: java -jar target/dependency/jetty-runner.jar --port $PORT target/YOUR_WAR.war
Tomcat:
web: java -jar target/dependency/webapp-runner.jar --port $PORT target/YOUR_WAR.war
Keep in mind that the war file should be built independently of any application server.
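For the "runner" part, one common approach is to have the maven-dependency-plugin copy the runner jar into target/dependency, which is where the Procfiles above expect it. A minimal pom.xml sketch; the jetty-runner version shown is illustrative:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>copy</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-runner</artifactId>
            <version>9.4.9.v20180320</version> <!-- illustrative version -->
            <destFileName>jetty-runner.jar</destFileName>
          </artifactItem>
        </artifactItems>
        <!-- matches target/dependency/jetty-runner.jar in the Procfile -->
        <outputDirectory>target/dependency</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>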
