How to mount a local directory into a Concourse pipeline job?

I am trying to connect a local git repository to Concourse so that I can perform automated testing in my local environment even before committing the code to the Git repo. In other words, I want to perform some tasks before git commit using a Concourse pipeline, for which I want to mount my local working directory into the Concourse pipeline jobs.

You can't run a pipeline or a complete job with a local repository, only a task. But that's OK: a job's main goal is to set up inputs and outputs for a task, and you will be providing them locally.
The command is fly execute, and the complete documentation is here: https://concourse-ci.org/tasks.html#running-tasks
To run a task locally, the task must be defined in a separate YAML file, not inline in your pipeline.
The basic command to run the task run-tests.yml with the input repository set to the current directory is:
fly -t my_target execute --config run-tests.yml --input repository=.
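For reference, here is a minimal sketch of what run-tests.yml could look like. The container image and the test command are placeholders (a Node.js project is assumed); only the input name repository has to match what you pass to --input:

platform: linux

image_resource:
  type: registry-image
  source:
    repository: node
    tag: "18"

inputs:
  - name: repository   # populated from --input repository=. when run with fly execute

run:
  path: sh
  args:
    - -exc
    - |
      cd repository
      npm test

fly execute uploads the contents of the directory you map to the repository input, runs the task in a container on the target Concourse, and streams the output back to your terminal, so you can test uncommitted changes.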

Related

Azure pipeline Deploy to Amazon Lambda seldom fails on build

This doesn't happen very often, but I'd like to know why the Azure pipeline sometimes fails. Here is the error:
Here are the raw logs from Azure:
The task also requires permissions to upload your Lambda function or serverless application content to the specified Amazon S3 bucket. Depending on the size of the application bundle, either putObject or the S3 multi-part upload APIs may be used.
==============================================================================
Configuring credentials for task
...configuring AWS credentials from service endpoint '0fccf46b-rAnDom-GuId-f8a4f1351ac1'
...endpoint defines standard access/secret key credentials
Processing Lambda project at D:\a\1\s\FooService\FooProject.Services.Foo
Reading existing aws-lambda-tools-defaults.json
Clearing out profile foo so task credentials will be used.
Configuring region for task
...configured to use region ap-southeast-1, defined in task.
[command]"C:\Program Files\dotnet\dotnet.exe" tool install -g Amazon.Lambda.Tools
Since you just installed the .NET Core SDK, you will need to reopen the Command Prompt window before running the tool you installed.
You can invoke the tool using the following command: dotnet-lambda
Tool 'amazon.lambda.tools' (version '4.0.0') was successfully installed.
[command]"C:\Program Files\dotnet\dotnet.exe" restore
Restore completed in 63.09 ms for D:\a\1\s\Infrastructure\FooProject.Infrastructure\FooProject.Infrastructure.csproj.
Restore completed in 63.07 ms for D:\a\1\s\FooService\FooProject.Services.Foo\FooProject.Services.Foo.csproj.
Beginning Serverless Deployment
Performing package-only build of serverless application, output template will be placed in D:\a\1\a\serverless.template
[command]"C:\Program Files\dotnet\dotnet.exe" lambda package-ci -ot D:\a\1\a\serverless.template --region ap-southeast-1 --s3-bucket foo-dev-bucket --disable-interactive true
Could not execute because the specified command or file was not found.
Possible reasons for this include:
  * You misspelled a built-in dotnet command.
  * You intended to execute a .NET Core program, but dotnet-lambda does not exist.
  * You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
[error]Error: The process 'C:\Program Files\dotnet\dotnet.exe' failed with exit code 1
[section]Finishing: Deploy .NET Core to Lambda: fooProject-api-foo
Here is the setup:

How to utilize .ebextension while using CodePipeline

I'm using CodePipeline to deploy whatever is on the master branch of the git repo to Elastic Beanstalk.
I followed this tutorial to extend the default nginx configuration (specifically the max-body-size): https://medium.com/swlh/using-ebextensions-to-extend-nginx-default-configuration-in-aws-elastic-beanstalk-189b844ab6ad
However, because I'm not using the standard eb deploy command, I don't think the CodePipeline flow is going into the .ebextensions directory and doing the things it's supposed to do.
Is there a way to use CodePipeline (so I can have CI/CD from master) as well as utilize the benefits of .ebextensions?
Does this work if you use the eb deploy command directly? If yes, then I would try using the pipeline execution history to find a recent artifact to download and test with the eb deploy command.
If CodePipeline's Elastic Beanstalk Job Worker did not play well with .ebextensions, it would be completely useless for deploying to Elastic Beanstalk.
I believe there is some problem with the ebextensions themselves. You can investigate the execution in these log files to see if something is going wrong during deployment:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
All the config files under .ebextensions will be executed in order of precedence when deploying to Elastic Beanstalk. So it doesn't matter whether you are using CodePipeline or eb deploy: all the files in the .ebextensions directory will be executed. You don't have to worry about that.
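For illustration, this is roughly what such an .ebextensions config can look like on the older Amazon Linux 1 platforms to raise the nginx body-size limit (the file name .ebextensions/nginx-proxy.config and the 20M value are placeholders; nginx includes every *.conf file dropped into /etc/nginx/conf.d/):

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;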
Be careful about the platform you're using: on "64bit Amazon Linux 2 v5.0.2", instead of .ebextensions you have to use .platform.
Create a .platform directory instead of .ebextensions.
Create the subfolders and the proxy.conf file at this path: .platform/nginx/conf.d/proxy.conf
In proxy.conf write what you need; for the request body size it is just client_max_body_size 20M;
I resolved the problem. You need to include the .ebextensions folder in your deploy.
I was only copying the dist files, so I also needed to include:
- .ebextensions/**/*
Example:
## Required mapping. Represents the buildspec version. We recommend that you use 0.2.
version: 0.2

phases:
  ## install: install dependencies you may need for your build
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - echo Installing Nest...
      - npm install -g @nestjs/cli
  ## pre_build: final commands to execute before build
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  ## build: actual build commands
  build:
    commands:
      # Build your app
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - npm run build
      ## Clean up node_modules to keep only production dependencies
      # - npm prune --production
  ## post_build: finishing touches
  post_build:
    commands:
      - echo Build completed on `date`

# Include only the files required for your application to run.
artifacts:
  files:
    - dist/**/*
    - package.json
    - node_modules/**/*
    - .ebextensions/**/*
And the config file /.ebextensions/.node-settings.config:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm run start:prod"

The dotnet sonarscanner is running a node command for some reason and throwing "Failure during analysis, Node.js command to start eslint-bridge"

I'm getting a non-fatal exception while running the dotnet sonarscanner utility to send data to our SonarQube instance.
During the "dotnet sonarscanner end" command, an exception is thrown: "Failure during analysis, Node.js command to start eslint-bridge was: /usr/bin/node /builds/app-namespace/app-name/.sonarqube/out/.sonar/.sonartmp/eslint-bridge-bundle/node_modules/eslint-bridge/bin/server 44002
java.lang.IllegalStateException: Failed to start server (10s timeout)"
I'm currently running the command in an Alpine Docker container with:
node : v10.14.2
dotnetcore: 2.2.3
The node and npm commands are available on the PATH, and I have also specified sonar.nodejs.executable in the SonarQube XML config.
Additionally, what is the Node package used for in relation to a dotnet project?
The content is still being deployed to our SonarQube instance, but I would like to understand the cause of the exception.
For C#/VB.NET code, the code analysis takes place in the build step because the SonarC#/VB.NET rules are written as Roslyn rules.
However, for most other languages analysed by SonarQube/Cloud, the analysis takes place in the scanner end step. The scanner downloads the language plugins from the SonarQube/Cloud server and calls each one to give it the chance to analyse any files it can handle.
The SonarJS plugin works with ESLint, which runs on Node.js; in other words, Node is being used to analyse the JavaScript files in your project.

When your build fails in an automated process

I have a script that does the 'build' on several servers once I run it. I am now trying to figure out how to know if a build failed. For example, the script will run the 3 commands shown below; how would I know if my pre-build failed?
During the build process I have to run the following commands:
./ant pre-build
./ant install1
./ant post-build
I can use ssh, Puppet, etc. on my Red Hat machine, and don't have anything more than that. Is there an application or management tool that will let me monitor my build process in a UI?
You can check the return code of the ant executable, as mentioned in the Running Apache Ant documentation:
the ant start up scripts (in their Windows and Unix version) return the return code of the java program. So a successful build returns 0, failed builds return other values.
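In the driving script, that exit code is all you need. Here is a minimal sketch of a wrapper that runs the three phases from the question and stops at the first failure (the ./ant wrapper and target names are taken from the question; everything else is illustrative):

#!/bin/sh
# Run each Ant phase in order; ant exits with 0 on BUILD SUCCESSFUL
# and a non-zero code on BUILD FAILED, so we stop at the first failure.
for target in pre-build install1 post-build; do
    ./ant "$target"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "Step '$target' failed with exit code $status" >&2
        exit "$status"
    fi
done
echo "All build steps succeeded"

Whatever launches this script on the remote servers (ssh, Puppet, or a CI server such as Jenkins) can then inspect the script's own exit code to decide whether the build passed.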

Call the build script present on another machine

We have a build script present on our build machine.
How can I run this build script from my machine using Ant?
Both machines are on the same network.
Is there any free tool with command-line support to call a build script on a remote machine?
Please note: the build script cannot be run manually.
Please ask if any more details are required.
The sshexec task can do the work. Obviously Ant should be installed on the build machine, but you have to install an SSH server on the build machine too.
Here is a piece of build.xml for your local machine:
<sshexec host="buildmachine"
         username="builduser"
         password="somepassword"
         command="ant -f /path/to/the/build.xml" />
