Building different images for each environment with GitLab CI Auto DevOps - Next.js

Dockerfiles accept environment variables via --build-arg. These variables are mandatory for Next.js, are used for static pages (to call remote APIs), and end up hard-coded in the built image.
GitLab CI Auto DevOps has a variable for passing these args (AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS), but it only works if you use a single environment/image. With multiple environments (staging and production) the URLs differ (https://staging.exmpl.com and https://www.exmpl.com).
How do I have to modify GitLab Auto DevOps so that two different images get built?
In the CI/CD settings my AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS is set to:
--build-arg=API_URL=https://staging.exmpl.at/backend --build-arg=NEXT_PUBLIC_API_URL=https://staging.exmpl.at/backend
# $NEXT_PUBLIC_API_URL is set there as well
Currently this is my complete gitlab-ci.yml:
include:
  - template: Auto-DevOps.gitlab-ci.yml

# added vars for build
build:
  stage: build
  variables:
    API_URL: $NEXT_PUBLIC_API_URL
    NEXT_PUBLIC_API_URL: $NEXT_PUBLIC_API_URL
How can I build two images without "leaving" Auto DevOps? I assume I have to customize the build stage.
Another idea is to create a second Git repository called production with the production URL set for $NEXT_PUBLIC_API_URL:
Staging gets built and runs tests.
If successful, it gets published.
The staging repo content is copied to the production repo.
The production repo gets built (with the production URL), tested, and then published too.
Then I have two images.
Does anyone have a better idea?
Thank you in advance

There are a couple of ways you can do this. It's pretty easy if you only care about the build job.
One way would be to make a second job that extends: from build:
build:
  variables:
    MY_ENV: staging

build production:
  extends: build
  variables:
    MY_ENV: production
Another way might be to add a parallel:matrix: key to the build job
build:
  parallel:
    matrix:
      - MY_ENV: staging
      - MY_ENV: production
Keep in mind, however, that if any jobs downstream of build depend on its artifacts, you'll need to manage those as well.
For example:
build:
  variables:
    MY_ENV: staging

build production:
  extends: build
  variables:
    MY_ENV: production

# some other job inherited from a template that depends on `build`
# we also want two of these jobs for each environment
downstream:
  variables:
    MY_ENV: staging

# extend it for production
downstream production:
  extends: downstream
  dependencies: # make sure we get artifacts from correct upstream build job
    - build production
  variables:
    MY_ENV: production
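Putting this together for the original question, a minimal sketch (reusing AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS and the staging URL from the question; the production URL is an assumption, and you may also need to differentiate the image tags so the two builds don't overwrite each other) could look like:

include:
  - template: Auto-DevOps.gitlab-ci.yml

# staging image: same args as configured in the CI/CD settings
build:
  variables:
    AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS: >-
      --build-arg=API_URL=https://staging.exmpl.at/backend
      --build-arg=NEXT_PUBLIC_API_URL=https://staging.exmpl.at/backend

# production image: same job, different build args (production URL is assumed)
build production:
  extends: build
  variables:
    AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS: >-
      --build-arg=API_URL=https://www.exmpl.at/backend
      --build-arg=NEXT_PUBLIC_API_URL=https://www.exmpl.at/backend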

Related

gcloud deploy with SQLite database without it being wiped out

I have an issue when trying to deploy an application to GCP which uses an SQLite database in the backend. My problem is that on each deployment the database is wiped out, and I can't find a way to make it permanent.
Assume that database.sqlite is placed in api/db/database.sqlite. The initial deployment works fine, as the database is created and placed in the /db folder. But when I deploy the app again, the folder is wiped out and the database with it. I have also tried placing it in a folder outside of the api folder (e.g. /database in the root), but again the folder is wiped out.
I don't want to create/migrate the db on each build and pass it as an artifact to the deploy job.
# gitlab-ci.yaml
image: node:latest

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - yarn
    - yarn test:api
    - yarn test:ui
    - yarn build
    # this ran the first time to create the db; then I removed it, but the database is gone
    # - yarn db:migrate --env production
  artifacts:
    paths:
      - api/dist/
      # this ran the first time to upload the db
      # - api/db
      - ui/dist/
    expire_in: 1 day

deploy:
  only:
    - master
  stage: deploy
  image: google/cloud-sdk:alpine
  dependencies:
    - build
  script:
    - gcloud app deploy --project=PROJECT_ID ui-app.yaml api-app.yaml
# api-app.yaml
service: service-name
runtime: nodejs
env: flex
skip_files:
  - node_modules/
  - ui/
manual_scaling:
  instances: 1
resources:
  volumes:
    - name: ramdisk1
      volume_type: tmpfs
      size_gb: 1
  memory_gb: 6
  disk_size_gb: 10
Ideally I need a folder somewhere in the instance which will not be wiped out on each deployment. I am sure I am missing something. Any ideas?
You simply can't! App Engine, whether flex or standard, is serverless and stateless. Your volume type is explicitly tmpfs, a "TeMPorary FileSystem".
App Engine flex instances are restarted at least once a week for patching and updates of the underlying server, so you will lose your database at least every week.
When you deploy a new version, a new instance is created from scratch, and thus your memory is empty when the instance starts.
Serverless products are stateless. If you want to persist your data, you need to store it outside of the serverless product: on a Compute Engine instance, for example, with a persistent disk, which is, as the name says, "persistent".
If you use an ORM, you can also easily switch from SQLite to MySQL or PostgreSQL, and thus leverage a managed Cloud SQL database for your use case.
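For example, with App Engine flex the api-app.yaml from the question could attach a Cloud SQL instance instead of shipping an SQLite file; a rough sketch (the instance connection name is a placeholder you would replace with your own):

# api-app.yaml (sketch)
service: service-name
runtime: nodejs
env: flex
beta_settings:
  # format: <PROJECT_ID>:<REGION>:<INSTANCE_NAME>; the app then connects via the /cloudsql unix socket
  cloud_sql_instances: PROJECT_ID:REGION:INSTANCE_NAME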

Cannot find file './aws-exports' in './src'

I'm on the third module of this AWS tutorial to build a React app with AWS Amplify and GraphQL, but the build keeps breaking. When I ran amplify push --y, the CLI generated ./src/aws-exports.js and added the same file to the .gitignore. So I'm not surprised the build is failing, since that file isn't included when I push my changes.
So I'm not sure what to do here. Considering it's automatically added to the .gitignore, I'm hesitant to remove it.
Any suggestions?
I'm assuming you are trying to build your app in a CI/CD environment?
If that's the case then you need to build the backend part of your amplify app before you can build the frontend component.
For example, my app builds from the AWS Amplify console, and in my build settings I have:
version: 0.1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install --frozen-lockfile
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*
Note that the backend builds first, with the amplifyPush --simple command. This is what generates the aws-exports.js file.
The 'aws-exports.js' file gets created automatically when AWS Amplify runs the CI/CD deployment build process and gets configured with the appropriate settings for the environment you are deploying to.
And for this reason it is included in the .gitignore: you don't want your local test configuration to be used in your production deployment, for example.
As per Matthe's answer above, the file should be generated when the build script runs the 'amplifyPush' command. For some reason this is not working for me at the moment, though!
AWS added support to automatically generate the aws-exports.js at build time to avoid getting the error: https://docs.aws.amazon.com/amplify/latest/userguide/amplify-config-autogeneration.html

How to use GitLab CI to test whether a Java Maven project can be built and run under multiple JDKs?

Some other CI systems, for example Travis, support testing against multiple JDKs (e.g. https://blog.travis-ci.com/support_for_multiple_jdks).
However, I'm not sure how to do this in GitLab CI.
Assume I have a Java project and I want to make sure it both builds and runs correctly under JDK 8 and JDK 11. How can I do this in GitLab CI?
Many thanks!
One way to do this would be to define pipeline jobs with different images that provide the required dependencies. You can use any public images from Docker Hub. After a quick search I chose codenvy/jdk8_maven3_tomcat8 (JDK 8 + Maven) and appinair/jdk11-maven (JDK 11 + Maven) for my YAML example; I'm not sure they will work for you, though.
tests_jdk8:
  image: codenvy/jdk8_maven3_tomcat8
  script:
    - <your mvn test/build script>

tests_jdk11:
  image: appinair/jdk11-maven
  script:
    - <your mvn test/build script>
If you can dockerize your environment you can use your own custom images instead. Or you can write a before_script that installs your project's requirements into a plain OS image; ideally this would mirror your existing production environment.
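A sketch of that before_script idea, assuming an Ubuntu base image and the Ubuntu package names for the JDK and Maven:

tests_jdk11_custom:
  image: ubuntu:22.04
  before_script:
    # install the project requirements into a plain OS image
    - apt-get update && apt-get install -y --no-install-recommends openjdk-11-jdk maven
  script:
    - mvn -B verify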
If you already have machines where you test your project, you can always connect them as GitLab shell/Docker runners and run your tests on a custom runner.
It all depends on what resources you have and what you want to achieve.
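On newer GitLab versions another option is a single job with parallel:matrix, so each JDK runs as its own parallel job; the image tags below are assumptions, so pick tags from Docker Hub that match your JDKs:

tests:
  parallel:
    matrix:
      - MAVEN_IMAGE: ["maven:3.8-openjdk-8", "maven:3.8-openjdk-11"]
  image: $MAVEN_IMAGE # each matrix entry runs with its own Maven/JDK image
  script:
    - mvn -B verify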

How to utilize .ebextensions while using CodePipeline

I'm using CodePipeline to deploy whatever is on the master branch of the Git repository to Elastic Beanstalk.
I followed this tutorial to extend the default nginx configuration (specifically the max-body-size): https://medium.com/swlh/using-ebextensions-to-extend-nginx-default-configuration-in-aws-elastic-beanstalk-189b844ab6ad
However, because I'm not using the standard eb deploy command, I don't think the CodePipeline flow is going into the .ebextensions directory and doing the things it's supposed to do.
Is there a way to use CodePipeline (so I can have CI/CD from master) and still get the benefits of .ebextensions?
Does this work if you use the eb deploy command directly? If yes, then I would try using the pipeline execution history to find a recent artifact to download and test with the eb deploy command.
If CodePipeline's Elastic Beanstalk Job Worker does not play well with ebextensions, I would consider it completely useless to deploy to Elastic Beanstalk.
I believe there is some problem with the ebextensions themselves. You can investigate the execution in these log files to see if something is going wrong during deployment:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
All the config files under .ebextensions are executed, in order of precedence, while deploying to Elastic Beanstalk. So it doesn't matter whether you use CodePipeline or eb deploy: all the files in the .ebextensions directory will be executed. You don't have to worry about that.
Be careful about the platform you're using: on "64bit Amazon Linux 2 v5.0.2" you have to use .platform instead of .ebextensions.
Create a .platform directory instead of .ebextensions.
Create the subfolders and the proxy.conf file at this path: .platform/nginx/conf.d/proxy.conf
In proxy.conf write what you need; for the request body size it is just client_max_body_size 20M;
I resolved the problem. You need to include the .ebextensions folder in your deployment.
I was only copying the dist files, so I needed to include this as well:
- .ebextensions/**/*
Example:
## Required mapping. Represents the buildspec version. We recommend that you use 0.2.
version: 0.2

phases:
  ## install: install dependencies you may need for your build
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - echo Installing Nest...
      - npm install -g @nestjs/cli
  ## pre_build: final commands to execute before build
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  ## build: actual build commands
  build:
    commands:
      # Build your app
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - npm run build
      ## Clean up node_modules to keep only production dependencies
      # - npm prune --production
  ## post_build: finishing touches
  post_build:
    commands:
      - echo Build completed on `date`

# Include only the files required for your application to run.
artifacts:
  files:
    - dist/**/*
    - package.json
    - node_modules/**/*
    - .ebextensions/**/*
And the config file /.ebextensions/.node-settings.config:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm run start:prod"

How to deploy a Docker image to a GCP VM

I'm trying to deploy a simple R Shiny app, containerized in a Docker image, onto a virtual machine hosted by Google Cloud Platform, but I'm having problems.
The files are stored in a GitHub repo and the Docker image is built using a trigger on GCP Cloud Build. The Dockerfile is based on rocker/shiny.
The build is triggered correctly and starts to run, but it keeps timing out after 10 minutes.
TIMEOUT ERROR: context deadline exceeded
Is there a command I can add to the Dockerfile to extend the build time, or is my Dockerfile wrong?
You can extend the timeout with a Cloud Build config (cloudbuild.yaml). The default timeout for a build is 10 minutes. Note that you can define timeouts for each step as well as for the entire build: https://cloud.google.com/cloud-build/docs/build-config
For your app, the cloudbuild.yaml would look something like this:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--tag=gcr.io/$PROJECT_ID/linear', '.'] # build from Dockerfile
images: ['gcr.io/$PROJECT_ID/linear'] # push tagged images to Container Registry
timeout: '1200s' # extend timeout for build to 20 minutes
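If the Docker build itself is the slow part, a per-step timeout can also be combined with the overall one (the values here are just illustrative):

steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--tag=gcr.io/$PROJECT_ID/linear', '.']
    timeout: '1200s' # this single step may take up to 20 minutes
images: ['gcr.io/$PROJECT_ID/linear']
timeout: '1500s' # the whole build may take up to 25 minutes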
