I'm on the third module of this AWS tutorial to build a React app with AWS Amplify and GraphQL, but the build keeps breaking. When I ran amplify push --y the CLI generated ./src/aws-exports.js and added that file to the .gitignore. So I'm not surprised the build is failing, since that file isn't included when I push my changes.
So I'm not sure what to do here. Considering it's automatically added to the .gitignore, I'm hesitant to remove it.
Any suggestions?
I'm assuming you are trying to build your app in a CI/CD environment?
If that's the case, then you need to build the backend part of your Amplify app before you can build the frontend component.
For example, my app builds from the AWS Amplify Console, and in my build settings I have:
version: 0.1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install --frozen-lockfile
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*
Note that the backend is building first with the amplifyPush --simple command. This is what generates the aws-exports.js file.
The 'aws-exports.js' file gets created automatically when AWS Amplify runs the CI/CD deployment build process, and it gets configured with the appropriate settings for the environment you are deploying to.
For this reason it is included in the .gitignore: you don't want your local test configuration to be used in your production deployment, for example.
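For context, the generated file is just a plain JavaScript module exporting environment-specific values. A rough sketch of its shape (which keys appear depends on the backend categories you use, and the values here are made up for illustration):

// Illustrative sketch of a generated ./src/aws-exports.js -- regenerated per
// environment on every backend build, which is why it is gitignored.
const awsmobile = {
  aws_project_region: 'us-east-1', // example region
  aws_appsync_graphqlEndpoint: 'https://example.appsync-api.us-east-1.amazonaws.com/graphql', // example endpoint
  aws_appsync_authenticationType: 'API_KEY',
};

export default awsmobile;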
As per Matthe's answer above, the file should be generated when the build script runs the 'amplifyPush' command. For some reason this is not working for me at the moment though!
AWS added support for automatically generating the aws-exports.js at build time to avoid this error: https://docs.aws.amazon.com/amplify/latest/userguide/amplify-config-autogeneration.html
Problem: A variety of tutorials from AWS for integrating automated testing with CI/CD integrate the testing stage after, or within, the build process by using a localhost:3000 server.
However, as AWS Amplify developers know, the local environment can offer a different user experience from the one you get after deploying to a development environment hosted with AWS Amplify's hosting service.
So how do I add testing not only at the build stage against localhost:3000, but also against the development environment's hosted URL?
I'm looking to build all my resources (backend and frontend) with a git push to the CodeCommit repository.
Idea of Stages:
Source
Build
Test
Deploy CloudFormation stacks
Test
Roll back if failure
Current amplify.yml:
Note: this currently does not work, so please do not copy it for your build. Please refer to the hyperlink above.
version: 0.2
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
test:
  phases:
    preTest:
      commands:
        - npm ci
        - npm install wait-on
        - npm install pm2
        - npm install mocha mochawesome mochawesome-merge mochawesome-report-generator
        - npx pm2 start npm -- dev
        - 'npx wait-on --timeout 300 http://localhost:3000'
    test:
      commands:
        - 'npx cypress run --reporter mochawesome --reporter-options "reportDir=cypress/report/mochawesome-report,overwrite=false,html=false,json=true,timestamp=mmddyyyy_HHMMss"'
    postTest:
      commands:
        - npx mochawesome-merge cypress/report/mochawesome-report/mochawesome*.json > cypress/report/mochawesome.json
        - npx pm2 kill
  artifacts:
    baseDirectory: cypress
    configFilePath: '**/mochawesome.json'
    files:
      - '**/*.png'
      - '**/*.mp4'
I have a project in a monorepo using pnpm workspaces, with turborepo managing the monorepo scripts.
All the individual projects work as expected; they are Next.js projects.
When I upgraded turborepo from version 1.2.6 to 1.4.0 and ran the build script on GitLab CI, the build task succeeded but the pipeline stayed stuck.
I run the script on the pipeline like this:
.gitlab-ci.yml
.build-dashboard:
  image: gitlab.****.it:4567/.../node-pnpm
  stage: build
  script:
    - pnpm build:dashboard
package.json
....
"scripts": {
  "build:dashboard": "turbo run build --filter=...@project/dashboard && exit 0"
}
...
I tried to force the exit using exit 0, but without success (with turborepo > 1.2.6).
Any suggestions on that?
Thanks
UPDATE
After many attempts I got an additional log message:
Attempting to remove file /builds/.../....-user-interface/node_modules/.cache/turbo/a9f0a39c2d3de111/apps/main/.next/standalone/node_modules/.pnpm/supports-color#7.2.0/node_modules/has-flag; a subdirectory is required
As discussed here, the problem was related to turborepo with next build in standalone mode.
Installing "turbo": "1.4.4-canary.0" solved the issue.
I have an issue when trying to deploy an application to GCP that uses a SQLite database on the backend. My problem is that on each deployment the database is wiped out, and I can't find a way to make it permanent.
So assume that database.sqlite is placed in api/db/database.sqlite. The initial deployment works fine: the database is created and placed in the /db folder. But when I deploy the app again, the folder is wiped out along with the database. I have also tried placing it in a folder outside of the api folder (e.g. /database in the root), but again the folder is wiped out.
I don't want to create/migrate the db on each build and pass it as an artifact to the deploy job.
# gitlab-ci.yaml
image: node:latest

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - yarn
    - yarn test:api
    - yarn test:ui
    - yarn build
    # this runs for the first time to create the db, then I remove it but the database is gone.
    # - yarn db:migrate --env production
  artifacts:
    paths:
      - api/dist/
      # this runs for the first time to upload the db
      # - api/db
      - ui/dist/
    expire_in: 1 day

deploy:
  only:
    - master
  stage: deploy
  image: google/cloud-sdk:alpine
  dependencies:
    - build
  script:
    - gcloud app deploy --project=PROJECT_ID ui-app.yaml api-app.yaml
# api-app.yaml
service: service-name
runtime: nodejs
env: flex
skip_files:
  - node_modules/
  - ui/
manual_scaling:
  instances: 1
resources:
  volumes:
    - name: ramdisk1
      volume_type: tmpfs
      size_gb: 1
  memory_gb: 6
  disk_size_gb: 10
Ideally I need a folder on the instance somewhere which will not be wiped out on each deployment. I am sure that I am missing something. Any ideas?
You simply can't! App Engine, flex or standard, is serverless and stateless. Your volume type is explicitly tmpfs: a "TeMPorary FileSystem".
App Engine flex instances are restarted at least once a week for patching and updates of the underlying server, and thus you will lose your database at least every week.
When you deploy a new version, a new instance is created from scratch, and thus your memory is empty when the instance starts.
Serverless products are stateless. If you want to persist your data, you need to store it outside of the serverless product: on a Compute Engine instance, for example, with a persistent disk, which is "persistent".
If you use an ORM, you can also easily switch from SQLite to MySQL or PostgreSQL, and thus leverage a Cloud SQL managed database.
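For instance, with an ORM like Sequelize the switch can be a single configuration branch; a minimal sketch, assuming a hypothetical DATABASE_URL environment variable that points at a Cloud SQL PostgreSQL instance:

// Minimal sketch: same ORM code, different dialect per environment.
// DATABASE_URL is a hypothetical env var, e.g. a Cloud SQL connection string.
const { Sequelize } = require('sequelize');

const sequelize = process.env.DATABASE_URL
  ? new Sequelize(process.env.DATABASE_URL) // e.g. postgres://user:pass@host/db in production
  : new Sequelize({ dialect: 'sqlite', storage: 'api/db/database.sqlite' }); // local development fallback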
I'm using CodePipeline to deploy whatever is on the master branch of the git repo to Elastic Beanstalk.
I followed this tutorial to extend the default nginx configuration (specifically the max-body-size): https://medium.com/swlh/using-ebextensions-to-extend-nginx-default-configuration-in-aws-elastic-beanstalk-189b844ab6ad
However, because I'm not using the standard eb deploy command, I don't think the CodePipeline flow is going into the .ebextensions directory and doing the things it's supposed to do.
Is there a way to use CodePipeline (so I can have CI/CD from master) while still getting the benefits of .ebextensions?
Does this work if you use the eb deploy command directly? If yes, then I would use the pipeline execution history to find a recent artifact, download it, and test with the eb deploy command.
If CodePipeline's Elastic Beanstalk Job Worker does not play well with ebextensions, I would consider it completely useless for deploying to Elastic Beanstalk.
I believe there is some problem with the ebextensions themselves. You can investigate the execution in these log files to see if something is going wrong during deployment:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
All the config files under .ebextensions will be executed in order of precedence when deploying to Elastic Beanstalk. So it doesn't matter whether you are using CodePipeline or eb deploy: all the files in the .ebextensions directory will be executed, and you don't have to worry about that.
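For what it's worth, that order is alphabetical by file name, so a common convention is to prefix the files with numbers; the file names below are made up for illustration:

.ebextensions/
  00-packages.config       # runs first (alphabetical order)
  10-node-settings.config
  20-nginx.config          # runs last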
Be careful about the platform you're using: on "64bit Amazon Linux 2 v5.0.2" you have to use .platform instead of .ebextensions.
Create a .platform directory instead of .ebextensions
Create the subfolders and the proxy.conf file along this path: .platform/nginx/conf.d/proxy.conf
In proxy.conf write what you need; for the request body size it's just client_max_body_size 20M; as sketched below.
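A minimal sketch of that proxy.conf, using just the directive from the step above (20M is the example value, adjust to your needs; nginx's default limit is 1M):

# .platform/nginx/conf.d/proxy.conf
client_max_body_size 20M;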
I resolved the problem. You need to include the .ebextensions folder in your deployment artifacts.
I was only copying the dist files, so I needed to include this too:
- .ebextensions/**/*
Example:
## Required mapping. Represents the buildspec version. We recommend that you use 0.2.
version: 0.2

phases:
  ## install: install dependencies you may need for your build
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - echo Installing Nest...
      - npm install -g @nestjs/cli
  ## pre_build: final commands to execute before build
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  ## build: actual build commands
  build:
    commands:
      # Build your app
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - npm run build
      ## Clean up node_modules to keep only production dependencies
      # - npm prune --production
  ## post_build: finishing touches
  post_build:
    commands:
      - echo Build completed on `date`

# Include only the files required for your application to run.
artifacts:
  files:
    - dist/**/*
    - package.json
    - node_modules/**/*
    - .ebextensions/**/*
And the config file /.ebextensions/.node-settings.config:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm run start:prod"
I am looking for an easy and clean way to publish artifacts built with GitLab CI to Artifactory.
I was able to spot https://gitlab.com/gitlab-org/omnibus/blob/af8af9552966348a15dc1bf488efb29a8ca27111/lib/omnibus/publishers/artifactory_publisher.rb but I wasn't able to find any documentation on how I am supposed to configure it to make it work.
Note: I am looking for a .gitlab-ci.yml approach, not implementing it externally.
At a basic level, this can be done with the JFrog CLI tools. Unless you want to embed configuration in your .gitlab-ci.yml (I don't), you will first need to run (on your runner):
jfrog rt c
This will prompt for your Artifactory URL and an API key by default. After entering these items, you'll find ~/.jfrog/jfrog-cli.conf containing JSON like so:
{
  "artifactory": {
    "url": "http://artifactory.localdomain:8081/artifactory/",
    "apiKey": "AKCp2V77EgrbwK8NB8z3LdvCkeBPq2axeF3MeVK1GFYhbeN5cfaWf8xJXLKkuqTCs5obpzxzu"
  }
}
You can copy this file to the GitLab runner's home directory - in my case, /home/gitlab-runner/.jfrog/jfrog-cli.conf
Once that is done, the runner will authenticate with Artifactory using that configuration. There are a bunch of other possibilities for authentication if you don't want to use API keys - check the JFrog CLI docs.
Before moving on, make sure the 'jfrog' executable is in a known location, with execute permissions for the gitlab-runner user. From here you can call the utility within your .gitlab-ci.yml - here is a minimal example for a Node.js app that will pass the Git tag as the artifact version:
stages:
  - build-package

build-package:
  stage: build-package
  script:
    - npm install
    - tar -czf test-project.tar.gz *
    - /usr/local/bin/jfrog rt u --build-name="Test Project" --build-number="${CI_BUILD_TAG}" test-project.tar.gz test-repo
If you're building with Maven, this is how I managed to do mine:
Note: you need to have your Artifactory credentials (user and pass) ready.
Create a master password and generate an encrypted password from it. The procedure on how to create a master password can be found here
In your pipeline settings in GitLab, create 2 secret variables, one for the username and the other for your encrypted password.
Update or create a settings.xml file in the .m2 directory for Maven builds. Your settings.xml should look like this:
<settings xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd"
          xmlns="http://maven.apache.org/SETTINGS/1.1.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <servers>
    <server>
      <id>central</id>
      <username>${env.ARTIFACTORY_USER}</username>
      <password>${env.ENCRYPTED_PASS}</password>
    </server>
  </servers>
</settings>
In your .gitlab-ci.yml file, you need to use this settings.xml like this:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile
and that's it. This should work. You can visit here for more about how to use Artifactory with Maven.
I know this doesn't exactly answer your question, but I got to this question from a related search, so I thought it might be relevant to others too:
I ended up using an mvn deploy job that was bound to the deploy stage in GitLab.
Here is the relevant job portion:
deploy:jdk8:
  stage: test
  script:
    - 'mvn $MAVEN_CLI_OPTS deploy site site:stage'
  only:
    - master
  # Archive up the built documentation site.
  artifacts:
    paths:
      - target/staging
  image: maven:3.3.9-jdk-8
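One note on this approach: for mvn deploy to know where to publish, the pom.xml also needs a distributionManagement section whose repository id matches a server id in settings.xml. A minimal sketch with placeholder URLs (libs-release-local and libs-snapshot-local are Artifactory's default repository names; yours may differ):

<distributionManagement>
  <repository>
    <id>central</id> <!-- must match the <server> id in settings.xml -->
    <url>http://artifactory.localdomain:8081/artifactory/libs-release-local</url>
  </repository>
  <snapshotRepository>
    <id>central</id>
    <url>http://artifactory.localdomain:8081/artifactory/libs-snapshot-local</url>
  </snapshotRepository>
</distributionManagement>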