How to utilize .ebextensions while using CodePipeline - nginx

I'm using CodePipeline to deploy whatever is on the master branch of the repository to Elastic Beanstalk.
I followed this tutorial to extend the default nginx configuration (specifically the max-body-size): https://medium.com/swlh/using-ebextensions-to-extend-nginx-default-configuration-in-aws-elastic-beanstalk-189b844ab6ad
However, because I'm not using the standard eb deploy command, I don't think the CodePipeline flow is going into the .ebextensions directory and doing the things it's supposed to do.
Is there a way to use CodePipeline (so I can have CI/CD from master) and still get the benefits of .ebextensions?

Does this work if you use the eb deploy command directly? If yes, then I would try using the pipeline execution history to find a recent artifact to download and test with the eb deploy command.
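If you only know the pipeline, a rough way to find the most recent execution (and from there, the artifact stored in the pipeline's S3 artifact bucket) is something like the following; the pipeline name is a placeholder:
# Hypothetical pipeline name; lists the latest executions so you can trace
# the revision/artifact to download and test locally with eb deploy.
aws codepipeline list-pipeline-executions --pipeline-name my-eb-pipeline --max-items 5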

If CodePipeline's Elastic Beanstalk job worker did not play well with ebextensions, it would be practically useless for deploying to Elastic Beanstalk.
I believe there is some problem with the ebextensions themselves. You can investigate the execution in these log files to see if something is going wrong during deployment:
/var/log/eb-activity.log
/var/log/eb-commandprocessor.log
/var/log/eb-version-deployment.log
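If SSH access to the instance is not convenient, the EB CLI can pull the full log bundle (which includes the files above), for example:
eb logs --all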

All the config files under .ebextensions are executed in order of precedence while deploying to Elastic Beanstalk, as long as they are part of the deployed application bundle. So it doesn't matter whether you use CodePipeline or eb deploy: every file in the .ebextensions directory will be executed, and you don't have to worry about that.
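For reference, the kind of .ebextensions config such tutorials use on the older Amazon Linux platforms is a small sketch like this (file name and path are illustrative, not taken from the linked article):
# .ebextensions/01-nginx-proxy.config
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;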

Be careful about the platform you're using: on newer platforms such as "64bit Amazon Linux 2 v5.0.2" you have to use .platform instead of .ebextensions for nginx overrides.
Create a .platform directory instead of .ebextensions.
Create the subfolders and the proxy.conf file at this path: .platform/nginx/conf.d/proxy.conf
In proxy.conf write what you need; for the request body size it is just client_max_body_size 20M;
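Put together, the override is a one-line file; on the Amazon Linux 2 platforms everything under .platform/nginx/conf.d/ is included into the http block of the platform's nginx configuration:
# .platform/nginx/conf.d/proxy.conf
client_max_body_size 20M;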

I resolved the problem. You need to include the .ebextensions folder in your deployment artifact.
I was only copying the dist files, so I also had to include:
- .ebextensions/**/*
Example buildspec:
## Required mapping. Represents the buildspec version. We recommend that you use 0.2.
version: 0.2
phases:
  ## install: install dependencies you may need for your build
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - echo Installing Nest...
      - npm install -g @nestjs/cli
  ## pre_build: final commands to execute before build
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  ## build: actual build commands
  build:
    commands:
      # Build your app
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - npm run build
      ## Clean up node_modules to keep only production dependencies
      # - npm prune --production
  ## post_build: finishing touches
  post_build:
    commands:
      - echo Build completed on `date`
# Include only the files required for your application to run.
artifacts:
  files:
    - dist/**/*
    - package.json
    - node_modules/**/*
    - .ebextensions/**/*
And the config file /.ebextensions/.node-settings.config:
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm run start:prod"

Related

Next failed to load SWC binary

When trying to run npm run dev in a Next.js project, it shows the error "failed to load SWC binary"; see more info here: https://nextjs.org/docs/messages/failed-loading-swc.
I've tried uninstalling Node and reinstalling version 16.13, and followed the suggestions on the Vercel page, but without success so far. Any tips?
Also, I noticed it's a current issue on the Next.js discussion page, and it has to do with the new Rust-based compiler, which is faster than Babel.
Delete the package-lock.json file and the node_modules directory in your project and then run npm install on your terminal.
This worked as suggested by the Next.js docs, but it takes away the Rust compiler and all its benefits... Here is what I did, for those who eventually get stuck...
Step 1. Add this setting to (or edit) next.config.js:
module.exports = {
  swcMinify: false // it should be false by default
}
Step 2. Add a ".babelrc" file to the project root directory.
Step 3. Add this snippet to the new ".babelrc" file:
{
"presets": ["next/babel"]
}
Step 4. Steps 1-3 remove the "SWC failed to load" error, but you will notice another error when you run the build command, so run this too:
npm install next@canary
Hope this helps.
If you use Docker, just add RUN npm install -D @swc/cli @swc/core to the Dockerfile.
I had the same issue and don't know why. I am using:
node v18.4.0
next@12.1.6
To fix this issue, just visit this page and install the latest supported Microsoft Visual C++ Redistributable:
https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170
I had the same issue on Windows 11. I upgraded Node.js to 17.0.1, and after that everything works now.
Create a .babelrc file in the root directory and add the following code:
{ "presets": ["next/babel"], "plugins": [["styled-components", { "ssr": true }]] }
This error occurs because Next.js uses a Rust-based compiler (SWC) to compile JavaScript, which is much faster than Babel but not compatible with every system architecture. To fix this, you have to disable that compiler and fall back to Babel, by creating a .babelrc file in your root directory and adding the code below to the file:
{"presets": ["next/babel"]}
you can check out this link for more details: SWC Failed to Load - NEXTJS DOCS
If you are running with Docker, I had to use node:14-buster-slim as my base image to make it work. I got the idea for my working solution from https://github.com/vercel/next.js/discussions/30468#discussioncomment-1598941.
My multi-staged Dockerfile looks like this:
############### Base Image ###############
FROM node:14-buster-slim AS base
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
############### Build Image ###############
FROM base AS build
ARG app_env=production
ARG app_port=3000
WORKDIR /build
COPY --from=base /app ./
ENV NODE_ENV=${app_env}
ENV PORT=${app_port}
EXPOSE ${app_port}
RUN npm run build
############### Deploy Image ###############
FROM node:14.18.1-alpine AS production
ARG app_env=production
ENV NODE_ENV=${app_env}
WORKDIR /app
COPY --from=build /build/package*.json ./
COPY --from=build /build/.next ./.next
COPY --from=build /build/public ./public
RUN npm install next
EXPOSE 3000
CMD npm run start
If you want to use docker-compose to run your services, the docker-compose.yaml file for running the next dev will look something like this:
version: "3"
services:
web-server:
env_file:
- ./.env
build:
context: .
dockerfile: Dockerfile
target: base
command: npm run dev
container_name: web-server
restart: always
volumes:
- ./:/app
- /app/node_modules
ports:
- "${NODEJS_PORT}:3000"
In your Next.js project you have a file named .eslintrc.json. In this file you have the following code:
{
"extends": "next/core-web-vitals"
}
Replace it with
{
"extends": ["next/babel","next/core-web-vitals"]
}
The best way to fix this problem on Windows is to install the "Microsoft Visual C++ Redistributable".
The error occurs because Next.js now uses the Rust-based SWC compiler to compile JavaScript/TypeScript, and SWC requires a downloaded binary that is specific to your system.
To solve this problem:
Go to the Microsoft Visual C++ Redistributable page and download the latest supported Microsoft Visual C++ Redistributable.
Or, you can simply download from here (please check your version first)
Permalink for latest supported x64 version
The X64 Redistributable package contains both ARM64 and X64 binaries. This package makes it easy to install required Visual C++ ARM64 binaries when the X64 Redistributable is installed on an ARM64 device.
Permalink for latest supported x86 version
Permalink for latest supported ARM64 version
Just run 'npm i' or 'yarn' and then restart the server.
Remove node_modules directory and package-lock.json
Run npm i to install the dependencies
If you are on macOS, you can directly run the command below in the terminal:
rm -rf node_modules && rm package-lock.json && npm i
In my case the issue was with Next.js version 12.2. I downgraded it to 12.1.6 and my issue was fixed.
I'm a beginner with Next.js and I had the same error. After searching, I found the solution of adding a .babelrc, but with that you don't get the features of SWC.
Today I found a real solution: create the new project from this example starter. When the project is created this way, SWC works and the error is gone.
Command: npx create-next-app your_project_name --use-npm --example "https://github.com/vercel/next-learn/tree/master/basics/learn-starter"
Let me know if you face any further issues.
Just download the Visual C++ 2015 Redistributable.
If you read the docs, it says you need the redistributable, which can be downloaded at https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-
The best solution for this problem is:
Delete your package-lock.json.
Downgrade your Next.js version from the current version to "12.1.6".
Run the npm i --force command.
Now run the npm run dev command and it will work, as shown in the condensed commands below.
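Condensed, the steps above are roughly (the version pin follows this answer; --force is only needed if npm complains about peer dependencies):
rm package-lock.json
npm install next@12.1.6 --force
npm run dev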
This happens because the npm modules (or yarn packages) have been removed from your project.
Just run npm install (or yarn) to reinstall the node packages and you will get it back.

Slow Next.js builds on Gitlab CI

We have a JS-based stack in our application: React, with the vast majority being a React Admin frontend built on a Next.js server, with Postgres, Prisma and Nexus on the backend. I realize it's not a great use case for Next.js (React Admin basically puts the entire application in a single "component" (root), so I have a giant index.tsx page instead of lots of smaller pages), but we've had quite terrible build times in GitLab CI and I'd like to know if there's anything I can do about it.
We utilize custom gitlab-runners deployed on the company Kubernetes cluster. Our build job essentially looks like:
- docker login
- CACHE_IMAGE_NAME="$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:latest"
- SHA_IMAGE_NAME="$CI_REGISTRY_IMAGE/$CI_COMMIT_SHORT_SHA"
- docker pull $CACHE_IMAGE_NAME || true
- docker build
  -t $CACHE_IMAGE_NAME
  -t $SHA_IMAGE_NAME
  --cache-from=$CACHE_IMAGE_NAME
  --build-arg BUILDKIT_INLINE_CACHE=1
- docker push # both tags
And the Dockerfile for that is
FROM node:14-alpine
WORKDIR /app
RUN chown -R node:node /app
USER node
COPY package.json yarn.lock ./
ENV NODE_ENV=production
RUN yarn install --frozen-lockfile --production
COPY . .
# Prisma client generate
RUN npm run generate
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
ARG NODE_OPTIONS=--max-old-space-size=4096
ENV NODE_OPTIONS $NODE_OPTIONS
EXPOSE 3000
CMD ["yarn", "start"]
This built image is then deployed with Helm into our K8s, with the premise that the initial build is slower but subsequent builds in the pipeline will be faster, as they can utilize the Docker cache. This works fine for npm install (the first run takes around 10 minutes, subsequent runs are cached), but next build is where hell breaks loose. The build times are around 10-20 minutes. I recently updated to Next.js 12.0.2, which ships with the new Rust-based SWC compiler that is supposed to be up to 5 times faster, and it's actually even slower (16 minutes).
I must be doing something wrong, but can anyone point me in some direction? Unfortunately, React Admin cannot be split across several Next.js pages AFAIK, and rewriting it to not use the framework is not an option either. I've tried running npm install and next build in the CI, copying the result into the image and storing it in the GitLab cache, but that seems to just shift the time spent from installing/building into copying the massive directories in and out of the cache and into the image. I'd like to try caching the .next directory between builds, maybe there is some kind of incremental build possible, but I'm skeptical to say the least.
Well, there are several things we can try to make it faster.
You're using Prisma, but you're generating the client every time any file changes, which prevents the Docker cache from reusing that layer. If we look at the Prisma documentation, the Prisma Client only needs to be regenerated when the Prisma schema changes, not when the TS/JS code changes.
I'll assume your Prisma schema lives under the prisma directory; feel free to adapt this to the reality of your project:
ENV NODE_ENV=production
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production
COPY prisma prisma
RUN npm run generate
You're using a huge image for your final container, which maybe doesn't have a significant impact on build time, but it definitely does on the final size and the time required to pull the image. I would recommend migrating to a multi-stage solution like the following:
ARG NODE_OPTIONS=--max-old-space-size=4096
FROM node:alpine AS dependencies
# WORKDIR is needed so the later COPY --from=dependencies /app/node_modules finds the modules
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production
COPY prisma prisma
RUN npm run generate
FROM node:alpine AS build
WORKDIR /app
COPY . .
COPY --from=dependencies /app/node_modules ./node_modules
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
FROM node:alpine
ARG NODE_OPTIONS
ENV NODE_ENV production
ENV NODE_OPTIONS $NODE_OPTIONS
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
WORKDIR /app
COPY --from=build /app/public ./public
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./package.json
COPY --chown=nextjs:nodejs --from=build /app/.next ./.next
USER 1001
EXPOSE 3000
CMD ["yarn", "start"]
From another point of view, you could probably also improve the Next.js build itself by changing some tools and modifying the Next.js configuration. Run the tool https://github.com/stephencookdev/speed-measure-webpack-plugin locally to analyze which part is the culprit of that humongous build time (probably something related to sass), and also take a look at the TerserPlugin and the IgnorePlugin.
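A minimal sketch of wiring that plugin into next.config.js for a local profiling run (this assumes a default webpack setup; the wrapped config may not be compatible with every Next.js version, so treat it as a temporary diagnostic rather than something to commit):
// next.config.js -- profiling sketch, not for production builds
const SpeedMeasurePlugin = require("speed-measure-webpack-plugin");
const smp = new SpeedMeasurePlugin();

module.exports = {
  webpack(config) {
    // Wrap the generated webpack config so each plugin/loader reports its timing.
    return smp.wrap(config);
  },
};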

Why is webpack encore required only in dev

I'm currently configuring some Docker images for a Symfony 5 project and trying to deal with the production build. Doing so, I noticed that Webpack Encore is installed only as a dev dependency, as advised in the official documentation: https://symfony.com/doc/current/frontend/encore/installation.html :
yarn add @symfony/webpack-encore --dev
However, this doesn't make sense to me, since even in production we are supposed to build the assets:
yarn encore production
Does anyone have clues about this?
Thank you
The Symfony docs on How Do I Deploy My Encore Assets? provide two important things to remember when deploying assets:
1) Compile Assets for Production:
$ ./node_modules/.bin/encore production
Now the important part:
But, what server should you run this command on? That depends on how you deploy. For example, you could execute this locally (or on a build server), and use rsync or something else to transfer the generated files to your production server. Or, you could put your files on your production server first (e.g. via git pull) and then run this command on production (ideally, before traffic hits your code). In this case, you’ll need to install Node.js on your production server.
And the second important thing:
2) Only Deploy the Built Assets
The only files that need to be deployed to your production servers are the final, built assets (e.g. the public/build directory). You do not need to install Node.js, deploy webpack.config.js, the node_modules directory or even your source asset files, unless you plan on running encore production on your production machine. Once your assets are built, these are the only thing that need to live on the production server.
Simply put, in the production environment you only need the generated assets (usually the /public/build directory content). In the simple scenario where you only need to load compiled JavaScript and CSS files, Webpack is not used at runtime.
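For the Docker setup from the question, this usually translates into a multi-stage build: a throwaway Node stage runs Encore, and only public/build is copied into the PHP image. A minimal sketch (image tags, paths and the PHP base image are assumptions, not taken from the Symfony docs):
# --- assets stage: build Encore output, then discard Node entirely ---
FROM node:16-alpine AS assets
WORKDIR /app
COPY package.json yarn.lock webpack.config.js ./
RUN yarn install --frozen-lockfile
COPY assets assets
RUN yarn encore production

# --- runtime stage: PHP only, no Node.js needed ---
FROM php:8.1-fpm-alpine
WORKDIR /srv/app
COPY . .
# Only the compiled assets are needed at runtime.
COPY --from=assets /app/public/build public/build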
A possible solution how to deploy a Symfony application & assets
When deploying a Symfony app manually (without CI/CD) the following steps can be performed on the local machine or in a Docker container (assumes Symfony 4/5):
Export the source code from GIT repository with git-archive, e.g.: git archive --prefix=myApp/ HEAD | tar -xC /tmp/¹
Go to exported source code: cd /tmp/myApp
Install Symfony & other PHP vendors (see also the Symfony docs): composer install --no-dev --optimize-autoloader
Install YARN/NPM vendors (they'll be required to generate assets with Webpack): yarn install
Create production assets: yarn build (or yarn encore production)
(Install Symfony assets if needed: bin/console assets:install)
Now the code is ready to rsync to the production server. You may exclude or delete the /node_modules, /var and even /assets directories and webpack.config.js (probably package.json & yarn.lock won't be needed either -- haven't tested it!) and run e.g.: rsync --archive --compress --delete . <myProductionServer>:<app/target/path/>
Resources on Symfony deployment:
How to Deploy a Symfony Application (Symfony docs)
How Do I Deploy My Encore Assets? (Symfony Frontend FAQ)
Do I Need to Install Node.js on My Production Server? (Symfony Frontend FAQ)
Production Build & Deployment (SymfonyCast)
¹ Untars the archived GIT repository on the fly into the /tmp/myApp directory instead of writing a TAR archive. Don't miss the trailing / in the --prefix flag! git-archive docs.

Cannot find file './aws-exports' in './src'

I'm on the third module of this AWS tutorial to build a React app with AWS, Amplify and GraphQL but the build keeps breaking. When I ran amplify push --y the CLI generated ./src/aws-exports.js and added the same file to the .gitignore. So I'm not surprised the build is failing, since that file isn't included when I push my changes.
So I'm not sure what to do here. Considering it's automatically added to the .gitignore I'm hesitant to remove it.
Any suggestions?
I'm assuming you are trying to build your app in a CI/CD environment?
If that's the case then you need to build the backend part of your amplify app before you can build the frontend component.
For example, my app is building from the AWS amplify console and in my build settings I have
version: 0.1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install --frozen-lockfile
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - "**/*"
  cache:
    paths:
      - node_modules/**/*
Note that the backend is building first with the amplifyPush --simple command. This is what generates the aws-exports.js file.
The 'aws-exports.js' file gets created automatically when AWS Amplify runs the CI/CD deployment build process and gets configured with the appropriate settings for the environment you are deploying to.
And for this reason it is included in the .gitignore. You don't want your local test configuration to be used in your production deployment for example.
As per Matthe's answer above, the file should be generated when the build script runs the 'amplifyPush' command. For some reason this is not working for me at the moment though!
AWS added support to automatically generate the aws-exports.js at build time to avoid getting the error: https://docs.aws.amazon.com/amplify/latest/userguide/amplify-config-autogeneration.html

How to solve the "Failed at the fibers@2.0.0 install script" error while deploying the meteor app?

I know how to package and then deploy a Meteor application, but recently on one project I'm stuck at an error which I couldn't resolve.
Steps I followed for package and deploy of my meteor app:
1. meteor build package
2. cd package
3. tar -xf inventoryTool.tar.gz
4. cd bundle/programs/server
5. npm install
6. cd ../..
7. PORT=<port> MONGO_URL=mongodb://127.0.0.1:27017/dbName ROOT_URL=http://<ip> node main.js
Here is the log for the error when I run the npm install (step 5) command.
Is there anything missing in my execution? I'm not using the fibers package anywhere in my project. Does anyone have a solution to this problem? Thanks in advance.
Why does this happen (a lot)?
Your local version of node is v8.9.4. When using the build command, you will export your application and build the code against this exact node version. Your server environment will require this exact version, too.
An excerpt from the custom deployment section of the guide:
Depending on the version of Meteor you are using, you should install
the proper version of node using the appropriate installation process
for your platform. To find out which version of node you should use,
run meteor node -v in the development environment, or check the
.node_version.txt file within the bundle generated by meteor build.
Even if you don't use fibers explicitly it will be required to run your Meteor app on the server correctly.
So what to do?
In order to solve this, you need to
a) ensure that your local version of node exactly matches the version on the server
b) ensure that you build against the server's architecture (see build command)
For a), to install this very specific node version on your server, you have two options:
Option I. Use n, as described here. However, this works only if your server environment uses node and not nodejs (which depends on how you installed nodejs on the server).
Option II. To install a specific nodejs version from the repositories, you may do the following:
$ cd /tmp
$ wget https://deb.nodesource.com/node_8.x/pool/main/n/nodejs/nodejs_8.9.4-1nodesource1_amd64.deb
$ apt install nodejs_8.9.4-1nodesource1_amd64.deb
If you are not sure which of the two is installed on your server, check node -v and nodejs -v; one of them will return a version. If your npm install still fails, check whether the error output involves node or nodejs and install the desired distribution using the options above.
For b), to build against the architecture of your server, you should use the --architecture flag in your build command.
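For example (the output directory is arbitrary):
meteor build ../package --architecture os.linux.x86_64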
