In my package.json, I have a build script set to build my next project as follows:
"build": "echo \"$NODE_ENV\" && NODE_ENV=$NODE_ENV next build && npm run build-server",
The output from the command that I get is:
> admin-ui@1.0.0 build /usr/src/admin-ui
> echo "$NODE_ENV" && NODE_ENV=$NODE_ENV next build && npm run build-server
development
Creating an optimized production build ...
The echo outputs development as the value for $NODE_ENV. However, I cannot get that passed to next. It just ignores it.
Using next.js version 9.0.5.
next build generates production builds.
next dev starts the application in development mode - https://nextjs.org/docs/api-reference/cli#development
"Build" is usually nomenclature for production builds, and "dev" for a development server.
When you run next build, it assumes it is preparing for production and sets NODE_ENV accordingly, ignoring the value you pass in.
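Since next build always forces NODE_ENV to "production", a common workaround is to pass your own variable instead of NODE_ENV. A minimal sketch (APP_ENV is a hypothetical name; adapt it to your setup):

```javascript
// next.config.js — sketch: expose a custom variable instead of NODE_ENV,
// because `next build` always sets NODE_ENV to "production".
module.exports = {
  env: {
    // APP_ENV is a made-up name; any name other than NODE_ENV works
    APP_ENV: process.env.APP_ENV || 'production',
  },
};
```

You would then run APP_ENV=development next build and read process.env.APP_ENV in your application code.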
I have been doing Firebase functions development for a while now and was used to being able to switch the target project from the command line via firebase use test, firebase use staging, etc.
In my npm package scripts in the project I also invoke the firebase tools to do things like generating a .runtimeconfig.json via:
"scripts": {
"genruntime" : "firebase functions:config:get > .runtimeconfig.json",
}
In general, this worked fine - I'd change the target project from the command line, and the same target project would be used when I ran firebase commands by npm scripts.
Over the last few days, though, I've found that the target projects sometimes weren't synced: I'd set the target environment to test but then find that the npm command was getting functions from the staging environment.
My firebase tools are the same version (globally and in the project), and it's almost as if the target project is not being shared between the global firebase tools and the npm ones.
Has anyone else seen this issue?
I can change my npm scripts to be explicit about the project being used but this has worked in the past and I'm curious about what might have happened.
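One way to rule out the shared-state issue in the meantime is to pin the project explicitly via the CLI's --project flag (my-test-project below is a placeholder for your project alias or ID):

```json
{
  "scripts": {
    "genruntime": "firebase functions:config:get --project my-test-project > .runtimeconfig.json"
  }
}
```

This bypasses whatever firebase use has stored, at the cost of one script per environment.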
I have a monorepo project using pnpm workspaces and Turborepo to manage the monorepo scripts.
All the individual projects work as expected; they are Next.js projects.
When I upgraded Turborepo from version 1.2.6 to 1.4.0 and ran the build script on GitLab CI, the build task succeeded, but the pipeline stayed stuck.
I run the script this way in the pipeline:
.gitlab-ci.yml
.build-dashboard:
image: gitlab.****.it:4567/.../node-pnpm
stage: build
script:
- pnpm build:dashboard
package.json
....
"scripts": {
"build:dashboard": "turbo run build --filter=...@project/dashboard && exit 0"
}
...
I tried to force the exit using exit 0, but without success (with Turborepo > 1.2.6).
Any suggestions on that?
Thanks
UPDATE
After many attempts I got an additional log message:
Attempting to remove file /builds/.../....-user-interface/node_modules/.cache/turbo/a9f0a39c2d3de111/apps/main/.next/standalone/node_modules/.pnpm/supports-color@7.2.0/node_modules/has-flag; a subdirectory is required
As discussed here, the problem was related to Turborepo with next build in standalone mode.
Installing "turbo": "1.4.4-canary.0" solved the issue.
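For reference, the fix amounts to pinning the canary release in the workspace root package.json (version as reported above):

```json
{
  "devDependencies": {
    "turbo": "1.4.4-canary.0"
  }
}
```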
I am trying to run API Manager 4.0.0 from source code; I downloaded product-apim and carbon-apimgt from GitHub. How can I debug the source code in IDEA or Eclipse?
First of all, you have to build the product. Follow these steps in order to build the product locally.
Make sure you have Java and Maven installed on your machine.
Download or clone carbon-apimgt repository from
https://github.com/wso2/carbon-apimgt.
Go to the carbon-apimgt directory and run mvn clean install in the terminal. (You can skip the unit tests by running mvn clean install -Dmaven.test.skip=true.)
Copy the build version to the clipboard (ex:- 9.12.3-SNAPSHOT)
Download or clone product-apim from https://github.com/wso2/product-apim
Replace the value of carbon.apimgt.version in pom.xml file with the value you copied. (ex:- <carbon.apimgt.version>9.12.3-SNAPSHOT</carbon.apimgt.version>)
Go to the product-apim directory and run mvn clean install in the terminal. (You can skip the integration tests by running mvn clean install -Dmaven.test.skip=true, which will save you time.)
The built pack can be found in product-apim/modules/distribution/product/target directory.
After building the pack, extract the content of the zip file and run sh bin/api-manager.sh --debug 5005.
I recommend JetBrains IntelliJ IDEA to debug the code easily. Open the carbon-apimgt project in IDEA, then Add Configuration > Add New... > Remote JVM Debug > OK. After adding the configuration, you can click the debug button and start debugging.
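For context, the --debug 5005 flag makes the server JVM listen for a debugger; conceptually it corresponds to standard JDWP agent options along these lines (the exact string WSO2 passes may differ):

```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
```

The port here (5005) must match the port configured in your Remote JVM Debug run configuration.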
We have a JS-based stack in our application - React, with the vast majority being a React Admin frontend, built on a Next.js server, with Postgres, Prisma and Nexus on the backend. I realize it's not a great use case for Next.js (React Admin basically puts the entire application in a single "component" (root), so I have one giant index.tsx page instead of lots of smaller pages), but we've had quite terrible build times in GitLab CI and I'd like to know if there's anything I can do about it.
We utilize custom gitlab-runners deployed on the company Kubernetes cluster. Our build job essentially looks like:
- docker login
- CACHE_IMAGE_NAME="$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:latest"
- SHA_IMAGE_NAME="$CI_REGISTRY_IMAGE/$CI_COMMIT_SHORT_SHA"
- docker pull $CACHE_IMAGE_NAME || true
- docker build
-t $CACHE_IMAGE_NAME
-t $SHA_IMAGE_NAME
--cache-from=$CACHE_IMAGE_NAME
--build-arg BUILDKIT_INLINE_CACHE=1
.
- docker push # both tags
And the Dockerfile for that is
FROM node:14-alpine
WORKDIR /app
RUN chown -R node:node /app
USER node
COPY package.json yarn.lock ./
ENV NODE_ENV=production
RUN yarn install --frozen-lockfile --production
COPY . .
# Prisma client generate
RUN npm run generate
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
ARG NODE_OPTIONS=--max-old-space-size=4096
ENV NODE_OPTIONS $NODE_OPTIONS
EXPOSE 3000
CMD ["yarn", "start"]
This built image is then deployed with Helm into our K8s with the premise that initial build is slower, but subsequent builds in the pipeline will be faster as they can utilize docker cache. This works fine for npm install (first run takes around 10 minutes to install, subsequent are cached), but next build is where hell breaks loose. The build times are around 10-20 minutes. I recently updated to Next.js 12.0.2 which ships with new Rust-based SWC compiler which is supposed to be up to 5 times faster, and it's actually even slower (16 minutes).
I must be doing something wrong, but can anyone point me in some direction? Unfortunately, React Admin cannot be split across several Next.js pages AFAIK, and rewriting it to not use the framework is not an option either. I've tried doing npm install and next build in the CI and copy that into the image, and store in the Gitlab cache, but that seems to just shift the time spent from installing/building into copying the massive directories in/out cache and into the image. I'd like to try caching the .next directory in between builds, maybe there is some kind of incremental build possible but I'm skeptical to say the least.
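On the idea of caching .next between builds: Next.js recommends persisting the .next/cache directory across CI runs for incremental compilation. A minimal GitLab CI sketch (assuming the install/build steps run directly in the job rather than inside docker build):

```yaml
# .gitlab-ci.yml — sketch: persist the Next.js compiler cache between pipelines
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .next/cache/
```

Since the build currently happens inside docker build, this only helps if the yarn install / next build steps move out of the Dockerfile, or if the cache is mounted via BuildKit cache mounts.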
Well, there are different things we can try to make it faster.
You're using Prisma, but you're generating the client every time any file is modified, preventing the Docker cache from actually reusing that layer. If we look at the Prisma documentation, we only need to regenerate the Prisma Client when the Prisma schema changes, not when the TS/JS code does.
I will suppose your Prisma schema lives under the prisma directory, but feel free to adapt my assumptions to the reality of your project:
ENV NODE_ENV=production
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production
COPY prisma prisma
RUN npm run generate
You're using a huge image for your final container, which maybe doesn't have a significant impact on the build time, but it definitely has one on the final size and the time required to load/download the image. I would recommend migrating to a multi-stage solution like the following one:
ARG NODE_OPTIONS=--max-old-space-size=4096
FROM node:alpine AS dependencies
# WORKDIR is needed here so the later COPY --from=dependencies /app/node_modules finds the files
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production
COPY prisma prisma
RUN npm run generate
FROM node:alpine AS build
WORKDIR /app
COPY . .
COPY --from=dependencies /app/node_modules ./node_modules
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
FROM node:alpine
ARG NODE_OPTIONS
ENV NODE_ENV production
ENV NODE_OPTIONS $NODE_OPTIONS
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
WORKDIR /app
COPY --from=build /app/public ./public
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./package.json
COPY --chown=nextjs:nodejs --from=build /app/.next ./.next
USER 1001
EXPOSE 3000
CMD ["yarn", "start"]
From another point of view, you could probably also improve the Next.js build by changing some tools and modifying the Next.js configuration. You could use https://github.com/stephencookdev/speed-measure-webpack-plugin locally to analyze what the culprit of that humongous build time is (probably something related to sass), and also take a look at the TerserPlugin and the IgnorePlugin.
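If it helps, wiring speed-measure-webpack-plugin into a Next.js build could look roughly like this (a sketch; the webpack override hook is standard Next.js config, and the plugin must be installed as a dev dependency):

```javascript
// next.config.js — sketch: wrap Next.js's webpack config to measure
// how long each plugin and loader takes during the build
const SpeedMeasurePlugin = require('speed-measure-webpack-plugin');
const smp = new SpeedMeasurePlugin();

module.exports = {
  webpack: (config) => smp.wrap(config),
};
```

Run next build afterwards and the per-plugin/per-loader timing breakdown is printed to the console.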
In the following project
https://github.com/spark-jobserver/spark-jobserver,
we recently upgraded from 0.13.12 to 1.1.6.
In 0.13.12 we used to run sbt "testOnly *JobServerSpec" and the test cases for that specific class would run, but with 1.1.6 this is not the case anymore.
To reproduce the issue,
NOTE: If you get a Not in Gzip format exception, then in the project root, execute rm -rf **/target
clone the project, currently it is using 1.1.6.
git checkout d7e231ea4ee9981e49b411d09f132e396c901b98 will switch to a point before 1.1.6 was introduced
Execute sbt "testOnly *JobServerSpec", tests should be running
git checkout master to switch back to version with sbt 1.1.6
Execute sbt "testOnly *JobServerSpec"
It should not run any tests.