5 Steps to Ship Your NestJS + Prisma App in Docker

Photo by Ian Taylor on Unsplash

The right way so far

This article shows how to dockerize a NestJS + Prisma application. We go beyond the basics, following Dockerfile best practices and Snyk's recommendations.

Our final Dockerfile looks like this:

FROM node:18 as build
WORKDIR /usr/src/app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npx prisma generate
RUN npm run build

FROM node:18-slim
RUN apt update && apt install libssl-dev dumb-init -y --no-install-recommends
WORKDIR /usr/src/app
COPY --chown=node:node --from=build /usr/src/app/dist ./dist
COPY --chown=node:node --from=build /usr/src/app/.env .env
COPY --chown=node:node --from=build /usr/src/app/package.json .
COPY --chown=node:node --from=build /usr/src/app/package-lock.json .
RUN npm install --omit=dev
COPY --chown=node:node --from=build /usr/src/app/node_modules/.prisma/client  ./node_modules/.prisma/client

ENV NODE_ENV production
EXPOSE 3000
CMD ["dumb-init", "node", "dist/src/main"]

Now, let's start from the basics and improve our Dockerfile.

1 Basic Dockerfile

The simplest Dockerfile we can start with is the following:

FROM node                           # Use the basic node image
WORKDIR /usr/src/app                # Set the working dir inside the image
COPY . .                            # Copy our project files into the image
RUN npm install                     # Install project dependencies
RUN npx prisma generate             # Generate the Prisma client files
RUN npm run build                   # Build our NestJS app
EXPOSE 3000                         # Expose our app port for incoming requests
CMD ["npm", "run", "start:prod"]    # Run our app

This Dockerfile generates a valid image without problems. It will use the latest node image (Node 18 at the time of writing) and run our app.
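With this Dockerfile in place, building and running the image looks something like the following (this needs a running Docker daemon, and the my-app tag is just an illustrative choice):

```shell
# Build the image from the project root, where the Dockerfile lives
docker build -t my-app .

# Run it, mapping the exposed port 3000 to the host
docker run -p 3000:3000 my-app
```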

2 Adding .dockerignore to our project

We want to be sure we aren't leaking any sensitive files into our Docker image, particularly during the COPY . . command. If we are running this from a CI pipeline that has just cloned the repository, hopefully we have been careful and no credentials have leaked into git, so there are no consequences for our Docker image.
But the install step could run more complex workflows and generate sensitive files, or we may be building the image locally to share or test it. It's best to be sure we have a .dockerignore file preventing that.
It could look something like this:

.dockerignore   # The ignore file itself
node_modules    # Local node_modules folder
npm-debug.log   # Debug files
Dockerfile      # The Dockerfile
.git            # The git history
.gitignore      # Git's own ignore file
.npmrc          # If accessing a private npm registry, the auth token lives here, so ignore it to prevent leaking
.env-*          # Any other environment files that we don't want to include
.gitlab-*       # Deploying with GitLab?
.github         # Using GitHub Actions?
*.md            # Any markdown docs (README, etc.)

3 Creating a multi-stage image

Continuing to optimize our image, we can benefit from a multi-stage Docker build, reducing the image size (which costs us money in some environments like AWS ECR, and also saves bandwidth and time during deployment). We split our steps into a build stage and a final deploy image.

We install our packages and build the application in one stage. Then we use another image, copy the generated build files over, and install only the production dependencies.
For the build stage, we can continue with the base node image used so far. Ideally, we want the same version of tooling in all stages: the same Node version, underlying OS, and packages. We could go into full detail with a specific image tag like 14.21.2-buster, or use no tag at all, which defaults to latest, as we did in the first Dockerfile presented.

I would recommend at least specifying the major Node version you are using. That gives us an image we know is largely compatible with our local environment, while still pulling the most up-to-date official image, reducing exposure to the vulnerabilities and bugs that are constantly being found.
With that, we can change the first stage of our Dockerfile:

FROM node:18 as build       # Name this stage so later steps can reference it

WORKDIR /usr/src/app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npx prisma generate
RUN npm run build

For the runtime stage, to reduce our final image size, we want a smaller base image like slim or alpine. (Although alpine is even smaller, it uses musl instead of glibc, so some tools might not work as expected; watch out.) What can happen, and does in our case with Prisma, is that this smaller slim image doesn't include some libraries or tools our app needs. Here we need to add libssl-dev.
We should also set the NODE_ENV variable to production, so that different modules behave accordingly, cutting down on debug output and logging overhead.

FROM node:18-slim                                                                           # Smaller base node image
RUN apt update && apt install libssl-dev -y --no-install-recommends                         # Add the missing dependency needed by Prisma
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/dist ./dist                                                  # Copy the dist folder generated in the build stage
COPY --from=build /usr/src/app/.env .env                                                    # Copy env variables to use
COPY --from=build /usr/src/app/package.json .
COPY --from=build /usr/src/app/package-lock.json .
RUN npm install --omit=dev                                                                  # Install without dev dependencies to save some space
COPY --from=build /usr/src/app/node_modules/.prisma/client  ./node_modules/.prisma/client   # Copy the generated Prisma client from the build stage
ENV NODE_ENV production
EXPOSE 3000
CMD ["npm", "run", "start:prod"]

With these two stages, we save around 300 MB just from the difference in base images.
Quick note: on older npm versions, use npm install --production instead of --omit=dev.
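As a sketch of what NODE_ENV=production changes in practice: many Node libraries (and our own code) branch on it to disable verbose logging and debug helpers. The logLevel variable below is just an illustrative name, not part of the article's app:

```javascript
// Branch on NODE_ENV the way many Node libraries do internally:
// production builds skip debug output and cache more aggressively.
const isProduction = process.env.NODE_ENV === 'production';
const logLevel = isProduction ? 'warn' : 'debug';

console.log(`NODE_ENV=${process.env.NODE_ENV}, log level: ${logLevel}`);
```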

4 Better App Start

There are some caveats to running our app directly through npm. First, npm doesn't forward signals to the spawned process. Second, inside the container our process is assigned PID 1, which the kernel treats specially: it gets no default signal handlers. Both can break the ability to gracefully shut down our app and cause hard-to-debug problems. See the references at the end for more info.
So let's change our CMD instruction to:

CMD ["dumb-init", "node", "dist/src/main"]
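To see why signal forwarding matters, here is a minimal sketch (plain Node, not the article's app) of a graceful shutdown handler. When the process is started via npm run, this handler may never fire on docker stop, because npm doesn't forward SIGTERM; running node directly under dumb-init makes sure it does:

```javascript
// Register a SIGTERM handler so the app can close connections
// before exiting. Docker sends SIGTERM to PID 1 on `docker stop`.
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  process.exit(0);
});

// Simulate Docker stopping the container after a short delay.
setTimeout(() => process.kill(process.pid, 'SIGTERM'), 100);
```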

5 Security

We never want to run our application with root privileges. The official node images do ship with a low-privilege node user, but the build runs as root by default, so all the files we copied are owned by root.

In some cloud environments like AWS or Azure this may have little to no consequence, but it's better not to take the risk, so we hand the files over to the node user by adding the --chown=node:node argument to every COPY command.
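If we also want the process itself to run as that user, not just own its files, a small optional hardening sketch is to switch users at the end of the runtime stage (this is an addition of mine, not part of the final Dockerfile above):

```docker
# Optional hardening: drop to the low-privilege user shipped
# with the official node images before starting the app.
USER node
CMD ["dumb-init", "node", "dist/src/main"]
```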

Putting it all together, we get the Dockerfile introduced at the beginning.

FROM node:18 as build
WORKDIR /usr/src/app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npx prisma generate
RUN npm run build

FROM node:18-slim
RUN apt update && apt install libssl-dev dumb-init -y --no-install-recommends
WORKDIR /usr/src/app
COPY --chown=node:node --from=build /usr/src/app/dist ./dist
COPY --chown=node:node --from=build /usr/src/app/.env .env
COPY --chown=node:node --from=build /usr/src/app/package.json .
COPY --chown=node:node --from=build /usr/src/app/package-lock.json .
RUN npm install --omit=dev
COPY --chown=node:node --from=build /usr/src/app/node_modules/.prisma/client  ./node_modules/.prisma/client

ENV NODE_ENV production
EXPOSE 3000
CMD ["dumb-init", "node", "dist/src/main"]

If you want to go more in-depth, I recommend the following articles: