Exploring and setting up NestJS

I recently decided to do some more exploration of NestJS, as it's a framework I've been interested in for a few years, and have even dabbled in a bit. I used it for a small project back in 2021, building some services to test out how well we could implement a microservice architecture to replace the aging backend of our codebase at Introdus.

I recall having loads of trouble understanding the modules and the way things were loaded, so I have some confusing memories on the matter. Even though I "grew up" on OOP and at the time had most of my programming experience from Ruby on Rails, a heavily OOP-based framework, I still felt tripped up by how everything was wired together.
This was before I became proficient with Angular, which has a similar module approach (NestJS claims to be heavily inspired by Angular).

Despite this, I didn’t really get NestJS.

I mean I used it a little bit, but I didn’t really get it, and didn’t build anything particularly powerful with it.
As such I was curious to explore it again. I have become a much better developer since last working with it, and we have recently been introducing OOP and SOLID principles at work. That makes NestJS an obvious source of inspiration for patterns, and a chance to see whether it would be worth replacing (parts of) our codebase with it.

In this post I just wanted to explore my initial experience with starting a new NestJS project.

Some of the things I wanted to explore specifically were:

  • Setting up the project with a Postgres database, locally as well as in production
  • Dockerizing the application
  • Figuring out how difficult it is to host a NestJS application
  • Setting up the most basic requirements for a healthy codebase: health checks, environment variables, error reporting, SSO

So first things first, I simply ran nest new my-application and followed the prompts, and we are off to the races.

Setting up Postgres

After the initial setup and configuring a few minor things, such as bubbling errors up from the database level to the API, I wanted to set up a database.
It's been a few years since I last worked with Postgres, and I vaguely remember having a lot of issues with it. Probably because I've spent the past 4 years of my career working with NoSQL, which is a bit more relaxed about how you use it. But we'll circle back to that.

But hey, like I said, I have become a much better developer since, so surely something as simple as a database won’t trip me up, right?

First things first, I needed to set up Postgres locally, which seems easy enough on the surface but in my experience always has all sorts of caveats.
For example, when creating the database, it automatically creates a user named postgres, but when you attempt to connect to Postgres from an application or with the CLI (psql), it often assumes you want to use your system username.
For most people, their system user is not called postgres, so that's immediately going to be a bit of a pain. A small pain, granted. It just means I always have to provide the username explicitly.

Just a minor caveat, but one I could do without 👌

After battling my own system for a while and figuring out the best way to install Postgres, I actually landed on DbNgin, which just made installing, creating and managing a Postgres database incredibly easy. Especially once paired with TablePlus.
The initial application setup was easy enough and well documented on NestJS' documentation website.
Synchronize is a convenient option that NestJS exposes (it comes from TypeORM): it keeps the database schema in sync with how you define your entities, completely skipping any migrations.
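
As a quick illustration (this entity is made up, not from my actual project), a *.entity.ts file like this is all the schema definition you need while synchronize is on:

import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';

// Illustrative entity: with synchronize enabled, TypeORM creates and
// updates the matching "user" table directly from this class definition.
@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ unique: true })
  email: string;
}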

However, as they point out in the documentation, you should not use synchronize: true in production. So we already know we're going to need a different configuration for the production environment to make this step work.
At the core of it, to make this work in production we need a schema that the database can use. However, NestJS doesn't seem to work with schemas in a way similar to, say, Ruby on Rails. Instead the schema is always a reflection of the most recently run migration, without a finalized schema file anywhere that you can see.
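
To make that concrete, the TypeOrmModule wiring from the NestJS docs looks roughly like this. This is a sketch, not a copy of my project, and the environment variable names simply mirror the data source file shown further down:

import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    ConfigModule.forRoot({ isGlobal: true }),
    TypeOrmModule.forRootAsync({
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        type: 'postgres',
        host: config.get('DB_HOST', 'localhost'),
        port: config.get<number>('DB_PORT', 5432),
        username: config.get('DB_USERNAME', 'postgres'),
        password: config.get('DB_PASSWORD', ''),
        database: config.get('DB_DATABASE_NAME', 'testdb'),
        autoLoadEntities: true,
        // Convenient locally, but must be off in production.
        synchronize: process.env.NODE_ENV !== 'production',
      }),
    }),
  ],
})
export class AppModule {}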

So let’s start with the migrations.

Migrations making life difficult

This is where the first sign of trouble showed up. NestJS recommends using TypeORM, which, as the name suggests, is a typed ORM that works well with TypeScript and, by extension, NestJS.

However, how to set up migrations is completely omitted from the documentation; instead you are referred to the TypeORM documentation, which seems strangely unaware that people may want to use the library in a NestJS application.
Which is a little strange, seeing as NestJS pushes it as (one of?) its recommended libraries for database interactions.

This is where things get a bit tricky, because I started seeing disconnects in how the two libraries talk about configuration.
TypeORM and its CLI pretty much expect a "DataSource" to exist in a single file, and NestJS does mention a data source, but almost in passing, as if it isn't a big deal.

Turns out, it's a very big deal.

Even on the specific TypeORM recipe page in the NestJS docs, this pattern of using a DataSource is not mentioned.

But for TypeORM migrations to run, you have to point them to a datasource file, so this needs to exist.

Just in case you are struggling with this (and for my future self!) here’s how I ended up defining that datasource, in its own file:

import { join } from 'path';
import { ConfigService } from '@nestjs/config';
import { DataSource } from 'typeorm';

const configService = new ConfigService();
const isProduction = process.env.NODE_ENV === 'production';

export const AppDataSource = new DataSource({
    type: 'postgres',
    // In production everything comes from a single DATABASE_URL;
    // locally the connection is assembled from individual variables.
    ...(isProduction
        ? {
            url: configService.get<string>('DATABASE_URL'),
            synchronize: false,
        }
        : {
            host: configService.get<string>('DB_HOST', 'localhost'),
            port: configService.get<number>('DB_PORT', 5432),
            username: configService.get<string>('DB_USERNAME', 'postgres'),
            password: configService.get<string>('DB_PASSWORD', ''),
            database: configService.get<string>('DB_DATABASE_NAME', 'testdb'),
            synchronize: configService.get<boolean>('DB_SYNCHRONIZE', true),
        }),
    entities: [join(__dirname, '../../**/*.entity{.ts,.js}')],
    migrations: [join(__dirname, './migrations/*{.ts,.js}')],
    migrationsTableName: 'migrations_history',
    logging: configService.get<boolean>('DB_LOGGING', true),
    ssl: configService.get<boolean>('DB_SSL', false)
        ? { rejectUnauthorized: false }
        : false,
});

This DataSource was then imported into a database provider:

import { Injectable } from '@nestjs/common';
import { AppDataSource } from './data-source';

@Injectable()
export class DatabaseProvider {
  // Hands out the single shared DataSource instance defined above.
  createDataSource() {
    return AppDataSource;
  }
}

From there I could follow the regular NestJS pattern of importing it into a database.module.ts and using it in the application as I would expect.
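
For completeness, here's a minimal sketch of what that database.module.ts can look like. The 'DATA_SOURCE' token and the './database.provider' path are placeholders, so name them whatever fits your project:

import { Module } from '@nestjs/common';
import { DatabaseProvider } from './database.provider';

@Module({
  providers: [
    DatabaseProvider,
    {
      // Placeholder injection token; consumers inject the DataSource with it.
      provide: 'DATA_SOURCE',
      inject: [DatabaseProvider],
      // Initialize the DataSource once when the module is bootstrapped.
      useFactory: async (databaseProvider: DatabaseProvider) => {
        const dataSource = databaseProvider.createDataSource();
        return dataSource.isInitialized ? dataSource : dataSource.initialize();
      },
    },
  ],
  exports: ['DATA_SOURCE'],
})
export class DatabaseModule {}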

This may look simple, but it took me a while, so definitely storing this here for my future reference 👏

After spending hours fiddling with the recommended steps, I finally figured out how to get the migrations set up.
First of all, you CANNOT have them outside the ./src/ folder, as I was initially attempting. Maybe I'm stupid or missed a detail in the documentation, but I would have preferred a separate ./migrations/ folder so I could potentially keep them out of the production artifact after the build.

I suspect that NestJS is intentionally keeping this part of the documentation vague, so that their hosting service "MAU" seems like a better option.

… Or again, maybe I’m just stupid or missing something 🤷‍♂️.

For this project, the MAU setup seemed a little excessive. It doesn't say exactly how it intends to help you, just that it spins up a setup on AWS, which can quickly become complex. I don't want to save myself a few hours of debugging now, only to consistently struggle with AWS or, worse yet, have a massive bill rack up because I'm not directly in charge of the services it spins up.

After all, all of this is simply compiling JavaScript and running it on a server, plus running some migrations and a database connection. I was adamant I could fix this myself. So I decided to spend the time debugging now and figure out how to set up a brand new NestJS project properly. At the end of the day, that's a valuable lesson I can re-use again and again.

So, after much frustration, I finally realised I had to keep the migrations within the scope of the ./src folder. NestJS doesn't seem to do a great job of reading files outside this folder. Maybe this is mentioned somewhere, but I seem to have missed it.

Anyway, I settled on keeping my migrations in ./src/config/database/migrations, along with the other database-related files, and that seems to work well enough.
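
For reference, a generated migration in that folder looks roughly like this (the table and column names here are made up for illustration):

import { MigrationInterface, QueryRunner } from 'typeorm';

// Example of what TypeORM generates: a timestamped class with an
// up() to apply the change and a down() to revert it.
export class CreateUsers1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `CREATE TABLE "users" ("id" SERIAL PRIMARY KEY, "email" character varying NOT NULL)`,
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP TABLE "users"`);
  }
}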

I’m not sure if I regret the decision to work this out on my own, but it definitely took me a few hours to get everything working. Locally, that is.

Running the CLI to migrate the schema in production

With that in place and being able to build the schema locally (I even dropped the database and tried with a new computer to verify!), I still had one environment left to conquer: Production.

In the meantime I had chosen to set up the application on fly.io, a hosting provider I have been curious to try out, not least because they produce some very interesting YouTube videos.

Between the TypeORM documentation, some ChatGPT and general AI input, and a couple of outdated Stack Overflow posts, I had plenty of approaches to choose from, but wasn't really sure which to go with.

I thought I should stick with the approach recommended by TypeORM, as I figured they made the library that consumes the migrations, so, yeah. I tried that for quite a while, but could not make it work in production, running into all sorts of issues.
Finally it turned out that the solution proposed in an outdated Stack Overflow post was the best one for my case: essentially TypeORM's recommended approach, with a caveat. Simply put, set up the scripts as TypeORM recommends, and run them through npx.

I should note that at this point I was also causing some unnecessary confusion for myself by using bun as the runner for the project, but not as the package manager; for that I used yarn. So I might have been better off just using yarn for everything.

I ended up with four migration scripts (plus a typeorm helper) in my package.json that I could use (and will continue to use) moving forward:

{
    ...
    "typeorm": "typeorm-ts-node-commonjs",
    "migration:generate": "npx typeorm -- migration:generate -d ./src/config/database/data-source.ts",
    "migration:run": "npx typeorm -- migration:run -d ./src/config/database/data-source.ts",
    "migration:run:prod": "npx typeorm migration:run -d dist/config/database/data-source.js",
    "migration:revert": "npx typeorm -- migration:revert -d ./src/config/database/data-source.ts",
    ...
}

Seems simple, and almost like it shouldn't matter, but running through npx was the trick when nothing else worked.

If you pay attention you will notice I had to add an extra script just to run this in production, simply because I could not get things to work without it.
What seemed to happen was that, by the time the migrations finally ran in the build pipeline, the CLI claimed it could not find the data source file. I discovered this happened because at that point in the pipeline the code had already been compiled to JavaScript, so there was no TS file left to point at, hence the production script pointing at dist/ instead.

Now, since I had been working in parallel on getting the application hosted on fly.io, using their Dockerfile (they containerize things for you!), this could simply be a matter of which step of the CI pipeline I chose to execute the migration at.
Perhaps instead of running it from the fly.toml configuration file, using their release_command hook (more on this in the next post), I could have included the step in the Dockerfile.
That way I might have been able to run the migration after the codebase was compiled but before the TS source was thrown out. It is certainly possible.

I think at this stage I was just frustrated and chose to go with the easy option, even though I was reluctant to put in the extra script. It simply shouldn’t be necessary.

So what did I learn?

So! That wraps up the configuration, and with the fly.toml and a dedicated script for running the migrations in the production build pipeline, things work!
I could finally put migrations behind me and consider the database working.

In the next article I will go over dockerizing and hosting the application using fly.io. Sign up for my newsletter below 👇 to be among the first to know when I release the next installment of this NestJS exploration!