SST Guidelines using AWS

This guide explains how to deploy a NestJS backend and a React frontend on AWS using SST.

In the examples, the backend is configured with Prisma ORM and PostgreSQL as the database.
The frontend uses Vite.
Bitbucket is used as the code management platform.

If you’re using other tools, you can still follow this guide, adapting the process to your project’s stack.

Requirements

  • AWS Account: You must have access to an AWS account, preferably with Admin permissions.

  • Docker Desktop: You must have Docker Desktop installed.

  • IAM User: An IAM user should be configured with the necessary permissions to create access keys (the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY used later in this guide).

  • Windows: If using Windows, you need to install WSL. Throughout the entire process, commands must be run within the WSL environment.

  • AWS CLI: Install the AWS CLI and configure it with the IAM user’s credentials (see the example after this list).

  • Domain and SSL Certificate: Verify with the client and the assigned Project Manager the domain to be used. If applicable, manage it through Space, purchase a domain for the project in Route 53, and add the domain and any required subdomains. Example:

    *.projectname.com
    test.api.projectname.com
    test.admin.projectname.com
    test.app.projectname.com

    If the client already owns a domain, it’s important to verify where it is registered. If the domain is managed in AWS Route 53, it is recommended to continue the process directly in Route 53.

    If the domain is not hosted in Route 53, it is recommended to configure its DNS management in Route 53 to have centralized control from AWS. To do this, you will need to ask the client to update their current domain provider with the nameservers provided by Route 53.

    Note: Hosting the domain in Route 53 is recommended but not mandatory. AWS allows you to generate SSL certificates for domains managed externally. In that case, you only need to create an “A” record in the current DNS provider pointing to the Load Balancer.

    Make sure to have an approved SSL certificate in AWS Certificate Manager associated with the declared domains.

    💡 TIP: If the domain is not yet confirmed, this step can be skipped and configured later. Simply do not include the domain and HTTPS configurations in the sst.config.ts file.

  • Bitbucket Deployment Variables: If you are using Bitbucket Pipelines, you need to configure the repository’s deployment variables. Go to Deployments => Config => Variables in Bitbucket and set up the following variables with the secret checkbox enabled:

    Frontend & Backend:

    AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY
    AWS_REGION
    AWS_ACCOUNT_ID (used by the backend pipeline to build the ECR image URI)

    Backend (If you have a domain):

    API_DOMAIN: e.g. test.api.projectname.com
    API_DOMAIN_CERT_ARN: ARN of the certificate in AWS Certificate Manager

    Frontend (If you have a domain):

    VITE_DOMAIN: e.g. test.app.projectname.com
    VITE_DOMAIN_CERT_ARN: ARN of the certificate in AWS Certificate Manager
    VITE_API_BASE_URL: URL of the backend API, read at build time in sst.config.ts (required even without a custom domain)

    Configure these variables in each environment you have in the Deployments section of Bitbucket.
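
As noted in the AWS CLI requirement above, here is a minimal sketch of configuring and verifying the CLI with the IAM user’s credentials:

 # Prompts for the access key, secret access key, and default region
 aws configure

 # Verify that the credentials work and point to the expected account
 aws sts get-caller-identity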

Recommendations for the process

✅ Always use the same package manager the project uses (npm/pnpm/yarn) to run the necessary commands, to avoid inconsistencies.

✅ When starting work on the backend or frontend project, delete the node_modules and dist folders, then run npm/pnpm/yarn install followed by npm/pnpm/yarn run build (see the sketch below).

✅ It is recommended to perform this clean build using the latest Node LTS version supported by the project.
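
A minimal sketch of that clean build, assuming pnpm (substitute npm or yarn to match the project):

 # Remove previous build artifacts and dependencies
 rm -rf node_modules dist
 # Reinstall and rebuild from scratch
 pnpm install
 pnpm run build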

💡 TIP: SST console: SST offers an interface to view the resources; this step is optional.

🚨🚨🚨 WARNING: In this process, DON’T manually delete resources from the AWS console to avoid losing the SST state.

Init SST

Start by opening the project to be deployed, creating a new branch, and navigating to the project’s root directory.

  • INIT SST: Run the command to initialize SST:

    npx sst@latest init

    At this point, some SST folders and files will be autogenerated in the project:

    .sst <= SST dependencies, automatically added to .gitignore
    sst-env.d.ts <= Interface that describes the environment variables
    sst.config.ts <= This is where the deploy script will be placed
  • UPDATE tsconfig.json: For the backend, it is necessary to modify the tsconfig.json so that the build works by adding:

    "include": ["src/**/*", "test/**/*", "sst-env.d.ts"]
  • SCRIPTS: Add these scripts to the package.json; they will be used to run the deploy for each environment (stage):

    "scripts": {
      "deploy:production": "sst deploy --stage production",
      "deploy:staging": "sst deploy --stage staging"
    }
  • Health endpoint: Include a GET health endpoint on the backend side; it will be used by the load balancer health checks (see the controller sketch after this list).
    @Get('/health')
    health() {
      return { status: 'ok' };
    }
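
A minimal sketch of wiring that endpoint up, assuming a dedicated HealthController registered in the root module (names are illustrative):

import { Controller, Get, Module } from '@nestjs/common';

@Controller()
export class HealthController {
  // Responds to GET /health, matching the load balancer health check path
  @Get('/health')
  health() {
    return { status: 'ok' };
  }
}

@Module({
  controllers: [HealthController],
})
export class AppModule {}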
    

NestJS Backend sst.config.ts

This file is automatically generated when running the sst init command. An example of how to complete its content is included.

To deploy the backend, in these guidelines we will use the following SST resources:

/// <reference path="./.sst/platform/config.d.ts" />


export enum StageEnum {
  PROD = 'production',
  TEST = 'staging',
}

enum NodeEnvEnum {
  DEV = 'DEV',
  TEST = 'TEST',
  PROD = 'PROD',
}

export default $config({
  app(input) {
    const stage = input?.stage as StageEnum;
    return {
      name: 'spotlink-backend',
      removal: stage === StageEnum.PROD ? 'retain' : 'remove',
      protect: ['production'].includes(input?.stage),
      home: 'aws',
    };
  },
  async run() {
    const environment = $app.stage;
    const projectName = 'spotlink';
    const isFromLocalMachine =
      process.env.NODE_ENV === NodeEnvEnum.DEV ||
      process.env.NODE_ENV === NodeEnvEnum.TEST ||
      process.env.NODE_ENV === NodeEnvEnum.PROD;
    // VPC
    const vpcName = `vpc`;
    const vpc = new sst.aws.Vpc(vpcName, { bastion: true });

    // RDS
    const rdsName = `rds`;
    const rds = new sst.aws.Postgres(rdsName, {
      vpc,
    });
    const DATABASE_URL = rds.username.apply(username =>
      rds.password.apply(password =>
        rds.host.apply(host =>
          rds.port.apply(port =>
            rds.database.apply(database => 
              `postgresql://${username}:${encodeURIComponent(password).replace(/!/g, '%21')}@${host}:${port}/${database}`
            )
          )
        )
      )
    );
    
    // SECRET MANAGER
    const secretManagerName = `${projectName}--${environment}--secrets-manager`;
    const secret = new aws.secretsmanager.Secret(secretManagerName);

    // For practicality, run this block only during the first deploy from your
    // local machine, with the updated variables in your local .env file. For
    // future deploys from the Bitbucket pipeline, manually configure the
    // required secrets in AWS Secrets Manager for each environment.
    if (isFromLocalMachine) {
      
      // SECRET VERSION
      const secretVersionName = `${projectName}--${environment}--secret-version`;
      const secretVersion = new aws.secretsmanager.SecretVersion(secretVersionName, {
        secretId: secret.id,
        secretString: DATABASE_URL.apply(databaseUrl => JSON.stringify({
          DATABASE_URL: databaseUrl,
          SENDGRID_API_KEY: process.env.SENDGRID_API_KEY,
          NODE_ENV: process.env.NODE_ENV,
          EMAIL_FROM: process.env.EMAIL_FROM,
          PORT: process.env.PORT,
          JWT_SECRET: process.env.JWT_SECRET,
          JWT_EXPIRES_IN: process.env.JWT_EXPIRES_IN,
          JWT_SECRET_ADMIN: process.env.JWT_SECRET_ADMIN,
          JWT_EXPIRES_IN_ADMIN: process.env.JWT_EXPIRES_IN_ADMIN,
          JWT_RESET_PASSWORD_SECRET: process.env.JWT_RESET_PASSWORD_SECRET,
          JWT_RESET_PASSWORD_EXPIRES_IN: process.env.JWT_RESET_PASSWORD_EXPIRES_IN,
          AWS_REGION: process.env.AWS_REGION,
          SWAGGER_USER: process.env.SWAGGER_USER,
          SWAGGER_PASSWORD: process.env.SWAGGER_PASSWORD,
          FRONTEND_URL: process.env.FRONTEND_URL,
          BACKOFFICE_FRONTEND_URL: process.env.BACKOFFICE_FRONTEND_URL,
        })),
      });
    }

    // ECS
    const clusterName = `ecs-cluster`;
    const cluster = new sst.aws.Cluster(clusterName, { vpc });

    const serviceName = `${projectName}-backend-${environment}-ecs-service`;
    const imageUri = process.env.IMAGE_URI; // Get the image uri generated in Bitbucket Pipelines
    const service = new sst.aws.Service(serviceName, {
      ...(isFromLocalMachine ? {} : { image: imageUri }),
      cluster,
      link: [rds, secret], // Associate secret manager and rds 
      permissions: [
        {
          effect: 'allow',
          actions: [
            'secretsmanager:GetSecretValue',
            'secretsmanager:CreateSecret',
            'secretsmanager:DescribeSecret',
          ],
          resources: ['*'],
        },
      ],
      ssm: {
        DATABASE_URL: secret.arn.apply(arn => `${arn}:DATABASE_URL::`),
        SENDGRID_API_KEY: secret.arn.apply(arn => `${arn}:SENDGRID_API_KEY::`),
        NODE_ENV: secret.arn.apply(arn => `${arn}:NODE_ENV::`),
        EMAIL_FROM: secret.arn.apply(arn => `${arn}:EMAIL_FROM::`),
        PORT: secret.arn.apply(arn => `${arn}:PORT::`),
        JWT_SECRET: secret.arn.apply(arn => `${arn}:JWT_SECRET::`),
        JWT_EXPIRES_IN: secret.arn.apply(arn => `${arn}:JWT_EXPIRES_IN::`),
        JWT_SECRET_ADMIN: secret.arn.apply(arn => `${arn}:JWT_SECRET_ADMIN::`),
        JWT_EXPIRES_IN_ADMIN: secret.arn.apply(arn => `${arn}:JWT_EXPIRES_IN_ADMIN::`),
        JWT_RESET_PASSWORD_SECRET: secret.arn.apply(arn => `${arn}:JWT_RESET_PASSWORD_SECRET::`),
        JWT_RESET_PASSWORD_EXPIRES_IN: secret.arn.apply(arn => `${arn}:JWT_RESET_PASSWORD_EXPIRES_IN::`),
        AWS_REGION: secret.arn.apply(arn => `${arn}:AWS_REGION::`),
        SWAGGER_USER: secret.arn.apply(arn => `${arn}:SWAGGER_USER::`),
        SWAGGER_PASSWORD: secret.arn.apply(arn => `${arn}:SWAGGER_PASSWORD::`),
        FRONTEND_URL: secret.arn.apply(arn => `${arn}:FRONTEND_URL::`),
        BACKOFFICE_FRONTEND_URL: secret.arn.apply(arn => `${arn}:BACKOFFICE_FRONTEND_URL::`),
        FIRST_SUPER_ADMIN_PASSWORD: secret.arn.apply(arn => `${arn}:FIRST_SUPER_ADMIN_PASSWORD::`),
        FIRST_SUPER_ADMIN_EMAIL: secret.arn.apply(arn => `${arn}:FIRST_SUPER_ADMIN_EMAIL::`),
      },
      loadBalancer: {
        domain: {
          name: process.env.API_DOMAIN!,
          cert: process.env.API_DOMAIN_CERT_ARN!
        },
        rules: [
          { listen: "80/http", redirect: "443/https" },
          { listen: "443/https", forward: "5000/http" } // If your backend is running on a port other than 5000, modify this line with your backend port.
        ],
        health: {
          '5000/http': { // If your backend is running on a port other than 5000, modify this line with your backend port.
            path: '/health',
            interval: '60 seconds',
            timeout: '5 seconds',
          },
        },
      },
    });
    service.url.apply(data => console.log(`SERVICE_URL: ${data}`));
  },
});

The SERVICE_URL output gives you the URL of the deployed backend.
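
Inside the container, the values declared under ssm are injected as plain environment variables, so the NestJS app reads them as usual. A minimal sketch (the variable names match the sst.config.ts above):

// Anywhere in the backend, e.g. when configuring Prisma or JWT
const databaseUrl = process.env.DATABASE_URL; // injected from Secrets Manager
const jwtSecret = process.env.JWT_SECRET;

// Fail fast if the secrets were not configured for this environment
if (!databaseUrl || !jwtSecret) {
  throw new Error('Missing required environment variables');
}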

React Frontend sst.config.ts

This file is automatically generated when running the sst init command. An example of how to complete its content is included.

To deploy the React frontend, in these guidelines we will use the following SST resources:

/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    return {
      name: "spotlink-frontend",
      removal: input?.stage === "production" ? "retain" : "remove",
      protect: ["production"].includes(input?.stage),
      home: "aws",
    };
  },
  async run() {
    const staticName = 'static-site';
    const staticSite = new sst.aws.StaticSite(staticName, {
      domain: {
        name: process.env.VITE_DOMAIN!,
        cert: process.env.VITE_DOMAIN_CERT_ARN!,
      },
      environment: {
        VITE_API_BASE_URL: process.env.VITE_API_BASE_URL!
      },
      build: {
        command: "pnpm run build",
        output: "dist"
      },
    });
    staticSite.url.apply(url => console.log('STATIC_SITE_URL', `${url}`));
  },
});

The STATIC_SITE_URL output gives you the URL of the frontend on CloudFront.
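
Note that VITE_API_BASE_URL is resolved at build time by Vite, so the React code reads it through import.meta.env. A minimal sketch (getHealth is a hypothetical helper against the backend’s /health endpoint):

// The value is inlined into the bundle during `vite build`
const apiBaseUrl = import.meta.env.VITE_API_BASE_URL;

export async function getHealth() {
  const response = await fetch(`${apiBaseUrl}/health`);
  return response.json();
}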

Configure Bitbucket Pipelines

It is necessary to configure Bitbucket Pipelines in order to perform the deploy, with the AWS deployment variables configured in Bitbucket as detailed previously.

Add the bitbucket-pipelines.yml file in the root directory of the project.

Below is a sample file as a guide, but it may vary depending on the project:

Frontend Bitbucket Pipelines with pnpm

image: node:22.14.0

definitions:
  caches:
    pnpm: $BITBUCKET_CLONE_DIR/.pnpm-store

pipelines:
  branches:
    master:
      - step:
          name: SST Deploy to AWS Production
          size: 2x
          caches:
            - node
            - pnpm
          deployment: Production
          script:
            - echo "Deploying to stage production"
            - corepack enable
            - corepack prepare pnpm@latest-10 --activate
            - pnpm install --frozen-lockfile
            - pnpm run deploy:production
          services:
            - docker

    staging:
      - step:
          name: SST Deploy to AWS Staging
          size: 2x
          caches:
            - node
            - pnpm
          deployment: Staging
          script:
            - echo "Deploying to stage staging"
            - corepack enable
            - corepack prepare pnpm@latest-10 --activate
            - pnpm install --frozen-lockfile
            - pnpm run deploy:staging
          services:
            - docker

  pull-requests:
    '**':
      - step:
          name: Build and test
          script:
            - npm install --global corepack@latest
            - corepack enable
            - corepack prepare pnpm@latest-10 --activate
            - pnpm install
            - pnpm run build
          caches:
            - pnpm

Backend Bitbucket Pipelines with npm

image: node:22.15.0

pipelines:
  branches:
    master:
      - step:
          image: node:alpine
          name: Backend build/publish docker to ECR
          size: 2x
          caches:
            - node
          deployment: Production
          services:
            - docker
          script:
            # Creating environment variables
            - export ECR_NAME="sst-asset"
            - export IMAGE_TAG="$BITBUCKET_BUILD_NUMBER"
            - export IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${ECR_NAME}:${IMAGE_TAG}"
            # Install envsubst
            - apk update && apk add gettext
            # Set the name of the docker image we will be building.
            - export IMAGE_NAME="${ECR_NAME}"
            # Build the docker image
            - docker build -t "${IMAGE_NAME}" .
            # Save to env
            - echo "IMAGE_URI=$IMAGE_URI" > .env
            - echo "API_DOMAIN=$API_DOMAIN" >> .env
            - echo "API_DOMAIN_CERT_ARN=$API_DOMAIN_CERT_ARN" >> .env
            # Push to ECR
            - pipe: atlassian/aws-ecr-push-image:1.5.0
              variables:
                IMAGE_NAME: $IMAGE_NAME
                TAGS: $BITBUCKET_BUILD_NUMBER
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_REGION
          artifacts:
            - .env 

      - step:
          name: Deploy SST Production
          image: node:22.15.0
          size: 2x
          caches:
            - node
          services:
            - docker
          script:
            - echo "Deploying to stage production"
            # Restore env variables (set -a exports them to the npm child process)
            - set -a && source .env && set +a
            # Install dependencies and execute deploy
            - npm ci
            - echo "Deploying image:" $IMAGE_URI
            - npm run deploy:production

    staging:
      - step:
          image: node:alpine
          name: Backend build/publish docker to ECR
          size: 2x
          caches:
            - node
          deployment: Staging
          services:
            - docker
          script:
            # Creating environment variables
            - export ECR_NAME="sst-asset"
            - export IMAGE_TAG="$BITBUCKET_BUILD_NUMBER"
            - export IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${ECR_NAME}:${IMAGE_TAG}"
            # Install envsubst
            - apk update && apk add gettext
            # Set the name of the docker image we will be building.
            - export IMAGE_NAME="${ECR_NAME}"
            # Build the docker image
            - docker build -t "${IMAGE_NAME}" .
            # Save to env
            - echo "IMAGE_URI=$IMAGE_URI" > .env
            - echo "API_DOMAIN=$API_DOMAIN" >> .env
            - echo "API_DOMAIN_CERT_ARN=$API_DOMAIN_CERT_ARN" >> .env
            # Push to ECR
            - pipe: atlassian/aws-ecr-push-image:1.5.0
              variables:
                IMAGE_NAME: $IMAGE_NAME
                TAGS: $BITBUCKET_BUILD_NUMBER
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_REGION
          artifacts:
            - .env 

      - step:
          name: Deploy SST Staging
          image: node:22.15.0
          size: 2x
          caches:
            - node
          services:
            - docker
          script:
            - echo "Deploying to stage staging"
            # Restore env variables (set -a exports them to the npm child process)
            - set -a && source .env && set +a
            # Install dependencies and execute deploy
            - npm ci
            - npm run deploy:staging


  pull-requests:
    '**':
      - step:
          name: Build and test
          script:
            - npm ci
            - npm run lint
            - npm run build
          caches:
            - node


Configure Backend Dockerfile

It is necessary to add a Dockerfile in the root of the backend project. Below is a sample Dockerfile as a guide, but it may vary depending on the project:

FROM node:22-alpine

# Install OpenSSL and dependencies
RUN apk update && apk add --no-cache openssl

# Install dumb-init (must run as root)
RUN apk add dumb-init
# Use the node user from the image (instead of the root user)
USER node
# Create app directory
WORKDIR /home/node

# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY --chown=node:node package*.json ./
# Install app dependencies using the `npm ci` command instead of `npm install`
RUN npm ci
# Bundle app source
COPY --chown=node:node . .

# We need this because SST generates type definitions that are being used by the app
COPY sst-env.d.ts . 

# Run the build command which creates the production bundle
RUN npx prisma generate

RUN npm run build

RUN chmod +x ./docker-script.sh

# Start the server using the production build
CMD ./docker-script.sh
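
It is also worth adding a .dockerignore next to the Dockerfile so local artifacts are not copied into the image by COPY . . (a minimal sketch; adjust to your project):

 node_modules
 dist
 .git
 .env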

docker-script.sh using Prisma

#!/bin/sh
set -e
npx prisma migrate deploy
exec dumb-init node ./dist/main.js

SST Deploy

When everything is ready to deploy, first run the command for staging. Once the whole process is complete in that environment, move on to production.

One step at a time.

npm run deploy:staging

npm run deploy:production
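
After a successful deploy, a quick way to verify the backend is to hit the health endpoint declared earlier (the domain shown is the example from the requirements):

 curl https://test.api.projectname.com/health

The expected response is {"status":"ok"}.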

SST Commands - Just in case

If at any point in the process you need to delete everything created in that project, you can run this command in the root of the project specifying the environment; it will delete absolutely everything generated by SST in that project and environment.

 npx sst remove --stage staging

If you need to sync the AWS resources with your local machine due to some error, there is a command that performs synchronization between both environments so that SST knows which AWS resources exist.

 npx sst refresh --stage staging
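
Since the VPC in this guide is created with bastion: true, SST can also tunnel from your local machine to the private resources (for example, to inspect the RDS database). This relies on the sst tunnel command available in recent SST versions; check the SST docs for its one-time install step:

 npx sst tunnel --stage staging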

SST Support

If at any point in the process you need support from SST, you can contact them through this Discord link.

Finally

  • Check in the AWS account that the required services are running correctly.
  • Share the necessary access credentials with the team.
  • Test the setup by making a change in the repo and deploying through the Bitbucket pipelines.
  • Enjoy the new environments!😎