Neon for Database-per-user
How to configure Neon for multi-tenancy - plus a few design tips
With its serverless and API-first nature, Neon is an excellent choice for building database-per-user applications (or apps where each user/customer has their own Postgres database). Neon is particularly well-suited for architectures that prioritize maximum database isolation, achieving the equivalent of instance-level isolation.
This guide will help you get started with implementing this architecture.
Multi-tenant architectures in Postgres
In a multi-tenant architecture, a single system supports multiple users (tenants), each with access to manage their own data. In a database like Postgres, this setup requires careful structuring to keep each tenant’s data private, secure, and isolated—all while remaining efficient to manage and scale.
Following these principles, there are three primary routes you could follow to implement multi-tenant architectures in Postgres:
- Creating one separate database per user (the focus of this guide);
- Creating one schema-per-user, within the same database;
- And keeping your tenants separate within a shared schema.
To better situate our use case, let’s briefly outline the differences between these architectures:
Database-per-user
In a database-per-user design, each user’s data is fully isolated in its own database, eliminating any risk of data overlap. This setup is straightforward to design and highly secure. However, implementing this in managed Postgres databases has traditionally been challenging. For users of AWS RDS, Amazon Aurora, or similar services, two primary options have existed for achieving a database-per-user design:
- Using one large instance to host multiple user databases. This option can be tempting due to the reduced number of instances to manage and (probably) lower infrastructure costs. But the trade-off is a higher demand for DBA expertise—this is a design that requires careful planning, especially at scale. Hosting all users on shared resources can impact performance, particularly if users have varying workload patterns, and if the instance fails, all customers are affected. Migrations and upgrades also become complex.
- Handling multiple instances, each hosting a single production database. In this scenario, each instance scales independently, preventing resource competition between users and minimizing the risk of widespread failures. This is a much simpler design from the perspective of the database layer, but managing hundreds of instances in AWS can get very costly and complex. As the number of instances grows into the thousands, management becomes nearly impossible.
As we’ll see later throughout this guide, Neon offers a third alternative by providing a logical equivalent to the instance-per-customer model with near-infinite scalability, without the heavy DevOps overhead. This solution involves creating one Neon project per customer.
Schema-per-user
But before focusing on database-per-user, let’s briefly cover another multi-tenancy approach in Postgres: the schema-per-user model. Instead of isolating data by database, this design places all users in a single database, with a unique schema for each.
In Neon, we generally don’t recommend this approach for SaaS applications, unless it’s a design you’re already experienced with. It doesn’t reduce operational complexity or costs compared to the many-databases approach, but it does introduce additional risks; it also limits the potential of Neon features like instant Point-in-Time Recovery (PITR), which in a project-per-customer model allows you to restore customer databases independently without impacting the entire fleet’s operations. More about this later.
Shared schema
Lastly, Postgres’s robustness actually makes it possible to ensure tenant isolation within a shared schema. In this model, all users' data resides within the same tables, with isolation enforced through foreign keys and row-level security.
While this is a common choice—and can be a good starting point if you’re just beginning to build your app—we still recommend the project-per-user route if possible. Over time, as your app scales, meeting requirements within a shared schema setup becomes increasingly challenging. Enforcing compliance and managing access restrictions at the schema level grows more complex as you add more users.
You’ll also need to manage very large Postgres tables, as all customer data is stored in the same tables. As these tables grow, additional Postgres fine-tuning will be required to maintain performance.
Setting up Neon for Database-per-user
Now that we’ve reviewed your options, let’s focus on the design choice we recommend for multi-tenancy in Neon: creating isolated databases for each user, with each database hosted on its own project.
Database-per-user = Project-per-user
We recommend setting up one project per user, rather than, for example, using a branch per customer. A Neon project serves as the logical equivalent of an "instance" but without the management overhead. Here’s why we suggest this design:
- **Straightforward scalability**: Instead of learning how to handle large Postgres databases, this model allows you to simply create a new project when a user joins—something that can be handled automatically via the Neon API. This approach is very cost-effective, as we’ll see below. Databases remain small, keeping management at the database level simple.
- **Better performance with lower costs**: This design is also highly efficient in terms of compute usage. Each project has its own dedicated compute, which scales up and down independently per customer; a spike in usage for one tenant doesn’t affect others, and inactive projects remain practically free.
- **Complete data isolation**: By creating a dedicated project for each customer, their data remains completely separate from others, ensuring the highest level of security and privacy.
- **Easier regional compliance**: Each Neon project can be deployed in a specific region, making it easy to host customer data closer to their location.
- **Per-customer PITR**: Setting up a project per customer allows you to run PITR on individual customers instantly, without risking disruption to your entire fleet.
Managing many projects
As you scale, following a project-per-user design means eventually managing thousands of Neon projects. This might sound overwhelming, but it’s much simpler in practice than it seems—some Neon users manage hundreds of thousands of projects with just one engineer. Here’s why that’s possible:
- **You can manage everything with the Neon API**: The API allows you to automate every step of project management, including setting resource limits per customer and configuring resources.
- **No infrastructure provisioning**: New Neon projects are ready in milliseconds. You can set things up to create new projects instantly when new customers join, without the need to manually pre-provision instances.
- **You only pay for active projects**: Empty projects are virtually free thanks to Neon’s scale-to-zero feature. If, on a given day, you have a few hundred projects that were only active for a few minutes, that’s fine—your bill won’t suffer.
- **Subscription plans**: To support this usage pattern, our pricing plans include a generous number of projects within the monthly subscription fee, allowing you to scale without a big budget:
  - The Launch plan includes 100 projects for $19/month.
  - The Scale plan includes 1,000 projects for $69/month.
  - The Business plan includes 5,000 projects for $700/month.
Dev/test environments
In Neon, database branching is a powerful feature that enables you to create fast, isolated copies of your data for development and testing. You can use child branches as ephemeral environments that mirror your main testing database but operate independently, without adding to storage costs. This feature is a game-changer for dev/test workflows, as it reduces the complexity of managing multiple test databases while lowering non-prod costs significantly.
To handle dev/test in a project-per-user design, consider creating a dedicated Neon project as your non-prod environment. This Neon project can serve as a substitute for the numerous non-prod instances you might maintain in RDS/Aurora.
The methodology:
- Within the non-prod project, load your testing data into the main branch. This main branch will serve as the primary source for all dev/test environments.
- Create ephemeral environments via child branches. For each ephemeral environment, create a child branch from the main branch. These branches are fully isolated in terms of resources and come with an up-to-date copy of your testing dataset.
- Automate the process. Use CI/CD and automations to streamline your workflow. You can reset child branches with one click to keep them in sync with the main branch as needed, maintaining data consistency across your dev/test environments.
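As a sketch of the branch-creation step above: spinning up an ephemeral environment can be automated against the Neon API's branch-creation endpoint. Treat this as an illustrative sketch rather than production code—the project ID and branch name are placeholders, and error handling is omitted:

```javascript
// Sketch: create an ephemeral child branch (with its own compute endpoint)
// in a non-prod Neon project via the public v2 API. Values are placeholders.

function buildBranchRequest(projectId, branchName) {
  return {
    url: `https://console.neon.tech/api/v2/projects/${projectId}/branches`,
    body: {
      // Child of the project's default branch unless a parent_id is given.
      branch: { name: branchName },
      // Give the branch its own read-write compute.
      endpoints: [{ type: "read_write" }],
    },
  };
}

async function createEphemeralBranch(projectId, branchName) {
  const { url, body } = buildBranchRequest(projectId, branchName);
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NEON_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  return res.json();
}

// Only hit the API when a key is configured (e.g. in CI):
if (process.env.NEON_API_KEY) {
  createEphemeralBranch("my-nonprod-project-id", "pr-42").then((out) => {
    console.log(out.branch?.id);
  });
}
```

Deleting the branch when the pull request closes is the mirror-image call (`DELETE .../branches/{branch_id}`), which keeps ephemeral environments from accumulating.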
Designing a Control Plane
Once you have everything set up, as your number of projects grows, you might want to create a control plane to stay on top of everything in a centralized manner.
The catalog database
The catalog database is a centralized repository that tracks and manages all Neon projects and databases. It holds records for every Neon project your system creates. You can also use it to keep track of tenant-specific configurations, such as database names, regions, schema versions, and so on.
You can set up your catalog database as a separate Neon project. When it's time to design its schema, consider these tips:
- Use foreign keys to link tables like `project` and `payment` to `customer`.
- Choose data types carefully: `citext` for case-insensitive text, `uuid` for unique identifiers to obscure sequence data, and `timestamptz` for tracking real-world time.
- Track key operational data, like `schema_version`, in the `project` table.
- Index wisely! While the catalog will likely remain smaller than user databases, it will grow—especially with recurring events like payments—so indexing is crucial for control plane performance at scale.
- Start with essential data fields and plan for future extensions as needs evolve.
- Standard Neon metadata (e.g., compute size, branch info) is accessible via the console. Avoid duplicating it in the catalog database unless accessing it separately adds significant complexity.
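Putting those tips together, a minimal catalog schema might look like the following sketch. Table and column names here are illustrative, not prescriptive:

```sql
CREATE EXTENSION IF NOT EXISTS citext;

CREATE TABLE customer (
    id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    name        citext NOT NULL,                 -- case-insensitive lookups
    email       citext UNIQUE NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE project (
    id              uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id     uuid NOT NULL REFERENCES customer (id),
    neon_project_id text NOT NULL UNIQUE,        -- the ID returned by the Neon API
    region          text NOT NULL,
    database_name   text NOT NULL,
    schema_version  integer NOT NULL DEFAULT 1,  -- key operational data
    created_at      timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE payment (
    id           uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id  uuid NOT NULL REFERENCES customer (id),
    amount_cents bigint NOT NULL,
    paid_at      timestamptz NOT NULL
);

-- Recurring events like payments grow fastest; index their lookup paths early.
CREATE INDEX payment_customer_id_idx ON payment (customer_id);
CREATE INDEX project_customer_id_idx ON project (customer_id);
```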
Automations
To effectively scale a multi-tenant architecture, leveraging automation tools is essential. The Neon API will allow you to automate various tasks, such as creating and managing projects, setting usage limits, and configuring resources. Beyond the API, Neon offers several integrations to streamline your workflows:
- **GitHub Actions**: Neon's GitHub integration allows you to automate database branching workflows directly from your repositories. By connecting a Neon project to a GitHub repository, you can set up actions that create or delete database branches in response to pull request events, facilitating isolated testing environments for each feature or bug fix.
- **Vercel Integration**: You can connect your Vercel projects to Neon, creating database branches for each preview deployment.
- **CI/CD pipelines**: By integrating Neon branching into your CI/CD, you can simplify your dev/test workflows by creating and deleting ephemeral environments automatically as child branches.
- **Automated backups to your own S3**: If you must keep your own data copy, you can schedule regular backups using tools like `pg_dump` in conjunction with GitHub Actions.
The Application Layer
Although the application layer isn’t our main focus, a common question developers ask us when approaching a multi-tenant architecture is: Do I deploy one application environment per database, or connect all databases to a single application environment?
Both approaches are viable, each with its own pros and cons.
Shared application environments
Pros of shared environments
- Managing a single application instance minimizes operational complexity.
- Updates and new features are easy to implement since changes apply universally.
- Operating one environment reduces infrastructure and maintenance costs.
Cons of shared environments
- A single application environment makes it difficult to offer tailored experiences for individual customers.
- Compliance becomes challenging when users' databases span multiple regions.
- Updates apply to all users simultaneously, which can be problematic for those needing specific software versions.
- A single environment heightens the risk of data breaches, as vulnerabilities can impact all users.
Advice
- **Implement robust authorization**: Ensure secure access, as all users share the same application environment.
- **Define user authentication and data routing**: For example:
  - Users provide their organization details during login.
  - Users access the application via an organization-specific subdomain.
  - The system identifies the user's organization based on their credentials.
- **Monitor usage and performance**: Regularly track application usage to prevent performance bottlenecks.
- **Plan maintenance windows carefully**: Minimize disruptions for all users by scheduling maintenance during low-usage periods.
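As an illustration of the subdomain-based routing option above, here's a minimal sketch. The host-parsing rules and the in-memory map stand in for a real catalog-database lookup; all names and connection strings are assumptions:

```javascript
// Sketch: resolve a tenant from an organization-specific subdomain, then
// look up that tenant's database connection string in the catalog.

// "acme.example.com" -> "acme"; null for the apex domain or nested subdomains.
function tenantFromHost(host, baseDomain = "example.com") {
  if (!host.endsWith("." + baseDomain)) return null;
  const sub = host.slice(0, -(baseDomain.length + 1));
  return sub && !sub.includes(".") ? sub : null;
}

// Placeholder catalog: tenant -> connection string (values are fake).
const catalog = new Map([
  ["acme", "postgres://app_user:secret@ep-acme-123.us-east-1.aws.neon.tech/appdb"],
]);

function connectionStringForTenant(host) {
  const tenant = tenantFromHost(host);
  return tenant ? catalog.get(tenant) ?? null : null;
}

console.log(tenantFromHost("acme.example.com")); // -> "acme"
```

In a real application, the `catalog` lookup would be a query against the catalog database described earlier, and the resolved connection string would feed your Postgres connection pool for that request.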
Isolated application environments
In this architecture, each customer instead has a dedicated application environment alongside their own database. As with the shared environment option, this design has pros and cons:
Pros of isolated environments
- Since each customer can now have a unique application environment, it’s easier to implement personalized features and configurations, to keep separate versions for particular customers, and so on.
- Compliance is also simpler if you’re handling multiple regions. Deploying the application in multiple regions can also help with latency.
- This design also opens the door for customers to control their own upgrade schedules, e.g., via defining their own maintenance windows.
Cons of isolated environments
- This design has an obvious tradeoff: it comes with higher complexity of deployment, monitoring, and maintenance.
- You’ll need to think about how to ensure optimal resource utilization across multiple environments, and how to keep observability on point to diagnose issues.
- Operating separate environments for each customer might also lead to higher costs.
Advice
If you decide to implement isolated environments, here’s some advice to consider:
- Design your architecture to accommodate growth, even if your setup is small today.
- Just as you do with Neon projects, take advantage of automation tools to streamline the creation and management of your application environments.
- Set up proper monitoring to track key metrics across all environments.
Migrating Schemas
In a database-per-user design, it is common for all users/databases to share the same schema. Any changes to that schema will most likely be rolled out to all individual databases simultaneously. In this section, we teach you how to use DrizzleORM, GitHub Actions, the Neon API, and a couple of custom template scripts to manage many databases using the same database schema.
Example app
To walk you through it, we’ve created example code in this repository. The example includes 4 Neon databases, all using Postgres 16 and all deployed to AWS us-east-1.
The schema consists of three tables: `users`, `projects`, and `tasks`. You can see the schema here: schema.ts, and for good measure, here’s the raw SQL equivalent: schema.sql. This default schema is referenced by each of the `drizzle.config.ts` files that have been created for each customer.
Workflow using Drizzle ORM and GitHub Actions
Creating Neon projects via a CLI script
Our example creates new Neon projects via the command line, using the following script:
This script utilizes the `commander` library to create a simple command-line interface (CLI) and the Neon API’s `createProject` method to set up a new project. Ensure that your Neon API key is stored in an environment variable named `NEON_API_KEY`.
To execute the script and create a new Neon project named "ACME Corp" with PostgreSQL version 16 in the aws-us-east-1 region, run:
In this example, the same approach was used to create the following projects:
- ACME Corp
- Payroll Inc
- Finance Co
- Talent Biz
To interact with the Neon API, you'll need to generate an API key. For more information, refer to the Neon documentation on creating an API key.
Generating a workflow to prepare for migrations
The script above goes through these steps:
- Ensures the `configs` directory exists, creating it if necessary.
- Retrieves the GitHub public key for encrypting secrets.
- Lists all projects in your Neon account.
- For each project:
  - Retrieves the connection URI from Neon.
  - Sanitizes project names for safe usage in directory names and environment variables.
  - Creates DrizzleORM config files.
  - Encrypts secrets and adds them to the GitHub repository.
  - Generates migrations using `drizzle-kit`.
- Finally, it generates a GitHub Actions workflow that includes all generated environment variables for running migrations.

To run the script, use the following command:

Ensure the following environment variables are set:

- `NEON_API_KEY`: Your Neon API key.
- `PERSONAL_ACCESS_TOKEN`: Your GitHub personal access token.

And update `repoOwner` and `repoName` to match your repository details.
Here’s an example output for the Drizzle configuration:
And for the GitHub workflow:
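Neither generated file is reproduced above, but for illustration, a per-customer `drizzle.config.ts` produced by a script like this typically looks something like the following sketch (the paths and environment-variable name are assumptions):

```typescript
import { defineConfig } from "drizzle-kit";

// Generated config for one customer ("ACME Corp"): every customer shares the
// same schema file but gets its own database URL and migrations folder.
export default defineConfig({
  dialect: "postgresql",
  schema: "./src/schema.ts",
  out: "./configs/acme_corp/migrations",
  dbCredentials: {
    url: process.env.ACME_CORP_DATABASE_URL!,
  },
});
```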
Running migrations
Now, we’re ready to run migrations:
This script:
- Only runs after a pull request (PR) has been merged. It reads through the `configs` directory and applies the migrations defined in each `drizzle.config.ts` file for every project or customer, ensuring that all databases are using the same schema.
- Uses `npx` to run the `drizzle-kit migrate` command against each `drizzle.config.ts` file, ensuring that the schema is applied to all databases.

The source code for this migration script is located at `src/scripts/migrate.js`. This approach automatically includes any new projects or customers added to the system, as well as schema changes that need to be applied across all databases.
Summary
Here’s an overview of the workflow:
- We used a script to automate the creation of DrizzleORM configuration files (`drizzle.config.ts`) and securely store database connection strings as GitHub secrets.
- We used a migration script to iterate through the `configs` directory and apply schema changes to all databases via `drizzle-kit migrate`.
- The GitHub Actions workflow triggers migrations automatically when a PR is merged. Environment variables for each project are explicitly injected into the workflow, giving DrizzleORM access to the connection strings needed for schema updates.
Backing up Projects to Your Own S3
As a managed database service, Neon already takes care of securing your data, always keeping a full copy of your dataset in object storage. But if your use case or company demands that you also keep a copy of your data in your own S3, this section covers how to automate the process via a scheduled GitHub Action. A more extensive explanation can be found in this two-part blog post series: Part 1, Part 2.
AWS IAM configuration
First, GitHub must be added as an identity provider to allow the Action to use your AWS credentials. To create a new Identity Provider, navigate to IAM > Access Management > Identity Providers, and click Add provider.
On the next screen, select OpenID Connect and add the following to the Provider URL and Audience fields:

- Provider URL: `https://token.actions.githubusercontent.com`
- Audience: `sts.amazonaws.com`
When you’re done, click Add Provider. The provider should now be visible in the list under IAM > Access Management > Identity Providers.
Now, you must create a role, which is an identity that you can assume to obtain temporary security credentials for specific tasks or actions within AWS. Navigate to IAM > Access Management > Roles, and click Create role.
On the next screen, you can create the trusted identity for the Role: select Web Identity, then select `token.actions.githubusercontent.com` from the Identity Provider dropdown menu.
Once you select the Identity Provider, you’ll be shown a number of fields to fill out. Select `sts.amazonaws.com` from the Audience dropdown menu, then fill out the GitHub repository details as per your requirements. When you’re ready, click Next. For reference, the options shown in the image below are for this repository.
You can skip selecting anything from the Add Permissions screen and click Next to continue.
On this screen, give the Role a name and description. You’ll use the Role name in the code for the GitHub Action. When you’re ready, click Create role.
S3 bucket policy
This section assumes you already have an S3 bucket. If you need instructions on how to create a bucket, refer to this blog post.
To ensure the Role being used in the GitHub Action can perform actions on the S3 bucket, you’ll need to update the bucket policy. Select your bucket then select the Permissions tab and click Edit.
You can now add the following policy, which grants the Role you created earlier access to perform S3 List, Get, Put, and Delete actions. Replace the Role name (`neon-multiple-db-s3-backups-github-action`) with your Role name, and replace the S3 bucket name (`neon-multiple-db-s3-backups`) with your S3 bucket name.
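For illustration, such a policy might look like the following sketch; the account ID is a placeholder, and you should substitute your own Role and bucket names:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGitHubActionRoleAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/neon-multiple-db-s3-backups-github-action"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::neon-multiple-db-s3-backups",
        "arn:aws:s3:::neon-multiple-db-s3-backups/*"
      ]
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while the object-level actions apply to the `/*` resource.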
When you’re ready, click Save changes.
GitHub secrets
Create the following GitHub Secrets to hold various values that you likely won’t want to expose or repeat in code:

- `AWS_ACCOUNT_ID`: This can be found by clicking on your user name in the AWS console.
- `S3_BUCKET_NAME`: In my case, this would be `neon-multiple-db-s3-backups`.
- `IAM_ROLE`: In my case, this would be `neon-multiple-db-s3-backups-github-action`.
Scheduled pg_dump/restore GitHub Action
Before diving into the code, here’s a look at this example in the Neon console dashboard. These are three databases set up for three fictional customers, all running Postgres 16 and all deployed to us-east-1. We will be backing up each database into its own folder within an S3 bucket, with different schedules and retention periods. All the code in this example lives in this repository.
Using the same naming conventions, there are three new files in the `.github/workflows` folder in the repository:

- `paycorp-payments-prod.yml`
- `acme-analytics-prod.yml`
- `paycorp-payments-prod.yml`
All the Actions are technically the same (besides the name of the file), but there are several areas where they differ. These are:

- The workflow name
- The `DATABASE_URL`
- The `RETENTION` period
For example, in the first `.yml` file, the workflow name is `acme-analytics-prod`, the `DATABASE_URL` points to `secrets.ACME_ANALYTICS_PROD`, and the `RETENTION` period is 7 days.
Here’s the full Action, and below the code snippet, we'll explain how it all works.
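The full Action lives in the example repository. As a condensed, hedged sketch of its overall shape (the step names, the Postgres install method, and the exact commands here are illustrative):

```yaml
name: acme-analytics-prod

on:
  schedule:
    - cron: '0 5 * * *' # run daily at 05:00 UTC

jobs:
  backup:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # required to assume the AWS Role via OIDC
      contents: read
    env:
      RETENTION: 7
      DATABASE_URL: ${{ secrets.ACME_ANALYTICS_PROD }}
      IAM_ROLE: ${{ secrets.IAM_ROLE }}
      AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
      S3_BUCKET_NAME: ${{ secrets.S3_BUCKET_NAME }}
      AWS_REGION: us-east-1
      PG_VERSION: 16 # the full Action pins this exact Postgres version
    steps:
      - name: Install PostgreSQL client
        run: |
          sudo apt-get update
          sudo apt-get install -y postgresql-client

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT_ID }}:role/${{ env.IAM_ROLE }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Set file, folder and path variables
        run: |
          GZIP_NAME="$(date +'%B-%d-%Y@%H:%M:%S').gz"
          FOLDER_NAME="${{ github.workflow }}"
          echo "GZIP_NAME=$GZIP_NAME" >> "$GITHUB_ENV"
          echo "FOLDER_NAME=$FOLDER_NAME" >> "$GITHUB_ENV"
          echo "UPLOAD_PATH=s3://$S3_BUCKET_NAME/$FOLDER_NAME/$GZIP_NAME" >> "$GITHUB_ENV"

      - name: Run pg_dump and upload to S3
        run: |
          pg_dump "$DATABASE_URL" | gzip > "$GZIP_NAME"
          aws s3 cp "$GZIP_NAME" "$UPLOAD_PATH"
          # A further step would prune objects older than $RETENTION days.
```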
Starting from the top, there are a few configuration options:
Action configuration
- `name`: This is the workflow name and will also be used when creating the folder in the S3 bucket.
- `cron`: This determines how often the Action will run; take a look at the GitHub docs, where the POSIX cron syntax is explained.
Environment variables
- `RETENTION`: This determines how long a backup file should remain in the S3 bucket before it’s deleted.
- `DATABASE_URL`: This is the Neon Postgres connection string for the database you’re backing up.
- `IAM_ROLE`: This is the name of the AWS IAM Role.
- `AWS_ACCOUNT_ID`: This is your AWS Account ID.
- `S3_BUCKET_NAME`: This is the name of the S3 bucket where all backups are being stored.
- `AWS_REGION`: This is the region where the S3 bucket is deployed.
- `PG_VERSION`: This is the version of Postgres to install.
GitHub Secrets
As we mentioned above, several of the above environment variables are defined using secrets. These variables can be added to Settings > Secrets and variables > Actions.
Here’s a screenshot of the GitHub repository secrets including the connection string for the fictional ACME Analytics Prod database.
Action steps
Install Postgres

This step installs Postgres into the GitHub Action’s virtual environment. The version to install is defined by the `PG_VERSION` environment variable.

Configure AWS credentials
This step configures AWS credentials within the GitHub Action virtual environment, allowing the workflow to interact with AWS services securely.
Set file, folder and path variables
In this step, I’ve created three variables that are all output to `GITHUB_ENV`. This allows me to access the values from other steps in the Action.
The three variables are as follows:
- `GZIP_NAME`: The name of the `.gz` file, derived from the date, which would produce a file name similar to `October-21-2024@07:53:02.gz`.
- `FOLDER_NAME`: The folder where the `.gz` files are to be uploaded.
- `UPLOAD_PATH`: The full path, including the S3 bucket name, folder name, and `.gz` file name.
Create folder if it doesn’t exist
This step creates a new folder (if one doesn’t already exist) inside the S3 bucket, using the `FOLDER_NAME` defined in the previous step.
Final remarks
You can create as many of these Actions as you need. Just be careful to double-check the `DATABASE_URL` to avoid backing up a database to the wrong folder.
important
GitHub Actions will time out after ~6 hours. The size of your database and how you’ve configured it will determine how long the `pg_dump` step takes. If you do experience timeout issues, you can self-host GitHub Actions runners.