At Neon Support, we’ve noticed users occasionally encountering connection limit errors while connecting applications to their databases. Today, we’re sharing insights from our support exchanges to help you understand and avoid this issue in your setup.
Specifically, our users sometimes report running into the following Postgres connection error:
FATAL: remaining connection slots are reserved for non-replication superuser connections (SQLSTATE 53300)
What causes this error?
The “remaining connection slots are reserved” error occurs when the maximum number of simultaneous database connections, defined by the Postgres max_connections setting, is reached. A few of those connection slots are reserved for superusers, so regular users run out of slots slightly before the configured maximum.
For example, if your Postgres instance supports 112 simultaneous connections, it might reserve 4 for superusers, allowing 108 for regular concurrent connections.
Typically, the error is caused by a high number of simultaneous connection requests from the same Postgres user that exceeds the limit for their database. In one case, we found that a user was hitting the 108-connection limit several times in a 24-hour period, producing “remaining connection slots are reserved” errors during those spikes.
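To see who is holding connections at any given moment, you can count them per user and database in the pg_stat_activity system view. A simple query along these lines works on any Postgres instance:
SELECT usename, datname, count(*) AS connections
FROM pg_stat_activity
GROUP BY usename, datname
ORDER BY connections DESC;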
What is the Postgres max_connections setting?
Postgres defines the max_connections setting as follows:
“Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections, but might be less if your kernel settings will not support it (as determined during initdb). This parameter can only be set at server start.”
Why not set a really high max_connections value?
Each Postgres connection spawns a new operating system process, and each process consumes memory. Set max_connections too high and you risk running out of memory, which is why Postgres limits the number of open connections.
What is your max_connections limit?
Neon is a managed Postgres service, so we manage the max_connections setting for you. Rather than using the Postgres default of 100 simultaneous connections, Neon sets max_connections based on the size of your compute and how much memory it has. The exact formula can be found in the Neon codebase:
maxConnections := max(100, min(int64(ramInBytes/9_531_392), 4_000))
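The divisor works out to roughly 9.1 MB of RAM per connection, with a floor of 100 connections and a ceiling of 4,000. If you want to sanity-check the arithmetic for a 1 GB compute, you can run it as plain SQL (the numbers here are just the formula’s constants, not settings you configure):
SELECT greatest(100, least(1073741824 / 9531392, 4000)) AS max_connections;
-- returns 112 for 1 GB of RAM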
The following table shows the max_connections setting for different compute sizes in Neon, based on the formula above:
Compute Size (CU) | vCPU | RAM (vCPU × 4 GB) | max_connections |
---|---|---|---|
0.25 | 0.25 | 1 GB | 112 |
0.50 | 0.50 | 2 GB | 225 |
1 | 1 | 4 GB | 450 |
2 | 2 | 8 GB | 901 |
3 | 3 | 12 GB | 1351 |
4 | 4 | 16 GB | 1802 |
5 | 5 | 20 GB | 2253 |
6 | 6 | 24 GB | 2703 |
7 | 7 | 28 GB | 3154 |
You can check the max_connections limit for your compute by running the following query from the Neon SQL Editor or a client connected to Neon:
SHOW max_connections;
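To see how much headroom you have left, you can compare the number of open connections against that limit, for example:
SELECT count(*) AS connections_in_use,
       current_setting('max_connections') AS connection_limit
FROM pg_stat_activity;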
How do you avoid these errors?
As a Neon Free Tier user, your compute size is fixed at 0.25 CU, which gives you:
- 0.25 vCPU
- 1 GB of RAM
- A max_connections setting of 112
You have a few options in this case:
- Find and remove long-running or idle connections.
- Upgrade to the Neon Pro Plan for larger computes with a higher max_connections setting.
- Enable connection pooling.
Find and remove long-running or idle connections
If long-running or idle connections are eating into your connection limit, dropping them may be enough to make room for new ones, at least in the short term.
You can check for long-running or idle connections by querying the pg_stat_activity view. The exact filters depend on your workload, but a query along these lines is a good starting point:
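-- long-running: started more than 5 minutes ago; adjust to suit your workload
SELECT pid, usename, state, query_start, state_change
FROM pg_stat_activity
WHERE state = 'idle'
   OR query_start < now() - interval '5 minutes';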
To remove those connections, you can use the pg_terminate_backend function. For example, this terminates all idle connections to a given database, excluding your own session:
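SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'databasename'
  AND pid <> pg_backend_pid()  -- don't terminate your own session
  AND state = 'idle';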
Replace ‘databasename’ with the actual name of your database.
Upgrade to the Neon Pro Plan for larger compute sizes
You can upgrade to the Neon Pro Plan from the Billing page in the Neon Console. Once you are signed up, you can select a larger compute size that supports more connections. You’ll find the instructions here: Compute size and autoscaling configuration.
Enable connection pooling
Even with the largest compute size, the max_connections limit may not be sufficient for some applications, such as those that use serverless functions. To increase the number of connections that Neon supports, you can enable connection pooling. All Neon plans, including the Neon Free Tier, support connection pooling.
Neon uses PgBouncer for connection pooling, which allows Neon to support up to 10,000 simultaneous connections.
When an application needs to connect to a database, PgBouncer provides a connection from the pool. Connections in the pool are routed to a smaller number of actual Postgres connections. When a connection is no longer required, it is returned to the pool and is available to be used again.
Using a pool of connections improves performance by reducing the number of connections that need to be created and torn down to service incoming requests. Connection pooling also helps avoid rejected connections. When all connections in the pool are being used, PgBouncer queues a new request until a connection from the pool becomes available.
How to enable connection pooling in Neon
Enabling connection pooling in Neon requires adding a -pooler suffix to the compute endpoint ID, which is part of the Neon hostname. Connections that specify the -pooler suffix in the connection string use a pooled connection. The -pooler suffix routes the connection to a pooled connection port at the Neon proxy.
You can add the -pooler suffix to the endpoint ID in your connection string as shown in the following example, where the role, password, endpoint ID, and database name are placeholder values:
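postgres://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/dbname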
But you don’t have to add this suffix manually. The Connection Details widget on the Neon Dashboard provides a Pooled connection checkbox that adds the -pooler suffix to a connection string, which you can copy and paste.
Conclusion
Dealing with “remaining connection slots are reserved” errors can be straightforward with the right approach. By knowing your database’s connection limit, managing long-running and idle connections, right-sizing your compute, or enabling connection pooling, you can ensure that you always have enough connections to handle your application load.
Thanks for reading and stay tuned for the next Support Case Recap!