Manage computes

By default, a single read-write compute endpoint is created for your project's primary branch.

To connect to a database that resides in a branch, you must connect via a compute endpoint associated with the branch. The following diagram shows the project's primary branch (main) and a child branch, both of which have an associated compute endpoint.

Project
    |----primary branch (main) ---- compute endpoint <--- application/client
             |    |
             |    |---- database (neondb)
             |
             ---- child branch ---- compute endpoint <--- application/client
                            |
                            |---- database (mydb)

Neon supports both read-write and read-only compute endpoints. Read-only compute endpoints are also referred to as Read replicas. A branch can have a single read-write compute endpoint but supports multiple read-only compute endpoints.

Tier limits define resources (vCPUs and RAM) available to a compute endpoint. The Neon Free Tier provides a shared vCPU and up to 1 GB of RAM per compute endpoint. Paid plans support larger compute sizes and autoscaling.

View a compute endpoint

A compute endpoint is associated with a branch. To view a compute endpoint, select Branches in the Neon Console, and select a branch. If the branch has a compute endpoint, it is shown on the branch page.

Compute endpoint details shown on the branch page include:

  • Id: The compute endpoint ID.
  • Type: The type of compute endpoint: R/W (Read-write) or R/O (Read-only).
  • Status: The compute endpoint status (Active, Idle, or Stopped).
  • Compute size: The size of the compute endpoint. Users on paid plans can configure the amount of vCPU and RAM for a compute endpoint when creating or editing a compute endpoint. Shows autoscaling minimum and maximum vCPU values if autoscaling is enabled.
  • Auto-suspend delay: The number of seconds of inactivity after which a compute endpoint is automatically suspended. The default is 300 seconds (5 minutes). For more information, see Autosuspend configuration.
  • Last active: The date and time the compute was last active.

Create a compute endpoint

You can only create a read-write compute endpoint for a branch that does not have one, but a branch can have multiple read-only compute endpoints (referred to as "read replicas"). Read replicas are a paid plan feature.

To create an endpoint:

  1. In the Neon Console, select Branches.
  2. Select a branch that does not have a compute endpoint.
  3. Click Add compute.
  4. On the Create compute endpoint dialog, specify your settings and click Create. Selecting Read-only creates a Read replica.

Edit a compute endpoint

Neon paid plan users can edit a compute endpoint to change the compute size or Autosuspend configuration.

To edit a compute endpoint:

  1. In the Neon Console, select Branches.

  2. Select a branch.

  3. Click the kebab menu in the Computes table, and select Edit.

    The Edit window opens, letting you take a range of actions, depending on your tier.

  4. Once you've made your changes, click Save. All changes take immediate effect.

For information about selecting an appropriate compute size, see How to size your compute.

What happens to the compute endpoint when making changes

Some key points to understand about how your endpoint responds when you make changes to your compute settings:

  • Changing the size of your fixed compute restarts the endpoint and temporarily disconnects all existing connections.

    note

    When your compute resizes automatically as part of the autoscaling feature, there are no restarts or disconnects; it just scales.

  • Editing minimum or maximum autoscaling sizes also requires a restart; existing connections are temporarily disconnected.
  • Changes to autosuspend settings do not require an endpoint restart; existing connections are unaffected.

To avoid prolonged interruptions resulting from compute restarts, we recommend configuring your clients and applications to reconnect automatically in case of a dropped connection.

Compute size and autoscaling configuration

Users on paid plans can change compute size settings when editing a compute endpoint.

Compute size is the number of Compute Units (CUs) assigned to a Neon compute endpoint. The number of CUs determines the processing capacity of the compute endpoint. One CU has 1 vCPU and 4 GB of RAM, 2 CUs have 2 vCPUs and 8 GB of RAM, and so on. The amount of RAM in GB is always 4 times the vCPUs, as shown in the table below.

| Compute size (in CUs) | vCPU | RAM   |
|-----------------------|------|-------|
| 0.25                  | 0.25 | 1 GB  |
| 0.5                   | 0.5  | 2 GB  |
| 1                     | 1    | 4 GB  |
| 2                     | 2    | 8 GB  |
| 3                     | 3    | 12 GB |
| 4                     | 4    | 16 GB |
| 5                     | 5    | 20 GB |
| 6                     | 6    | 24 GB |
| 7                     | 7    | 28 GB |
| 8                     | 8    | 32 GB |

Neon supports fixed-size and autoscaling compute configurations.

  • Fixed size: You can use the slider to select a fixed compute size. A fixed-size compute does not scale to meet workload demand.
  • Autoscaling: You can also use the slider to specify a minimum and maximum compute size. Neon scales the compute size up and down within the selected compute size boundaries to meet workload demand. For information about how Neon implements the Autoscaling feature, see Autoscaling.

info

The neon_utils extension provides a num_cpus() function you can use to monitor how the Autoscaling feature allocates compute resources in response to workload. For more information, see The neon_utils extension.
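
For example, assuming the neon_utils extension is available in your database, you can install it and check the number of vCPUs currently allocated to your compute:

CREATE EXTENSION IF NOT EXISTS neon_utils;
SELECT num_cpus();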

How to size your compute

The size of your compute determines the amount of frequently accessed data you can cache in memory and the maximum number of simultaneous connections you can support. A compute size that is too small can therefore lead to suboptimal query performance and connection limit issues.

In Postgres, the shared_buffers setting defines the amount of data that can be held in memory. In Neon, the shared_buffers parameter is always set to 128 MB, but Neon uses a Local File Cache (LFC) to extend the amount of memory available for caching data. The LFC can use up to 80% of your compute's RAM.

The Postgres max_connections setting defines your compute's maximum simultaneous connection limit and is set according to your compute size. Larger computes support higher maximum connection limits.
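
To check the values that apply to your compute, you can use the standard Postgres SHOW command. shared_buffers always reports 128 MB on Neon, while max_connections reflects your compute size:

SHOW shared_buffers;
SHOW max_connections;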

The following table outlines the vCPU, RAM, LFC size (80% of RAM), and max_connections limit for each compute size that Neon supports.

| Compute Size (CU) | vCPU | RAM   | LFC size | max_connections |
|-------------------|------|-------|----------|-----------------|
| 0.25              | 0.25 | 1 GB  | 0.8 GB   | 112             |
| 0.50              | 0.50 | 2 GB  | 1.6 GB   | 225             |
| 1                 | 1    | 4 GB  | 3.2 GB   | 450             |
| 2                 | 2    | 8 GB  | 6.4 GB   | 901             |
| 3                 | 3    | 12 GB | 9.6 GB   | 1351            |
| 4                 | 4    | 16 GB | 12.8 GB  | 1802            |
| 5                 | 5    | 20 GB | 16 GB    | 2253            |
| 6                 | 6    | 24 GB | 19.2 GB  | 2703            |
| 7                 | 7    | 28 GB | 22.4 GB  | 3154            |
| 8                 | 8    | 32 GB | 25.6 GB  | 3604            |

note

Users on paid plans can configure the size of their computes. The compute size for Free Tier users is set at .25 CU (.25 vCPU and 1 GB RAM).

When selecting a compute size, ideally, you want to keep as much of your dataset in memory as possible. This improves performance by reducing the amount of reads from storage. If your dataset is not too large, select a compute size that will hold the entire dataset in memory. For larger datasets that cannot be fully held in memory, select a compute size that can hold your working set. Selecting a compute size for a working set involves advanced steps, which are outlined below. See Sizing your compute based on the working set.

Regarding connection limits, you'll want a compute size that can support your anticipated maximum number of concurrent connections. If you are using Autoscaling, it is important to remember that your max_connections setting is based on the minimum compute size in your autoscaling configuration. The max_connections setting does not scale with your compute. To avoid the max_connections constraint, you can use a pooled connection with your application, which supports up to 10,000 concurrent user connections. See Connection pooling.
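
For example, a pooled connection string adds a -pooler suffix to the compute endpoint ID in the hostname. The following is a sketch using the placeholder credentials and endpoint from the examples on this page:

psql postgres://alex:AbC123dEf@ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech/neondb?sslmode=require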

Sizing your compute based on the working set

If it's not possible to hold your entire dataset in memory, the next best option is to ensure that your working set is in memory. A working set is your frequently accessed or recently used data and indexes. To determine whether your working set is fully in memory, you can query the cache hit ratio for your Neon compute. The cache hit ratio tells you how many queries are served from memory. Queries not served from memory bypass the cache to retrieve data from Neon storage (the Pageserver), which can affect query performance.

As mentioned above, Neon computes use a Local File Cache (LFC) to extend Postgres shared buffers. To query the cache hit ratio for your compute's LFC, Neon provides a neon extension with a neon_stat_file_cache view.

To use the neon_stat_file_cache view, install the neon extension on a preferred database or connect to the Neon-managed postgres database where the neon extension is always available.

To install the extension on a preferred database:

CREATE EXTENSION neon;

To connect to the Neon-managed postgres database instead:

psql postgres://alex:AbC123dEf@ep-cool-darkness-123456.us-east-2.aws.neon.tech/postgres?sslmode=require

If you are already connected via psql, you can simply switch to the postgres database using the \c command:

\c postgres

Issue the following query to view LFC usage data for your compute instance:

SELECT * FROM neon_stat_file_cache;
 file_cache_misses | file_cache_hits | file_cache_used | file_cache_writes | file_cache_hit_ratio  
-------------------+-----------------+-----------------+-------------------+----------------------
           2133643 |       108999742 |             607 |          10767410 |                98.08
(1 row)

The file_cache_hit_ratio is calculated according to the following formula:

file_cache_hit_ratio = (file_cache_hits / (file_cache_hits + file_cache_misses)) * 100
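
If you want to derive the ratio from the raw counters yourself, the following query (a sketch based on the view columns shown above) applies the same formula:

SELECT round(file_cache_hits::numeric / (file_cache_hits + file_cache_misses) * 100, 2) AS file_cache_hit_ratio
FROM neon_stat_file_cache;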

tip

You can also use EXPLAIN ANALYZE with the FILECACHE option to view data for LFC hits and misses. See View LFC metrics with EXPLAIN ANALYZE.
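
As a sketch, assuming a hypothetical table named orders, such a query would look something like this:

EXPLAIN (ANALYZE, FILECACHE) SELECT * FROM orders WHERE order_id = 100;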

For OLTP workloads, you should aim for a file_cache_hit_ratio above 99%. If your hit ratio is below that, your working set may not be fully or adequately in memory. In this case, consider using a larger compute with more memory. Keep in mind that the statistics are for the entire compute, not for specific databases or tables.

note

The cache hit ratio query is based on statistics that represent the lifetime of your compute, from the last time the compute started until the time you ran the query. Be aware that statistics are lost when your compute stops and gathered again from scratch when your compute restarts. You'll only want to run the cache hit ratio query after a representative workload has been run. For example, say that you increased your compute size after seeing a cache hit ratio below 99%. Changing the compute size restarts your compute, so you lose all of your current usage statistics. In this case, you should run your workload before you try the cache hit ratio query again to see if your cache hit ratio improved. Optionally, to help speed up the process, you can use the pg_prewarm extension to pre-load data into memory after a compute restart. See The pg_prewarm extension.
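
As a minimal sketch, assuming a hypothetical table named orders that your workload queries frequently, you could pre-load it after a restart like this:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('orders');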

Autoscaling considerations

Autoscaling is most effective when your data (either your full dataset or your working set) can be fully cached in memory on the minimum compute size in your autoscaling configuration.

Consider this scenario: If your data size is approximately 6 GB, starting with a compute size of .25 CU can lead to suboptimal performance because your data cannot be adequately cached. While your compute will scale up from .25 CU on demand, you may experience poor query performance until your compute scales up and fully caches your working set. You can avoid this issue if your minimum compute size can hold your working set in memory.

As mentioned above, your max_connections setting is based on the minimum compute size in your autoscaling configuration and does not scale along with your compute. To avoid this max_connections constraint, you can use a pooled connection for your application. See Connection pooling.

Autosuspend configuration

Neon's Autosuspend feature automatically transitions a compute endpoint into an Idle state after a period of inactivity, also known as "scale-to-zero". By default, suspension occurs after 5 minutes of inactivity, but this delay can be adjusted. For instance, you can increase the delay to reduce the frequency of suspensions, or you can disable autosuspend completely to maintain an "always-active" compute endpoint. An "always-active" configuration eliminates the few seconds of latency required to reactivate a compute endpoint but is likely to increase your compute time usage.

The maximum Suspend compute after a period of inactivity setting is 7 days. To configure a compute as "always-active", deselect Suspend compute after a period of inactivity. For more information, refer to Configuring autosuspend for Neon computes.
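
Autosuspend can also be configured programmatically. The following is a sketch using the Neon API's update endpoint method; it assumes the suspend_timeout_seconds request attribute and uses the example project and endpoint IDs from this page (check the method's request body schema in the Neon API reference for the exact attribute):

curl -X 'PATCH' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints/ep-aged-math-668285' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "suspend_timeout_seconds": 600
  }
}'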

Restart a compute endpoint

It is sometimes necessary to restart a compute endpoint. For example, if you upgrade to a paid plan account, you may want to restart your compute endpoint to immediately apply your upgraded limits.

important

Please be aware that restarting a compute endpoint interrupts any connections currently using the compute endpoint. To avoid prolonged interruptions resulting from compute restarts, we recommend configuring your clients and applications to reconnect automatically in case of a dropped connection.

You can restart a compute endpoint using one of the following methods:

  • Stop activity on your compute endpoint (stop running queries) and wait for your compute endpoint to suspend due to inactivity. By default, Neon suspends a compute after 5 minutes of inactivity. You can watch the status of your compute on the Branches page in the Neon Console. Select your branch and monitor your compute's Status field. Wait for it to report an Idle status. The compute will restart the next time it's accessed, and the status will change to Active.
  • Issue a Restart endpoint call using the Neon API. You can do this directly from the Neon API Reference using the Try It! feature. You'll need an API key. A sketch of this call appears after this list.
  • Users on paid plans can temporarily set a compute's Suspend compute after a period of inactivity setting to a low value to initiate a suspension (the default setting is 5 minutes). See Autosuspend configuration for instructions. After doing so, check the Operations page in the Neon Console and look for a suspend_compute action. Any activity on the compute endpoint, such as running a query, will restart it. Watch for a start_compute action on the Operations page.
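
As a sketch of the API option, assuming the Restart endpoint method follows the pattern POST /projects/{project_id}/endpoints/{endpoint_id}/restart, the call could look like this with the example project and endpoint IDs from this page:

curl -X 'POST' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints/ep-aged-math-668285/restart' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"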

Delete a compute endpoint

Deleting a compute endpoint is a permanent action.

To delete a compute endpoint:

  1. In the Neon Console, select Branches.
  2. Select a branch.
  3. Click the kebab menu in the Computes table, and select Delete.
  4. On the confirmation dialog, click Delete.

Manage compute endpoints with the Neon API

Compute endpoint actions performed in the Neon Console can also be performed using the Neon API. The following examples demonstrate how to create, view, update, and delete compute endpoints using the Neon API. For other compute endpoint API methods, refer to the Neon API reference.

note

The API examples that follow may not show all of the user-configurable request body attributes that are available to you. To view all attributes for a particular method, refer to the method's request body schema in the Neon API reference.

You can pipe the JSON response from each example through jq, an optional third-party tool that formats the response, making it easier to read. For information about this utility, see jq.

Prerequisites

A Neon API request requires an API key. For information about obtaining an API key, see Create an API key. In the cURL examples below, $NEON_API_KEY is specified in place of an actual API key, which you must provide when making a Neon API request.
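
For example, in a terminal session you might export the key as an environment variable so that the examples below can reference it (replace the placeholder with your actual key):

export NEON_API_KEY=<your_api_key>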

Create a compute endpoint with the API

The following Neon API method creates a compute endpoint.

POST /projects/{project_id}/endpoints

The API method appears as follows when specified in a cURL command. A compute endpoint must be associated with a branch. Neon supports read-write and read-only compute endpoints; read-only compute endpoints are for creating Read replicas. A branch can have a single read-write compute endpoint but supports multiple read-only compute endpoints, so the branch you specify cannot already have a read-write compute endpoint when you create one of that type.

curl -X 'POST' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "branch_id": "br-blue-tooth-671580",
    "type": "read_write"
  }
}'
Response body
{
  "endpoint": {
    "host": "ep-aged-math-668285.us-east-2.aws.neon.tech",
    "id": "ep-aged-math-668285",
    "project_id": "hidden-cell-763301",
    "branch_id": "br-blue-tooth-671580",
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 1,
    "region_id": "aws-us-east-2",
    "type": "read_write",
    "current_state": "init",
    "pending_state": "active",
    "settings": {
      "pg_settings": {}
    },
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "created_at": "2023-01-04T18:39:41Z",
    "updated_at": "2023-01-04T18:39:41Z",
    "proxy_host": "us-east-2.aws.neon.tech"
  },
  "operations": [
    {
      "id": "e0e4da91-8576-4348-913b-aaf61a46d314",
      "project_id": "hidden-cell-763301",
      "branch_id": "br-blue-tooth-671580",
      "endpoint_id": "ep-aged-math-668285",
      "action": "start_compute",
      "status": "running",
      "failures_count": 0,
      "created_at": "2023-01-04T18:39:41Z",
      "updated_at": "2023-01-04T18:39:41Z"
    }
  ]
}
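
To create a read-only compute endpoint (a Read replica) on the same branch instead, the request is the same except for the type value. The following is a sketch that assumes the read_only type value; check the Neon API reference for the exact enum:

curl -X 'POST' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "branch_id": "br-blue-tooth-671580",
    "type": "read_only"
  }
}'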

List compute endpoints with the API

The following Neon API method lists compute endpoints for the specified project. A compute endpoint belongs to a Neon project. To view the API documentation for this method, refer to the Neon API reference.

GET /projects/{project_id}/endpoints

The API method appears as follows when specified in a cURL command:

curl -X 'GET' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
Response body
{
  "endpoints": [
    {
      "host": "ep-young-art-646685.us-east-2.aws.neon.tech",
      "id": "ep-young-art-646685",
      "project_id": "hidden-cell-763301",
      "branch_id": "br-shy-credit-899131",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 1,
      "region_id": "aws-us-east-2",
      "type": "read_write",
      "current_state": "idle",
      "settings": {
        "pg_settings": {}
      },
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "last_active": "2023-01-04T18:38:25Z",
      "created_at": "2023-01-04T18:38:23Z",
      "updated_at": "2023-01-04T18:43:36Z",
      "proxy_host": "us-east-2.aws.neon.tech"
    },
    {
      "host": "ep-aged-math-668285.us-east-2.aws.neon.tech",
      "id": "ep-aged-math-668285",
      "project_id": "hidden-cell-763301",
      "branch_id": "br-blue-tooth-671580",
      "autoscaling_limit_min_cu": 1,
      "autoscaling_limit_max_cu": 1,
      "region_id": "aws-us-east-2",
      "type": "read_write",
      "current_state": "idle",
      "settings": {
        "pg_settings": {}
      },
      "pooler_enabled": false,
      "pooler_mode": "transaction",
      "disabled": false,
      "passwordless_access": true,
      "last_active": "2023-01-04T18:39:42Z",
      "created_at": "2023-01-04T18:39:41Z",
      "updated_at": "2023-01-04T18:44:48Z",
      "proxy_host": "us-east-2.aws.neon.tech"
    }
  ]
}

Update a compute endpoint with the API

The following Neon API method updates the specified compute endpoint. To view the API documentation for this method, refer to the Neon API reference.

PATCH /projects/{project_id}/endpoints/{endpoint_id}

The API method appears as follows when specified in a cURL command. The example reassigns the compute endpoint to another branch by changing the branch_id. A compute endpoint must be associated with a branch, and the branch that you specify cannot have an existing read-write compute endpoint.

curl -X 'PATCH' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints/ep-young-art-646685' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "branch_id": "br-green-lab-617946"
  }
}'
Response body
{
  "endpoint": {
    "host": "ep-young-art-646685.us-east-2.aws.neon.tech",
    "id": "ep-young-art-646685",
    "project_id": "hidden-cell-763301",
    "branch_id": "br-green-lab-617946",
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 1,
    "region_id": "aws-us-east-2",
    "type": "read_write",
    "current_state": "idle",
    "pending_state": "idle",
    "settings": {
      "pg_settings": {}
    },
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "last_active": "2023-01-04T18:38:25Z",
    "created_at": "2023-01-04T18:38:23Z",
    "updated_at": "2023-01-04T18:47:36Z",
    "proxy_host": "us-east-2.aws.neon.tech"
  },
  "operations": [
    {
      "id": "03bf0bbc-cc46-4863-a5c4-f31fc1881228",
      "project_id": "hidden-cell-763301",
      "branch_id": "br-green-lab-617946",
      "endpoint_id": "ep-young-art-646685",
      "action": "apply_config",
      "status": "running",
      "failures_count": 0,
      "created_at": "2023-01-04T18:47:36Z",
      "updated_at": "2023-01-04T18:47:36Z"
    },
    {
      "id": "c96be00c-6340-4fb2-b80a-5ae96f469969",
      "project_id": "hidden-cell-763301",
      "branch_id": "br-green-lab-617946",
      "endpoint_id": "ep-young-art-646685",
      "action": "suspend_compute",
      "status": "scheduling",
      "failures_count": 0,
      "created_at": "2023-01-04T18:47:36Z",
      "updated_at": "2023-01-04T18:47:36Z"
    }
  ]
}
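
Compute size settings can be updated with the same method. The following sketch adjusts the autoscaling range using the autoscaling_limit_min_cu and autoscaling_limit_max_cu attributes that appear in the response bodies above, assuming your plan supports the requested sizes:

curl -X 'PATCH' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints/ep-young-art-646685' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
  "endpoint": {
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 2
  }
}'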

Delete a compute endpoint with the API

The following Neon API method deletes the specified compute endpoint. To view the API documentation for this method, refer to the Neon API reference.

DELETE /projects/{project_id}/endpoints/{endpoint_id}

The API method appears as follows when specified in a cURL command.

curl -X 'DELETE' \
  'https://console.neon.tech/api/v2/projects/hidden-cell-763301/endpoints/ep-young-art-646685' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $NEON_API_KEY"
Response body
{
  "endpoint": {
    "host": "ep-young-art-646685.us-east-2.aws.neon.tech",
    "id": "ep-young-art-646685",
    "project_id": "hidden-cell-763301",
    "branch_id": "br-green-lab-617946",
    "autoscaling_limit_min_cu": 1,
    "autoscaling_limit_max_cu": 1,
    "region_id": "aws-us-east-2",
    "type": "read_write",
    "current_state": "idle",
    "settings": {
      "pg_settings": {}
    },
    "pooler_enabled": false,
    "pooler_mode": "transaction",
    "disabled": false,
    "passwordless_access": true,
    "last_active": "2023-01-04T18:38:25Z",
    "created_at": "2023-01-04T18:38:23Z",
    "updated_at": "2023-01-04T18:47:45Z",
    "proxy_host": "us-east-2.aws.neon.tech"
  },
  "operations": []
}

Need help?

Join our Discord Server to ask questions or see what others are doing with Neon. Users on paid plans can open a support ticket from the console. For more detail, see Getting Support.
