The pg_embedding extension
Use Neon's pg_embedding extension with Hierarchical Navigable Small World (HNSW) for graph-based vector similarity search in Postgres
The `pg_embedding` extension enables the use of the Hierarchical Navigable Small World (HNSW) algorithm for vector similarity search in Postgres.

The `pg_embedding` extension was updated on August 3, 2023 to add support for on-disk index creation and additional distance metrics. If you installed `pg_embedding` before this date and want to upgrade to the new version, see Upgrade to pg_embedding for on-disk indexes for instructions.
Neon also supports `pgvector` for vector similarity search. See The pgvector extension.
Using the pg_embedding extension
This section describes how to use the `pg_embedding` extension in Neon with simple examples demonstrating the required statements, syntax, and options. The statements in this summary are described in further detail in the sections that follow.
Enable the extension
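To enable the extension, run the following statement from a client connected to your Neon database. This assumes the extension is registered under the name `embedding`, as in the Neon extension catalog:

```sql
CREATE EXTENSION embedding;
```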
Create a table for your vector data
To store your vector data, create a table similar to the following:
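For example, the following statement creates a table with an `id` column and a `real[]` column for vector data (the table and column names here are illustrative):

```sql
CREATE TABLE documents(id BIGSERIAL PRIMARY KEY, embedding real[]);
```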
This statement generates a table named `documents` with a `real[]` type column for storing vector data. Your table and vector column names may differ.
To insert vector data, use an `INSERT` statement similar to the following:
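A minimal sketch, assuming the `documents` table described above with a three-dimensional `embedding` column:

```sql
INSERT INTO documents(embedding)
VALUES ('{1.1, 2.2, 3.3}'), ('{4.4, 5.5, 6.6}'), ('{7.7, 8.8, 9.9}');
```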
The `pg_embedding` extension supports Euclidean (L2), cosine, and Manhattan distance metrics.
Euclidean (L2) distance:
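A sketch of a nearest-neighbor query using the `<->` (Euclidean) operator; substitute `<=>` for cosine distance or `<~>` for Manhattan distance:

```sql
SELECT id FROM documents ORDER BY embedding <-> array[3,3,3] LIMIT 1;
```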
- `SELECT id FROM documents` selects the `id` field from all records in the `documents` table.
- `ORDER BY` sorts the selected records in ascending order based on the calculated distances. In other words, records with values closer to the `[3,3,3]` query vector according to the distance metric will be returned first.
- The `<->`, `<=>`, and `<~>` operators define the distance metric, which calculates the distance between the query vector and each row of the dataset.
- `LIMIT 1` limits the result set to one record after sorting. You can adjust this value as required.
In summary, the query retrieves the ID of the record from the `documents` table whose value is closest to the `[3,3,3]` query vector according to the specified distance metric.
Create an HNSW index
To optimize search behavior, you can add an HNSW index. To create the HNSW index on your vector column, use a `CREATE INDEX` statement as shown in the following examples. The `pg_embedding` extension supports indexes for use with Euclidean, cosine, and Manhattan distance metrics. You must ensure that your search query syntax matches the index that you define. You will notice in the examples below that each distance metric has a specific operator (`<->` for Euclidean, `<=>` for cosine, and `<~>` for Manhattan).
Euclidean (L2) distance index:
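For example (Euclidean is the default operator class; the `dims` value must match the dimensions of your vector data, three in the example table used here):

```sql
CREATE INDEX ON documents USING hnsw(embedding) WITH (dims=3, m=8);
```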
Cosine distance index:
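A sketch, assuming the `ann_cos_ops` operator class selects cosine distance:

```sql
CREATE INDEX ON documents USING hnsw(embedding ann_cos_ops) WITH (dims=3, m=8);
```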
Manhattan distance index:
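A sketch, assuming the `ann_manhattan_ops` operator class selects Manhattan distance:

```sql
CREATE INDEX ON documents USING hnsw(embedding ann_manhattan_ops) WITH (dims=3, m=8);
```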
Tuning the HNSW algorithm
The following options allow you to tune the HNSW algorithm when creating an index:
- `dims`: Defines the number of dimensions in your vector data. This is a required parameter.
- `m`: Defines the maximum number of links or "edges" created for each node during graph construction. A higher value increases accuracy (recall) but also increases the size of the index in memory and index construction time.
- `efconstruction`: Influences the trade-off between index quality and construction speed. A higher `efconstruction` value creates a higher quality graph, enabling more accurate search results, but also means that index construction takes longer.
- `efsearch`: Influences the trade-off between query accuracy (recall) and speed. A higher `efsearch` value increases accuracy at the cost of speed. This value should be equal to or larger than `k`, which is the number of nearest neighbors you want your search to return (defined by the `LIMIT` clause in your query).
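For example, an index tuned for a large, high-dimensional dataset might be defined as follows (the option values here are illustrative, not recommendations for your workload):

```sql
CREATE INDEX ON documents USING hnsw(embedding)
WITH (dims=960, m=64, efconstruction=128, efsearch=256);
```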
In summary, to prioritize search speed over accuracy, use lower values for `efsearch`. Conversely, to prioritize accuracy over search speed, use a higher value for `efsearch`. A higher `efconstruction` value enables more accurate search results at the cost of index build time, which is also affected by the size of your dataset.
For an idea of how to configure index option values, consider the benchmark performed by Neon using the GIST-960 Euclidean dataset, which provides a training set of 1 million vectors of 960 dimensions. The benchmark was run with this series of index option values:
- `m`: 32, 64, and 128
- `efconstruction`: 64, 128, and 256
- `efsearch`: 32, 64, 128, 256, and 512
For a million rows of data, we recommend an `m` setting between 48 and 64, and as mentioned above, `efsearch` should be equal to or larger than the number of nearest neighbors you want your search to return.
To learn more about the benchmark, see Introducing pg_embedding extension for vector search in Postgres and LangChain. Try experimenting with different settings to find the ones that work best for your particular application.
How HNSW search works
HNSW is a graph-based approach to indexing multi-dimensional data. It constructs a multi-layered graph, where each layer is a subset of the previous one. During a search, the algorithm navigates through the graph from the top layer to the bottom to quickly find the nearest neighbor. An HNSW graph is known for its superior performance in terms of speed and accuracy.
The search process begins at the topmost layer of the HNSW graph. From the starting node, the algorithm navigates to the nearest neighbor in the same layer. The algorithm repeats this step until it can no longer find neighbors more similar to the query vector.
Using the found node as an entry point, the algorithm moves down to the next layer in the graph and repeats the process of navigating to the nearest neighbor. The process of navigating to the nearest neighbor and moving down a layer is repeated until the algorithm reaches the bottom layer.
In the bottom layer, the algorithm continues navigating to the nearest neighbor until it cannot find any nodes that are more similar to the query vector. The current node is then returned as the most similar node to the query vector.
The key idea behind HNSW is that by starting the search at the top layer and moving down through each layer, the algorithm can quickly navigate to the area of the graph that contains the node that is most similar to the query vector. This makes the search process much faster than if it had to search through every node in the graph.
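The layered greedy descent described above can be sketched in a few lines of Python. This is a toy illustration of the search phase over a hand-built two-layer graph, not the `pg_embedding` implementation; the node IDs, vectors, and layer structure are all invented for the example:

```python
import math

def l2(a, b):
    """Euclidean (L2) distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def greedy_search_layer(graph, vectors, entry, query):
    """Within one layer, hop to the closest neighbor until no neighbor
    is nearer to the query than the current node."""
    current = entry
    while True:
        candidates = [current] + graph.get(current, [])
        best = min(candidates, key=lambda n: l2(vectors[n], query))
        if best == current:  # no neighbor improves on the current node
            return current
        current = best

def hnsw_search(layers, vectors, entry, query):
    """Descend from the top (sparsest) layer to the bottom, using the best
    node found in each layer as the entry point for the next layer."""
    current = entry
    for graph in layers:  # layers ordered top to bottom
        current = greedy_search_layer(graph, vectors, current, query)
    return current

# Toy five-node dataset: a sparse top layer and a denser bottom layer.
vectors = {0: (0, 0), 1: (5, 5), 2: (9, 9), 3: (4, 4), 4: (6, 6)}
top = {0: [2], 2: [0]}
bottom = {0: [3], 3: [0, 1], 1: [3, 4], 4: [1, 2], 2: [4]}

nearest = hnsw_search([top, bottom], vectors, entry=0, query=(6, 6))
print(nearest)  # -> 4, the node at (6, 6)
```

Starting from node 0, the top layer quickly routes the search to node 2, and the bottom layer refines that to node 4, without ever visiting nodes 1 or 3.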
Upgrade to pg_embedding for on-disk indexes
The `pg_embedding` extension version in Neon was updated on August 3, 2023 to add support for on-disk HNSW indexes and additional distance metrics. If you installed `pg_embedding` before this date, you can upgrade to the new version (0.3.5 or higher) by following the instructions below.

The previous `pg_embedding` version (0.1.0 and earlier) creates HNSW indexes in memory, which means that indexes are recreated on the first index access after a compute restart. Also, this version only supports Euclidean (L2) distance. The new `pg_embedding` version adds support for cosine and Manhattan distance metrics.
Upgrading to the new version of `pg_embedding` requires dropping the existing `pg_embedding` extension and installing the new version. If your compute has not restarted recently, you may need to restart it to make the new extension version available for installation.
Drop the existing extension and indexes (version 0.1.0 or earlier):
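For example (where `documents_embedding_idx` is a hypothetical index name; substitute the name of your own HNSW index):

```sql
DROP INDEX documents_embedding_idx;  -- hypothetical index name
DROP EXTENSION embedding;
```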
Ensure that the new version of the extension is available for installation. The default_version should be 0.3.5 or higher.
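You can check with a standard query against the Postgres extension catalog:

```sql
SELECT name, default_version
FROM pg_available_extensions
WHERE name = 'embedding';
```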
If the default_version is not 0.3.5 or higher, restart your compute instance. Pro users can do so by temporarily setting the Auto-suspend setting to a low value, like 2 seconds, allowing the compute to restart, and then setting Auto-suspend back to its normal value. For instructions, refer to the Auto-suspend configuration details in Edit a compute endpoint.
Install the new version of the extension (version 0.3.5 or higher).
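Assuming the extension is registered under the name `embedding`:

```sql
CREATE EXTENSION embedding;
```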
You should now be able to recreate your HNSW index, which will be created on disk. For example:
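A sketch, reusing the example `documents` table from earlier in this topic:

```sql
CREATE INDEX ON documents USING hnsw(embedding) WITH (dims=3, m=8);
```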
pg_embedding extension GitHub repository
The GitHub repository for the Neon `pg_embedding` extension can be found here.
To further your understanding of HNSW, the following resources are recommended:
- Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs, Yu. A. Malkov, D. A. Yashunin
- Similarity Search, Part 4: Hierarchical Navigable Small World (HNSW)
- IVFPQ + HNSW for Billion-scale Similarity Search