Autoscaling now in Beta
A Beta version of Neon's Autoscaling feature is now available for Pro plan users in selected regions. Autoscaling allows you to specify a minimum and maximum compute size, and Neon scales compute resources up and down within those boundaries to meet workload demand. You can enable Autoscaling when creating a Neon project or by editing an existing compute endpoint.
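For API users, here is a minimal sketch of setting autoscaling boundaries on an existing compute endpoint. The `autoscaling_limit_min_cu` and `autoscaling_limit_max_cu` field names reflect the Neon API, but the project and endpoint IDs below are placeholders; verify the details against the Neon API reference before using this.

```python
import os
import requests

# Hedged sketch: update an existing compute endpoint's autoscaling limits via
# the Neon API. Project and endpoint IDs are placeholders; confirm field names
# against the current Neon API reference.
API_KEY = os.environ["NEON_API_KEY"]
PROJECT_ID = "my-project-id"       # placeholder
ENDPOINT_ID = "ep-example-123456"  # placeholder

resp = requests.patch(
    f"https://console.neon.tech/api/v2/projects/{PROJECT_ID}/endpoints/{ENDPOINT_ID}",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "endpoint": {
            # Scale between 0.25 CU (0.25 vCPU, 1 GB RAM) and 1 CU
            "autoscaling_limit_min_cu": 0.25,
            "autoscaling_limit_max_cu": 1,
        }
    },
)
resp.raise_for_status()
print(resp.json()["endpoint"])
```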
Fixes & improvements
- API: Removed the `quota_reset_at` property from the Project consumption endpoint used by Neon partners. The property did not provide useful data.
- Control Plane: Free Tier compute endpoints are now assigned 0.25 Compute Units (CU) by default, which is equal to 0.25 vCPU and 1 GB of RAM.
- Integrations: The Neon Vercel Integration no longer resets the password of the Neon role (the Neon database user). Previously, the password was reset to enable the integration to configure Vercel environment variables that require a password. Neon now stores passwords in a secure storage vault associated with the Neon project, allowing existing passwords to be retrieved.
- UI: Designating a different branch as the default branch of a Neon project is now supported. The compute endpoint associated with the default branch remains accessible if you exceed project limits, ensuring uninterrupted access to data on the default branch. You can designate a new default branch by selecting a branch from the Branches page in the Neon console and clicking Set as default.
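For completeness, a hedged sketch of performing the same change from the Neon API rather than the console. The `set_as_default` branch route is an assumption based on Neon's API naming, and the project and branch IDs are placeholders, so check the API reference before relying on it.

```python
import os
import requests

# Hedged sketch of the API equivalent of "Set as default" in the Branches UI.
# The `set_as_default` route name is an assumption; confirm it against the
# Neon API reference. Project and branch IDs below are placeholders.
API_KEY = os.environ["NEON_API_KEY"]
PROJECT_ID = "my-project-id"     # placeholder
BRANCH_ID = "br-example-123456"  # placeholder

resp = requests.post(
    f"https://console.neon.tech/api/v2/projects/{PROJECT_ID}/branches/{BRANCH_ID}/set_as_default",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
# The returned branch object is expected to be marked as the project's default.
print(resp.json()["branch"])
```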