# Release notes for CloudNativePG 1.25
History of user-visible changes in the 1.25 minor release of CloudNativePG.
For a complete list of changes, please refer to the commits on the release branch in GitHub.
## Version 1.25.1

Release Date: February 28, 2025

### Enhancements
- Introduced a startup probe for the operator to enhance reliability and prevent premature liveness probe failures during initialization. (#7008)
- Added support for using the `-r` service with the Pooler (a minimal sketch follows this list). (#6868)
- Introduced an optional `--ttl` flag for the `pgbench` plugin, enabling automatic deletion of completed jobs after a user-defined duration. (#6701)
- Marked known error messages from the Azure CSI Driver for volume snapshots as retryable, improving resilience. (#6906)
- Updated the default PostgreSQL version to 17.4 for new cluster definitions. (#6960)
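
For illustration, here is a minimal `Pooler` sketch targeting the read service, assuming `r` is now accepted by `spec.type` alongside the existing `rw` and `ro` values; the cluster name is a placeholder:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-example-r
spec:
  cluster:
    name: cluster-example  # placeholder cluster name
  instances: 1
  type: r                  # assumed value: route through the cluster's `-r` (any instance) service
  pgbouncer:
    poolMode: session
```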
### Security
- The operator image build process has been enhanced to strengthen security and transparency. Images are now signed with `cosign`, and OCI attestations are generated, incorporating the Software Bill of Materials (SBOM) and provenance data. Additionally, OCI annotations have been added to improve traceability and ensure the integrity of the images.
### Bug Fixes
- Fixed inconsistent behavior in default probe knob values when `.spec.probes` is defined, ensuring users can override all settings, including `failureThreshold`. If unspecified in the startup probe, `failureThreshold` is now correctly derived from `.spec.startupDelay / periodSeconds` (default: `10`, now overridable). The same logic applies to liveness probes via `.spec.livenessProbeTimeout` (a configuration sketch follows this list). (#6656)
- Managed service ports now take precedence over default operator-defined ports. (#6474)
- Fixed an issue where WAL metrics were unavailable after an instance restart until a configuration change was applied. (#6816)
- Fixed an issue in monolithic database import where role import was skipped if no roles were specified. (#6646)
- Added support for new metrics introduced in PgBouncer 1.24. (#6630)
- Resolved an issue where `Database`, `Publication`, and `Subscription` CRDs became stuck in `cluster resource has been deleted, skipping reconciliation` after cluster rehydration. This patch forces `status.observedGeneration` to zero, ensuring proper reconciliation. (#6607)
- Improved handling of replication-sensitive parameter reductions by ensuring timely reconciliation after primary server restarts. (#6440)
- Introduced a new `isWALArchiver` flag in the CNPG-I plugin configuration, allowing users to designate a plugin as a WAL archiver. This enables seamless migration from in-tree Barman Cloud support to the plugin while maintaining WAL archive consistency (see the sketch after this list). (#6593)
- Ensured `override.conf` is consistently included in `postgresql.conf` during replica cluster bootstrapping, preventing replication failures due to missing configuration settings. (#6808)
- Ensured `override.conf` is correctly initialized before invoking `pg_rewind` to prevent failures during primary role changes. (#6670)
- Enhanced webhook responses to return both warnings and errors when applicable, improving diagnostic accuracy. (#6579)
- Ensured the operator version is correctly reconciled. (#6496)
- Improved PostgreSQL version detection by using a more precise check of the data directory. (#6659)
- Volume Snapshot Backups:
  - Fixed an issue where unused backup connections were not properly cleaned up. (#6882)
  - Ensured the instance manager closes stale PostgreSQL connections left by failed volume snapshot backups. (#6879)
  - Prevented the operator from starting a new volume snapshot backup while another is already in progress. (#6890)
- `cnpg` plugin:
  - Restored functionality of the `promote` plugin command. (#6476)
  - Enhanced `kubectl cnpg report --logs <cluster>` to collect logs from all containers, including sidecars. (#6636)
  - Ensured `pgbench` jobs can run when a `Cluster` uses an `ImageCatalog`. (#6868)
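
To illustrate the probe override behavior fixed above, here is a hedged sketch of a `Cluster` using the `.spec.probes` stanza; the knobs mirror standard Kubernetes probe fields, and all values are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  probes:
    startup:
      periodSeconds: 10
      failureThreshold: 30  # explicit override; if omitted, derived from the startup delay / periodSeconds
    liveness:
      periodSeconds: 10
  storage:
    size: 1Gi
```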
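
And a sketch of the new `isWALArchiver` flag in a cluster's plugin configuration; treat the plugin name as a placeholder for whichever CNPG-I plugin handles your WAL archiving:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  plugins:
    - name: barman-cloud.cloudnative-pg.io  # placeholder plugin name
      isWALArchiver: true                   # designate this plugin as the WAL archiver
  storage:
    size: 1Gi
```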
### Technical Enhancements
- Added support for Kubernetes `client-gen`, enabling automated generation of Go clients for all CloudNativePG CRDs. (#6695)
## Version 1.25.0

Release Date: December 23, 2024

### Features
- **Declarative Database Management**: Introduce the `Database` Custom Resource Definition (CRD), enabling users to create and manage PostgreSQL databases declaratively within a cluster (a manifest sketch follows this list). (#5325)
- **Logical Replication Management**: Add `Publication` and `Subscription` CRDs for declarative management of PostgreSQL logical replication. These simplify replication setup and facilitate online migrations to CloudNativePG (see the sketch after this list). (#5329)
- **Experimental Support for CNPG-I**: Introducing CNPG-I (CloudNativePG Interface), a standardized framework designed to extend CloudNativePG functionality through third-party plugins and foster the growth of the CNPG ecosystem. The Barman Cloud Plugin serves as a live example, illustrating how plugins can be developed to enhance backup and recovery workflows. Although CNPG-I support is currently experimental, it offers a powerful approach to extending CloudNativePG without modifying the operator's core code, akin to PostgreSQL extensions. We welcome community feedback and contributions to shape this exciting new capability.
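
As a sketch of the declarative database feature, here is a minimal `Database` manifest; the values are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: app-database
spec:
  name: app                # name of the database to create in PostgreSQL
  owner: app               # role that will own the database
  cluster:
    name: cluster-example  # target Cluster resource
```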
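
Similarly, a hedged sketch of declarative logical replication with `Publication` and `Subscription`; the field names (`target.allTables`, `publicationName`, `externalClusterName`) are assumptions for illustration, and all values are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Publication
metadata:
  name: pub-example
spec:
  cluster:
    name: source-cluster  # cluster that publishes changes
  name: pub               # publication name in PostgreSQL
  dbname: app
  target:
    allTables: true       # publish every table in the database
---
apiVersion: postgresql.cnpg.io/v1
kind: Subscription
metadata:
  name: sub-example
spec:
  cluster:
    name: destination-cluster          # cluster that consumes changes
  name: sub                            # subscription name in PostgreSQL
  dbname: app
  publicationName: pub                 # must match the Publication above
  externalClusterName: source-cluster  # external cluster entry pointing at the source
```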
### Enhancements
- Add the `dataDurability` option to the `.spec.postgresql.synchronous` stanza, allowing users to choose between `required` (default) or `preferred` durability in synchronous replication (a sketch follows this list). (#5878)
- Enable customization of startup, liveness, and readiness probes through the `.spec.probes` stanza. (#6266)
- Support additional `pg_dump` and `pg_restore` options to enhance database import flexibility. (#6214)
- Add support for `maxConcurrentReconciles` in the CloudNativePG controller and set the default to 10, improving the operator's ability to efficiently manage larger deployments out of the box. (#5678)
- Add the `cnpg.io/userType` label to secrets generated for predefined users, specifically `superuser` and `app`. (#4392)
- Improved validation for the `spec.schedule` field in ScheduledBackups, raising warnings for potential misconfigurations. (#5396)
- `cnpg` plugin:
  - Enhance the `backup` command to support plugins. (#6045)
  - Honor the `User-Agent` header in HTTP requests with the API server. (#6153)
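
As a sketch of the new `dataDurability` option, here is a synchronous replication stanza; `method` and `number` are the existing knobs, and `preferred` relaxes durability so writes keep flowing when standbys are unavailable (values are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    synchronous:
      method: any
      number: 1
      dataDurability: preferred  # `required` (default) waits for standbys; `preferred` favors availability
  storage:
    size: 1Gi
```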
### Bug Fixes
- Ensure the former primary flushes its WAL file queue to the archive before re-synchronizing as a replica, reducing recovery times and enhancing data consistency during failovers. (#6141)
- Clean the WAL volume along with the `PGDATA` volume during bootstrap. (#6265)
- Update the operator to set the cluster phase to `Unrecoverable` when all previously generated `PersistentVolumeClaims` are missing. (#6170)
- Fix the parsing of the `synchronous_standby_names` GUC when `.spec.postgresql.synchronous.method` is set to `first`. (#5955)
- Resolved a potential race condition when patching certain conditions in CRD statuses, improving reliability in concurrent updates. (#6328)
- Correct role changes to apply at the transaction level instead of the database context. (#6064)
- Remove the `primary_slot_name` definition from the `override.conf` file on the primary to ensure it is always empty. (#6219)
- Configure libpq environment variables, including `PGHOST`, in PgBouncer pods to enable seamless access to the `pgbouncer` virtual database using `psql` from within the container. (#6247)
- Remove unnecessary updates to the Cluster status when verifying changes in the image catalog. (#6277)
- Prevent panic during recovery from an external server without proper backup configuration. (#6300)
- Resolved a key collision issue in structured logs, where the name field was inconsistently used to log two distinct values. (#6324)
- Ensure proper quoting of the `inRoles` field in SQL statements to prevent syntax errors in generated SQL during role management. (#6346)
- `cnpg` plugin:
  - Ensure the `kubectl` context is properly passed in the `psql` command. (#6257)
  - Avoid displaying the physical backups block when it is empty in the `status` command output. (#5998)
### Supported Versions
- Kubernetes: 1.32, 1.31, 1.30, and 1.29
- PostgreSQL: 17, 16, 15, 14, and 13
- Default image: PostgreSQL 17.2
- Officially dropped support for PostgreSQL 12
- PostgreSQL 13 support ends on November 12, 2025