# Release notes for CloudNativePG 1.26

History of user-visible changes in the 1.26 minor release of CloudNativePG.

For a complete list of changes, please refer to the commits on the release branch in GitHub.

## Version 1.26.0-rc1

**Release date:** Mar 28, 2025
### Important Changes

- **CloudNativePG is now officially a CNCF project**: CloudNativePG has been accepted into the Cloud Native Computing Foundation (CNCF), marking a significant milestone in its evolution. As part of this transition, the project is now governed under CloudNativePG, a Series of LF Projects, LLC, ensuring long-term sustainability and community-driven innovation. (#7203)

- **Deprecation of Native Barman Cloud Support**: Native support for Barman Cloud backups and recovery is now deprecated and will be fully removed in CloudNativePG 1.28.0. Users must begin migrating their existing clusters to the new Barman Cloud Plugin to ensure a smooth transition. (#6876)

- **End of Support for Barman 3.4 and Earlier**: CloudNativePG no longer supports Barman versions 3.4 and earlier, including the capability detection framework. Users running older operand versions (from before April 2023) must update their operand before upgrading the operator to avoid compatibility issues. (#7220)

- **Hibernation Command Changes**: The `hibernate on` and `hibernate off` commands in the `cnpg` plugin for `kubectl` now serve as shortcuts for declarative hibernation (see the sketch after this list). The previous imperative approach has been removed in favor of this method. Additionally, the `hibernate status` command has been removed, as its functionality is now covered by the standard `status` command. **Warning**: Do not upgrade to version 1.26 of both the plugin and the operator unless you are prepared to migrate to the declarative hibernation method. (#7155)
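For reference, a minimal sketch of declarative hibernation (the `cluster-example` name is illustrative); the `cnpg.io/hibernation` annotation on the `Cluster` resource is what the `hibernate on` and `hibernate off` shortcuts now manage:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # Set to "on" to hibernate the cluster, "off" to resume it;
    # `kubectl cnpg hibernate on cluster-example` is now a shortcut for this.
    cnpg.io/hibernation: "on"
spec:
  instances: 3
  storage:
    size: 1Gi
```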
### Features

- **Declarative Offline In-Place Major Upgrades of PostgreSQL**: Introduced support for offline in-place major upgrades when a new operand container image with a higher PostgreSQL major version is applied to a cluster. During the upgrade, all cluster pods are shut down to ensure data consistency. A new job is created to validate upgrade conditions, run `pg_upgrade`, and set up new directories for `PGDATA`, WAL files, and tablespaces as needed. Once the upgrade is complete, replicas are re-created. Failed upgrades can be rolled back declaratively (see the first sketch after this list). (#6664)

- **Improved Startup and Readiness Probes for Replicas**: Enhanced support for Kubernetes startup and readiness probes in PostgreSQL instances, providing greater control over replicas based on the streaming lag (see the second sketch after this list). (#6623)

- **Declarative Management of Extensions and Schemas**: Introduced the `extensions` and `schemas` stanzas in the `Database` resource to declaratively create, modify, and drop PostgreSQL extensions and schemas within a database (see the third sketch after this list). (#7062)
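First, a sketch of the offline in-place major upgrade trigger, assuming a hypothetical `cluster-example` cluster previously running a PostgreSQL 16 operand image:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  # Bumping the major version in the operand image (e.g. from a :16 tag
  # to a :17 tag) triggers the offline in-place upgrade job (pg_upgrade).
  imageName: ghcr.io/cloudnative-pg/postgresql:17
  storage:
    size: 1Gi
```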
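Second, a sketch of a streaming-lag-based readiness probe, assuming the `probes.readiness` stanza accepts `type: streaming` together with a `maximumLag` quantity; the values are illustrative:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  probes:
    readiness:
      # Mark a replica ready only while its streaming lag
      # stays below the configured threshold.
      type: streaming
      maximumLag: 32Mi
  storage:
    size: 1Gi
```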
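Third, a sketch of the new `extensions` and `schemas` stanzas in a `Database` resource, assuming each item supports an `ensure: present|absent` field; the resource names, extension, and schema are illustrative:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: cluster-example-app
spec:
  name: app
  owner: app
  cluster:
    name: cluster-example
  extensions:
    # Extension created (and kept) declaratively in the database
    - name: pgcrypto
      ensure: present
  schemas:
    # Schema created declaratively with the given owner
    - name: reporting
      owner: app
```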
### Enhancements

- Introduced the `STANDBY_TCP_USER_TIMEOUT` operator configuration setting, allowing users to specify the `tcp_user_timeout` parameter on all standby instances managed by the operator (see the configuration sketch after this list). (#7036)

- Added the `pg_extensions` metric, providing information about installed PostgreSQL extensions and their latest available versions. (#7195)

- Introduced the `DRAIN_TAINTS` operator configuration option, enabling users to customize which node taints indicate a node is being drained. This replaces the previous fixed behavior of only recognizing `node.kubernetes.io/unschedulable` as a drain signal (see the configuration sketch after this list).

- Added the `KUBERNETES_CLUSTER_DOMAIN` configuration option to the operator, allowing users to specify the domain suffix for fully qualified domain names (FQDNs) generated within the Kubernetes cluster. If not set, it defaults to `cluster.local` (see the configuration sketch after this list). (#6989)

- Added support for LZ4, XZ, and Zstandard compression methods when archiving WAL files via Barman Cloud (deprecated). (#7151)

- Implemented the `cnpg.io/validation` annotation, enabling users to disable the validation webhook on CloudNativePG-managed resources. Use with caution, as this allows unrestricted changes. (#7196)

- Added support for patching PostgreSQL instance pods using the `cnpg.io/podPatch` annotation with a JSON Patch (see the annotation sketch after this list). This may introduce discrepancies between the operator's expectations and Kubernetes behavior, so it should be used with caution. (#6323)

- Added support for collecting `pg_stat_wal` metrics in PostgreSQL 18. (#7005)

- CloudNativePG Interface (CNPG-I):
    - A plugin can now trigger instance rollouts by implementing the `EVALUATE` verb, ensuring that plugin-induced changes are properly reconciled. (#7126)
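The `STANDBY_TCP_USER_TIMEOUT`, `DRAIN_TAINTS`, and `KUBERNETES_CLUSTER_DOMAIN` settings above are operator-level configuration; a sketch assuming they are set through the operator's `cnpg-controller-manager-config` ConfigMap in the `cnpg-system` namespace, with illustrative values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cnpg-controller-manager-config
  namespace: cnpg-system
data:
  # tcp_user_timeout applied to all standby instances
  STANDBY_TCP_USER_TIMEOUT: "5s"
  # Comma-separated taint keys treated as drain signals (values are illustrative)
  DRAIN_TAINTS: "node.kubernetes.io/unschedulable,node.kubernetes.io/out-of-service"
  # Domain suffix for FQDNs generated inside the Kubernetes cluster
  KUBERNETES_CLUSTER_DOMAIN: "cluster.local"
```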
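And a sketch of the `cnpg.io/podPatch` annotation, assuming an RFC 6902 JSON Patch applied to the cluster's instance pods; the added label is illustrative (note the `~1` JSON Pointer escape for `/`):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # JSON Patch applied to every instance pod; use with caution.
    cnpg.io/podPatch: |
      [
        {
          "op": "add",
          "path": "/metadata/labels/example.com~1team",
          "value": "database"
        }
      ]
spec:
  instances: 3
  storage:
    size: 1Gi
```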
### Fixes

- Resolved a race condition that caused the operator to perform two switchovers when updating the PostgreSQL configuration. (#6991)

- Corrected the `PodMonitor` configuration by adjusting the `matchLabels` scope for the targeted pooler and cluster pods. Previously, the `matchLabels` were too broad, inadvertently inheriting labels from the cluster and leading to data collection from unintended targets. (#7063)

- Added a webhook warning for clusters with a missing unit (e.g., MB, GB) in the `shared_buffers` configuration. This will become an error in future releases. Users should update their configurations to include explicit units (e.g., `512MB` instead of `512`; see the sketch after this list). (#7160)

- Treated timeout errors during volume snapshot creation as retryable to prevent unnecessary backup failures. (#7010)

- `cnpg` plugin:
    - Ensured that the primary Pod is recreated during an imperative restart when `primaryUpdateMethod` is set to `restart`, aligning its definition with the replicas. (#7122)
- CloudNativePG Interface (CNPG-I):
    - Implemented automatic reloading of TLS certificates for plugins when they change. (#7029)
    - Ensured the operator properly closes the plugin connection when performing a backup using the plugin. (#7095, #7096)
    - Fixed an issue that prevented WALs from being archived on a former primary node when using a plugin. (#6964)
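As a reference for the `shared_buffers` warning above, a minimal sketch declaring the parameter with an explicit unit:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    parameters:
      # Explicit unit: "512MB" rather than the ambiguous "512"
      shared_buffers: "512MB"
  storage:
    size: 1Gi
```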
### Supported versions

- Kubernetes 1.32, 1.31, and 1.30
- PostgreSQL 17, 16, 15, 14, and 13
    - PostgreSQL 17.x is the default image
    - PostgreSQL 13 support ends on November 12, 2025