Release notes for CloudNativePG 1.21

History of user-visible changes in the 1.21 minor release of CloudNativePG.

For a complete list of changes, please refer to the commits on the release branch in GitHub.

Version 1.21.6

Release date: Jun 12, 2024

Warning

This is expected to be the last release in the 1.21.X series. Users are encouraged to update to a newer minor version soon.

Enhancements:

  • Enabled configuration of standby-sensitive parameters during recovery using a physical backup (#4564)

  • Enabled the configuration of the liveness probe timeout via the .spec.livenessProbeTimeout option (#4719); see the sketch after this list

  • cnpg plugin for kubectl:

    • Enhanced support for ANSI colors in the plugin by adding the --color option, which accepts always, never, and auto (default) as values (#4775)
    • The plugin is now available on Homebrew for macOS users (#4602)
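
As an illustration of the new probe setting, here is a minimal Cluster sketch; the cluster name and storage size are placeholders, and the timeout value (in seconds) is illustrative:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example        # placeholder name
spec:
  instances: 3
  # Liveness probe timeout in seconds (#4719)
  livenessProbeTimeout: 30
  storage:
    size: 1Gi
```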

Fixes:

  • Prevented fenced instances from entering an unnecessary loop and consuming all available CPU (#4625)

  • Resolved an issue where the instance manager on the primary would indefinitely wait for the instance to start after encountering a failure following a stop operation (#4434)

  • Fixed an issue where the interaction between hot_standby_feedback and managed cluster-level replication slots was preventing the autovacuum from operating correctly; this issue was causing disk space to remain occupied by dead tuples (#4811)

  • Fixed a panic in the backup controller that occurred when pod container statuses were missing (#4765)

  • Prevented unnecessary shutdown of the instance manager (#4670)

  • Prevented unnecessary reloads of PostgreSQL configuration when unchanged (#4531)

  • Prevented unnecessary reloads of the ident map by ensuring a consistent and unique method of writing its content (#4648)

  • Avoided conflicts during phase registration by patching the status of the resource instead of updating it (#4637)

  • Implemented a timeout when restarting PostgreSQL and lifting fencing (#4504)

  • Ensured that a replica cluster is restarted after promotion to properly set the archive mode (#4399)

  • Removed an unneeded concurrent keep-alive routine that was causing random failures in volume snapshot backups (#4768)

  • Ensured correct parsing of the additional rows field returned when the pgaudit.log_rows option was enabled, preventing audit logs from being incorrectly routed to the normal log stream (#4394)

  • cnpg plugin for kubectl:

    • Resolved an issue with listing PDBs using the cnpg status command (#4530)

Changes:

  • Default operand image set to PostgreSQL 16.3 (#4584)
  • Removed all RBAC requirements on namespace objects (#4753)

Version 1.21.5

Release date: Apr 24, 2024

Enhancements:

  • Users can now configure the wal_log_hints PostgreSQL parameter (#4218); see the sketch after this list
  • Fully Qualified Domain Names (FQDN) in URIs for automatically generated secrets (#4095)
  • Cleanup of instance Pods not owned by the Cluster during Cluster restore (#4141)
  • Error detection when invoking barman-cloud-wal-restore in recovery bootstrap (#4101)
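
A minimal sketch of the wal_log_hints enhancement, assuming the parameter is set like any other GUC under .spec.postgresql.parameters; the cluster name and sizes are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    parameters:
      # Now user-configurable (#4218)
      wal_log_hints: "on"
  storage:
    size: 1Gi
```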

Fixes:

  • Ensured that, before a switchover, the elected replica is in streaming replication (#4288)
  • Correctly handle parsing errors of instances' LSN when sorting them (#4283)
  • Recreate the primary Pod if there are no healthy standbys available to promote (#4132)
  • Cleanup PGDATA in case of failure of the restore job (#4151)
  • Reload certificates on configuration update (#3705)
  • cnpg plugin for kubectl:
    • Improve the arguments handling of destroy, fencing, and promote plugin commands (#4280)
    • Correctly handle the percentage of the backup progress in cnpg status (#4131)
    • Gracefully handle databases with no sequences in sync-sequences command (#4346)

Changes:

  • The Grafana dashboard now resides at https://github.com/cloudnative-pg/grafana-dashboards (#4154)

Version 1.21.4

Release date: Mar 14, 2024

Enhancements:

  • Allow customization of the wal_level GUC in PostgreSQL (#4020); see the examples after this list
  • Add the cnpg.io/skipWalArchiving annotation to disable WAL archiving when set to enabled (#4055)
  • Enrich the cnpg plugin for kubectl with the publication and subscription command groups to imperatively set up PostgreSQL native logical replication (#4052)
  • Allow customization of CERTIFICATE_DURATION and EXPIRING_CHECK_THRESHOLD for automated management of TLS certificates handled by the operator (#3686)
  • Introduce initial support for tab-completion with the cnpg plugin for kubectl (#3875)
  • Retrieve the correct architecture's binary from the corresponding catalog in the running operator image during in-place updates, enabling the operator to inject the correct binary into any Pod with a supported architecture (#3840)
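
Two illustrative sketches for the items above. First, a Cluster that customizes wal_level and skips WAL archiving through the new annotation; names and values are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # Disables WAL archiving when set to "enabled" (#4055)
    cnpg.io/skipWalArchiving: enabled
spec:
  instances: 3
  postgresql:
    parameters:
      # wal_level is now customizable (#4020)
      wal_level: logical
  storage:
    size: 1Gi
```

Second, a sketch of the certificate settings, assuming they are exposed as keys of the operator configuration ConfigMap; the ConfigMap name, namespace, and the units of the values are assumptions based on a default installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cnpg-controller-manager-config   # assumed default name
  namespace: cnpg-system                 # assumed default namespace
data:
  CERTIFICATE_DURATION: "90"        # assumed to be expressed in days
  EXPIRING_CHECK_THRESHOLD: "7"     # assumed to be expressed in days
```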

Fixes:

  • Properly synchronize PVC group labels with those on the pods, a critical aspect when all pods are deleted and the operator needs to decide which Pod to recreate first (#3930)
  • Disable wal_sender_timeout when cloning a replica to prevent timeout errors due to slow connections (#4080)
  • Ensure that volume snapshots are ready before initiating recovery bootstrap procedures, preventing an error condition where recovery with incomplete backups could enter an error loop (#3663)
  • Prevent an error loop when unsetting connection limits in managed roles (#3832)
  • Resolve a corner case in hibernation where the instance pod has been deleted, but the cluster status still has the hibernation condition set to false (#3970)
  • Correctly detect Google Cloud capabilities for Barman Cloud (#3931)

Security:

  • Use Role instead of ClusterRole for operator permissions in OLM, requiring fewer privileges when installed on a per-namespace basis (#3855, #3990)
  • Enforce fully-qualified object names in SQL queries for the PgBouncer pooler (#4080)

Changes:

  • Follow Kubernetes recommendations to switch from client-side to server-side application of manifests, requiring the --server-side option by default when installing the operator (#3729).
  • Set the default operand image to PostgreSQL 16.2 (#3823).

Version 1.21.3

Release date: Feb 2, 2024

Enhancements:

  • Tailor ephemeral volume storage in a Postgres cluster using a claim template through the ephemeralVolumeSource option (#3678); see the example after this list
  • Introduce the pgadmin4 command in the cnpg plugin for kubectl, providing a straightforward method to demonstrate connecting to a given database cluster and navigating its content in a local environment such as kind - for evaluation purposes only (#3701)
  • Allow customization of PostgreSQL's ident map file via the .spec.postgresql.pg_ident stanza, through a list of user name maps (#3534)
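
An illustrative Cluster sketch combining the ephemeral volume claim template and the new user name maps; the pg_ident entries are assumed to be raw pg_ident.conf lines, and all names and sizes are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  # Claim template for the ephemeral volume (#3678)
  ephemeralVolumeSource:
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  postgresql:
    # User name maps appended to pg_ident.conf (#3534)
    pg_ident:
      - 'mymap   /^(.*)@example\.com$   \1'
  storage:
    size: 1Gi
```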

Fixes:

  • Prevent an unrecoverable issue with pg_rewind failing due to postgresql.auto.conf being read-only on clusters where the ALTER SYSTEM SQL command is disabled - the default (#3728)
  • Reduce the risk of disk space shortage when using the import facility of the initdb bootstrap method, by disabling the durability settings in the PostgreSQL instance for the duration of the import process (#3743)
  • Avoid pod restart due to erroneous resource quantity comparisons, e.g. "1 != 1000m" (#3706)
  • Properly escape reserved characters in pgpass connection fields (#3713)
  • Prevent systematic rollout of pods due to considering zero and nil different values in .spec.projectedVolumeTemplate.sources (#3647)
  • Ensure configuration coherence by pruning from postgresql.auto.conf any options now incorporated into override.conf (#3773)

Version 1.21.2

Release date: Dec 21, 2023

Security:

  • By default, TLSv1.3 is now enforced on all PostgreSQL 12 or higher installations. Additionally, users can configure the ssl_ciphers, ssl_min_protocol_version, and ssl_max_protocol_version GUCs (#3408); see the example after this list.
  • Integration of Docker image scanning with Dockle and Snyk to enhance security measures (#3300).
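
A minimal sketch of the TLS-related GUCs, set like any other parameter under .spec.postgresql.parameters; the cipher list and protocol versions below are illustrative only:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    parameters:
      ssl_min_protocol_version: "TLSv1.2"   # illustrative
      ssl_max_protocol_version: "TLSv1.3"   # illustrative
      ssl_ciphers: "HIGH:!aNULL"            # illustrative
  storage:
    size: 1Gi
```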

Enhancements:

  • Improved reconciliation of external clusters (#3533).
  • Introduction of the ability to enable/disable the ALTER SYSTEM command (#3535); see the example after this list.
  • Support for Prometheus' dynamic relabeling through the podMonitorMetricRelabelings and podMonitorRelabelings options in the .spec.monitoring stanza of the Cluster and Pooler resources (#3075).
  • Enhanced computation of the first recoverability point and last successful backup by considering volume snapshots alongside object-store backups (#2940).
  • Elimination of the use of the PGPASSFILE environment variable when establishing a network connection to PostgreSQL (#3522).
  • Improved cnpg report plugin command by collecting a cluster's PVCs (#3357).
  • Enhancement of the cnpg status plugin command, providing information about managed roles, including alerts (#3310).
  • Introduction of Red Hat UBI 8 container images for the operator, suitable for OLM deployments.
  • Connection pooler:
    • Scaling down instances of a Pooler resource to 0 is now possible (#3517).
    • Addition of the cnpg.io/podRole label with a value of 'pooler' to every pooler deployment, differentiating them from instance pods (#3396).
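
An illustrative sketch of the ALTER SYSTEM switch and the Prometheus relabeling options; the enableAlterSystem field name and the sample relabeling rules are assumptions meant only to show where these settings live:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    # Enables/disables the ALTER SYSTEM SQL command (#3535); field name assumed
    enableAlterSystem: false
  monitoring:
    # Prometheus dynamic relabeling (#3075); sample rules only
    podMonitorMetricRelabelings:
      - sourceLabels: ["__name__"]
        regex: "cnpg_.*"
        action: keep
    podMonitorRelabelings:
      - targetLabel: environment
        replacement: staging
  storage:
    size: 1Gi
```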

Fixes:

  • Reconciliation of metadata, annotations, and labels of PodDisruptionBudget resources (#3312 and #3434).
  • Reconciliation of the metadata of the managed credential secrets (#3316).
  • Resolution of a bug in the backup snapshot code where an error reading the body would be handled as an overall error, leaving the backup process indefinitely stuck (#3321).
  • Implicit setting of online backup with the cnpg backup plugin command when either immediate-checkpoint or wait-for-archive options are requested (#3449).
  • Disabling of wal_sender_timeout when joining through pg_basebackup (#3586)
  • Reloading of secrets used by external clusters (#3565)
  • Connection pooler:
    • Ensuring the controller watches all secrets owned by a Pooler resource (#3428).
    • Reconciliation of RoleBinding for Pooler resources (#3391).
    • Reconciliation of imagePullSecret for Pooler resources (#3389).
    • Reconciliation of the service of a Pooler and addition of the required labels (#3349).
    • Extension of Pooler labels to the deployment as well, not just the pods (#3350).

Changes:

  • Default operand image set to PostgreSQL 16.1 (#3270).

Version 1.21.1

Release date: Nov 3, 2023

Enhancements:

  • Introduce support for online/hot backups with volume snapshots by using the PostgreSQL API for physical online base backups. Default configuration for hot/cold backups on a given Postgres cluster can be controlled through the online option and the onlineConfiguration stanza in .spec.backup.volumeSnapshot. Unless explicitly set, backups on volume snapshots are now taken online by default (#3102); see the sketches after this list
  • Introduce the possibility to override the above default settings on volume snapshot backup using the ScheduledBackup and Backup resources (#3208, #3226)
  • Enhance cold backup on volume snapshots by reducing the time window in which the target instance (standby or primary) is fenced, by lifting it as soon as the volume snapshots have been cut and provisioned (#3210)
  • During a recovery from volume snapshots, ensure that the provided volume snapshots are coherent by validating the existing labels and annotations
  • The backup command of the cnpg plugin for kubectl improves the volume snapshot backup experience through the --online, --immediate-checkpoint, and --wait-for-archive runtime options
  • Enhance the status command of the cnpg plugin for kubectl with progress information on active streaming base backups (#3101)
  • Allow the configuration of max_prepared_statements with the pgBouncer Pooler resource (#3174)
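
Two illustrative sketches for the items above. First, the volume snapshot backup defaults on a Cluster; the onlineConfiguration sub-fields mirror the plugin's --immediate-checkpoint and --wait-for-archive options, and their exact names are assumptions:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  backup:
    volumeSnapshot:
      className: csi-snapshot-class   # placeholder snapshot class
      # Hot/online backups are now the default; set to false for cold backups
      online: true
      onlineConfiguration:
        immediateCheckpoint: true     # assumed field name
        waitForArchive: true          # assumed field name
  storage:
    size: 1Gi
```

Second, a Pooler sketch with the newly supported max_prepared_statements parameter; names and values are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-example
spec:
  cluster:
    name: cluster-example
  instances: 1
  type: rw
  pgbouncer:
    poolMode: transaction
    parameters:
      # Now configurable through the Pooler resource (#3174)
      max_prepared_statements: "512"
```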

Fixes:

  • Suspend WAL archiving during a switchover and resume it when it is completed (#3227)
  • Ensure that the instance manager always uses synchronous_commit = local when managing the PostgreSQL cluster (#3143)
  • Custom certificates for streaming replication user through .spec.certificates.replicationTLSSecret are now working (#3209)
  • Set the cnpg.io/cluster label to the Pooler pods (#3153)
  • Reduce the number of labels in VolumeSnapshots resources and render them into more appropriate annotations (#3151)

Changes:

  • Volume snapshot backups, introduced in 1.21.0, are now online/hot by default; to take offline/cold backups, set the online option in .spec.backup.volumeSnapshot to false
  • Stop using the postgresql.auto.conf file inside PGDATA to control Postgres replication settings, and replace it with a file named override.conf (#2812)

Technical enhancements:

  • Use extended query protocol for PostgreSQL in the instance manager (#3152)

Version 1.21.0

Release date: Oct 12, 2023

Important changes from previous versions

This release contains a few changes to the default settings of CloudNativePG with the goal to improve general stability and security through predefined values. If you are upgrading from a previous version, please carefully read the "Important Changes" section below, as well as the upgrade documentation.

Features:

  • Volume Snapshot support for backup and recovery: leverage the standard Kubernetes API on Volume Snapshots to take advantage of capabilities like incremental and differential copy for both backup and recovery operations. This first step, covering cold backups from a standby, will continue in 1.22 with support for hot backups using the PostgreSQL API and tablespaces.

  • OLM installation method: introduce support for Operator Lifecycle Manager via OperatorHub.io for the latest patch version of the latest minor release through the stable channel. Many thanks to EDB for donating the bundle of their "EDB Postgres for Kubernetes" operator and adapting it for CloudNativePG.

Important Changes:

  • Change the default value of stopDelay to 1800 seconds instead of 30 seconds (#2848)
  • Introduce a new parameter, called smartShutdownTimeout, to control the window of time reserved for the smart shutdown of Postgres to complete; the general formula to compute the overall timeout to stop Postgres is max(stopDelay - smartShutdownTimeout, 30) (#2848); see the sketch after this list
  • Change the default value of startDelay to 3600 seconds instead of 30 seconds (#2847)
  • Replace the livenessProbe initial delay with a more proper Kubernetes startup probe to deal with the start of a Postgres server (#2847)
  • Change the default value of switchoverDelay to 3600 seconds instead of 40000000 seconds (#2846)
  • Disable superuser access by default for security (#2905)
  • Enable replication slots for HA by default (#2903)
  • Stop supporting the postgresql label - replaced by cnpg.io/cluster in 1.18 (#2744)
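
A minimal sketch showing the new timing defaults written out explicitly on a Cluster; the smartShutdownTimeout value is illustrative, and the comment restates the formula above:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  stopDelay: 1800          # new default (was 30 seconds)
  # Window reserved for PostgreSQL's smart shutdown (illustrative value);
  # overall stop timeout = max(stopDelay - smartShutdownTimeout, 30)
  smartShutdownTimeout: 180
  startDelay: 3600         # new default (was 30 seconds)
  switchoverDelay: 3600    # new default (was 40000000 seconds)
  storage:
    size: 1Gi
```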

Security:

  • Add a default seccompProfile to the operator deployment (#2926)

Enhancements:

  • Enable bootstrap of a replica cluster from a consistent set of volume snapshots (#2647)
  • Enable full and Point In Time recovery from a consistent set of volume snapshots (#2390)
  • Introduce the cnpg.io/coredumpFilter annotation to control the content of a core dump generated in the unlikely event of a PostgreSQL crash, by default set to exclude shared memory segments from the dump (#2733); see the example after this list
  • Allow configuring ephemeral-storage limits for the shared memory and temporary data ephemeral volumes (#2830)
  • Validate resource limits and requests through the webhook (#2663)
  • Ensure that PostgreSQL's shared_buffers are coherent with the pods' allocated memory resources (#2840)
  • Add uri and jdbc-uri fields in the credential secrets to facilitate developers when connecting their applications to the database (#2186)
  • Add a new phase, Waiting for the instances to become active, for finer control of a cluster's state while waiting for the replicas to become ready (#2612)
  • Improve detection of Pod rollout conditions through the podSpec annotation (#2243)
  • Add primary timestamp and uptime to the kubectl plugin's status command (#2953)
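
An illustrative sketch of the core dump filter annotation and the ephemeral volume size limits; the annotation value and the ephemeralVolumesSizeLimit layout are assumptions based on the names above:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
  annotations:
    # Core dump filter bitmask (#2733); illustrative value intended to
    # exclude shared memory segments from the dump
    cnpg.io/coredumpFilter: "0x31"
spec:
  instances: 3
  # Size limits for the shared memory and temporary data ephemeral volumes (#2830)
  ephemeralVolumesSizeLimit:
    shm: 256Mi
    temporaryData: 1Gi
  storage:
    size: 1Gi
```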

Fixes:

  • Ensure that the primary instance is always recreated first by prioritizing ready PVCs with a primary role (#2544)
  • Honor the cnpg.io/skipEmptyWalArchiveCheck annotation during recovery to bypass the check for an empty WAL archive (#2731)
  • Prevent a cluster from being stuck when the PostgreSQL server is down but the pod is up on the primary (#2966)
  • Avoid treating the designated primary in a replica cluster as a regular HA replica when replication slots are enabled (#2960)
  • Reconcile services every time the selectors change or when labels/annotations need to be changed (#2918)
  • Default both the owner and the database to app during recovery bootstrap (#2957)
  • Avoid write-read concurrency on cached cluster (#2884)
  • Remove empty items, deduplicate, and sort the entries in the ResourceName sections of the generated roles (#2875)
  • Ensure that the ContinuousArchiving condition is properly set to 'failed' in case of errors (#2625)
  • Make the Backup resource reconciliation cycle more resilient on interruptions by stopping only if the backup is completed or failed (#2591)
  • Reconcile PodMonitor labels and annotations (#2583)
  • Fix backup failure due to missing RBAC resourceNames on the Role object (#2956)
  • Observability:

    • Add TCP port label to default pg_stat_replication metric (#2961)
    • Fix the pg_wal_stat default metric for Prometheus (#2569)
    • Improve the pg_replication default metric for Prometheus (#2744 and #2750)
    • Use alertInstanceLabelFilter instead of alertName in the provided Grafana dashboard
    • Enforce standard_conforming_strings in metric collection (#2888)

Changes:

  • Set the default operand image to PostgreSQL 16.0
  • Fencing now uses PostgreSQL's fast shutdown instead of smart shutdown to halt an instance (#3051)
  • Rename webhooks from kb.io to cnpg.io group (#2851)
  • Replace the cnpg snapshot command with cnpg backup -m volumeSnapshot for the kubectl plugin
  • Let the cnpg hibernate plugin command use the ClusterManifestAnnotationName and PgControldataAnnotationName annotations on PVCs (#2657)
  • Add the cnpg.io/instanceRole label while deprecating the existing role label (#2915)

Technical enhancements:

  • Replace k8s-api-docgen with gen-crd-api-reference-docs to automatically build the API reference documentation (#2606)