Release notes for CloudNativePG 1.18

History of user-visible changes in the 1.18 minor release of CloudNativePG.

For a complete list of changes, please refer to the commits on the release branch on GitHub.

Version 1.18.5

Release date: June 12, 2023

Warning

Version 1.18 has reached End-of-Life (EOL). Version 1.18.5 is the last that will be released for the 1.18 minor version.

Enhancements:

  • Add the snapshot command to the cnpg plugin to create a consistent cold backup of the cluster from a standby using the Kubernetes VolumeSnapshot standard resource (#1960)
  • First implementation of recovery from a set of CSI VolumeSnapshot resources via the .spec.bootstrap.recovery.volumeSnapshot stanza (#1960) - see the recovery sketch after this list
  • Add pg_failover_slots to managed extensions (#2057)
  • Improved Grafana dashboard with updated instructions in the documentation and the quickstart guide (#1916)
  • Introduce the schemaOnly option in the import stanza, to avoid exporting and importing data when you bootstrap a new Postgres Cluster from one or more existing databases (#2234) - see the import sketch after this list
  • Add support for TopologySpreadConstraints to manage scheduling of instance pods (#2202) - see the scheduling sketch after this list
  • Add PodMonitor support to the Pooler for PgBouncer (#2034)
  • Add option to override the default Kubernetes scheduler (#2013) - see the scheduling sketch after this list
  • Allow configuration of deployment strategy of a Pooler resource (#1983)
  • Update default PostgreSQL version to 15.3 (#2022)
  • Use PgBouncer 1.19 by default (#2018)
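
The volume snapshot recovery feature (#1960) is configured through the bootstrap section of a new Cluster. Below is a minimal, illustrative sketch only: the cluster and snapshot names are placeholders, and the exact layout of the fields inside the volumeSnapshot stanza may differ in your version, so verify it against the recovery documentation before use.

    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: cluster-restored
    spec:
      instances: 3
      storage:
        size: 10Gi
      bootstrap:
        recovery:
          # bootstrap PGDATA from an existing CSI VolumeSnapshot
          volumeSnapshot:
            storage:
              name: snapshot-cluster-example    # placeholder snapshot name
              kind: VolumeSnapshot
              apiGroup: snapshot.storage.k8s.io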
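
The schemaOnly option (#2234) applies to the import stanza used when bootstrapping from an existing database. A minimal sketch, assuming a microservice-type import from a hypothetical external cluster called cluster-source:

    spec:
      bootstrap:
        initdb:
          import:
            type: microservice
            schemaOnly: true    # copy schema definitions only, skipping table data
            databases:
              - app
            source:
              externalCluster: cluster-source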
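
Pod scheduling can now be shaped with standard Kubernetes topology spread constraints (#2202) and, where needed, a non-default scheduler (#2013). In the sketch below, my-scheduler is a placeholder for a scheduler you have deployed, and the label selector relies on the cnpg.io/cluster label that the operator puts on instance pods:

    spec:
      schedulerName: my-scheduler    # optional override of the default scheduler
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              cnpg.io/cluster: cluster-example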

Technical enhancements:

  • Updated k8s kind tested versions (#2054)
  • Use separate transactions to reconcile role credentials. Before this patch, the operator would revert the synchronization of all roles if one failed (#2004)
  • Ensure fencing is removed during cluster restore (#1987)
  • Improve logging when deleting Pods (#2136)

Fixes:

  • Fix an unbound variable with the k3s engine that could prevent setup on K3s (#2157)
  • Report the correct PG version in the metrics (#2126)
  • Use the correct walStorage key in the documentation (#2140)
  • Halt reconciliation when the operator cannot connect with the instances, and provide a clear diagnostic on such occasions. This will help clarify cases where network issues obstruct normal operation of CloudNativePG (#2145, #2233, #2242)

Version 1.18.4

Release date: April 27, 2023

Warning

Version 1.18 will reach its End-of-Life (EOL) on May 27, 2023. If you haven't upgraded yet, please start planning an upgrade as soon as possible.

Important

CloudNativePG is dropping support for PostgreSQL 10, as PostgreSQL 10 reached End-of-Life (EOL) in November 2022. Versions 11 and newer are supported. Please plan your migration to PostgreSQL 15 as soon as possible. Refer to "Importing Postgres databases" for more information on offline PostgreSQL major upgrades.

Enhancements:

  • Improve the --logs option of the report command of the cnpg plugin for kubectl to also include the previous logs where available (#1811)
  • The -any service is now disabled by default (#1755)

Security:

  • Enable customization of SeccompProfile through override via a local file (#1827)

Fixes:

  • Apply the PostgreSQL configuration provided by the user during the initdb bootstrap phase, before the server is started the first time (#1858)
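
The configuration in question is the one declared under postgresql.parameters in the Cluster spec. As a sketch (names and values are illustrative), with this fix the parameters below are in place before PostgreSQL starts for the first time during the initdb bootstrap, instead of being applied afterwards:

    spec:
      bootstrap:
        initdb:
          database: app
          owner: app
      postgresql:
        parameters:
          shared_buffers: 256MB
          max_connections: "200"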

Version 1.18.3

Release date: March 20, 2023

Enhancements:

  • Extend the debug cluster's log level to the initdb job (#1503)
  • Support IPv6 and custom pg_hba for the PgBouncer pooler (#1395) - see the sketch after this list
  • Enhance observability of backups with two new metrics and additional information in the status (#1428)
  • Document API calls from the instance manager (#1641)
  • Clarify deployment name via Helm (#1505)
  • Add the psql command to the cnpg plugin for kubectl (#1668), allowing the user to start a psql session with a pod (the primary by default)
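
For the pooler enhancement above (#1395), custom rules use the same format as pg_hba.conf lines. A sketch, assuming the pg_hba list lives under the pgbouncer section of the Pooler resource, and using an illustrative IPv6 network:

    apiVersion: postgresql.cnpg.io/v1
    kind: Pooler
    metadata:
      name: pooler-example-rw
    spec:
      cluster:
        name: cluster-example
      instances: 1
      type: rw
      pgbouncer:
        poolMode: session
        pg_hba:
          # allow SSL connections from an illustrative IPv6 subnet
          - hostssl all all fd00::/8 scram-sha-256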

Technical enhancements:

  • Adopt Renovate for dependency tracking/updating (#1367, #1473)
  • Inject binaries for all supported architectures in the operator image (#1513)
  • Use the backup name to match resources in the backup object store (#1650). This leverages the --name option introduced with Barman 3.3 to make the association between backups and the object store more robust.

Fixes:

  • Prevent panic with error handling in the probes (#1716)
  • Ensure that the HTTP package and controller runtime logs are in JSON format (#1442)
  • Support adding WAL storage to an existing single-instance Cluster (#1570)
  • Various improvements to make backup code more robust (#1536, #1564, #1588, #1466, #1647)
  • Properly show WAL archiving information with the status command of the cnpg plugin (#1666)
  • Ensure nodeAffinity is applied even if AdditionalPodAffinity and AdditionalPodAntiAffinity are not set (#1663)

Version 1.18.2

Release date: Feb 14, 2023

Enhancements:

  • Introduce support for Kubernetes' projected volumes (#1269)
  • Introduce support for custom environment variables for finer control of the PostgreSQL server process (#1275) - see the sketch after this list
  • Introduce the backup command in the cnpg plugin for kubectl to issue a new base backup of the cluster (#1348)
  • Improve support for the separate WAL volume feature by enabling users to move WAL files to a dedicated volume on an existing Postgres cluster (#1066) - see the sketch after this list
  • Enhance WAL observability with additional metrics for the Prometheus exporter, including values equivalent to min_wal_size, max_wal_size, wal_keep_size, and wal_keep_segments, as well as the maximum number of WALs that can be stored in the dedicated volume (#1382)
  • Add a database comment on the streaming_replica user (#1349)
  • Document the firewall issues with webhooks on GKE (#1364)
  • Add note about postgresql.conf in recovery (#1211)
  • Add instructions on installing plugin using packages (#1357)
  • Specify Postgres versions supported by each minor release (#1355)
  • Clarify the meaning of PVC group in CloudNativePG (#1344)
  • Add an example of the DigitalOcean S3-compatible Spaces (#1289)
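
Custom environment variables (#1275) are declared directly in the cluster spec and injected into the PostgreSQL server process. A minimal sketch; the variable shown is purely an example:

    spec:
      env:
        - name: TZ    # example variable; any name/value pair works
          value: Europe/Rome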
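
Moving WAL files to a dedicated volume (#1066) amounts to adding a walStorage stanza to an existing Cluster; the sizes and storage class below are placeholders:

    spec:
      storage:
        size: 10Gi
      walStorage:
        size: 5Gi
        storageClass: standard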

Technical enhancements:

  • Added daily end-to-end smoke test for release branches (#1235)

Fixes:

  • Skip executing a CHECKPOINT as the streaming_replica user (#1408)
  • Make waitForWalArchiveWorking resilient to connection errors (#1399)
  • Ensure that the PVC roles are always consistent (#1380)
  • Permit walStorage resize when using pvcTemplate (#1315)
  • Ensure ExecCommand obeys timeout (#1242)
  • Avoid PodMonitor reconcile if Prometheus is not installed (#1238)
  • Avoid looking for PodMonitor when not needed (#1213)

Version 1.18.1

Release date: Dec 21, 2022

Important announcements:

  • Alert on the impending deprecation of postgresql as a label to identify the CNPG cluster. In the unlikely case that you are using this label, please start using the cnpg.io/cluster label instead (#1130)
  • Recognizing Armando Ruocco (@armru) as a new CloudNativePG maintainer for his consistent and impactful contributions (#1167)
  • Remove ARMv7 support (#1092)
  • Final patch release for 1.16: 1.16.5. Release 1.16 has now reached end of life.

Enhancements:

  • Customize labels and annotations for the service account: add a service account template that can be used, for example, to make authentication easier via identity management on GKE or EKS via IRSA (#1105) - see the sketch after this list
  • Add nodeAffinity support (#1182) - allows for richer scheduling options; see the sketch after this list
  • Improve compatibility with Istio: add support for Istio's quit endpoint so that jobs with Istio sidecars do not run indefinitely (#967)
  • Allow fields remapping in JSON logs: helpful for use cases where the level and ts fields might interfere with the existing logging (#843)
  • Add fio command to the kubectl-cnpg plugin (#1097)
  • Add rpm/deb package for kubectl-cnpg plugin (#1008)
  • Update default PostgreSQL version for new cluster definitions to 15.1 (#908)
  • Documentation:
    • Remove references to CNPG sandbox (#1120) - the CNPG sandbox has been deprecated in favor of instructions on monitoring in the Quickstart documentation
    • Link to the "Release updates" discussion (#1148) - the release updates discussion will become the default channel for release announcements and discussions
    • Document emeritus status for maintainers in GOVERNANCE.md (#1033) - explains how maintainers should proceed if they are not ready to continue contributing
    • Improve instructions on creating pull requests (#1132)
    • Troubleshooting emergency backup instructions (#1184)
    • Cover the Kubernetes layer in greater detail in the Architecture documentation (#1432)
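
The service account template (#1105) attaches custom metadata to the service account generated for the cluster. A sketch for EKS with IRSA; the role ARN is a placeholder:

    spec:
      serviceAccountTemplate:
        metadata:
          annotations:
            # placeholder IAM role for IRSA-based authentication
            eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/cluster-example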
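
nodeAffinity (#1182) accepts the standard Kubernetes NodeAffinity structure under the cluster's affinity stanza. A sketch that pins instances to nodes carrying a hypothetical label:

    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: workload/postgres    # hypothetical node label
                    operator: Exists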

Fixes:

  • Ensure PGDATA permissions on bootstrap are properly set to 750 (#1164)
  • Ensure the PVC containing WALs is deleted when scaling down the cluster (#1135)
  • Fix missing ApiVersion and Kind in the pgbench manifest when using --dry-run (#1088)
  • Ensure that we create secrets and services only when not found (#1145)
  • Respect configured pg-wal when restoring (#1216)
  • Filter out replicas from nodeToClusters map (#1194)

Technical enhancements:

  • Use ciclops for test summary (#1064): rely on the ciclops GitHub action to provide summaries of the E2E suite, inheriting improvements from that project
  • Add backport pull request workflow (#965) - automatically backport patches to release branches if they are so annotated
  • Make the operator log level configurable in e2e test suite (#1094)
  • Enable test execution based on labels (#951)
  • Update Go version from 1.18 to 1.19 (#1166)

Version 1.18.0

Release date: Nov 10, 2022

Features:

  • Cluster-managed physical replication slots for High Availability: automatically manages physical replication slots for each hot standby replica in the High Availability cluster, both on the primary and the standbys (#740) - see the sketch after this list
  • Postgres cluster hibernation: introduces cluster hibernation via the plugin, with a new subcommand kubectl cnpg hibernate on/off/status <cluster-name>. Hibernation destroys all the resources generated by the cluster, except the PVCs that belong to the PostgreSQL primary instance (#782)
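
Cluster-managed replication slots (#740) are controlled by the replicationSlots stanza. A minimal sketch; the prefix shown is the assumed default and the update interval is illustrative:

    spec:
      replicationSlots:
        highAvailability:
          enabled: true
          slotPrefix: _cnpg_    # assumed default prefix for managed slots
        updateInterval: 30      # illustrative synchronization interval, in seconds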

Security:

  • Add SeccompProfile to Pods and Containers (#888)

Enhancements:

  • Allow omitting the storage size in the cluster spec if there is a size request in the pvcTemplate (#914) - see the sketch after this list
  • status command for the cnpg plugin:
    • Add replication slots information (#873)
    • Clarify display for fenced clusters (#886)
    • Improve display for replica clusters (#871)
  • Documentation:
    • Improve monitoring page, providing instructions on how to evaluate the observability capabilities of CloudNativePG on a local system using Prometheus and Grafana (#968)
    • Add page on design reasons for custom controller (#918)
    • Updates to the End-to-End Test Suite page (#945)
  • New subcommands in the cnpg plugin:
    • pgbench generates a job definition executing pgbench against a cluster (#958)
    • install generates an installation manifest for the operator (#944)
  • Set PostgreSQL 15.0 as the new default version (#821)
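
With #914, the top-level storage size can be omitted as long as the pvcTemplate carries its own storage request, as in this sketch:

    spec:
      storage:
        pvcTemplate:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi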

Fixes:

  • Import a database with plpgsql functions (#974)
  • Properly find the closest backup when doing Point-in-time recovery (#949)
  • Clarify that the ScheduledBackup format does not follow Kubernetes CronJob format (#883)
  • Base the failover logic on the Postgres information reported by the instance manager, rather than on Kubernetes pod readiness, which could be stale (#890)
  • Ensure there is a WAL file to archive for every newly created cluster; the lack of one could prevent backups from working (#897)
  • Correct YAML key names for barmanObjectStore in documentation (#877)
  • Fix krew release (#866)