Release notes for CloudNativePG 1.18

History of user-visible changes in the 1.18 minor release of CloudNativePG.

For a complete list of changes, please refer to the commits on the release branch in GitHub.

Version 1.18.1

Release date: Dec 21, 2022

Important announcements:

  • Alert on the upcoming deprecation of postgresql as a label to identify the CNPG cluster. In the unlikely case you are using this label, please start using the cnpg.io/cluster label instead (#1130)
  • Recognizing Armando Ruocco (@armru) as a new CloudNativePG maintainer for his consistent and impactful contributions (#1167)
  • Remove ARMv7 support (#1092)
  • FINAL patch release for 1.16: 1.16.5. Release 1.16 reaches end of life.
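
If you have user-defined resources that select cluster pods through the deprecated postgresql label, switching them to cnpg.io/cluster is a one-line change. A minimal sketch, assuming a hypothetical cluster named cluster-example and a user-defined Service:

```yaml
# Hypothetical user-defined Service targeting a CNPG cluster's pods.
apiVersion: v1
kind: Service
metadata:
  name: cluster-example-custom
spec:
  selector:
    # Replaces the deprecated selector `postgresql: cluster-example`
    cnpg.io/cluster: cluster-example
  ports:
    - port: 5432
      targetPort: 5432
```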

Enhancements:

  • Customize labels and annotations for the service account: add a service account template that can be used, for example, to make authentication easier via identity management on GKE or EKS via IRSA (#1105)
  • Add nodeAffinity support (#1182) - allows for richer scheduling options
  • Improve compatibility with Istio: add support for Istio's quit endpoint so that jobs with Istio sidecars do not run indefinitely (#967)
  • Allow fields remapping in JSON logs: helpful for use cases where the level and ts fields might interfere with the existing logging (#843)
  • Add fio command to the kubectl-cnpg plugin (#1097)
  • Add rpm/deb package for kubectl-cnpg plugin (#1008)
  • Update default PostgreSQL version for new cluster definitions to 15.1 (#908)
  • Documentation:
    • Remove references to CNPG sandbox (#1120) - the CNPG sandbox has been deprecated in favor of instructions on monitoring in the Quickstart documentation
    • Link to the "Release updates" discussion (#1148) - the release updates discussion will become the default channel for release announcements and discussions
    • Document emeritus status for maintainers in GOVERNANCE.md (#1033) - explains how maintainers should proceed if they are not ready to continue contributing
    • Improve instructions on creating pull requests (#1132)
    • Troubleshooting emergency backup instructions (#1184)
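
The service account template mentioned above can be sketched as follows; the annotation key shown is the one used by IAM Roles for Service Accounts (IRSA) on EKS, and the cluster name and role ARN are placeholders:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  # Annotations are propagated to the service account generated for the
  # cluster, easing authentication via identity management, e.g. IRSA on EKS.
  serviceAccountTemplate:
    metadata:
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/cnpg-backup
```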

Fixes:

  • Ensure PGDATA permissions on bootstrap are properly set to 750 (#1164)
  • Ensure the PVC containing WALs is deleted when scaling down the cluster (#1135)
  • Fix missing ApiVersion and Kind in the pgbench manifest when using --dry-run (#1088)
  • Ensure that we create secrets and services only when not found (#1145)
  • Respect configured pg-wal when restoring (#1216)
  • Filter out replicas from nodeToClusters map (#1194)

Technical enhancements:

  • Use ciclops for test summary (#1064): rely on the ciclops GitHub action to provide summaries of the E2E suite, inheriting improvements from that project
  • Add backport pull request workflow (#965) - automatically backport patches to release branches if they are so annotated
  • Make the operator log level configurable in e2e test suite (#1094)
  • Enable test execution based on labels (#951)
  • Update Go version from 1.18 to 1.19 (#1166)

Version 1.18.0

Release date: Nov 10, 2022

Features:

  • Cluster-managed physical replication slots for High Availability: automatically manages physical replication slots for each hot standby replica in the High Availability cluster, both in the primary and the standby (#740)
  • Postgres cluster hibernation: introduces cluster hibernation via the plugin, with a new subcommand kubectl cnpg hibernate on/off/status <cluster-name>. Hibernation destroys all the resources generated by the cluster, except the PVCs that belong to the PostgreSQL primary instance (#782)
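
To try the replication slots feature, a minimal sketch of the cluster spec (field names as we understand the 1.18 API; the cluster name is a placeholder):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  # Let the operator manage a physical replication slot for each
  # hot standby replica, on the primary and on every standby.
  replicationSlots:
    highAvailability:
      enabled: true
```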

Security:

  • Add SeccompProfile to Pods and Containers (#888)
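
For reference, this corresponds to the standard Kubernetes securityContext setting; a sketch of what is applied, assuming the default runtime profile:

```yaml
# Sketch of the seccomp setting now applied to Pods and Containers,
# assuming the RuntimeDefault profile.
securityContext:
  seccompProfile:
    type: RuntimeDefault
```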

Enhancements:

  • Allow omitting the storage size in the cluster spec if there is a size request in the pvcTemplate (#914)
  • status command for the cnpg plugin:
    • Add replication slots information (#873)
    • Clarify display for fenced clusters (#886)
    • Improve display for replica clusters (#871)
  • Documentation:
    • Improve monitoring page, providing instructions on how to evaluate the observability capabilities of CloudNativePG on a local system using Prometheus and Grafana (#968)
    • Add page on design reasons for custom controller (#918)
    • Updates to the End-to-End Test Suite page (#945)
  • New subcommands in the cnpg plugin:
    • pgbench generates a job definition executing pgbench against a cluster (#958)
    • install generates an installation manifest for the operator (#944)
  • Set PostgreSQL 15.0 as the new default version (#821)
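
The first enhancement above can be sketched as follows: the storage size is omitted from the spec and taken from the request in pvcTemplate instead (cluster name and size are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    # No explicit storage.size: the request in pvcTemplate is used instead.
    pvcTemplate:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
```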

Fixes:

  • Import a database with plpgsql functions (#974)
  • Properly find the closest backup when doing Point-in-time recovery (#949)
  • Clarify that the ScheduledBackup format does not follow Kubernetes CronJob format (#883)
  • Base the failover logic on Postgres information from the instance manager, rather than on Kubernetes pod readiness, which could be stale (#890)
  • Ensure there is a WAL file to archive for every newly created cluster, as its absence could prevent backups from working (#897)
  • Correct YAML key names for barmanObjectStore in documentation (#877)
  • Fix krew release (#866)
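
Regarding the ScheduledBackup clarification above: the schedule uses a six-field Go cron format with a leading seconds field, not the five-field Kubernetes CronJob format. A minimal sketch, with placeholder names:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  # Six fields, seconds first: this runs every day at midnight.
  # Note this is NOT the five-field Kubernetes CronJob format.
  schedule: "0 0 0 * * *"
  cluster:
    name: cluster-example
```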