Release notes for CloudNativePG 1.16
History of user-visible changes in the 1.16 minor release of CloudNativePG.
For a complete list of changes, please refer to the commits on the release branch on GitHub.
Version 1.16.5
Release date: Dec 21, 2022
Warning
This is expected to be the last release in the 1.16.X series. Users are encouraged to update to a newer minor version soon.
Important announcements:
- Recognizing Armando Ruocco (@armru) as a new CloudNativePG maintainer for his consistent and impactful contributions (#1167)
- Remove ARMv7 support (#1092)
- FINAL patch release for 1.16: 1.16.5. Release 1.16 reaches end of life.
Enhancements:
- Add rpm/deb package for kubectl-cnpg plugin (#1008)
- Update default PostgreSQL version for new cluster definitions to 15.1 (#908)
- Documentation:
  - Remove references to CNPG sandbox (#1120) - the CNPG sandbox has been deprecated, in favor of instructions on monitoring in the Quickstart documentation
  - Link to the "Release updates" discussion (#1148) - the release updates discussion will become the default channel for release announcements and discussions
  - Document emeritus status for maintainers in GOVERNANCE.md (#1033) - explains how maintainers should proceed if they are not ready to continue contributing
  - Improve instructions on creating pull requests (#1132)
  - Troubleshooting emergency backup instructions (#1184)
Fixes:
- Ensure `PGDATA` permissions on bootstrap are properly set to 750 (#1164)
- Ensure that secrets and services are created only when they are not found (#1145)
- Respect the configured `pg-wal` when restoring (#1216)
- Filter out replicas from the `nodeToClusters` map (#1194)
Technical enhancements:
- Use `ciclops` for test summary (#1064): rely on the ciclops GitHub action to provide summaries of the E2E suite, inheriting improvements from that project
- Add backport pull request workflow (#965) - automatically backport patches to release branches if they are so annotated
- Make the operator log level configurable in e2e test suite (#1094)
- Enable test execution based on labels (#951)
- Update Go version from 1.18 to 1.19 (#1166)
Version 1.16.4
Release date: Nov 10, 2022
Security:
- Add `SeccompProfile` to Pods and Containers (#888)
Enhancements:
- `status` command for the `cnpg` plugin:
  - Clarify display for fenced clusters (#886)
  - Improve display for replica clusters (#871)
- Documentation:
  - Improve monitoring page, providing instructions on how to evaluate the observability capabilities of CloudNativePG on a local system using Prometheus and Grafana (#968)
  - Add page on design reasons for custom controller (#918)
Fixes:
- Import a database with `plpgsql` functions (#974)
- Properly find the closest backup when doing point-in-time recovery (#949)
- Clarify that the `ScheduledBackup` format does not follow the Kubernetes `CronJob` format (#883) - a sketch of the schedule syntax follows this list
- Base the failover logic on the Postgres information reported by the instance manager, rather than on Kubernetes pod readiness, which could be stale (#890)
- Ensure there is a WAL file to archive for every newly created cluster; its absence could prevent backups from working (#897)
- Correct YAML key names for `barmanObjectStore` in the documentation (#877)
- Fix `krew` release (#866)
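For reference, a minimal sketch of a `ScheduledBackup` using the six-field, seconds-first cron syntax expected by the `schedule` field (resource names are illustrative and not taken from the release notes):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example            # illustrative name
spec:
  # Six fields (seconds first): "seconds minutes hours day-of-month month day-of-week".
  # This is not the five-field Kubernetes CronJob format.
  # The example below runs every day at midnight.
  schedule: "0 0 0 * * *"
  cluster:
    name: cluster-example         # illustrative cluster name
```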
Version 1.16.3
Release date: Oct 6, 2022
Enhancements:
- Introduce `leaseDuration` and `renewDeadline` parameters in the controller manager to enhance configuration of the leader election in operator deployments (#759)
- Improve the mechanism that checks that the backup object store is empty before archiving a WAL file for the first time: a new file called `.check-empty-wal-archive` is placed in the `PGDATA` immediately after the cluster is bootstrapped and is then removed after the first WAL file is successfully archived
Security:
- Explicitly set the permissions of the instance manager binary that is copied into the `distroless/static:nonroot` container image, by using the `nonroot:nonroot` user (#754)
Fixes:
- Drop any active connection on a standby after it is promoted to primary (#737)
- Honor the `MAPPEDMETRIC` and `DURATION` metric type conversions in the native Prometheus exporter (#765)
Version 1.16.2
Release date: Sep 6, 2022
Enhancements:
- Enable configuration of low-level network TCP settings in the PgBouncer connection pooler implementation (#584)
- Make sure that the `cnpg.io/instanceName` and the `cnpg.io/podRole` labels are always present on pods and PVCs (#632 and #680) - see the sketch after this list
- Propagate the `role` label of an instance to the underlying PVC (#634)
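As a rough sketch, this is the kind of metadata you can expect on an instance's pod and PVC after this change (label values are illustrative and may vary by resource):

```yaml
# Excerpt of the metadata maintained by the operator on an instance's pod/PVC
metadata:
  labels:
    cnpg.io/instanceName: cluster-example-1   # illustrative instance name
    cnpg.io/podRole: instance                 # role of the pod within the cluster
    role: primary                             # "primary" or "replica", also propagated to the PVC
```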
Fixes:
- Use `shared_preload_libraries` when bootstrapping the new cluster's primary (#642)
- Prevent multiple in-place upgrade processes of the operator from running simultaneously by atomically checking whether another one is in progress (#655)
- Avoid using a hardcoded file name to store the newly uploaded instance manager, preventing a possible race condition during online upgrades of the operator (#660)
- Prevent a panic from happening when invoking `GetAllAccessibleDatabases` (#641)
Version 1.16.1
Release date: Aug 12, 2022
Enhancements:
- Enable the configuration of the `huge_pages` option for PostgreSQL (#456) - see the sketch after this list
- Enhance logging during promotion and demotion, after a failover or a switchover, by printing the time elapsed between the promotion request and the actual availability for writes (#371)
- Introduce the PostgreSQL cluster's timeline in the cluster status (#462)
- Add the `instanceName` and `clusterName` labels on jobs, pods, and PVCs to improve interaction with these resources (#534)
- Add instructions on how to create PostGIS clusters (#570)
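A minimal sketch of how `huge_pages` can be set through the cluster's PostgreSQL configuration, assuming the usual `postgresql.parameters` mechanism (cluster name and value are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example      # illustrative name
spec:
  instances: 3
  postgresql:
    parameters:
      huge_pages: "try"      # PostgreSQL setting: "off", "on", or "try"
  storage:
    size: 1Gi
```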
Security:
- Explicitly assign `securityContext` to the `Pooler` deployment (#485)
- Add read timeout values to the internal web servers to prevent Slowloris DDoS (#437)
Fixes:
- Use the correct delays for restarts (`stopDelay`) and for switchover (`switchoverDelay`), as they were erroneously swapped before. This is an important fix, as restarts might block indefinitely if `switchoverDelay` is not set and falls back to its default value of 40000000 seconds (#531)
- Prevent the metrics collector from causing a panic when the query returns an error (#396)
- Remove an unsafe debug message that referenced an unchecked pointer, leading in some cases to segmentation faults regardless of the log level (#491)
- Prevent a panic when fencing in case the cluster had no annotations (#512)
- Avoid updating the CRD if a TLS certificate has not changed (#501)
- Handle conflicts while injecting a certificate into the CRD (#547)
- Database import:
  - Use the `postgres` user while running `pg_restore` in database import (#411)
  - Document the requirement to explicitly set `sslmode` in the monolith import case to control SSL connections with the origin external server (#572) - see the sketch after this list
  - Fix a bug that prevented import from working when `dbname` was specified in `connectionParameters` (#569)
- Backup and recovery:
  - Correctly pass object store credentials in Google Cloud (#454)
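As a hint for the monolith import `sslmode` requirement above, a minimal sketch of an external cluster definition used as the import source (names, host, and secret are illustrative):

```yaml
externalClusters:
  - name: source-db                # illustrative name of the origin server
    connectionParameters:
      host: pg.example.com         # illustrative host
      user: postgres
      dbname: postgres
      sslmode: require             # explicitly control SSL towards the origin server
    password:
      name: source-db-superuser    # illustrative secret holding the password
      key: password
```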
Minor changes:
- Set the default operand image to PostgreSQL 15.0
Version 1.16.0
Release date: Jul 7, 2022 (minor release)
Features:
- Offline data import and major upgrades for PostgreSQL: introduce the `bootstrap.initdb.import` section to provide a way to import objects via the network from an existing PostgreSQL instance (even outside Kubernetes) into a brand new CloudNativePG cluster, using the PostgreSQL logical backup concept (`pg_dump`/`pg_restore`). The same method can be used to perform major PostgreSQL upgrades on a new cluster. The feature introduces two types of import: `microservice` (import only one database in the new cluster) and `monolith` (import the selected databases and roles from the existing instance). A sketch of a microservice import follows this list.
- Anti-affinity rules for synchronous replication based on labels: make sure that synchronous replicas run on nodes with different characteristics than the node where the primary is running, for example a different availability zone
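A minimal sketch of a microservice-type import, to show where the new `bootstrap.initdb.import` section fits in a cluster definition (all names and the host are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-import             # illustrative name of the new cluster
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      import:
        type: microservice         # or "monolith" to also import roles
        databases:
          - app                    # database(s) to import from the source
        source:
          externalCluster: source-db
  externalClusters:
    - name: source-db              # the existing PostgreSQL instance to import from
      connectionParameters:
        host: pg.example.com       # illustrative host
        user: postgres
        dbname: postgres
      password:
        name: source-db-superuser  # illustrative secret with the user's password
        key: password
```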
Enhancements:
- Improve fencing by removing the existing limitation that disables failover when one or more instances are fenced
- Enhance the automated extension management framework by checking whether an extension exists in the catalog instead of running `DROP EXTENSION IF EXISTS` unnecessarily
- Improve logging of the instance manager during switchover and failover
- Enable redefining the name of the application database, its owner, and the related secret when recovering from an object store or cloning an instance via `pg_basebackup` (previously this was only possible in the `initdb` bootstrap)
- Backup and recovery:
  - Require Barman >= 3.0.0 for future support of PostgreSQL 15
  - Enable Azure AD Workload Identity for Barman Cloud backups through the `inheritFromAzureAD` option
  - Introduce `barmanObjectStore.s3Credentials.region` to define the AWS region (`AWS_DEFAULT_REGION`) for both backup and recovery object stores
- Support for Kubernetes 1.24
Changes:
- Set the default operand image to PostgreSQL 15.0
- Use conditions from the Kubernetes API instead of relying on our own implementation for backup and WAL archiving
Fixes:
- Fix the initialization order inside the `WithActiveInstance` function that starts the CSV log pipe for the PostgreSQL server, ensuring proper logging in the cluster initialization phase. This is especially useful when bootstrap operations, such as recovery from a backup, fail (before this patch, such logs were not sent to the standard output channel and were permanently lost)
- Avoid an unnecessary switchover when a hot standby sensitive parameter is decreased and the primary has already restarted
- Properly quote role names in `ALTER ROLE` statements
- Backup and recovery:
  - Fix the algorithm detecting the closest Barman backup for PITR, which was comparing the requested recovery timestamp with the backup start time instead of the end time
  - Fix point-in-time recovery based on a transaction ID, a named restore point, or the "immediate" target by providing a new field called `backupID` in the `recoveryTarget` section - see the sketch at the end of this list
  - Fix encryption parameters when invoking the `barman-cloud-wal-archive` and `barman-cloud-backup` commands
  - Stop ignoring the `barmanObjectStore.serverName` option when recovering from a backup object store using a server name that doesn't match the current cluster name
- `cnpg` plug-in:
  - Make sure that the plug-in complies with the `-n` parameter when specified by the user
  - Fix the `status` command to sort results and remove variability in the output
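A minimal sketch of a point-in-time recovery bootstrap using the new `backupID` field together with a named restore point (identifiers are illustrative; the `source` refers to an object store defined under `externalClusters`):

```yaml
bootstrap:
  recovery:
    source: origin-cluster          # illustrative external cluster backed by an object store
    recoveryTarget:
      backupID: 20221006T120000     # illustrative ID of the base backup to start from
      targetName: before_migration  # illustrative named restore point to recover to
```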