CloudNativePG is an open source operator designed to manage PostgreSQL workloads on any supported Kubernetes cluster running in private, public, hybrid, or multi-cloud environments. CloudNativePG adheres to DevOps principles and concepts such as declarative configuration and immutable infrastructure.
It defines a new Kubernetes resource called `Cluster`, representing a PostgreSQL cluster made up of a single primary and an optional number of replicas that co-exist in a chosen Kubernetes namespace for High Availability and offloading of read-only queries.
Applications that reside in the same Kubernetes cluster can access the PostgreSQL database through a service that is solely managed by the operator, without having to worry about changes of the primary role following a failover or a switchover. Applications that reside outside the Kubernetes cluster need to configure a Service or Ingress object to expose PostgreSQL via TCP. Web applications can take advantage of the native connection pooler based on PgBouncer.
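As a sketch of this declarative approach, a minimal `Cluster` manifest could look like the following. The name and storage size are illustrative, not defaults; the operator derives the service names (such as `cluster-example-rw` for the primary) from the resource name.

```yaml
# Illustrative minimal Cluster definition; the metadata name and
# storage size are example values chosen for this sketch.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3      # one primary plus two replicas
  storage:
    size: 1Gi
```

Applying a manifest like this is all that is needed to bootstrap a cluster: the operator creates the pods, volumes, and services on your behalf.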
Based on the Operator Capability Levels model, users can expect a "Level V - Auto Pilot" set of capabilities from the CloudNativePG Operator.
Supported Kubernetes distributions
CloudNativePG 1.15 requires Kubernetes 1.21 through 1.23. For more information, please refer to the "Supported releases" page.
The CloudNativePG community maintains container images for both the operator and the operand, that is, PostgreSQL.
The CloudNativePG operator container images are distroless
and available on the
cloudnative-pg project's GitHub Container Registry.
The PostgreSQL operand container images are available for all the
PGDG supported versions of PostgreSQL,
on multiple architectures, directly from the
postgres-containers project's GitHub Container Registry.
Additionally, the Community provides images for the PostGIS extension.
CloudNativePG requires that all nodes in a Kubernetes cluster have the same CPU architecture; hybrid CPU architecture Kubernetes clusters are therefore not supported.
Main features
- Direct integration with Kubernetes API server for High Availability, without requiring an external tool
- Self-Healing capability, through:
- failover of the primary instance by promoting the most aligned replica
- automated recreation of a replica
- Planned switchover of the primary instance by promoting a selected replica
- Scale up/down capabilities
- Definition of an arbitrary number of instances (minimum 1 - one primary server)
- Definition of the read-write service, to connect your applications to the only primary server of the cluster
- Definition of the read-only service, to connect your applications to any of the instances for reading workloads
- Declarative management of PostgreSQL configuration, including certain popular Postgres extensions, through the cluster spec
- Support for Local Persistent Volumes with PVC templates
- Reuse of Persistent Volumes storage in Pods
- Rolling updates for PostgreSQL minor versions
- In-place or rolling updates for operator upgrades
- TLS connections and client certificate authentication
- Support for custom TLS certificates (including integration with cert-manager)
- Continuous backup to an object store (AWS S3 and S3-compatible, Azure Blob Storage, and Google Cloud Storage)
- Backup retention policies (based on recovery window)
- Full recovery and Point-In-Time recovery from an existing backup in an object store
- Parallel WAL archiving and restore to allow the database to keep up with WAL generation on high write systems
- Support tagging backup files uploaded to an object store to enable optional retention management at the object store layer
- Replica clusters for PostgreSQL deployments across multiple Kubernetes clusters, enabling private, public, hybrid, and multi-cloud architectures
- Support for Synchronous Replicas
- Connection pooling with PgBouncer
- Support for node affinity via `nodeSelector`
- Native customizable exporter of user defined metrics for Prometheus through the metrics port (9187)
- Standard output logging of PostgreSQL error messages in JSON format
- Automatically set `readOnlyRootFilesystem` security context for pods
- Fencing of an entire PostgreSQL cluster, or a subset of the instances
- Simple bind and search+bind LDAP client authentication
- Multi-arch format container images
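Several of the features above are themselves configured declaratively. For example, the PgBouncer-based connection pooling is driven by a separate Pooler resource. The sketch below assumes an existing cluster named `cluster-example` (a hypothetical name) and uses illustrative values for the pooler settings:

```yaml
# Illustrative Pooler resource fronting the read-write service of an
# assumed Cluster named "cluster-example"; instance count and
# PgBouncer parameters are example values.
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw
spec:
  cluster:
    name: cluster-example   # assumed name of an existing Cluster
  instances: 3
  type: rw                  # route connections to the primary
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"
```

Applications then connect to the pooler's service instead of the cluster's read-write service, gaining connection pooling without any application-side changes.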
About this guide
Follow the instructions in the "Quickstart" to test CloudNativePG on a local Kubernetes cluster using Kind or Minikube.
If you are not familiar with basic Kubernetes or PostgreSQL terminology, please consult the "Before you start" section.
Postgres, PostgreSQL and the Slonik Logo are trademarks or registered trademarks of the PostgreSQL Community Association of Canada, and used with their permission.