
Speed-up backups with pgBackRest asynchronous archiving

Backing up a database that generates a high volume of write-ahead logs (WAL) can be rather slow, because the PostgreSQL archiving process is sequential, without any parallelism or batching. In extreme cases the Operator may even consider a backup unsuccessful because of a timeout.

The pgBackRest tool used by the Operator solves this problem with its asynchronous WAL archiving feature.

Asynchronous archiving is enabled by default in the pgBackRest configuration. It requires a spool path to store transient data. The spool path may differ depending on the image and version; an example spool path is /pgdata/pgbackrest-spool.
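Since asynchronous archiving is already on by default, you normally do not need to set these options yourself. For reference only, a minimal sketch of the corresponding pgBackRest options (the spool path below is the example path mentioned above and may differ in your image):

[global]
archive-async=y
spool-path=/pgdata/pgbackrest-spool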

You can further fine-tune asynchronous archiving by setting the maximum number of parallel processes for the archive-push and archive-get commands.

Do not set the process-max value too high, because it may affect normal database operations.

Your storage configuration file may look as follows:

s3.conf
[global]
repo2-s3-key=REPLACE-WITH-AWS-ACCESS-KEY
repo2-s3-key-secret=REPLACE-WITH-AWS-SECRET-KEY
repo2-storage-verify-tls=n
repo2-s3-uri-style=path

[global:archive-get]
process-max=2

[global:archive-push]
process-max=4

No modifications are needed aside from setting these additional parameters. You can find more information about asynchronous WAL archiving in the pgBackRest official documentation and in this blog post.
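A common way to pass such a storage configuration file to the Operator is to store it in a Kubernetes Secret and reference that Secret from the cluster custom resource. A sketch of what that reference may look like follows; the Secret name cluster1-pgbackrest-secrets and the exact field path are illustrative assumptions, so check the documentation for your Operator version:

spec:
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: cluster1-pgbackrest-secrets

The Secret would contain the s3.conf file shown above as one of its data keys.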


Last update: March 18, 2026
Created: April 16, 2024