How to configure the rancher-backup operator to perform a Backup and Restore using local storage

Article Number: 000022314

Environment

Rancher v2.5+

Procedure

The recommended method to configure the rancher-backup operator is to use S3 storage. This article explains how to use local storage only when S3 is not available, such as for Rancher migration between clusters or disaster recovery when the new cluster cannot access the original S3 backups.

Important: This setup is not recommended for production use. In production, backups should be stored in a persistent external location (such as S3) to ensure they are available externally in the event of a complete cluster failure.

Backup Steps

  1. Create a hostPath PV in the Rancher local cluster using your desired local path (/backup in this example). Note that the backup will be written only to the node running the rancher-backup pod at the time of the backup.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rancher-backup
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 3Gi
  hostPath:
    path: /backup
  persistentVolumeReclaimPolicy: Retain
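Once the manifest is saved, the PV can be created and checked with kubectl. The filename pv-rancher-backup.yaml below is an assumed name for the manifest above:

```shell
# Create the PV and confirm it is reported as Available
# (pv-rancher-backup.yaml is an assumed filename for the manifest above)
kubectl apply -f pv-rancher-backup.yaml
kubectl get pv pv-rancher-backup
```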
  2. Install the rancher-backup operator, per the Rancher documentation. During installation, choose "Use an existing persistent volume" and select the PV created in Step 1 as the default storage location.
  3. Create a backup using the Rancher UI, per the rancher-backup operator documentation. Set the following options for the backup:

  • Schedule: One-Time Backup
  • Resource Set: Full Rancher backup resource set
  • Storage Location: Use the default storage location configured during installation.
  • Encryption: Store the contents of the backup unencrypted (or you can optionally configure encryption for the backup file as detailed in the documentation)

Note: You might encounter the following error when creating the backup.

Error creating backup tar gzip file: open /var/lib/backups/test-backup-3a869826-b3f6-4290-a083-78b801198d26-2026-01-22T09-45-33Z.tar.gz: permission denied

This error is due to a permission issue on the hostPath volume. The rancher-backup pod runs as UID 1000, so ensure the host directory (for example, /backup) is owned by UID 1000.

  4. When the backup shows as completed, you can copy the backup file from the host path directory (e.g. /backup) on the node running the rancher-backup pod.
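To fix the ownership, something like the following can be run on the node hosting the rancher-backup pod (assuming /backup is the hostPath used above):

```shell
# Run on the node where the rancher-backup pod is scheduled.
# UID 1000 matches the user the rancher-backup container runs as.
sudo chown -R 1000:1000 /backup
ls -ln /backup   # verify the numeric owner is now 1000
```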

Restore Steps

  1. Copy your backup file onto all the nodes in the new cluster. This ensures that the rancher-backup operator can find the file regardless of which node its pod is scheduled on.
  2. Create a hostPath PV that mounts the directory where you copied your backup (/migration-backup in this example):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: migration
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /migration-backup
    type: ""
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
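As with the backup PV, this manifest can be applied and checked before installing the chart. The filename migration-pv.yaml is an assumed name for the manifest above:

```shell
kubectl apply -f migration-pv.yaml
# STATUS should show Available until the rancher-backup chart claims the PV
kubectl get pv migration
```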
  3. Install the rancher-backup CRDs and chart by following section "1. Install the rancher-backup Helm chart" of the rancher-backup operator documentation. Note that in step "3. Install the charts", when installing the rancher-backup chart, you should edit the command to set the values persistence.enabled=true and persistence.volumeName=migration (adjusting volumeName to match the PV created in step 2). For example:

helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system --version $CHART_VERSION --set persistence.enabled=true --set persistence.volumeName=migration
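After the install, it may be worth waiting for the deployment to finish rolling out before continuing, for example:

```shell
# Blocks until the rancher-backup deployment reports ready (or the timeout expires)
kubectl -n cattle-resources-system rollout status deploy/rancher-backup --timeout=120s
```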
  4. Confirm that the rancher-backup operator successfully mounted the hostPath PV and that your backup file is present:

kubectl -n cattle-resources-system exec deploy/rancher-backup -- ls /var/lib/backups
  5. Create a Restore object. Update backupFilename to the name of the backup file copied in step 1. If the backup file is encrypted, you will need to create the encryption secret in the cluster first and reference it in the encryptionConfigSecretName field of the Restore manifest spec.

# restore-migration.yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: migration-adb5ba4a-ace3-4e53-878b-895170c9615c-2023-08-02T19-43-26Z.tar.gz
  prune: false
  6. Apply the Restore object:

kubectl apply -f restore-migration.yaml
  7. Watch the restoration logs:

kubectl logs -n cattle-resources-system --tail 100 -f -l app.kubernetes.io/instance=rancher-backup
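Alongside the logs, the Restore custom resource reports its own status, which can be inspected with kubectl:

```shell
# Shows whether the restore has completed
kubectl get restore restore-migration
# The Events and Status sections carry more detail on progress or failures
kubectl describe restore restore-migration
```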
  8. Continue with section "3. Install cert-manager" onwards of the rancher-backup operator migration documentation.