Backup and Restore Overview
Cloud Execution Environment

Contents

1   Introduction
1.1   Scope
1.2   Overview
2   Backup and Restore Solutions
2.1   CIC Domain Data Backup and Restore
2.2   Atlas Backup and Restore
2.3   Fuel Synchronization
2.4   Disaster Recovery
2.5   Exporting Fuel VM Image
2.6   Exporting CIC VM Images
3   Backup Contents
3.1   CIC Domain Data
3.2   Atlas
3.3   Fuel
3.4   Disaster Recovery
4   Backup Location
4.1   CIC Domain Data
4.2   Atlas
4.3   Fuel
4.4   Disaster Recovery
5   Retention Policy
5.1   CIC Domain Data
5.2   Atlas
5.3   Disaster Recovery
6   Backup Sizes
7   Procedure Durations
7.1   Transfer Durations
8   Storage Requirements
8.1   Storage Volume
8.2   Accessible Storage
8.3   Bandwidth
8.4   Security Policy
9   Transferring Backups Using FTPS
9.1   Remote FTPS Server for Storing Backups
9.2   Procedures
10   Backup Strategy
10.1   Recommended Failover Strategy for Fuel

1   Introduction

This document describes the backup and restore options available in the Cloud Execution Environment (CEE).

For a quick reference to the procedures, see Table 1 for a list of the respective operating instruction (OPI) documents.

Table 1    Backup and Restore Procedures in CEE

Affected CEE Component   Backup                   Restore
CIC infrastructure       CIC Domain Data Backup   CIC Domain Data Restore
Atlas                    Atlas Backup             Atlas Restore
Fuel                     Fuel Synchronization     -
CEE infrastructure       Disaster Recovery        -

1.1   Scope

The contents of the individual backup solutions are described in the respective subsections of Section 3.

The current CEE backup and restore solutions do not cover backup or restore of any of the following:

For more information on the scope of the individual procedures, refer to the relevant sections of the OPIs listed in Table 1.

1.1.1   Target Audience

This document is primarily intended for staff handling CEE, including operational personnel performing installation, upgrade, update, or maintenance activities.

1.2   Overview

The document has the following sections:

Backup and Restore Solutions: A brief description of the individual backup and restore options, their respective use cases, and hardware support. See Section 2.
Backup Contents: A detailed list of the files included in the individual backups. See Section 3.
Backup Location: The default location of the locally created backups for each backup option. See Section 4.
Retention Policy: Additional information on the retention policies applied by the relevant backup procedures. See Section 5.
Backup Sizes: Approximate sizes of the individual backups. The listed sizes are estimates and may differ depending on the size and configuration of the deployment. See Section 6.
Procedure Durations: The approximate duration of the individual backup and restore procedures. The listed durations do not include the transfer time of the backup files to or from the external storage. See Section 7.
Storage Requirements: Recommendations regarding the persistent storage used for storing the backup files outside the CEE region. See Section 8.
Transferring Backup Files: A recommended way of transferring the backup files between the CEE region and the external persistent storage using FTPS. See Section 9.
Backup Strategy: CEE environment changes and the recommended backup solutions after each environment change. See Section 10.

2   Backup and Restore Solutions

This section gives a brief description of each of the backup and restore options available in the current CEE release.

2.1   CIC Domain Data Backup and Restore

Description

CIC Domain Data Backup creates a copy of vital infrastructure data of the multi-node CIC domain. SDN configuration files are included in the backup if tightly integrated SDN is enabled on the system.

Restore is required in the following cases:

In some situations, a full restore of all backed up components is needed, while in others, a partial restore of one or more components is enough.

Note:  
CIC Domain Data Restore cannot be performed after Disaster Recovery.

The CIC domain data backup command implements a retention policy, which defines the number of backups kept. For more information on retention policies, see Section 5.

The procedures cannot be performed on a single-server deployment.

Procedures

CIC Domain Data Backup

CIC Domain Data Restore

2.2   Atlas Backup and Restore

Description

Atlas backup is used to restore the Atlas configuration to a previous state.

Restore is required in the following cases:

The Atlas backup command implements a retention policy, which defines the number of backups kept. For more information on retention policies, see Section 5.

The procedures are supported on all systems running Atlas.

Procedures

Atlas Backup

Atlas Restore

2.3   Fuel Synchronization

Description

The purpose of Fuel synchronization is to achieve redundancy by creating a replica of the Fuel VM image on another compute host.

In CEE with Fuel synchronization, two Fuel VMs are hosted on separate compute servers: an active Fuel VM is hosted on one compute server, and another passive (or cold-standby) Fuel VM is hosted on another compute server. During synchronization, the active Fuel VM is copied over to the passive Fuel VM. This is useful if either the active Fuel VM, or the compute server hosting it is not operational. In these situations, failover to the passive Fuel VM is possible. Earlier states of the Fuel VM are not preserved.

Synchronization and failover are manual procedures; automatic backup and restore are not available in this CEE release.

The procedure cannot be performed on a single-server deployment.

Procedures

Fuel Synchronization

2.4   Disaster Recovery

Description

The purpose of Disaster Recovery is to restore the CEE infrastructure after a man-made or natural disaster, such as a hurricane or flood. Disaster recovery is achieved by redeploying the CEE infrastructure based on a backup of the infrastructure configuration data from an earlier, healthy state.

After recovery, only infrastructure functions are operational.

Note:  
CIC Domain Data Restore cannot be performed after Disaster Recovery.

For redeployment to be successful, the hardware topology of the backed-up CEE infrastructure and of the CEE infrastructure to be restored must be identical with respect to the parameters described in the configuration files. These parameters include blade, networking, and cabling information. For more information, refer to Configuration File Guide.

The disaster recovery backup command implements a retention policy, which defines the number of backups kept. For more information on retention policies, see Section 5.

Procedures

Disaster Recovery

2.5   Exporting Fuel VM Image

In addition to Fuel synchronization (creating a local cold standby for Fuel), the Fuel VM image and configuration dump XML file can be exported and stored outside the CEE Region. This stored Fuel VM image and configuration file can be used for rollback purposes and for restoring the Fuel VM to a previous state.

Note:  
If the VM image is exported directly to the external storage, additional steps are necessary. For more information, see Section 9.

The overview of the procedure for creating an externally stored copy of the Fuel VM is the following:

  1. Perform Fuel Synchronization as described in the document Fuel Synchronization.
  2. Insert the forwarding rule for the routes to the external FTPS server on all vCICs as follows:
    1. Log on to one of the vCICs using SSH. For more information, refer to the CEE Connectivity User Guide.
    2. Execute the following command:

      ip netns e vrouter iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

    3. Log out of the vCIC.
  3. Add a route between the vFuel compute host and the external storage using the ip netns command on all vCICs.
    1. Execute the command:

      ip netns e vrouter iptables -I FORWARD -s <host_mgmt_ip>/32 -d <ftps_server_ip>/32 -p tcp -j ACCEPT

    2. Verify that the route is created by executing the following command:

      ip netns e vrouter iptables-save | grep <ftps_server_ip>

    For the variable <host_mgmt_ip>, use the management IP address of the compute host hosting vFuel. This information can be collected by executing the following command on the compute host:

    ifconfig br-mgmt


    The management IP address of the host is listed as inet addr: in the printout.

    An example of the printout is the following:

    root@compute-0-1:~# ifconfig br-mgmt
    br-mgmt   Link encap:Ethernet  HWaddr ca:59:75:45:4c:45
    inet addr:192.168.2.23  Bcast:0.0.0.0  Mask:255.255.255.128

  4. Log on to vFuel using SSH. For more information, refer to the CEE Connectivity User Guide.
  5. Identify the compute hosts hosting the active vFuel VM and the cold standby vFuel VM:

    [root@fuel ~]# for node in primary secondary 
    do
    ip=$(get_vfuel_info --ip --$node);
    name=$(ssh $ip hostname -s 2>&1 | grep compute);
    stat=$(ssh $ip sudo virsh list --all 2>&1 | grep fuel);
    stat=$(echo $stat | awk '{print $3 " " $4}');
    printf "%-10s | %s | %s\n" "$name" "$ip" "$stat";
    done

    An example of the printout is the following:

    compute-0-6 | 192.168.0.23 | running
    compute-0-1 | 192.168.0.20 | shut off
    


    In the printout, running identifies the compute host hosting the active vFuel VM, and shut off identifies the compute host hosting the cold standby vFuel VM. Record the IP addresses of both hosts, as this information is required in the rollback procedure.

  6. Log on to the host hosting the active vFuel VM using SSH. For more information, refer to the CEE Connectivity User Guide.
  7. On the vFuel host, retrieve the configuration dump of the vFuel VM using the virsh dumpxml command:

    virsh dumpxml fuel_master > <dump_file_name>.xml

    An example of the command is the following:

    root@compute-0-6:/var/lib/nova# virsh dumpxml fuel_master > fuel_master_compute6_running.xml
  8. Shut down the vFuel VM using the virsh shutdown command:
    virsh shutdown fuel_master

    The expected printout is the following:
    Domain fuel_master is being shutdown.

  9. Copy and transfer the dump XML file to the external storage using the curl command.

    curl -k --ftp-ssl --upload-file "<file_name>" ftp://<username>:<password>@<ftps_server_ip>//<target_path>

    The variables are the following:

    • <file_name> is the name of the dump XML file.
    • <username> and <password> are the credentials to the FTPS server.
    • <ftps_server_ip> is the IP address of the external FTPS server used for storing the CEE component backups.
    • <target_path> is the path on the FTPS server to the directory for storing the CEE component backup files.

    An example of the command is the following:

    root@compute-0-6:/var/lib/nova# curl -k --ftp-ssl --upload-file "fuel_master_compute6_running.xml" ftp://admin:admin@10.0.0.1//rollback/
  10. Compress, copy, and transfer the vFuel VM .qcow2 image file to the external storage using the curl command:

    curl -k --ftp-ssl --upload-file <(pigz -1 --stdout --keep --blocksize 4096 --processes $(xmlstarlet sel -t -v /domain/vcpu < /etc/libvirt/qemu/<fuel_xml_name>) <vfuel-img_file_name>) ftp://<username>:<password>@<ftps_server_ip>//<target_path>/<compressed_file_name>

    The variables are the following:

    • <vfuel-img_file_name> is the vFuel VM image file name.
    • <fuel_xml_name> is the corresponding configuration XML file.
    • <username> and <password> are the credentials to the FTPS server.
    • <ftps_server_ip> is the IP address of the external FTPS server used for storing the CEE component backups.
    • <target_path> is the path on the FTPS server to the directory for storing the CEE component backup files.
    • <compressed_file_name> is the file name for the compressed vFuel image. It is recommended to use the value <vfuel-img_file_name>.gz.

    An example of the command is the following:

    root@compute-0-6:/var/lib/nova# curl -k --ftp-ssl --upload-file <(pigz -1 --stdout --keep --blocksize 4096 --processes $(xmlstarlet sel -t -v /domain/vcpu < /etc/libvirt/qemu/fuel_master_compute6_running.xml) fuel_br3035.qcow2) ftp://admin:admin@10.0.0.1//rollback/fuel_br3035.qcow2.gz
  11. Start the active vFuel VM using the virsh start command:
    virsh start fuel_master

    The expected printout is the following:
    Domain fuel_master started

  12. Log out of vFuel.
  13. Remove the route between the vFuel compute host and the external storage using the ip netns command on all vCICs:

    ip netns e vrouter iptables -D FORWARD -s <host_mgmt_ip>/32 -d <ftps_server_ip>/32 -p tcp -j ACCEPT

    For the variable <host_mgmt_ip>, use the management IP address of the compute host hosting vFuel. For more information, see Step 3.

  14. After all backup files required for exporting the Fuel VM image have been transferred out of the region, delete the forwarding rule for the routes to the external FTPS server on all vCICs:
    1. Log on to one of the vCICs using SSH. For more information, refer to the CEE Connectivity User Guide.
    2. Execute the following command:

      ip netns e vrouter iptables -D FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

    3. Log out of the vCIC.

During the procedure, a Fuel failed alarm is expected to be issued. This is intended behavior, and the alarm ceases after the affected node becomes operational again.
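The management IP address lookup in Step 3 can also be scripted. The following is a sketch that extracts the inet addr: field with awk; the here-document replays the example printout from Step 3, and on a live compute host the input would instead be piped from ifconfig br-mgmt:

```shell
# Extract the management IP (the "inet addr:" field) from an
# "ifconfig br-mgmt" printout. The here-document replays the example
# output from Step 3; on a live host, pipe "ifconfig br-mgmt" instead.
mgmt_ip=$(awk '/inet addr:/ { sub(/.*inet addr:/, ""); print $1 }' <<'EOF'
br-mgmt   Link encap:Ethernet  HWaddr ca:59:75:45:4c:45
inet addr:192.168.2.23  Bcast:0.0.0.0  Mask:255.255.255.128
EOF
)
echo "$mgmt_ip"   # → 192.168.2.23
```

The sub() call strips everything up to the inet addr: label, so the extraction also works if the printout has leading whitespace.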

2.6   Exporting CIC VM Images

In addition to CIC domain data backup, the CIC VM images, configuration files, and XML templates can be exported and stored outside the CEE Region. These stored CIC VM images can be used for rollback purposes and for restoring the entire vCIC quorum to an earlier state.

Note:  
The size of the exported CIC VMs can exceed 500 GB depending on the deployment. The transfer using FTPS as described in Section 9 can take 8 hours for three vCICs with a total size of 494 GB, as the pigz tool is used for compression during the file transfer.

If the VM images are exported directly to the external storage, additional steps are necessary. For more information, see Section 9.


In this section, the three vCICs are referred to as vCIC1, vCIC2, and vCIC3. The procedure below describes exporting CIC VM images from vCIC1. The procedure must be repeated for every vCIC that needs to be exported, using the corresponding values for the variables.

The overview of the procedure for creating an externally stored copy of one of the CIC VMs is the following:

  1. Perform CIC domain data backup as described in the document CIC Domain Data Backup.
  2. Insert the forwarding rule for the routes to the external FTPS server on all vCICs as described in Step 2 in Section 2.5.
  3. On vCIC2 and vCIC3, add a route between the vCIC1 compute host and the external storage using the ip netns command.
    1. Execute the following command:

      ip netns e vrouter iptables -I FORWARD -s <host_mgmt_ip>/32 -d <ftps_server_ip>/32 -p tcp -j ACCEPT

    2. Verify that the route is created by executing the following command:

      ip netns e vrouter iptables-save | grep <ftps_server_ip>

    For the variable <host_mgmt_ip>, use the management IP address of the compute host hosting vCIC1. This information can be collected by executing the following command on the compute host:

    ifconfig br-mgmt


    The management IP address of the host is listed as inet addr: in the printout.

    An example of the printout is the following:

    root@compute-0-1:~# ifconfig br-mgmt
    br-mgmt   Link encap:Ethernet  HWaddr ca:59:75:45:4c:45
    inet addr:192.168.2.23  Bcast:0.0.0.0  Mask:255.255.255.128

  4. Log on to vFuel using SSH. For more information, refer to the CEE Connectivity User Guide.
  5. Identify the hosts hosting the vCICs by executing the fuel node command.
  6. On the compute host hosting vCIC1, shut down the CIC VM using the virsh shutdown command:
    virsh shutdown <cic_vm_name>

    The expected printout is the following:
    Domain <cic_vm_name> is being shutdown

  7. Copy and transfer the dump XML file and the XML template one by one to the external storage using the curl command.

    curl -k --ftp-ssl --upload-file "<file_name>" ftp://<username>:<password>@<ftps_server_ip>//<target_path>

    The variables are the following:

    • <file_name> is the filename of one of the following:
      • The corresponding <cic_name>_vm.xml configuration XML file
      • The corresponding template_<cic_name>_vm.xml template file
    • <username> and <password> are the credentials to the FTPS server.
    • <ftps_server_ip> is the IP address of the external FTPS server used for storing the CEE component backups.
    • <target_path> is the path on the FTPS server to the directory for storing the CEE component backup files.

    An example of the commands is the following:

    root@compute-0-6:/var/lib/nova# curl -k --ftp-ssl --upload-file "cic-1_vm.xml" ftp://admin:admin@10.0.0.1//rollback/
    root@compute-0-6:/var/lib/nova# curl -k --ftp-ssl --upload-file "template_cic-1_vm.xml" ftp://admin:admin@10.0.0.1//rollback/
  8. Compress, copy, and transfer the CIC VM .img image file to the external storage using the curl command:

    curl -k --ftp-ssl --upload-file <(pigz -1 --stdout --keep --blocksize 4096 --processes $(xmlstarlet sel -t -v /domain/vcpu < /etc/libvirt/qemu/<cic_name>_vm.xml) <vcic-img_file_name>) ftp://<username>:<password>@<ftps_server_ip>//<target_path>/<compressed_file_name>

    The variables are the following:

    • <vcic-img_file_name> is the vCIC VM image file name.
    • <cic_name>_vm.xml is the corresponding configuration XML file.
    • <username> and <password> are the credentials to the FTPS server.
    • <ftps_server_ip> is the IP address of the external FTPS server used for storing the CEE component backups.
    • <target_path> is the path on the FTPS server to the directory for storing the CEE component backup files.
    • <compressed_file_name> is the file name for the compressed vCIC image. It is recommended to use the value <vcic-img_file_name>.gz.

    An example of the command is the following:

    root@compute-0-6:/var/lib/nova# curl -k --ftp-ssl --upload-file <(pigz -1 --stdout --keep --blocksize 4096 --processes $(xmlstarlet sel -t -v /domain/vcpu < /etc/libvirt/qemu/cic-1_vm.xml) cic-1_vm.img) ftp://admin:admin@10.0.0.1//rollback/cic-1_vm.img.gz
  9. Start the CIC VM using the virsh start command:
    virsh start <cic_vm_name>

    The expected printout is the following:
    Domain <cic_vm_name> started

  10. As the forwarding rule is deleted when the vCIC is restarted, insert the forwarding rule for the routes to the external storage again:

    ip netns e vrouter iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

  11. Log out of the vCIC.
  12. Perform a health check as described in the Health Check Procedure.
  13. Remove the route between the vCIC1 compute host and the external storage from vCIC2 and vCIC3 using the ip netns command:

    ip netns e vrouter iptables -D FORWARD -s <host_mgmt_ip>/32 -d <ftps_server_ip>/32 -p tcp -j ACCEPT

    For the variable <host_mgmt_ip>, use the management IP address of the compute host hosting vCIC1. For more information, see Step 3.

  14. Repeat the procedure from Step 6 on the remaining vCICs.
    • For vCIC2, the route must be added and removed on vCIC1 and vCIC3.
    • For vCIC3, the route must be added and removed on vCIC1 and vCIC2.
  15. After all backup files required for exporting the CIC VM images have been transferred out of the region, delete the forwarding rule for the routes to the external FTPS server on all vCICs:
    1. Log on to one of the vCICs using SSH. For more information, refer to the CEE Connectivity User Guide.
    2. Execute the following command:

      ip netns e vrouter iptables -D FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT

    3. Log out of the vCIC.

During the procedure, CIC failed alarms are expected to be issued. This is intended behavior and the alarms cease after the affected nodes become operational again.

3   Backup Contents

This section describes the data contained in the individual backup options.

3.1   CIC Domain Data

CIC Domain Data Backup contains the following:

3.2   Atlas

The following information is stored as Atlas backup:

Note:  
The Atlas backup does not store time zone and Atlas GUI settings information.

3.3   Fuel

After synchronization, the cold standby Fuel VM is identical to the active Fuel VM.

3.4   Disaster Recovery

The following files from the Fuel VM are required for a successful restoration of CEE infrastructure and are preconfigured to be included in the backup:

Note:  
ScaleIO operations executed through the Meta Data Manager (MDM) CLI or the ScaleIO GUI are not propagated back to the config.yaml, therefore such changes are not included in the backup.

In addition, the following files can optionally be added to the backup:

4   Backup Location

This section describes the root location of the created local backups, and the naming rules for the backup directories.

Note:  
It is strongly recommended to export the backups regularly to an external, permanent storage solution. The requirements for the external storage are described in Section 8. The recommended procedure for transferring the backups out of the CEE region is described in Section 9.

4.1   CIC Domain Data

The local backup root is at /var/lib/glance/backup on each vCIC. For every backup, a separate directory is created in the backup root. The directory is named cic_data_backup.<x>, where x=0 is the latest backup created, and the highest number is the oldest backup, according to the retention policy.

In Example 1, the retention policy is three.

Example 1   List of Local Backup Directories, CIC Domain Data Backup

cic_data_backup.0 (newest)
cic_data_backup.1
cic_data_backup.2 (oldest)

One copy of all backups is stored on each virtual Cloud Infrastructure Controller (vCIC). After the local backup is created, it is synchronized to the other vCICs (referred to as remote backup).

It is highly recommended to regularly export the entire backup root directory from one vCIC to a remote location, such as an external backup solution. Glance images are not included in the CIC domain backup and are expected to be kept as part of an external backup.

4.2   Atlas

The backup root is at /var/archives/ on Atlas. For every backup, a directory is created in the backup root. The directory is named atlas_backup.<date>, where <date> is a Unix timestamp: the number of seconds elapsed since 1970-01-01 00:00:00 UTC.
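To translate the <date> suffix of a backup directory into a readable timestamp, GNU date can be used (a sketch, assuming GNU coreutils date is available on the node; the suffix value below is only an example):

```shell
# Convert an epoch suffix such as atlas_backup.1600000000 into a
# readable UTC date (GNU date syntax; the value is illustrative).
suffix=1600000000
date -u -d "@$suffix" '+%Y-%m-%d %H:%M:%S UTC'   # → 2020-09-13 12:26:40 UTC
```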

The user must ensure that the backup files are stored externally in a persistent storage location, in case Atlas becomes inaccessible or corrupted.

4.3   Fuel

A replica of the active Fuel VM image is created during Fuel synchronization. The active and the cold standby Fuel VMs are located on separate compute hosts. To identify the compute host hosting the cold standby Fuel VM, refer to the relevant section of the document Fuel Synchronization.

4.4   Disaster Recovery

The backup root is at /var/disaster_backup on Fuel. For every backup, a directory is created underneath the backup root. The directory is named Disaster_backup.<x>, where x=0 is the latest backup created, and the highest number is the oldest backup, according to the retention policy.

In Example 2, the retention policy is three.

Example 2   List of Local Backup Directories, Disaster Recovery Backup

Disaster_backup.0 (newest)
Disaster_backup.1
Disaster_backup.2 (oldest)

For more information on retention policies, see Section 5.

The .tgz backup file must be copied or moved to storage outside the CEE region, at a separate location.

For the storage requirements of the recovery backup files, refer to Disaster Recovery.

5   Retention Policy

Some of the backup solutions implement a retention policy, which defines the maximum number of backups retained before they are automatically deleted.

5.1   CIC Domain Data

The maximum number of backups retained is defined in the retain cic_data_backup variable in /etc/rsnapshot-cee.conf on every vCIC. The default number of retained backups is three.

Note:  
The retention policy is always the same on all vCICs.

Upon creating a backup, the existing backups are modified as follows:

  1. The oldest backup in cic_data_backup.2 is deleted.
  2. cic_data_backup.1 is renamed to cic_data_backup.2.
  3. cic_data_backup.0 is renamed to cic_data_backup.1.
  4. The new backup is named cic_data_backup.0.
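The rotation above can be sketched in shell as follows (illustrative only; the actual backup command performs the rotation itself, and the scratch directory below is a hypothetical stand-in for /var/lib/glance/backup):

```shell
# Sketch of the retention rotation with the default retention of three.
# A scratch directory stands in for the real backup root.
backup_root=/tmp/cic_rotation_demo                  # hypothetical demo path
mkdir -p "$backup_root/cic_data_backup.0" \
         "$backup_root/cic_data_backup.1" \
         "$backup_root/cic_data_backup.2"           # pre-existing backups
rm -rf "$backup_root/cic_data_backup.2"             # 1. delete the oldest
mv "$backup_root/cic_data_backup.1" "$backup_root/cic_data_backup.2"   # 2.
mv "$backup_root/cic_data_backup.0" "$backup_root/cic_data_backup.1"   # 3.
mkdir "$backup_root/cic_data_backup.0"              # 4. slot for the new backup
ls "$backup_root"
```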

The retention policy can be changed as described in the relevant section of the document CIC Domain Data Backup.

5.2   Atlas

The maximum number of backups retained is defined in the BM_ARCHIVE_COUNT variable in /etc/backup-manager.conf. The default number of retained backups is 10.

5.3   Disaster Recovery

The maximum number of backups retained is defined in the retain Disaster-backup variable in /etc/disaster-backup.conf on vFuel. The default number of retained backups is three.

Upon creating a backup, the existing backups are modified as follows:

  1. The oldest backup in Disaster_backup.2 is deleted.
  2. Disaster_backup.1 is renamed to Disaster_backup.2.
  3. Disaster_backup.0 is renamed to Disaster_backup.1.
  4. The new backup is named Disaster_backup.0.

The retention policy can be changed as described in the relevant section of the document Disaster Recovery.

6   Backup Sizes

CEE backup files have the following storage volume requirements:

Procedure                              Approximate Size
Disaster Recovery backup               configuration files: 300 KB; installation media (tarball): 4.5 GB
CIC Domain Data backup                 5 MB without tightly integrated SDN (1); 700 MB with tightly integrated SDN (1)
Atlas backup                           under 1 GB
Fuel VM image (exported, compressed)   up to 45 GB, depending on configuration
CIC VM image (exported, compressed)    can exceed 250 GB

(1)  The estimated size is valid for a backup taken after CEE deployment.


7   Procedure Durations

The approximate duration of the individual procedures is the following:

Procedure                  Approximate Duration
CIC Domain Data Backup     2 min without SDN; 20 min with SDN
CIC Domain Data Restore    40 min
Atlas Backup               2-10 min, depending on the size of the backup
Atlas Restore              2-10 min, depending on the size of the backup
Fuel Synchronization       30 min
Disaster Recovery backup   20 min

7.1   Transfer Durations

Based on the backup sizes listed in Section 6, the approximate transfer time of the backup files to or from the FTPS server using the procedures described in Section 9 can be calculated based on the following table:

Backup file                Transfer Time
Disaster Recovery backup   under 2 min
CIC Domain Data backup     under 2 min
Atlas backup               2 min
Fuel image (exported)      7-8 min (1)
vCIC image (exported)      27-40 min per vCIC (2)

(1)  Calculated for an average vFuel VM size of 65 GB.
(2)  Calculated for a vCIC size of 500 GB.
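The footnoted figures can be cross-checked with simple arithmetic: transfer time is roughly the image size divided by the sustained throughput. The 150 MB/s rate below is an assumption chosen to match footnote (1), not a measured value:

```shell
# 65 GB Fuel image at an assumed sustained rate of 150 MB/s:
# minutes = size_gb * 1024 / rate_mb_per_s / 60
awk -v gb=65 -v rate=150 'BEGIN { printf "%.1f min\n", gb * 1024 / rate / 60 }'
# → 7.4 min, consistent with the 7-8 min range in footnote (1)
```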


8   Storage Requirements

This section describes requirements and recommendations regarding the external storage used for storing the CEE component backup files.

8.1   Storage Volume

The minimum required size of the external storage can be calculated from the size of the backups to be stored on it. For more information on the size of backup files, see Section 6.
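As an illustration, adding up the upper-end figures from Section 6 for one copy of each backup type and three compressed CIC images gives a ballpark minimum. The figures are the document's estimates, actual sizes vary per deployment, and retained older copies multiply the per-backup sizes:

```shell
# DR media (4.5 GB) + CIC domain data (0.7 GB) + Atlas (1 GB)
# + compressed Fuel image (45 GB) + three compressed CIC images (250 GB each)
awk 'BEGIN { printf "%.1f GB\n", 4.5 + 0.7 + 1 + 45 + 3 * 250 }'   # → 801.2 GB
```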

8.2   Accessible Storage

Storage used for the backup of the CEE infrastructure must be accessible, as backup files need to be transferred to and from the CEE environment.

8.3   Bandwidth

The network bandwidth required for efficient data transfer, and the resulting transfer time to and from the backup storage, can be estimated from the size of the files to be backed up. For more information, see Section 6.

8.4   Security Policy

By default, the created backup files are not encrypted, with the exception of Atlas backup files. It is the responsibility of the user to ensure the secure encryption, transfer, and handling of backup files outside the CEE region. It is recommended to implement an encryption solution for the CEE data stored on the external storage.

The user must be aware of security policies. Refer to the document Security User Guide.

9   Transferring Backups Using FTPS

This section describes an example procedure for transferring the created backup files to an external persistent storage using FTPS.

Note:  
If this procedure is used, the following conditions must be considered:
  • Due to the transfer of the backup files between the CEE region and the external server, bandwidth is affected, and transfer time must be taken into consideration.
  • During the transfer of Fuel VM image, the Fuel VM is shut off.
  • During the transfer of a CIC VM image, the exported vCIC must be shut off.

9.1   Remote FTPS Server for Storing Backups

The remote server must fulfill the following requirements:

9.2   Procedures

9.2.1   Entering and Exiting vCIC Maintenance Mode

During the transfer of a Fuel VM image, all vCICs must be in maintenance mode. During the transfer of a CIC VM image, the exported vCIC must be shut off, and the other vCICs must be in maintenance mode.

To enter maintenance mode on a vCIC, execute the following command on the vCIC:

umm on

To exit maintenance mode and enter operational mode after the transfer is complete, execute the following command on the vCIC:

umm off

9.2.2   Inserting Forwarding Rule on the vCICs

To insert a forwarding rule for the communication with the FTPS servers, execute the following command on all vCICs:

iptables -t nat -A POSTROUTING -j MASQUERADE
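The rule above masquerades all post-routing traffic on the vCIC. As an assumption not taken from the CEE OPIs, the rule can be narrowed to traffic destined for the FTPS server only. The sketch below builds and prints such a rule; 192.0.2.10 is a documentation address standing in for the real <ftps_server_ip>, and the printed command must be run as root on each vCIC.

```shell
# Hedged variant: masquerade only traffic toward the FTPS server.
# 192.0.2.10 is an example address; substitute the real <ftps_server_ip>.
ftps_server_ip="192.0.2.10"
rule="-t nat -A POSTROUTING -d ${ftps_server_ip} -j MASQUERADE"
echo "iptables ${rule}"    # run the printed command as root on each vCIC
```

The matching cleanup command replaces `-A` with `-D`, mirroring Section 9.2.6.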

9.2.3   Adding Route between Compute and FTPS Server

To add a route between a compute host and the remote FTPS server for transferring backup files to and from the region, execute the following command on the compute host:

route add <ftps_server_ip> gw <vcic_ip>

The variables are the following:

  • <ftps_server_ip> is the IP address of the external FTPS server used for storing the CEE component backups.
  • <vcic_ip> is the IP address of the vCIC used as the gateway toward the FTPS server.

9.2.4   Transferring Files between a Host and the FTPS Server

Files must be transferred to the FTPS server one by one.

To transfer a file from a compute host to the FTPS server, execute the following command on the compute host containing the file to be transferred:

curl -k --ftp-ssl --upload-file "<file_name>" ftp://<username>:<password>@<ftps_server_ip>//<target_path>

The variables are the following:

  • <file_name> is the name of the file to be transferred.
  • <username> and <password> are the credentials of the FTPS server account.
  • <ftps_server_ip> is the IP address of the external FTPS server.
  • <target_path> is the destination path on the FTPS server.
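Since files must be transferred one by one, the upload is naturally repeated in a loop. The helper below is hypothetical, not part of the CEE tooling: it builds the `curl` invocation for one file so the same call can be repeated per file; all parameter values in the example are illustrative.

```shell
# Hypothetical helper: build the upload command for one file, mirroring the
# curl invocation above. All argument values below are illustrative.
build_upload_cmd() {
    local file=$1 user=$2 pass=$3 server=$4 path=$5
    printf 'curl -k --ftp-ssl --upload-file "%s" ftp://%s:%s@%s//%s\n' \
        "$file" "$user" "$pass" "$server" "$path"
}

# Example: generate one transfer command per backup file, one by one.
for f in cic_backup.tar.gz atlas_backup.tar.gz; do
    build_upload_cmd "$f" backup secret 192.0.2.10 backups/cee
done
```

Executing the printed commands (rather than echoing them) performs the actual transfers.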

To compress and transfer an image file from a compute host to the FTPS server, execute the following command on the compute host containing the image file to be transferred:

curl -k --ftp-ssl --upload-file <(pigz -1 --stdout --keep --blocksize 4096 --processes $(xmlstarlet sel -t -v /domain/vcpu < /etc/libvirt/qemu/<xml_file_name>) <img_file_name>) ftp://<username>:<password>@<ftps_server_ip>//<target_path>/<compressed_file_name>

The variables are the following:

  • <xml_file_name> is the name of the libvirt domain XML file of the VM whose image is transferred.
  • <img_file_name> is the name of the image file to be compressed and transferred.
  • <username> and <password> are the credentials of the FTPS server account.
  • <ftps_server_ip> is the IP address of the external FTPS server.
  • <target_path> is the destination path on the FTPS server.
  • <compressed_file_name> is the name of the compressed file created on the FTPS server.
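The command above streams the pigz output through process substitution, so no intermediate compressed file is written on the compute host, and the `xmlstarlet` query sizes the pigz worker count to the VM's vCPU count. The principle can be sketched with plain `gzip`, used here only as a single-threaded stand-in because pigz may not be installed on an arbitrary host; file names are examples.

```shell
# Sketch with gzip standing in for pigz: compress an image on the fly and
# verify that decompression restores identical data.
printf 'example image data\n' > /tmp/example.img
gzip -1 --stdout /tmp/example.img > /tmp/example.img.gz
gzip --decompress --stdout /tmp/example.img.gz > /tmp/example.restored
cmp /tmp/example.img /tmp/example.restored && echo "round trip OK"
```

With pigz, `--processes` spreads the compression across the given number of CPU cores, which is why the vCPU count is read from the domain XML.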

To transfer a file from the FTPS server to a compute host, execute the following command on the destination compute host:

curl -k --ftp-ssl ftp://<username>:<password>@<ftps_server_ip>/<source_path>/<file_name> > <target_path>/<file_name>

The variables are the following:

  • <username> and <password> are the credentials of the FTPS server account.
  • <ftps_server_ip> is the IP address of the external FTPS server.
  • <source_path> is the path of the file on the FTPS server.
  • <file_name> is the name of the file to be transferred.
  • <target_path> is the destination directory on the compute host.

To transfer an image file from the FTPS server to a compute host, execute the following command on the destination compute host:

curl -k --ftp-ssl ftp://<username>:<password>@<ftps_server_ip>//<source_path>/<compressed_file_name> | pigz --stdout --decompress --processes $(xmlstarlet sel -t -v /domain/vcpu < ./<xml_file_name>) > /var/lib/nova/<img_file_name>

The variables are the following:

  • <username> and <password> are the credentials of the FTPS server account.
  • <ftps_server_ip> is the IP address of the external FTPS server.
  • <source_path> is the path of the compressed file on the FTPS server.
  • <compressed_file_name> is the name of the compressed file to be transferred.
  • <xml_file_name> is the name of the libvirt domain XML file of the VM whose image is restored.
  • <img_file_name> is the name of the restored image file.

9.2.5   Removing Route between Compute and FTPS Server

To remove a route between a compute host and the remote FTPS server, execute the following command on the compute host:

route delete <ftps_server_ip>


where <ftps_server_ip> is the IP address of the external FTPS server used for storing the CEE component backups.

9.2.6   Removing Forwarding Rule from the vCICs

After all necessary backup files have been transferred to or from the external FTPS server, delete the forwarding rule by executing the following command on all vCICs:

iptables -t nat -D POSTROUTING -j MASQUERADE

10   Backup Strategy

Based on the frequency of changes to the infrastructure, it is recommended to perform backups at regular intervals, for example, on a weekly or monthly basis.
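A scheduled job can enforce such an interval. In the crontab sketch below, the script paths are hypothetical placeholders for site-specific wrappers around the backup OPIs, not CEE commands, and the times are examples.

```shell
# Illustrative crontab entries (script paths are hypothetical):
# run the CIC domain data backup every Sunday at 02:00 and the disaster
# recovery backup on the first day of every month at 03:00.
0 2 * * 0  /usr/local/sbin/cic_domain_data_backup.sh
0 3 1 * *  /usr/local/sbin/disaster_recovery_backup.sh
```

Change-triggered backups, described below, are still required in addition to any schedule.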

The following list describes the recommended backups to perform after each change to the CEE environment:

Note:  
If any of the listed operations has occurred in your deployment since the latest backups were taken, the restore process can cause inconsistencies in the node, and the restore procedures can fail.

Deployment changes of the vCIC domain data:
  • CIC domain data backup

CEE deployment changes, such as region expansion, region scale-in, server replacement, or repairing CEE:
  • CIC domain data backup
  • Fuel Synchronization
  • Disaster Recovery Backup

Configuration changes, including any changes to the configuration files described in Section 3.4:
  • CIC domain data backup
  • Fuel Synchronization
  • Disaster Recovery Backup

Update and rollback of the CEE region:
  • CIC domain data backup
  • Fuel Synchronization
  • Disaster Recovery Backup(1)

Changes in the deployment of VNFs, including the creation of new VMs belonging to an already deployed VNF and the deployment of a new VNF:
  • CIC domain data backup

Any VM lifecycle event, including boot, start, stop, delete, migrate, evacuate, and forcemove:
  • CIC domain data backup

Automated VM lifecycle events executed by Continuous Monitoring High Availability (CM-HA) or CEE SW changes:
  • CIC domain data backup

Any modification in the OpenStack configuration database, for example, the creation or deletion of Glance images, Cinder volumes, or Neutron networks:
  • CIC domain data backup

A fault observed on the compute node running the active Fuel VM:
  • Fuel Synchronization

(1)  The CEE Software package must also be backed up for disaster recovery purposes.



10.1   Recommended Failover Strategy for Fuel

Failover to the passive standby Fuel VM must be considered in the following cases:

For the failover procedure, refer to Fuel Synchronization.