When the production center encounters a disaster, services in the production center can be switched to the same-city DR center using eReplication.
Prerequisites
- Recovery plans have been created for protected groups.
- A DR test has been performed, and the data generated during the test has been cleared from the same-city DR center.
- If the information about storage devices, hosts, or VMs is modified at the production or DR site, manually refresh the information. For details, see Refreshing Resource Information.
- The datastore name cannot contain Chinese characters.
Context
When the production center encounters an irrecoverable disaster, its services can be switched to the same-city DR center based on a remote recovery plan.
- In a cascading network, services are switched to the same-city DR center and asynchronous data replication is periodically implemented from the same-city DR center to the remote DR center.
- In a parallel network, services are switched to the same-city DR center and data is not automatically replicated from the same-city DR center to the remote DR center. Asynchronous replication must be manually configured from the same-city DR center to the remote DR center.
Procedure
- Optional: Perform configurations before the recovery.
- When the protected objects are VMware VMs, perform the following configurations:
- If no configuration is performed, the IP address of a recovered VM is the same as that in the production center. You can configure a different IP address for fault recovery on the Protected Object tab page of the recovery plan. For details, see Self-defining Startup Parameters for a Protected Object.
- When the asynchronous replication (NAS) DR solution is deployed, you need to create a share and configure permissions on DeviceManager of the storage array at the DR site. Permissions must be the same as those in the production center.
If the share is not created or the permissions are not configured, fault recovery cannot be performed.
- When the type of protected objects is FusionCompute VMs, perform the following configurations:
- If no configuration is performed, the IP address of a recovered VM is the same as that in the production center. You can change the IP address used for recovery on the Protected Object tab page of the recovery plan.
- After adding disks to or removing disks from a protected VM, promptly refresh the VM information and manually enable DR for the protected group to which the VM belongs.
- Perform a fault recovery.
If Huawei UltraPath has been installed on the Linux-based DR host, ensure that the I/O suspension time is not 0 and that all virtual devices generated by UltraPath have corresponding physical devices. For details, see the OceanStor UltraPath for Linux xxx User Guide.
- On the menu bar, select Utilization > Data Restore.
- Select the recovery plan to be used for fault recovery and choose More > Fault Recovery in the Operation column.
- Perform fault recovery based on the protected object type.
- If the type of protected objects is LUN, Local File System, Oracle, IBM DB2, Microsoft SQL Server, SAP HANA, or Microsoft Exchange Server, perform the following operations:
- Select DR Site.
- Select Host (Group) > Available DR Hosts or Host Groups. (This operation is optional when the protected object type is LUN.)
- If the storage array used at the DR site is T series V2 or later, the to-be-recovered host selected by a user can belong to only one host group on the storage array, and that host group can belong to only one mapping view. In addition, the storage LUNs used by protected applications and the corresponding secondary LUNs of the remote replication pairs must belong to one LUN group, and that LUN group must reside in the same mapping view as the host group. If the storage array version is T series V2R2, deselect Enable Inband Command to change the mapping view attribute after the mapping view is created.
- If the storage array is T series V2R2 or later, or an 18000 series array, automatic host adding and storage mapping are provided. Ensure that the storage is properly connected to the hosts' initiators so that the system can automatically create hosts, host groups, LUN groups, and mapping views on the storage. The creation principles are as follows:

- If no DR host or DR host group is selected, you need to manually map the DR LUNs to the DR host when the type of protected objects is LUN.
- Click Fault Recovery.
- In the Warning dialog box that is displayed, read the content of the dialog box carefully and select I have read and understood the consequences associated with performing this operation.
- Click OK.
- If the type of protected objects is VMware VM, perform the following steps:
- Select a recovery cluster.
VMs will be recovered to this cluster. Select DR Site, DR vCenter, and DR Cluster.
Upon the first recovery, you need to set the cluster information.
- Select a recovery network.
The network is used to access recovered VMs.
- If Production Resource and DR Resource are not paired, select a production resource and a DR resource, and click Add to add them to the mapping view as a pair.
- If Keep the mac unchange is selected, the system checks whether the MAC addresses of the production VMs conflict with those of any VMs in the DR vCenter. If there is no conflict, the recovered VMs retain their production MAC addresses; otherwise, the recovery task fails.
- If Keep the mac unchange is not selected and the mounted VM is stopped, the MAC address of the VM mounted to the vCenter remains unchanged. After the VM is started, vCenter automatically assigns a MAC address to the VM.
- Set Logical Port IP Address so that the hosts in the recovery cluster can access DR file systems over the logical port.
In scenarios where the asynchronous replication (NAS) DR solution is deployed, you need to set Access Settings.
- Stop non-critical VMs during the recovery.
In the Available VMs list, select the non-critical VMs to be stopped to release computing resources.
- Click Fault Recovery.
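The MAC-address conflict check described in the recovery-network step above can be sketched as follows. This is a minimal illustration only; eReplication performs the check automatically, and the MAC addresses below are hypothetical examples.

```shell
#!/bin/sh
# Sketch: given the MAC addresses of the production VMs and of all VMs
# already registered in the DR vCenter (example values), report any overlap.
production_macs="00:50:56:aa:00:01
00:50:56:aa:00:02"
dr_macs="00:50:56:bb:00:09
00:50:56:aa:00:02"

# Combine both lists, sort them, and keep only duplicated entries.
conflicts=$(printf '%s\n' "$production_macs" "$dr_macs" | sort | uniq -d)

if [ -n "$conflicts" ]; then
    echo "MAC conflict detected: $conflicts"   # the recovery task would fail
else
    echo "No MAC conflicts"                    # production MACs are retained
fi
```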
- If the protected object type is FusionCompute VM, perform the following steps:
- Select the cluster you want to recover.
VMs will be recovered to this cluster. Select DR Site.
- Select an available powered-on host.
An available powered-on host provides compute resources for the recovered VMs.
- Select non-critical VMs.
In the Available VMs list, select non-critical VMs you want to stop to release computing resources.
- Click Fault Recovery.
- In the Warning dialog box that is displayed, read the content of the dialog box carefully and select I have read and understood the consequences associated with performing this operation.
- Click OK.
- In the same-city DR center, check the application startup status.
After the fault recovery is complete, check whether the applications and data are normal. If an application or data encounters an exception, contact Huawei technical support.
- Note the following when checking the startup status of applications:
- If the protection policies are based on applications, check whether the applications are started successfully and data can be read and written correctly.
- If the protection policies are based on LUNs, you need to log in to the application host in the disaster recovery center, scan for disks, and start applications. Then check whether the applications are started successfully and data can be read and written correctly.
You can use self-developed scripts to scan for disks, start applications, and test applications.
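As an illustration, a self-developed disk-scan script might look like the following minimal sketch. The sysfs rescan path is the standard Linux interface; the function name and completion message are examples, and the actual application start and test commands must be supplied for your environment.

```shell
#!/bin/sh
# Minimal sketch: rescan all SCSI host adapters so newly mapped DR LUNs
# become visible, then list block devices for verification.
rescan_scsi_and_report() {
    # Trigger a full rescan ("- - -" = all channels, targets, and LUNs).
    # Writing to the scan file requires root, so failures are ignored here.
    for host in /sys/class/scsi_host/host*; do
        [ -e "$host/scan" ] || continue
        { echo "- - -" > "$host/scan"; } 2>/dev/null || true
    done
    # List the disks that are now visible (skipped if lsblk is unavailable).
    lsblk -d -o NAME,SIZE,TYPE 2>/dev/null || true
    echo "Disk rescan complete"
    # Application start and test commands would follow here, for example
    # the startup command of the database used in your environment.
}

rescan_scsi_and_report
```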
- If the environment is networked in cascading mode, use the storage array management software in the same-city DR center to create asynchronous remote replication from the same-city DR center to the remote DR center.
When creating the asynchronous replication relationship, you are advised to create new LUNs on the storage array in the remote DR center for DR. If the original LUNs are used, the original DR data will be overwritten.
- If the environment is networked in parallel mode, log in to eReplication in the same-city DR center to create DR protected groups and recovery plans from the same-city DR center to the remote DR center for the service system.
Result
After the production center becomes faulty, its services are taken over by the same-city DR center.
Copyright © Huawei Technologies Co., Ltd.