Checking the DR Environment

Before configuring the DR service, check the application environment and storage at the production and DR sites where the protected objects reside, to ensure that subsequent configuration tasks run normally.

Common Check Items

The following items must be checked and, where necessary, configured on the databases at both the production and DR ends.

  1. If the host where the openGauss database resides runs Linux, check whether disks are mapped using UDEV.

    When UDEV is used to map disks, the following restrictions apply:

    • Only disk partitions can be used for UDEV disk mapping.
    • If UDEV is used to map the cm_vote, cm_shared, dss_shared, or dss_private_0 disks, all disks in their disk groups must also be UDEV-mapped disks.

    The disks serve the following roles:

    • cm_vote: CM voting disk; one per cluster
    • cm_shared: CM data disk; one per cluster
    • dss_shared: database data disk; one or more per cluster
    • dss_private_0: database xlog disk; one per cluster
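The UDEV mapping described above is typically implemented with a rules file. The following is a minimal sketch only; the rules file path, serial number, symlink name, owner, and group are all placeholder assumptions, not values from this document, so substitute your own. Note that, per the restriction above, the rule matches a disk partition rather than a whole disk.

```
# /etc/udev/rules.d/99-opengauss-disks.rules  (hypothetical path)
# Match a disk partition by its serial number and create a stable symlink
# owned by the cluster user. The ID_SERIAL value, symlink name, owner, and
# group below are placeholders.
KERNEL=="sd?1", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}=="REPLACE_WITH_DISK_SERIAL", SYMLINK+="opengauss/cm_vote", OWNER="omm", GROUP="dbgrp", MODE="0660"
```

After editing the rules file, run the standard udev commands `udevadm control --reload-rules` and `udevadm trigger` to apply it.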

  2. Check the cluster authentication.

    When creating an openGauss protected group, you can perform cluster authentication. Authentication requires that the user name and password be the same on all nodes of the clusters at both the production and DR sites.

Check Items at the Production and DR Ends

  1. Log in to the active node of the openGauss cluster and run the following command to switch to the cluster user.

    su - omm

  2. (Optional) Run the following command to load the environment variables.

    source xxx

    xxx indicates the name of the environment variable file to be loaded.

    If environment variable separation was configured during openGauss installation, you need to load the environment variable file separately.
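As a sketch of this step, the following shell function (a hypothetical helper, not part of openGauss) loads an environment variable file and reports an error if the file is missing or unreadable:

```shell
# Hypothetical helper: source the given environment variable file, e.g. the
# separated environment file chosen during openGauss installation.
load_cluster_env() {
  env_file="$1"
  if [ -r "$env_file" ]; then
    # shellcheck disable=SC1090
    . "$env_file"
    echo "loaded $env_file"
  else
    echo "environment file $env_file not found or unreadable" >&2
    return 1
  fi
}

# Typical use (the path is an assumption; substitute your own file):
#   load_cluster_env "$HOME/env_separate"
```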

  3. Run the following command to check whether the cluster status is normal. If the value of cluster_state is Normal, the cluster is normal.

    cm_ctl query -Cvipd

    Perform the check on the production end. The following information is displayed:

    [  CMServer State   ]
    node            node_ip         instance                              state
    -----------------------------------------------------------------------------
    1  node3.38.232 8.xx.xx.232     1    /opt/omm/install/cm/cm_server Primary
    2  node4.38.233 8.xx.xx.233     2    /opt/omm/install/cm/cm_server Standby
    [ Defined Resource State ]
    node            node_ip         res_name instance  state  
    ----------------------------------------------------------
    1  node3.38.232 8.xx.xx.232     dms_res  6001      OnLine 
    2  node4.38.233 8.xx.xx.233     dms_res  6002      OnLine 
    1  node3.38.232 8.xx.xx.232     dss      20001     OnLine 
    2  node4.38.233 8.xx.xx.233     dss      20002     OnLine 
    [   Cluster State   ]
    cluster_state   : Normal
    redistributing  : No
    balanced        : Yes
    current_az      : AZ_ALL
    [  Datanode State   ]
    node            node_ip         instance                                state            | node            node_ip         instance                                state
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    1  node3.38.232 8.xx.xx.232     6001 19339  /opt/omm/install/data/dn P Primary Normal | 2  node4.38.233 8.xx.xx.233     6002 19339  /opt/omm/install/data/dn S Standby Normal

    Perform the check on the DR end. The following information is displayed:

    [  CMServer State   ]
    node            node_ip         instance                              state
    -----------------------------------------------------------------------------
    1  node5.38.234 8.xx.xx.234     1    /opt/omm/install/cm/cm_server Standby
    2  node6.38.235 8.xx.xx.235     2    /opt/omm/install/cm/cm_server Primary
    [ Defined Resource State ]
    node            node_ip         res_name instance  state  
    ----------------------------------------------------------
    1  node5.38.234 8.xx.xx.234     dms_res  6001      OnLine 
    2  node6.38.235 8.xx.xx.235     dms_res  6002      OnLine 
    1  node5.38.234 8.xx.xx.234     dss      20001     OnLine 
    2  node6.38.235 8.xx.xx.235     dss      20002     OnLine 
    [   Cluster State   ]
    cluster_state   : Normal
    redistributing  : No
    balanced        : Yes
    current_az      : AZ_ALL
    [  Datanode State   ]
    node            node_ip         instance                                state            | node            node_ip         instance                                state
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    1  node5.38.234 8.xx.xx.234     6001 19339  /opt/omm/install/data/dn P Main Standby Normal | 2  node6.38.235 8.xx.xx.235     6002 19339  /opt/omm/install/data/dn S Standby Normal
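The manual check in this step can also be scripted. The helper below is an illustrative sketch (the function name is an assumption, not an openGauss tool): it extracts cluster_state from the `cm_ctl query -Cvipd` output and succeeds only when the value is Normal.

```shell
# Hypothetical helper: pass it the output of `cm_ctl query -Cvipd`; it
# succeeds only when cluster_state is Normal.
check_cluster_state() {
  # Extract the value after "cluster_state :" and strip spaces.
  state=$(printf '%s\n' "$1" | awk -F: '/^cluster_state/ {gsub(/ /, "", $2); print $2}')
  if [ "$state" = "Normal" ]; then
    echo "cluster is Normal"
  else
    echo "cluster_state is ${state:-unknown}; investigate before continuing" >&2
    return 1
  fi
}

# Typical use (run as the cluster user on each end):
#   check_cluster_state "$(cm_ctl query -Cvipd)"
```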

Storage End

Check whether the synchronous replication status of the xlog disks is normal.

Table 1 describes the storage environment requirements of disaster recovery solutions.

Table 1 Storage environment requirements of disaster recovery solutions

Technology: Synchronous Replication (SAN)

Restriction and Requirement:

  • When DR is implemented based on storage array remote replication, consistency replication relationships must be set up between the storage arrays used by applications:
    • If storage arrays are used for protection, remote replication relationships must be set up for the LUNs used by applications, and the remote replication status must be normal.
    • If multiple LUNs are used, the related remote replication pairs must be added to one consistency group.
    • For flash storage, automatic host adding and storage mapping are provided. Ensure that the storage is properly connected to the host initiators so that the system can automatically create hosts, host groups, LUN groups, and mapping views on the storage.
    • For flash storage, the DR host can belong to only one host group, and that host group can belong to only one mapping view. In addition, the storage LUN used by protected applications and its remote replication secondary LUN must belong to one LUN group, and that LUN group must reside in the same mapping view as the host group.
  • After the application environment at the DR site is set up, the secondary LUN cannot be unmapped.
  • On DeviceManager, check whether the secondary LUN of the remote replication pair used by the DR database has been mapped to the host or host group where the standby cluster resides.

Copyright © Huawei Technologies Co., Ltd.