Creating a Replication Cluster

A replication cluster consists of replication nodes. Creating a replication cluster involves creating a replication control cluster and a replication service cluster, configuring the network, and authenticating the cluster. You must create a replication cluster on both the local and remote storage systems.

Procedure

  1. Choose Data Protection > Configuration > Replication Cluster.
  2. Click Create.

    The Create Replication Cluster wizard page is displayed.

  3. Create a replication control cluster.

    • Recommended

      You can select Recommended only when you create a replication cluster for the first time.

      1. Set the cluster name.
        • The name contains 1 to 31 characters.
        • A replication control cluster name contains only letters, digits, underscores (_), and hyphens (-).
      2. In the Selected Nodes area, view node configurations.
      3. Click Submit.
      4. Wait until the configuration is successful, and then click Next.
    • Custom

      1. Set the name of the replication control cluster.
        • The name contains 1 to 31 characters.
        • A replication control cluster name contains only letters, digits, underscores (_), and hyphens (-).
      2. In Metadata Storage Location, select the location to store the metadata.
        • The size of the metadata disk must be greater than 105 GB.
        • If a system disk partition is used as the replication service metadata disk, the system disk must be:
          • A SAS or SSD disk when the number of local storage pools is less than or equal to 4.
          • An SSD when the number of local storage pools is greater than 4.
        • If you select Physical disk, specify Disk Type and Disk Selection Mode.

          Possible options for Disk Type are SAS, SATA, SSD, SSD card or NVMe SSD, and M.2 SSD. If SSD card or NVMe SSD or M.2 SSD is selected, you cannot set Disk Selection Mode to Specify Slot.

          Disk Selection Mode can be:

          • Specify Slot: Manually enter a slot number. A slot number ranges from 0 to 60.
          • Manually Select: Manually select disks for each node.
        • If you select System disk partition, the system stores metadata in the /opt/ccdb_disk2 partition of the system disks.
      3. Select the nodes to create the replication control cluster.
        • If Metadata Storage Location is set to Physical disk, select the nodes and disks for creating the replication control cluster from Available Nodes.
        • If Metadata Storage Location is set to System disk partition, select the nodes for creating the replication control cluster from Available Nodes.
      4. Click Submit.
      5. Wait until the configuration is successful, and then click Next.
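The naming rules above (1 to 31 characters; letters, digits, underscores, and hyphens only) can be checked before you submit. A minimal bash sketch of such a local pre-check (illustrative only; the wizard performs its own validation, and the sample name below is made up):

```shell
#!/usr/bin/env bash
# Check a replication control cluster name against the documented rules:
# 1 to 31 characters, consisting only of letters, digits, underscores (_),
# and hyphens (-).
valid_cluster_name() {
  local name=$1
  (( ${#name} >= 1 && ${#name} <= 31 )) || return 1
  [[ $name =~ ^[A-Za-z0-9_-]+$ ]] || return 1
  return 0
}

# Hypothetical example name.
valid_cluster_name "rep_ctrl-01" && echo "valid" || echo "invalid"
```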

  4. Create a replication service cluster.

    1. Select the nodes to create the replication service cluster.
    2. Click Submit.
    3. Wait until the creation is successful, and then click Next.

  5. Configure the network.

    1. Configure network information for replication nodes. Table 1 describes related parameters.
      Table 1 Replication node network parameters

      • Transmission Protocol: Transmission protocol used by the replication network.
      • IP Address Range: IP address range of the replication network.
        NOTE: Plan the IP addresses properly to ensure that each replication IP address is different from other IP addresses.
      • Subnet Mask/Prefix: When an IPv4 address is used, this parameter indicates the subnet mask of the replication IPv4 address and identifies the subnet to which the IP address belongs. When an IPv6 address is used, this parameter indicates the prefix of the replication IPv6 address.
      • Port: Port number of the replication network.

    2. Click Preview.
    3. Click Submit.
    4. Wait until the creation is successful, and then click Next.
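When planning the IP Address Range, it helps to confirm that a candidate replication IP address actually falls inside the planned subnet before submitting the network configuration. A minimal bash sketch of an IPv4 CIDR containment check (illustrative only; the subnet and address below are hypothetical examples, not values from this procedure):

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Return success if the address lies within the given CIDR block.
in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  local mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  (( ( $(ip2int "$ip") & mask ) == ( $(ip2int "$net") & mask ) ))
}

# Example: check a planned replication address against its subnet.
in_cidr 192.168.50.17 192.168.50.0/24 && echo "in range" || echo "out of range"
```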

  6. Authenticate the cluster.

    Table 2 describes related parameters.
    Table 2 Cluster authentication parameters

    • Pre-Shared Key Label: Identifies the pre-shared key of a cluster.
      [Value range]
      • Contains 5 to 32 characters.
      • Can contain only letters, digits, and underscores (_), and must start with a letter.
    • Pre-Shared Key: Implements identity authentication. This parameter functions together with Pre-Shared Key Label.
      [Value range]
      • Contains 8 to 31 characters.
      • Must contain special characters (excluding <>' &") and any two of the following character types: uppercase letters, lowercase letters, and digits.
    • Confirm Pre-Shared Key: Enter the pre-shared key again for confirmation.
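The pre-shared key rules in Table 2 can be checked locally before submission. A minimal bash sketch, assuming the rules exactly as stated (8 to 31 characters; at least one special character other than <, >, ', &, and "; and at least two of the three character types); the sample keys are made up:

```shell
#!/usr/bin/env bash
# Validate a pre-shared key against the documented rules:
# 8-31 characters; at least one special character, excluding < > ' & ";
# at least two of: uppercase letters, lowercase letters, digits.
valid_psk() {
  local key=$1 classes=0 forbidden="<>'&\""
  (( ${#key} >= 8 && ${#key} <= 31 )) || return 1
  [[ $key == *[$forbidden]* ]] && return 1    # forbidden special characters
  [[ $key == *[^A-Za-z0-9]* ]] || return 1    # must contain a special character
  [[ $key == *[A-Z]* ]] && classes=$((classes + 1))
  [[ $key == *[a-z]* ]] && classes=$((classes + 1))
  [[ $key == *[0-9]* ]] && classes=$((classes + 1))
  (( classes >= 2 ))
}

valid_psk 'Replica#2024' && echo "valid" || echo "invalid"
```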

  7. Click Submit.
  8. (Optional) In the Manage Remote Device area, click Configure to add a remote device for the cluster. For details, see Adding a Remote Device.

    If different replication ports on the replication nodes are configured with IP addresses in the same network segment, you need to add policy-based routes for these ports before adding a remote device to the cluster. For details about how to add policy-based routes, see Configuring the Replication Network Routing Information in the product documentation of the corresponding version.
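The product-specific procedure is in the documentation referenced above; purely as a generic illustration of the idea, on a Linux host two interfaces sharing a network segment are typically separated with per-interface routing tables and source-based rules. The interface names, addresses, and table IDs below are hypothetical, and the commands are printed as a dry run rather than executed:

```shell
#!/usr/bin/env bash
# Illustrative only: source-based policy routing for two ports that share
# a network segment. Commands are echoed as a dry run; applying them for
# real requires root privileges and iproute2.
run() { echo "$@"; }

# Hypothetical replication ports with addresses in the same segment.
run ip route add 192.168.80.0/24 dev eth2 src 192.168.80.11 table 101
run ip route add 192.168.80.0/24 dev eth3 src 192.168.80.12 table 102

# Route traffic from each source address through its own table.
run ip rule add from 192.168.80.11 table 101
run ip rule add from 192.168.80.12 table 102
```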

  9. (Optional) If you need to configure the HyperGeoMetro&HyperGeoEC feature, perform the following operations in the Support Cross-Site DR area.

    • Only one replication cluster in the system can support cross-site DR.
    • After the function is enabled, you can click Disable to stop the replication cluster from supporting cross-site DR.
    • Cross-site DR can be configured only after an advanced license has been imported.
    1. Click Enable.
      The Enable Support for Cross-Site DR page is displayed on the right.
      • After cross-site DR is enabled:
        1. The DNS TTL will change from 120s to 5s. If the system fails to change the TTL, run the following CLI command to change it.
          change dns ttl dns_ttl=5
        2. Data write will be disabled at the passive site. If the system fails to disable data write at the passive site, run the following CLI command to disable it.
          change passive writeswitch oscPassiveWriteSwitch=false
        3. Automatic takeover will be disabled, so that after the faulty cluster is recovered, it does not automatically take over services. Otherwise, data written before the failback may fail to be read after the failback. If the system fails to disable automatic takeover, run the following CLI command to disable it.
          change cluster_status auto_switch value=off
    2. Click OK.
    3. After the function is enabled, click Close.
    4. Click Configure Now. The cross-site DR configuration page is displayed. Click Configuration Wizard to configure the cross-site DR feature. For details, see Using the Configuration Wizard.