Operating Instructions 8/1543-AXB 901 33/7 Uen A

Add Active-Standby Geographical Redundancy to a Live SAPC
Ericsson Service-Aware Policy Controller

Contents


1 Add Active-Standby Geographical Redundancy Description

This document describes how to add Active-Standby Geographical Redundancy to a live SAPC, that is, how to add a second cluster to a standalone system. It also describes how to make the introduction of the second cluster transparent from a network point of view. The second cluster can be installed on the same site where the standalone system is running, or it can be installed on a different site; this document assumes that a different location is used.

This document is valid for both PNF and VNF deployments. Any difference depending on the type of deployment is explicitly stated in the procedure.

1.1 Add Active-Standby Geographical Redundancy Prerequisites

The following sections describe the documents, conditions, and tools required before the procedure.

Documents

Before starting this procedure, ensure that you have read the following documents:

Check also the following documents:

For the installation of the new cluster (required only on the new site, site 2):

2 Add Geographical Redundancy Procedure

This section describes the installation procedure step-by-step. Some steps must be coordinated between the two clusters while others can be performed independently. If coordination is required, it is clearly stated in the relevant step.

2.1 Procedure in Mated SAPC (SAPC2)

Steps

  1. Follow the installation instructions for the SAPC2 cluster contained in SAPC PNF Deployment Instruction, SAPC VNF Deployment Instruction for OpenStack, or SAPC VNF Deployment Instruction for VMware, as applicable. Configure the Mated SAPC as non-preferred, because the live SAPC has to be configured as preferred.

Results

Once the configuration is applied, SAPC2 is in the Initial State. That means that it is ready to become the Standby SAPC in an Active-Standby Geographical Redundancy, after the live SAPC has been configured as the Active SAPC.

2.2 Procedure in Live SAPC (SAPC1)

Steps

  1. For PNF deployments, add the replication template in BSP, following the BSP 8100 Configuration section in SAPC PNF Deployment Instruction.
  2. Access the <OAM-VIP> and then SC-1:
    <InstallationServer>:# ssh root@<OAM-VIP>
    SC-<X>:# ssh root@SC-1
  3. Copy the following file:
    SC-1:# cp /cluster/storage/no-backup/adapt/adapt_cluster.cfg.processed /cluster/storage/no-backup/adapt/adapt_cluster.cfg
  4. Update write permissions:
    SC-1:# chmod u+w /cluster/storage/no-backup/adapt/adapt_cluster.cfg
  5. Modify the adapt_cluster.cfg as follows:
    1. If the provisioning VIP is not configured, add it in the [Network] section:
      [Network]
      PROV_IP = <VIP-Provisioning>. VIP address for provisioning shared between the SAPC clusters. For further information, refer to Active-Standby Geographical Redundancy Network Configuration Guide.
    2. If a new ALB is going to be used for replication, add the corresponding values in the [Network] section. For further details, refer to Adapt Cluster Tool.
      Caution!

      The creation of a new ALB for replication is only supported in PNF deployments.

    3. Add the [GeoRed] section:
      [GeoRed]
      LOCAL_REP_VIP = <VIP-Replication SAPC1>. VIP address for data replication in the local cluster. For further information, refer to Active-Standby Geographical Redundancy Network Configuration Guide.
      PEER_REP_IP = <VIP-Replication SAPC2>. VIP address for data replication in the remote cluster. For further information, refer to Active-Standby Geographical Redundancy Network Configuration Guide.
      PREFERRED = 1. This SAPC has to be configured as the preferred one.
      Note: The SAPC configured as preferred maintains the database to resolve fault situations where it is not possible to determine which of the SAPC clusters holds the most up-to-date database.
  6. Execute the customizing tool command.
    • SC-1:# adapt_cluster -f /cluster/storage/no-backup/adapt/adapt_cluster.cfg geored

    Once the tool finishes, the Geographical Redundancy configuration is applied.
  7. To check that the configuration has been applied successfully, verify that the following file contains the same information as adapt_cluster.cfg:
    • SC-1:# cat /cluster/storage/no-backup/adapt/adapt_cluster.cfg.processed_expansion

  8. Once the configuration is applied, SAPC1 is ready to become the Active SAPC in an Active-Standby Geographical Redundancy. Note that SAPC1 raises several alarms (Policy Control, Geographical Redundancy Unable To Reach Peer, DBS, NR, Synchronization Needed and DBS, NR, Connection Lost) until SAPC2 becomes the Standby SAPC.
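The file handling in steps 3 to 7 above can be sketched as a small shell script. This is a minimal illustration, not part of the official procedure: to keep it runnable anywhere, it works in a temporary directory, fabricates a stand-in for the processed configuration file, and simulates the result of the customizing tool (on the real system, CFG_DIR is /cluster/storage/no-backup/adapt, the script runs as root on SC-1, and adapt_cluster itself produces the processed_expansion file). The VIP addresses are placeholders only; use the site-specific replication VIPs from the network plan.

```shell
#!/bin/sh
# Sketch of steps 3-7 (simulated; assumptions noted in comments).
set -e

# On the live system this is /cluster/storage/no-backup/adapt.
CFG_DIR="$(mktemp -d)"
CFG="$CFG_DIR/adapt_cluster.cfg"

# Stand-in for the processed configuration already present on SC-1.
printf '[Network]\nPROV_IP = 10.0.0.10\n' > "$CFG_DIR/adapt_cluster.cfg.processed"

# Steps 3-4: recover the processed configuration and make it writable.
cp "$CFG_DIR/adapt_cluster.cfg.processed" "$CFG"
chmod u+w "$CFG"

# Step 5: append the [GeoRed] section (placeholder addresses).
cat >> "$CFG" <<'EOF'
[GeoRed]
LOCAL_REP_VIP = 10.0.1.1
PEER_REP_IP = 10.0.2.1
PREFERRED = 1
EOF

# Step 6 (live system only, not simulated here):
#   adapt_cluster -f "$CFG" geored

# Step 7: after adapt_cluster runs, the processed_expansion file must match
# the edited configuration. Simulated here by copying the file.
cp "$CFG" "$CFG_DIR/adapt_cluster.cfg.processed_expansion"
if diff -q "$CFG" "$CFG_DIR/adapt_cluster.cfg.processed_expansion" >/dev/null; then
  echo "GeoRed configuration consistent"
fi
```

On the live system, a non-empty diff between adapt_cluster.cfg and adapt_cluster.cfg.processed_expansion after running the tool indicates that the Geographical Redundancy configuration was not fully applied.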

2.3 Start Mated SAPC (SAPC2) as Standby SAPC

Steps

  1. To configure SAPC2 as Standby SAPC, follow the Start Active-Standby Geographical Redundancy procedure.