Installation Instructions 8/1531-AXB 901 33/7 Uen A1

SAPC PNF Deployment Instruction
Ericsson Service-Aware Policy Controller

1 SAPC PNF Deployment Introduction

Document Purpose and Scope

This document describes how to install an SAPC, covering the operating system (OS), the Component Based Architecture (CBA) platform software, and the application software products.

Geographical Redundancy requires the deployment of two SAPC nodes so that data can be replicated between them. Follow this document to install each of these nodes and to configure them to provide this function.

Intended Audience

The intended audience of this document is software installation technicians. The installer needs the following:

  • Linux system administrator knowledge

  • General networking knowledge

  • User root privileges

  • General virtualization knowledge (KVM, QEMU, Open vSwitch)

  • Administration user identities and default passwords for the SAPC. This information can be found in SAPC Users and Passwords.

2 SAPC PNF Deployment Overview

This section describes what is required and what must be prepared before the installation procedure can begin.

2.1 SAPC PNF Installation Steps

Virtual Machine: An operating system or application environment installed on software that imitates dedicated hardware (the HOST system). The operating system is LDEwS 4.4 CPxx.

Blade Hosting SC-1: Blade of the system that hosts the Virtual Machine acting as SC-1. It uses SLES12 SP2 as its operating system.

Blade Hosting SC-2: Blade of the system that hosts the Virtual Machine acting as SC-2. It uses SLES12 SP2 as its operating system.

Installation Server: External machine used for the installation of the HOST operating system. It uses SLES11 SP3/SP4 as its operating system.

The elements needed to install the SAPC are shown in Figure 1: an external device (the installation server) and several hardware machines that host the application.

Figure 1   Installation Elements

Connect the installation server to the different hardware machines and proceed to install the HOST operating system SLES12 SP2 in the blade hosting SC-1 and the blade hosting SC-2.

Figure 2   HOST Machines Installation

After the SLES12 SP2 installation, the System Controller software is copied to the blade hosting SC-1. Because the System Controllers are virtualized, the installation is done using images. Once the software is unpacked, create the Virtual Machines for SC-1 and SC-2. SC-1 is created from the related qcow2 image, while SC-2 is synchronized from SC-1.

Figure 3   Virtual Machines Initialization

Scale out the rest of the blades. On each of them, depending on the network configuration, VIP Front End Elements (FEE) are created automatically to cover the different scenarios (GeoRed, external database, or traffic).

Figure 4   CBA Platform and SAPC Installation

2.2 SAPC PNF Deployment Prerequisites

2.2.1 PNF Deployment Requirements

SAPC supports BSP 8100 and NSP 6.1 hardware. For any other hardware, the hardware installation must be adapted to achieve the same result.

  • Software Gateway access is necessary for operating system and additional software installation. The software repositories are available in the Software Gateway.

  • BIOS Power-saving options are disabled.

  • System is correctly powered and cabled.

  • Serial or Management (MGMT) access to the hardware. For details on how to connect using a serial or MGMT connection, consult your hardware provider documentation.

  • Network interface for Host management.

  • Secure Shell (SSH) client: once the operating system is installed and configured, the installation can continue remotely over an SSH connection. For this purpose, an SSH client must be installed on the remote PC to be used.

  • TSP software version TSP6833 or higher. Otherwise, the availability of the commands needed to install the DMX is not guaranteed.

Caution!

If the hardware used is NSP 6.1 with GEP3 blades, the validated and mandatory GEP3 firmware version to ensure a successful installation is R11A or later.

Do!

The GEP3 firmware upgrade to R11A (the validated version) or later is mandatory for the blades hosting the SCs and PLs. This procedure is complex and requires special knowledge; it is therefore recommended that Ericsson personnel perform it. Do not perform the upgrade on a blade in service, as it implies powering off the blade. Follow the BIOS Upgrade Instruction document to perform the upgrade. Contact GEP support if any problem occurs.

2.2.2 PNF Deployment Network Requirements

If the hardware used is BSP 8100, use as reference the BSP 8100 Network Configuration Guide to complete all the variables needed.

If the hardware used is NSP 6.1, use as reference the NSP 6.1 Network Configuration Guide.

For any other (non-Ericsson) hardware, use the equivalent network configuration reference documents.

2.3 SAPC PNF Deployment Deliverables

The required software is listed in Table 1 and can be downloaded from the Ericsson Software Gateway under a unique SAPC ticket number. Refer to the Release Notes document for the version of each product and the ticket number. The SAPC software includes a tar.gz file with all the tools needed for the installation.

Table 1   Products and Deliverables

Product                                              Deliverables
sles12_sp2_cxp9031686_<revision>.tar.gz              SLES12 SP2 operating system and updates
sles12_sp2_patches_cxp9031686_<revision>.tar.gz      SLES12 SP2 vulnerability updates
vdp_sapc_qcow2_cxp9032851_<revision>.tar.gz          SAPC Virtual Delivery Package
DMX.tar                                              DMX software for NSP 6.1 installations

After decompression, the following files, which are needed for the PNF installation, are available.

Table 2   Files

Filename                                      Description
BSP_templates_ipv4.tar.gz                     BSP templates needed to configure the BSP Software with IPv4.
BSP_templates_ipv6.tar.gz                     BSP templates needed to configure the BSP Software with IPv6.
NSP_templates.tar.gz                          NSP templates needed to configure the DMX Software.
shares.tar.gz                                 Tools needed to install the SLES operating system on the blades hosting SC-1 and SC-2.
sapc_sc-1_cxp9032851_<revision>.qcow2         Image with the SAPC installation.
host-config.tar.gz                            Tools needed to install the SAPC.
adapt_cluster_PNF_BSP.cfg                     Basic configuration template for PNF BSP environment customization with IPv4.
adapt_cluster_PNF_BSP_IPv6.cfg                Basic configuration template for PNF BSP environment customization with IPv6.
adapt_cluster_PNF_NSP.cfg                     Basic configuration template for PNF NSP environment customization with IPv4.
adapt_cluster_PNF_NSP_IPv6.cfg                Basic configuration template for PNF NSP environment customization with IPv6.

For specific version information, see the release notes.

3 Installation for Standalone Deployment

This section describes all the installation steps. Once this section is completed, the system is installed and ready.

3.1 Standalone Deployment Prerequisites

This section describes the prerequisites which must be fulfilled before the SAPC can be installed.

3.1.1 Hardware Requirements

  • Installation server

    An installation server is mandatory for the installation. This server remains attached to the system, as it provides the DHCP service needed for the HOST to start.

  • Serial console access

    Serial console access to the serial ports of the machines in the system. This can be achieved in different ways, for example by using a terminal server with serial communication ports.

  • All blades connectivity

    Check that the blades with external connectivity are correctly wired.

3.1.2 Installation Server Requirements

Warning!

This document assumes that the installation server runs the SLES11 SP3/SP4 distribution. It is recommended to use that distribution.

To perform the installation, use a SUSE Linux PC as the installation server, with at least one Ethernet interface and the following services installed on it:

  • advanced Trivial File Transfer Protocol (aTFTP)

  • Dynamic Host Configuration Protocol (DHCP)

  • Network Time Protocol (NTP)

  • Network File System (NFS)

  • Bash 4.2 or higher

This section explains how to install and configure an installation server using a SUSE distribution (other Linux distributions use similar commands).

Steps

  1. Check that the atftp, dhcp-server, ntp, syslinux, and nfs-kernel-server packages are installed.
    <InstallationServer>:# rpm -q atftp dhcp-server ntp syslinux nfs-kernel-server
    atftp-0.7.0-135.21.27
    dhcp-server-4.2.4.P2-0.16.15
    ntp-4.2.4p8-1.22.1
    syslinux-3.82-8.10.23
    nfs-kernel-server-1.2.3-18.29.1
    If not, install them with zypper.
    <InstallationServer>:# zypper install atftp dhcp-server ntp syslinux nfs-kernel-server
  2. Make sure that the bash version on your installation server is 4.2 or higher.
    The bash version included in SLES11 SP3/SP4 is lower than the required one, so it must be updated.
    <InstallationServer>:# bash --version
    GNU bash, version 3.2.51(1)-release (x86_64-suse-linux-gnu)
    Copyright (C) 2007 Free Software Foundation, Inc.
    To update it to version 4.4, download, configure, compile, and install it from source.
    <InstallationServer>:# pushd /home/
    <InstallationServer>:# wget http://ftp.gnu.org/gnu/bash/bash-4.4.tar.gz
    <InstallationServer>:# tar xvf bash-4.4.tar.gz
    <InstallationServer>:# pushd bash-4.4/
    <InstallationServer>:# ./configure
    <InstallationServer>:# make && make install
    <InstallationServer>:# popd
    Once the installation ends, replace the old bash binary with the new one.
    <InstallationServer>:# pushd /bin/
    <InstallationServer>:# ln -fs /usr/local/bin/bash bash
    <InstallationServer>:# popd
    <InstallationServer>:# popd
    Check the bash version again to ensure that the new version is now the one in use.
    <InstallationServer>:# bash --version
    GNU bash, version 4.4.0(1)-release (x86_64-unknown-linux-gnu)
    Copyright (C) 2016 Free Software Foundation, Inc.
    [...]

3.2 Software Download

Steps

  1. Download sles12_sp2_cxp9031686_<revision>.tar.gz, sles12_sp2_patches_cxp9031686_<revision>.tar.gz, and the vdp_sapc_qcow2_cxp9032851_<revision>.tar.gz delivery package to the installation server.
  2. The compressed file vdp_sapc_qcow2_cxp9032851_<revision>.tar.gz downloaded from the software gateway must be decompressed into a directory of the installation server, for example /home/SAPCInstallation/.
    <InstallationServer>:# mkdir -p /home/SAPCInstallation
    <InstallationServer>:# tar xvfz vdp_sapc_qcow2_cxp9032851_<revision>.tar.gz -C /home/SAPCInstallation/
  3. Extract sles12_sp2_cxp9031686_<revision>.tar.gz and sles12_sp2_patches_cxp9031686_<revision>.tar.gz, and prepare the files needed for the installation.
    <InstallationServer>:# mkdir -p /home/SAPCInstallation/SLES12_SP2/Updates/
    <InstallationServer>:# tar xvfz sles12_sp2_cxp9031686_<revision>.tar.gz -C /home/SAPCInstallation/
    <InstallationServer>:# tar xvfz sles12_sp2_patches_cxp9031686_<revision>.tar.gz -C /home/SAPCInstallation/SLES12_SP2/Updates/
    <InstallationServer>:# mkdir -p /home/SAPCInstallation/shares/PNF
    <InstallationServer>:# mkdir -p /home/SAPCInstallation/BSP_templates
    <InstallationServer>:# mkdir -p /home/SAPCInstallation/NSP_templates
    <InstallationServer>:# tar xvfz /home/SAPCInstallation/vdp_sapc_qcow2_cxp9032851_<revision>/shares.tar.gz -C /home/SAPCInstallation/shares/PNF/
    <InstallationServer>:# tar xvfz /home/SAPCInstallation/vdp_sapc_qcow2_cxp9032851_<revision>/BSP_templates_<ipversion>.tar.gz -C /home/SAPCInstallation/BSP_templates/
    <InstallationServer>:# tar xvfz /home/SAPCInstallation/vdp_sapc_qcow2_cxp9032851_<revision>/NSP_templates.tar.gz -C /home/SAPCInstallation/NSP_templates/
    <InstallationServer>:# chown -R root:root /home/SAPCInstallation/shares/PNF/*
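    As an optional sanity check (not part of the original procedure), list the extraction root to confirm that the layout matches the paths used in the following sections (the exact contents depend on the delivery):
    <InstallationServer>:# ls /home/SAPCInstallation/
    BSP_templates  NSP_templates  SLES12_SP2  shares  vdp_sapc_qcow2_cxp9032851_<revision>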

3.3 DMX Installation and Configuration

The DMX configuration differs between BSP 8100 and NSP 6.1.

This document assumes that the installation server runs the SLES11 SP3/SP4 distribution when installing the DMX.

Attention!

This section applies only to BSP 8100 and NSP 6.1 installations. For any other hardware, configure the routers and the networking manually according to the customer needs.

3.3.1 BSP 8100 Installation and Configuration

Install both the hardware of the BSP 8100 system and the software version BSP R12.0.1 following BSP Installation. Do the BSP 8100 initial platform configuration explained in BSP Initial Configuration. For detailed information about how to configure the BSP Northbound Interface (NBI), see the provided BSP templates.

Do the BSP 8100 external connectivity configuration explained in BSP External Network Connectivity. All these documents are available in the BSP 8100 library.

This step describes how to create a working BSP tenant configuration using the BSP 8100 documentation together with the specific configuration templates that must be used. The templates are prepared to configure a BSP blade system according to the customer needs.

  • The basic configuration is needed for all the deployments. Follow the template included in /home/SAPCInstallation/BSP_templates/common.

    • For the first subrack, follow the template included in BSP_PNF_config_template_1st_subr_base.

    • If a second subrack is needed, follow the template included in BSP_PNF_config_template_2nd_subr.

    • If a third subrack is needed, follow the template included in BSP_PNF_config_template_3rd_subr.

  • For Standalone deployments, add the following templates included in /home/SAPCInstallation/BSP_templates/standalone.

    • For the basic configuration, follow the templates included in BSP_PNF_config_template_oam and BSP_PNF_config_template_signaling.

    • If traffic separation is needed, follow the template included in BSP_PNF_config_template_signaling2.

    • If an external database is needed, follow the template included in BSP_PNF_config_template_extDB.

  • For Active-Standby Geographical Redundancy deployments, add the following templates included in /home/SAPCInstallation/BSP_templates/geored.

    • For the basic configuration, follow the templates included in BSP_PNF_config_template_oam, BSP_PNF_config_template_signaling and BSP_PNF_config_template_replication.

    • If traffic separation is needed, follow the template included in BSP_PNF_config_template_signaling2.

    • If an external database is needed, follow the template included in BSP_PNF_config_template_extDB.

  • For Active-Active Geographical Redundancy deployments, add the following templates included in /home/SAPCInstallation/BSP_templates/geored-active-active.

    • For the basic configuration, follow the templates included in BSP_PNF_config_template_oam, BSP_PNF_config_template_signaling and BSP_PNF_config_template_replication.

    • If traffic separation is needed, follow the template included in BSP_PNF_config_template_signaling2.

    • If an external database is needed, follow the template included in BSP_PNF_config_template_extDB.

3.3.2 NSP 6.1 Installation and Configuration

Caution!

All blades must be powered off before starting the DMX installation process.

Install both the hardware of the NSP 6.1 system and the software version DMX 3.1 CP8 following DMX Installation Instruction. Do the NSP 6.1 initial platform configuration explained in DMX Initial Start according to the network configuration defined in NSP 6.1 Network Configuration Guide. All these documents are available in the DMX library.

Although the DMX installation documents refer to a different operating system and mention different paths, filenames, and applications, it is possible, with minor deviations, to use the same SLES11 SP3/SP4 installation server described in this document for the SAPC installation.

The file DMX.tar, downloaded from the Software Gateway, contains the DMX software required for the NSP 6.1 system.

This step describes how to create a working configuration using the DMX documentation together with the specific configuration templates that must be used. The templates are prepared to configure an NSP 6.1 blade system automatically according to the customer needs.

  • The basic configuration is needed for all the deployments. Follow the template included in /home/SAPCInstallation/NSP_templates.

    • For the first subrack, follow the template included in NSP_PNF_config_template_1st_subr_base.

      Attention!

      During the addition of the extra subracks, the port SCX-0-25:GE1 is blocked. Thus, it is recommended to use a console connection for setting up the initial DMX configuration.

    • If a second subrack is needed, follow the template included in NSP_PNF_config_template_2nd_subr.

    • If a third subrack is needed, follow the template included in NSP_PNF_config_template_3rd_subr.

3.4 Installation Server Preparation

Attention!

If the hardware used is NSP 6.1, use as reference the NSP 6.1 Network Configuration Guide instead of the BSP 8100 Network Configuration Guide specified in this section.

Steps

  1. Modify the install_server.cfg file, adding a line for the NTP servers explained in the BSP 8100 Network Configuration Guide. If needed, also set DHCPD_SERVER_IFACE to the local interface of the installation server from which the installation proceeds.
    <InstallationServer>:# vi /home/SAPCInstallation/shares/PNF/install_server/install_server.cfg
    DHCPD_SERVER_IFACE=<eth1>
    NTP_SERVER=<NTP_SERVER_IP_ADDRESS1> <NTP_SERVER_IP_ADDRESS2>
    For example:
    NTP_SERVER=10.221.17.38 10.221.17.150 10.221.17.182 10.221.17.14
  2. Modify system_common.cfg, assigning the VRRP IP address of the Hypervisor network, the DNS server, and the NTP server, as explained in the BSP 8100 Network Configuration Guide.
    <InstallationServer>:# vi /home/SAPCInstallation/shares/PNF/config/system_common.cfg
    DEFAULT_GATEWAY=<Hypervisor VRRP IP address>
    DNS_SERVER=<DNS_SERVER_IP_ADDRESS1> <DNS_SERVER_IP_ADDRESS2>
    NTP_SERVER=<NTP_SERVER_IP_ADDRESS1> <NTP_SERVER_IP_ADDRESS2>
    Note: The Hypervisor VRRP IP address has the same value as the sapc_om2_sp_gw that is explained in the BSP 8100 Network Configuration Guide.
  3. Modify the networking_template.cfg with the values explained in the BSP 8100 Network Configuration Guide.
    <InstallationServer>:# cp -p /home/SAPCInstallation/shares/PNF/config/networking_template.cfg /home/SAPCInstallation/shares/PNF/config/networking_Host_1.cfg
    <InstallationServer>:# cp -p /home/SAPCInstallation/shares/PNF/config/networking_template.cfg /home/SAPCInstallation/shares/PNF/config/networking_Host_2.cfg
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      The following lines are modified in the networking_Host_1.cfg file (the default BSP values are shown; change them if they differ).
      VLAN_MGMT0=<137>
      VLAN_MGMT1=<138>
      VLAN_MGMT2=<>
      IP_MGMT0=<Hypervisor Management Network IP address for this blade>
      IP_MGMT1=<Internal Management Network IP address for this blade>
      IP_MGMT2=
      BOOT_MAC_ADDR=<Initial MAC Address for booting for this blade>
      Note: The default value for IP_MGMT1 is <192.168.100.1/24>.
      Same changes apply for the networking_Host_2.cfg file with the specific values for that blade.
      VLAN_MGMT0=<137>
      VLAN_MGMT1=<138>
      VLAN_MGMT2=<>
      IP_MGMT0=<Hypervisor Management Network IP address for this blade>
      IP_MGMT1=<Internal Management Network IP address for this blade>
      IP_MGMT2=
      BOOT_MAC_ADDR=<Initial MAC Address for booting for this blade>
      Note: The default value for IP_MGMT1 is <192.168.100.2/24>.
      To obtain the BOOT_MAC_ADDR variable, access the DMX for the blade hosting SC-1 and blade hosting SC-2.
      <InstallationServer>:# ssh -p 2024 advanced@<DMX>
      <DMX>:> show-table ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC -m Blade -p userLabel,firstMacAddr -c ((userLabel=="SC-1")||(userLabel=="SC-2"))
      =================================
      | userLabel | firstMacAddr      |
      =================================
      | SC-1      | A4:A1:C2:E9:E7:ED |
      | SC-2      | A4:A1:C2:E9:E7:A7 |
      =================================
    2. NSP 6.1
      The following lines are modified in the networking_Host_1.cfg file (the default NSP values are shown; change them if they differ).
      VLAN_MGMT0=<137>
      VLAN_MGMT1=<138>
      VLAN_MGMT2=<138>
      IP_MGMT0=<Hypervisor Management Network IP address for this blade>
      IP_MGMT1=<Primary Internal Management Network IP address for this blade>
      IP_MGMT2=<Secondary Internal Management Network IP address for this blade>
      BOOT_MAC_ADDR=<Initial MAC Address for booting for this blade>
      Same changes apply for the networking_Host_2.cfg file with the specific values for that blade.
      VLAN_MGMT0=<137>
      VLAN_MGMT1=<138>
      VLAN_MGMT2=<138>
      IP_MGMT0=<Hypervisor Management Network IP address for this blade>
      IP_MGMT1=<Primary Internal Management Network IP address for this blade>
      IP_MGMT2=<Secondary Internal Management Network IP address for this blade>
      BOOT_MAC_ADDR=<Initial MAC Address for booting for this blade>
      To obtain the BOOT_MAC_ADDR variable, access the DMX for the blade hosting SC-1 and blade hosting SC-2.
      <InstallationServer>:# ssh -p 2024 expert@<DMX>
      <DMX>:> show table ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot Blade 1 userLabel | select Blade firstMacAddress | match "SC-[12]"
      0-9    1      SC-1   a4:a1:c2:e9:e7:ed
      0-11   1      SC-2   a4:a1:c2:e9:e7:a7
      
    3. The following procedure to obtain the BOOT_MAC_ADDR applies to both BSP and NSP hardware.
      In this example, the values obtained for firstMacAddr are A4:A1:C2:E9:E7:ED and A4:A1:C2:E9:E7:A7. Each BOOT_MAC_ADDR is the next address, that is, the firstMacAddr increased by one unit (see the sketch below). In this example, the values are as follows.
      A4:A1:C2:E9:E7:EE
      A4:A1:C2:E9:E7:A8
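      The increment is over the full 48-bit value, including any hexadecimal carry. The following bash helper is an illustration only (it is not part of the delivery):
      # Illustrative helper: compute BOOT_MAC_ADDR from firstMacAddr
      # by incrementing the 48-bit MAC value by one.
      next_mac() {
          local mac=${1//:/}    # strip colons, leaving 12 hexadecimal digits
          printf '%012X' $(( 16#$mac + 1 )) | sed 's/../&:/g; s/:$//'
      }
      next_mac A4:A1:C2:E9:E7:ED    # prints A4:A1:C2:E9:E7:EE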
  4. Create the directory tree for the SLES12 SP2 DVD and mount the ISO file there.
    <InstallationServer>:# mkdir -p /home/SAPCInstallation/shares/PNF/SLES12_SP2/DVD1
    <InstallationServer>:# mount -o ro,loop /home/SAPCInstallation/SLES12_SP2/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso /home/SAPCInstallation/shares/PNF/SLES12_SP2/DVD1
    <InstallationServer>:# mkdir -p /srv/tftpboot/sles12_sp2/
  5. Copy files to the /srv/tftpboot/sles12_sp2/ directory.
    <InstallationServer>:# pushd /home/SAPCInstallation/shares/PNF/SLES12_SP2/DVD1/boot/x86_64/loader
    <InstallationServer>:# cp -a linux initrd message memtest /srv/tftpboot/sles12_sp2
    <InstallationServer>:# cp -a /usr/share/syslinux/pxelinux.0 /srv/tftpboot/sles12_sp2
    <InstallationServer>:# popd
  6. Execute the script.
    <InstallationServer>:# /home/SAPCInstallation/shares/PNF/install_configuration.sh install "/home/SAPCInstallation/shares/PNF/install_server/install_server.cfg" "/home/SAPCInstallation/shares/PNF"
    BEGIN
    Execution: install_configuration.sh install "/home/SAPCInstallation/shares/PNF/install_server/install_server.cfg" "/home/SAPCInstallation/shares/PNF"
    Checking required packages
    atftp - installed
    dhcp-server - installed
    ntp - installed
    syslinux - installed
    nfs-kernel-server - installed
    Parsing 'install_server.cfg' configuration file
    Customizing DHCP server
    Customizing NIC 'eth1' connected to blades
    Customizing NTP server
    Customizing NFS
    Customizing PXE kernel boot line
    Customizing autoinstallation profiles
    Restarting 'DHCPD' service
    Restarting 'ATFTPD' service
    Restarting 'NTP' service
    Restarting 'NFSSERVER' service
    Exporting configured NFS path
    END
    After a successful run, the dhcpd.conf file has been created and the eth1 interface has been configured.
    <InstallationServer>:# ifconfig
    [...]
    eth1 Link encap:Ethernet  HWaddr 00:50:56:A1:15:F1
    inet addr:192.168.101.1  Bcast:192.168.101.255  Mask:255.255.254.0
    [...]
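    Optionally (this check is not part of the original procedure), confirm that the NFS export and the DHCP service are active before booting any blade; both commands are standard on a SUSE installation server:
    <InstallationServer>:# showmount -e localhost
    <InstallationServer>:# rcdhcpd status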
  7. Check the status of the installation port and enable it if necessary. The SLES12 SP2 installation uses this port to access the blade.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      The default installation port is SCX-0-0:GE1.
      <InstallationServer>:# ssh -p 2024 advanced@<DMX>
      <DMX>:> show ManagedElement=1,DmxcFunction=1,Trm=1,VirtualBridge=BSP,BridgePort=0-0:GE1
      BridgePort=0-0:GE1
      adminState=DISABLED
      Use the configure mode and enable it.
      <DMX>:> configure
      <DMX>:(config)> ManagedElement=1,DmxcFunction=1,Trm=1,VirtualBridge=SAPC,tenantInstallMode=UNTAGGED
      <DMX>:(config)> commit
      Check again, using the same show command, that the status is now ENABLED.
    2. NSP 6.1
      The default installation port is SCX-0-0:GE2.
      <InstallationServer>:# ssh -p 2024 expert@<DMX>
      Use the configure mode to check the SCX-0-0:GE2 port configuration.
      <DMX>:> configure
      <DMX>:(config)% show ManagedElement 1 DmxFunctions 1 Transport 1 Bridge 0-0 Port GE2 | select defaultVlan | select adminState
      defaultVlan 4001;
      adminState  enabled;
      Check that the SCX-0-0:GE2 port belongs to the untaggedMemberPorts group from the VLAN ID 4001.
      <DMX>:(config)% show ManagedElement 1 DmxFunctions 1 Transport 1 Bridge 0-0 Vlan 4001 untaggedMemberPorts
      untaggedMemberPorts [ BP1 BP3 BP5 BP7 BP9 BP11 BP13 BP15 BP17 BP19 BP21 BP23 GE2 ];
      Exit the configure mode.
      <DMX>:(config)% exit
    Exit from the CLI.
    <DMX>:> exit
  8. Before starting the operating system installation procedure, the boot device must be set up for all blades.
    • For blades hosting SCs, the permanent boot device must be the hard disk drive.

      1. For BSP 8100, select "18 - SATA-0 Internal Slim-SATA SSD" in boot menu.

      2. For NSP 6.1, select "10 - Hard drive GMB SAS ID09" in boot menu.

      Note: The first boot is performed from the left backplane network interface (00 - Ethernet Backplane Left) to fetch configuration from the installation server.
    • For blades hosting PLs, the boot devices must be the backplane network interfaces (00 - Ethernet Backplane Left and 01 - Ethernet Backplane Right).

    Access the DMX from the installation server and perform the following procedure for each blade to power it on.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      <InstallationServer>:# ssh -p 2024 advanced@<DMX>
      <DMX>:> configure
      <DMX>:(config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC,<Blade=0-1>,administrativeState=UNLOCKED
    2. NSP 6.1
      <InstallationServer>:# ssh -p 2024 expert@<DMX>
      <DMX>:> configure
      <DMX>:(config)% set ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot <0-1> Blade 1 administrativeState unlocked
    <DMX>:(config)> commit
    In this example, Blade=0-1 means blade 1 from subrack 0.
    Exit from the CLI.
    <DMX>:> exit
    To set the boot device configuration, wait for the message shown in Figure 5, which is displayed during the blade boot process.

Figure 5   Message to Enter Boot Menu
  9. Press F3 to enter the boot menu shown in Figure 6.
    In the boot menu, choose the option 40 - UEFI Shell (PBIST) to enter the BIOS shell.
    Enter boot device in hex: 40

Figure 6   Boot Menu
  10. After pressing Enter to save the chosen option, the message in Figure 7 is shown. Press any key to enter the BIOS shell.
    Note: This may change depending on the hardware.

Figure 7   Message to Enter the BIOS Shell
  11. Once in the BIOS shell, set the boot devices according to the type of blade being configured (SC or PL). Do this following the hardware-specific BIOS configuration guide and save the configuration.
    • GEP3

      • Remove the current boot order:

        GEP3> ipmi -o erase

      • Set to boot from the hard disk:

        GEP3> ipmi -o push 10

      • Show the current boot order:

        GEP3> ipmi -o display

      • Reboot the board (keeps the current configuration):

        GEP3> pbist -r

    • GEP 5

      • Show the current boot order:

        GEP5> ipmi bo display

      • Remove the current boot device in position 1:

        GEP5> ipmi bo remove 1

      • Set to boot from Hard Disk SATA-0 Internal Slim-SATA SSD:

        GEP5> ipmi bo insert 1 18

      • Reboot the board (keeps the current configuration):

        GEP5> pbist -r

    • GEP 7

      • Show the current boot order:

        GEP7> ipmi oem bcs r

      • Set to boot from Hard Disk SATA-0 Internal Slim-SATA SSD:

        GEP7> ipmi oem bcs b 0 0x18

      • Reboot the board (keeps the current configuration):

        GEP7> reset

    After configuring the boot devices, the blades are rebooted to start up with the new configuration.
  12. For the blades hosting the SCs, enter the boot menu shown in Figure 6, choosing the option 00 - Ethernet Backplane Left to launch the SLES12 SP2 automated installation process.
    Enter boot device in hex: 00
    After pressing Enter, the blade will boot from the installation server, and the operating system installation starts.
    During the installation process, the blades are rebooted once. Wait until the process is finished and the Linux prompt is shown.
    <Host_1> login:
    <Host_2> login:
  13. At this point, SLES12 SP2 is installed. Log on as root and reboot both blades.
    <Host_1> login: root
    <Host_1>:# reboot
    <Host_2> login: root
    <Host_2>:# reboot
  14. After the SLES12 SP2 installation on NSP 6.1 blades, the port used for the installation (GE2) must be removed from the DMX configuration.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      Skip this step.
    2. NSP 6.1
      <InstallationServer>:# ssh -p 2024 expert@<DMX>
      <DMX>:> configure
      <DMX>:(config)% delete ManagedElement 1 DmxFunctions 1 Transport 1 Bridge 0-0 Vlan 4001 memberPorts GE2 untaggedMemberPorts GE2
      <DMX>:> commit
      Exit from the CLI.
      <DMX>:> exit
  15. Once access to the blades hosting SC-1 and SC-2 is available, copy the Updates directory and update the operating system.
    If SSH access is not yet available because the client network is not fully prepared, some additional steps are needed to transfer the Updates directory to the blades hosting SC-1 and SC-2. If SSH access is already available, skip the following preparations.
    Temporarily configure the front port interface on both blades in the same network as your installation server, for instance:
    <Host_1>:# ifconfig br_mgmt 192.168.101.11 netmask 255.255.254.0 up
    <Host_2>:# ifconfig br_mgmt 192.168.101.12 netmask 255.255.254.0 up
    Check that the interfaces are up by pinging from the blades:
    <Host_1>:# ping 192.168.101.11
    <Host_2>:# ping 192.168.101.12
    At this point, Host_1 has the IP address 192.168.101.11 assigned and Host_2 has 192.168.101.12 assigned.
    Create directory structure in blades.
    <Host_1>:# mkdir -p /mnt/images/SAPCDeployment/SLES12_SP2/Updates
    <Host_1>:# mkdir -p /mnt/images/originalImage
    <Host_1>:# mkdir -p /mnt/images/interfaces
    <Host_1>:# mkdir -p /mnt/store/SAPC
    <Host_2>:# mkdir -p /mnt/images/SAPCDeployment/SLES12_SP2/Updates
    <Host_2>:# mkdir -p /mnt/images/originalImage
    <Host_2>:# mkdir -p /mnt/images/interfaces
    <Host_2>:# mkdir -p /mnt/store/SAPC
    Copy the directories from the installation server to the blades:
    <InstallationServer>:# scp -r /home/SAPCInstallation/SLES12_SP2/Updates root@<Host_1_IP>:/mnt/images/SAPCDeployment/SLES12_SP2/
    <InstallationServer>:# scp -r /home/SAPCInstallation/SLES12_SP2/Updates root@<Host_2_IP>:/mnt/images/SAPCDeployment/SLES12_SP2/
    Install the SLES12 SP2 updates:
    <Host_1>:# /mnt/images/SAPCDeployment/SLES12_SP2/Updates/installation-server/shares/repositoriesUpdater.sh -p /mnt/images/SAPCDeployment/SLES12_SP2/Updates/
    <Host_2>:# /mnt/images/SAPCDeployment/SLES12_SP2/Updates/installation-server/shares/repositoriesUpdater.sh -p /mnt/images/SAPCDeployment/SLES12_SP2/Updates/
    If temporary IP addresses were created, remove them on both blades; otherwise skip this step:
    <Host_1>:# ip addr flush dev br_mgmt
    <Host_2>:# ip addr flush dev br_mgmt
  16. Reboot both blades.
    <Host_1>:# reboot
    <Host_2>:# reboot
  17. After reboot, SSH access to both blades is available.
    <InstallationServer>:# ssh root@<Host_1>
    <InstallationServer>:# ssh root@<Host_2>

3.5 Host Configuration

3.5.1 Remote Trust Connection

Establish a remote trust relationship between the blade hosting SC-1 and the blade hosting SC-2. This avoids being prompted for the password every time an SSH connection is needed.

Steps

  1. Access the first host and generate the private and public key pair.
    <InstallationServer>:# ssh root@<Host_1_IP>
    <Host_1>:# ssh-keygen -t rsa
    Note: Press Enter at every prompt.
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    
  2. Execute the following commands.
    <Host_1>:# ssh-copy-id -i root@<Host_1>
    <Host_1>:# ssh-copy-id -i root@<Host_2>
  3. Repeat the previous steps for the second host machine.
    <InstallationServer>:# ssh root@<Host_2_IP>
    <Host_2>:# ssh-keygen -t rsa
    Note: Press Enter at every prompt.
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    
  4. Execute the following commands.
    <Host_2>:# ssh-copy-id -i root@<Host_1>
    <Host_2>:# ssh-copy-id -i root@<Host_2>
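To confirm that the trust relation works in both directions, an optional check (not part of the original procedure) is to run a remote command with BatchMode enabled, so that SSH fails rather than prompting for a password:

<Host_1>:# ssh -o BatchMode=yes root@<Host_2> hostname
<Host_2>:# ssh -o BatchMode=yes root@<Host_1> hostname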

3.6 SAPC Deployment

Steps

  1. Access the DMX from the installation server and make sure that all the blades in the system are powered off (LOCKED), except the blade hosting SC-1 and the blade hosting SC-2.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      InstallationServer:# ssh -p 2024 advanced@<DMX>
      DMX:> show-table ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC -m Blade -p userLabel,bladeId,administrativeState
      =============================================
      | bladeId | userLabel | administrativeState |
      =============================================
      | 0-1     | SC-1      | UNLOCKED            |
      | 0-11    | PL-6      | UNLOCKED            |
      | 0-13    | PL-7      | UNLOCKED            |
      | 0-15    | PL-8      | UNLOCKED            |
      | 0-17    | PL-9      | UNLOCKED            |
      | 0-19    | PL-10     | UNLOCKED            |
      | 0-21    | PL-11     | UNLOCKED            |
      | 0-23    | PL-12     | LOCKED              |
      | 0-3     | SC-2      | UNLOCKED            |
      | 0-5     | PL-3      | UNLOCKED            |
      | 0-7     | PL-4      | UNLOCKED            |
      | 0-9     | PL-5      | UNLOCKED            |
      =============================================
    2. NSP 6.1
      InstallationServer:# ssh -p 2024 expert@<DMX>
      DMX:> show table ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot Blade 1 userLabel | select Blade administrativeState

    If there is any blade other than the ones hosting SC-1 and SC-2 powered on (UNLOCKED), power it off (LOCKED).

    DMX:> configure
    1. BSP 8100
      DMX:(config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC,<Blade=0-23>,administrativeState=LOCKED
    2. NSP 6.1
      DMX:(config)% set ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot <0-23> Blade 1 administrativeState locked
    DMX:(config)> commit

    In this example, Blade=0-23 means blade 12 from subrack 0. Exit from the CLI.

    DMX:> exit
  2. From the installation server, copy and rename the necessary files to the blade hosting SC-1.
    InstallationServer:# scp /home/SAPCInstallation/vdp_sapc_qcow2_cxp9032851_<revision>/host-config.tar.gz root@<Host_1>:/mnt/store/SAPC/
    InstallationServer:# scp /home/SAPCInstallation/vdp_sapc_qcow2_cxp9032851_<revision>/sapc_sc-1_cxp9032851_<revision>.qcow2 root@<Host_1>:/mnt/images/originalImage/sapc_sc-1_cxp9030138.qcow2
    InstallationServer:# scp /home/SAPCInstallation/vdp_sapc_qcow2_cxp9032851_<revision>/adapt_cluster_PNF*.cfg root@<Host_1>:/mnt/images
    InstallationServer:# ssh root@<Host_1>
    Host_1:# cd /mnt/store/SAPC/
    Host_1:# tar xvfz /mnt/store/SAPC/host-config.tar.gz
  3. Resize the QCOW2 image file.
    InstallationServer:# ssh root@<Host_1>
    Host_1:# qemu-img resize /mnt/images/originalImage/sapc_sc-1_cxp9030138.qcow2 100G
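    Optionally, verify the new virtual size (this check is not part of the original procedure):
    Host_1:# qemu-img info /mnt/images/originalImage/sapc_sc-1_cxp9030138.qcow2 | grep 'virtual size'
    virtual size: 100G (107374182400 bytes)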
  4. Create a file <mac_base_file> under /mnt/images/interfaces with the base MAC addresses for PL-3 and PL-4.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      InstallationServer:# ssh -p 2024 advanced@<DMX>
      DMX:> show-table ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC -m Blade -p userLabel,firstMacAddr -c ((userLabel=="PL-3")||(userLabel=="PL-4"))
      =================================
      | userLabel | firstMacAddr      |
      =================================
      | PL-3      | A4:A1:C2:E9:E7:ED |
      | PL-4      | A4:A1:C2:E9:E7:A7 |
      =================================
    2. NSP 6.1
      InstallationServer:# ssh -p 2024 expert@<DMX>
      DMX:> show table ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot Blade 1 userLabel | select Blade firstMacAddress | match "PL-3|PL-4"
      0-1    1      PL-3   a4:a1:c2:e9:e7:ed  
      0-3    1      PL-4   a4:a1:c2:e9:e7:a7  
      

    Exit from the CLI.

    DMX:> exit

    In this example, the file /mnt/images/interfaces/<mac_base_file> is created with the following content.

    A4:A1:C2:E9:E7:ED
    A4:A1:C2:E9:E7:A7
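    For example (illustrative only, using the addresses above), the file can be created with a here-document:
    Host_1:# cat > /mnt/images/interfaces/<mac_base_file> << EOF
    A4:A1:C2:E9:E7:ED
    A4:A1:C2:E9:E7:A7
    EOF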
  5. To create the /mnt/images/PL_interfaces file for PL-3 and PL-4, execute:
    Host_1:# pushd /mnt/images
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      1. GEP5 Boards

        Host_1:# /mnt/store/SAPC/host-config/scripts/management/build_PLinterfaces.sh 2 BSP /mnt/images/interfaces/<mac_base_file>
      2. GEP7 Boards

        Host_1:# /mnt/store/SAPC/host-config/scripts/management/build_PLinterfaces.sh 2 BSP_GEP7 /mnt/images/interfaces/<mac_base_file>
    2. NSP 6.1
      Host_1:# /mnt/store/SAPC/host-config/scripts/management/build_PLinterfaces.sh 2 NSP /mnt/images/interfaces/<mac_base_file>
    File PL_interfaces created successfully
    This is an example of the output.
    # PL-3
    interface 3 eth0 ethernet a4:a1:c2:e9:e7:f2
    interface 3 eth1 ethernet a4:a1:c2:e9:e7:f3
    interface 3 eth2 ethernet a4:a1:c2:e9:e7:ee
    interface 3 eth3 ethernet a4:a1:c2:e9:e7:ef
    # PL-4
    interface 4 eth0 ethernet a4:a1:c2:e9:e7:ac
    interface 4 eth1 ethernet a4:a1:c2:e9:e7:ad
    interface 4 eth2 ethernet a4:a1:c2:e9:e7:a8
    interface 4 eth3 ethernet a4:a1:c2:e9:e7:a9
    Host_1:# popd
  6. Create the /mnt/images/adapt_cluster.cfg and adapt_cluster.iso files specific to this deployment. To create them, refer to Adapt Cluster Tool.
  7. Copy the following files from the blade hosting SC-1 to the blade hosting SC-2, so that a backup exists in case the blade is lost at any moment.
    Host_1:# scp /mnt/images/adapt_cluster.cfg root@<Host_2>:/mnt/images/
    Host_1:# scp /mnt/images/adapt_cluster.iso root@<Host_2>:/mnt/images/
    Host_1:# scp /mnt/images/PL_interfaces root@<Host_2>:/mnt/images/
  8. Perform a cleanup.
    Attention!

    This is a preventive step. It is not needed the first time you perform the installation, but if you have to repeat it, remnants of previous installations could affect the result; this step cleans everything up.

    Host_1:# /mnt/store/SAPC/host-config/scripts/management/sapc_vm-manager_cxp9030138.sh -c cleanup -x
  9. To define the Virtual Machines, create the sapc_vm_CXP9030138.conf configuration file as explained in Creating sapc_vm_CXP9030138.conf File. Once created, execute:
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      Host_1:# /mnt/store/SAPC/host-config/scripts/management/sapc_vm-generator_cxp9030138.sh -c /mnt/store/SAPC/host-config/config/sapc_vm_CXP9030138.conf -d /mnt/store/SAPC/host-config/VM/vms $(cat /mnt/store/SAPC/host-config/config/PNF/BSP/2SC-2LBTP/sapc_vm-generator_extra-args)
    2. NSP 6.1
      Host_1:# /mnt/store/SAPC/host-config/scripts/management/sapc_vm-generator_cxp9030138.sh -c /mnt/store/SAPC/host-config/config/sapc_vm_CXP9030138.conf -d /mnt/store/SAPC/host-config/VM/vms $(cat /mnt/store/SAPC/host-config/config/PNF/NSP/2SC-2LBTP/sapc_vm-generator_extra-args)
    ---------------- 
    | VM Generator | 
    ----------------  
    Generating XML from 'sapc_vm_CXP9030138.conf' ... 
    Generating 'SCs' ... 
    Building 'diskManager.cfg' ['SC' block] from 'sapc_vm_CXP9030138.conf' ... 
    Generating 'TPs' ... 
    No nodes for TP node type 
    XML successfully created under '/mnt/store/SAPC/host-config/VM/vms'
  10. Create and boot the Virtual Machine for the SC-1.
    Host_1:# /mnt/store/SAPC/host-config/scripts/management/sapc_vm-manager_cxp9030138.sh -c reset -i /mnt/images/originalImage -s 1 -x
    --------------
     | VM Manager | 
    -------------- 
    
     Remote preconfiguration ...  
    Preconfiguring [<Host_1>]: '<Host_1>' -> '<Host_1>' 
    No public key found Generating private key ('/mnt/store/SAPC/host-config/keys/id_rsa')... 
    Installing public key on '<Host_1>'... 
    Password: 
     
    Executing 'reset' on 'SC-1' [<Host_1>]...
  11. Check the running state of the Virtual Machine.
    Host_1:# virsh list --all
    Id    Name                           State
    ----------------------------------------------------
    <id>    SC-1.<Host_1>               running
  12. Check that the adapt_cluster script finishes correctly, as explained in Adapt Cluster Tool.
  13. Create and boot the Virtual Machine for the SC-2.
    Host_1:# /mnt/store/SAPC/host-config/scripts/management/sapc_vm-manager_cxp9030138.sh -c reset -i /mnt/images/ -s 2 -x

    Wait for the SC-2 to synchronize.

    Host_1:# ssh root@192.168.100.126
    SC-1:# drbd-overview

    The output must contain a line with Connected(2*) Primar/Second UpToDa/UpToDa, as in the following example:

    0:drbd0/0 Connected(2*) Primar/Second UpToDa/UpToDa lvm-pv: lde-cluster-vg 95.87g 48.07g
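    While waiting for the synchronization, an optional convenience (not part of the original procedure) is to poll the DRBD state until the line above shows Connected and up-to-date disks:
    SC-1:# watch -n 10 drbd-overview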
  14. Power on PL-3 and PL-4.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      InstallationServer:# ssh -p 2024 advanced@<DMX>
      DMX:> configure
      DMX:(config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC,Blade=0-5,administrativeState=UNLOCKED
      DMX:(config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC,Blade=0-7,administrativeState=UNLOCKED
    2. NSP 6.1
      InstallationServer:# ssh -p 2024 expert@<DMX>
      DMX:> configure
      DMX:(config)% set ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot 0-1 Blade 1 administrativeState unlocked
      DMX:(config)% set ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot 0-3 Blade 1 administrativeState unlocked

    Commit the changes and exit from the CLI.

    DMX:(config)> commit
    DMX:> exit
  15. Change the key exchange algorithm in the SSH configuration on NSP 6.1.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100

      Skip this step.

    2. NSP 6.1
      Host_1:# echo "KexAlgorithms +diffie-hellman-group1-sha1" >> /root/.ssh/config
      Host_2:# echo "KexAlgorithms +diffie-hellman-group1-sha1" >> /root/.ssh/config
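      Optionally, confirm that the SSH client supports the legacy algorithm (requires OpenSSH 6.3 or later; this check is not part of the original procedure):
      Host_1:# ssh -Q kex | grep diffie-hellman-group1-sha1
      diffie-hellman-group1-sha1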
  16. Expand the rest of the blades as explained in SAPC PNF Scale Out.
  17. In case IPv6-only is configured, an additional step is required to get alarms raised when the connection with an essential router is lost. To configure these supervised gateways, refer to Configuration of supervised gateways for IPv6-only with OSPF.

3.6.1 SAPC Status Verification

To verify the SAPC status, the deployment includes a health-check script called sapcHealthCheck. Execute it after the deployment has finished; the script also remains available for later use. For more details, refer to the SAPC Advanced Troubleshooting Guideline document.
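An illustrative invocation follows (treat it as a sketch; the exact location and options of the script are described in the referenced guideline):

SC-1:# sapcHealthCheck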

3.7 SAPC Configuration

3.7.1 Performance Management

As a result of the installation, all the SAPC counters are active.

For further information on Performance Management, refer to Measurements.

3.7.3 Fault Management Configuration

To be able to send alarms, configure SNMP.

For further information on Fault Management Configuration, refer to Fault Management.

For security reasons, it is highly recommended to use Create SNMPv3 Target for Fault Management.

Legacy versions can also be used; refer to Create SNMPv2C Target or Create SNMPv1 Target.

3.7.4 End-User Notifications Configuration

Before installing the End-User Notification module, check the connectivity between the SAPC and the servers (SMS Center or HTTP server).

In addition, the default values of the environment variables must be changed according to the specific on-site deployment.

For further information about these variables, refer to the System Administrator Guide.

3.7.5 Licenses Configuration

Steps

  1. Set the fingerprint to the value given during license ordering. Read the License Fingerprinting section in the LM User Guide for ELIM.
  2. Install the license key file following Install License Key File.
  3. Check the license information following View License Information.

3.8 Final Backups

3.8.1 System Data Backup

To create a system data backup, follow the instructions in Backup and Restore.

3.8.2 Emergency Recovery Backup

The emergency recovery backup is used as part of the procedure described in the SAPC Emergency Recovery Procedure document.

Steps

  1. From the installation server, access the blade hosting SC-1.
    <InstallationServer>:# ssh root@<Host_1>
  2. Bring down all guests.
    <Host_1>:# /mnt/store/SAPC/host-config/scripts/management/sapc_vm-manager_cxp9030138.sh -c stop
  3. Shrink the images.
    <Host_1>:# mkdir /mnt/images/tmp_sparsify
    <Host_1>:# export TMPDIR=/mnt/images/tmp_sparsify
    <Host_1>:# virt-sparsify --check-tmpdir continue /mnt/images/sapc_sc-1_cxp9030138.qcow2 /mnt/images/sapc_sc-1_cxp9030138.qcow2.SHRUNK
    <Host_1>:# mv /mnt/images/sapc_sc-1_cxp9030138.qcow2.SHRUNK /mnt/images/sapc_sc-1_cxp9030138.qcow2
    <Host_1>:# ssh root@<Host_2>
    <Host_2>:# mkdir /mnt/images/tmp_sparsify
    <Host_2>:# export TMPDIR=/mnt/images/tmp_sparsify
    <Host_2>:# virt-sparsify --check-tmpdir continue /mnt/images/sapc_sc-2_cxp9030138.qcow2 /mnt/images/sapc_sc-2_cxp9030138.qcow2.SHRUNK
    <Host_2>:# mv /mnt/images/sapc_sc-2_cxp9030138.qcow2.SHRUNK /mnt/images/sapc_sc-2_cxp9030138.qcow2
    <Host_2>:# exit
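    To see the effect of the shrink (an optional check, not in the original procedure), compare the allocated size of each image before and after with du:
    <Host_1>:# du -h /mnt/images/sapc_sc-1_cxp9030138.qcow2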
  4. Copy all the qcow2 files to the external device. Access all the host machines one by one and copy the following from each of them.
    <Host_1>:# scp /mnt/images/*.qcow2 <user>@<EXTERNAL_DEVICE>:/<EXTERNAL_BACKUP_DIRECTORY>
  5. Bring up all guests.
    <Host_1>:# /mnt/store/SAPC/host-config/scripts/management/sapc_vm-manager_cxp9030138.sh -c restart

4 Installation for Geographical Redundancy Deployment

To perform the SAPC installation with Geographical Redundancy, install the SAPC1 and SAPC2 clusters as stated in Installation for Standalone Deployment, and create the adapt_cluster.cfg file according to the desired Geographical Redundancy type (Active-Active or Active-Standby). To create those files, refer to Adapt Cluster Tool.

  • SAPC1 Cluster: Configure this cluster as Preferred.

    Note: For Active-Standby Geographical Redundancy, the Preferred SAPC is recommended to be the Active SAPC.
  • SAPC2 Cluster: Configure this cluster as Non-Preferred.

    Note: For Active-Standby Geographical Redundancy, the Non-Preferred SAPC is recommended to be the Standby SAPC.

5 SAPC PNF Deployment Annex

5.1 Creating sapc_vm_CXP9030138.conf File

Steps

  1. From the installation server, access <Host_1>.
    <InstallationServer>:# ssh root@<Host_1>
    <Host_1>:# cd /mnt/store/SAPC/host-config/config/
  2. There is one directory per hardware type:
    PNF/BSP/2SC-2LBTP for BSP 8100.
    PNF/NSP/2SC-2LBTP for NSP 6.1.
  3. Each directory contains the corresponding sapc_vm_CXP9030138 configuration file. Copy the file for the chosen hardware:
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100
      <Host_1>:# cp /mnt/store/SAPC/host-config/config/PNF/BSP/2SC-2LBTP/sapc_vm_CXP9030138_BSP_2SC-2LBTP.conf /mnt/store/SAPC/host-config/config/sapc_vm_CXP9030138.conf
    2. NSP 6.1
      <Host_1>:# cp /mnt/store/SAPC/host-config/config/PNF/NSP/2SC-2LBTP/sapc_vm_CXP9030138_NSP_2SC-2LBTP.conf /mnt/store/SAPC/host-config/config/sapc_vm_CXP9030138.conf
  4. The next step is to modify the file /mnt/store/SAPC/host-config/config/sapc_vm_CXP9030138.conf with the specific parameters of the installation.
    Do!

    For both BSP 8100 and NSP 6.1 installations, the only parameter to be modified is DestinationNode_sc, with the hostnames of the blades hosting SC-1 (<Host_1>) and SC-2 (<Host_2>). The rest of the parameters are already set to the desired values in the files delivered as part of the software. For other hardware, use adequate values depending on the available resources.

5.1.1 Modifying the File

The parameters included in the file are explained here.

Table 3   Configuration Parameters

Parameter             Description
Count_sc              Number of System Controllers. Always 2; no change needed.
Cpus_sc               Number of virtual CPUs per System Controller. Use the Dimensioning Guidelines to assign this value.
Mem_sc                Memory for each System Controller Virtual Machine. Use the Dimensioning Guidelines to assign this value.
Disk_sc               Free disk space for each System Controller Virtual Machine. Use the Dimensioning Guidelines to assign this value.
Pinning_sc            Virtual CPU pinning assignment for each System Controller Virtual Machine. Use the Dimensioning Guidelines to assign this value.
VMname_sc             Names given to the System Controller Virtual Machines. No change needed.
Networks_sc           Virtual switches to which the System Controller Virtual Machines are attached. No change needed.
MACs_sc               MAC addresses used for the System Controllers. No change needed.
DestinationNode_sc    Hostnames of the blades where the System Controllers run.
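As an illustration only (the exact syntax of the delivered file may differ, so treat this as a sketch), the DestinationNode_sc change described above could look as follows, with <Host_1> and <Host_2> as placeholders for the real hostnames:

<Host_1>:# vi /mnt/store/SAPC/host-config/config/sapc_vm_CXP9030138.conf
DestinationNode_sc="<Host_1> <Host_2>"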