System Expansion for Native Deployment
Ericsson Dynamic Activation 1

Contents

1   Introduction
1.1   Purpose and Scope
1.2   Target Group
1.3   Typographic Conventions
2   Tools
3   License Prerequisites for Expanding a Dynamic Activation Cluster
4   Blade Insertion
5   Expansion of Dynamic Activation Using GEP3 Blades
5.1   Expansion - GEP3
6   Expansion of Dynamic Activation Using GEP5 Blades
6.1   Expansion - GEP5

Reference List

1   Introduction

This document describes the expansion process in Ericsson Dynamic Activation (EDA), for both GEP3 and GEP5.

The hardware in this configuration comprises 4 to 12 GEP blades, two switches (SCXB3), and two routers (CMXB3).

1.1   Purpose and Scope

This document focuses on the expansion of Dynamic Activation using GEP3 or GEP5 blades.

1.2   Target Group

The target groups for this document are described in detail in the Library Overview, Reference [1].

1.3   Typographic Conventions

Typographic conventions are described in the document Library Overview, Reference [1].

For information about abbreviations used throughout this document refer to Glossary of Terms and Acronyms, Reference [2].

2   Tools

The following tools are required:

- Screwdriver of type TORX T5, or torque screwdriver with TORX T5 bit (for the cable)
- Screwdriver of type TORX T8, or torque screwdriver with TORX T8 bit (for the blade)
- Electrostatic Discharge (ESD) strap

3   License Prerequisites for Expanding a Dynamic Activation Cluster

Attention!

Make sure to have a valid Dynamic Activation 1 license reflecting the number of GEP blades that the cluster will be expanded to.

4   Blade Insertion

Attention!

Only new blades can be used for system expansion, which means that there must be no data, partitions, or Cassandra data on the blades.

Insert the new blade and connect a new cable for the console.

Caution!

Always wear an Electrostatic Discharge (ESD) strap when touching the hardware in the rack. Sensitive components, such as Integrated Circuits (ICs), can be damaged by discharges of static electricity.

Note:  
Use the screwdriver of type TORX T5, or torque screwdriver with TORX T5 bit for the cable.

Use the screwdriver of type TORX T8, or torque screwdriver with TORX T8 bit for the blade.


Continue with the expansion according to the hardware used; see the corresponding subsection below.

5   Expansion of Dynamic Activation Using GEP3 Blades

This section contains instructions for how to expand an existing cluster installation of the Dynamic Activation system when using GEP3 blades.

5.1   Expansion - GEP3

This section is valid for:

Note:  
All commands throughout this section are run as user root.

Caution!

Ensure that the blades used for system expansion are new ones, containing no partitions or Cassandra data.

Otherwise, the already installed system can be damaged.
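The "no partitions" requirement above can be checked from lsblk output before a blade is taken into the cluster. The following is a minimal sketch, not product tooling: it only shows the decision logic, and the sample lsblk-style output and device names are illustrative assumptions. On a real blade you would feed it the output of lsblk -n -o NAME,TYPE for the disk in question.

```shell
# Sketch: a disk counts as "clean" if its lsblk listing contains no
# entries of TYPE "part". Sample data below is illustrative only.
is_clean() {
  ! printf '%s\n' "$1" | awk '$2 == "part"' | grep -q .
}

clean_disk='sda disk'
used_disk='sdb disk
sdb1 part'

is_clean "$clean_disk" && echo "sda: clean"
is_clean "$used_disk" || echo "sdb: has partitions, wipe before expansion"
```

A disk that shows partitions (or any filesystem signature reported by blkid) must be wiped before the expansion procedure is started.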

Before starting the expansion procedure (step-list) below, the following must be performed for every added PL node:

After the above configuration requirements have been completed, continue with the step-list below:

Note:  
In the examples below, it is assumed that a third PL node (PL-5) will be added.

  1. Reload cluster.conf on all nodes:

    # cluster config -r -a

  2. Add the linux-payload RPM package to the new PL node in the cluster:

    # cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n <node_ID>

    To expand a four-node cluster to a five-node cluster, use the following commands:

    # cd /cluster/rpms

    # cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n 5

  3. Change directory to /var/log/installfiles/<prod_number>-<version>/evip

    # cd /var/log/installfiles/<prod_number>-<version>/evip

  4. Extract and run the installation script for eVIP. Execute the following commands:

    # tar xvf <prod_number>-<version>.sdp

    # ./bundle extend_standalone <hostname>

    <hostname> is the hostname of the newly added node.

  5. Log in to the DMXC and unlock PL-5:

    Make sure to enter configuration mode:

    > configure

    (config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=<slot_position>,Blade=1,administrativeState=UNLOCKED

    > commit -s

    The <slot_position> variable corresponds to the slot position on the GEP3 blade.

    To expand a four-node cluster to a five-node cluster, use the following command:

    (config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=9,Blade=1,administrativeState=UNLOCKED

    Commit the changes:

    > commit -s

  6. Partition the disk on the added PL node according to instructions in section Partitioning on PL Nodes in Hardware Installation and IP Infrastructure Setup for Native Deployment GEP3, Reference [3].
    Note:  
    Depending on the revision of the new GEP3 blade, the 300 GB disk can be either on the sda or sdb device.

  7. To add the new node, go to /var/log/installfiles/<prod_number>-<version> and run the following command:

    On the first control node (SC-1):

    # cd /var/log/installfiles/<prod_number>-<version>

    # ./ema addnode --host <hostname>

  8. Run test traffic to test the node:
    1. Check if it is possible to send traffic through the new PL node. Use the test port 8888 or 8989.
    2. Check that Test mode is enabled for the modules of the added node:

      # bootloader.py node status --host <hostname>

    3. After traffic has been tested over the test port, disable the test port and enable the regular traffic port for live traffic:

      # bootloader.py config remove --parameter @REGISTER_TEST_SERVICES@

      # bootloader.py config remove --parameter @REGISTER_SERVICES@

      # bootloader.py node activate --host <new_PL_hostname>

      # bootloader.py node activate --host <SC-1_hostname>

  9. Check Dynamic Activation status on all nodes.

    From any node, locally:

    # 3ppmon status --host all

    From an SC node:

    # bootloader.py node status --host all

  10. Check /var/logs/<PL-5_hostname>/messages. Make sure it does not contain any error messages.
  11. To add more blades in the cluster, repeat Step 1 to Step 10 for blades 6-n.
  12. Activate all of the nodes, one at a time, so that they learn about the expanded node(s), in a rolling procedure over SC-1, SC-2, and PL-3:

    # bootloader.py node activate --host <SC-1_hostname>

    # bootloader.py node activate --host <SC-2_hostname>

    # bootloader.py node activate --host <PL-3_hostname>
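The log check in Step 10 can be scripted. The following is a minimal sketch, assuming a simple error-pattern grep; the pattern list and the sample log line are illustrative assumptions, not defined by the product. In practice the function would be pointed at the node's messages log.

```shell
# Sketch of the Step 10 log check. Error patterns and the sample log
# line are assumptions for illustration only.
has_errors() {
  grep -Eiq 'error|fail|crit' "$1"   # assumed patterns, adjust as needed
}

log=$(mktemp)
printf '%s\n' 'Jan 01 12:00:00 pl5 kernel: eth0 link up' > "$log"

if has_errors "$log"; then
  echo "inspect $log for details"
else
  echo "no errors found"   # prints this for the clean sample log
fi
```

Any match should be investigated before further blades are added.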

6   Expansion of Dynamic Activation Using GEP5 Blades

This section contains instructions for how to expand an existing cluster installation of the Dynamic Activation system when using GEP5 blades.

6.1   Expansion - GEP5

This section is valid for:

Note:  
All commands throughout this section are run as user root.

Caution!

Ensure that the blades used for system expansion are new ones, containing no partitions or Cassandra data.

Otherwise, the already installed system can be damaged.

Before starting the expansion procedure (step-list) below, the following must be performed for every added PL node:

After the above configuration requirements have been completed, continue with the step-list below:

Note:  
In the examples below, it is assumed that a third PL node (PL-5) will be added.

  1. Reload cluster.conf on all nodes:

    # cluster config -r -a

  2. Add the linux-payload RPM package to the new PL node in the cluster:

    # cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n <node_ID>

    To expand a four-node cluster to a five-node cluster, use the following commands:

    # cd /cluster/rpms

    # cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n 5

  3. Change directory to /var/log/installfiles/<prod_number>-<version>/evip

    # cd /var/log/installfiles/<prod_number>-<version>/evip

  4. Extract and run the installation script for eVIP. Execute the following commands:

    # tar xvf <prod_number>-<version>.sdp

    # ./bundle extend_standalone <hostname>

    <hostname> is the hostname of the newly added node.

  5. Log in to the DMXC and unlock PL-5:

    Make sure to enter configuration mode:

    > configure

    (config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=<slot_position>,Blade=1,administrativeState=UNLOCKED

    > commit -s

    The <slot_position> variable corresponds to the slot position on the GEP5 blade.

    To expand a four-node cluster to a five-node cluster, use the following command:

    (config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=9,Blade=1,administrativeState=UNLOCKED

    Commit the changes:

    > commit -s

  6. To add the new node, go to /var/log/installfiles/<prod_number>-<version> and run the following command:

    On the first control node (SC-1):

    # cd /var/log/installfiles/<prod_number>-<version>

    # ./ema addnode --host <hostname>

  7. Run test traffic to test the node:
    1. Check if it is possible to send traffic through the new PL node. Use the test port 8888 or 8989.
    2. Check that Test mode is enabled for the modules of the added node:

      # bootloader.py node status --host <hostname>

    3. After traffic has been tested over the test port, disable the test port and enable the regular traffic port for live traffic:

      # bootloader.py config remove --parameter @REGISTER_TEST_SERVICES@

      # bootloader.py config remove --parameter @REGISTER_SERVICES@

      # bootloader.py node activate --host <new_PL_hostname>

      # bootloader.py node activate --host <SC-1_hostname>

  8. Check Dynamic Activation status on all nodes.

    From any node, locally:

    # 3ppmon status --host all

    From an SC node:

    # bootloader.py node status --host all

  9. Check /var/logs/<PL-5_hostname>/messages. Make sure it does not contain any error messages.
  10. To add more blades in the cluster, repeat Step 1 to Step 9 for blades 6-n.
  11. Activate all of the nodes, one at a time, so that they learn about the expanded node(s), in a rolling procedure over SC-1, SC-2, and PL-3:

    # bootloader.py node activate --host <SC-1_hostname>

    # bootloader.py node activate --host <SC-2_hostname>

    # bootloader.py node activate --host <PL-3_hostname>
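The rolling activation in Step 11 can be expressed as a loop. This is a sketch only: the hostnames are placeholders, and the actual bootloader.py call is left commented out, since the point here is the one-at-a-time ordering rather than the tooling.

```shell
# Sketch of the rolling activation order from Step 11. Hostnames are
# placeholder assumptions; the real activation command is commented out.
for host in SC-1 SC-2 PL-3; do
  echo "activating $host"
  # bootloader.py node activate --host "$host"
done
```

Activating the nodes strictly in sequence keeps the cluster serving traffic while each node picks up knowledge of the expanded node(s).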


Reference List

Ericsson Documents
[1] Library Overview, 18/1553-CSH 109 628 Uen
[2] Glossary of Terms and Acronyms, 0033-CSH 109 628 Uen
[3] Hardware Installation and IP Infrastructure Setup for Native Deployment GEP3, 2/1531-CSH 109 628 Uen
[4] Hardware Installation and IP Infrastructure Setup for Native Deployment GEP5, 3/1531-CSH 109 628 Uen
[5] System Administrators Guide for Native Deployment, 1/1543-CSH 109 628 Uen


Copyright

© Ericsson AB 2017. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document Trademark Information.
