1 Introduction
This document describes the expansion process in Ericsson Dynamic Activation (EDA), for both GEP3 and GEP5.
The hardware configuration consists of 4-12 GEP blades, two switches (SCXB3), and two routers (CMXB3).
1.1 Purpose and Scope
This document focuses on the expansion of Dynamic Activation using GEP3 or GEP5 blades.
1.2 Target Group
The target groups for this document are as follows:
- Ericsson installation engineers
- Other Dynamic Activation related engineers
The target groups are described in more detail in the Library Overview, Reference [1].
1.3 Typographic Conventions
Typographic conventions are described in the document Library Overview, Reference [1].
For information about abbreviations used throughout this document refer to Glossary of Terms and Acronyms, Reference [2].
2 Tools
The following tools are required:
- A screwdriver of type TORX T5, or a torque screwdriver with a TORX T5 bit, is needed for the cables.
- A screwdriver of type TORX T8, or a torque screwdriver with a TORX T8 bit, is needed for the GEP blade.
- A Console Server is required for remote login to the GEP and SCX blades. Alternatively, a terminal or a PC with a serial port can be used.
3 License Prerequisites for Expanding a Dynamic Activation Cluster
Make sure to have a valid Dynamic Activation 1 license reflecting the number of GEP blades that the cluster will be expanded to.
4 Blade Insertion
Only new blades can be used for system expansion, which means that there must be no data, partitions, or Cassandra data on the blades.
Insert the new blade and connect a new cable for the console.
Always wear an Electrostatic Discharge (ESD) strap when touching the hardware in the rack. Sensitive components, such as Integrated Circuits (ICs), can be damaged by discharges of static electricity.
- Note:
- Use the screwdriver of type TORX T5, or a torque screwdriver with a TORX T5 bit, for the cable. Use the screwdriver of type TORX T8, or a torque screwdriver with a TORX T8 bit, for the blade.
Continue with the expansion according to the hardware used; see the corresponding section below.
5 Expansion of Dynamic Activation Using GEP3 Blades
This section contains instructions for how to expand an existing cluster installation of the Dynamic Activation system when using GEP3 blades.
5.1 Expansion - GEP3
This section is valid for:
- GEP3, SCXB3, and CMXB3
- Note:
- All commands throughout this section are run as user root.
Ensure that the blades used for system expansion are new ones, containing no partitions or Cassandra data.
Otherwise, the already installed system can be damaged. Prior to the expansion procedure (step-list) below, the following must be performed for every added PL node:
- Log on to the CLI:
# ssh -p 2024 advanced@<BSP_NBI_IP>
- Make sure the boot device order is correctly set. See instructions in section Change Boot Device Order and Baud Rate on GEP Blades in Hardware Installation and IP Infrastructure Setup for Native Deployment GEP3, Reference [3].
- Make sure the power technology is disabled. See instructions in section Disable Power Technology in Hardware Installation and IP Infrastructure Setup for Native Deployment GEP3, Reference [3].
- Manually update the cluster.conf file for each PL node used for expanding the Dynamic Activation
GEP3 system.
The following example shows how to edit cluster.conf when expanding a four node cluster to a five node cluster.
On SC-1, update cluster.conf in directory /cluster/etc with the MAC address for the new blade.
To get the MAC address for the new blade, perform the following:
- Log on to the CLI:
# ssh -p 2024 advanced@<BSP_NBI_IP>
- Enter password for user advanced
- Enter the command to show the MAC addresses.
> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=pg
> show-table -m Blade -p bladeId,firstMacAddr
A printout shows all MAC addresses of all blades in the PG group (the following printout is just an example).
===============================
| bladeId | firstMacAddr      |
===============================
| 0-1     | 00:1e:df:79:ec:f9 |
| 0-3     | 90:55:AE:3B:07:95 |
| 0-5     | 90:55:AE:3B:05:FD |
| 0-7     | 90:55:AE:3B:09:8D |
| 0-9     | 04:4E:06:D0:33:29 |  (This is the MAC address of the new blade)
===============================
Add the expanded node_ID to the boot server list, as shown in the example below.
node 5 payload <PL-5_hostname>
#First mac address 04:4E:06:D0:33:29
interface 5 eth3 ethernet 04:4E:06:D0:33:2a
interface 5 eth4 ethernet 04:4E:06:D0:33:2b
interface 5 eth5 ethernet 04:4E:06:D0:33:31
interface 5 eth6 ethernet 04:4E:06:D0:33:32
After the above configuration requirements have been completed, continue with the step-list below:
- Note:
- In the examples below, it is assumed that a third PL node (PL-5) will be added.
- Reload cluster.conf on all nodes:
# cluster config -r -a
- Add the linux-payload RPM package to the new PL node in
the cluster:
# cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n <node_ID>
In order to expand a four node cluster to a five node cluster, use the following commands:
# cd /cluster/rpms
# cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n 5
- Change directory to /var/log/installfiles/<prod_number>-<version>/evip
# cd /var/log/installfiles/<prod_number>-<version>/evip
- Extract and run the installation script for eVIP. Execute
the following commands:
# tar xvf <prod_number>-<version>.sdp
# ./bundle extend_standalone <hostname>
<hostname> is the hostname of the newly added node.
- Log in to the DMXC and unlock PL-5:
Make sure to enter configuration mode:
> configure
(config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=<slot_position>,Blade=1,administrativeState=UNLOCKED
> commit -s
The <slot_position> variable corresponds to the slot position on the GEP3 blade.
In order to expand a four node cluster to a five node cluster, use the following command:
(config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=9,Blade=1,administrativeState=UNLOCKED
Commit the changes:
> commit -s
- Partition the disk on the added PL node according to instructions
in section Partitioning on PL Nodes in Hardware Installation and IP Infrastructure Setup for Native Deployment GEP3, Reference [3].
- Note:
- Depending on the revision of the new GEP3 blade, the 300 GB disk can be either on the sda or sdb device.
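One possible way to check which device holds the 300 GB disk is to list the block devices and their sizes before partitioning. This is a minimal sketch, assuming the standard lsblk utility is available on the PL node:
# lsblk -d -o NAME,SIZE
The device that reports a size of approximately 300 GB is the one to partition according to the referenced instructions.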
- To add the new node, go to /var/log/installfiles/<prod_number>-<version> and
run the following command:
On the first control node (SC-1):
# cd /var/log/installfiles/<prod_number>-<version>
# ./ema addnode --host <hostname>
- Run test traffic to test the node:
- Check if it is possible to send traffic through the new PL node. Use the test port 8888 or 8989 (a minimal reachability sketch is given at the end of this step).
- Check the modules of the added node to verify that Test mode is enabled:
# bootloader.py node status --host <hostname>
- After traffic has been tested over the test port, disable the test port and enable the regular traffic port for live traffic:
# bootloader.py config remove --parameter @REGISTER_TEST_SERVICES@
# bootloader.py config remove --parameter @REGISTER_SERVICES@
# bootloader.py node activate --host <new_PL_hostname>
# bootloader.py node activate --host <SC-1_hostname>
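As referenced in the first sub-step above, the following is a minimal reachability sketch for the test port, assuming that netcat (nc) is available on the node from which the test traffic is sent and that <new_PL_hostname> is a placeholder for the new PL node:
# nc -zv <new_PL_hostname> 8888
A succeeded or open result only confirms that the test port accepts TCP connections; the actual test traffic must still be sent as described in that sub-step.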
- Check Dynamic Activation status on all nodes.
From any node, locally:
# 3ppmon status --host all
From an SC node:
# bootloader.py node status --host all
- Check /var/logs/<PL-5_hostname>/messages. Make sure it does not contain any error messages.
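For example, a quick scan for error indications can be made as follows (a minimal sketch, assuming a standard grep is available; the file path is the one referenced in the step above):
# grep -i error /var/logs/<PL-5_hostname>/messages
No output indicates that no lines containing the word error were found; any hits should be investigated before continuing.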
- To add more blades in the cluster, repeat Step 1 to Step 10 for blades 6-n.
- Activate all of the nodes, one at a time, so that they become aware of the expanded node(s), in a rolling procedure over SC-1, SC-2, and PL-3:
# bootloader.py node activate --host <SC-1_hostname>
# bootloader.py node activate --host <SC-2_hostname>
# bootloader.py node activate --host <PL-3_hostname>
6 Expansion of Dynamic Activation Using GEP5 Blades
This section contains instructions for how to expand an existing cluster installation of the Dynamic Activation system when using GEP5 blades.
6.1 Expansion - GEP5
This section is valid for:
- GEP5, SCXB3, and CMXB3
- Note:
- All commands throughout this section are run as user root.
Ensure that the blades used for system expansion are new ones, containing no partitions or Cassandra data.
Otherwise, the already installed system can be damaged. Prior to the expansion procedure (step-list) below, the following must be performed for every added PL node:
- Log on to the CLI:
# ssh -p 2024 advanced@<BSP_NBI_IP>
- Make sure the boot device order is correctly set. See instructions in section Change Boot Device Order and Baud Rate on GEP Blades in Hardware Installation and IP Infrastructure Setup for Native Deployment GEP5, Reference [4].
- Make sure the power technology is disabled. See instructions in section Disable Power Technology in Hardware Installation and IP Infrastructure Setup for Native Deployment GEP5, Reference [4].
- Manually update the cluster.conf file for each PL node used for expanding the Dynamic Activation
GEP5 system.
The following example shows how to edit cluster.conf when expanding a four node cluster to a five node cluster.
On SC-1, update cluster.conf in directory /cluster/etc with the MAC address for the new blade.
To get the MAC address for the new blade, perform the following:
- Log on to the CLI:
# ssh -p 2024 advanced@<BSP_NBI_IP>
- Enter password for user advanced
- Enter the command to show the MAC addresses.
> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=pg
> show-table -m Blade -p bladeId,firstMacAddr
A printout shows all MAC addresses of all blades in the PG group (the following printout is just an example).
===============================
| bladeId | firstMacAddr      |
===============================
| 0-1     | 00:1e:df:79:ec:f9 |
| 0-3     | 90:55:AE:3B:07:95 |
| 0-5     | 90:55:AE:3B:05:FD |
| 0-7     | 90:55:AE:3B:09:8D |
| 0-9     | 04:4E:06:D0:33:29 |  (This is the MAC address of the new blade)
===============================
Add the expanded node_ID to the boot server list, as shown in the example below.
node 5 payload <HOSTNAME-PL-5>
#First mac address 04:4E:06:D0:33:29
interface 5 eth3 ethernet 04:4E:06:D0:33:2a
interface 5 eth4 ethernet 04:4E:06:D0:33:2b
interface 5 eth5 ethernet 04:4E:06:D0:33:2e
interface 5 eth6 ethernet 04:4E:06:D0:33:2f
After the above configuration requirements have been completed, continue with the step-list below:
- Note:
- In the examples below, it is assumed that a third PL node (PL-5) will be added.
- Reload cluster.conf on all nodes:
# cluster config -r -a
- Add the linux-payload RPM package to the new PL node in
the cluster:
# cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n <node_ID>
In order to expand a four node cluster to a five node cluster, use the following commands:
# cd /cluster/rpms
# cluster rpm -a ldews-payload-cxp9020125-<version>.x86_64.rpm -n 5
- Change directory to /var/log/installfiles/<prod_number>-<version>/evip
# cd /var/log/installfiles/<prod_number>-<version>/evip
- Extract and run the installation script for eVIP. Execute
the following commands:
# tar xvf <prod_number>-<version>.sdp
# ./bundle extend_standalone <hostname>
<hostname> is the hostname of the newly added node.
- Log in to the DMXC and unlock PL-5:
Make sure to enter configuration mode:
> configure
(config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=<slot_position>,Blade=1,administrativeState=UNLOCKED
> commit -s
The <slot_position> variable corresponds to the slot position on the GEP5 blade.
In order to expand a four node cluster to a five node cluster, use the following command:
(config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=9,Blade=1,administrativeState=UNLOCKED
Commit the changes:
> commit -s
- To add the new node, go to /var/log/installfiles/<prod_number>-<version> and
run the following command:
On the first control node (SC-1):
# cd /var/log/installfiles/<prod_number>-<version>
# ./ema addnode --host <hostname>
- Run test traffic to test the node:
- Check if it is possible to send traffic through the new PL node. Use the test port 8888 or 8989 (a minimal reachability sketch is given at the end of this step).
- Check the modules of the added node to verify that Test mode is enabled:
# bootloader.py node status --host <hostname>
- After traffic has been tested over the test port, disable the test port and enable the regular traffic port for live traffic:
# bootloader.py config remove --parameter @REGISTER_TEST_SERVICES@
# bootloader.py config remove --parameter @REGISTER_SERVICES@
# bootloader.py node activate --host <new_PL_hostname>
# bootloader.py node activate --host <SC-1_hostname>
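As referenced in the first sub-step above, the following is a minimal reachability sketch for the test port, assuming that netcat (nc) is available on the node from which the test traffic is sent and that <new_PL_hostname> is a placeholder for the new PL node:
# nc -zv <new_PL_hostname> 8888
A succeeded or open result only confirms that the test port accepts TCP connections; the actual test traffic must still be sent as described in that sub-step.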
- Check Dynamic Activation status on all nodes.
From any node, locally:
# 3ppmon status --host all
From an SC node:
# bootloader.py node status --host all
- Check /var/logs/<PL-5_hostname>/messages. Make sure it does not contain any error messages.
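For example, a quick scan for error indications can be made as follows (a minimal sketch, assuming a standard grep is available; the file path is the one referenced in the step above):
# grep -i error /var/logs/<PL-5_hostname>/messages
No output indicates that no lines containing the word error were found; any hits should be investigated before continuing.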
- To add more blades in the cluster, repeat Step 1 to Step 9 for blades 6-n.
- Activate all of the nodes, one at a time, so that they become aware of the expanded node(s), in a rolling procedure over SC-1, SC-2, and PL-3:
# bootloader.py node activate --host <SC-1_hostname>
# bootloader.py node activate --host <SC-2_hostname>
# bootloader.py node activate --host <PL-3_hostname>
Reference List
| Ericsson Documents |
|---|
| [1] Library Overview, 18/1553-CSH 109 628 Uen |
| [2] Glossary of Terms and Acronyms, 0033-CSH 109 628 Uen |
| [3] Hardware Installation and IP Infrastructure Setup for Native Deployment GEP3, 2/1531-CSH 109 628 Uen |
| [4] Hardware Installation and IP Infrastructure Setup for Native Deployment GEP5, 3/1531-CSH 109 628 Uen |
| [5] System Administrators Guide for Native Deployment, 1/1543-CSH 109 628 Uen |
