LDEwS SW Installation
Linux Distribution Extensions

Contents

1   Introduction
1.1   Prerequisites
1.2   Related Information
1.3   Revision Information

2   Preparing to Install LDEwS
2.1   Network Configuration
2.2   Four or More Switches with TIPC
2.3   Installation Media
2.4   Updating Cluster Configuration File
2.5   Preparing to Install or Upgrade LDEwS with Core MW

3   Manual Installation of LDEwS
3.1   Installing First Control Node
3.2   Installing Second Control Node
3.3   Installing Payload Nodes
3.4   Installing a Single Node System

4   Automatic installation of LDEwS
4.1   Preparing the installation server or the flash media
4.2   File structure under tftp root in the installation server
4.3   Configuration files
4.4   Hooks
4.5   Application repository
4.6   Interacting with the automatic installation

5   Installing LDEwS using AIT
5.1   Initial installation
5.2   Campaign installation

6   Installing LDEwS with Core MW
6.1   Installing BRF Participants

7   Upgrading LDEwS with Core MW 3.2 or greater (BFU Upgrade)
7.1   Upgrading the BRF participant
7.2   Upgrading BRF Participant to BRF Script Participant

8   Upgrading LDEwS with Core MW prior to version 3.2 (non-BFU)
8.1   Downgrading LDEwS with Core MW
8.2   Upgrading the BRF participant

9   LDEwS System Recovery via secondary media
9.1   Prerequisites
9.2   Preparing Installation Repository
9.3   Installing LDEwS from Repository
9.4   Restoring Application from BRF backup

10   Adding, Removing, and Replacing a Node
10.1   Adding a Second Control Node
10.2   Adding Payload Nodes
10.3   Removing a Node
10.4   Replacing a Node

11   Backup Restore
11.1   Restoring a Backup

12   Post Installation Activities

Reference List

1   Introduction

This document describes how the Linux® Distribution Extensions with SUSE Linux Enterprise Server (LDEwS) product can be installed and configured. It also describes the adaptations for running the LDE product in Core Middleware (Core MW).

Scope

This document covers the following topics:

Target Groups

This document is intended for personnel handling installations.

1.1   Prerequisites

This section describes the prerequisites which must be fulfilled before LDEwS can be installed or updated on a Core MW system.

1.1.1   Hardware and Software Required

This section describes the required hardware and software.

For Installing LDEwS

The following hardware is required:

Before the installation can be performed, the following conditions must apply:

For Installing or Upgrading LDEwS for Running LDEwS in Core MW

A prerequisite is that LDEwS is installed.

In addition, Core MW requires all software to be installed as Software Delivery Packages (SDPs) to be able to control it.

The following software must be installed:

For Installing LDEwS using AIT

In addition to the requirements for installing LDEwS, AIT must be installed and configured.

For more information on how to install and configure AIT, see

1.1.2   Documents

This section describes the required documents.

For Installing LDEwS

Not applicable.

For Installing or Upgrading LDEwS for Running LDEwS in Core MW

For more information on how software is packaged for Core MW in general, see the following document:

The software management commands are described in the following document:

For Installing LDEwS using AIT

For more information on how AIT is installed and configured, see the following documents:

1.1.3   Tools

Not applicable.

1.1.4   Conditions

Not applicable.

1.2   Related Information

The definition and explanation of acronyms and terminology, information about trademarks used, and typographic conventions can be found in the following documents:

1.3   Revision Information

Other than editorial changes, this document has been revised from revision AA to revision AB according to the following:

2   Preparing to Install LDEwS

This section describes the preparations needed before the LDEwS software is installed.

The preparations in Section 2.1 Network Configuration, Section 2.2 Four or More Switches with TIPC and Section 2.2.1 Migrating from an Old Setup are not applicable for the standalone installation.

The preparations to install or upgrade LDEwS for running LDEwS in Core MW are described in Section 2.5 Preparing to Install or Upgrade LDEwS with Core MW.

2.1   Network Configuration

If the blade system is configured to have a redundant internal network and use Ethernet bonding, then the physical network topology and switch configuration must adhere to the following rules:

Figure 1 illustrates the required network topology for the internal network when using high availability Ethernet bonding. eth0 and eth1 represent the physical Ethernet interfaces that will be bonded together on each blade and bond0 represents the virtual bonded Ethernet interface.

Figure 1   Required Network Topology for High Availability Ethernet Bonding

2.2   Four or More Switches with TIPC

Problems have been observed when using ARP monitoring with high availability Ethernet bonding. ARP monitoring is required when using more than two switches in a hierarchy, as nodes in the cluster are not attached to all switches and therefore cannot detect loss of PHY connectivity, which is the default failure detection method for high availability Ethernet bonding (known as "MII monitoring").

It is recommended to create two additional VLAN networks on top of the ordinary switches, one for each side. The reason for creating one VLAN per side is that TIPC requires plane separation if multiple bearer networks exist.

Note:  
MII monitoring along with this setup is not supported.

The two different VLAN interfaces are declared as in the following cluster.conf example, with VLAN id 500 and VLAN id 501. These lines are added and are not replacing anything that existed before:

interface all eth0.500 vlan
interface all eth1.501 vlan

The two new TIPC bearers must be defined, so where TIPC used to have bond0 or similar as its only bearer, the configuration row beginning with tipc must be replaced with a row specifying the new TIPC bearers. For example, if the configuration contained the following line before:

tipc all dynamic bond0

Instead the declaration is to contain the two VLAN interfaces, as follows:

tipc all dynamic eth0.500 eth1.501

The resulting topology is shown in Figure 2.

Figure 2   Recommended Network Topology for Four or More Switches when Using TIPC

2.2.1   Migrating from an Old Setup

This section describes how to avoid downtime of the cluster service if a system is to be updated from the older configuration to the newer one with multiple TIPC bearers.

To migrate from an old setup, perform the following steps:

  1. Add VLANs to the switches and ensure that there is one VLAN for the eth0 side and a different VLAN for the eth1 side.

    The traffic on these VLANs is to be forwarded between the switches on the same side, but not between SW-00 and SW-01, which hold the links between the two sides.

  2. Upgrade the OS RPM Package Manager (RPM) package or packages on the nodes and do a rolling reboot of the cluster to activate the new OS version.
  3. Edit the cluster.conf file and assign the new VLAN interfaces as bearers to the TIPC. For an example, see Section 2.2 Four or More Switches with TIPC.
  4. Run cluster config --reload --all
  5. Do a rolling reboot of the cluster to activate this configuration.
  6. Verify the new configuration:

    cluster config --validate

  7. Do a non-activating configuration reload:

    Run cp /cluster/etc/cluster.conf /boot/.cluster.conf on both control nodes.

  8. Manually bring up the VLAN interface corresponding to the primary interface of the bonding interface on all nodes.

    Example: forallnodes 'ip link set dev eth0.500 up'

    To determine which interface is active in bond0, run cat /proc/net/bonding/bond0 and look at the line beginning with Currently Active Slave:. Ensure that both the bonding slaves are active when doing this, either by running cat or by checking the current alarms for the cluster.
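    The relevant line in /proc/net/bonding/bond0 looks similar to the following (illustrative output; the actual interface name depends on the configuration):

    Currently Active Slave: eth0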

    Note:  
    The previous and following examples use a function like the following to run commands on all nodes:

    forallnodes() {
      for f in /etc/cluster/nodes/all/*/networks/internal/address
      do
        ssh $(<$f) "$@"
      done
    }

    The bash function declaration can be written into a bash shell on the cluster, and then be used to run commands on all nodes in the cluster.


  9. Add the VLAN interface to the TIPC as a bearer on all nodes.

    Example (note the eth: prefix at the beginning of the bearer name):
    forallnodes tipc-config -be=eth:eth0.500

  10. Ensure that all nodes have contact with each other over the newly added bearer.

    Example: forallnodes tipc-config -l | grep eth0.500

    Example output on a 4-node cluster is as follows (three links from each node, one to every other node):

    1.1.1:eth0.500-1.1.2:eth0.500: up
    1.1.1:eth0.500-1.1.3:eth0.500: up
    1.1.1:eth0.500-1.1.4:eth0.500: up
    1.1.2:eth0.500-1.1.1:eth0.500: up
    1.1.2:eth0.500-1.1.3:eth0.500: up
    1.1.2:eth0.500-1.1.4:eth0.500: up
    1.1.3:eth0.500-1.1.1:eth0.500: up
    1.1.3:eth0.500-1.1.2:eth0.500: up
    1.1.3:eth0.500-1.1.4:eth0.500: up
    1.1.4:eth0.500-1.1.1:eth0.500: up
    1.1.4:eth0.500-1.1.2:eth0.500: up
    1.1.4:eth0.500-1.1.3:eth0.500: up

    If any connections are missing at this point, the switch configuration or cabling is probably wrong.

  11. Do a new rolling reboot of the cluster to enable both bearers. Begin with a rolling reboot of the control nodes.
    Note:  
    During this step, there is no redundant TIPC connectivity between the group of nodes that are rebooted and the group of nodes that are not rebooted. A failure of the primary switch will lead to TIPC connectivity loss between the groups.

  12. Verify the links as in Step 10, but without the grep command.

    Now all nodes are to have two links to every other node over both eth0.500 and eth1.501 in the examples. Only nodes that have not been rebooted are to have links over bond0.

2.3   Installation Media

The software is provided as an ISO image, which is to be burned onto a CD/DVD. With the CD/DVD installation media available, the software can be installed on a system in the following ways:

If you want to do an automatic installation or an installation using AIT then you should use the runtime container, which is packed in the SDP format.

2.3.1   Configuring an Installation Server on Linux

If an installation server is used, the computer that acts as installation server must be connected to the first control node through an Ethernet network, either directly or through a switch.

The installation server must be able to run a Trivial File Transfer Protocol (TFTP) server as well as a Dynamic Host Configuration Protocol (DHCP) server. For simplicity, no other DHCP server is to be running on the network. If another DHCP server is running, it must not answer queries from the control node that is to be installed.

To configure the installation server on Linux, perform the following steps:

  1. Insert the CD/DVD installation media in the CD/DVD drive on the installation server.
  2. Copy the whole structure (that is, all files and directories) from the CD/DVD to the TFTP server root directory (usually /tftpboot).
    Note:  
    Steps 1 and 2 are needed only if you perform a manual installation.

  3. Start the TFTP server and confirm that it is running.

    For details about TFTP server configuration, see the tftpd(8) man page.

    The DHCP server must be set up to answer requests from the control node during installation.

    The control node is to receive the pxelinux.0 file during boot up.

    As a precaution, the DHCP configuration is to be limited to the MAC addresses of the control node, and the network used is to be private to the boot server and the control node that is to be installed. This is done to avoid interference with other networks.

  4. When the DHCP configuration has been updated, start the DHCP server and confirm that it is running.

The following is an example of how to configure a DHCP server. The MAC address and IP addresses are only examples.

#
# /etc/dhcpd.conf
#

ddns-update-style none;

subnet 192.168.0.0 netmask 255.255.255.0 {
       filename "pxelinux.0";
       next-server 192.168.0.10;
       host control1 {
            hardware ethernet 00:02:B3:BC:F8:A1;
            fixed-address 192.168.0.1;
       }
}

# End of file

For more information about DHCP server configuration, see the dhcpd.conf(5) man page.
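As an illustrative check only (assuming a systemd-based installation server; the service names vary between distributions), the servers can be started and verified as follows:

# Start and check the TFTP and DHCP servers (service names are examples)
systemctl start tftp.service dhcpd.service
systemctl status tftp.service dhcpd.service

# If a TFTP client such as tftp-hpa is available, fetching the boot loader
# verifies the TFTP setup end to end (the IP address matches the example above)
tftp 192.168.0.10 -c get pxelinux.0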

2.4   Updating Cluster Configuration File

The system is configured using a cluster configuration file, located at /cluster/etc/cluster.conf, in a running system. During installation of the first control node an initial default configuration is automatically generated.

This file must be updated manually using a text editor, for example, when more nodes are added to the cluster and when subsystems, for example, Ethernet bonding, IP addresses, Domain Name System (DNS), and Network Time Protocol (NTP), are configured.
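For orientation, the VLAN and TIPC examples in Section 2.2 use directives of the following form (an illustrative fragment only, not a complete configuration):

# Illustrative cluster.conf fragment (see Section 2.2)
interface all eth0.500 vlan
interface all eth1.501 vlan
tipc all dynamic eth0.500 eth1.501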

For more information about the syntax and options supported in the cluster configuration file, see the following document:

2.5   Preparing to Install or Upgrade LDEwS with Core MW

The LDEwS Runtime software bundle contains the LDEwS RPMs packaged in Core MW compliant SDP format. The Deployment Template contains the campaigns required to install the components.

To obtain the required software, perform the following steps:

  1. Download the LDEwS runtime and deployment SDPs from the SW Gateway or GASK.
    • SW Gateway: LDEwS Runtime (CXP 902 0125/4) and LDEwS Deployment Templates (CXP 902 0284/4)
      Note:  
      Zipped files from SW Gateway need to be extracted before the next step.

    • GASK: ldews-4.0.0-runtime-sle 190 10-CXP 902 0125/4 and ldews-4.0.0-deployment-sle 190 10-CXP 902 0284/4
  2. Transfer the LDEwS Runtime SDP (runtime filename) and LDEwS Deployment SDP (deployment filename) to one of the controller nodes.
    scp <runtime filename> root@<address>:<target path>/
    scp <deployment filename> root@<address>:<target path>/

3   Manual Installation of LDEwS

To install a complete cluster, perform the following steps:

  1. Install first control node, see Section 3.1 Installing First Control Node.
  2. Install second control node, see Section 3.2 Installing Second Control Node.
  3. Install zero or more payload nodes, see Section 3.3 Installing Payload Nodes.

The procedure for installing or upgrading LDEwS for running LDEwS in Core MW is described in Section 6 Installing LDEwS with Core MW.

3.1   Installing First Control Node

The first control node can be installed in the following two ways:

The procedures for installing the second control node (described in Section 3.2 Installing Second Control Node) and payload nodes (described in Section 3.3 Installing Payload Nodes) do not depend on how the first control node was installed.

3.1.1   Preparing Installation of First Control Node Using a CD/DVD

To install the first control node using a CD/DVD, perform the following steps:

  1. Ensure that all nodes in the cluster are powered off.
  2. Power on the first control node.
  3. Insert the installation media into the CD/DVD drive.
  4. Enter the Boot menu (a menu where you can select from which device to boot) and select the CD/DVD.

    For more information about how to enter the boot menu, see the BIOS manual.

  5. The first control node now boots from the CD/DVD. A boot sequence follows.
    Note:  
    By default the node uses serial console (not Video Graphics Adapter (VGA) console) as the primary output device. To temporarily change the default behavior and instead use VGA console as primary output device, type vga at the boot: prompt. The boot prompt will only be shown for a short while (seconds) early in the boot process. If no input is given during this time, the node will automatically select serial console as primary output device and the boot process will continue.

  6. Continue the installation by following the instructions in Section 3.1.3 Running Installation Program.

3.1.2   Preparing Installation of First Control Node Using an Installation Server

To install the first control node using an installation server, perform the following steps:

  1. Ensure that all nodes in the cluster are powered off.
    Note:  
    If an old LDEwS installation exists on the cluster, the secondary control node can be stopped from booting from the hard drive by erasing the boot block before power down. For more information, see Step 9 in Section 3.2 Installing Second Control Node.

  2. Connect the installation server to the first control node.
  3. Power on the first control node.
  4. Enter the Boot menu (a menu where you can select from which device to boot) and select the Network Interface Controller (NIC) that is connected to the installation server (or to a switch from where the installation server can be reached).
  5. The first control node is now able to request an IP address and receive a response from the DHCP server running on the installation server. A boot sequence follows.
    Note:  
    By default the node uses serial console (not VGA console) as the primary output device. To temporarily change the default behavior and instead use VGA console as primary output device, type vga at the boot: prompt. The boot prompt will only be shown for a short while (seconds) early in the boot process. If no input is given during this time the node will automatically select serial console as primary output device and the boot process will continue.

  6. Continue the installation by following the instructions in Section 3.1.3 Running Installation Program.

3.1.3   Running Installation Program

To run the installation program, perform the following steps:

  1. A boot sequence is shown.
  2. The following text is shown:

    Proceed with manual installation? (y/n)

    Enter y.

  3. The text Available installation disks: is shown, followed by a list of disks in the system, available for installation of LDEwS.

    Select one of the disks in the list by entering the number to the left of it and pressing enter. To use the default disk (the first in the list), just press enter.

  4. The following text is shown:

    Do a single-node installation (minimized)? (y/n)

    Enter n.

    For instructions on single node installations, see Section 3.4 Installing a Single Node System.

  5. The following text is shown:

    Do you want to use a disk cache for the root filesystem? (y/n)

    Enter y to create a root partition on the disk, to which the contents of the initramfs will be extracted. The size of the root partition can be defined in a later step.

    The following are recommendations and considerations for how this option should be set:

    • When not to enable the disk cache and have the root filesystem in RAM (i.e. answer n to this question):
      • Where faster execution time is desired. As the root filesystem is in RAM, access to files on the filesystem will be faster than if the filesystem is in the disk cache.
      • Where booting to a consistent state is desired. As the filesystem is located in RAM and reinitialized as part of the boot process, a reboot of the node will return the root filesystem to a consistent state.
    • When to enable the disk cache and have the root filesystem on disk (i.e. answer y to this question):
      • When the system has limited RAM. Using the disk cache will avoid having the root filesystem in RAM. The root filesystem in RAM can occupy up to 2GB.
      • Where faster boot time is desired. As the filesystem is persistent across reboots, reinstallation of RPMs will only occur at time of operating system upgrade, or if an erase of the root partition is ordered.

    It is possible to erase the root partition and extract the initramfs again on the next startup by using the cluster rootfs command.

  6. The following text is shown:

    Do you want to use a disk cache for the root filesystem on payload nodes? (y/n)

    Enter y to enable booting from the hard disk on payload nodes. The disk is formatted and reinitialized after a payload node boots with PXE for the first time. It is possible to erase the root partition using the cluster rootfs command for payload nodes too.

    If y is selected, the following text is shown:

    Enter size (in gigabytes) for payload (root) partition (2 - 10):

    Enter payload root partition size in gigabytes.

  7. The following text is shown:

    Default output (serial/vga) [serial]:

    Press enter to use the default setting to output information on the serial console, or enter vga and press enter to make the VGA console the default output device.

  8. The following text is shown:

    Setup a detached upgrade? (y/n)

    Enter n to do a normal initial installation or y to install a detached control node.

    For more information about the detached upgrade, see the following document:

    If y is selected, the installation will probe for a running LDEwS cluster and for each interface show Probing on interface ethX... where X is a number. In the case of a DHCP reply, it will show success and verify that the node giving the lease is a LDEwS control node. The installation program will probe until either it has found a LDEwS cluster or tried all available interfaces. When a LDEwS control node has been found, the installation continues.

    Note:  
    If no LDEwS control node is found, the text No live cluster detected. Continue probing? is shown. Answer y to try again or n to abort the installation.

  9. The following text is shown:

    Enter password for root user:

    Select and enter a password for the root user that is to be used by the system.

    A question is shown to reenter the password to confirm the chosen password.

  10. The following text is shown:

    Do you want to define the size of the partitions? (y/n)

    Enter n to use the default size for the partitions. The default size for the partitions is as follows:

    Partition               Mountpoint   Default Size
    -------------------------------------------------
    root                    /            2GB
    swap                    -            4GB
    log                     /var/log     10GB
    Mirrored storage area   /cluster     Half of the remaining disk space.
                                         The other half will be used for the
                                         snapshot creation when creating a
                                         full cluster backup.

    Note:  
    The size of the mirrored storage area directly affects the time it takes to complete a full disk synchronization.

    If y is entered to define the size of the partitions, the following prompts are shown (the first prompt is shown only if the disk cache was enabled):

    Enter size (in gigabytes) for / (root) partition (MIN_SIZE - MAX_SIZE):
    Enter size (in gigabytes) for swap partition (MIN_SIZE - MAX_SIZE):
    Enter size (in gigabytes) for /var/log partition (MIN_SIZE - MAX_SIZE):
    Enter size (in gigabytes) of mirrored storage area (MIN_SIZE - MAX_SIZE):

  11. The software installation starts and the following text is shown:

    Installing, please wait...

    The time it takes to install the software depends on which hardware is used. It can take as long as 10 minutes and is completed once the following text is shown:

    Installation completed successfully

    If anything went wrong during the installation, the following message is shown instead:

    Installation failed (see /root/install.log)

    When the installation is completed a login prompt is shown.

  12. Log in as root (use the password selected in Step 9) on the first control node.
  13. Reboot the first control node:

    reboot

    Note:  
    If the installation failed, instead of rebooting the node, examine the installation log (/root/install.log) to further troubleshoot the problem.

  14. Remove the installation media from the CD/DVD drive or disconnect the installation server from the first control node depending on the selected installation mode.
  15. Wait for the first control node to boot up in operational mode.

    When the boot sequence is completed a login prompt is shown.

    The first control node is now installed.

3.2   Installing Second Control Node

To install the second control node, perform the following steps:

  1. Log in as root on the first control node.
  2. Update the cluster configuration file to include the node ID, interfaces, and any other configuration related to the second control node. Edit the cluster configuration file:

    vi /cluster/etc/cluster.conf

    For more information about the syntax and options supported in the cluster configuration file, see the following document:

  3. Reload the cluster configuration on the first control node:

    cluster config --reload

  4. Reboot the first control node:

    reboot

  5. Log in as root on the first control node again.
  6. Add the ldews-control RPM package to the second control node:

    cd /cluster/rpms (optional)

    cluster rpm --add ldews-control-cxp9020125-<decimal version>-<release>.x86_64.rpm --node <id>

    <decimal version> is the RPM decimal version of LDEwS that is being installed, e.g. 4.0.0, and <release> is the RPM release version of LDEwS, e.g. 26.sle12. <id> is the node ID that was chosen for the second control node earlier.
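    For example, using the example versions above and node ID 2 for the second control node (illustrative values only):

    cluster rpm --add ldews-control-cxp9020125-4.0.0-26.sle12.x86_64.rpm --node 2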

  7. Log out from the first control node.
  8. Power on the second control node.
  9. Enter the Boot menu (a menu where you can select from which device to boot) and select one of the NICs that is connected to the "internal network" switch.

    For more information about how to configure the boot device, see the BIOS manual.

    Note:  
    It is very important that the second control node is not allowed to boot from its hard disk at this stage. For example, if the hard disk contains an old installation of LDEwS, then booting from the hard disk effectively means reactivation of the old cluster, which will cause the installation to be aborted and the disk mirror on the newly installed first control node to be discarded and resynchronized.

    To avoid this, it is possible to run the following command on the node before shutting it down:

    dd if=/dev/zero of=/dev/disk_boot bs=512 count=1

    /dev/disk_boot is only created when LDEwS is running in operational mode. It can be substituted with the LDEwS disk, for example, /dev/sda resulting in the following command:

    dd if=/dev/zero of=/dev/sda bs=512 count=1

    This command overwrites the boot block, making it impossible for the node to boot from disk until a new boot block has been installed, that is after the installation.


  10. The second control node now requests an IP address over the network and receives a response from the DHCP server running on the first control node.
    Note:  
    By default the node uses serial console (not VGA console) as the primary output device. To temporarily change the default behavior and instead use the VGA console as primary output device, type vga at the boot: prompt. The boot prompt will only be shown for a short while (seconds) early in the boot process. If no input is given during this time, the node will automatically select serial console as the primary output device and the boot process will continue.

  11. A boot sequence follows.
  12. The software installation will now start and the following text is shown:

    Installing, please wait...

    The time it takes to install the software depends on which hardware is used. It can take as long as 10 minutes and is completed once the following text is shown:

    Installation completed successfully

    After this the node will automatically reboot.

    If anything went wrong during the installation, the following message is shown instead:

    Installation failed (see /root/install.log)

    If the installation fails, a login prompt is shown.

  13. Wait for the second control node to boot up in operational mode.

    When the boot sequence is completed a login prompt is shown.

    Note:  
    The boot sequence on the second control node can take a long time (usually between 30 minutes and several hours) due to the need for a full disk synchronization at first boot after installation. The actual time it takes to complete the disk synchronization depends on hardware properties (disk size, disk speed, and network speed). However, while the disk synchronization is in progress, it is possible to continue installing payload nodes in parallel.

    Avoid rebooting either control node until the disk synchronization has completed, as that will only prolong the time it takes to complete the synchronization. On each control node the /proc/drbd file provides detailed information about the disk synchronization progress.


    The second control node is now installed.

  14. If you want to change the DRBD synchronization rate, use the following commands:

    drbdsetup disk-options <minor> --c-max-rate <value>

    drbdsetup disk-options <minor> --c-min-rate <value>

    <minor> is obtained by running drbdsetup show all | grep "device.*minor"; the value is the number in "device minor <minor>".

    Note:  
    By default the c-min-rate is 8M and c-max-rate is 1G. DRBD will dynamically adapt the rate depending on the available bandwidth.
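    For example, to cap the synchronization rate of minor 0 at 100M (illustrative values only):

    drbdsetup disk-options 0 --c-max-rate 100M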

3.3   Installing Payload Nodes

To install payload nodes, perform the following steps:

  1. Log in as root on the first control node.
  2. Update the cluster configuration file to include the node IDs, interfaces, and any other configuration related to each new payload node. Edit the cluster configuration file:

    vi /cluster/etc/cluster.conf

    For more information about the syntax and options supported in the cluster configuration file, see the following document:

  3. Reload the cluster configuration on all nodes of the cluster:

    cluster config --reload --all

  4. Add the ldews-payload RPM package to each new payload node:

    cd /cluster/rpms (optional)

    cluster rpm --add ldews-payload-cxp9020125-<decimal version>-<release>.x86_64.rpm --node <id>

    <decimal version> is the RPM decimal version of LDEwS that is being installed, e.g. 4.0.0, and <release> is the RPM release version of LDEwS, e.g. 26.sle12. <id> is the node ID of each payload node. The command must be repeated once for each new payload node, as in the example below.
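    For example, using the example versions above and payload node IDs 3 and 4 (illustrative values only):

    cluster rpm --add ldews-payload-cxp9020125-4.0.0-26.sle12.x86_64.rpm --node 3
    cluster rpm --add ldews-payload-cxp9020125-4.0.0-26.sle12.x86_64.rpm --node 4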

  5. Log out from the first control node.
  6. Power on each payload node.
  7. Enter the BIOS on each payload node and configure the node for PXE (network) booting and to automatically retry booting indefinitely.

    For more information about how to configure the boot device, see the BIOS manual.

    Note:  
    It is very important that the payload nodes are configured to boot only using PXE and not from the hard disk. For example, if the hard disk contains a GRUB boot block that cannot chain-load further stages from a disk partition, the node will hang indefinitely if the control nodes are unavailable while the node tries to boot using PXE. Also check that control is not handed over to, for example, a SCSI BIOS that also defines boot devices.

  8. Exit the BIOS.
  9. Wait for each payload node to boot up in operational mode.

    When the boot sequence is completed a login prompt is shown.

    The payload nodes are now installed.

3.4   Installing a Single Node System

To install a single node system, perform the following steps:

  1. Prepare the node with installation media.

    For instructions when using a CD or DVD, see Section 3.1.1 Preparing Installation of First Control Node Using a CD/DVD.

    For instructions when using an installation server, see Section 3.1.2 Preparing Installation of First Control Node Using an Installation Server.

    For more information about single node installations, see the following document:

  2. The following text is shown:

    Do a single-node installation (minimized)? (y/n)

    Enter y.

  3. The following text is shown:

    Should logs (syslog) be stored to disk? (y/n)

    Select one of the following options:

    • Enter y for logs handled by syslog to be stored on the disk. Ensure that there is enough room for logs (300 MB at most) and that logging will not wear out the disk if used.
    • Enter n to only log to the RAM disk (logs are not persisted between reboots).
  4. The following text is shown:

    Enter password for root user:

    Select and enter a password for the root user that is to be used by the system. A question is shown to reenter the password to confirm the chosen password.

  5. The software installation will now start and the following text is shown:

    Installing, please wait...

    The time it takes to install the software depends on which hardware is used, but can take as long as 10 minutes and is completed once the following text is shown:

    Installation completed successfully

    If anything went wrong during the installation, the following message is shown instead:

    Installation failed (see /root/install.log)

    When the installation is completed a login prompt is shown.

  6. Log in as root (use the password selected earlier) on the first control node.
  7. Reboot the first control node:

    reboot

    Note:  
    If the installation failed, instead of rebooting the node, examine the installation log (/root/install.log) to further troubleshoot the problem.

  8. Remove the installation media from the CD/DVD drive or disconnect the installation server from the first control node, depending on the selected installation mode.
  9. Wait for the node to boot up in operational mode.

    When the boot sequence is completed a login prompt is shown.

    The node is now installed.

4   Automatic installation of LDEwS

It is possible to do a fully automatic installation of LDEwS. However, LDEwS will not interact with the blades' BIOS/ILOM/IPMI to make sure that the blades are configured properly or started up; this must be handled by the user installing LDEwS. An automatic installation is possible either via an installation server or via a flash media.

To do an automatic installation, you need to create an installation.conf file that contains all the information needed to carry out an LDEwS installation, and to set up the proper structure under the tftp root folder on the installation server or on the flash media. To create the right structure in the tftp root folder, use the lde-installation-env-setup script, provided by LDE in the runtime container. During the automatic installation there are three time slots where non-LDE scripts can be executed. These scripts are called hooks.

It is possible to configure the:

It is not possible to configure:

4.1   Preparing the installation server or the flash media

This is in addition to any preparation of the installation server already described in Section 2.3.1. If you want to prepare a flash media for automatic installation, genisoimage (version 1.1.8 or later) and syslinux (version 3.82 or later) must be installed on the server that will be used to create the flash media. Use the installation environment tool, the lde-installation-env-setup script, to create the proper structure under the tftp root folder or to create the flash media. The script is included in the runtime container, under the ait folder (ait/lde-installation-env-setup).

Tool Usage

The lde-installation-env-setup tool has the following options:

Table 1    lde-installation-env-setup - General options

--help, -h
    Print information about all options.

--edition, -l
    Defines for which LDE product the installation environment should be prepared. Specify one of the LDE products currently supported by the tool, from the following list:
      • lfr for LDEfR
      • lws for LDEwS

--type, -t
    Defines the type of installation environment to prepare. Two types are supported: usb and net.
    usb - creates an image with the file structure and configuration files for an installation, including installation media.
    net - creates the file structure and configuration files at the installation server.

--config, -c
    Points out the path where additional configuration files are located. Currently the tool looks for cluster.conf and installation.conf in this path. The installation.conf file is mandatory, but cluster.conf is optional.

--prehook, -P
    Points out the directory where any installation pre-hook is located (optional). For details, see Section 4.4.

--posthook, -O
    Points out the absolute path to the post-hook directory.

--afterhook, -A
    Points out the absolute path to the after-hook directory.

--root_password_hash, -w
    The hash for the password of the root user; the blowfish 2y algorithm should be used (optional).
    Note: You can use any system with SLES12, or SLES11 SP1 (with latest updates) or later, to generate the password hash.

--software, -s
    Points out the absolute path to the directory where the LDE software is located. Needed files: vmlinuz, pxelinux.0, initrd, boot.msg, the LDE control and payload RPMs, and the pxelinux.cfg directory with its contents.

If the installation environment type is net, the following additional options are required:

Table 2    lde-installation-env-setup - Options only valid with --type net

--boot, -b
    Points out the path to the TFTP server root directory (usually /tftpboot or /var/lib/tftpboot).

--clientdir, -I
    Destination path in the tftp root folder to which RPMs, configuration files, and hooks are copied (relative to the tftp root directory).

--download, -q
    Points out the download_path (optional). This option is overridden by the --clientdir option.

If the installation environment type is usb, the following additional options can be defined:

Table 3    lde-installation-env-setup - Options only valid with --type usb

--usb_path, -m
    Points out the absolute path to either a bootable device or a directory.

--repo, -e
    The path to the directory containing the repository that should be copied to the flash media. Everything will be placed under /repo on the media (optional).

Setting up the installation environment is preferably done by running the tool on the installation server directly. However, the structure can be prepared on a different host and later copied to the correct locations on the installation server.

Make sure that the lde-installation-env-setup script has write permissions in the destination folder. The script will copy the proper files to the tftp root folder or the flash media. If you use a USB flash media, mount the media and provide the path of the device to the lde-installation-env-setup script. The flash media will be completely erased and a bootable image will be written to it.
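The following is an illustrative invocation for preparing a network installation environment, run from the extracted runtime container; all paths and the password hash are examples that must be adapted:

./ait/lde-installation-env-setup \
  --edition lws \
  --type net \
  --boot /tftpboot \
  --clientdir ldews \
  --config /root/ldews-config \
  --software /root/ldews-sw \
  --root_password_hash '<blowfish 2y hash>'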

4.2   File structure under tftp root in the installation server

You should have the following structure in the tftp root folder:

/<tftproot>/<download_path_A>/etc/installation.conf

/<tftproot>/<download_path_B>/etc/cluster.conf

/<tftproot>/<download_path_B>/hooks/pre-installation.tar.gz

/<tftproot>/<download_path_B>/hooks/post-installation.tar.gz

/<tftproot>/<download_path_B>/hooks/after-booting-from-disk.tar.gz

/<tftproot>/<download_path_B>/repo

/<tftproot>/<download_path_B>/ldews-control.rpm

/<tftproot>/<download_path_B>/ldews-payload.rpm

/<tftproot>/<path_in_the_dhcpd.conf>/pxelinux.0

/<tftproot>/<path_in_the_dhcpd.conf>/pxelinux.cfg/default

/<tftproot>/<download_path_A>/vmlinuz

/<tftproot>/<download_path_A>/initrd

<tftproot> is the TFTP root folder. <download_path_A> is given as a parameter to the lde-installation-env-setup script. <download_path_B> is defined in the installation.conf file.

The repo folder is optional and contains application files to be downloaded to the installation environment. See Section 4.5.

The pxelinux.0 file can be at another location if this is defined in the dhcpd.conf file.

The installation.conf file must be on the installation server. The cluster.conf file does not have to be, but in that case it must be generated by a pre-installation hook under <repo_path>/etc. See Section 4.6 regarding <repo_path> and Reference [6] for information about cluster.conf. Hooks are optional if all configuration files are provided and contain all information needed to carry out the installation and setup of the blades. The rest of the files are mandatory on the installation server.

4.3   Configuration files

installation.conf

This file is mandatory. It is a text file containing keyword definitions.

Note:  
Table 4, Table 5, and Table 6 use the following syntax:

<parameter_name>=<value>


Note:  
Use either Table 5 or Table 6 for partitioning configuration.

Table 4    installation.conf standard syntax

root_password_hash (mandatory)
    The hash of the root password for the installed system.

standalone_install (optional)
    Set to "y" if you want to do a standalone installation. Default set to "n".

standalone_volatile_logs (optional)
    Set to "y" if you do a standalone installation and want to store the system logs persistently. Default set to "n".

cluster_install_reboot (optional)
    Set to "n" if you do not want an automatic reboot when the installation is done, before booting from the hard disk. If unsure, set to "y". Default set to "y".

control_rpm_name (optional)
    The name of the control RPM. Default set to ldews-control-cxp9020125-<decimal version>-<release>.x86_64.rpm.

payload_rpm_name (optional)
    The name of the payload RPM. Default set to ldews-payload-cxp9020125-<decimal version>-<release>.x86_64.rpm.

download_path (optional)
    The name of the path under the tftp root where the hooks and RPMs are located.

Table 5    installation.conf basic partitioning syntax

disk_device_path (mandatory)
    Device path to the hard disk that will be used for installation, e.g. /dev/sda or /dev/disk/by-path/pci-0000:03:00.0-sas-phy0-0x4433221100000000-lun-0.
    Note that on some hardware, if you use anything other than the "disk by path" path, there is no guarantee that this will be the device you actually intend to use; e.g. /dev/sda might show up as /dev/sdb after a later reboot. This can be avoided by using the disk-by-path notation or udev rules, which should be deployed via hooks.

partition_boot_size (optional)
    Size in megabytes. Default set to 4096.

partition_log_size (optional)
    Size in megabytes. Default set to 10240.

partition_root_size (optional)
    Size in megabytes. Set this value if you want to use a disk cache (default set to 2048 if the disk cache is used). If this option is not provided, or its value is 0, there will be no disk cache for the root file system.

partition_swap_size (optional)
    Size in megabytes. Default set to 4096.

shared_filesystem_size (optional)
    Size in megabytes. Default is the remaining disk space divided by 2. The shared_filesystem_size is the actual size of the /cluster partition. Make sure that there is twice that space available on the disk; the other half is used for the LVM snapshot during backup creation.

Table 6    installation.conf advanced partitioning syntax

disk
    The entity defines a disk.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id: N/A
    Option args:
        path: Path of the disk. (MANDATORY)
    Flags: N/A

partition
    The entity defines a partition.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id: Entity name on which to create the partition. (MANDATORY)
    Option args:
        size: Size of the partition in M, m, MB, mb, GB, gb, MiB, mib, GiB, or gib units, or as a percentage of the parent entity. (MANDATORY)
    Flags:
        boot: Mark the partition as bootable.

filesystem
    The entity defines a filesystem.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id: Entity name on which to create the filesystem. (MANDATORY)
    Option args:
        fs_type: Type of filesystem (e.g. ext2, ext3, swap). (MANDATORY)
        name: Label of the filesystem. If not specified, entity-id is used instead. (NOT MANDATORY)
    Flags: N/A
    Tags: tag=shared (MANDATORY for DRBD)

pv
    The entity defines a physical volume.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id: Entity name on which to create the physical volume. (MANDATORY)
    Option args:
        name: Physical volume name. If not specified, entity-id is used instead. (NOT MANDATORY)
    Flags: N/A
    Tags: tag=shared (MANDATORY for DRBD)

vg
    The entity defines a volume group.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id: Entity name on which to create the volume group.
    Option args:
        name: Volume group name. If not specified, entity-id is used instead. (NOT MANDATORY)
    Flags: N/A
    Tags: tag=shared (MANDATORY for DRBD)

lv
    The entity defines a logical volume.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id: Entity name on which to create the logical volume.
    Option args:
        size: Size of the logical volume in M, m, MB, mb, GB, gb, MiB, mib, GiB, or gib units, or as a percentage of the parent entity. (MANDATORY)
        name: Logical volume name. If not specified, entity-id is used instead. (NOT MANDATORY)
    Flags:
        stripe: Scatter the logical volume over all physical volumes.
        mirror: Create the logical volume with one mirror copy.
    Tags: tag=shared (MANDATORY for DRBD)

drbd
    The entity defines a DRBD device.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id 1: Entity name of the referenced DRBD data entity. (MANDATORY)
        parent-entity-id 2: Entity name of the referenced DRBD meta entity. If not specified, internal metadata is used instead. (NOT MANDATORY)
    Option args:
        config: Path to a script that generates the DRBD resource configuration. (MANDATORY)
        path: Path of the DRBD device. (MANDATORY)
    Flags: N/A

mdraid
    The entity defines a software RAID device.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        parent-entity-id: Entity name of the referenced entity or entities of which the mdraid device consists. (MANDATORY)
    Option args:
        path: Path of the mdraid device. (MANDATORY)
        raid_level: RAID level; can be linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10, multipath, mp, faulty, or container. Currently 0 and 1 are supported. (MANDATORY)
    Flags: N/A

map
    The keyword creates the given entity, including its parents, for a specific Node/NodeGroup identity on the disk, as generated by cluster config under /etc/cluster/node.
    Entity args:
        id: Entity name to be referenced later. (MANDATORY)
        nodeid: Node ID or node group name on which to be executed. (MANDATORY)
        parent-entity-id: N/A
    Option args: N/A
    Flags: N/A

Note:  
All keywords in Table 6 accept any user-defined tag.

Note:  
The shared_filesystem_size is the actual size of the /cluster partition. You should make sure that there is twice that space available on the disk. The other half is used for LVM snapshot during the backup creation.

Example 1   Basic installation.conf

# Installation configuration

disk_device_path = /dev/sda
partition_boot_size = 500M
partition_log_size = 300M
partition_root_size = 2000M
partition_swap_size = 500M
shared_filesystem_size = 7000M

In the above example, the required disk space is: 500 + 300 + (2000 * 2) + 500 + (7000 * 2) = 19300MB on a control blade.

Example 2   Installation.conf with advanced partitioning syntax

# Installation configuration

disk bd-main
option bd-main path=$c::disk_device
partition lde-boot-part bd-main
option lde-boot-part size=4G
option lde-boot-part boot
partition lde-root-part bd-main
option lde-root-part size=2G
partition lde-log-part bd-main
option lde-log-part size=4G
partition lde-drbdmeta-part bd-main
option lde-drbdmeta-part size=128M
partition lde-drbddata-part bd-main
option lde-drbddata-part size=4G
drbd lde-cluster-drbd lde-drbddata-part lde-drbdmeta-part
option lde-cluster-drbd config=/usr/lib/lde/config-management/drbd-resource-config
pv lde-cluster-pv lde-cluster-drbd
option lde-cluster-pv tag=shared
vg lde-cluster-vg lde-cluster-pv
option lde-cluster-vg tag=shared
lv lde-cluster-lv lde-cluster-vg
option lde-cluster-lv tag=shared
option lde-cluster-lv size=50%
filesystem lde-boot lde-boot-part
option lde-boot fs_type=ext3
filesystem lde-root lde-root-part
option lde-root fs_type=ext3
filesystem lde-log lde-log-part
option lde-log fs_type=ext3
filesystem lde-cluster lde-cluster-lv
option lde-cluster fs_type=ext3
option lde-cluster tag=shared

map control lde-boot
map control lde-root
map control lde-log
map control lde-cluster
map payload lde-root
Note:  
“map control lde-root” configuration element refers to disk_cache=y for control nodes, “map payload lde-root” configuration element refers to disk_cache=y for payload nodes.

Note:  
disk_cache=y option for payload nodes can not be set with basic installation.conf syntax.

Note:  
It is mandatory to include shared tag options for the configured physical volume (lde-cluster-pv), volume group (lde-cluster-vg), logical volume (lde-cluster-lv) and cluster filesystem (lde-cluster) in the installation.conf file as shown in the above example. This is to prevent any DRBD synchronization issues when replacing one of the control nodes.

cluster.conf

The system is configured using a cluster configuration file, located at /cluster/etc/cluster.conf in a running system. If cluster.conf is not provided, an initial default configuration file is automatically generated during installation of the first control node. It is thus optional for the user to provide cluster.conf before the installation.

This file must be updated manually using a text editor, for example, when more nodes are added to the cluster and when subsystems, for example, Ethernet bonding, IP addresses, Domain Name System (DNS), and Network Time Protocol (NTP), are configured.

For more information about the syntax and options supported in the cluster configuration file, see the following document:

4.4   Hooks

Hooks are executable files (e.g. shell scripts, C binaries) that are executed in alphabetical order (numerically sorted according to the GNU definition) during and after the installation process. See Reference [13] for more information on what SW is installed in LDEwS and what can be used during hook execution. There are three types of hooks: pre-installation, post-installation, and after-booting-from-disk hooks.

The hook directories should only contain files, not directories. Files must have the execute permission set. Files with the "non_exec-" prefix will be copied to the node but not executed; they will be available in the same directory where the hook scripts are executed. The environment variable HOOKS_DIR is set to the directory from which the currently executed hook is called and where the rest of the hooks (or non_exec files) of the same type are located. All hooks may have one of the following suffixes:

If cluster.conf is not available when the pre-install hooks are executed, only the hooks with the _blade_install and _blade_all suffixes are executed.

If cluster.conf is updated or created via a hook, the following command can be used to validate the updated cluster.conf: cluster config -v -f <path_to_cluster_conf>. If the validation is OK, the message Configuration is OK! is shown.

4.5   Application repository

When installing from an installation server, the contents of the optional <download_path>/repo folder will be downloaded to the installation environment subject to the following:

4.6   Interacting with the automatic installation

It is possible to interact with the LDEwS installation and modify or add configuration parameters in the two configuration files, installation.conf and cluster.conf. LDEwS provides the path where the "repo" used for the installation resides; the two configuration files used for the installation are kept under its etc folder, and the hook tarballs are kept under <repo_path>/hooks. It is possible to retrieve this path by executing:

cluster install --repo-path

The command returns the path to the repo, or -1 in case of an error, e.g. if the path is not defined. The path to the configuration files is <repo_path>/etc, and the path where the hooks are stored during installation (as compressed tarballs) is <repo_path>/hooks.

Note:  
The path that the hooks are stored during installation is different than the path where the hooks are located when executed.

To modify any of the configuration files, a hook should be used. For example, to add MAC addresses for all or any blades in the cluster, write a hook that appends the MAC addresses (in the right syntax) to cluster.conf and have it executed as a pre-installation hook; a sketch follows below. Duplicate entries should not be used in installation.conf; if a parameter is defined twice, only the later definition is kept. For rules and syntax regarding cluster.conf, see Reference [6].
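The following is a minimal sketch of such a pre-installation hook; the appended content is a placeholder, since the actual MAC address syntax is defined in Reference [6]:

#!/bin/sh
# Illustrative pre-installation hook: append entries to the cluster.conf
# used by the installation, then validate the result.
REPO_PATH=$(cluster install --repo-path)
[ "$REPO_PATH" = "-1" ] && exit 1
CONF=$REPO_PATH/etc/cluster.conf

# Append MAC address entries here, using the syntax from Reference [6].
echo "# <MAC address entries>" >> $CONF

# Prints "Configuration is OK!" if the updated file is valid.
cluster config -v -f $CONF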

After the installation is finished, the configuration files are kept under /cluster/etc. Only the relevant parts are kept in installation.conf.

5   Installing LDEwS using AIT

5.1   Initial installation

The os-plugin needed by AIT is included in the runtime container, under the ait folder (ait/installation-env-setup). The os-plugin takes the location of the configuration file as an argument:

os-plugin <path to os-plugin config file>

The configuration file is a text file containing keyword definitions.

Table 7    installation.conf for AIT

tftp_root (mandatory)
    The path to the root folder for the TFTP server.

tftp_subdir (mandatory)
    The sub-directory in the TFTP root where LDEwS files should be deployed.

type (mandatory)
    Type of installation. It should be one of:
      • server for installation using an installation server
      • flash for installation using a flash media
      • iso for installation using a CD/DVD ROM media

agent_setup (mandatory)
    The path to the directory where the AIT files are stored.

config (mandatory)
    The path to the directory that contains the LDEwS configuration files installation.conf and cluster.conf.

iso_name (mandatory with type iso)
    The name of the produced ISO file.

pre_install (optional)
    The path to the directory containing hook scripts to be executed in the pre-installation phase of the OS.

post_install (optional)
    The path to the directory containing hook scripts to be executed immediately after the installation phase of the OS and before the first reboot into the running state.

after_boot (optional)
    The path to the directory containing hook scripts to be executed during every boot phase of the OS.

LDEwS os-plugin supports the AIT 1.2.0 interface version.
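An illustrative os-plugin configuration file for a server type installation could look as follows (all paths are examples and must be adapted):

# os-plugin configuration (example values)
tftp_root=/var/lib/tftpboot
tftp_subdir=ldews
type=server
agent_setup=/opt/ait/agent
config=/root/ldews-config
pre_install=/root/hooks/pre
post_install=/root/hooks/post
after_boot=/root/hooks/after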

5.2   Campaign installation

The deployment container is ready to be used by AIT for the LDEwS campaign installation.

Note:  
The configuration uses the following syntax:

<parameter_name>=<value>


6   Installing LDEwS with Core MW

After obtaining the SDPs (Section 2.5 Preparing to Install or Upgrade LDEwS with Core MW), to install LDEwS on a Core MW system, perform the following steps:

  1. Log in to a controller node as root:

    ssh -l root <address>

  2. Create a temporary directory where the LDEwS runtime SDP can be unpacked:

    mkdir /runtime

  3. Create a temporary directory where the LDEwS deployment SDP can be unpacked:

    mkdir /deployment

  4. Untar the LDEwS runtime SDP in its temporary directory:

    tar zxf <runtime filename> -C /runtime

  5. Untar the LDEwS deployment SDP in its temporary directory:

    tar zxf <deployment filename> -C /deployment

  6. Ensure that both control nodes are up and running and that cmw-status node su si returns Status OK
  7. Enter the temporary runtime directory:

    cd /runtime

  8. Import the controller software bundle:

    cmw-sdp-import ERIC-LINUX_CONTROL-CXP9013151_4.sdp

    On a successful import of the SDP the following output is shown (the R-state might be different):

    ERIC-LINUX_CONTROL-CXP9013151_4-R1A02 imported (type=Bundle)

  9. Import the payload software bundle:

    cmw-sdp-import ERIC-LINUX_PAYLOAD-CXP9013152_4.sdp

    On a successful import of the SDP the following output is shown (the R-state might be different):

    ERIC-LINUX_PAYLOAD-CXP9013152_4-R1A02 imported (type=Bundle)

  10. Enter the temporary deployment directory:

    cd /deployment

  11. Enter the directory of the tool used to create deployment templates:

    cd LDEwS_CAMPAIGN_TOOL_CXP9020125_4-<TO-VERSION>

    <TO-VERSION> is the version of LDEwS that has been downloaded. Versions are expressed as full R format, for example, R1A02.

  12. Create a campaign by running:

    ./lde_deployment_tool --install

    Note:  
    • The campaign will also update the Core MW Information Model Management (IMM) database with the LDE alarm model, which will be activated after COM and COM SA are installed.
    • LDEwS versions 2.5 and later no longer require --os-adapter and include the OS-Adapter by default
    • To generate a campaign for a system without payloads add --controller-only.
    • To generate a single step campaign, add --singlestep
    • To generate a campaign without the ECIM Equipment model, add --no-equipment
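    For example, to generate a single step campaign for a system without payload nodes, the flags above can be combined as follows (an illustrative invocation):

    ./lde_deployment_tool --install --controller-only --singlestep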

  13. Import the deployment campaign created in the directory where the tool was executed:

    cmw-sdp-import LDEwS-CAMPAIGN-CXP9020125_4-<TO-VERSION>.sdp

    On a successful import of the SDP the following output is shown:

    ERIC-LDEwS-CXP9020125_4-R1A02 imported (type=Campaign)

  14. List the imported campaigns:

    cmw-repository-list --campaign

    The list printed should include the name shown in Step 13, i.e. ERIC-LDEwS-CXP9020125_4-R1A02.

  15. Start the campaign:

    cmw-campaign-start ERIC-LDEwS-CXP9020125_4-<TO-VERSION>

    The nodes now perform a rolling upgrade and are restarted one by one: first the controller nodes, then the payload nodes.

  16. When the controller nodes restart, the ssh session is disconnected and must be re-established:

    ssh -l root <address>

  17. Before and after the controller nodes have restarted, the status of the running campaign can be checked with the command:

    cmw-campaign-status ERIC-LDEwS-CXP9020125_4-<TO-VERSION>

    While the campaign is running, one of the following two statuses will be printed:

    ERIC-LDEwS-CXP9020125_4-R1A02=INITIAL
    (At this stage, a backup of the current state of the cluster is being taken.)
    ERIC-LDEwS-CXP9020125_4-R1A02=EXECUTING

    If the campaign succeeds, the following will be printed:

    ERIC-LDEwS-CXP9020125_4-R1A02=COMPLETED

    If the campaign fails, the following will be printed and recovery steps should be taken:

    ERIC-LDEwS-CXP9020125_4-R1A02=FAILED

  18. Once the campaign has completed, commit the campaign:

    cmw-campaign-commit ERIC-LDEwS-CXP9020125_4-<TO-VERSION>

    If this is successful the command cmw-campaign-status ERIC-LDEwS-CXP9020125_4-<TO-VERSION> will print:

    ERIC-LDEwS-CXP9020125_4-R1A02=COMMITTED

  19. Once the campaign has been committed, the SDP can be removed:

    cmw-sdp-remove ERIC-LDEwS-CXP9020125_4-<TO-VERSION>

    On a successful removal of the SDP, the following output should be printed:

    Campaign SDP removed ERIC-LDEwS-CXP9020125_4-R1A02
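
For reference, steps 2 through 19 can be collected into a single shell sketch. This is a minimal illustration only, assuming a hypothetical <TO-VERSION> of R1A02 and placeholder archive names; it is not a supported tool, and since the ssh session is lost when the controller nodes restart (Step 16), the polling below must in practice be resumed after reconnecting:

    # Minimal sketch of Section 6; TO_VERSION and the *_TAR names are assumptions.
    TO_VERSION=R1A02
    RUNTIME_TAR=runtime.tar.gz        # replace with the actual runtime SDP archive
    DEPLOYMENT_TAR=deployment.tar.gz  # replace with the actual deployment SDP archive
    mkdir /runtime /deployment
    tar zxf "$RUNTIME_TAR" -C /runtime
    tar zxf "$DEPLOYMENT_TAR" -C /deployment
    cd /runtime
    cmw-sdp-import ERIC-LINUX_CONTROL-CXP9013151_4.sdp
    cmw-sdp-import ERIC-LINUX_PAYLOAD-CXP9013152_4.sdp
    cd /deployment/LDEwS_CAMPAIGN_TOOL_CXP9020125_4-"$TO_VERSION"
    ./lde_deployment_tool --install
    cmw-sdp-import LDEwS-CAMPAIGN-CXP9020125_4-"$TO_VERSION".sdp
    cmw-campaign-start ERIC-LDEwS-CXP9020125_4-"$TO_VERSION"
    # Poll until the campaign reaches a terminal state (interval chosen arbitrarily).
    while true; do
        status=$(cmw-campaign-status ERIC-LDEwS-CXP9020125_4-"$TO_VERSION")
        case "$status" in
            *=COMPLETED) break ;;
            *=FAILED) echo "Campaign failed; take recovery steps" >&2; exit 1 ;;
        esac
        sleep 30
    done
    cmw-campaign-commit ERIC-LDEwS-CXP9020125_4-"$TO_VERSION"
    cmw-sdp-remove ERIC-LDEwS-CXP9020125_4-"$TO_VERSION"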

6.1   Installing BRF Participants

This section details the installation instructions for the two different BRF participants provided with LDEwS.

The BRF participants are mutually exclusive; only one may be installed on a system at a time.

The participants are:

  1. BRF Participant
  2. BRF Script Participant

The BRF Script Participant must be used to obtain the following features:

6.1.1   Installing BRF Participant

To install the BRF Participant for LDEwS, perform the following steps:

  1. Log in to the controller node as root:

    ssh -l root <address>

  2. Create a temporary directory where the runtime container can be unpacked:

    mkdir /runtime

  3. Create a temporary directory where the deployment template can be unpacked:

    mkdir /deployment

  4. Untar the runtime container, CXP 902 1148/1, in its temporary directory:

    tar zxf ldews-brf-<decimal version>-runtime-sle-cxp9021148.tar.gz -C /runtime

    <decimal version> is the RPM decimal version of LDE_BRF that is being installed, for example 1.1.0.

  5. Untar the deployment container template, CXP 902 1149/1, in its temporary directory:

    tar zxf ldews-brf-<decimal version>-deployment-sle-cxp9021149.tar.gz -C /deployment

  6. Ensure that both control nodes are up and running.
  7. Import the runtime container:

    cd /runtime

    cmw-sdp-import LDE_BRF-CXP9021148_1-<r-state>_RUNTIME.sdp

    On a successful import of the SDP the following output is shown:

    ERIC-LDE_BRF-CXP9021148_1-<r-state> imported (type=Bundle)

  8. Import the deployment container:

    cd ../deployment/LDE_BRF-CXP9021149_1-<r-state>_I1_TEMPLATE_<type>_<reboot>

    Note:  
    There are different installation campaigns for LDE_BRF depending on which role LDE_BRF should act as.

    <type> is SYSTEM, USER, or BOTH, depending on whether LDE_BRF should act as a persistent storage owner for system data, user data, or both.

    <reboot> is 1 if a reboot is required after a restore, otherwise 0. (A worked example with hypothetical values is shown after this procedure.)


    cmw-sdp-import LDE_BRF-CXP9021149_1-<r-state>_I1_TEMPLATE_<type>_<reboot>.sdp

  9. List the imported campaigns:

    cmw-repository-list --campaign

    The output must contain at least the name ERIC-LDE_BRF-Install.

  10. Start the campaign:

    cmw-campaign-start ERIC-LDE_BRF-Install

  11. Check status of campaign:

    cmw-campaign-status ERIC-LDE_BRF-Install

    If the campaign is running the following is shown:

    ERIC-LDE_BRF-Install=INITIAL
    ERIC-LDE_BRF-Install=EXECUTING

    If the campaign fails the following is shown:

    ERIC-LDE_BRF-Install=FAILED

    If the campaign succeeded the following is shown:

    ERIC-LDE_BRF-Install=COMPLETED

  12. When the command cmw-campaign-status returns ERIC-LDE_BRF-Install=COMPLETED, commit the campaign:

    cmw-campaign-commit ERIC-LDE_BRF-Install

    If this is successful the command cmw-campaign-status ERIC-LDE_BRF-Install will output:

    ERIC-LDE_BRF-Install=COMMITTED
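
As a worked example of the template naming in step 8, the import of a persistent storage owner for system data that requires a reboot after restore, with a hypothetical r-state of R1A01, would be:

    # Hypothetical values: r-state R1A01, <type> SYSTEM, <reboot> 1.
    cd /deployment/LDE_BRF-CXP9021149_1-R1A01_I1_TEMPLATE_SYSTEM_1
    cmw-sdp-import LDE_BRF-CXP9021149_1-R1A01_I1_TEMPLATE_SYSTEM_1.sdp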

6.1.2   Installing BRF Script Participant

BRF Script Participant implements a script-based BRF persistent storage owner participant, which supports the following BRF features:

Both features are documented in detail in the BRF-C Management Guide, Reference [14].

Required software:

To install BRF Script Based Participant for LDE, perform the following steps:

  1. Follow steps 1 through 3 in Section 6.1.1 Installing BRF Participant.
  2. Untar the runtime container, CXP 902 1148/2, in its temporary directory:

    tar zxf ldews-brf_script-<decimal version>-runtime-sle-cxp9021148.tar.gz -C /runtime

    <decimal version> is the RPM decimal version of BRF Script that is being installed, for example 2.1.0.

  3. Untar the deployment container template, CXP 902 1149/2, in its temporary directory:

    tar zxf ldews-brf_script-<decimal version>-deployment-sle-cxp9021149.tar.gz -C /deployment

  4. Ensure that both control nodes are up and running.
  5. Import the runtime container:

    cd /runtime
    cmw-sdp-import lde-brf-script-cxp9021148-<decimal version>-<release version>.noarch.rpm

    <decimal version> is the RPM decimal version of BRF Script that is being installed, for example 2.1.0, and <release version> is the RPM release version of BRF Script, for example 1.

    On a successful import of the SDP the following output is shown:
    ERIC-lde-brf-script-cxp9021148-<decimal version>-<release version> imported (type=Bundle)

  6. Import the deployment container:

    cd ../deployment/

    Note:  
    The default installation campaign will install a system type and a data type persistent storage owner participant with reboot flag 1.
    If you want a different setup, you have to generate a different deployment campaign. For example, to generate a campaign that installs only the user-type PSO with reboot flag 0:
    cd ait
    ./generate-campaign.sh -u 0

    The campaigns will be generated in the tmp subdirectory.

    For further details, please see the README file in the deployment container.


    To import the default installation campaign, run:

    cmw-sdp-import ERIC-lde-brf-script_I1_TEMPLATE_CXP9021149-<decimal version>/ERIC-lde-brf-script_I1_TEMPLATE_CXP9021149-<decimal version>.sdp

  7. List the imported campaigns:

    cmw-repository-list --campaign

    The output should contain the following line: ERIC-lde-brf-script-Install

  8. Start the campaign:

    cmw-campaign-start --disable-backup ERIC-lde-brf-script-Install

  9. Check status of campaign:

    cmw-campaign-status ERIC-lde-brf-script-Install

    If the campaign is running the following is shown:

    ERIC-lde-brf-script-Install=INITIAL
    ERIC-lde-brf-script-Install=EXECUTING

    If the campaign fails the following is shown:

    ERIC-lde-brf-script-Install=FAILED

    If the campaign succeeded the following is shown:

    ERIC-lde-brf-script-Install=COMPLETED

  10. When the command cmw-campaign-status returns ERIC-lde-brf-script-Install=COMPLETED, commit the campaign:

    cmw-campaign-commit ERIC-lde-brf-script-Install

    If this is successful the command cmw-campaign-status ERIC-lde-brf-script-Install will output:

    ERIC-lde-brf-script-Install=COMMITTED

  11. Remove the campaign SDP from the repository:

    cmw-sdp-remove ERIC-lde-brf-script-Install

  12. Delete the temporary directories:

    rm -r /runtime /deployment

7   Upgrading LDEwS with Core MW 3.2 or greater (BFU Upgrade)

Support for Baseline Free Upgrades (BFU) is provided in LDEwS 2.2 CP1 or greater and must be used in conjunction with CoreMW 3.2 or greater.

Note:  
Before upgrading, ensure that a previous version of LDEwS has been installed by looking at the output of cmw-repository-list.
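
For example, a check along the following lines confirms a previous installation (the grep pattern assumes the bundle numbers shown in Section 6):

    # Expect previously imported LDEwS bundles, for example
    # ERIC-LINUX_CONTROL-CXP9013151_4-<R-state>.
    cmw-repository-list | grep -E 'CXP9013151|CXP9013152'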

To upgrade LDEwS via BFU, perform the following:

  1. Perform Steps 1 through 11 of Section 6 Installing LDEwS with Core MW.
  2. Import the BFU deployment campaign contained in the directory:

    cmw-sdp-import LDEwS-CAMPAIGN-BFU-CXP9020125_4-<NEW-VERSION>.sdp

    On a successful import of the SDP the following output is shown:

    ERIC-LDEwS-BFU-CXP9020125_4-<NEW-VERSION> imported (type=Campaign)

  3. Install the BFU campaign as described in Steps 15 through 19 in Section 6, substituting the BFU campaign name where necessary.

For Single-Step BFU upgrade campaigns, perform the following steps. Note that campaigns MUST be created on the target and that dummy campaign (same revision) upgrades are not permitted:

  1. Perform Steps 1 through 11 of Section 6 Installing LDEwS with Core MW.
  2. Create a BFU campaign by running:

    ./lde_deployment_tool --singlestep --bfu

    Note:  
    • For more information, execute: ./lde_deployment_tool --help
    • To generate a campaign without the ECIM Equipment model, add --no-equipment.

  3. Import the deployment campaign created in the directory where the tool was executed:

    cmw-sdp-import LDEwS-CAMPAIGN-BFU-SS-CXP9020125_4-<NEW-VERSION>.sdp

    On a successful import of the SDP the following output is shown:

    ERIC-LDEwS-BFU-SS-CXP9020125_4-<NEW-VERSION> imported (type=Campaign)

  4. List the imported campaigns:

    cmw-repository-list --campaign

    The list printed should include the name shown in the previous step, i.e. ERIC-LDEwS-BFU-SS-CXP9020125_4-R1A02.

  5. Install the BFU campaign as described in Steps 15 through 19 in Section 6, substituting the BFU campaign name where necessary.

7.1   Upgrading the BRF participant

To upgrade the BRF participant in CoreMW 3.2 or greater, perform the following:

  1. Follow steps 1 through 7 in Section 6.1.1 Installing BRF Participant.
  2. Import the deployment container:

    cd ../deployment/LDE_BRF-CXP9021149_1-<r-state>_B1_TEMPLATE_<type>_<reboot>

    and import the relevant BFU SDP (note the _B1_ in the file name).

    cmw-sdp-import LDE_BRF-CXP9021149_1-<r-state>_B1_TEMPLATE_<type>_<reboot>.sdp

  3. Complete the upgrade by installing the campaign as described in Steps 10 through 12 in Section 6.1.1. Note, however, that the campaign to install is named ERIC-LDE_BRF-BFU.

7.2   Upgrading BRF Participant to BRF Script Participant

To upgrade an installed BRF Participant to BRF Script Participant, perform the following:

  1. Follow steps 1 through 5 in Section 6.1.2 Installing BRF Script Participant.
  2. Import the deployment container:

    cd ../deployment/
    cmw-sdp-import ERIC-lde-brf-script_B1_TEMPLATE_CXP9021149-<decimal version>/ERIC-lde-brf-script_BFU_TEMPLATE_CXP9021149-<decimal version>.sdp

    <decimal version> is the RPM decimal version of BRF Script that is being upgraded, for example 2.1.0.

    Note:  
    The deployment container contains a default upgrade campaign which installs a system type and a data type persistent storage owner participant with reboot flag 1. If you want a different setup, you have to generate a different campaign.
    For further details, see the note in Step 6 in Section 6.1.2.

    The above example applies to a rolling BFU upgrade. For a single-step BFU upgrade, import the following SDP instead:

    cmw-sdp-import ERIC-lde-brf-script_B1_TEMPLATE_CXP9021149-<decimal version>/ERIC-lde-brf-script_BFU_SS_TEMPLATE_CXP9021149-<decimal version>.sdp

  3. Complete the upgrade by starting the campaign as described in Steps 10 through 12 in Section 6.1.1. Note, however, that the campaign to upgrade is named ERIC-lde-brf-script-Bfu-Rolling or ERIC-lde-brf-script-Bfu-SS.

8   Upgrading LDEwS with Core MW prior to version 3.2 (non-BFU)

All versions of LDEwS after 2.5 (R8A) (or versions of 2.4 installed with the OS-Adapter enabled) need to have the AMF components removed before a non-BFU upgrade.

Note:  
The command immlist safSg=NoRed-PMCounter,safApp=ERIC-LDE verifies whether the PM-Counter component is installed.
Further details about the LDE AMF components can be found in the LDE Programmer's Guide, Reference [13].

To upgrade LDEwS perform the following steps:

  1. Remove the AMF components by following the relevant chapter in LDE CBA Adaptations, Reference [17].
  2. Perform Steps 1 through 11 in Section 6 Installing LDEwS with Core MW.
  3. Generate an upgrade campaign by running:

    ./lde_deployment_tool --from-sc <FROM-VERSION> --upgrade

    where <FROM-VERSION> is the current LDEwS version installed.

    Note:  
    • To generate a campaign for a system without payloads, add --controller-only.
    • To generate a single step campaign, add --singlestep.
    • To generate a campaign without the ECIM Equipment model, add --no-equipment.
    • For more information, execute: lde_deployment_tool --help
    • A concrete example invocation is shown after this procedure.

  4. Import and install the upgrade campaign as described in Step 13 in Section 6 through to Step 19.
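
For example, step 3 for a system with a hypothetical installed version of R7A01, generating a single-step campaign, would be:

    # R7A01 is a hypothetical <FROM-VERSION>; use the version installed on your system.
    ./lde_deployment_tool --from-sc R7A01 --upgrade --singlestep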

8.1   Downgrading LDEwS with Core MW

Downgrading is performed in the same way as upgrading, except that the new version is lower than the current version. Follow the steps in Section 8 Upgrading LDEwS with Core MW prior to version 3.2 (non-BFU), ensuring to first remove the AMF Components if necessary.

8.2   Upgrading the BRF participant

To upgrade the BRF participant within CoreMW (versions prior to 3.2), perform the following steps. Note that the currently installed version of BRF MUST match the version specified in the upgrade campaign, or the upgrade will fail. The 'from' and 'to' versions of the BRF are specified in the campaign.xml file within the campaign SDP.

  1. Follow steps 1 through 7 in Section 6.1.1 Installing BRF Participant.
  2. Import the deployment container:

    cd ../deployment/LDE_BRF-CXP9021149_1-<r-state>_U1_TEMPLATE_<type>_<reboot>

    and import the relevant upgrade SDP (note the _U1_ in the file name).

    cmw-sdp-import LDE_BRF-CXP9021149_1-<r-state>_U1_TEMPLATE_<type>_<reboot>.sdp

  3. Complete the upgrade by installing the campaign as described in Steps 10 through 12 in Section 6.1.1. Note, however, that the campaign to install is named ERIC-LDE_BRF-Upgrade.

9   LDEwS System Recovery via secondary media

The recovery is performed as an initial LDEwS installation followed by restoring a backup, which restores the full CBA system and any applications. The installation source is stored on a secondary media.

System recovery via secondary media only applies to the first control node. Installing the second control node and payload nodes is described in Section 3. This section describes the following:

9.1   Prerequisites

A working LDEwS installation must exist on the secondary media. The media can be internal or external, as long as it can boot the system and has enough space to house a standalone installation.

9.2   Preparing Installation Repository

There are two types of source input used to generate the installation repository:

To prepare the installation repository, perform the following steps:

  1. Boot the system from the secondary media and log in to maintenance mode as the root user.
  2. Make the source input visible on the file system. This can be done by transferring a BRF backup file, such as systemdata.tar, to the system (it must fit into RAM), or by mounting a (network) file system, such as /cluster/.
  3. Generate the installation repository:
    • To create the repository from a BRF backup file:

      cluster install -f <brf_file> [-o output_dir] -C

    • To create the repository from the DRBD directory of a working LDEwS:

      cluster install -d <src_dir> [-o output_dir] -C

    -o is optional. By default, the repository is created under /tmp/installation_repo/. You can get the repository path by running cluster install --repo-path.

  4. Update cluster.conf and installation.conf under <output_dir>/etc/ according to the hardware information and expected installation preference. This step is optional.
  5. Update or remove the hook files under <output_dir>/hooks. There are three compressed tarballs, one per hook type, containing the hook files. The hooks are the same as the ones used for the maiden installation of the blades and are executed in exactly the same manner as in a maiden installation. If you do not want to execute some or all of the hooks, remove or update the tarballs. See Section 4.4 for more details about hooks.
    Note:  
    There is no network connectivity during hooks execution (similar to installation from flash media), unless the user sets this up.
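
For example, building the repository from a transferred BRF backup file in the default output location could look as follows (the /tmp path of the transferred file is an assumption):

    # Build the installation repository from a BRF backup file.
    cluster install -f /tmp/systemdata.tar -C
    # Print the repository path; by default /tmp/installation_repo/.
    cluster install --repo-path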

9.3   Installing LDEwS from Repository

  1. Install LDEwS from the created repository under <output_dir>:

    cluster install

  2. Reboot the node:

    reboot

    This step is needed only if cluster_install_reboot is set to "n" in installation.conf.

9.4   Restoring Application from BRF backup

User applications can be restored from BRF backup files. For more information about the syntax and options supported in the BRF commands, see the following document:

10   Adding, Removing, and Replacing a Node

This section describes the following:

Note:  
This section is only applicable for cluster installations, not single node installations.

10.1   Adding a Second Control Node

Adding a control node to the cluster cannot be done at runtime without affecting the other control node in the cluster. A cluster can contain at most two control nodes. This section describes only how to add a second control node to the cluster.

If the cluster does not already have a control node an initial installation is needed. Contact Ericsson support personnel for further help.

To add a second control node, perform the following steps:

  1. Log in as root on the first control node.
  2. Update the cluster configuration file to include the node ID, interfaces, and any other configuration related to the second control node. Edit the cluster configuration file:

    vi /cluster/etc/cluster.conf

    For more information about the syntax and options supported in the cluster configuration file, see the following document:

  3. Reload the cluster configuration on the first control node:

    cluster config --reload

  4. Reboot the first control node:

    reboot

  5. Log in as root on the first control node again.
  6. Add the ldews-control RPM package to the second control node:

    cd /cluster/rpms (optional)

    cluster rpm --add ldews-control-cxp9020125-<decimal version>-<release>.x86_64.rpm --node <id>

    <decimal version> is the RPM decimal version of LDEwS that is being installed, for example 4.0.0, and <release> is the RPM release version of LDEwS, for example 26.sle12. <id> is the node ID of the second control node.

  7. Log out from the first control node.
  8. Power on the second control node.
  9. Enter the BIOS on the second control node and configure the node for PXE (network) booting.

    For more information about how to configure the boot device, see the BIOS manual.

    Note:  
    It is very important that the second control node is not allowed to boot from its hard disk at this stage. For example, if the hard disk contains an old installation of LDEwS, then booting from the hard disk effectively means reactivation of the old cluster, which will cause the installation to be aborted and the disk mirror on the newly installed first control node to be discarded and resynchronized.

  10. Exit the BIOS.
  11. The second control node now requests an IP address and receives a response from the DHCP server running on the first control node.
    Note:  
    By default, the node uses the serial console (not the VGA console) as the primary output device. To temporarily change this and use the VGA console instead, type vga at the boot: prompt. The boot prompt is only shown for a short while (seconds) early in the boot process. If no input is given during this time, the node automatically selects the serial console as the primary output device and the boot process continues.

  12. A boot sequence follows.
  13. The software installation will now start and the following text is shown:

    Installing, please wait...

    The time it takes to install the software depends on which hardware is used. It can take as long as 10 minutes and is completed once the following text is shown:

    Installation completed successfully

    After this the node will automatically reboot.

    If anything went wrong during the installation, the following message is shown instead:

    Installation failed (see /root/install.log)

    If the installation fails, a login prompt is shown.

  14. Enter the BIOS on the second control node and configure the node to boot from the hard disk.

    For more information about how to configure the boot device, see the BIOS manual.

  15. Exit the BIOS.
  16. Wait for the second control node to boot up in operational mode.

    When the boot sequence is completed a login prompt is shown.

    Note:  
    The boot sequence of the second control node can take a long time (hours) because a full disk synchronization is needed at the first boot after installation. However, while the disk synchronization is in progress, it is possible to continue installing payload nodes in parallel.

    Avoid rebooting either control node until the disk synchronization has completed, as that will only prolong the time it takes to complete the synchronization. On each control node the /proc/drbd file provides detailed information about the disk synchronization progress.


    The second control node is now installed.
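
The synchronization progress mentioned in the note above can be followed by reading /proc/drbd periodically, for example:

    # Re-read the DRBD synchronization status every 60 seconds (interval chosen arbitrarily).
    watch -n 60 cat /proc/drbd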

10.2   Adding Payload Nodes

Adding payload nodes to the cluster can be done at runtime without affecting the other nodes in the cluster.

To add payload nodes, perform the following steps:

  1. Log in as root on the first control node.
  2. Update the cluster configuration file to include the node IDs, interfaces, and any other configuration related to each new payload node. Edit the cluster configuration file:

    vi /cluster/etc/cluster.conf

    For more information about the syntax and options supported in the cluster configuration file, see the following document

  3. Reload the cluster configuration on all nodes in the cluster:

    cluster config --reload --all

  4. Add the ldews-payload RPM package to each new payload node:

    cd /cluster/rpms (optional)

    cluster rpm --add ldews-payload-cxp9020125-<decimal version>-<release>.x86_64.rpm --node <id>

    <decimal version> is the RPM decimal version of LDEwS that is being installed, for example 4.0.0, and <release> is the RPM release version of LDEwS, for example 26.sle12. <id> is the node ID of each payload node. The command must be repeated once for each new payload node (a loop sketch is shown after this procedure).

  5. Log out from the first control node.
  6. Power on each payload node.
  7. Enter the BIOS on each payload node and configure the node for PXE (network) booting and to automatically retry booting indefinitely.

    For more information about how to configure the boot device, see the BIOS manual.

    Note:  
    It is very important that the payload nodes are configured to boot only using PXE and not from the hard disk. For example, if the control nodes are unavailable while a node tries to boot using PXE and the node instead boots from a hard disk containing a GRUB boot block that cannot chain-load further stages from a disk partition, the node will hang indefinitely. Also ensure that control is not handed over to, for example, a SCSI BIOS that also defines boot devices.

  8. Exit the BIOS.
  9. Wait for each payload node to boot up in operational mode.

    When the boot sequence is completed a login prompt is shown.

    The payload nodes are now installed.
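
Because the cluster rpm command in step 4 must be repeated for each new payload node, a small loop can be convenient when several nodes are added at once. Node IDs 3 through 5 and the 4.0.0-26.sle12 package version below are examples only:

    # Add the payload package to each new payload node; IDs and version are examples.
    for id in 3 4 5; do
        cluster rpm --add ldews-payload-cxp9020125-4.0.0-26.sle12.x86_64.rpm --node "$id"
    done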

10.3   Removing a Node

Removing a node from the cluster can be done at runtime without affecting the other nodes in the cluster.

To remove a node, perform the following steps:

  1. Power down the node to be removed.
  2. Log in as root on the first control node.
  3. Update the cluster configuration file to remove the node ID, interfaces, and any other configuration related to the removed node. Edit the cluster configuration file:

    vi /cluster/etc/cluster.conf

  4. Reload the cluster configuration on all nodes in the cluster:

    cluster config --reload --all

  5. Enter the following commands on the first control node to remove all old log files for the removed node (a combined sketch is shown after this procedure):

    cd /var/log

    rm -rf <hostname>

    <hostname> is the host name of the removed node.

  6. Enter the following commands on the first control node to remove all old boot and configuration files for the removed node.

    cd /cluster/nodes

    rm -rf <id>

    <id> is the node ID of the removed node.

  7. Log out from the first control node.
  8. Log in as root on the second control node.
  9. Issue the following commands on the second control node to remove all old log files for the removed node.

    cd /var/log

    rm -rf <hostname>

    <hostname> is the host name of the removed node.

  10. Log out from the second control node.

    The node is now removed.
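
The cleanup in steps 5, 6, and 9 can be expressed as a short sketch. The node ID and host name below are placeholders for the removed node's values:

    # Placeholders: substitute the removed node's ID and host name.
    NODE_ID=3
    NODE_HOSTNAME=payload-3
    rm -rf /var/log/"$NODE_HOSTNAME"    # run on both control nodes (steps 5 and 9)
    rm -rf /cluster/nodes/"$NODE_ID"    # run on the first control node only (step 6)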

10.4   Replacing a Node

Replacing a node in the cluster can be done at runtime without affecting the other nodes in the cluster.

To replace a node, perform the following steps:

  1. Power down the node that is to be replaced.
  2. Replace the hardware unit.
  3. Log in as root on one of the control nodes.
  4. Update the cluster configuration file by replacing the MAC addresses of the old node with the MAC addresses of the new node. Edit the cluster configuration file:

    vi /cluster/etc/cluster.conf

  5. Reload the cluster configuration on the first control node:

    cluster config --reload

  6. Log out from the first control node.
  7. Log in as root on the second control node. If not available, continue with Step 10.
  8. Reload the cluster configuration on the second control node:

    cluster config --reload

    Note:  
    While this step might seem unnecessary or redundant, it is in fact very important that the cluster configuration is reloaded on both control nodes.

  9. Log out from the second control node.
  10. Select the appropriate action:

11   Backup Restore

Backup restore makes it possible to recover from a complete failure of a site.

11.1   Restoring a Backup

The procedure in this section assumes the system is up and running when the restore procedure starts. If this is not the case, contact Ericsson support personnel for further help.

To restore a backup from a remote server, perform the following steps:

  1. Power down all nodes in the cluster.
  2. Power up the first control node.
  3. Boot into maintenance mode:

    Select Maintenance mode in the GRUB boot menu.

    To see the GRUB boot menu, a VGA or serial console must be attached to the machine. Press any key when the text Press any key to continue. is shown early in the boot phase. This text is only shown for a few seconds before the node boots using the default Operational mode.

  4. Enter the default root password when prompted.

    For more information about the default password, see the following document:

  5. Configure the network so that the node can communicate with the remote server where the backup is stored. Use the standard tool ip.

    Example: If the node is to use the network interface eth0, IP address 10.0.0.22 with netmask 255.255.255.0 and gateway with IP address 10.0.0.1, then the following would be issued:

    ip addr add 10.0.0.22/24 dev eth0

    ip link set eth0 up

    ip route add default via 10.0.0.1

  6. Start the restore:

    lde-backup --restore <user>@<server>:</path/file.tar.gz>

    <user> is a user name available on the remote server. <server> is the IP address of the remote server. </path/file.tar.gz> is the location and file name of the backup to restore on the remote server.

  7. Enter the password for <user> on <server> when prompted.
    Note:  
    The password must be entered twice due to an initial check that makes sure the backup is available before performing the actual restore.

    The system will now restore the backup. This can take a while depending on how much data is stored in the backup.

    If the restore completes successfully the following text is shown:

    Restore completed

  8. Reboot the node:

    reboot

    This time let it boot into the default Operational mode.

  9. Power on the second control node.
  10. Enter the BIOS on the second control node and configure the node for PXE (network) booting.

    For more information about how to configure the boot device, see the BIOS manual.

    Note:  
    It is very important that the second control node is not allowed to boot from its hard disk at this stage. For example, if the hard disk contains an old installation of LDEwS, then booting from the hard disk effectively means reactivation of the old cluster, which will cause the installation to be aborted and the disk mirror on the newly installed first control node to be discarded and resynchronized.

  11. Exit the BIOS.
  12. The second control node now requests an IP address and receives a response from the DHCP server running on the first control node.
    Note:  
    By default, the node uses the serial console (not the VGA console) as the primary output device. To temporarily change this and use the VGA console instead, type vga at the boot: prompt. The boot prompt is only shown for a short while (seconds) early in the boot process. If no input is given during this time, the node automatically selects the serial console as the primary output device and the boot process continues.

  13. A boot sequence follows.
  14. The following text is shown:

    Proceed with installation? (y/n)

    Enter "y".

  15. The text Available installation disks: is shown, followed by a list of disks in the system available for installation of LDEwS.

    Select one of the disks in the list by entering the number to its left and pressing Enter. To use the default disk (the first one in the list), simply press Enter.

  16. The software installation will now start and the following text is shown:

    Installing, please wait...

    The time it takes to install the software depends on which hardware is used. It can take as long as 10 minutes and is completed once the following text is shown:

    Installation completed successfully

    If anything went wrong during the installation, the following message is shown instead:

    Installation failed (see /root/install.log)

    When the installation is completed a login prompt is shown.

  17. Log in as root.

    On the second control node, use the maintenance mode password described in the following document:

  18. Reboot the second control node:

    reboot

    Note:  
    If the installation failed, instead of rebooting the node, examine the installation log (/root/install.log) to further troubleshoot the problem.

  19. Enter the BIOS on the second control node and configure the node to boot from the hard disk.

    For more information about how to configure the boot device, see the BIOS manual.

  20. Exit the BIOS.
  21. Power up all payload nodes.

    The system is now restored.

12   Post Installation Activities

Not applicable.


Reference List

[1] Core MW SW Installation, INSTALLATION INSTRUCTIONS, 1/1531-APR 901 0444/4
[2] Core MW Management Guide, USER GUIDE, 2/1553-APR 901 0444/4
[3] Core MW Software Management, USER GUIDE, 1/1553-APR 901 0444/4
[4] COM SW Installation, INSTALLATION INSTRUCTIONS, 2/1531-APR 901 0443/6
[5] LDE Glossary of Terms and Acronyms, TERMINOLOGY, 1/0033-APR 901 0551/4
[6] LDE Management Guide, USER GUIDE, 1/1553-CAA 901 2978/4
[7] LDE Product Description, DESCRIPTION, 1/1551-APR 901 0551/4
[8] LDE Trademark Information, LIST, 1/006 51-APR 901 0551/4
[9] LDEwS Upgrade Instruction, UPGRADE INSTRUCTIONS, 1/153 72-ANA 901 39/4
[10] Typographic Conventions, DESCRIPTION, 1/1551-FCK 101 05
[11] AIT Installation Instructions, INSTALLATION INSTRUCTIONS, 1/1531-APR 901 0496/1
[12] AIT User Guide, OPERATING INSTRUCTIONS, 1/1543-APR 901 0496/1
[13] LDE Programmer's Guide, USER INSTRUCTIONS, 1/198 17-CAA 901 2978/4
[14] BRF-C Management Guide, USER GUIDE, 1/1553-APR 901 0485/1
[15] BRF-C Software Installation Instructions, INSTALLATION INSTRUCTIONS, 1/1531-APR 901 0485/1
[16] BRF-EIA Software Installation Instructions, INSTALLATION INSTRUCTIONS, 2/1531-APR 901 0485/1
[17] LDE CBA Adaptations, USER GUIDE, 1/1553-CXP 902 0284/4


Copyright

© Ericsson AB 2015. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document LDE Trademark Information.
