Software Installation for Virtual and Cloud Deployment
Ericsson Dynamic Activation 1

Contents

1 Introduction
1.1 Typographic Conventions
1.2 Prerequisites

2 Installation Process

3 Installation
3.1 Prerequisites for Installing Dynamic Activation
3.2 Guidelines for the Readers
3.3 Deploying Virtual Machines
3.3.1 Deploying Dynamic Activation in Virtualized Environment
3.3.2 Deploying Dynamic Activation in Cloud - ECEE
3.3.3 Deploying Dynamic Activation in Cloud - OpenStack
3.3.4 Installing Licenses (Only ECEE and OpenStack)
3.3.5 Installing HSS Validator Plug-in (Optional)
3.3.6 Creating Administrative Users
3.3.7 SSL Configuration (Optional)
3.3.8 Configurations
3.3.9 Modify Notification Rules (Optional)
3.3.10 Set Initial License Counters
3.3.11 External OpenID Connect Provider Configuration (Optional)

4 SNMP Configuration

5 Update and Rollback of Dynamic Activation on Virtualized Deployment
5.1 Preparations
5.2 Update Instructions
5.2.1 All Nodes with Downtime
5.2.2 Node by Node without Downtime
5.3 Rollback Instructions

6 Backup

Reference List

1   Introduction

This document contains instructions for installing Ericsson Dynamic Activation (EDA) in virtualized and cloud deployments.

1.1   Typographic Conventions

Typographic conventions are described in the document Library Overview, Reference [1].

For information about abbreviations used throughout this document, see Glossary of Terms and Acronyms, Reference [2].

1.2   Prerequisites

The following are the prerequisites to make full use of this document:

2   Installation Process

This section gives an overview of the whole installation process.

Table 1 lists the installation steps, including an estimate of the time needed to perform them.

Table 1    Installation Process

  Installation Step               Time Estimation
  ------------------------------  ------------------------------------------------
  Preparing deployment files(1)   About 30 minutes. For details, see Section 3.3.
  Deploy Virtual Machines         Less than 30 minutes. For details, see Section 3.3.
  Backup                          For details, see Backup and Restore Guideline for
                                  Virtual and Cloud Deployment, Reference [5].

  (1) Not valid for Cloud deployment using ECEE or OpenStack.


3   Installation

This section covers how to install the software to be used for Dynamic Activation.

3.1   Prerequisites for Installing Dynamic Activation

This section lists the Dynamic Activation installation prerequisites:

3.2   Guidelines for the Readers

3.3   Deploying Virtual Machines

Attention!

For commercial deployment by using KVM and VMware ESXi, it is mandatory to set up external block storage for nodes 1-3, see section Set up Persistent Block Storage on VMs in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

In ECEE deployment, an ephemeral disk is used. In OpenStack deployment, an ephemeral disk is used by the instance, and a Cinder volume is used by Cassandra.
Note:  
The following needs to be performed for every added Virtual Machine (VM).

If using KVM, make sure:

For KVM deployment, continue with Section 3.3.1.

If using VMware ESXi, make sure:

For VMware ESXi deployment, continue with Section 3.3.1.

If using ECEE, make sure:

For ECEE Cloud deployment, continue with Section 3.3.2.

If using OpenStack, make sure:

For OpenStack Cloud deployment, continue with Section 3.3.3.

3.3.1   Deploying Dynamic Activation in Virtualized Environment

This section describes how to install Dynamic Activation in a virtualized deployment.

Figure 1   Workflow of Virtualization Deployment

3.3.1.1   Preparing Deployment Artifacts

When the prerequisites specified in Section 3.3 are met, continue with the following steps:

  1. Download and start the EDA Deployment Manager:
    • Save the zip file EDA_Deployment_Manager.zip to, for example, a folder on a local machine.
    • Unpack the zip file.
    • Double-click the .jar file.
      Note:  
      Requires Oracle Java version 1.8.0_71 or higher. The file is about 600 KB in size.

      If Oracle Java version 1.8.0_71 or later is not available, it is possible to create a .bat file that runs the tool with a downloaded JRE instead. To create such a file:

      Note:  
      This alternative works only on Windows.

      1. Download (do not install) the appropriate Oracle JRE tar.gz file for the machine that the EDA Deployment Manager tool will be run from, and store it in the same folder where the ActivationDeploymentArtifactManager.jar file was unpacked.
      2. In the same directory, create the EDA_DeploymentManager.bat file, with the following content:

        <Path to java.exe file in downloaded JRE> -jar ActivationDeploymentArtifactManager.jar

  2. Fill in the required generic values.

    Start by filling in the values that are shared by all VMs:

    Example, Generic values shared by all VMs


  3. Add hosts.

    Add a host entry for each hypervisor that will host Dynamic Activation VMs.

    To add a host, right-click on List of VM hosts and select Add Host:

    Add Host


  4. Add VMs to the hosts.

    Add a VM entry, under its respective host, for each VM that will be deployed.

    To add a VM, right-click on a host, for example Host-1 and select Add VMs:

    Example, Add VM - Before configuration:



    Example, Add VM - After configuration:


  5. Fill in the VM-specific data:
    Note:  
    Parameters Number of vCPUs per VM, Amount of RAM (GB), # of vCPUs (0 = default) and Amount of RAM (GB) (0 = default) are not applicable if the chosen target hypervisor is VMware.

    Example, VM-specific data


  6. After adding the desired hosts, their respective VMs, and entering all the required generic values, click the Generate Hypervisor Artifacts button; see the example in Figure 2. Store the generated artifacts (bootstrap.iso) in, for example, an empty directory on a local machine.

Example Configuration of EDA Deployment Manager

Figure 2   EDA Deployment Manager

  7. From the EDA Deployment Manager tool, save the updated deployment schema file (.ds) in an appropriate storage area. This file can be used, for example, for future cluster expansion.
Note:  
If there is a need to deploy a virtualized Dynamic Activation cluster in an unsupported configuration, it is still possible to generate artifacts using the EDA Deployment Manager (possibly with some dummy values) and then modify the generated artifacts before deployment.

The main artifact to focus on is the user-data file for each VM, which, as the name suggests, contains user data. It is possible to add custom user commands, as well as modify existing ones, before deployment.


  8. If using KVM, continue with Section 3.3.1.2.

    If using VMware ESXi, continue with Section 3.3.1.3.
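As a quick sanity check before continuing, it is possible to verify that every generated node folder contains its bootstrap.iso. This is a hedged sketch; the node-n folder layout is an assumption based on Step 6, and the script is run from the directory where the artifacts were stored.

```shell
# Sketch: verify that each generated node-n folder contains a bootstrap.iso.
# The node-n directory layout is an assumption based on Step 6.
for d in node-*/; do
    if [ -f "${d}bootstrap.iso" ]; then
        echo "OK: ${d}bootstrap.iso"
    else
        echo "MISSING: ${d}bootstrap.iso"
    fi
done
```

Any line starting with MISSING means the artifacts for that node must be regenerated before deployment.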

3.3.1.2   Deploying Hypervisor Artifacts - KVM

The following instruction is used to deploy Dynamic Activation:

  1. Create one folder on each KVM host that will hold deployment artifacts as well as the images for each VM.
    Note:  
    Make sure that there is enough disk space. The space required is at least 61 GB multiplied by the number of VMs that will run on that specific host.

  2. Copy or move all node-n folders to their respective KVM host.
    Note:  
    The folders themselves need to be copied, not just their content.

    For example:

  3. On all KVM hosts, extract the <Software_Package>.tar.gz file (EDA System KVM&Cloud SW) in the same folder where the node-n folders were previously stored; see Step 6 in Section 3.3.1.1:

    # tar xvf <Software_Package>.tar.gz

  4. On all KVM hosts where the node-n folders reside, run the deploy.sh script, contained in the extracted (EDA System KVM&Cloud SW) file, to define and start all VMs:

    # KVM/deploy.sh

    Note:  
    Depending on the host performance and the number of VMs in the cluster, the time needed to complete the deployment can vary. Verify that all processes are operational with the command in Step 2 in Section 3.3.1.4.

  5. Continue with Section 3.3.1.4.
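The disk-space requirement in Step 1 can be checked with a short sketch before deployment. The VM count and the checked path (the current directory) are hypothetical example values; the 61 GB-per-VM figure is from the note in Step 1.

```shell
# Sketch: check free space against the 61 GB-per-VM requirement from Step 1.
# NUM_VMS and the checked path (.) are hypothetical example values.
NUM_VMS=3
REQUIRED_GB=$((61 * NUM_VMS))
AVAILABLE_GB=$(df --output=avail -BG . | tail -1 | tr -dc '0-9')
if [ "$AVAILABLE_GB" -ge "$REQUIRED_GB" ]; then
    echo "OK: ${AVAILABLE_GB} GB available, ${REQUIRED_GB} GB required"
else
    echo "INSUFFICIENT: ${AVAILABLE_GB} GB available, ${REQUIRED_GB} GB required"
fi
```

Run the check in the folder created in Step 1 on each KVM host, with NUM_VMS set to the number of VMs planned for that host.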

3.3.1.3   Deploying Hypervisor Artifacts - VMware

This section includes instructions on how to create a template that will be used to deploy a Dynamic Activation cluster.

  1. Unpack the <Software_Package>.ova file (EDA System VMWare SW) that was delivered with the Dynamic Activation deliverable.
  2. In VMware vSphere, upload the bootstrap.iso files that were created with EDA Deployment Manager in Step 6 in Section 3.3.1.1, to a datastore that is connected to each hypervisor.
    Note:  
    There is one specific bootstrap.iso for each VM.

  3. In VMware vSphere, go to Deploy OVF Template and follow the vSphere instructions.

    Use the following settings:

    • Disk format - Thin Provisioning

    • Connect the VM's internal interface to the internal network

    • Connect the VM's external OAM interface to the external OAM network

  4. Deploy OVF template.
    Note:  
    Choose to NOT start the VM.

  5. Configure CPU and Memory resources according to the resource planning of the Dynamic Activation system. Refer to Requirements on Virtualization and Cloud Infrastructure, Reference [3].
  6. Convert the VM created in Step 3 to a template.
  7. Use the template created in Step 6 to create the desired number of VMs. The number should be the same as the number of VMs created in the EDA Deployment Manager tool.
  8. For all VMs, connect each bootstrap.iso to the virtual CD/DVD.
  9. If traffic separation is used between OAM and Provisioning, add a Network adapter to node-1 and node-2, and bind it to the Traffic/Provisioning network.
  10. It is strongly recommended to create an affinity rule specifying that node-1, node-2, and node-3 cannot be placed on the same host.
  11. Power on all created VMs.
  12. Continue with Section 3.3.1.4.

3.3.1.4   Activating Dynamic Activation

Note:  
Make sure all VMs are deployed on their respective physical hosts before proceeding with the following instruction.

  1. Log in as root on node-1 and run the following command to make sure that all configurations are completed (the total changes are 0 for all nodes in the cluster):
    Note:  
    If running the command too early, a password prompt may appear. This means that the puppet configuration has not finished distributing the SSH keys. The following may also appear: /var/lib/puppet/state/last_run_summary.yaml does not exist yet. Keep running the command until all changes are 0 and the message /var/lib/puppet/state/last_run_summary.yaml does not exist yet no longer appears.

    # for host in $(grep 'node-' /etc/hosts | awk '{print $1}'); do ssh "$host" "grep -A1 changes /var/lib/puppet/state/last_run_summary.yaml"; done

    Example of a successful output in a three node cluster:

      changes:
        total: 0
      changes:
        total: 0
      changes:
        total: 0
    

     
  2. Check that all processes are running (UP) on all VMs:

    # 3ppmon status --host all

    All processes must have status UP or, if the process is not supposed to run on the node, have a dash (-).

  3. Obtain the license locking codes for the system.

    From the master VM (type name node-1):

    # /var/log/installfiles/<Prod_Number>-<Version>/ema licenseCodes

    Output:

    INFO - *** Locking codes for <node-1>:
    INFO - ***
      Sentinel RMS Development Kit 8.6.2.0053 Host Locking Code Information Utility
      Copyright (C) 2015 SafeNet, Inc.
     
     
     
                    Locking Code 1     : 2008-*15J 2JTH NXAK S4MY
                    Locking Code 1 (Old Style) : 2008-2DA0B
     
    INFO - *** Locking codes for <node-2>:
    INFO - ***
      Sentinel RMS Development Kit 8.6.2.0053 Host Locking Code Information Utility
      Copyright (C) 2015 SafeNet, Inc.
     
     
     
                    Locking Code 1     : 2008-*1Q4 V7H2 7Q88 53NM
                    Locking Code 1 (Old Style) : 2008-C5D4C
    

  4. Provide all locking codes to the Ericsson License Information System (ELIS) and get the license file.
  5. Transfer (SFTP) the license file to /var/log/installfiles/ on the master VM (type name node-1), and rename the file to license.txt. All license files are then installed automatically when the ema deploy -p EMA script is run.
    Note:  
    It is possible to proceed with the installation without installing the licenses at this stage. In that case, the licenses need to be installed manually later. For detailed information on how to manually install the licenses, see section License Administration in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

  6. Install Dynamic Activation software:

    Both Resource Activation and Resource Configuration will be installed.

    Run the following command from node-1:

    # /var/log/installfiles/<Prod_Number>-<Version>/ema deploy -p EMA

  7. From node-1, run the following command to check that no errors exist and that all bindings are OK:

    # bootloader.py node status --host all

  8. Change password for the root account on all VMs in the cluster. Execute the following command on all nodes:

    # passwd
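The advice in the note to Step 1, to keep running the command until all changes are 0, can be captured in a small helper that inspects the collected output. This is a sketch, not part of the product tooling; the summary format follows the example output in Step 1.

```shell
# Sketch: decide whether every node reports 0 puppet changes.
# The "changes:/total:" format follows the example output in Step 1.
all_changes_zero() {
    # succeed only when no "total:" line reports a non-zero count
    ! printf '%s\n' "$1" | grep 'total:' | grep -qv 'total: 0'
}

# stand-in for the output collected by the ssh loop in Step 1
summary='changes:
  total: 0
changes:
  total: 0
changes:
  total: 0'

if all_changes_zero "$summary"; then
    echo "All nodes report 0 changes"
else
    echo "Configuration still in progress"
fi
```

In practice, the summary variable would be filled by the ssh loop from Step 1, and the check repeated until it succeeds.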

When Dynamic Activation is successfully installed, continue with Section 3.3.5.

3.3.2   Deploying Dynamic Activation in Cloud - ECEE

This section describes how to install Dynamic Activation in cloud deployment by using ECEE.

Figure 3   Workflow of Deployment in ECEE

3.3.2.1   Unpack Deployment Package

  1. Make sure that the deployment package <Software_Package>.tar.gz (EDA System KVM&Cloud SW) is available.
  2. Unpack the deployment package (EDA System KVM&Cloud SW) to a location that can be reached from ECEE:

    # tar -xzf <Software_Package>.tar.gz

    The following files are unpacked:

    • CXP-<version>.qcow2 – Dynamic Activation image file
    • CEE folder that contains:
      • vEDA_infrastructure.yaml
      • vEDA_single.yaml
      • vEDA_cluster.yaml
      • vEDA_cluster_without_anti_affinity_rules.yaml
      • vEDA_expansion.yaml (used for system expansion, not installation)
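After unpacking, a short check can confirm that the expected template files are present. This is a hedged sketch; the file list follows the enumeration above, and the script is run from the directory where the package was unpacked.

```shell
# Sketch: confirm the expected CEE template files were unpacked.
# The file list follows the enumeration above.
missing=0
for f in CEE/vEDA_infrastructure.yaml CEE/vEDA_single.yaml \
         CEE/vEDA_cluster.yaml CEE/vEDA_cluster_without_anti_affinity_rules.yaml; do
    if [ -e "$f" ]; then
        echo "found: $f"
    else
        echo "MISSING: $f"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "All template files present"
```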

3.3.2.2   Deploy Image

Use Atlas to deploy the Dynamic Activation image.

  1. Log on to Atlas as a user with correct rights.
  2. Choose Compute > Images > Create Image.
  3. Enter an image name, and select Image File from the Image Source drop-down list.
  4. Use Choose File to select the CXP-<version>.qcow2 file that was unpacked in Section 3.3.2.1.
  5. Click Create Image, and wait until the image is created.
  6. Choose Compute > Images, and check the image:
    • Status must be Active

3.3.2.3   Deploy Infrastructure

Use Atlas to deploy Dynamic Activation Infrastructure.

  1. Log on to Atlas as a user with correct rights.
  2. Upload the infrastructure template.
    Note:  
    The infrastructure template needs to be uploaded only once. If it was already uploaded and launched, go to Step 3.

    1. Choose Orchestration > Catalog, and click Upload.
    2. Enter an application name.
    3. In Type, select HOT.
    4. In the Application Source drop-down list, select From File, then use Choose File to select the vEDA_infrastructure.yaml file that was unpacked in Section 3.3.2.1, and then click Upload.
    5. Choose Orchestration > Catalog, select the created application name, and click Launch, and then Next.
    6. Set the following configurations:
      • A Stack Name
      • The password of the logged-in user
      • Management/provisioning network address
      • Management network gateway
      • Public Management DHCP net pool start
      • Public Management DHCP net pool end
      • VlanId that is configured in CMXes for the management network
      • Internal network address
    7. Click Launch, and wait until the stack is launched.
      Attention!

      Only one Dynamic Activation infrastructure template can be launched at a time. Launching multiple infrastructure templates causes deployment failures.

    8. In Orchestration > Stacks, check the created stack:
      • Status must be Create Complete
  3. Upload the Dynamic Activation deployment template.
    1. Choose Orchestration > Catalog, and click Upload.
    2. Enter an application name.
    3. In Type, select HOT.
    4. In the Application Source drop-down list, select From File, and then use Choose File to select one of the following files, unpacked in Section 3.3.2.1:
      • vEDA_cluster.yaml – for commercial deployment
      • vEDA_single.yaml – one VM instance in single deployment, for non-commercial usage
      • vEDA_cluster_without_anti_affinity_rules.yaml – three VM instances (node-1 to node-3) deployed on the same underlying server, for non-commercial usage
      Note:  
      Both non-commercial deployments provide the full functionality of the Dynamic Activation system, but with limited capacity.

    5. Click Upload, and wait until the file is uploaded.
    6. In Orchestration > Catalog, check the application:
      • Status must be Active

3.3.2.4   Install Virtual Dynamic Activation Cluster

Use the ECEE GUI Atlas to install the virtual Dynamic Activation cluster.

  1. Log on to Atlas as a user with correct rights.
  2. In Orchestration > Catalog, ensure that the application for deployment created in Section 3.3.2.3 is in Active status.
  3. Click Launch, and then Next.
  4. Set the following configurations:
    • A Stack Name
    • The password of the logged-in user
    • Name of instance/hostname
    • VRID of the vEDA cluster
      Note:  
      The VRID must be unique when there are several vEDA deployments in the same subnet.

  5. In Name of the vEDA image drop-down list, select the image created in Section 3.3.2.2.
  6. In vEDA flavor drop-down list, select a flavor name.
    Note:  
    The first part of the flavor name is the same as the stack name created in Section 3.3.2.3.

  7. Set the ephemeral size, which specifies the size (in GB) of the Cassandra database.
  8. In Number of vEDA instances in addition to the minimum 3 mandatory vEDA instances, enter the number of additional nodes beyond the minimum three nodes (node-1, node-2 and node-3, which are mandatory for commercial use).
    Note:  
    For information on how to determine the number, refer to Requirements on Virtualization and Cloud Infrastructure, Reference [3].

  9. Set the following configurations:
  10. Click Launch, and wait until all VM instances are launched.
  11. Choose Orchestration > Stacks, select the created stack name, and then:
    • Choose Events tab to check:
      • Status must be Create Complete
      • Status Reason must be Stack CREATE completed successfully
    • Choose Overview tab to find:
      • Node-1 and node-2 external IP addresses
      • VIP external/provisioning traffic

3.3.2.5   Verify Dynamic Activation Cloud Deployment

It takes approximately 10 minutes to finish deploying Dynamic Activation in the ECEE environment.

To verify the deployment, do as follows:

  1. Log on as the root user (password rootroot) to all created VM instances by using SSH.
    Note:  
    Node-1 and node-2 external IP addresses can be found in Step 11 in Section 3.3.2.4.

    Node-3 can be reached from node-1 by using SSH.


  2. Change the root user password.

    # passwd

  3. From the master VM (with type name node-1), check the log file located in /var/log/cloud-init.log and look for the text:

    "Deploy of package EMA finished!"

  4. Check the network setup. For instructions, refer to section Network Setup Check in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

When Dynamic Activation is deployed successfully in Cloud, continue with Section 3.3.4.

3.3.3   Deploying Dynamic Activation in Cloud - OpenStack

This section describes how to install Dynamic Activation in cloud deployment by using OpenStack Newton.

Figure 4   Workflow of Deployment in OpenStack

3.3.3.1   Unpack Deployment Package

  1. Make sure that the deployment package <Software_Package>.tar.gz (EDA System KVM&Cloud SW) is available.
  2. Unpack the deployment package (EDA System KVM&Cloud SW):

    # tar -xzf <Software_Package>.tar.gz

    The following files are unpacked:

    • CXP<version>.qcow2 – Dynamic Activation image file
    • OpenStack folder that contains:
      • vEDA_infrastructure.yaml
      • vEDA_single.yaml
      • vEDA_cluster.yaml
      • vEDA_cluster_without_anti_affinity_rules.yaml
      • vEDA_expansion.yaml (used for system expansion, not installation)
  3. Store the Dynamic Activation image file and the OpenStack folder to a location that can be reached from OpenStack.

3.3.3.2   Deploy Image

Use the OpenStack GUI to deploy the Dynamic Activation image.

  1. Log on to OpenStack GUI as a user with correct rights.
  2. Choose Compute > Images > Create Image.
  3. Enter a name in the Image Name box.
  4. In the File section, click the Browse... button to select the CXP<version>.qcow2 file that was unpacked in Section 3.3.3.1.
    1. In the Format section, choose QCOW2 - QEMU Emulator.
    2. Set Image Sharing to Private.
  5. Click Create Image, and wait until the image is created.
  6. Choose Compute > Images, and check the image:
    • Status must be Active

3.3.3.3   Deploy Infrastructure

Use the OpenStack GUI to deploy Dynamic Activation Infrastructure.

  1. Log on to the OpenStack GUI as a user with correct rights.
  2. Upload the infrastructure template.
    1. Choose Orchestration > Stacks, and click Launch Stack.
    2. Enter an application name.
    3. In the Template Source drop-down list, select File, then use Choose File to select the vEDA_infrastructure.yaml file that was unpacked in Section 3.3.3.1, and then click Next.
    4. Set the following configurations:
      • A Stack Name
      • The password of the logged-in user
      • Name of provider Network
      • Management/provisioning network address
      • Internal network address
    5. Click Launch, and wait until the stack is launched.
      Attention!

      Only one Dynamic Activation infrastructure template can be launched at a time. Launching multiple infrastructure templates causes deployment failures.

    6. In Orchestration > Stacks, check the created stack:
      • Status must be Create Complete

3.3.3.4   Install Virtual Dynamic Activation Cluster

Use the OpenStack GUI to install the virtual Dynamic Activation cluster.

  1. Log on to the OpenStack GUI as a user with correct rights.
  2. Upload the Dynamic Activation deployment template.
    1. Choose Orchestration > Stacks, and click Launch Stack.
    2. In the Template Source drop-down list, select File, and then use Choose File in the Template File area to select one of the following files, unpacked in Section 3.3.3.1:
      • vEDA_cluster.yaml – for commercial deployment
      • vEDA_single.yaml – one VM instance in single deployment, for non-commercial usage
      • vEDA_cluster_without_anti_affinity_rules.yaml – three VM instances (node-1 to node-3) deployed on the same underlying server, for non-commercial usage
      Note:  
      Both non-commercial deployments provide the full functionality of the Dynamic Activation system, but with limited capacity.

    3. Click Next.
    4. Set the following configurations:
      • A Stack Name
      • The password of the logged-in user
      • VM hostname prefix
      • Name of provider Network
      • Dynamic Activation cluster VRID
        Note:  
        The VRID must be unique when there are several vEDA deployments in the same subnet.

      • In the Name of the vEDA image box, enter the name of the image created in Section 3.3.3.2.
      • In flavor drop-down list, select a flavor name.
      • Choose the Availability Zone in which the vEDA cluster will be deployed.
      • Set the volume size, which specifies the size (in GB) of the Cassandra database.
      • In Number of vEDA instances in addition to the minimum 3 mandatory vEDA instances, enter the number of additional nodes beyond the minimum three nodes (node-1, node-2 and node-3, which are mandatory for commercial use).
        Note:  
        For information on how to determine the number, refer to Requirements on Virtualization and Cloud Infrastructure, Reference [3].

      • Set the following configurations:
  3. Click Launch, and wait until all VM instances are launched.
  4. Choose Orchestration > Stacks, click on the created stack name and then:
    • Choose Events tab to check:
      • Status must be Create Complete
      • Status Reason must be Stack CREATE completed successfully
    • Choose Overview tab to find:
      • Floating IP for external/provisioning traffic/GUI, node-1 and node-2

3.3.3.5   Verify Dynamic Activation OpenStack Cloud Deployment

It takes approximately 10 minutes to finish deploying Dynamic Activation in the OpenStack environment.

To verify the deployment, do as follows:

  1. Log on as the root user (password rootroot) to all created VM instances by using SSH.
    Note:  
    Node-1 and Node-2 external IP addresses can be found in Step 4 in Section 3.3.3.4.

  2. Change the root user password.

    # passwd

  3. From the master VM (with type name node-1), check the log file located in /var/log/cloud-init.log and look for the text:

    "Deploy of package EMA finished!"

  4. Check the network setup. For instructions, refer to section Network Setup Check in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

When Dynamic Activation is deployed successfully in OpenStack Cloud, continue with Section 3.3.4.

3.3.4   Installing Licenses (Only ECEE and OpenStack)

After the Dynamic Activation software installation (on ECEE or OpenStack), perform the following procedure to install the licenses:

  1. Log in to the master VM (with type name node-1), where the Sentinel license server resides.
  2. Obtain the locking code from node-1 (the license server is installed on both node-1 and node-2):

    # cd /opt/sentinel/bin

    # ./echoid

    Example Printout

    Sentinel RMS Development Kit 8.6.2.0053 Host Locking Code Information Utility
    Copyright (C) 2014 SafeNet, Inc.


       Locking Code 1  : 2008-*1MS LHEN 9GMR X8EQ
       Locking Code 1 (Old Style) : 2008-BA44D

    Caution!

    The echoid command needs to be run in the /opt/sentinel/bin/ directory. Otherwise, the license provided by the Ericsson License Information System (ELIS) will not work.

  3. If using a High Availability (HA) solution, log in to the backup VM (with type name node-2), and repeat Step 2 to obtain the locking codes.
  4. Provide all locking codes to ELIS and get the license file.
  5. Make sure that the license server is started:

    # 3ppmon startlserv

  6. Set the environment variable LSHOST to the license server:

    # export LSHOST=localhost

  7. Import the licenses from the license file.

    The licenses can be imported either from a file or from the string found in the file.

    Example of License

    13 FAT1022833/5 Ni LONG NORMAL NETWORK EXCL 100000 INFINITE_KEYS
     15 NOV 2013 11 NOV 2016 NO_SHR SLM_CODE 1 NON_COMMUTER NO_GRACE NO_OVERDRAFT
    CL_ND_LCK NON_REDUNDANT AB99F208,FB97E2008 NO_HLD 5 M2M_Start,_
    PS T,fzJ:wrGlhYlTlWOoaZ3VyN5wPB1aJd4HVM505BvjfWZAcemFO1DYYtplY:y90yLgTU2Vw2Z7C1x
    FWRupieI93p#AID=fe665bf5-a24a-4c3c-910b-e882f41cb146

    Note:  
    The license string cannot contain any word-wrapping and the whole string needs to be on a single line for Sentinel™ to be able to read it.

    Install the Dynamic Activation license from a string:

    Note:  
    Install the Dynamic Activation license file on both the master VM (with type name node-1), and backup VM (with type name node-2). Perform Step 5 to Step 7 on both nodes.

    # cd /opt/sentinel/bin

    # ./lslic -A "<license_string>"

    Example Printout

    # ./lslic -A /home/actadm/license-design

    Sentinel RMS Development Kit 8.6.2.0053
    License Addition/Deletion Utility
    Copyright (C) 2014 SafeNet, Inc.
    License code
    13 FAT1022833/5 Ni LONG NORMAL NETWORK EXCL 100000 INFINITE_KEYS
     15 NOV 2013 11 NOV 2016 NO_SHR SLM_CODE 1 NON_COMMUTER NO_GRACE NO_OVERDRAFT
    CL_ND_LCK NON_REDUNDANT AB99F208,FB97E2008 NO_HLD 5 M2M_Start,_
    PS T,fzJ:wrGlhYlTlWOoaZ3VyN5wPB1aJd4HVM505BvjfWZAcemFO1DYYtplY:y90yLgTU2Vw2Z7C1x
    FWRupieI93p#AID=fe665bf5-a24a-4c3c-910b-e882f41cb146

    or

    Install the Dynamic Activation license from a file.

    Transfer (SFTP) the license file to /tmp on both the master VM (type name node-1), and backup VM (with type name node-2):

    # cd /opt/sentinel/bin

    # ./lslic -F /tmp/<file_name>

    Example Printout

    Sentinel RMS Development Kit 8.6.2.0053
    License Addition/Deletion Utility
    Copyright (C) 2014 SafeNet, Inc.
    License code
    13 FAT1022833/5 Ni LONG NORMAL NETWORK EXCL 100000 INFINITE_KEYS
     15 NOV 2013 11 NOV 2016 NO_SHR SLM_CODE 1 NON_COMMUTER NO_GRACE NO_OVERDRAFT
    CL_ND_LCK NON_REDUNDANT AB99F208,FB97E2008 NO_HLD 5 M2M_Start,_
    PS T,fzJ:wrGlhYlTlWOoaZ3VyN5wPB1aJd4HVM505BvjfWZAcemFO1DYYtplY:y90yLgTU2Vw2Z7C1x
    FWRupieI93p#AID=fe665bf5-a24a-4c3c-910b-e882f41cb146

  8. Verify on localhost that a license is installed on both nodes:

    # /opt/sentinel/bin/lsmon localhost

    Note:  
    Make sure that the response does not contain:

    There is no license in the server


  9. For the new license to take effect, an update of the Dynamic Activation applications needs to be performed in the Dynamic Activation GUI.
    1. Log on to the Dynamic Activation web GUI using HTTPS:

      https://<VIP-OAM-IP>:8383/management

      For more information, refer to User Guide for Resource Activation, Reference [10].

    2. Go to System > Licenses
    3. Click the update arrow for the related Dynamic Activation feature to load the license on the system.

3.3.5   Installing HSS Validator Plug-in (Optional)

This section contains information on how to install the HSS Validator Plug-in. It is only applicable for User Data Consolidation (UDC) provisioning installations.

Note:  
Make sure to have the correct Plug-in available.

If the file to download has .tgz as filename extension, it first needs to be unpacked and then repacked as tar.gz before being renamed.

The file needs to have the name <name>-<R-state>.tar.gz, for example HssProvisioningValidator-R4A.tar.gz. If it does not, it needs to be renamed.

The correct <R-state> version is found inside the tar.gz file, at the .jar level.
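The repackaging and renaming described above can be illustrated with a short sketch. All file and directory names here are hypothetical stand-ins for the actual downloaded plug-in; the R-state (R4A) follows the example in the note.

```shell
# Sketch: repackage a downloaded .tgz plug-in as <name>-<R-state>.tar.gz.
# All names here are hypothetical stand-ins; R4A follows the example above.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# stand-in for the downloaded plug-in archive
echo demo > HssProvisioningValidator.jar
tar -czf HssProvisioningValidator.tgz HssProvisioningValidator.jar

# unpack, then repack with the tar.gz extension and the R-state suffix
mkdir unpacked
tar -xzf HssProvisioningValidator.tgz -C unpacked
tar -czf HssProvisioningValidator-R4A.tar.gz -C unpacked .

ls HssProvisioningValidator-R4A.tar.gz
```

The resulting HssProvisioningValidator-R4A.tar.gz is the file that is copied to node-1 in Step 1 below.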


  1. Copy the HSS Validator Plug-in software to the /home/bootloader/repository/ directory on node-1.
  2. Change owner and group of HssProvisioningValidator-<R-state>.tar.gz by running the following command.

    # chown actadm:activation /home/bootloader/repository/HssProvisioningValidator-R4A.tar.gz

    Note:  
    In this example, <R-state> is R4A.

  3. From node-1, repeat the following command for all nodes in the cluster, to add the plug-in as a submodule:

    # bootloader.py submodule add -n <HSS Plugin>.tar.gz -t lib-ext -p dve-application --host <hostname>

    <hostname> is the hostname of the node to which the submodule is being added.

  4. From node-1, repeat the following command for all nodes in the cluster, one by one, to activate the plug-in as a submodule:

    # bootloader.py node activate --host <hostname>

    <hostname> is the hostname of the node on which the submodule is activated.

3.3.6   Creating Administrative Users

Create non-root users for administrative purposes, such as reading log files, monitoring processes, managing Dynamic Activation processes, installing modules, and more. For information on how to create administrative users, see section Users > Create Administrative User in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

3.3.7   SSL Configuration (Optional)

For information on how to configure SSL, follow the instructions in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

3.3.8   Configurations

Before Dynamic Activation is fully operational, the different application services need to be configured. Refer to Configuration Manual for Resource Activation, Reference [11].

For information on how to import the default NE groups and routing methods to the Dynamic Activation system, refer to Load Default NE Groups and Routing Methods in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

3.3.9   Modify Notification Rules (Optional)

For HSS and DAE provisioning, the application notification rules files need to be deployed to Dynamic Activation in order to send the notification message to the relevant FE. The application notification rules file needs to be retrieved from each application.

For more information, see section Notification Rules File Administration in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

3.3.10   Set Initial License Counters

Note:  
This section is only valid when migrating subscribers from monolithic NEs to User Data Consolidation (UDC).

For more information, refer to License Counter Management, Reference [13].

3.3.11   External OpenID Connect Provider Configuration (Optional)

For information on how to configure External OpenID Connect Provider, follow the instructions in System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

4   SNMP Configuration

The SNMP configuration is handled automatically when the correct OSS IP address is provided in the EDA Deployment Manager tool.

For information on how to change trap destination, see System Administrators Guide for Virtual and Cloud Deployment, Reference [4].

5   Update and Rollback of Dynamic Activation on Virtualized Deployment

An update upgrades all RPMs and modules on the current system to a newer software version, for example to a new Correction Package (CP) or Product Customization Package (PC) level.

Note:  
Check the Delivery Report for details about supported update paths. Contact Ericsson support for further instructions.

The update can be performed node by node, without downtime, or on all nodes at once, with downtime; see Section 5.2. If any previously modified configuration files are affected by the update, they may need to be modified again.

To see if a configuration file is replaced in platform RPMs, and where the backup of the original file is placed, look in the /home/actadm/config/log/config.log file. Search for <date> [INFO] Replacing <file> with new config file, and <date> [INFO] Backing up <file> to <file>.

To see if a configuration file is replaced in the common and provisioning logic modules, and where the backup of the original file is placed, look in the /home/bootloader/config/module_config_files/log/config.log file. Search for <config file> replaced due to new original file, old file renamed to <config file>.save.
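A quick way to scan the logs is with grep. The sketch below builds a mock log with entries in the format described above, so the patterns can be tried safely outside a live system; the log contents are invented examples:

```shell
# Create a mock config.log; the real file lives at
# /home/actadm/config/log/config.log on the node.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2017-05-02 [INFO] Replacing /etc/demo/app.conf with new config file
2017-05-02 [INFO] Backing up /etc/demo/app.conf to /var/backup/app.conf
EOF

# List which configuration files were replaced and where the
# originals were backed up.
grep -E 'Replacing .* with new config file|Backing up .* to ' "$LOG"
```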

Note:  
New provisioning features need to be configured. For instructions, refer to Configuration Manual for Resource Activation, Reference [11].

5.1   Preparations

Backup VMs:

To support seamless rollback of the VMs, a backup of the VM disks is mandatory before an update.

Before proceeding to Section 5.2, refer to System Administrators Guide for Virtual and Cloud Deployment, Reference [4] and Backup and Restore Guideline for Virtual and Cloud Deployment, Reference [5] for more information about backing up VMs.

Note:  
The backup procedure should be done for all the VMs in the cluster before proceeding to Section 5.2.

Backup Cassandra:

When updating a Dynamic Activation system, back up the Cassandra database by following the instructions in Backup and Restore Guideline for Virtual and Cloud Deployment, Reference [5].

Copy the Software Package (EDA System Base SW):

On the first node, node-1, as user root:

Caution!

Make sure there is enough free disk space (4 GB minimum) in /var/log/ to be able to copy and untar the new tar file.

  1. Transfer the software to the /var/log/installfiles directory on node-1.
  2. Change directory:

    # cd /var/log/installfiles/

  3. Untar the software (EDA System Base SW):

    # tar -zxf <Software_Package>.tar.gz
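The 4 GB requirement from the caution above can be verified with df before transferring the package; a minimal sketch:

```shell
# 4 GB expressed in 1K blocks, matching the unit reported by df -Pk.
REQUIRED_KB=$((4 * 1024 * 1024))

# The fourth field of the second output line is the available space.
avail_kb=$(df -Pk /var/log | awk 'NR==2 {print $4}')

if [ "$avail_kb" -ge "$REQUIRED_KB" ]; then
    echo "OK: ${avail_kb} KB available in /var/log"
else
    echo "WARNING: only ${avail_kb} KB available, need ${REQUIRED_KB} KB" >&2
fi
```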

5.2   Update Instructions

To update all nodes in the cluster at the same time, resulting in downtime, follow the steps in Section 5.2.1. To update the cluster node by node, without any downtime, follow the steps in Section 5.2.2.

5.2.1   All Nodes with Downtime

The fastest way to update the whole cluster is to update all nodes at the same time, resulting in downtime.

Attention!

Make sure all traffic is down. Ongoing traffic will cause inconsistency.

Note:  
The update needs to be performed from the first node (node-1), as user root.

  1. Update the cluster:

    # cd /var/log/installfiles/<Prod_Number>-<Version>

    # ./ema update --host all

    The following is prompted:

    Warning: Updating all nodes at the same time means that ongoing provisioning will fail. Are you sure you want to continue? (y/n)

    Enter y and press Enter.

  2. Run CAI3G test traffic.

    Run test traffic to verify the updated nodes. If traffic does not work as expected, perform a rollback.

5.2.2   Node by Node without Downtime

Note:  
    Updating the cluster node by node is not supported for Resource Configuration.

Start with updating node-1. Verify that the updated nodes work as expected by executing traffic on test ports. After verification, enable the nodes to take regular traffic, and then update the rest of the nodes.

During update, node-1 is temporarily disabled from regular traffic. This can cause some performance decrease.

Note:  
The update needs to be performed from the first node (node-1), as user root.

  1. Set the Services to only register test services when activated on node-1:

    # bootloader.py config set --parameter @REGISTER_SERVICES@ --value false

    # bootloader.py config set --parameter @REGISTER_TEST_SERVICES@ --value true

  2. Go to /var/log/installfiles/<Prod_Number>-<Version> and update node-1:

    # cd /var/log/installfiles/<Prod_Number>-<Version>

    # ./ema update --host node-1

  3. Check that all processes are running on the updated node (node-1).

    From node-1, run the following commands:

    # 3ppmon status --host node-1

    # bootloader.py node status --host node-1

  4. Run CAI3G test traffic.

    Run test traffic on the test ports (8888, 8989) to verify that node-1 is working properly.

  5. Set the Services to register real services when activated:

    # bootloader.py config remove --parameter @REGISTER_SERVICES@

    # bootloader.py config remove --parameter @REGISTER_TEST_SERVICES@

  6. Restart the services on node-1 so that it can take regular traffic:

    # bootloader.py node activate --host node-1

  7. When node-1 is updated, repeat Step 2 to Step 3 for the rest of the nodes in the cluster.
    Note:  
    Make sure to replace --host <nodeId> with the ID of the node that is currently being updated, for example node-2, node-3, and so on.

  8. When all nodes in the cluster are updated, the update to a new software level is completed.
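Step 7 can be expressed as a loop over the remaining nodes. This dry-run sketch only prints the Step 2 and Step 3 commands; the node list is an assumption to be adjusted to the actual cluster:

```shell
# Assumed remaining nodes; extend to match the actual cluster size.
REMAINING="node-2 node-3"

for node in $REMAINING; do
    # Step 2: update the node (printed only, dry run).
    echo "./ema update --host $node"
    # Step 3: check that all processes run on the updated node.
    echo "3ppmon status --host $node"
    echo "bootloader.py node status --host $node"
done
```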

5.3   Rollback Instructions

The rollback procedure rolls back Dynamic Activation to the software level that was installed before the last update.

For information on how to back up and restore VMs, see Backup and Restore Guideline for Virtual and Cloud Deployment, Reference [5].

6   Backup

When the system is installed and properly configured, make a full backup to be able to revert to the original state when needed. Create a full backup as described in Backup and Restore Guideline for Virtual and Cloud Deployment, Reference [5].


Reference List

Ericsson Documents
[1] Library Overview, 18/1553-CSH 109 628 Uen
[2] Glossary of Terms and Acronyms, 0033-CSH 109 628 Uen
[3] Requirements on Virtualization and Cloud Infrastructure, 2/2135-CSH 109 628 Uen
[4] System Administrators Guide for Virtual and Cloud Deployment, 3/1543-CSH 109 628 Uen
[5] Backup and Restore Guideline for Virtual and Cloud Deployment, 6/1553-CSH 109 628 Uen
[6] Network Description and Configuration for Virtual and Cloud Deployment, 1/1551-CSH 109 628 Uen
[7] Customer Questionnaire for Virtual and Cloud Deployment, 2/1057-CSH 109 628 Uen
[8] Parameter List for Virtual Deployment, 3/1057-CSH 109 628 Uen
[9] Parameter List for CEE Deployment, 6/1057-CSH 109 628 Uen
[10] User Guide for Resource Activation, 1/1553-CSH 109 628 Uen
[11] Configuration Manual for Resource Activation, 2/1543-CSH 109 628 Uen
[12] Product Overview, 1550-CSH 109 628 Uen
[13] License Counter Management, 1/197 21-CSH 109 628 Uen
[14] Hardening Guideline for Virtual and Cloud Deployment, 2/154 43-CSH 109 628 Uen


Copyright

© Ericsson AB 2017. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document Trademark Information.
