MTAS Scaling Management Guide
MTAS

Contents

1   Introduction
1.1   Prerequisites

2   Overview
2.1   Scaling Terminology
2.2   Limitations
2.3   Subfunctions

3   Procedures
3.1   Preparation
3.2   Auto Scale-Out
3.3   Graceful Scale-In
3.4   Forceful Scale-In
3.5   Post Activities

1   Introduction

This document describes the scalability functions of the MTAS cluster as a distributed system. It also provides instructions for expanding or contracting the cluster with these functions.

Unless a scaling type is explicitly mentioned, this document refers to horizontal scaling, where multiple instances distribute the load in parallel to provide the needed capacity. Vertical scaling is not covered in this document.

1.1   Prerequisites

This section describes the prerequisites that must be fulfilled before expanding or contracting the MTAS cluster.

1.1.1   Licenses

The scaling function does not require a license.

1.1.2   Documents

Before starting these procedures, the following documents must be available:

1.1.3   Conditions

Before starting this procedure, ensure that the following conditions are met:

2   Overview

This section provides an overview of the scaling procedures. For the operational steps, see Section 3 Procedures.

2.1   Scaling Terminology

Throughout this document the following terminology is used:

Node: A compute resource; it can be a physical hardware blade or a virtual machine (VM) instance.
Fixed Domain: The set of nodes that cannot be the subject of a scaling operation. The fixed domain of MTAS permanently consists of the SC-1 and SC-2 nodes and cannot be changed.
Scaling Domain: The set of nodes that can be the subject of a scaling operation. The MTAS scaling domain consists of all traffic nodes (PL-3, PL-4, PL-5 ... PL-N).

2.2   Limitations

This section summarizes the limitations related to the scaling functions.

2.2.1   PL-3 and PL-4 Nodes Are Not Scalable

Even though the PL-3 and PL-4 nodes are considered part of the scaling domain, they cannot be scaled in.

2.3   Subfunctions

This section describes the subfunctions related to the scalability of the cluster.

2.3.1   Auto Scale-Out

Auto Scale-Out is an operation in which one or more new compute resources are launched (see Figure 1) and the system automatically detects, configures, and brings up the nodes as members of the scaling domain of the cluster. See Figure 2 for an example where one new compute node is added to the cluster.

Figure 1   New Compute Resource Spawned and Available

Figure 2   After Auto Scale-Out New Resource is Added to Cluster

2.3.2   Graceful Scale-In

Graceful Scale-In is an operation in which one or more compute resources that are part of the scaling domain of the cluster (see Figure 3) are removed from the cluster (see Figure 4) to free up resources.

Figure 3   Node Named PL-(N-1) Is Part of Cluster

Figure 4   Node Named PL-(N-1) is Removed from Cluster and Its Resources Can Be Released

Note:  
The cluster can reject the Graceful Scale-In operation if, according to the system's automatic estimate, the cluster at its target size would not have the memory resources needed to serve the ongoing traffic.
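The exact admission check is internal to the product, but the kind of estimate involved can be sketched as follows. The helper name scale_in_is_safe, the per-node memory model, and the 80% headroom default are illustrative assumptions, not product behavior: removing k of n traffic nodes is treated as safe only if the memory currently in use fits on the surviving nodes with some headroom.

```shell
# Illustration only: the cluster's real memory estimate is internal.
# scale_in_is_safe is a hypothetical helper showing the idea.
scale_in_is_safe() {
    used_mb=$1          # memory in use across the scaling domain, in MB
    node_mb=$2          # memory per traffic node, in MB
    n=$3                # current number of traffic nodes
    k=$4                # number of nodes to remove
    headroom=${5:-80}   # usable fraction of the remaining memory, in percent
    awk -v u="$used_mb" -v m="$node_mb" -v n="$n" -v k="$k" -v h="$headroom" \
        'BEGIN { exit !(u <= (n - k) * m * h / 100) }'
}

# Example: with 6000 MB in use on 3 nodes of 4000 MB each, removing one
# node leaves 2 * 4000 * 0.8 = 6400 MB usable, so the scale-in passes;
# with 7000 MB in use it would be rejected.
```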

2.3.3   Forceful Scale-In

Forceful Scale-In is, like Graceful Scale-In, an operation that removes one or more nodes from the scaling domain of the cluster. The difference is that in this case the node is not available (see Figure 5), either because it has already freed up its resources or because of a failure. The removal is therefore an administrative operation only (see Figure 6).

Figure 5   Node Named PL-(N-1) in the Cluster Scaling Domain is Unavailable

Figure 6   Node Named PL-(N-1) is Removed Administratively From Cluster

3   Procedures

This section describes the procedures of preparation, Auto Scale-Out, Graceful Scale-In, and Forceful Scale-In.

3.1   Preparation

This section describes preparation for the procedure.

3.1.1   Prerequisites

Before starting these procedures, the user performing the operations must have access to the System Controller (SC) nodes.

Scaling must only be performed after site-specific initial configuration is applied on the node. For more details on scaling, refer to MTAS Hardening Guide.

3.1.2   Enable Scaling Feature

To enable the scaling feature:

  1. Connect to one of the SC nodes:

    ssh <user>@<system management IP address>

  2. Check the operational state of the scaling feature:

    SC-1: ~ # cmw-configuration --status SCALING

    Disable

  3. If the result is Disable, enable scaling functionality:

    SC-1: ~ # cmw-configuration --enable SCALING
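Steps 2 and 3 can be combined into a small wrapper. This is a sketch only: needs_enable is a hypothetical helper, and the cmw-configuration calls are shown as comments rather than executed, since that CLI exists only on an SC node.

```shell
# Sketch only; needs_enable is a hypothetical helper that inspects the
# status text printed by "cmw-configuration --status SCALING".
needs_enable() {
    case "$1" in
        *Disable*) return 0 ;;   # status text says the feature is off
        *)         return 1 ;;   # already enabled, nothing to do
    esac
}

# On a live SC node the combination would look like:
#   status=$(cmw-configuration --status SCALING)
#   if needs_enable "$status"; then
#       cmw-configuration --enable SCALING
#   fi
```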

3.1.3   Remove MRFP Links

If the MTAS is configured to use External Media Resource Function Controller (MRFC), skip this section.

To remove the Media Resource Function Processor (MRFP) links when scaling, refer to MTAS Media Control Management Guide.

3.1.4   Create Backup

Before any scaling-related activities are performed, create a backup. Refer to Create Backup.

3.2   Auto Scale-Out

This section describes the Auto Scale-Out procedure step by step.

3.2.1   Prerequisites

Before starting these procedures, ensure that the following conditions are met:

3.2.2   Create One or More Compute Resources

Creating a compute resource in the Virtualized Network Function (VNF) is out of the scope of this document. Follow the instructions of the cloud management system on how to create a Virtual Machine (VM) instance.

The Scale-Out procedure is triggered automatically once the new resource is launched and available.

Note:  
The newly created VM or VMs must have the same number of Virtual CPUs, the same amount of RAM, and the same number of ports as the other Payload (PL) VMs in the cluster.

3.2.3   Monitor the Scale-Out Progress

To monitor the progress of the Scale-Out operation through the COM CLI:

  1. Connect to the cluster through COM CLI:

    ssh -p 830 -t -s <user>@<OAM VIP> cli

  2. Navigate to the CrM Managed Object (MO), for example:

    >ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1

  3. Verify that the scale-out process has started:

    (CrM=1)>show -r

    The following is an example output:

    (CrM=1)>show -r
    CrM=1
       autoRoleAssignment=ENABLED
       ComputeResourceRole=PL-3
          adminState=UNLOCKED
          instantiationState=INSTANTIATED
          operationalState=ENABLED
          provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=Default-Role"
          uses="ManagedElement=1,Equipment=1,ComputeResource=PL-3"
       ComputeResourceRole=PL-4
          adminState=UNLOCKED
          instantiationState=INSTANTIATED
          operationalState=ENABLED
          provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=Default-Role"
          uses="ManagedElement=1,Equipment=1,ComputeResource=PL-4"
       ComputeResourceRole=PL-5
          adminState=UNLOCKED
          instantiationState=INSTANTIATING
          operationalState=DISABLED
          provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=Default-Role"
          uses="ManagedElement=1,Equipment=1,ComputeResource=PL-5"
       ComputeResourceRole=SC-1
          adminState=UNLOCKED
          instantiationState=INSTANTIATED
          operationalState=ENABLED
          provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=SYSTEM"
          uses="ManagedElement=1,Equipment=1,ComputeResource=SC-1"
       ComputeResourceRole=SC-2
          adminState=UNLOCKED
          instantiationState=INSTANTIATED
          operationalState=ENABLED
          provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=SYSTEM"
          uses="ManagedElement=1,Equipment=1,ComputeResource=SC-2"
       Role=Default-Role
          isProvidedBy
             "ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-3"
             "ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-4"
             "ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-5"
          scalability=SCALABLE
       Role=SYSTEM
          isProvidedBy
             "ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=SC-1"
             "ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=SC-2"
          scalability=NON_SCALABLE
    (CrM=1)>

    This example shows that instantiationState has changed to INSTANTIATING for node PL-5, which means that the scale-out has started.

  4. Continue to monitor the progress until the scale-out process has ended and the added node has joined the cluster:

    (CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState

    The following example output shows the final result:

    (CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState
    ComputeResourceRole=PL-3
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    ComputeResourceRole=PL-4
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    ComputeResourceRole=PL-5
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    ComputeResourceRole=SC-1
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    ComputeResourceRole=SC-2
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    (CrM=1)>

    This example shows that for node PL-5 instantiationState has changed to INSTANTIATED and operationalState to ENABLED, which means that PL-5 has been added to the cluster and has joined it as an operational member.
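The polling in step 4 can be reduced to a one-line filter. The helper pending_nodes below is an illustrative sketch (not a product tool) that scans the captured show output and prints each node that is not yet INSTANTIATED and ENABLED; the scale-out is complete when it prints nothing.

```shell
# Illustrative sketch; pending_nodes is not a product command. It reads
# the output of "show -m ComputeResourceRole -p instantiationState,
# operationalState" on stdin and prints the nodes still coming up.
pending_nodes() {
    awk -F= '
        /^ComputeResourceRole=/ { node = $2; inst = "" }
        /instantiationState/    { inst = $2 }
        /operationalState/      {
            if (inst != "INSTANTIATED" || $2 != "ENABLED") print node
        }
    '
}

# Example usage with the CLI output captured to a file:
#   pending_nodes < captured_show_output.txt
```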

3.2.4   Check State of the Cluster

The Scale-Out procedure can be considered successfully finished if the cluster is in a healthy state after the operation. Refer to MTAS Health Check.

3.3   Graceful Scale-In

This section provides a step-by-step guide for the Graceful Scale-In procedure.

3.3.1   Prerequisites

Before starting these procedures, ensure that the following conditions are met:

3.3.2   Scale-In One PL

To remove a PL from the cluster:

  1. Connect to the cluster through the COM CLI:

    ssh -p 830 -t -s <user>@<OAM VIP> cli

  2. Remove one or more PL nodes by navigating to the corresponding ComputeResourceRole MO in configure mode and removing the provides attribute.

    The following is an example of removing PL-5:

    >ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-5
    (ComputeResourceRole=PL-5)>configure
    (config-ComputeResourceRole=PL-5)>no provides
    (config-ComputeResourceRole=PL-5)>up
    (config-CrM=1)>commit

3.3.2.1   Cancel Scale-In

The Scale-In procedure can be canceled before the operation is committed.

The following is an example of canceling a Scale-In of multiple nodes in the COM CLI:

>ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-5
(ComputeResourceRole=PL-5)>configure
(config-ComputeResourceRole=PL-5)>no provides
(config-ComputeResourceRole=PL-5)>up
(config-CrM=1)>ComputeResourceRole=PL-6
(config-ComputeResourceRole=PL-6)>no provides
(config-ComputeResourceRole=PL-6)>abort

3.3.3   Monitor Scale-In Progress

To monitor the progress of the Scale-In operation through the COM CLI:

  1. Verify that the Scale-In process has started:

    >ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1

    (CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState

    The following is an example output when the node PL-5 is the subject of Scale-In:

    (CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState
    ComputeResourceRole=PL-3
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    ComputeResourceRole=PL-4
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    ComputeResourceRole=PL-5
       instantiationState=SHUTTINGDOWN
       operationalState=UNINSTANTIATING
    ComputeResourceRole=SC-1
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    ComputeResourceRole=SC-2
       instantiationState=INSTANTIATED
       operationalState=ENABLED
    (CrM=1)>

    The changed state attributes of the PL-5 node show that the graceful Scale-In has started.

  2. The Scale-In procedure can only be considered successfully finished when the compute resource entry can no longer be found through the COM CLI.

    The following is an example where PL-5 was scaled in:

    (CrM=1)>show

    The following is an example output where ComputeResourceRole=PL-5 does not exist any more.

    (CrM=1)>show
    CrM=1
       autoRoleAssignment=ENABLED
       ComputeResourceRole=PL-3
       ComputeResourceRole=PL-4
       ComputeResourceRole=SC-1
       ComputeResourceRole=SC-2
       Role=Default-Role
       Role=SYSTEM
    (CrM=1)>
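The absence check above can be scripted against the captured show output. The helper node_removed below is a hypothetical name used for illustration; it succeeds only when the given ComputeResourceRole entry no longer appears.

```shell
# Sketch only; node_removed is a hypothetical helper. It reads the
# captured "show" output on stdin and succeeds (exit 0) only when the
# named ComputeResourceRole entry is no longer present.
node_removed() {
    ! grep -q "ComputeResourceRole=$1"
}

# Example usage with the CLI output captured to a file:
#   node_removed PL-5 < captured_show_output.txt && echo "PL-5 is gone"
```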

3.3.4   Remove Compute Resource

Removing a compute resource from the VNF is out of the scope of this document. Follow the instructions of the cloud management system on how to remove VMs from the VNF.

3.3.5   Check State of the Cluster

The Graceful Scale-In procedure can be considered successfully finished if the cluster is in a healthy state after the operation. Refer to MTAS Health Check.

3.3.6   Troubleshoot Scale-In Failures

In case of an unsuccessful Scale-In operation, refer to MTAS Troubleshooting Guideline.

3.4   Forceful Scale-In

This section provides a step-by-step guide for the Forceful Scale-In procedure.

3.4.1   Prerequisites

Before starting these procedures, ensure that the following conditions are met:

3.4.2   Scale-In Unavailable PL

This step is equivalent to the corresponding step of the Graceful Scale-In procedure, see Section 3.3.2.

3.4.3   Monitor Scale-In Progress

This step is equivalent to the corresponding step of the Graceful Scale-In procedure, see Section 3.3.3.

3.4.4   Check State of the Cluster

The Forceful Scale-In procedure can be considered successfully finished if the cluster is in a healthy state after the operation. Refer to MTAS Health Check.

3.4.5   Troubleshoot Scale-In Failures

This step is equivalent to the corresponding step of the Graceful Scale-In procedure, see Section 3.3.6.

3.5   Post Activities

This section describes the post scaling activities needed for MTAS.

3.5.1   Add MRFP Links

If the MTAS is configured to use External MRFC, skip this section.

To add the MRFP links after scaling, refer to MTAS Media Control Management Guide.



Copyright

© Ericsson AB 2016. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document Trademark Information.
