Contents

1 Introduction
1.1 Prerequisites
2 Overview
2.1 Scaling Terminology
2.2 Limitations
2.3 Subfunctions
3 Procedures
3.1 Preparation
3.2 Auto Scale-Out
3.3 Graceful Scale-In
3.4 Forceful Scale-In
3.5 Post Activities
1 Introduction
This document describes the scalability functions of the MTAS cluster as a distributed system. It also provides instructions for expanding or contracting the cluster using these functions.
Unless a scaling type is explicitly stated, this document refers to horizontal scaling, where the load is distributed in parallel across multiple instances to provide the required capacity. Vertical scaling is not covered in this document.
1.1 Prerequisites
This section describes the prerequisites that must be fulfilled before expanding or contracting the MTAS cluster.
1.1.1 Licenses
The scaling function does not require a license.
1.1.2 Documents
Before starting these procedures, the following documents must be available:
- MTAS Health Check
- MTAS Hardening Guide
- MTAS Media Control Management Guide
- MTAS Troubleshooting Guideline
- Virtualized MTAS 1.2 Characteristics Specification
- Create Backup
1.1.3 Conditions
Before starting this procedure, ensure that the following conditions are met:
- The procedure must only be performed by support personnel experienced with Cloud and MTAS.
- No other upgrade or maintenance activity may be performed during the procedure.
2 Overview
This section provides an overview of the scaling procedures. For the operational steps, see Section 3 Procedures.
2.1 Scaling Terminology
Throughout this document, the following terminology is used:

| Term | Definition |
| Node | A compute resource; either a physical hardware blade or a virtual machine (VM) instance. |
| Fixed Domain | The set of nodes that cannot be subject to a scaling operation. The fixed domain of MTAS permanently consists of the SC-1 and SC-2 nodes and cannot be changed. |
| Scaling Domain | The set of nodes that can be subject to a scaling operation. The MTAS scaling domain consists of all traffic nodes (PL-3, PL-4, PL-5 ... PL-N). |
2.2 Limitations
This section summarizes the limitations relating to the scaling functions.
2.2.1 PL-3 and PL-4 Nodes Are Not Scalable
Even though PL-3 and PL-4 nodes are considered to be part of the scaling domain, they cannot be scaled in.
2.3 Subfunctions
This section describes the subfunctions related to the scalability of the cluster.
2.3.1 Auto Scale-Out
Auto Scale-Out is an operation in which one or more new compute resources are launched, see Figure 1, and the system automatically detects, configures, and brings up the new nodes as members of the scaling domain of the cluster. See Figure 2 for an example in which one new compute node is added to the cluster.
2.3.2 Graceful Scale-In
Graceful Scale-In is an operation in which one or more compute resources that are part of the scaling domain of the cluster, see Figure 3, are removed from the cluster, see Figure 4, to free up resources.
- Note:
- The cluster can reject a Graceful Scale-In operation if, according to the system's automatic estimation, the target cluster size would not have enough memory resources to serve the ongoing traffic.
2.3.3 Forceful Scale-In
Forceful Scale-In is, similarly to Graceful Scale-In, an operation that removes one or more nodes from the scaling domain of the cluster. The only difference is that in this case the node is no longer available, see Figure 5, either because it has already freed up its resources or because of a failure. Therefore, the removal is only an administrative operation, see Figure 6.
3 Procedures
This section describes the procedures of preparation, Auto Scale-Out, Graceful Scale-In, and Forceful Scale-In.
3.1 Preparation
This section describes preparation for the procedure.
3.1.1 Prerequisites
Before starting these procedures, the user performing the operations must have access to the System Controller (SC) nodes.
Scaling must only be performed after the site-specific initial configuration has been applied to the node. For more details on scaling, refer to the MTAS Hardening Guide.
3.1.2 Enable Scaling Feature
To enable the scaling feature:
- Connect to one of the SC nodes:
ssh <user>@<system management IP address>
- Check the operational state of the scaling feature:
SC-1: ~ # cmw-configuration --status SCALING
Disable
- If the result is Disable, enable the scaling functionality:
SC-1: ~ # cmw-configuration --enable SCALING
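- Optionally, verify the new state by repeating the status check from the previous step. The Enable output shown here is an assumption, by analogy with the Disable state shown above:
SC-1: ~ # cmw-configuration --status SCALING
Enable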
3.1.3 Remove MRFP Links
If the MTAS is configured to use External Media Resource Function Controller (MRFC), skip this section.
To remove the Media Resource Function Processor (MRFP) links when scaling, refer to MTAS Media Control Management Guide.
3.1.4 Create Backup
Before any scaling-related activities are performed, create a backup. Refer to Create Backup.
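The backup command itself is defined in Create Backup. As a sketch only, on systems that provide the same cmw-* command family used elsewhere in this guide, a backup might be created from an SC node as follows; both the command and the backup name are assumptions:
SC-1: ~ # cmw-backup-create pre_scaling_backup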
3.2 Auto Scale-Out
This section provides a step-by-step guide for the Auto Scale-Out procedure.
3.2.1 Prerequisites
Before starting these procedures, ensure that the following conditions are met:
- The cluster is in a healthy state; refer to MTAS Health Check.
- The target size of the cluster does not exceed the maximum cluster size supported; refer to Virtualized MTAS 1.2 Characteristics Specification.
- The user monitoring the Scale-Out procedure has access to the COM Command-Line Interface (CLI).
3.2.2 Create One or More Compute Resources
Creating a compute resource in the Virtualized Network Function (VNF) is outside the scope of this document. Follow the instructions given by the cloud management system on how to create a Virtual Machine (VM) instance; an illustrative sketch is given after the note below.
The Scale-Out procedure is triggered automatically once the new resource is available and launched.
- Note:
- The newly created VM or VMs must have the same number of Virtual CPUs, the same amount of RAM, and the same number of ports as the other Payload (PL) VMs in the cluster.
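As an illustration only, the following sketch assumes an OpenStack-based cloud management system; the flavor, image, network, and instance names are hypothetical and must be replaced with the site-specific values used for the existing PL VMs:
# Illustrative only: launch a new PL VM with the same flavor (vCPUs, RAM,
# ports) and networks as the existing PL VMs; all names are hypothetical.
openstack server create \
    --flavor mtas-pl-flavor \
    --image mtas-pl-image \
    --network mtas-internal-net \
    --network mtas-traffic-net \
    PL-5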
3.2.3 Monitor the Scale-Out Progress
To monitor the progress of the Scale-Out operation through the COM CLI:
- Connect to the cluster through COM CLI:
ssh -p 830 -t -s <user>@<OAM VIP> cli
- Navigate to the CrM Managed Object (MO), for example:
>ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1
- Verify that the scale-out process has started:
(CrM=1)>show -r
The following is an example output:
(CrM=1)>show -r
CrM=1
autoRoleAssignment=ENABLED
ComputeResourceRole=PL-3
adminState=UNLOCKED
instantiationState=INSTANTIATED
operationalState=ENABLED
provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=Default-Role"
uses="ManagedElement=1,Equipment=1,ComputeResource=PL-3"
ComputeResourceRole=PL-4
adminState=UNLOCKED
instantiationState=INSTANTIATED
operationalState=ENABLED
provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=Default-Role"
uses="ManagedElement=1,Equipment=1,ComputeResource=PL-4"
ComputeResourceRole=PL-5
adminState=UNLOCKED
instantiationState=INSTANTIATING
operationalState=DISABLED
provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=Default-Role"
uses="ManagedElement=1,Equipment=1,ComputeResource=PL-5"
ComputeResourceRole=SC-1
adminState=UNLOCKED
instantiationState=INSTANTIATED
operationalState=ENABLED
provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=SYSTEM"
uses="ManagedElement=1,Equipment=1,ComputeResource=SC-1"
ComputeResourceRole=SC-2
adminState=UNLOCKED
instantiationState=INSTANTIATED
operationalState=ENABLED
provides="ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,Role=SYSTEM"
uses="ManagedElement=1,Equipment=1,ComputeResource=SC-2"
Role=Default-Role
isProvidedBy
"ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-3"
"ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-4"
"ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-5"
scalability=SCALABLE
Role=SYSTEM
isProvidedBy
"ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=SC-1"
"ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=SC-2"
scalability=NON_SCALABLE
(CrM=1)>
This example shows that instantiationState has changed to INSTANTIATING for node PL-5. It means that the scale-out has started.
- Continue to monitor the progress until the scale-out process has ended and the added node has joined the cluster:
(CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState
The following example output shows the final result:
(CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState
ComputeResourceRole=PL-3
instantiationState=INSTANTIATED
operationalState=ENABLED
ComputeResourceRole=PL-4
instantiationState=INSTANTIATED
operationalState=ENABLED
ComputeResourceRole=PL-5
instantiationState=INSTANTIATED
operationalState=ENABLED
ComputeResourceRole=SC-1
instantiationState=INSTANTIATED
operationalState=ENABLED
ComputeResourceRole=SC-2
instantiationState=INSTANTIATED
operationalState=ENABLED
(CrM=1)>
This example shows that instantiationState has changed to INSTANTIATED for node PL-5, meaning that PL-5 has been added to the cluster, and that operationalState has changed to ENABLED, meaning that PL-5 has joined the cluster.
3.2.4 Check State of the Cluster
The Scale-Out procedure can be considered successfully finished if the cluster is in a healthy state after the operation; refer to MTAS Health Check.
3.3 Graceful Scale-In
This section provides a step-by-step guide for the Graceful Scale-In procedure.
3.3.1 Prerequisites
Before starting these procedures, ensure that the following conditions are met:
- The cluster is in a healthy state; refer to MTAS Health Check.
- The user performing the operations has access to the COM CLI.
- The SC-1, SC-2, PL-3, and PL-4 nodes cannot be subject to a Scale-In operation.
3.3.2 Scale-In One PL
To remove a PL from the cluster:
- Connect to the cluster through the COM CLI:
ssh -p 830 -t -s <user>@<OAM VIP> cli
- Remove one or more PL nodes by navigating to the corresponding ComputeResourceRole MO in configure mode and removing the provides attribute.
The following is an example of removing PL-5:
>ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-5
(ComputeResourceRole=PL-5)>configure
(config-ComputeResourceRole=PL-5)>no provides
(config-ComputeResourceRole=PL-5)>up
(config-CrM=1)>commit
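Several PL nodes can be removed in a single configuration transaction. The following sketch combines the commands above with the navigation shown in the cancellation example below; the node names PL-5 and PL-6 are examples:
>ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-5
(ComputeResourceRole=PL-5)>configure
(config-ComputeResourceRole=PL-5)>no provides
(config-ComputeResourceRole=PL-5)>up
(config-CrM=1)>ComputeResourceRole=PL-6
(config-ComputeResourceRole=PL-6)>no provides
(config-ComputeResourceRole=PL-6)>up
(config-CrM=1)>commit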
3.3.2.1 Cancel Scale-In
The Scale-In procedure can be canceled at any point before the operation is committed; aborting the configuration session discards all pending changes.
The following is an example of canceling a Scale-In of multiple nodes in the COM CLI:
>ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1,ComputeResourceRole=PL-5
(ComputeResourceRole=PL-5)>configure
(config-ComputeResourceRole=PL-5)>no provides
(config-ComputeResourceRole=PL-5)>up
(config-CrM=1)>ComputeResourceRole=PL-6
(config-ComputeResourceRole=PL-6)>no provides
(config-ComputeResourceRole=PL-6)>abort
3.3.3 Monitor Scale-In Progress
To monitor the progress of the Scale-In operation through the COM CLI:
- Verify that the Scale-In process has started:
>ManagedElement=1,SystemFunctions=1,SysM=1,CrM=1
(CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState
The following is an example output when node PL-5 is the subject of a Scale-In operation:
(CrM=1)>show -m ComputeResourceRole -p instantiationState,operationalState
ComputeResourceRole=PL-3
instantiationState=INSTANTIATED
operationalState=ENABLED
ComputeResourceRole=PL-4
instantiationState=INSTANTIATED
operationalState=ENABLED
ComputeResourceRole=PL-5
instantiationState=SHUTTINGDOWN
operationalState=UNINSTANTIATING
ComputeResourceRole=SC-1
instantiationState=INSTANTIATED
operationalState=ENABLED
ComputeResourceRole=SC-2
instantiationState=INSTANTIATED
operationalState=ENABLED
(CrM=1)>
The PL-5 node attributes instantiationState=SHUTTINGDOWN and operationalState=UNINSTANTIATING show that the Graceful Scale-In has started.
- The Scale-In procedure can only be considered successfully finished if the compute resource entry cannot be found through the COM CLI.
The following is an example where PL-5 was scaled in:
(CrM=1)>show
The following is an example output where ComputeResourceRole=PL-5 no longer exists:
(CrM=1)>show
CrM=1
autoRoleAssignment=ENABLED
ComputeResourceRole=PL-3
ComputeResourceRole=PL-4
ComputeResourceRole=SC-1
ComputeResourceRole=SC-2
Role=Default-Role
Role=SYSTEM
(CrM=1)>
3.3.4 Remove Compute Resource
Removing a compute resource from the VNF is outside the scope of this document. Follow the instructions given by the cloud management system on how to remove VMs from the VNF; an illustrative sketch follows.
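As an illustration only, assuming the same OpenStack-based cloud management system and hypothetical instance name used in the Scale-Out sketch, the scaled-in VM could be removed as follows:
# Illustrative only: delete the VM whose node was scaled in (name is hypothetical).
openstack server delete PL-5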
3.3.5 Check State of the Cluster
The Graceful Scale-In procedure can be considered successfully finished if the cluster is in a healthy state after the operation; refer to MTAS Health Check.
3.3.6 Troubleshoot Scale-In Failures
If the Scale-In operation is unsuccessful, refer to the MTAS Troubleshooting Guideline.
3.4 Forceful Scale-In
This section provides a step-by-step guide for the Forceful Scale-In procedure.
3.4.1 Prerequisites
Before starting these procedures, ensure that the following conditions are met:
- The user performing the operations has access to the COM interfaces and to the SC nodes.
- The PL-3 and PL-4 nodes cannot be subject to a Scale-In operation.
- One or more nodes are unavailable, which results in a faulty state of the cluster; refer to MTAS Health Check.
3.4.2 Scale-In Unavailable PL
This step is equivalent to the corresponding step of the Graceful Scale-In procedure; see Section 3.3.2.
3.4.3 Monitor Scale-In Progress
This step is equivalent to the corresponding step of the Graceful Scale-In procedure; see Section 3.3.3.
3.4.4 Check State of the Cluster
The Forceful Scale-In procedure can be considered successfully finished if the cluster is in a healthy state after the operation; refer to MTAS Health Check.
3.4.5 Troubleshoot Scale-In Failures
This step is equivalent to the corresponding step of the Graceful Scale-In procedure; see Section 3.3.6.
3.5 Post Activities
This section describes the post-scaling activities needed for MTAS.
3.5.1 Add MRFP Links
If the MTAS is configured to use External MRFC, skip this section.
To add the MRFP links after scaling, refer to MTAS Media Control Management Guide.
