SAPC PNF Scale Out
Ericsson Service-Aware Policy Controller

Contents

1   SAPC PNF Scale Out Introduction
1.1   SAPC PNF Scale Out Prerequisites
2   SAPC PNF Scale Out Automated Procedure
3   SAPC PNF Scale Out Step by Step Procedure

1   SAPC PNF Scale Out Introduction

The purpose of this document is to provide detailed information about the scale out procedure for traffic processors in a PNF deployment.

The PNF deployments covered in this document are BSP 8100 and NSP 6.1, which support the automated procedure (Section 2); any other hardware must use the step by step procedure (Section 3).

1.1   SAPC PNF Scale Out Prerequisites

The following requirements must be fulfilled to scale out the SAPC successfully.

2   SAPC PNF Scale Out Automated Procedure

Stop!

If the hardware used for the deployment is neither BSP nor NSP, proceed directly to Section 3.

  1. Check the health of the SAPC.

    Execute the sapcHealthCheck command, as explained in the SAPC Troubleshooting Guide, to get the SAPC state.
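
    For example, a quick way to confirm a clean result is to extract the summary counters from the report (a minimal sketch; the full report format is shown in Section 3, step 1):

    SC-1:# sapcHealthCheck | grep -E 'WARNINGS|CRITICAL ERRORS'
    WARNINGS: 0
    CRITICAL ERRORS: 0

    Both counters should be 0 before continuing.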

  2. Log in to the DMX and check that the blades to be scaled out are powered off (administrativeState locked).
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100

      :# ssh -p 2024 advanced@<DMX>

      : password:

      DMX:> show-table ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC -m Blade -p userLabel,bladeId,administrativeState

      Stop!

      If the blades used for SC-1, SC-2, PL-3 and PL-4 are not 0-1, 0-3, 0-5, and 0-7, proceed to Section 3.

    2. NSP 6.1

      :# ssh -p 2024 expert@<DMX>

      : password:

      DMX:> configure

      DMX:(config)% show ManagedElement 1 DmxFunctions 1 BladeGroupManagement 1 Group SAPC ShelfSlot Blade 1

      Stop!

      If the blades used for SC-1, SC-2, PL-3 and PL-4 are not 0-9, 0-11, 0-1, and 0-3, proceed to Section 3.

      Note: The step by step procedure is described in Section 3.

  3. Log in to one of the hosts where the virtual machines for the system controllers are running, and execute the scale out script:

    Host_1:# cd /mnt/store/SAPC/host-config/scripts/management

    Host_1:# ./pnf_scale_out.sh <sc1_address> <dmx_address> <hw_type> <initial_pl> <final_pl>

    Execute the script with no parameters to get help about the parameters and options:

    Host_1:# ./pnf_scale_out.sh

    Note:  
    A password may be requested.

    For example, for an NSP subrack with 12 blades, the command is as follows.

    Host_1:# ./pnf_scale_out.sh 192.168.100.126 10.41.32.84 NSP 5 12

  4. The script executes all needed actions. If, for some reason, the script is interrupted, it can be launched again, but the parameter <initial_pl> must be reviewed so that the blades already added to the SAPC are skipped. For instance, if PL-5 and PL-6 have already been successfully scaled out, the value of <initial_pl> must now be 7, as in the example below.
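
    Continuing the earlier NSP example (a sketch, assuming PL-5 and PL-6 were the blades added before the interruption), the relaunch starts from PL-7 so that they are skipped:

    Host_1:# ./pnf_scale_out.sh 192.168.100.126 10.41.32.84 NSP 7 12
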
  5. If there is an error, follow Data Collection Guideline for SAPC to collect all necessary information, and contact the support team.
  6. Upon successful execution, perform a SYSTEM DATA backup. For details on how to create a backup, refer to Backup and Restore.

3   SAPC PNF Scale Out Step by Step Procedure

These steps are to be used when the automated procedure is not applicable.

  1. If not done previously, check the health of the SAPC.

    SC-1:# sapcHealthCheck

    ==================== HEALTH CHECK REPORT ====================
    ..
    ..
    ..
    *** SAPC HEALTH CHECK SUMMARY ***
    WARNINGS: 0
    CRITICAL ERRORS: 0
    **********************************
    
     SAPC Health Check finished: stable.

  2. Expand the VIP configuration to all nodes. Access SC-1 and execute the following command for all PLs, replacing n with the number of the last PL.

    SC-1:# for node in {3..n}; do add_node_in_evip $node; done

    For example, for a complete cabinet with 36 blades, the command is as follows.

    SC-1:# for node in {3..36}; do add_node_in_evip $node; done

    If the addition succeeds, the command prepares the VIP on the payload for future use and reports the following.

    SC-1:# PL-$node added with success.

    If the payload already has a front end, the command reports this and does nothing.

    SC-1:# PL-$node is already in EVIP. Nothing to do.
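
    As a sketch, assuming add_node_in_evip returns a non-zero exit code when a node cannot be added (to be verified for the release in use), any failures in the loop can be logged for later review:

    SC-1:# for node in {3..36}; do add_node_in_evip $node || echo "PL-$node failed" >> /tmp/evip_scale_out.log; done  # assumes non-zero exit on failure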

  3. Perform steps 4, 5, and 6 for each blade to be scaled out.
  4. Check that the next blade to add is inactive and power it on.
    Attention!

    At this stage, the procedure depends on the hardware.

    1. BSP 8100

      InstallationServer:# ssh -p 2024 advanced@<DMX>

      InstallationServer: password:

      DMX:> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC,Blade=<blade>

      DMX:> show

      DMX:> Blade=<blade>
      administrativeState=LOCKED

      DMX:> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=SAPC,Blade=<blade>,administrativeState=UNLOCKED

      DMX:> commit

      DMX:> Commit complete.

    2. NSP 6.1

      InstallationServer:# ssh -p 2024 expert@<DMX>

      InstallationServer: password:

      DMX:> configure

      DMX:> show ManagedElement 1 Equipment 1 Shelf <shelf_number> Slot <slot_number> Blade 1 administrativeState

      DMX:> Blade=<blade>
      administrativeState=LOCKED

      DMX:> set ManagedElement 1 Equipment 1 Shelf <shelf_number> Slot <slot_number> Blade 1 administrativeState unlocked

      DMX:> commit

      DMX:> Commit complete.

    3. Other hardware

      Consult the hardware documentation for how to check blade status and power on blades.
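
    After the commit, the administrative state can be read back to verify the change; for example, with the NSP 6.1 syntax from above (UNLOCKED is the expected value after a successful commit):

    DMX:> show ManagedElement 1 Equipment 1 Shelf <shelf_number> Slot <slot_number> Blade 1 administrativeState
    administrativeState=UNLOCKED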

  5. Check that the new payload has scaled out correctly. The scale out takes some minutes, so the script waits until it finishes before returning success or failure.

    Host_1:# ssh root@192.168.100.126

    SC-1:# sapcScaleOutHealthCheck PL-<node>

     Checking PL-<node> node becomes up at TIPC level....
     The new neighbor <node> is up at TIPC level.
     Checking DBS is Started in PL-<node> at DBS level...
     DBS is Started in PL-<node>.
     Scale-Out for PL-<node> has been successfully performed.
     It is recommended to perform a whole SAPC Health Check:
     sapcHealthCheck
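
    If several new payloads have already been powered on, the check can be run for all of them in one loop (a sketch, assuming the new payloads are numbered consecutively from PL-5 to PL-12):

    SC-1:# for node in {5..12}; do sapcScaleOutHealthCheck PL-$node; done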

  6. (Optional Step) In case a front end needs to be added for that PL, execute the sapcFeeManagement tool; follow the VIP Front End Management Tool document. During an installation this step is not needed, as the front end has already been added by the adapt_cluster tool.
  7. Repeat the procedure from step 4 onwards for the next payload.
  8. If there is an error, follow Data Collection Guideline for SAPC to collect all necessary information, and contact the support team.
  9. Perform a SYSTEM DATA backup. For details on how to create a backup, refer to Backup and Restore.
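
Once all payloads have been added, performing a full health check is recommended, as also suggested by the sapcScaleOutHealthCheck output in step 5:

    SC-1:# sapcHealthCheck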