Operating Instructions 2/1543-HDA 104 03/10 Uen S

CUDB System Administrator Guide

1 Introduction

This document describes every system-level, node-level and auxiliary administration and configuration procedure available for the Ericsson Centralized User Database (CUDB).

1.1 Document Purpose and Scope

The purpose of this document is to provide detailed information about the administration and configuration procedures available for CUDB.

Therefore, the guide describes the following:

  • The resources and settings available for access and configuration in a CUDB node.

  • The interfaces and tools through which the above resources and settings can be accessed and configured.

  • The steps for performing management procedures.

The final execution permissions for a procedure are determined by the most restrictive permissions applied to any command executed during the procedure. Refer to CUDB Node Commands and Parameters for user type permissions.

1.2 Revision Information

Rev. A, B, C, D, E, F, G, H, J, K, L, M, N, S

Other than editorial changes, this document has been revised as follows:

  • Updated Ericsson personnel information.

1.3 Typographic Conventions

Typographic Conventions can be found in the following document:

2 Overview

CUDB provides several interfaces and procedures for configuring CUDB nodes. All CUDB nodes in a CUDB system are managed and configured in the same way.

The available interfaces and configuration procedures are described in the sections below.

3 Interfaces and Tools

This section provides a description of the interfaces and tools used for CUDB system administration.

3.1 Interfaces

This section describes every Operation and Maintenance (OAM) interface used to access and manage CUDB. An overall view of these interfaces is shown in Figure 1. More information on each interface is available in the following chapters.

Figure 1   OAM Interfaces

3.1.1 SNMP

The Simple Network Management Protocol (SNMP) v3 is a standard protocol used to exchange administrative information between network elements using the User Datagram Protocol (UDP). Fault reporting in CUDB happens through SNMP trap messages, which send alarm information to the Network Management System (NMS).

3.1.2 SSH

Secure Shell (SSH) is a network protocol allowing data exchange through a secure channel between two network devices.

SSH is used in CUDB to provide access to the Command Line Interface (CLI).

3.1.3 SFTP

The Secure File Transfer Protocol (SFTP) is built on top of SSH, and is used to remotely and securely access files stored in a CUDB node.

3.1.4 NETCONF

The Network Configuration Protocol (NETCONF) is a machine-to-machine interface providing means to install, modify, and delete the configuration of network devices. The CUDB NETCONF interface, provided by Ericsson, can be used for configuration purposes instead of the CUDB Configuration CLI (see CUDB Configuration CLI). It supports only a subset of the capabilities defined in the standards.

CUDB NETCONF clients must be configured to use the NETCONF SSH subsystem through the designated OAM NETCONF TCP port.

Example of starting a NETCONF session with the OpenSSH client:
ssh <user>@<target_host> -p 830 -s netconf

The options are as follows:

  • <user> – Name of a valid user account. Only the root user and users belonging to the cudbadmin group are allowed access.

  • <target_host> – The O&M IP address.

Refer to COM Management Guide and Ericsson NETCONF Interface for further details about NETCONF.

CUDB supports the use of external tools for configuration management through NETCONF such as the Ericsson NETCONF Browser. This tool provides a graphical user interface, allowing users to navigate and manage the Configuration Data Model. For more information on the Ericsson NETCONF Browser, refer to the Ericsson NETCONF Browser User Guide in the Ericsson NETCONF Browser CPI.

3.1.5 SCP

The Secure Copy Protocol (SCP) has the same purpose and use as SFTP, but implements a more efficient transfer algorithm.

SCP works with both IPv4 and IPv6 addresses. However, if an IPv6 address is used, it must be enclosed in square brackets, for example:

scp <local_path> <user_name>@[<remote_address>]:<remote_path>
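The bracketing rule can be wrapped in a small shell helper. The following is an illustrative sketch only (the scp_dest function is not part of the CUDB delivery); it brackets the address whenever it contains a colon:

```shell
# Hypothetical helper (illustration only): build an scp destination
# string, adding the square brackets that IPv6 addresses require.
scp_dest() {
    local user="$1" addr="$2" path="$3"
    case "$addr" in
        *:*) printf '%s@[%s]:%s\n' "$user" "$addr" "$path" ;;  # IPv6: brackets needed
        *)   printf '%s@%s:%s\n'   "$user" "$addr" "$path" ;;  # IPv4 or hostname
    esac
}

scp_dest cudbadmin 2001:db8::10 /cluster/backup
scp_dest cudbadmin 192.0.2.10 /cluster/backup
```

The first call prints cudbadmin@[2001:db8::10]:/cluster/backup and the second cudbadmin@192.0.2.10:/cluster/backup; the printed string can be used directly as the scp destination argument.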

3.2 Tools

This section describes the tools used to access and configure CUDB.

3.2.1 CUDB CLI

CUDB provides a CLI interface to execute both Linux Distribution Extension (LDE) and specific CUDB commands (refer to LDE Management Guide and CUDB Node Commands and Parameters). Access to the CLI interface of the CUDB nodes is offered through SSH using port 22.

To access this CLI from the administration node, use the following command:

ssh <admin_user>@<CUDB_Node_OAM_VIP_Address>

Connection to this CLI requires authentication. Refer to CUDB Users and Passwords for user credentials that can be used to access the CUDB CLI.

3.2.1.1 Finding the Active System Controller

Several troubleshooting resources (such as the cudbGetLogs and cudbAnalyser scripts, or the CoreMW console) can be executed only on the active System Controller (SC). If needed, use the following command to determine which is the active SC:

# cudbHaState | grep COM | grep ACTIVE

The expected output is similar to the following example:

Example 1  
COM is assigned as ACTIVE in controller SC-1.
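The active SC name can also be extracted programmatically, for example to use it in scripts. The sketch below parses the sample line from Example 1 with sed; on a live node, the variable would instead be fed from the cudbHaState pipeline shown above:

```shell
# Sketch: extract the active SC name from the status line.
# The sample line is copied from Example 1; on a node, obtain it with:
#   line=$(cudbHaState | grep COM | grep ACTIVE)
line='COM is assigned as ACTIVE in controller SC-1.'
active_sc=$(printf '%s\n' "$line" | sed -n 's/.*controller \(SC-[0-9]*\).*/\1/p')
echo "$active_sc"    # SC-1
```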

3.2.2 CUDB Configuration CLI

CUDB provides a CLI interface to perform configuration updates. This CLI is based on the COM CLI and can be reached from a regular CUDB CLI session on the active SC. Refer to COM Management Guide for more information.

Steps

To access the configuration CLI from a regular CLI session, follow the steps below:

  1. Establish a CUDB CLI session towards the CUDB node, use the following command:
    ssh -l cudbadmin <CUDB_Node_OAM_IP_Address>
  2. Use the command below (the output is also shown) to find the active SC:
    cudbHaState | grep COM | grep ACTIVE
    COM is assigned as ACTIVE in controller SC-1
    The active SC (SC-1 in the example above) must be used for accessing the COM CLI.
    For cudbadmin user:
    sudo cudbHaState | grep COM | grep ACTIVE
  3. Access the COM CLI by executing the following command:
    sudo /opt/com/bin/cliss

3.2.3 CUDB Platform Interfaces

The following platform elements provide a CLI for configuration purposes:

3.2.4 LDAP Schema Management Tools

CUDB provides tools for managing the Lightweight Directory Access Protocol (LDAP) schema. These tools can be executed on any Linux machine with Java, or on the SCs of a CUDB node through SSH. The available tools are as follows:

3.2.5 UDC Cockpit

The UDC Cockpit tool, formerly known as the CUDB Observability Tool, is a GUI used to monitor the system in real time and to analyze historical data. The GUI monitors LDAP counters, mastership, and alarms. The Cockpit GUI is accessed through a web interface, and uses the SSH and SNMP interfaces to communicate with the CUDB system.

4 Configuration Data Management

Most of the CUDB configuration data can be managed through the configuration model. Refer to CUDB Node Configuration Data Model Description for more information.

Configuration data management through the configuration model is possible through the CUDB Configuration CLI and NETCONF interface. See CUDB Configuration CLI for more information on how to access the CUDB Configuration CLI and NETCONF for more information on the NETCONF interface.

5 Fault Management

Fault management in CUDB provides a set of services that perform the following roles:

  • Notify the system administrator if a problem is detected that requires human intervention.

  • Notify the system administrator if particular events occur.

For more information about fault management, refer to CUDB Node Fault Management Configuration Guide.

6 Performance Management

Performance management in CUDB provides two types of statistics:

  • CUDB performance indicators, which contain information about CUDB behavior. These counters provide performance information for each CUDB node. Refer to CUDB Counters List for further information.

  • Application statistics, which provide information related to the application data stored in the CUDB system. The statistics are collected and provided on system level. Refer to CUDB Application Counters for further information.

For more information about performance management, refer to CUDB Performance Guide.

7 Logging Management

Logging management in CUDB is based on the reported log events.

For further information about logging events in CUDB, refer to CUDB Node Logging Events.

Information on configuring the Centralized Security Event Logging function is available in the Configuring Secure Centralized Security Event Logging section of CUDB Security and Privacy Management.

8 CUDB Log Collection

The CUDB system can collect the log files of a specific CUDB node or all the CUDB nodes with the cudbCollectInfo command.

This command collects the log files, and compresses them into tar file format.

For more information about this command, refer to CUDB Node Commands and Parameters.

9 Security Management

CUDB security management provides a set of features to ensure secure authentication and confidentiality for all administrative tasks and traffic scenarios.

Refer to CUDB Security and Privacy Management for information about security management in CUDB.

10 Backup and Restore Procedures

CUDB offers three types of backup operations:

  • Data backup: This operation creates data backups of the information stored in the database.

  • Software and configuration backup: This operation creates data backups of the following information:

    • CUDB node middleware configuration.

    • CUDB node configuration.

    • Software packages installed on the CUDB node.

    • Auxiliary information related to CUDB and stored in the Processing Layer Database (PLDB).

  • Configuration backup of specific infrastructure element(s).

Refer to CUDB Backup and Restore Procedures and CUDB Node Preventive Maintenance for information on how to perform backup and restore operations in CUDB.

11 Replication

Replication is automatically handled by CUDB and requires no administrative actions other than handling raised alarms. In case of replication issues, follow the procedures described in the Operating Instructions (OPIs) for replication alarms (refer to CUDB Node Fault Management Configuration Guide for further information).

12 Reconciliation

Reconciliation is automatically handled by CUDB and requires no administrative actions other than handling raised alarms. In case of reconciliation issues, follow the procedures described in the OPIs for reconciliation alarms (refer to CUDB Node Fault Management Configuration Guide for further information).

Note: In special cases, reconciliation must be requested manually. Refer to CUDB Data Storage Handling for more information on these cases and reconciliation in CUDB, and see Requesting Reconciliation Manually for more information on manual reconciliation.

13 Networking

The general network infrastructure of the CUDB system is described in CUDB Node Network Description.

14 Data and Schema Management

This section provides information on the administrative tasks related to data and schema management.

14.1 LDAP Data Import and Export

The CUDB system supports the import of LDAP data from LDAP Data Interchange Format (LDIF) files to the database, and also the export of data from the database to LDIF files. Refer to CUDB Import and Export Procedures for further information.

Import and export can also be performed by LDAP interface. This is commonly used for smaller amounts of data. Refer to Adding and Searching LDAP Data for more information.

14.2 Adding and Searching LDAP Data

Standard LDAP clients with access to the CUDB traffic network can be used to add and search data stored in CUDB through the LDAP interface.

The following chapters describe how to perform these operations with the OpenLDAP tools ldapadd and ldapsearch, which are delivered with CUDB.

14.2.1 Adding LDAP Data

The following steps give an example of adding LDAP data from LDIF files using ldapadd command, from within the CUDB node:

Steps

  1. Log on to the CUDB node with the PLDB master replica.
  2. Transfer all LDIF files to the CUDB SC node.
    Note: Verify the contents of each LDIF file and replace <root_DN> with the operator-specific root Distinguished Name (DN).
  3. Import LDIF files using the ldapadd command, for example:
    # ldapadd -Y CUDB-CRYPTO -c -H ldap://PL0:389 -U <ldap_root_user> -W -S /tmp/error.log -f <path_to_ldif_file>
    Note: A prompt to enter the LDAP root user password is shown.
    For more information about the LDAP root user password, refer to CUDB Users and Passwords.
  4. Verify that all ldapadd operations are successful by querying the corresponding error.log file for errors.
    Note: Errors related to common entries that occur in multiple LDIF files (for example, dn: serv=CSPS, ou=mscCommonData, dc=<root_DN>) can be ignored.
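The verification in Step 4 can be scripted. The sketch below is illustrative only: it fabricates a small error log, then filters out the ignorable common-entry errors; the log contents, path, and ignore pattern are example assumptions, not part of the CUDB delivery:

```shell
# Illustrative sketch: list only the errors that are NOT the ignorable
# common-entry duplicates. Path, pattern, and log lines are examples.
ERRLOG=/tmp/error.log
IGNORE='serv=CSPS, ou=mscCommonData'

# Fabricated sample log; on a node this file is produced by ldapadd -S.
cat > "$ERRLOG" <<'EOF'
ldap_add: Already exists (68) dn: serv=CSPS, ou=mscCommonData, dc=example
ldap_add: Invalid syntax (21) dn: uid=123, dc=example
EOF

# Print the remaining, non-ignorable errors.
grep -v "$IGNORE" "$ERRLOG"
```

An empty result after filtering means all reported errors were ignorable; any surviving line needs investigation.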

14.2.2 Searching LDAP Data

Searching LDAP data can be done by using the ldapsearch command.

Refer to OpenLDAP, ldapsearch Tool Manual for more information about the command.
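As an illustration, a search over the CUDB LDAP interface with standard OpenLDAP options could take the following form. This command template is not taken from the CUDB delivery; the bind DN, base DN, attribute, and value are placeholders to be replaced with operator-specific values:

```
ldapsearch -H ldap://PL0:389 -x -D <bind_DN> -W -b dc=<root_DN> "(<attribute>=<value>)"
```

Here -H selects the LDAP URI, -x requests simple authentication, -D and -W supply the bind DN and prompt for its password, -b sets the search base, and the final argument is the search filter.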

14.3 LDAP Schema Editing

The Schema Tools software provides an easy-to-use graphical interface to create and edit LDAP schemas. For further information, refer to CUDB LDAP Schema Management Graphical User Interface.

Note: The graphical tool requires a Linux machine with Java.


Note: LDAP schema management is restricted to Ericsson personnel. Contact the next level of maintenance support to perform such procedures.

14.4 Schema Update and Index Creation on New Attributes

CUDB provides methods for changing LDAP application schemas by adding new object classes, or adding new attributes to existing object classes. It is also possible to define indexes on new attributes, regardless of whether they belong to a new or an existing object class. CUDB also provides methods to add new application schemas.

Note: LDAP schema update and index creation are restricted to Ericsson personnel. Contact the next level of maintenance support to perform such procedures.

14.4.1 Index Creation

New indexes in a local CUDB node can be created by setting the multivalued attribute ldapAttrIndexes of the CudbLdapAccess class in the configuration model. For more information, refer to the Class CudbLdapAccess section of CUDB Node Configuration Data Model Description.

The value of the ldapAttrIndexes attribute must list all attributes to be indexed, separated by commas.

Attention!

The ldapAttrIndexes attribute must be defined in the same order on every node.

Refer to Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).
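As an illustration, if three attributes were to be indexed, the ldapAttrIndexes value would take a comma-separated form such as the fragment below. The attribute names are hypothetical examples, not taken from the delivered model; refer to CUDB Node Configuration Data Model Description for the real class structure:

```
# Hypothetical configuration fragment; attribute names are examples only.
# The value lists every attribute to be indexed, separated by commas,
# and must be identical (same order) on every node.
ldapAttrIndexes="msisdn,imsi,impu"
```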

15 Startup and Shutdown

This section describes the startup and shutdown operations available in CUDB.

Note: System startup and shutdown must be performed by authorized Ericsson personnel only.

15.1 Initial System Startup

Note: Initial system startup must be performed by authorized Ericsson personnel only.

15.2 Node Startup after Graceful Shutdown

In general, starting a gracefully shut down CUDB node requires no special procedures other than powering it on, and handling the alarms that may be raised (and remain raised) after the node is back online.

15.2.1 Node Startup after a Graceful Shutdown for CUDB Systems Deployed on Native BSP 8100

The standard procedure for starting up a node for CUDB systems deployed on native BSP 8100 is as follows:

Note: If the node has more than one subrack, the subrack with SCs and payload blades must be powered first.

Steps

  1. Log in to the BSP 8100 CLI.
  2. Use the CLI console to check which shelf and slot numbers are associated with the SCs and payload blades. Execute the following commands to do so:
    > configure
    (config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=cudb
    (config-VirtualEquipment=cudb)> show-table -m Blade -p bladeId,userLabel,administrativeState
    The expected output is similar to the following example:
    =============================================
    | bladeId | userLabel | administrativeState |
    =============================================
    | 0-1     | SC-1      | LOCKED              |
    | 0-11    | PL-6      | LOCKED              |
    | 0-13    | PL-7      | LOCKED              |
    | 0-15    | PL-8      | LOCKED              |
    | 0-17    | PL-9      | LOCKED              |
    | 0-19    | PL-10     | LOCKED              |
    | 0-21    | PL-11     | LOCKED              |
    | 0-23    | PL-12     | LOCKED              |
    | 0-3     | SC-2      | LOCKED              |
    | 0-5     | PL-3      | LOCKED              |
    | 0-7     | PL-4      | LOCKED              |
    | 0-9     | PL-5      | LOCKED              |
    =============================================
    
  3. Use the CLI console to power on the SCs and payload blades fetched in the previous step. After the SCs are unlocked, continue with the payload blades:
    (config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=cudb
    (config-VirtualEquipment=cudb)> Blade=0-1,administrativeState=UNLOCKED
    (config-VirtualEquipment=cudb)> commit -s
    (config-VirtualEquipment=cudb)> Blade=0-3,administrativeState=UNLOCKED
    (config-VirtualEquipment=cudb)> commit -s
    ...
    (config-VirtualEquipment=cudb)> end
    (VirtualEquipment=cudb)> exit
  4. Exit the CLI console.
  5. Wait until the SCs are completely powered up.
  6. After a few minutes, it is possible to log in to the SCs of the CUDB and verify that all blades in the CUDB are starting up. To do so, execute the following command:
    tipc-config -n
    Note: Blades that are up and running are indicated with up (see the example below), so check that every line contains up. If some blades are still down, wait approximately 10 minutes until all of them are initialized. Blades that are not running can also be missing from the list entirely, so verify both that all blades are on the list and that all of them are up.
    CUDB_79 SC_2_2# tipc-config -n
    Neighbors:
    <1.1.1>: up
    <1.1.3>: up
    <1.1.4>: up
    <1.1.5>: up
    <1.1.6>: up
    <1.1.7>: down
    <1.1.8>: up
    <1.1.9>: up
    <1.1.10>: up
    <1.1.11>: up
    <1.1.12>: up
    <1.1.13>: up
    <1.1.14>: up
    

    Contact the next level of maintenance support if one or more blades fail to initialize in 10 minutes.

  7. Start the database cluster processes with the following command:
    sudo cudbManageStore -a -o start
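The blade check in Step 6 can be scripted. The sketch below is illustrative only: it runs against a pasted sample instead of live tipc-config -n output, and it only flags lines that are not up (blades missing from the list must still be checked against the expected blade set, as the note above explains):

```shell
# Sketch: flag any neighbor lines that are not "up".
# Sample lines are copied from the example above; on a node use:
#   tipc-config -n | grep '^<'
sample='<1.1.1>: up
<1.1.6>: up
<1.1.7>: down
<1.1.8>: up'

# Keep only the lines that do not end in ": up".
not_up=$(printf '%s\n' "$sample" | grep -v ': up$')
if [ -n "$not_up" ]; then
    echo "Not ready: $not_up"
else
    echo "All listed blades are up"
fi
```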

15.2.2 Node Startup after Graceful Shutdown for CUDB Systems Deployed on a Cloud Infrastructure

In case of using Cloud Execution Environment (CEE), starting a gracefully shut down CUDB node deployed on a cloud infrastructure can be done through the Atlas GUI or the OpenStack command-line clients. These procedures are described in Node Startup after Graceful Shutdown Using the Atlas GUI and Node Startup after Graceful Shutdown Using OpenStack Command-Line Tools, respectively. For more information about the Atlas GUI, refer to the "Atlas Dashboard End User Guide" document and for more information about the OpenStack command-line tools, refer to the "OpenStack End User Guide" document in the CEE CPI.

In case of using a different cloud solution, refer to the solution-specific documentation for more information.

15.2.2.1 Node Startup after Graceful Shutdown Using the Atlas GUI

Steps

To start a gracefully shut down CUDB node through the Atlas GUI, perform the following steps:

  1. Log in to the Atlas Dashboard.
  2. Select the appropriate project in the Current Project field and select Project in the View field.
  3. Click the Instances category.
  4. Mark both SCs of the CUDB and choose the Start Instances action located in the top bar, under the More Actions drop-down menu.
  5. Verify that the status of both SCs is Active and the power state is Running.
  6. After a few minutes, verify that both SCs are up and running. To do so, log in to the SCs and execute the following commands:
    1. tipc-config -n
      Note: VMs that are up and running are indicated with up (see example below). If one of the SCs is not running (indicated with down), wait a few minutes until it is up. VMs which are not running can be missing from the list, so check that both VMs are on the list and both of them are up.
      CUDB_79 SC_2_1# tipc-config -n
      Neighbors:
      <1.1.2>: up
    2. cudbHaState
      Note: If both SCs are fully initialized, only payload VMs (if any) are listed as having problems under the "SU states" section of the command output. If one of the SCs is listed in the "SU states" section, that SC is not fully initialized. If this happens, wait a few minutes, then execute the command again and verify that the SC is no longer listed.
  7. Mark all payload VMs of the CUDB and choose the Start Instances action located in the top bar, under the More Actions drop-down menu.
  8. Verify that the status of all payload VMs is Active and the power state is Running.
  9. After a few minutes, verify that all VMs are up and running. To do so, log in to the SCs and execute the following commands:
    1. tipc-config -n
      Note: VMs that are up and running are indicated with up (see example below). Therefore, check that all lines contain up. If not all VMs are running (indicated with down), wait approximately 15 minutes until all of them are initialized. VMs which are not running can be missing from the list, so check that all VMs are on the list and all of them are up.

      CUDB_79 SC_2_2# tipc-config -n

      Neighbors:
      <1.1.1>: up
      <1.1.3>: up
      <1.1.4>: up
      <1.1.5>: up
      <1.1.6>: up
      <1.1.7>: down
      <1.1.8>: up
      <1.1.9>: up
      <1.1.10>: up
      <1.1.11>: up
      <1.1.12>: up
      <1.1.13>: up
      <1.1.14>: up
      
    2. cudbHaState
      Note: If all VMs are fully initialized, the message "Status OK" is shown under the "SU states" section of the command output. If some VMs are listed in this section, they are not fully initialized. If this happens, wait a few minutes, then execute the command again and verify that the VMs are no longer listed.

      Contact the next level of maintenance support if one or more VMs fail to initialize in 15 minutes.

  10. Start the database cluster processes with the following command:
    sudo cudbManageStore -a -o start

15.2.2.2 Node Startup after Graceful Shutdown Using OpenStack Command-Line Tools

Steps

To start a gracefully shut down CUDB node through the OpenStack command-line, execute the following commands from the Cloud Infrastructure Controller (CIC):

  1. Execute the following command to start the SCs of the CUDB:
    nova start <SC_id>
    Note: Execute the command for both SCs.
  2. Verify that the status of both SCs is Active and the power state is "1" (Running) with the following command:
    nova show <SC_id>
    Note: Execute the command for both SCs.
  3. After a few minutes, verify that both SCs are up. To do so, log in to the SCs and execute the following commands:
    1. tipc-config -n
      Note: VMs that are up and running are indicated with up (see example below). If one of the SCs is not running (indicated with down), wait a few minutes until it is up. VMs which are not running can be missing from the list, so check that both VMs are on the list.
      CUDB_79 SC_2_1# tipc-config -n
      Neighbors:
      <1.1.2>: up
      
    2. cudbHaState
      Note: If both SCs are fully initialized, only payload VMs (if any) are listed as having problems under the "SU states" section of the command output. If one of the SCs is listed in the "SU states" section, that SC is not fully initialized. If this happens, wait a few minutes, then execute the command again and verify that the SC is no longer listed.
  4. Execute the following command to start payload VMs:
    nova start <payload_vm_id>
    Note: Execute the command for all payload VMs.
  5. Verify that the status of payload VMs is Active and the power state is "1" (Running) with the following command:
    nova show <payload_vm_id>
    Note: Execute the command for all payload VMs.
  6. After a few minutes, verify that all VMs are up and running. To do so, log in to the SCs and execute the following commands:
    1. tipc-config -n
      Note: VMs that are up and running are indicated with up (see example below). Therefore, check that all lines contain up. If not all VMs are running (indicated with down), wait approximately 15 minutes until all of them are initialized. VMs which are not running can be missing from the list, so check that all VMs are on the list and all of them are up.

      CUDB_79 SC_2_2# tipc-config -n

      Neighbors:
      <1.1.1>: up
      <1.1.3>: up
      <1.1.4>: up
      <1.1.5>: up
      <1.1.6>: up
      <1.1.7>: down
      <1.1.8>: up
      <1.1.9>: up
      <1.1.10>: up
      <1.1.11>: up
      <1.1.12>: up
      <1.1.13>: up
      <1.1.14>: up
      
    2. cudbHaState
      Note: If all VMs are fully initialized, the message "Status OK" is shown under the "SU states" section of the command output. If some VMs are listed in this section, they are not fully initialized. If this happens, wait a few minutes, then execute the command again and verify that the VMs are no longer listed.

      Contact the next level of maintenance support if one or more VMs fail to initialize in 15 minutes.

  7. Start the database cluster processes with the following command:
    sudo cudbManageStore -a -o start

15.3 Node Startup after a Non-Graceful Shutdown

In general, starting a CUDB node after shutting it down requires no special procedures other than powering it on, and handling the alarms that may be raised (and remain raised) after the node is back online. This also applies if the node is restarted after a power failure.

15.3.1 Node Startup after a Non-Graceful Shutdown for CUDB Systems Deployed on Native BSP 8100

Compared to a startup after a graceful shutdown, two additional steps must be performed if CUDB nodes deployed on native BSP 8100 are started up after a non-graceful shutdown: due to BSP 8100 characteristics, the blades must be locked before startup, then unlocked when the node is running again.

Refer to the "BSP Equipment Management" document in the BSP 8100 CPI for information on locking blades. See Node Startup after a Graceful Shutdown for CUDB Systems Deployed on Native BSP 8100 for more information on the startup after a graceful shutdown procedure.

Note: If the node has more than one subrack, the subrack with SCs and payload blades must be powered first.

15.4 Full CUDB System Startup

In certain cases (such as large-scale power outages) it can happen that the entire CUDB system must be started up.

Note: Full CUDB system startups must be performed by Ericsson personnel only. Contact the next level of maintenance support to perform this procedure.

15.5 Node Graceful Shutdown

Node graceful shutdown makes it possible to turn off a CUDB node manually.

Note: Before performing a graceful shutdown, verify that the CUDB node to turn off does not have any Data Store Unit Group (DSG) or PLDB masters (see Listing the Master Replicas for more information). If the node contains a DSG or PLDB master, the mastership must be manually moved to another CUDB node (see Changing DSG or PLDB Mastership Manually for more information).

15.5.1 Node Graceful Shutdown for CUDB Systems Deployed on Native BSP 8100

Note: Depending on the number of sites and nodes in the system, shutting down a CUDB node deployed on native BSP 8100 can result in a symmetrical split situation. For example, if a CUDB node is shut down on a 2-node, 2-site deployment, the remaining node finds itself in a symmetrical split situation.

To perform a graceful shutdown, complete the following steps:

Steps

  1. Establish a CUDB CLI session towards the target CUDB node with the following command:
    ssh <admin_user>@<CUDB_Node_OAM_VIP_Address>
  2. Perform a software and configuration backup as described in CUDB Backup and Restore Procedures.
  3. Stop database cluster processes with the following command:
    sudo cudbManageStore -a -o stop
  4. Exit the CUDB CLI session.
  5. Establish a root CUDB CLI session towards the target CUDB node with the following command:
    ssh root@<CUDB_Node_OAM_VIP_Address>
  6. Stop the AMF cluster services in the connected SC with the following command:
    service opensafd stop
    After the AMF cluster services in the SC are stopped, connect to the other SC (indicated as SC_2_<x> below, where <x> is the number of the other SC), and stop the AMF cluster services as follows:
    ssh SC_2_<x>
    service opensafd stop
    Exit the connection to the SC established in this step.
  7. Exit the root CUDB CLI session to the SC established in Step 5.
  8. Log in to the BSP 8100 CLI.
  9. Use the CLI console to check which shelf and slot numbers are associated with the SCs and payload blades. Execute the following commands to do so:
    > configure
    (config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=cudb
    (config-VirtualEquipment=cudb)> show-table -m Blade -p bladeId,userLabel,administrativeState
    The expected output is similar to the following example:
    =============================================
    | bladeId | userLabel | administrativeState |
    =============================================
    | 0-1     | SC-1      | UNLOCKED            |
    | 0-11    | PL-6      | UNLOCKED            |
    | 0-13    | PL-7      | UNLOCKED            |
    | 0-15    | PL-8      | UNLOCKED            |
    | 0-17    | PL-9      | UNLOCKED            |
    | 0-19    | PL-10     | UNLOCKED            |
    | 0-21    | PL-11     | UNLOCKED            |
    | 0-23    | PL-12     | UNLOCKED            |
    | 0-3     | SC-2      | UNLOCKED            |
    | 0-5     | PL-3      | UNLOCKED            |
    | 0-7     | PL-4      | UNLOCKED            |
    | 0-9     | PL-5      | UNLOCKED            |
    =============================================
    
  10. Use the CLI console to power off the payload blades and SCs fetched in the previous step. After the payload blades are switched off, continue with the SCs:
    (config)> ManagedElement=1,DmxcFunction=1,Eqm=1,VirtualEquipment=cudb
    (config-VirtualEquipment=cudb)> Blade=0-23,administrativeState=LOCKED
    (config-VirtualEquipment=cudb)> commit -s
    (config-VirtualEquipment=cudb)> Blade=0-21,administrativeState=LOCKED
    (config-VirtualEquipment=cudb)> commit -s
    ...
    (config-VirtualEquipment=cudb)> Blade=0-3,administrativeState=LOCKED
    (config-VirtualEquipment=cudb)> commit -s
    (config-VirtualEquipment=cudb)> Blade=0-1,administrativeState=LOCKED
    (config-VirtualEquipment=cudb)> commit -s
    (config-VirtualEquipment=cudb)> end
    (VirtualEquipment=cudb)> exit
  11. Exit the CLI console.

After This Task

Note: The CUDB CLI session cannot be established once the blades are stopped.

After performing a graceful node shutdown, performing a health check is recommended on the other nodes of the system. For a detailed description on health check, refer to CUDB Health Check.

15.5.2 Node Graceful Shutdown for CUDB Systems Deployed on a Cloud Infrastructure

In case of using Cloud Execution Environment (CEE), the graceful shutdown of a CUDB node deployed on a cloud infrastructure can be done through the Atlas GUI or the OpenStack command-line tools. These procedures are described in Node Graceful Shutdown Using the Atlas GUI and Node Graceful Shutdown Using OpenStack Command-Line Tools, respectively. For more information about the Atlas GUI, refer to the "Atlas Dashboard End User Guide" document and for more information about the OpenStack command-line tools, refer to the "OpenStack End User Guide" document in the CEE CPI.

If a different cloud solution is used, refer to the solution-specific documentation for more information.

After performing a graceful node shutdown, it is recommended to perform a health check on the other nodes of the system. For a detailed description of the health check, refer to CUDB Health Check.

15.5.2.1 Node Graceful Shutdown Using the Atlas GUI

Steps

To gracefully shut down a node through the Atlas GUI, perform the following steps:

  1. Stop the database cluster processes with the following command:
    sudo cudbManageStore -a -o stop
  2. Log in to the Atlas Dashboard.
  3. Select the appropriate project in the Current Project field and select Project in the View field.
  4. Click the Instances category.
  5. Mark all payload VMs of the CUDB and choose the Shut Off Instances action located in the top bar, under the More Actions drop-down menu.
  6. Verify that the status of all payload VMs is Shutoff and the power state is Shut Down.
  7. Mark both SCs of the CUDB and choose the Shut Off Instances action located in the top bar, under the More Actions drop-down menu.

15.5.2.2 Node Graceful Shutdown Using OpenStack Command-Line Tools

Steps

To gracefully shut down a CUDB node through the OpenStack command-line, execute the following commands from CIC:

  1. Stop the database cluster processes with the following command:
    sudo cudbManageStore -a -o stop
  2. Execute the following command to stop payload VMs:
    nova stop <payload_vm_id>
    Note: Execute the command for all payload VMs.
  3. Execute the following command to stop the SCs of the CUDB:
    nova stop <SC_id>
    Note: Execute the command for both SCs.
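The two stop steps above can be sketched as a short shell loop. The VM IDs below are placeholders (on a real system, obtain them with nova list), and the echo prefix only previews the commands instead of executing them:

```shell
# Placeholder VM IDs; on a real system obtain them with 'nova list'.
PAYLOAD_VMS="pl-vm-1 pl-vm-2 pl-vm-3"
SC_VMS="sc-vm-1 sc-vm-2"

# Stop all payload VMs first, then both SCs (drop 'echo' to run for real).
for vm in $PAYLOAD_VMS; do
  echo nova stop "$vm"
done
for vm in $SC_VMS; do
  echo nova stop "$vm"
done
```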

16 Regular Maintenance Procedures

There are maintenance procedures that need to be performed regularly to ensure the correct operation of CUDB.

These procedures also allow system administrators to detect incidental problems. Refer to CUDB Node Preventive Maintenance for more information on how to detect common problems.

Note: Maintenance of infrastructure equipment is outside the scope of this document.

17 System Level CUDB Procedures

This section describes system-level procedures (such as maintenance operations and scalability tasks) that must be regularly performed in the CUDB system to ensure uninterrupted operation.

Note: Always contact Ericsson support if any problem occurs while performing the following operations.
Stop!

Do not perform operations (such as adding, removing, or including a CUDB node again in the system) that change the number of available CUDB nodes in the system if the Control, Potential Split Brain Detected alarm is raised. Refer to CUDB High Availability for further details on symmetrical split.

17.1 Configuring Network Routes Towards Notification Endpoints

After configuring CUDB notifications, network connectivity towards the notification endpoints must be configured in the system. Refer to CUDB Notifications for more information on how to configure CUDB notifications.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to configure network routes towards notification endpoints.

Steps

  1. Contact the next level of maintenance support if not Ericsson personnel.

17.2 Disabling a CUDB Node in a CUDB System

Attention!

Authenticated access to certain CUDB modules may be needed. Contact Ericsson support to obtain access.

Note: When disabling a CUDB node that holds 3 BC servers in a two-node site, a split situation will occur in the CUDB system. In this case, it is recommended to manually move all masters from the CUDB site where the CUDB node will be disabled to another CUDB site before disabling the node. For more information on how to manually manage mastership, see Changing DSG or PLDB Mastership Manually. In case of a two-site deployment, the system will find itself in a symmetrical split situation.

17.2.1 Disabling a CUDB Node Using Manual Procedure

This section describes how to manually disable a CUDB node in a CUDB system.

The procedure is as follows:

Steps

  1. Verify that the node to disable (that is, the target node) does not have any DSG or PLDB masters (see Listing the Master Replicas).
    If the node to disable contains DSG or PLDB masters, the mastership must be manually moved to another CUDB node (see Changing DSG or PLDB Mastership Manually).
  2. Deactivate the CUDB node locally by modifying the CudbLocalNode class in the target node. See Deactivating a Local CUDB Node for more information.
  3. Deactivate the CUDB node remotely by disabling it in the configuration of the rest of CUDB nodes in the system. See Deactivating a Remote CUDB Node for more information.
  4. Disable the PROVISIONING_VIP, SITE_VIP, and FE_VIP addresses in the local CUDB node. See Disabling PROVISIONING_VIP, FE_VIP, and SITE_VIP Addresses for more information on this procedure.

17.3 Enabling a Previously Disabled CUDB Node in a CUDB System

If a CUDB node is disabled in a CUDB system, it can be enabled again if the following conditions are met:

  • The configuration of the CUDB node to be enabled again (in short, the "target CUDB node") must contain all information related to the rest of the CUDB nodes present in the system.

  • The configuration of the rest of the CUDB nodes in the system must contain all the information on the target CUDB node and its PLDB/DSG units.

17.3.1 Enabling a CUDB Node Using Manual Procedure

This section describes how to manually enable a CUDB node in a CUDB system.

Perform the following steps to enable a previously disabled CUDB node in a CUDB system:

Steps

  1. Enable the PROVISIONING_VIP, SITE_VIP, and FE_VIP addresses in the local CUDB node. See Enabling PROVISIONING_VIP, FE_VIP, and SITE_VIP Addresses for more information on this procedure.
  2. Activate the target node locally. See Activating a Local CUDB Node for more information.
  3. Activate the target node in the rest of nodes in the CUDB system. See Activating a Remote CUDB Node for more information.

17.4 Creating a New DSG

If more storage capacity is needed in a CUDB system, new DSGs must be added.

Note: A drop in LDAP Front End (FE) node and server counters may be observed until the procedure has been fully completed. This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support if new DSG(s) must be created in the CUDB system.
Note: Perform software and configuration backup and system data backup before creating new DSG(s). Follow the procedures to run backups as specified in CUDB Backup and Restore Procedures.

Steps

  1. Contact the next level of maintenance support if not Ericsson personnel.

17.5 Activating a DSG

If a DSG was created recently, or it was deactivated earlier, it must be activated before using it.

If a CUDB node is added to a CUDB system, the situation can look as follows from the perspective of the DSGs:

  • A deactivated DSG is added again to the system.

  • A new DSG is added to the system.

Both scenarios are covered in the following sections.

17.5.1 Activating a Previously Deactivated DSG in the CUDB System

The manual activation procedure for a previously deactivated DSG in a CUDB system is as follows:

Steps

  1. Activate the Data Store (DS) Unit which was the DSG master before deactivation. See Activating a DS Unit in a Local CUDB Node for more information.
  2. Activate the rest of the DS Units in the DSG by following the procedure in Activating a DS Unit in a Local CUDB Node.

17.5.2 Activating a New DSG in the CUDB System

Do!

If a new DSG is added to the configuration, always create a new system data backup, because previous backups are not valid anymore. To create a new system data backup, follow the steps described in the Performing System Data Backup section of the CUDB Backup and Restore Procedures.

17.5.2.1 Activation Procedure for a New DSG

CUDB supports the activation of new DSGs in the system.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support if new DSG(s) must be activated in the CUDB system.

Steps

  1. Contact the next level of maintenance support if not Ericsson personnel.

17.6 Managing DSG for Provisioning

A specific DSG can be disabled or enabled for the provisioning of Distribution Entries (DEs). Once provisioning is disabled for a DSG, new entries cannot be created through subscription creation or subscription reallocation.

To disable a specific DSG for provisioning, execute the command cudbDsgProvisioningManage in one node of the CUDB system:

cudbDsgProvisioningManage -d <DSGId>

For further information on this command, refer to the Managing DSG for Provisioning section of CUDB Node Commands and Parameters. For further information on subscription reallocation, see Configuring Subscription Reallocation.

17.7 Listing the Master Replicas

Execute the following command to list all master PLDB and DSG replicas:

cudbSystemStatus -R

For further information on this command, refer to CUDB Node Commands and Parameters.

17.8 Setting the Default Geographical Zone Globally

CUDB supports the global configuration of the default geographical zone.

Perform the procedure described in Setting the Default Geographical Zone on every CUDB node to set the default geographical zone. Refer to CUDB Multiple Geographical Areas for more information on geographical zones.

17.9 Setting Custom Distribution Policy for Distribution Entries Globally

CUDB supports the global configuration of custom distribution policies for Distribution Entries (DEs).

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to configure custom distribution policies for DEs globally.

17.10 Checking the Custom Distribution Policy of DEs Globally

CUDB supports the global check of the custom distribution policies configured for DEs.

Perform the procedure described in Checking The Custom Distribution Policy of DEs on every CUDB node to check the custom distribution policy of DEs.

17.11 Restoring the Default Distribution Policy of DEs Globally

CUDB supports the global restore of the custom distribution policies configured for DEs.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to restore custom distribution policies for DEs globally.

17.12 Activating and Deactivating Notifications Globally

CUDB supports the global activation and deactivation of notifications.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to activate or deactivate notifications globally.

17.13 Deactivating Specific Notifications Globally

CUDB supports the global deactivation of specific notifications.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to deactivate specific notifications globally.

17.14 Adding Space to a BLOB in the Disk Storage System Globally

CUDB supports the global increase of space for a Binary Large Object (BLOB) in the disk storage system.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to add space to a BLOB in the disk storage system globally.

17.15 Configuring Subscription Reallocation

Subscription Reallocation (also known as "reallocation") makes it possible to move stored data from one DSG to another.

Reallocation can be executed by moving away a specified percentage of DS entries from source DSG or by providing a list of entries to be moved to the specified destination DSG. Refer to CUDB Subscription Reallocation for more information about reallocation and to CUDB Node Commands and Parameters for more information about the options of the cudbReallocate command.

Performing a backup in the CUDB system is recommended before running the reallocation. Refer to CUDB Backup and Restore Procedures for more information.

Do!

Always execute defragmentation on the source DSG after any kind of reallocation has been applied. See Running Defragmentation for more information.

17.15.1 Shortening the Reallocation Process

To shorten the reallocation duration, it is possible to run one cudbReallocate process per SC blade. If the command is executed with the source, destination, and percentage syntax, the source DSG must be different for each process. It is advised to move the source and destination DSG masters to the site where the PLDB master is, as reallocation completes faster when it moves subscriptions within the same site.
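As a purely illustrative sketch, one reallocation process per SC blade with a distinct source DSG could be laid out as below. The option names (--source, --destination, --percentage) and DSG identifiers are placeholders, not the real cudbReallocate syntax; refer to CUDB Node Commands and Parameters for the actual options:

```shell
# Placeholder syntax: one cudbReallocate process per SC blade, each with a
# different source DSG; 'echo' previews the commands without executing them.
echo "SC-1: cudbReallocate --source 1 --destination 3 --percentage 20"
echo "SC-2: cudbReallocate --source 2 --destination 3 --percentage 20"
```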

17.16 Requesting Reconciliation Manually

Perform the following procedure to request a reconciliation task manually:

Steps

  1. Find the master DS replica of the affected DSG. See Listing the Master Replicas for more information.
  2. Perform the procedure described in Subscribing a DSG Master Replica to Reconciliation Manually on the CUDB node hosting the master replica to subscribe the DS master replica to reconciliation.

17.17 Running Defragmentation

Defragmentation reorganizes how data is stored in memory in the PLDB or DS units of the target DSG, reducing the number of memory gaps.

Attention!

Run defragmentation only on slave replicas. For a master replica, perform a mastership change first.

It is recommended to execute defragmentation during low traffic hours, taking into account that the process may take up to a few hours to finish.

Do not run backup and defragmentation at the same time.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to perform defragmentation.

17.18 Configuring Provisioning Gateway Nodes

CUDB is able to send notifications to the Ericsson Provisioning Gateway (PG) nodes for provisioning to be stopped during procedures such as system data backup or software upgrade.

For more information on stopping provisioning during backup, refer to CUDB Backup and Restore Procedures.

To send PG notifications, the CUDB nodes must be updated with the latest PG configuration whenever the set of PGs connected to the CUDB system changes. Such changes include adding or removing PG nodes, or changing the IP address of an existing PG node.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to configure PG nodes.

17.19 CUDB LDAP User Management

If a new CUDB LDAP user or group is added, an existing one is deleted, or the CUDB LDAP user attributes are changed, the LDAP configuration must be updated accordingly. Perform the following procedure to do so:

Steps

  1. Choose a PLDB Master node where the LDAP user data change is added to the configuration model.
  2. Perform one (or more) of the following procedures, depending on the type of LDAP change:
  3. The above changes in LDAP user management are automatically propagated to the rest of the CUDB nodes through replication. Once the changes have been replicated, choose another CUDB node and follow the procedure described in Updating CUDB LDAP User Information in a CUDB Node.
  4. Repeat Step 3 on every CUDB node apart from the one selected in Step 1.

Results

Refer to CUDB LDAP Interwork Description for more information on managing LDAP user attributes.

Note: It is not mandatory to assign CUDB LDAP users to a CUDB LDAP group.

17.20 Changing DSG or PLDB Mastership Manually

This section describes how to move the mastership of a DSG or PLDB to a preferred node as part of a planned master change.

Note: The cudbSystemStatus command is used to locate the master replicas.

To perform the mastership change, log in to the target node (where the master replica is to be hosted) and execute the following command:

cudbDsgMastershipChange

Refer to CUDB Node Commands and Parameters for more information on the command.

17.21 Updating LDAP Schema Globally

CUDB supports the global update of the LDAP schema.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to update the LDAP schema globally.

17.22 Activating DS Units

The process of activating a DS Unit depends on whether the unit plays a local or remote role in a CUDB node.

17.23 Deactivating DS Units

The process of deactivating a DS Unit depends on whether the unit plays a local or remote role in a CUDB node.

17.24 Configuring Automatic Mastership Change

CUDB can automatically move the mastership to the highest-priority DSG or PLDB replica when the master role is currently held by a different replica. For more information, refer to CUDB High Availability.

To configure Automatic Mastership Change (AMC), modify the CudbAutomaticMasterChange class (refer to the Class CudbAutomaticMasterChange section of CUDB Node Configuration Data Model Description) in the configuration model as follows:

  • To enable or disable AMC, set the enabled attribute to the corresponding value of true or false.

  • To change the maximum replication delay, modify the maxReplicationTimeDelay attribute. The value of the attribute is the number of milliseconds. Refer to the Class CudbAutomaticMasterChange section of CUDB Node Configuration Data Model Description for the default value.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).
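As a hypothetical illustration only (the actual object-model syntax and the default value of maxReplicationTimeDelay are defined in CUDB Node Configuration Data Model Description; the value shown is a placeholder), an AMC configuration combines the two attributes as follows:

```
CudbAutomaticMasterChange:
    enabled                 = true    # AMC switched on
    maxReplicationTimeDelay = 1000    # placeholder value, in milliseconds
```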

17.25 LDAP Data Views Management

The LDAP Data Views function supports accessing stored data through customizable views. The function is based on assigning views to LDAP users. If the LDAP user sending a request has an LDAP view assigned by configuration, then that user can access the data through that view.

Refer to CUDB LDAP Data Views Management for detailed information on all the steps required to create, configure, and delete an LDAP view.

Note: The LDAP Data Views function can only be used if the Application Facilitator Value Package is available.

17.26 OAM Centralized Authentication System Support

CUDB allows defining and managing new OAM users and groups in remote LDAP servers only, in a centralized way, instead of creating them locally on all CUDB nodes.

Refer to CUDB Security and Privacy Management for more information on how to create and authenticate remote OAM users.

17.27 Optimized Subtree Searches

The Optimized Subtree Searches function enables accessing data in subtree searches in a more efficient way. The function is based on assigning subtree search configurations to LDAP users.

Refer to CUDB Optimized Subtree Searches for detailed information on all the steps required to create, configure, and delete an optimized subtree search.

18 Node Level CUDB Procedures

This section describes node-level procedures that may need to be performed in the CUDB system.

Note: Always contact Ericsson support if any problem occurs while performing the following operations.

18.1 Activating a Local CUDB Node

When a CUDB node is installed, it must be activated before use.

To activate a CUDB node locally, set the value of the enabled attribute of the corresponding instance of the CudbLocalNode class to true. For more information, refer to the Class CudbLocalNode section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

18.2 Activating a Remote CUDB Node

Before activating a previously deactivated CUDB node locally, it must be activated remotely in the rest of the CUDB nodes of the system.

To activate a CUDB node remotely, set the value of the enabled attribute of the corresponding instance of the CudbRemoteNode class to true. For more information, refer to the Class CudbRemoteNode section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

18.3 Modifying a CUDB Node Name

The name of a CUDB node can be modified after installation.

To modify the name of a CUDB node, change the value of the networkElementName attribute of the corresponding instance of the CudbLocalNode class. For more information, refer to the Class CudbLocalNode section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

18.4 Modifying the DSG or PLDB Cluster State

The procedure to modify the state of a DSG or PLDB cluster is the following:

Steps

  1. Establish a CUDB CLI session towards the target CUDB node with the following command:
    ssh <admin_user>@<CUDB_Node_OAM_VIP_Address>
  2. Execute the cudbManageStore command on the target cluster to modify its state to the new state. In case of a DSG cluster, use the command as follows:
    sudo cudbManageStore --ds <dsId> --order <New_state>
    In case of a PLDB cluster, use the command as follows:
    sudo cudbManageStore --pl --order <New_state>
    Refer to CUDB Node Commands and Parameters for detailed information on cluster states.
  3. Exit the CUDB CLI session with the following command:
    exit

Results

Note: Changing the state of a cluster in a way that renders the cluster offline (such as changing to maintenance mode, or restoring an earlier backup) can have serious consequences.

If a DSG cluster containing a master replica goes offline, a new DSG master is selected. However, if the PLDB cluster goes offline, the corresponding CUDB node also goes offline.
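A concrete invocation of the commands in Step 2 could look as follows. The DSG identifier and the state name are placeholders (the valid cluster states are listed in CUDB Node Commands and Parameters), and the echo prefix only previews the commands rather than running them on a live node:

```shell
# Placeholder DSG ID and state name; see CUDB Node Commands and Parameters
# for the valid cluster states. Drop 'echo' to execute for real.
DS_ID=1
NEW_STATE=maintenance
echo sudo cudbManageStore --ds "$DS_ID" --order "$NEW_STATE"
echo sudo cudbManageStore --pl --order "$NEW_STATE"
```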

18.5 Checking Software Inventory on a CUDB Node

CUDB provides several ways to check what software is installed on a CUDB node. These methods are described in the next sections.

18.5.1 Checking Imported Software on a CUDB Node

CUDB provides a mechanism to check which packages have been imported to a CUDB node. To check the Software Inventory of a CUDB node, perform the following steps:

Steps

  1. Establish a CUDB CLI session towards the target CUDB node with the following command:
    ssh <admin_user>@<CUDB_Node_OAM_VIP_Address>
  2. Check the software packages installed in the CUDB node with the following command:
    cmw-repository-list
    Note: The above procedure lists packages either as "Used" or "Not used". Packages listed as "Used" are installed packages, while packages listed as "Not used" are packages that have been imported, but not yet installed.
  3. Exit the CUDB CLI session with the following command:
    exit

18.5.2 Checking Installed Software on Each Blade or VM of a CUDB Node

CUDB provides a mechanism to check what packages are installed on each blade or VM of a CUDB node. To check the Software Inventory in a specific blade or VM of a CUDB node, perform the following steps:

Steps

  1. Establish a CUDB CLI session towards the CUDB node with the following command:
    ssh <admin_user>@<CUDB_Node_OAM_VIP_Address>
  2. Check the software packages installed in the CUDB node with the following command:
    sudo cmw-rpm-list <CUDB_node_blade_or_vm_host_name>
    In the above command, the <CUDB_node_blade_or_vm_host_name> variable is used to indicate the blade or VM host name to be checked. If left empty, all blades or VMs are listed. The host name can be the following:
    • SC_2_1 : First SC.

    • SC_2_2 : Second SC.

    • PL_2_<n> : Payload blades or VMs. The variable <n> stands for the payload blade or VM number that must be set from the highest to the lowest value, according to the configured blades or VMs. <n> can have a value of up to 40.

  3. Exit the CUDB CLI session with the following command:
    exit
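The per-host checks above can be combined into a single loop. The payload host numbers below are illustrative examples that follow the SC_2_n / PL_2_n naming convention described in Step 2, and the echo prefix previews the commands without executing them:

```shell
# Hypothetical sketch: preview 'cmw-rpm-list' for both SCs and for two
# example payload hosts, from the highest number to the lowest as described
# above. Drop 'echo' to run the commands for real.
for HOST in SC_2_1 SC_2_2 PL_2_4 PL_2_3; do
  echo sudo cmw-rpm-list "$HOST"
done
```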

18.6 Defragmenting a Database Cluster

The CUDB system supports the defragmentation of individual database clusters.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to defragment CUDB nodes.

Steps

  1. Contact the next level of maintenance support if not Ericsson personnel.

18.7 Subscribing a DSG Master Replica to Reconciliation Manually

Perform the following steps to subscribe a DSG master replica to reconciliation manually:

Steps

  1. Establish a new administrative CUDB CLI session towards the target CUDB node with the following command:
    ssh <admin_user>@<CUDB_Node_OAM_VIP_Address>
  2. Execute the following command:
    sudo cudbReconciliationMgr --add <dsg_id>
    In the above command, <dsg_id> is the identifier of the DSG to which the target master DSG replica belongs.
    Refer to CUDB Node Commands and Parameters for further information on this command.

18.8 Deactivating a Local CUDB Node

To deactivate a CUDB node locally, set the value of the enabled attribute of the corresponding instance of the CudbLocalNode class to false. For more information, refer to the Class CudbLocalNode section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

18.9 Storage Performance Monitoring Function

The CUDB system can detect storage system failures on SCs and payload blades or VMs by two mechanisms:

  • Monitoring I/O heavy processes (the time spent in D state).

  • Probing the file system.

CUDB supports activating and deactivating the Storage Performance Monitoring function, and configuring its parameters, by modifying the /home/cudb/monitoring/blade/config/cudbHwFaultReaction.json configuration file. Storage Performance Monitoring is enabled by default.

Note: This configuration file can be changed by Ericsson personnel only.

19 Auxiliary CUDB Procedures

The procedures described in this section are required to properly maintain a CUDB system, but are not supposed to be executed in a standalone manner. Instead, they are parts of higher-level, more complex procedures, such as those described in System Level CUDB Procedures and Node Level CUDB Procedures.

19.1 Activating a DS Unit in a Local CUDB Node

Before using a DS Unit in a DSG, it must be activated.

To activate a DS Unit in a local CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbLocalDs class to true. For more information, refer to the Class CudbLocalDs section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.2 Deactivating a DS Unit in a Local CUDB Node

DS Units in a CUDB node are usually deactivated for maintenance reasons.

To deactivate a DS Unit in a local CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbLocalDs class to false. For more information, refer to the Class CudbLocalDs section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

Note: Even if a DS Unit is deactivated, it continues to operate. Therefore, it can raise alarms even when deactivated.

19.3 Activating a DS Unit in a Remote CUDB Node

To activate a DS Unit in a remote CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbRemoteDs class to true. For more information, refer to the Class CudbRemoteDs section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.4 Deactivating a DS Unit in a Remote CUDB Node

To deactivate a remote DS Unit, set the value of the enabled attribute of the corresponding instance of the CudbRemoteDs class to false. For more information, refer to the Class CudbRemoteDs section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

Note: Even if a DS Unit is deactivated, it continues to operate. Therefore, it can raise alarms even when deactivated.

19.5 Deactivating a Remote CUDB Node

After a CUDB node is deactivated locally, it must also be deactivated in the rest of the CUDB nodes.

To deactivate a remote CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbRemoteNode class to false. For more information, refer to the Class CudbRemoteNode section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.6 Adding a DSG to a Local CUDB Node

CUDB supports adding new DSGs to a local CUDB node.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to add new DSGs to a local CUDB node.

The unique values of the attributes of the new DSG must be set according to the following guidelines:

19.7 Adding a DS Unit to a Local CUDB Node

When a new DSG is created or the geographical redundancy of the system is upgraded, the new DS Units must be added to the CUDB node configuration where the new DS Unit is hosted.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to add a DS Unit to a local CUDB node.

Steps

  1. Contact the next level of maintenance support if not Ericsson personnel.

19.8 Adding a Generic Blade or VM to a Local CUDB Node

CUDB supports adding new blades or VMs to local CUDB nodes.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to add new blades or VMs to CUDB nodes.

19.9 Configuring a DS Unit Added to a Local CUDB Node

After new blades or VMs have been added to a CUDB node, the DS Unit must be configured.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to configure the new blades or VMs added to the CUDB node.

19.10 Configuring a DS Unit Added to a Remote CUDB Node

When a new DS Unit is added to the CUDB system, it must be configured both on its local node and on the rest of the CUDB nodes, so that it is recognized in the entire system.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to configure the new blades or VMs added to the CUDB node.

19.11 Deactivating a Local PLDB Unit

To deactivate a local PLDB unit in a CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbLocalPl class to false. For more information, refer to the Class CudbLocalPl section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).
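Following the COM CLI pattern used in the examples later in this document, the change might look like the sketch below. The DN is hypothetical (node 100 and CudbLocalPl=1 are illustrative); take the exact path from CUDB Node Configuration Data Model Description.

```
>configure
(config)>ManagedElement=1,CudbSystem=1,CudbLocalNode=100,CudbLocalPl=1,enabled=false
(config)>commit
```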

19.12 Activating a Local PLDB Unit

To activate a PLDB unit in a CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbLocalPl class to true. For more information, refer to the Class CudbLocalPl section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.13 Deactivating a Remote PLDB Unit

To deactivate a PLDB unit belonging to another CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbRemotePl class to false. For more information, refer to the Class CudbRemotePl section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.14 Activating a Remote PLDB Unit

To activate a PLDB unit belonging to another CUDB node, set the value of the enabled attribute of the corresponding instance of the CudbRemotePl class to true. For more information, refer to the Class CudbRemotePl section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.15 Setting the Default Geographical Zone

To set the default geographical zone in a specific CUDB node, set the value of the defaultZone attribute of the corresponding instance of the CudbSystem class to the new default geographical zone. For more information, refer to the Class CudbSystem section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

Note: Setting the default zone is only possible if the Deployment Flexibility Value Package is available.
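As a COM CLI sketch (the zone value is a placeholder; the defaultZone attribute belongs to the CudbSystem class as stated above):

```
>configure
(config)>ManagedElement=1,CudbSystem=1,defaultZone=<zone>
(config)>commit
```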

19.16 Setting Custom Distribution Policy for DEs

CUDB supports the configuration of custom distribution policies for DEs in the CUDB nodes.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to set custom distribution policy for DEs.

Steps

  1. Contact the next level of maintenance support if not Ericsson personnel.

19.17 Checking the Custom Distribution Policy of DEs

To check whether a specific CUDB node is using the default distribution policy or a custom one, perform the following steps:

Steps

  1. Establish a CUDB CLI session with the following command towards the CUDB node where the distribution policy is to be checked:
    ssh <admin_user>@<CUDB_Node_OAM_VIP_Address>
  2. Check the status of the distribution policy with the following command:
    sudo cudbManageLibDataDist --status
    The output of the command clearly states if a custom distribution policy is being applied.
  3. Exit the CUDB CLI with the following command:
    exit

19.18 Restoring the Distribution Policy of DEs

CUDB supports restoring the default distribution policy of DEs.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to restore the distribution policy of DEs.

19.19 Increasing the Space of a BLOB in the Disk Storage System

CUDB supports increasing the space of a BLOB in the disk storage system.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to increase the space of a BLOB in the disk storage system.

Steps

  1. Contact the next level of maintenance support if not Ericsson personnel.

19.20 Enabling or Disabling Notifications for All Applications

CUDB supports enabling or disabling notifications for all applications.

To enable or disable notifications for all applications, set the enabled attribute of the CudbNotifications class to the corresponding value (true or false) in the configuration model. For more information, refer to the Class CudbNotifications section of CUDB Node Configuration Data Model Description.

Refer to CUDB Notifications for further details on the configuration of other parameters related to notifications.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).
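In COM CLI, this follows the same pattern as the CudbNotifications commands shown in the procedure for creating a new notification event; for example, to disable notifications globally:

```
>configure
(config)>ManagedElement=1,CudbSystem=1,CudbNotifications=1,enabled=false
(config)>commit
```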

19.21 Disabling Notifications towards a Specific Application End Point

CUDB can prevent the sending of notifications to a particular application Front End (FE) even if notifications are globally enabled (see Enabling or Disabling Notifications for All Applications).

To achieve this, delete all the corresponding instances of the CudbNotificationEndPoint class related to the specific application FE from the configuration model. For more information, refer to the Class CudbNotificationEndPoint section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).
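In COM CLI, instances are deleted with the no command, as in Example 7 later in this chapter. A sketch for removing a single endpoint (the instance numbers are hypothetical):

```
>configure
(config)>no ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=1,CudbNotificationEndPoint=1
(config)>commit
```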

19.22 Creating a CUDB LDAP User Group in a Local CUDB Node

To create a new CUDB LDAP user group in a local CUDB node, add a new instance of the CudbLdapUserGroup class to the configuration model. For more information, refer to the Class CudbLdapUserGroup section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.23 Adding a New CUDB LDAP User to a Local CUDB Node

To add a new CUDB LDAP user to a local CUDB node, add a new instance of the CudbLdapUser class to the configuration model and set the following mandatory attributes:

  • cudbUserPassword

  • cudbUserGroup

  • readModeInPL

  • readModeInDS

Additional optional attributes can also be configured for the LDAP user, each on a new line.

For more information on the CudbLdapUser class, refer to the Class CudbLdapUser section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

CUDB supports the following value combinations for the readModeInPL and readModeInDS LDAP user attributes:

  • readModeInPL=LP (Local Preferred) and readModeInDS=MP (Master Preferred).

  • readModeInPL=MA and readModeInDS=MA (both Master Always).

  • readModeInPL=LP (Local Preferred) and readModeInDS=LP (Local Preferred).

    Note: The readModeInDS=LP can be configured only if the Deployment Flexibility Value Package is available.
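As a COM CLI sketch (the instance identifier, group name, and password are hypothetical placeholders; take the exact DN of the CudbLdapUser instance from CUDB Node Configuration Data Model Description):

```
>configure
(config)>ManagedElement=1,CudbSystem=1,CudbLdapUser=1
(config-CudbLdapUser=1)>cudbUserPassword="<password>"
(config-CudbLdapUser=1)>cudbUserGroup="<group_name>"
(config-CudbLdapUser=1)>readModeInPL=LP
(config-CudbLdapUser=1)>readModeInDS=MP
(config-CudbLdapUser=1)>commit
```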

19.24 Configuring Attributes of a CUDB LDAP User

Once a CUDB LDAP user is created, the following attributes can be modified:

  • countersGroup

  • readModeInPL

  • readModeInDS

  • isProvisioningUser

  • isReProvisioningUser

  • userLdapAuth

  • userLdapHash

  • overloadRejectionWeight

  • cudbUserPassword

  • cudbLdapViewId

  • localReadsDsReplicationDelayThreshold

Changes for the userLdapAuth, userLdapHash, and cudbUserPassword attributes are described in CUDB Security and Privacy Management.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.25 Configuring a PG Node in a Local CUDB Node for Backup Notification

PG nodes are configured in CUDB nodes as the destination for backup notifications.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to configure PG nodes in CUDB nodes for backup notification.

Nodes are separated with "," and addresses for each node are separated with ";".
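For example, a hypothetical value listing two PG nodes with two addresses each would look as follows:

```
10.0.0.1;10.0.0.2,10.1.0.1;10.1.0.2
```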

19.26 Updating CUDB LDAP User Information in a CUDB Node

To update CUDB LDAP user information in a remote CUDB node, perform the following steps:

Steps

  1. Execute the administrative operation updateUserInfo. Refer to the updateUserInfo section of CUDB Node Configuration Data Model Description for more information on the updateUserInfo administrative operation.
  2. After executing updateUserInfo, check the CudbLdapUser class in the configuration model to see whether the change was applied correctly.
    For more information on how to perform this check, refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description.
    For more information on the CudbLdapUser class, refer to the Class CudbLdapUser section of CUDB Node Configuration Data Model Description.
  3. If the changes were not applied, then the CUDB node has not been informed of the update yet. In this case, execute updateUserInfo again.

19.27 Deleting a CUDB LDAP User in a Local CUDB Node

To delete a CUDB LDAP user from a local CUDB node, delete the appropriate instance of the CudbLdapUser class from the configuration model. For more information, refer to the Class CudbLdapUser section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.28 Deleting a CUDB LDAP User Group in a Local CUDB Node

To delete a CUDB LDAP user group from a local CUDB node, delete the appropriate instance of the CudbLdapUserGroup class from the configuration model. For more information, refer to the Class CudbLdapUserGroup section of CUDB Node Configuration Data Model Description.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.29 Removing Huge Files

The removal of huge files with the rm Linux command can cause instability in blades or VMs. In addition, if executed on the SC, it can block access to input/output resources for the processes running on it.

Therefore, to avoid stability and performance issues, it is strongly recommended to use the following command instead of rm to remove huge files:

ionice -c 3 ls -1 <files to remove> | sed 's/\(.*\)/sleep 1 \&\& ionice -c 3 rm -fv \1/g' | bash
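The one-liner generates, for each listed file, a command of the form sleep 1 && ionice -c 3 rm -fv <file> and executes it through bash. A more readable equivalent loop is sketched below; the temporary files are hypothetical stand-ins for real huge files.

```shell
# Readable equivalent of the one-liner above (a sketch): each file is
# removed with idle I/O scheduling priority (ionice class 3), pausing
# one second between removals to limit the I/O load.
command -v ionice >/dev/null 2>&1 || ionice() { shift 2; "$@"; }  # fallback when ionice is unavailable
tmpdir=$(mktemp -d)
touch "$tmpdir/huge1.dat" "$tmpdir/huge2.dat"   # stand-ins for real huge files
for f in "$tmpdir"/huge*.dat; do
    sleep 1
    ionice -c 3 rm -fv "$f"
done
rmdir "$tmpdir"
```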

19.30 Disabling PROVISIONING_VIP, FE_VIP, and SITE_VIP Addresses

To disable PROVISIONING_VIP, FE_VIP, and SITE_VIP addresses, perform the following steps:

Steps

  1. Set the value of the adminState attribute of the CudbTrafficControlManager class to UNLOCKED. For more information, refer to the Class CudbTrafficControlManager section of CUDB Node Configuration Data Model Description.
    Note: Make sure that this step is executed before moving on to the next steps.
  2. Create a new blocking rule for the PROVISIONING_VIP address:
    1. Add a new instance of the CudbTrafficBlockingRule class under the CudbTrafficControlManager class to the configuration model. For more information about these classes, refer to the Class CudbTrafficBlockingRule and Class CudbTrafficControlManager sections of CUDB Node Configuration Data Model Description.
    2. Set the value of the blockedVIP attribute to the PROVISIONING_VIP address that can be obtained from the network plan.
  3. Create a new blocking rule for the FE_VIP address:
    1. Add a new instance of the CudbTrafficBlockingRule class under the CudbTrafficControlManager class to the configuration model. For more information about these classes, refer to the Class CudbTrafficBlockingRule and Class CudbTrafficControlManager sections of CUDB Node Configuration Data Model Description.
    2. Set the value of the blockedVIP attribute to the FE_VIP address that can be obtained from the trafficVIP attribute of the CudbLocalNode class.
  4. Create a new blocking rule for the SITE_VIP address:
    1. Add a new instance of the CudbTrafficBlockingRule class under the CudbTrafficControlManager class to the configuration model. For more information about these classes, refer to the Class CudbTrafficBlockingRule and Class CudbTrafficControlManager sections of CUDB Node Configuration Data Model Description.
    2. Set the value of the blockedVIP attribute to the SITE_VIP address that can be obtained from the cudbVIP attribute of the CudbLocalNode class.

Note: The cudbTrafficBlockingRuleId attribute must be unique for each newly created blocking rule.
  5. Check the trafficControlManagerState attribute of the CudbTrafficControlManager class.
    If trafficControlManagerState is DISABLED, then a problem occurred during the activation of the configuration change in the node, and the node behavior cannot be guaranteed to be consistent with the configuration.

Example 2   Disabling Addresses

Example 2 (shown in COM CLI) shows the commands for disabling PROVISIONING_VIP, FE_VIP, and SITE_VIP addresses of node 100 with 10.10.10.1 configured as PROVISIONING_VIP, 10.10.10.10 configured as FE_VIP, and 10.10.10.20 configured as SITE_VIP.

>configure
(config)>ManagedElement=1,CudbSystem=1,CudbLocalNode=100,CudbTrafficControlManager=1
(config-CudbTrafficControlManager=1)>adminState=UNLOCKED
(config-CudbTrafficControlManager=1)>CudbTrafficBlockingRule=1
(config-CudbTrafficBlockingRule=1)>blockedVIP="10.10.10.1"
(config-CudbTrafficBlockingRule=1)>up
(config-CudbTrafficControlManager=1)>CudbTrafficBlockingRule=2
(config-CudbTrafficBlockingRule=2)>blockedVIP="10.10.10.10"
(config-CudbTrafficBlockingRule=2)>up
(config-CudbTrafficControlManager=1)>CudbTrafficBlockingRule=3
(config-CudbTrafficBlockingRule=3)>blockedVIP="10.10.10.20"
(config-CudbTrafficBlockingRule=3)>commit
(CudbTrafficBlockingRule=3)>up
(CudbTrafficControlManager=1)>show all
CudbTrafficControlManager=1
adminState=UNLOCKED
trafficControlManagerState=ENABLED
CudbTrafficBlockingRule=1
blockedVIP="10.10.10.1"
CudbTrafficBlockingRule=2
blockedVIP="10.10.10.10"
CudbTrafficBlockingRule=3
blockedVIP="10.10.10.20"
(CudbTrafficControlManager=1)>exit

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.31 Enabling PROVISIONING_VIP, FE_VIP, and SITE_VIP Addresses

Note: This procedure may result in an unbalanced situation of the TCP connections in LDAP FEs.

To re-enable PROVISIONING_VIP, FE_VIP, and SITE_VIP addresses, perform the following steps:

Steps

  1. To list all present blocking rules, show all instances of the CudbTrafficBlockingRule class under the CudbTrafficControlManager class, together with their attributes. For more information about these classes, refer to the Class CudbTrafficBlockingRule and Class CudbTrafficControlManager sections of CUDB Node Configuration Data Model Description.
  2. Remove the blocking rule for the PROVISIONING_VIP address:
    1. Identify which blocking rule is for the PROVISIONING_VIP address. To do that, examine the output of Step 1 and look for the instance of the CudbTrafficBlockingRule class where the value of the blockedVIP attribute is equal to the PROVISIONING_VIP address that can be obtained from the network plan. For more information, refer to the Class CudbTrafficBlockingRule section of CUDB Node Configuration Data Model Description.
    2. Remove the identified instance of the CudbTrafficBlockingRule class. For more information, refer to the Class CudbTrafficBlockingRule section of CUDB Node Configuration Data Model Description.
  3. Remove the blocking rule for the FE_VIP address:
    1. Identify which blocking rule is for the FE_VIP address. To do that, examine the output of Step 1 and look for the instance of the CudbTrafficBlockingRule class where the value of the blockedVIP attribute is equal to the FE_VIP address that can be obtained from the trafficVIP attribute of the CudbLocalNode class. For more information about these classes, refer to the Class CudbTrafficBlockingRule and Class CudbLocalNode sections of CUDB Node Configuration Data Model Description.
    2. Remove the identified instance of the CudbTrafficBlockingRule class. For more information, refer to the Class CudbTrafficBlockingRule section of CUDB Node Configuration Data Model Description.
  4. Remove the blocking rule for the SITE_VIP address:
    1. Identify which blocking rule is for the SITE_VIP address. To do that, examine the output of Step 1 and look for the instance of the CudbTrafficBlockingRule class where the value of the blockedVIP attribute is equal to the SITE_VIP address that can be obtained from the cudbVIP attribute of the CudbLocalNode class. For more information about these classes, refer to the Class CudbTrafficBlockingRule and Class CudbLocalNode sections of CUDB Node Configuration Data Model Description.
    2. Remove the identified instance of the CudbTrafficBlockingRule class. For more information, refer to the Class CudbTrafficBlockingRule section of CUDB Node Configuration Data Model Description.
  5. Set the value of the adminState attribute of the CudbTrafficControlManager class to LOCKED. For more information, refer to the Class CudbTrafficControlManager section of CUDB Node Configuration Data Model Description.
    Note: Make sure that Step 2, Step 3, and Step 4 are executed previously, because adminState cannot be set to LOCKED if there is any blocking rule present.
  6. Check the trafficControlManagerState attribute of the CudbTrafficControlManager class.
    If trafficControlManagerState is DISABLED, then a problem occurred during the activation of the configuration change in the node, and the node behavior cannot be guaranteed to be consistent with the configuration.

Example 3   Re-Enabling VIP Addresses

Example 3 (shown in COM CLI) shows the commands for re-enabling all VIP addresses of node 100.

>configure

(config)>ManagedElement=1,CudbSystem=1,CudbLocalNode=100,CudbTrafficControlManager=1
(config-CudbTrafficControlManager=1)>show all verbose
CudbTrafficControlManager=1
adminState=UNLOCKED
cudbTrafficControlManagerId="1"
trafficControlManagerState=ENABLED <read-only>
CudbTrafficBlockingRule=1
blockedVIP="10.10.10.1"
cudbTrafficBlockingRuleId="1"
CudbTrafficBlockingRule=2
blockedVIP="10.10.10.10"
cudbTrafficBlockingRuleId="2"
CudbTrafficBlockingRule=3
blockedVIP="10.10.10.20"
cudbTrafficBlockingRuleId="3"



(config-CudbTrafficControlManager=1)>adminState=UNLOCKED
(config-CudbTrafficControlManager=1)>no CudbTrafficBlockingRule=1
(config-CudbTrafficControlManager=1)>no CudbTrafficBlockingRule=2
(config-CudbTrafficControlManager=1)>no CudbTrafficBlockingRule=3
(config-CudbTrafficControlManager=1)>adminState=LOCKED
(config-CudbTrafficControlManager=1)>commit
(CudbTrafficControlManager=1)>show all
CudbTrafficControlManager=1
adminState=LOCKED
trafficControlManagerState=ENABLED
(CudbTrafficControlManager=1)>exit

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

19.32 Configuring PG Endpoints in a Local CUDB Node for the Provisioning Assurance Function

PG endpoints are configured in the CUDB nodes as destinations for the Provisioning Assurance function interworking.

Note: This procedure can be performed by Ericsson personnel only. Contact the next level of maintenance support to configure PG endpoints for the Provisioning Assurance function.

19.33 Creating a New Notification Event

To create a new CUDB Notification Event, a new instance of the CudbNotificationEvent class must be added to the configuration model and the corresponding attributes must be set. For more information, refer to the Class CudbNotificationEvent section of CUDB Node Configuration Data Model Description.

Follow the steps below, shown in COM CLI, to insert a new Notification Event using the Object Model Modification procedure:

Note: Changing the notification configuration may require deactivation/activation of notifications globally to take effect. For more information, refer to CUDB Notifications.

Steps

  1. Establish a CUDB CLI session towards the CUDB node by executing the ssh -l cudbadmin <CUDB_Node_OAM_IP_Address> command.
    Important: This procedure must be done in all CUDB nodes in the system.
  2. If there is no backup of the present configuration, perform the backup by executing the sudo cmw-configuration-persist command. If the backup is already updated, proceed to Step 3.
  3. Establish a CUDB configuration CLI session in the active SC. Use the following command to find the active SC (its output is also shown):
    sudo cudbHaState | grep COM | grep ACTIVE
    COM is assigned as ACTIVE in controller SC-1
    The active SC (SC-1 in the example above) must be used for accessing the COM CLI by executing the sudo /opt/com/bin/cliss command.
  4. Set the configuration session by executing the configure command.
  5. Check whether CudbNotifications is already enabled and which CudbNotificationEvent instances exist.
    Use the show verbose command to show ManagedElement=1,CudbSystem=1,CudbNotifications=1:
    (config)>show verbose ManagedElement=1,CudbSystem=1,CudbNotifications=1
    CudbNotifications=1
        cudbNotificationsId="1"
        enabled=true <default>
        maxReattempts=3 <default>
        reattemptTime=1000 <default>
        userLabel=[] <empty>
        CudbNotificationEvent=1
     (config)>
    If CudbNotifications are not already enabled, execute the following commands:
    (config)>ManagedElement=1,CudbSystem=1,CudbNotifications=1,enabled=true
     (config)>commit
  6. Execute the following command sequence to add a new CUDB Notification Event (CudbNotificationEvent=2) in the CUDB configuration model:
    Note: This is an example and must not be taken as a rule.
    >configure
     (config)>ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2
     (config-CudbNotificationEvent=2)>eventId=SAE-HSS
     (config-CudbNotificationEvent=2)>notificationString=mobilityEvent
     (config-CudbNotificationEvent=2)>CudbNotificationEndPoint=1
     (config-CudbNotificationEndPoint=1)>name=Serv1
     (config-CudbNotificationEndPoint=1)>URI=http://127.0.0.1:8080
     (config-CudbNotificationEndPoint=1)>weight=1
     (config-CudbNotificationEndPoint=1)>up
     (config-CudbNotificationEvent=2)>CudbNotificationObjectClass=1
     (config-CudbNotificationObjectClass=1)>dn="serv=csps"
     (config-CudbNotificationObjectClass=1)>name=CsPsLocationData
     (config-CudbNotificationObjectClass=1)>type=related
     (config-CudbNotificationObjectClass=1)>CudbNotificationAttr=1
     (config-CudbNotificationAttr=1)>name=SGSNNUM
     (config-CudbNotificationAttr=1)>send=true
     (config-CudbNotificationAttr=1)>up
     (config-CudbNotificationObjectClass=1)>up
     (config-CudbNotificationEvent=2)>show verbose
     CudbNotificationEvent=2
        cudbNotificationEventId="2"
        eventId="SAE-HSS"
        notificationString="mobilityEvent"
        userLabel=[] <empty>
        CudbNotificationEndPoint=1
        CudbNotificationObjectClass=1
    Each operation is executed as a unique transaction.
    Note: In case endpoint address is IPv6, it should be defined within square brackets in URI field, for example: http://[2001:cdba:0000:0000:0000:0000:0000:3257]:8080.
  7. Commit the changes by executing the commit command.
  8. Check the log files to see the result of the operations. For more information, refer to CUDB Node Logging Events.
  9. Check the configuration changes with the show ManagedElement=1,CudbSystem=1,CudbNotificationEvent=2 command.
    Note: Remember to use show verbose instead of show for non-mandatory attributes that have no value set, or for optional attributes whose value is set to the default one.
  10. Exit from the configuration mode by executing the end command.
  11. Exit from the CUDB configuration CLI console by executing the exit command.

Results

Note: Configuration changes in the examples can also be performed through the NETCONF client.

Refer to the Notifications Parameters section of CUDB Notifications for more information about the notification parameters.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

Example 4   CudbNotificationEvent=1 with eventId="SAE-HLR"

Example 4, shown in COM CLI, presents CudbNotificationEvent=1 with eventId="SAE-HLR", with one Notification Endpoint and five Notification Object Classes defined:

>show verbose ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=1
CudbNotificationEvent=1
   cudbNotificationEventId="1"
   eventId="SAE-HLR"
   notificationString="mobilityEvent"
   userLabel=[] <empty>
   CudbNotificationEndPoint=1
   CudbNotificationObjectClass=1
   CudbNotificationObjectClass=2
   CudbNotificationObjectClass=3
   CudbNotificationObjectClass=4
   CudbNotificationObjectClass=5

Define all required elements (attributes, object/object classes and their attributes) of the new CudbNotificationEvent in its tree structure (Directory Information Tree, DIT) before starting to create a new Notification Event.

Example 5   New CudbNotificationEvent=2 shown in COM CLI

Example 5 presents a new CudbNotificationEvent=2 shown in COM CLI.

>show verbose ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2
CudbNotificationEvent=2
   cudbNotificationEventId="2"
   eventId="SAE-HSS"
   notificationString="mobilityEvent"
   userLabel=[] <empty>
   CudbNotificationEndPoint=1
   CudbNotificationObjectClass=1
   
>show verbose ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2,CudbNotificationEndPoint=1
CudbNotificationEndPoint=1
   name="Serv1"
   URI="http://127.0.0.1:8080"
   webService="/"
   weight=1

>show verbose ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2,CudbNotificationObjectClass=1
CudbNotificationObjectClass=1
   cudbNotificationObjectClassId="1"
   dn="serv=csps"
   name="CsPsLocationData"
   type="related"
   userLabel=[] <empty>
   CudbNotificationAttr=1

>show verbose ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2,CudbNotificationObjectClass=1,CudbNotificationAttr=1
CudbNotificationAttr=1
   cudbNotificationAttrId="1"
   name="SGSNNUM"
   send=true
   userLabel=[] <empty>
   value=""
(CudbNotificationObjectClass=1)>
Note: In case endpoint address is IPv6, it should be defined within square brackets in URI field, for example: http://[2001:cdba:0000:0000:0000:0000:0000:3257]:8080.

As shown in Example 5, CudbNotificationEvent=2 contains one CudbNotificationEndPoint=1 and one CudbNotificationObjectClass=1 with one CudbNotificationAttr=1.

19.34 Configuring an Existing CudbNotificationEvent

To change the configuration of an existing notification event, modify the read/write attributes of the corresponding instance of the CudbNotificationEvent class or the class instances it contains.

For example, to change the URI attribute of the notification end point from Example 5, follow the steps below (shown in COM CLI):

Note: In case endpoint address is IPv6, it should be defined within square brackets in URI field, for example: http://[2001:cdba:0000:0000:0000:0000:0000:3257]:8080.
Note: Changing the notification configuration may require deactivation/activation of notifications globally to take effect. For more information, refer to the Configuration section of CUDB Notifications.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

Example 6  
configure
(config)>ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2,CudbNotificationEndPoint=1,URI="http://127.0.0.1:4040"
(config)>commit

19.35 Deleting a CudbNotificationEvent

To remove an instance of the CudbNotificationEvent class, first remove all instances of the classes it contains.

For example, to remove the notification event from Example 5, follow the steps below (shown in COM CLI):

Note: Changing the notification configuration may require deactivation/activation of notifications globally to take effect. For more information, refer to the Configuration section of CUDB Notifications.

Refer to the Object Model Modification Procedure in CUDB Node Configuration Data Model Description for more information on all the steps required to modify the object model (for example, on using the administrative operation applyConfig to activate the changes).

Example 7  
configure
(config)>no ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2,CudbNotificationObjectClass=1,CudbNotificationAttr=1
(config)>commit -s
(config)>no ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2,CudbNotificationObjectClass=1
(config)>commit -s
(config)>no ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2,CudbNotificationEndPoint=1
(config)>commit -s
(config)>no ManagedElement=1,CudbSystem=1,CudbNotifications=1,CudbNotificationEvent=2
(config)>commit

19.36 Configuring QoS in CUDB

In CUDB, the Differentiated Services (DiffServ) networking architecture is used to provide Quality of Service (QoS). The basic principle of DiffServ is the classification of IP packets by marking them: a DSCP value is placed in the TOS field of the IPv4 header, or in the Traffic Class field of the IPv6 header. Based on this mark, the DiffServ-aware routers in the DiffServ domain can handle the traffic (Per-Hop Behavior) according to the traffic class to which the packet belongs. The marking is applied by iptables in the case of IPv4 and by ip6tables in the case of IPv6: outgoing packets are marked with the preconfigured DSCP values. Refer to CUDB Node Network Description for more information on the preconfigured values.
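As a general illustration of how DSCP marking with iptables works (the port and DSCP class below are hypothetical examples, not CUDB's preconfigured values; refer to CUDB Node Network Description for those), a mangle-table rule marking outgoing LDAP traffic could look like this:

```shell
# Illustration only: mark outgoing TCP traffic from port 389 (LDAP)
# with DSCP class AF31 in the IPv4 header; values are hypothetical.
iptables -t mangle -A OUTPUT -p tcp --sport 389 -j DSCP --set-dscp-class af31
# The IPv6 equivalent uses ip6tables with the same syntax:
ip6tables -t mangle -A OUTPUT -p tcp --sport 389 -j DSCP --set-dscp-class af31
```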

Note: Configuring QoS is restricted to Ericsson personnel. Contact the next level of maintenance support to perform such procedures.

Reference List