ERH, CUDB Node Failure
IPWorks

Contents

1     Introduction
1.1   Alarm Description
1.2   Prerequisites
2     Procedure
2.1   Analyzing the Alarm
2.2   Troubleshooting IPWorks Configuration
2.3   Troubleshooting the Network Issues
2.4   Troubleshooting the CUDB Configuration

1   Introduction

This instruction describes how to handle the ERH, CUDB Node Failure alarm.

1.1   Alarm Description

The alarm is issued when the ENUM server fails to access a specific CUDB node.

The possible alarm causes and the corresponding fault reasons, fault locations, and impacts are described in Table 1.

Table 1    Alarm Causes

Alarm Cause:    A specific CUDB server is down.
Description:    The ENUM server fails to access a specific CUDB server.
Fault Reason:   The CUDB node is down because of a maintenance activity or some other reason.
Fault Location: CUDB Server
Impact:         The traffic of the failed CUDB node is switched to another CUDB node.
Solution:       See Section 2.2.

Alarm Cause:    The CUDB server is unreachable.
Description:    The CUDB server is unreachable because of network connection issues or other network-related problems.
Fault Reason:   Network connection error
Fault Location: Network
Solution:       See Section 2.3.

Alarm Cause:    The CUDB node does not handle direct traffic.
Description:    The Flexible PL Deployment function is enabled, and this CUDB node is configured without a PLDB, so the node does not handle direct traffic.
Fault Reason:   The Flexible PL Deployment function is enabled in CUDB, and this CUDB node is without a PLDB.
Fault Location: CUDB Server
Solution:       See Section 2.4.

Note:  
An alarm can appear as a result of a maintenance activity.

The alarm attributes are listed and explained in Table 2.

Table 2    Alarm Attributes

Attribute Name          Attribute Value
Major Type              193
Minor Type              856108
Managed Object Class    ipworksErh
Source                  ManagedElement=<Node Name>,SystemFunctions=1,Fm=1,FmAlarmModel=IpworksErh,FmAlarmType=ipworksERHCUDBNodeFailure,HostName=<Hostname>,IpworksErh,Node=<NodeIP>
Specific Problem        ERH, CUDB Node Failure
Event Type              communicationsAlarm(2)
Probable Cause          x733RemoteNodeTransmissionError(342)
Additional Text         This alarm is raised when CUDB Node %s fails during the access.;uuid:<Product_UUID>(1)
Perceived Severity      Warning

(1)  <Product_UUID> is the universally unique identifier (UUID) of the machine that generates the alarm. The value can be fetched from /sys/devices/virtual/dmi/id/product_uuid on the PL node.
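The UUID lookup described in the note above can be sketched as follows; this is a minimal illustration, and the helper name read_product_uuid is ours, not part of the product:

```python
def read_product_uuid(path="/sys/devices/virtual/dmi/id/product_uuid"):
    """Return the machine UUID read from the given sysfs path,
    or None if the file is unavailable (e.g. not on a PL node)."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None
```

On a PL node the default path applies; elsewhere the function simply returns None instead of raising.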


1.2   Prerequisites

This section provides information on the documents, tools, and conditions that apply to the procedure.

1.2.1   Documents

Before starting this procedure, ensure that you have read the following documents:

1.2.2   Tools

No tools are required.

1.2.3   Conditions

No conditions.

2   Procedure

This section describes the procedure to follow when this alarm is received.

2.1   Analyzing the Alarm

Do the following at the maintenance center:

  1. Troubleshoot the IPWorks configuration.
  2. Troubleshoot the network issues.
  3. Troubleshoot the CUDB configuration.

2.2   Troubleshooting IPWorks Configuration

To clear the alarm, perform the following steps:

  1. Obtain the IP address of the failed CUDB server from the alarm attribute Additional Text. In this example, the failed CUDB server is 10.170.15.188.
  2. Ensure that the CUDB Connection with ERH is configured correctly.

    For example, the following output indicates that two CUDB nodes are deployed in one site, and that node 10.170.15.188 is in site1. Ensure that the IP addresses configured for site1 match those provided by the CUDB node.

    >ManagedElement=<Node Name>,IpworksFunction=1,IpworksCommonRoot=1,DataBaseInfo=1,
    CudbManager=1,CudbServiceSite=NP,CudbSiteManager=1,CudbSite=site1
    (CudbSite=site1)>show
    CudbSite=site1
       CudbNode=node2
       CudbNode=node1
    (CudbSite=site1)>CudbNode=node1
    (CudbNode=node1)>show -v
    CudbNode=node1
       address="192.168.20.14"
       cudbNodeId="node1" <default>
       distinguishedName=[] <empty>
       password=[] <empty>
       poolSize=16 <default>
       port=389 <default>
    (CudbNode=node1)>up
    (CudbSite=site1)>CudbNode=node2
    (CudbNode=node2)>show -v
    CudbNode=node2
       address="10.170.15.188" <default>
       cudbNodeId="node2"
       distinguishedName=[] <empty>
       password=[] <empty>
       poolSize=16 <default>
       port=389 <default>
    (CudbNode=node2)>
    

If the configuration is not correct, fix it. For more details, refer to section Configuring CUDB Connection Pool in Configure DNS and ENUM.

If the configuration is correct, and the alarm still exists, do the following:

  1. Fix the issues of the failed CUDB node.

    This action is outside the scope of IPWorks instruction.

  2. If the alarm remains, consult the next level of maintenance support. Further actions are outside the scope of this instruction.
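Step 1 of this section obtains the failed node's address from the Additional Text attribute. Assuming the message template shown in Table 2, the extraction can be sketched as follows; the function name failed_node_address is illustrative, not part of the product:

```python
import re

# Template taken from Table 2:
# "This alarm is raised when CUDB Node %s fails during the access."
ALARM_TEXT_RE = re.compile(
    r"This alarm is raised when CUDB Node (\S+) fails during the access\."
)


def failed_node_address(additional_text):
    """Return the failed CUDB node address embedded in the alarm's
    Additional Text, or None if the text does not match the template."""
    match = ALARM_TEXT_RE.search(additional_text)
    return match.group(1) if match else None
```

For the example in step 1, passing the raised alarm's Additional Text returns "10.170.15.188".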

2.3   Troubleshooting the Network Issues

To clear the alarm, perform the following steps:

  1. Debug and troubleshoot the network issues; for example, ping the IP address of the failed CUDB node and check the cable connections.

    The alarm is expected to be cleared automatically when the network connection returns to normal.

  2. Confirm that the alarm has ceased.

    If the alarm remains, consult the next level of maintenance support. Further actions are outside the scope of this instruction.
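The reachability check in step 1 can be automated. The sketch below tests TCP connectivity to the CUDB node's LDAP port (389, the port shown in the CudbNode configuration in Section 2.2); the function name cudb_port_reachable is illustrative, not part of the product:

```python
import socket


def cudb_port_reachable(host, port=389, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    timeout seconds, False otherwise (refused, unreachable, timed out)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A True result only confirms TCP-level reachability of the LDAP port; LDAP bind failures (for example, wrong credentials) must still be checked per Section 2.2.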

2.4   Troubleshooting the CUDB Configuration

Consult with CUDB maintenance support, and check whether the Flexible PL Deployment function is enabled and this CUDB node is without a PLDB replica.

If Yes, ignore this alarm, as this CUDB node is configured to not receive direct traffic. If No, consult the next level of maintenance support. Further actions are outside the scope of this instruction.



Copyright

© Ericsson AB 2017, 2018. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design, and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

