1 Introduction
1.1 Alarm Description
1.2 Prerequisites
1.3 Related Information
2 Procedure
2.1 Configuring MySQL Information in ECLI
2.2 Starting Management Node and Data Node
1 Introduction
This instruction describes how to handle the alarm.
1.1 Alarm Description
The alarm is issued when the connection between the AAA server and the MySQL NDB Cluster is lost.
The possible alarm causes and the corresponding fault reasons, fault locations and impacts are described in Table 1.
| Alarm Cause | Description | Fault Reason | Fault Location | Impact | Solution |
|---|---|---|---|---|---|
| MySQL information configuration for AAA server is incorrect. | The alarm is raised because of the incorrect configuration of MySQL information. | The attribute ndbConnectString in the MO MySQLInfo is configured incorrectly. | AAA server | AAA server cannot provide service. | |
| NDB cluster is under abnormal condition. | All Data Nodes are down. | All the NDB connections are lost. | NDB cluster | | |
Note: The alarm can also appear as a result of maintenance activity.
The alarm attributes are listed and explained in Table 2.
| Attribute Name | Attribute Value |
|---|---|
| Major Type | 193 |
| Minor Type | 868354 |
| Managed Object Class | IpworksRadiusAAA |
| Source | ManagedElement=<Node Name>,SystemFunctions=1,Fm=1,FmAlarmModel=ipworksRadiusAAA,FmAlarmType=ipworksRadiusAAADBFailure |
| Specific Problem | |
| Event Type | processingErrorAlarm(10) |
| Probable Cause | x733ApplicationSubsystemFailure(302) |
| Additional Text | NDB Cluster or Data Nodes are down when AAA tries to connect to NDB.;uuid:<Product_UUID>(1) |
| Perceived Severity | Critical |
(1) <Product_UUID> is the universally unique identifier (UUID) of the machine that generates the alarm. The value can be fetched from /sys/devices/virtual/dmi/id/product_uuid on the PL node.
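The UUID can be read from a shell on the PL node. A minimal sketch (the path is the one stated above; the fallback branch is illustrative, for hosts that do not expose DMI data or where the file is root-only readable):

```shell
# Read the Product_UUID reported in the Additional Text field.
# Path as given in this instruction; normally readable on PL nodes.
UUID_FILE=/sys/devices/virtual/dmi/id/product_uuid
if [ -r "$UUID_FILE" ]; then
    PRODUCT_UUID=$(cat "$UUID_FILE")
else
    # Fall back gracefully when DMI information is not exposed.
    PRODUCT_UUID="unavailable"
fi
echo "Product_UUID: $PRODUCT_UUID"
```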
1.2 Prerequisites
This section provides information on the documents, tools, and conditions that apply to the procedure.
1.2.1 Documents
Before starting this procedure, ensure that the following document has been read:
1.2.2 Tools
Not available.
1.2.3 Conditions
Not applicable.
1.3 Related Information
Trademark information, typographic conventions, and definition and explanation of abbreviations and terminology can be found in the following documents:
2 Procedure
This section describes the procedure to follow to clear this alarm.
2.1 Configuring MySQL Information in ECLI
To clear the alarm, do the following:
- Log on to the ECLI interface.
# ssh <username>@<OAM IP Address> -t -s cli
- Check the configuration of MySQLInfo.
For example:
>ManagedElement=<Node Name>,IpworksFunction=1,IpworksCommonRoot=1,DataBaseInfo=1,MySQLInfo=1
(MySQLInfo=1)>show -v -r
MySQLInfo=1
   mySQLInfoId="1"
   ndbConnectString <default> "SC-2:1186" "SC-1:1186"
   SQLNodeInfo=1
      host="ipw_sql"
      password="1:cRmtreL28X8="
      port=3307 <default>
      sqlNodeInfoId="1"
      user="root" <default>
   SQLNodeInfo=2
      host="ipw_sql"
      password="1:cRmtreL28X8="
      port=3307 <default>
      sqlNodeInfoId="2"
      user="root" <default>
- Check the configuration parameters of MySQL. The configuration files are located at /etc/ipworks/mysql/confs on the SC node.
| File Name | Parameter |
|---|---|
| ipworks_datanode_my.conf | ndb-connectstring |
| ipworks_mgm.conf | HostName, PortNumber |
| ipworks_sqlnode.conf | ndb-connectstring, port |
Verify whether the configuration shown in Step 2 matches the parameter values in these configuration files. If not, proceed with Step 4.
- Configure the MO MySQLInfo.
>ManagedElement=<Node Name>,IpworksFunction=1,IpworksCommonRoot=1,DataBaseInfo=1,MySQLInfo=1
(MySQLInfo=1)>configure
(config-MySQLInfo=1)>ndbConnectString=["SC-1:1186","SC-2:1186"]
(config-MySQLInfo=1)>commit
(config-MySQLInfo=1)>exit
- Restart the AAA service on Payload (PL) to make the change
take effect.
PL-X:~ # ipw-ctr restart aaa_radius_backend
- Confirm that the alarm has ceased. If the alarm still exists, consult the next level of maintenance support. Further actions are outside the scope of this instruction.
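The file check in Step 3 can be scripted. A minimal sketch, assuming the default configuration directory and the file and parameter names from the table in Step 3 (pass a different directory as the first argument when testing elsewhere):

```shell
# List the NDB-related parameters from the MySQL configuration files
# so they can be compared with the MySQLInfo MO shown in Step 2.
# Directory and file names as given in this instruction.
CONF_DIR="${1:-/etc/ipworks/mysql/confs}"
for f in ipworks_datanode_my.conf ipworks_mgm.conf ipworks_sqlnode.conf; do
    echo "== $f =="
    if [ -r "$CONF_DIR/$f" ]; then
        # Show only the parameters that must match the MO configuration.
        grep -iE '^(ndb-connectstring|HostName|PortNumber|port)' "$CONF_DIR/$f" \
            || echo "(no matching parameters found)"
    else
        echo "(not readable on this node)"
    fi
done
```

Compare the printed values with the ndbConnectString and SQLNodeInfo attributes shown in Step 2.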
2.2 Starting Management Node and Data Node
To clear the alarm, do the following:
- Log on to the SC-1.
# ssh <Username>@<SC-1 IP Address>
- Check NDB status.
SC-1:~ # ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=27   @169.254.100.1  (mysql-5.6.31 ndb-7.4.12, Nodegroup: 0, *)
id=28   @169.254.100.2  (mysql-5.6.31 ndb-7.4.12, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @169.254.100.1  (mysql-5.6.31 ndb-7.4.12)
id=2    @169.254.100.2  (mysql-5.6.31 ndb-7.4.12)

[mysqld(API)]   24 node(s)
id=3    @169.254.100.1  (mysql-5.6.31 ndb-7.4.12)
id=4 (not connected, accepting connect from SC-2)
id=5    @169.254.100.3  (mysql-5.6.31 ndb-7.4.12)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
id=9 (not connected, accepting connect from any host)
id=10 (not connected, accepting connect from any host)
id=11 (not connected, accepting connect from any host)
id=12 (not connected, accepting connect from any host)
id=13 (not connected, accepting connect from any host)
id=14 (not connected, accepting connect from any host)
id=15 (not connected, accepting connect from any host)
id=16 (not connected, accepting connect from any host)
id=17 (not connected, accepting connect from any host)
id=18 (not connected, accepting connect from any host)
id=19 (not connected, accepting connect from any host)
id=20 (not connected, accepting connect from any host)
id=21 (not connected, accepting connect from any host)
id=22 (not connected, accepting connect from any host)
id=23 (not connected, accepting connect from any host)
id=24 (not connected, accepting connect from any host)
id=25 (not connected, accepting connect from any host)
id=26 (not connected, accepting connect from any host)

ndb_mgm> exit
The above output shows that all the NDB Cluster nodes are started. If the result shows that certain nodes are not started, proceed with Step 3.
- Start the Management Node, Data
Node, and SQL Node.
SC-1:~ # /etc/init.d/ipworks.mysql start-mgmd
SC-1:~ # /etc/init.d/ipworks.mysql start-ndbd
SC-1:~ # /etc/init.d/ipworks.mysql start-sqlnode
For more information on how to manage MySQL NDB Cluster, refer to Configure MySQL NDB Cluster.
- Log on to the SC-2, then start the Management Node, Data
Node, and SQL Node. Ensure that the NDB status is identical to the
output of Step 2.
If all NDB nodes are down, you can start all the nodes by executing the following command:
SC-X:~ # /etc/init.d/ipworks.mysql start-ndbcluster
- Restart the AAA server:
SC-X:~ # ipw-ctr restart aaa_radius_backend PL-3
SC-X:~ # ipw-ctr restart aaa_radius_backend PL-4
- Confirm that the alarm has ceased. If the alarm remains, consult the next level of maintenance support. Further actions are outside the scope of this instruction.
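The cluster state check in Step 2 can be scripted by parsing the output of the management client. A minimal sketch, assuming ndb_mgm is on the PATH of the SC node; note that unused API slots also report "not connected" in a healthy cluster, so compare the count against the baseline output in Step 2 rather than expecting zero:

```shell
# Count cluster slots that report "not connected" in the show output.
# Falls back to an empty status if ndb_mgm is unavailable on this host.
STATUS=$(ndb_mgm -e show 2>/dev/null || true)
NOT_CONNECTED=$(printf '%s\n' "$STATUS" | grep -c 'not connected' || true)
echo "Entries not connected: $NOT_CONNECTED"
```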
