1 Introduction
This section introduces the document. It describes the purpose, scope, target groups, and prerequisites of the document, and explains the typographic conventions used.
1.1 Purpose and Scope
This document covers the system administration, operation, and maintenance routines available for Dynamic Activation 1.
1.2 Target Groups
The target groups for this document are as follows:
- System Administrator
- Network Administrator
- Network Supervision Administrator
1.3 Typographic Conventions
Typographic conventions are described in the document Library Overview, Reference [1].
For information about abbreviations and terms used throughout this document refer to Glossary of Terms and Acronyms, Reference [2].
1.4 Prerequisites
The following are the prerequisites to make full use of this document:
- Knowledge about the Ericsson Dynamic Activation (EDA) product
- Knowledge about Linux operating system
- Knowledge about the Linux Distribution Extensions (LDE) product.
2 System Information
This section holds information about the Dynamic Activation system and how to retrieve useful hardware and software information.
2.1 System Architecture
This section describes the Dynamic Activation system architecture, as shown in Figure 1.
The system consists of two types of nodes:
- System Controller Node
The System Controller (SC) nodes host the High Availability (HA) file system that is used by the Payload (PL) nodes. The SC nodes also provide generic services to the rest of the system, including hosting of the GUI. In a Dynamic Activation system two SC nodes are always present.
- Payload Node
The PL nodes execute the provisioning operations and host the provisioning logic. In a Dynamic Activation system 2-10 PL nodes can be present.
ZooKeeper
ZooKeeper is a cluster storage engine. It provides notifications when data is modified, internal distribution of configuration data (for example NEs, NE Groups, and Routing), symmetric data modification, user management, session replication, group membership, and license management.
ZooKeeper runs in a separate process. It runs (by default) on three nodes in the cluster, on both SC nodes and on the first PL node.
For ZooKeeper to work, at least two ZooKeeper instances must be operational at the same time. Stopping ZooKeeper instances must therefore be handled with caution, since the Dynamic Activation application depends on them. If more than one ZooKeeper instance is brought down simultaneously, the Dynamic Activation application does not function properly.
Restarting one ZooKeeper instance during traffic can cause a short traffic disturbance.
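The caution above follows from ZooKeeper's majority-quorum rule: with the default three instances, two must stay up, so only one instance may be stopped at a time. A minimal sketch of the arithmetic (the ensemble size of 3 reflects the default deployment described above):

```python
def zookeeper_failure_tolerance(ensemble_size: int) -> int:
    """Return how many ZooKeeper instances can be down while the
    ensemble still has a majority quorum."""
    quorum = ensemble_size // 2 + 1          # majority of the ensemble
    return ensemble_size - quorum

# Default Dynamic Activation deployment: 3 instances (both SC nodes
# plus the first PL node). A quorum of 2 must remain operational, so
# only 1 instance may be stopped at a time.
print(zookeeper_failure_tolerance(3))  # → 1
```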
Cassandra
Cassandra is a distributed database providing data access through the Cassandra Query Language (CQL) interface. Cassandra is used for storage and presentation of provisioning events performed by the Business Logic; these are the Dynamic Activation processing logs.
Function deployment in Dynamic Activation
The Dynamic Activation functions are implemented as modules and third-party (3PP) software used by the modules. For each module there is a corresponding process (JVM), deployed on one or several nodes; see Table 2.
The functions shown in Figure 1 are implemented by modules; see Table 1.
| Function | Module |
|---|---|
| Inbound Interfaces (MML) | dve-inbound-interfaces-application |
| Inbound Interfaces (CAI/CAI3G/EDIFACT) | nbia-module |
| | tomcat-server |
| Provisioning Applications | dve-application |
| | arh-module |
| | akka-cluster-service-lib |
| | activation-orchestration-module |
| | lockservice-module |
| | ema-pq-module |
| Processing Log Admin Tool | admin-tool |
| CUDB Block Proxy | cudb-block-proxy |
| Fault Management service | ema-faultmgmt-module |
| Authentication service | ema-authentication-module |
| Batch Handler service | ema-batch-module |
- The dve-inbound-interfaces-application module supports the inbound MML Interface, mediating incoming requests to different services.
- The nbia-module supports the CAI3G, CAI, and EDIFACT inbound interfaces, mediating incoming requests to different services.
- The tomcat-server module hosts the GUI function. It is based on the Apache Tomcat 3PP.
- The dve-application module executes provisioning logic, converting service invocations into provisioning requests towards the NEs, for example HLR, CUDB, HSS. It also supports massive updates, and multi-destination provisioning activities through the CLI.
- The arh-module provides support for asynchronous requests. When an asynchronous service request is received, the arh-module generates a notification to the service consumer. Once the service request has been processed by the business logic, the module generates a provisioning result notification.
- The akka-cluster-service-lib module implements the internal Service Location Register (SLR) function. Service provider modules register their services in the SLR when the module is started. Service consumer modules request service lookups from the SLR when needed.
- The activation-orchestration-module provides support for stateful and resilient execution of model driven activation logic and queueing towards provisioning resource endpoints.
- The lockservice-module provides support for preventing concurrent provisioning to the same subscriber. When more than one request is received for the same subscriber, the first request acquires a lock and the other requests are rejected with an error code.
- The ema-pq-module provides support for asynchronous queues to process failed commands. Commands for a subscriber in the same NE are executed in sequence.
- The admin-tool provides import and export functions for the stored processing logs generated by the different modules.
- The cudb-block-proxy mediates incoming blocking and unblocking requests from the CUDB into the dve-application module. Upon a blocking request, the dve-application module holds certain provisioning requests towards the CUDB until a deblocking request is received, at which point the provisioning requests are resumed.
For information about which provisioning requests are blocked, see Configuration Manual for Resource Activation, Reference [9].
The backup notification from CUDB is received through the O&M VLAN on the O&M IP address. It is also necessary to configure the JMX ports in the right order: <VIP-OAM-IP>:8994:8099
- The ema-faultmgmt-module is responsible for receiving faults from all other modules. The faults are propagated from this module to ESA for sending SNMP alarms, and are also sent out over SMTP if e-mail notifications are configured.
- The ema-authentication-module is responsible for the authentication of users. Other modules in the system that need to authenticate user data interact with this module.
- The ema-batch-module provides support for bulk provisioning of CAI and synchronous CAI3G requests towards the NEs. It interacts with the nbia-module to execute provisioning requests.
The deployment of modules and 3PPs is shown in Table 2 and Table 3.
| Module | Node |
|---|---|
| nbia-module | |
| dve-inbound-interfaces-application | |
| akka-cluster-service-lib | |
| cudb-block-proxy | SC1 only |
| tomcat-server | |
| dve-application | |
| arh-module | |
| admin-tool | |
| ema-faultmgmt-module | |
| ema-authentication-module | |
| activation-orchestration-module | |
| lockservice-module | |
| ema-pq-module | |
| ema-batch-module | |

| 3PP | Node |
|---|---|
| Linux/LDE | |
| Apache Tomcat | |
| ZooKeeper | |
| Cassandra | |
| License Server | |
| eVIP (Internal load balancer) | |
| Ericsson SNMP Agent (ESA) | |

(1) By default, installed on the first PL node (PL3).
2.1.1 Dynamic Activation Configurations
This section describes the available Dynamic Activation configurations.
The minimum Dynamic Activation system consists of two SC nodes and two PL nodes.
If more provisioning capacity is needed, extra PL nodes can be added to the system.
The largest Dynamic Activation system consists of two SC nodes and ten PL nodes.
For information about the hardware configurations in Dynamic Activation 1, refer to BSP 8100 documentation. For details, see document ECP Guideline for Ordering BSP8100 in Dynamic Activation 1, in http://prodcat.internal.ericsson.com.
2.2 Directory Structure
The most important parts of Dynamic Activation directory structure are shown in Figure 2.
2.2.1 Description of /opt
The folder /opt contains add-on application software packages. See Table 4 for more specific information on the contents.
| Directory | Comments | Node or Nodes on Which the Directory Exists |
|---|---|---|
| /opt/dve/bin | Dynamic Activation application scripts | |
| /opt/dve/tools | System tools, for example the JMX batch client and multi-log-consolidation | |
| /opt/dve-repository | RPM software installation files | |
| /opt/sentinel | Sentinel™ installation directory | |
| /opt/cassandra | Cassandra installation directory | |
| /opt/zookeeper | ZooKeeper server | |
| /opt/vip | eVIP files and Linux core kernel files | |
| /opt/ericsson/activation | Bootloader and healthcheck files | |

(1) By default, installed on the first PL node (PL3).
2.2.2 Description of /var
The folder /var contains variable files, such as logs and spool files. See Table 5 for more specific information.
| Directory | Contents | Node or Nodes on Which the Directory Exists |
|---|---|---|
| /var/dve/cli | CLI result files | |
| /var/log | System log files | |
| /var/log/bootloader | Bootloader log files | |
| /var/log/esa | ESA log files | |
| /var/log/ssm | SSM log files | |
| /var/log/zookeeper | ZooKeeper log files, commit log, and database | |
| /var/log/cassandra | System log for Cassandra | |
| /var/log/dve | Dynamic Activation system and custom log files | |
| /var/cassandra | Cassandra data directory | |

(1) By default, installed on the first PL node (PL3).
2.2.3 Description of /cluster
The folder /cluster contains Dynamic Activation configuration files and shared deployment directories. See Table 6 for more specific information.
| Directory | Contents |
|---|---|
| /cluster/nodes/<nodeId> | LDE-related configuration files |
| /cluster/nodes/<nodeId>/etc | Contains the rpm.conf file, which lists all RPMs and the order in which they are installed |
| /cluster/nodes/<nodeId>/boot | Boot directory |
| /cluster/etc | LDE cluster configuration files |
| /cluster/rpms | RPM storage |
| /home/dveinstaller | Dynamic Activation application configuration files and installation logs |
| /home/bootloader | Bootloader home folder |
| /home/bootloader/repository | Software repository of Dynamic Activation modules and submodules |
| /home/bootloader/config | Automatically generated internal Dynamic Activation configuration files |
| /home/actadm/config | Persisted Dynamic Activation configuration files |
| /home/actadm/config/log | Log for configuration file changes |
| /home/actadm/licenses | Licenses (SC) |
| /home/bootloader/config/module_config_files/<Module> | Module configuration files with parameters that, individually, cannot be set through the bootloader.py CLI |
| /home/actadm/nodes/<nodeId>/ESA | Programs and configuration files for ESA |
| /home/actadm/nodes/<nodeId>/SSM | Files containing information about ESA System Service Monitor (SSM) operations and errors |
| /home/bootloader/CArepository | All used customer adaptations |
| /home/dvecli/cache | Used by Dynamic Activation Scheduled Procedures to cache temporary results |
| /home/zooadm/nodes/<nodeId>/ZooKeeper/conf | ZooKeeper configuration files |
| /home/casadm | Home directory for the Cassandra administrator user |
| /home/casadm/nodes/<nodeId>/cassandra/conf | Cassandra configuration files |
| /home/syncuser | Files used for synchronization of license counters and configuration |
| /cluster/storage/system/config/evip-apr9010467 | Dynamic Activation eVIP configuration files |
2.2.4 Description of /usr
The folder /usr contains application software packages. See Table 7 for more specific information on the contents.
| Directory | Contents | Node or Nodes on Which the Directory Exists |
|---|---|---|
| /usr/local/esa | ESA installation | |
| /usr/local/ssm | SSM installation | |
| /usr/local/pgngn | Base directory for all Dynamic Activation modules (see Table 2), and multi-log-consolidation logs | |
2.3 Log Files
This section contains information about different log files in the system.
- Note:
- Personal data consists of information related to an identifiable person. An identifiable person is one that can be identified indirectly, for example by MSISDN, IMSI, or passport number, or directly, for example by name, date of birth, or postal address. Dynamic Activation handles personal data. In some cases that personal data can be saved in certain log and result files. Those files are automatically maintained to make sure that personal data is not retained too long. For more information, see Section 5.12.5.
Sensitive information, such as passwords, is not recorded in log files.
- Information about how the log files are rotated in the nbia-module is available in the following configuration files on each PL node:
- /usr/local/pgngn/<module>-<version>/etc/org.ops4j.pax.logging.cfg: The configuration of how module log files (including debug information, exceptions, and errors) are rotated.
- /usr/local/pgngn/<module>-<version>/config/proclog-log4j.properties: The configuration of the Processing Log.
- Information about how the log files are rotated in all other modules is available in each corresponding log4j.xml file. These XML files are located in the /usr/local/pgngn/<module>-<version>/config directory on each PL node.
As a default setting, the logs have a maximum size of 15 MB before they are rotated, and a maximum of 10 log files are stored before the oldest logs are deleted.
2.3.1 Operating System
The following log files are available on both SC nodes:
- /var/log/<SC_hostname>/messages
This is the Linux system log file. It is rotated when larger than 4096 kb.
- /var/log/<SC_hostname>/auth
This is the log file for system login attempts.
- Note:
- For a full list of messages, check both SC nodes.
2.3.2 Dynamic Activation Application Logs
Log File Sizes
All the log files introduced in this section have a maximum size of 15 MB before they are rotated, and a maximum of 10 log files are stored before the oldest logs are deleted.
When a log file is rotated, it is backed up under the same name, appended with "1", "2", "3", and so on.
For example, massiveoperations.log is rotated to massiveoperations.log.1, and then massiveoperations.log.2.
Ten rotated files can exist at the same time; when the 11th log file is rotated, it overwrites the previous log file appended with "10".
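The rotation naming scheme above can be sketched as follows. This illustrates the naming convention only; it is not the rotation implementation itself:

```python
def rotated_names(base: str, max_backups: int = 10) -> list[str]:
    """List the file names a rotated log can occupy: the active file
    plus up to max_backups numbered backup files."""
    return [base] + [f"{base}.{i}" for i in range(1, max_backups + 1)]

# The active log plus massiveoperations.log.1 ... massiveoperations.log.10;
# when an 11th rotation happens, the oldest numbered file is overwritten.
names = rotated_names("massiveoperations.log")
print(names[:3])  # → ['massiveoperations.log', 'massiveoperations.log.1', 'massiveoperations.log.2']
```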
Module Log Files
Modules in Dynamic Activation have their respective log files, which are located in /var/log/dve.
The log files use the following name convention:
<module-name>[-<log-type>].log, where:
- <module-name> indicates which module the log file belongs to.
- (Optional) <log-type> can be omitted, or one of the following, depending on what log information
is contained:
- Omitted: contains common log information.
- audit: contains failed login attempts to Dynamic Activation.
- config: contains configuration changes in the GUI, personal data, or both.
- console: contains console output of this module.
- karaf: contains special log for NBIA.
- oauth: contains GUI login attempts and management of GUI users.
For more information, see Table 8.
Log file name examples: arh-module.log, arh-module-console.log.
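The naming convention above can be illustrated with a small parser; a sketch only, assuming the convention exactly as stated, with the log-type suffixes listed above:

```python
def parse_log_name(filename: str):
    """Split a Dynamic Activation log file name into module name and
    optional log type, per the <module-name>[-<log-type>].log convention."""
    known_types = {"audit", "config", "console", "karaf", "oauth"}
    stem = filename.removesuffix(".log")
    module, sep, suffix = stem.rpartition("-")
    if sep and suffix in known_types:
        return module, suffix
    return stem, None          # common log: no log-type suffix

print(parse_log_name("arh-module-console.log"))  # → ('arh-module', 'console')
print(parse_log_name("arh-module.log"))          # → ('arh-module', None)
```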
| Module | .log | audit | config | console | karaf | oauth |
|---|---|---|---|---|---|---|
| dve-inbound-interfaces-application | Yes | Yes | No | Yes | No | No |
| nbia-module | Yes | No | No | Yes | Yes | No |
| tomcat-server | Yes | Yes | Yes | No | No | Yes |
| dve-application | Yes | Yes | Yes | Yes | No | No |
| arh-module | Yes | No | No | Yes | No | No |
| akka-cluster-service-lib | Yes | No | No | Yes | No | No |
| activation-orchestration-module | Yes | No | No | Yes | No | No |
| admin-tool | Yes | No | No | No | No | No |
| cudb-block-proxy | Yes | No | No | Yes | No | No |
| ema-faultmgmt-module | Yes | No | No | Yes | No | No |
| ema-authentication-module | Yes | Yes | No | Yes | No | No |
| lockservice-module | Yes | No | No | No | No | No |
| ema-pq-module | Yes | No | No | Yes | No | No |
| ema-batch-module | Yes | No | No | Yes | No | No |
Special Log Files on Both SC and PL Nodes
The following log file is available on both SC and PL nodes:
- /var/log/dve/consolidated.log
- Note:
- Beware that the configurations are restored to default settings when rebooting the system, or when executing the command:
$ bootloader.py node activate --host <hostname>
This file logs exceptions and errors in the application, and can also be used for debugging the application by changing the log levels in the corresponding log4j.xml or org.ops4j.pax.logging.cfg file.
The nbia-module is the only module that uses the org.ops4j.pax.logging.cfg configuration file. All other modules use the log4j.xml logging configuration file.
These files can contain personal data.
Special Log Files on PL Nodes
The following log files are available on PL nodes:
- /var/log/dve/multi-log-consolidation-console.log
This log file contains information of the multi-log-consolidation process. Under normal operating conditions, this file is empty.
- /var/log/dve/massiveoperations.log
- Note:
- This file is only applicable for UDC solutions.
This log file contains all messages sent over CLI.
This file can contain personal data.
- /var/log/dve/request/failedrequests.log
This log file contains failed requests.
- /var/log/dve/request/partiallysucceededoperations.log
This log file contains the partially successful request and the subsequent southbound operations.
If the sequence of operations was not successful, this causes inconsistency in the CUDB. Correct the inconsistency manually. For more information, refer to CUDB Subscription Repair and Remove Procedures, Reference [4].
This file can contain personal data.
- /var/log/dve/request/eir_partiallysucceededoperations.log
- Note:
- This file is only applicable for UDC solutions.
This log file contains the partially successful CAI3G request and the subsequent southbound operations.
These southbound operations relate to the logging of IMEIlogdata in the CUDB. An unsuccessful operation causes incomplete data in the CUDB, which must be corrected manually through direct southbound requests towards the CUDB. For information on how to perform such manual procedures, refer to the EIR section in CUDB Subscription Repair and Remove Procedures, Reference [4].
This file can contain personal data.
- /var/log/dve/request/successfulrequests.log
This log file contains successful requests.
This file can contain personal data.
2.3.2.1 Persistent Log Settings
When rebooting the system or activating the configuration on a node, the configuration is restored to its default settings. To avoid losing wanted settings, the log levels for the following log files can be made persistent:
- <module_name>-audit.log
- failedrequests.log
- partiallysucceededoperations.log
- eir_partiallysucceededoperations.log
- successfulrequests.log
The following log level parameters can be set:
- @LOG_LEVEL_AUTHORIZATION@ - authorization (default set to OFF)
- @LOG_LEVEL_AUTHENTICATION@ - authentication (default set to TRACE)
- @LOG_LEVEL_FAILED_REQUESTS@ - failed requests (default set to OFF)
- @LOG_LEVEL_SUCCESSFUL_REQUESTS@ - successful requests (default set to OFF)
- @LOG_LEVEL_PARTIALLY_SUCCEEDED_REQUESTS@ - partially succeeded requests (default set to INFO)
- @LOG_LEVEL_PARTIALLY_SUCCEEDED_EIR_OPERATIONS@ - partially succeeded EIR operations (default set to INFO)
The following log level values are supported and can be modified by editing the corresponding config file.
| Level | Description |
|---|---|
| TRACE | Logs all messages. This mode should not be used in production. |
| DEBUG | Logs debugging messages. This mode should not be used when PGNGN is in production. |
| INFO | Logs informational messages, similar to "verbose" mode. |
| WARN | Logs potentially harmful events and situations, while the application carries on as usual. |
| ERROR | Logs error events after which the application can still function. |
| FATAL | Logs severe error events after which the application might quit abnormally. |
| OFF | Logs nothing. |
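The levels form a severity order: setting a level enables that level and everything more severe, while OFF disables logging entirely. A sketch of the ordering, assuming standard log4j threshold semantics:

```python
# Severity order from most verbose to most severe, per the table above.
LEVELS = ["TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL", "OFF"]

def is_logged(message_level: str, configured_level: str) -> bool:
    """A message is logged when its level is at least as severe as the
    configured level; a configured level of OFF logs nothing."""
    if configured_level == "OFF":
        return False
    return LEVELS.index(message_level) >= LEVELS.index(configured_level)

print(is_logged("ERROR", "INFO"))  # → True
print(is_logged("DEBUG", "INFO"))  # → False
```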
Follow the procedure below to configure the log levels:
- From an SC node, run the following command to configure the log level setting:
$ bootloader.py config set --parameter <log_level_parameter> --value <log_level_value>
The parameters and values to use are specified above.
- From an SC node, run the following command for each PL node, one by one, to activate the change:
Attention! Wait for each PL node to be activated before starting with the next one; otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
2.3.3 Ericsson SNMP Agent
The following log files are used in ESA:
- /var/log/esa/MasterAgent.log
This log file contains log entries related to the Master Agent operations.
- /var/log/esa/FmAgent.log
This log file contains log entries related to the FM Agent operations.
- /var/log/esa/PmAgent.log
This log file contains log entries related to the PM Agent operations.
- /var/log/esa/alarms.log
This log file contains every alarm and event that has been triggered in the FM Agent; all alarm raises, alarm clears, and events are logged.
- /var/log/esa/nontranslated/<ipaddress>.log
This is the non-translated alarm log. It contains every trap and notification that has gone through the alarm translation process in the FM Agent without being successfully matched to a condition. The complete trap content is logged in a file named <ipaddress>.log, where <ipaddress> is the IP address of the alarming source.
- /usr/local/ssm/log
This directory holds the log file ssmagent.log, containing information about ESA System Service Monitor (SSM) operations and errors.
2.3.4 Processing Log File Format
The processing log records both provisioning and GUI operations in Dynamic Activation.
Processing logs can be viewed in the Log Management subtab in the Dynamic Activation GUI; see User Guide for Resource Activation, Reference [8]. Processing logs are indexed by instance (for example IMSI or MSISDN) to make it easier to find the processing logs bound to a certain subscriber.
To extract processing logs from the database, use the Processing Log Admin Tool. For detailed information, see Section 5.10.
Log files strictly follow the Comma-Separated Values (CSV) file format. CSV uses a comma to separate values, and wraps fields that contain reserved characters, such as commas or newlines, in double quote (") characters. If a field itself contains a double quote, that double quote is escaped with another double quote (") character.
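Because the files follow standard CSV quoting rules, they can be read with any CSV parser. A minimal sketch using Python's csv module; the sample row below is abbreviated and hypothetical, shortened to four fields to show the quoting and escaping:

```python
import csv
import io

# One abbreviated, hypothetical processing-log row: the last field
# contains a comma, so it is wrapped in double quotes, and an embedded
# double quote is escaped by doubling it.
sample = '"southbound","vn-1150709...","SEARCH","[LDAP: error code 32 - No Such Object, ""base DN""]"\n'

row = next(csv.reader(io.StringIO(sample)))
print(len(row))   # → 4
print(row[3])     # → [LDAP: error code 32 - No Such Object, "base DN"]
```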
Attributes not used for provisioning will not be encoded to a readable format in the processing log.
The structure of log files is described as follows:
For each log file, there is a file header describing the fields that each log entry contains. The header contains the following field names:
LogType(v1.1),RootLogId,SubLogId,TransactionId,Instance,Operation,Status,User,Hostname,Protocol,Target,StartTime,ExecuteTime,ResponseCode,FullRequest,FullResponse,ReplayLogId
| Field Name | Description | Log Level |
|---|---|---|
| LogType(v1.1) | Describes the type of log a log entry has. Possible values are northbound, southbound, and internal. | For information about the different log levels, see Table 23. |
| RootLogId | The root log identification of a log entry; it is globally unique. It is composed according to the convention <hostname><datetime><sequence number><protocol>. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| SubLogId | Identifies which part of the system recorded the processing log, for example "/1", "/1/2", "/1/2/1", and so on. Each slash stands for a different part, and the number in each part indicates how many operations have been executed. | Not relevant for the northbound log type. |
| TransactionId | The transaction identification. The TransactionId is not used in this version but is still logged for backwards compatibility towards AS; it is passed by soapOperation and logged as a string. | - |
| Instance | The instance identification and context in the message header of a provisioning request, such as msisdn=12345678:imsi=87654321:context=SH376896987. The Instance of Batch Handler also contains the batch job ID, such as batchJobId=b502bfdb-ee7e-49ae-bc9b-dc642f673f9 | - |
| Operation | The operation of a log entry, such as CREATE, SET, GET, and DELETE. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| Status | Indicates the processing result; possible values are SUCCESSFUL, FAILED, and CANCELLED. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| User | Shows specific user activities, such as which user sent a certain provisioning request, or which user performed a specific GUI operation. | Not relevant for the southbound log type. |
| Hostname | The hostname of Dynamic Activation. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| Protocol | Defines which protocol is used, such as GUI, MML, or CAI3G1_2. | Not relevant for the internal log type. |
| Target | Defines the object type that is operated on: the actual MML command in MML or the MOType in CAI3G on the northbound side, or the NE name on the southbound side. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| StartTime | Records the start time of processing. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| ExecuteTime | The total execution time of processing. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| ResponseCode | The response code of processing. | Always holds a value, irrespective of log level; applicable for log levels 1-3. Response code 0 indicates a successful response. |
| FullRequest | The detailed request, such as a complete MML or CAI3G request. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| FullResponse | The detailed response, such as a complete MML or CAI3G response. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
| ReplayLogId | Reference to the source RootLogId and SubLogId for a replay operation. | Always holds a value, irrespective of log level; applicable for log levels 1-3. |
The following is an example of a processing log file:
Example 1 Example of Processing log file
LogType(v1.1),RootLogId,SubLogId,TransactionId,Instance,Operation,Status,User,Hostname,
Protocol,Target,StartTime,ExecuteTime,ResponseCode,FullRequest,FullResponse,ReplayLogId
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1","","","SEARCH","FAILED","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.307","00 00:00:00.001","32",
"dn: serv=Identities,IMSI=264000000510112,dc=imsi,ou=identities,dc=operator,dc=com
Search Scope: OBJECT
Search Filter: (objectClass=*)
Time Limit Milliseconds : 0
Count Limit: 0
Return Object: False
DerefAliases: ALWAYS
DerefLink: False","[LDAP: error code 32 - No Such Object]",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1","","","SEARCH","FAILED","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.309","00 00:00:00.001","32",
"dn: IMSI=264000000510112,dc=imsi,ou=identities,dc=operator,dc=com
Search Scope: OBJECT
Search Filter: (objectClass=*)
Time Limit Milliseconds : 0
Count Limit: 0
Return Object: False
DerefAliases: NEVER
DerefLink: False","[LDAP: error code 32 - No Such Object]",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1","","","SEARCH","FAILED","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.311","00 00:00:00.001","32",
"dn: serv=Identities,MSISDN=4919018510112,dc=msisdn,ou=identities,dc=operator,dc=com
Search Scope: OBJECT
Search Filter: (objectClass=*)
Time Limit Milliseconds : 0
Count Limit: 0
Return Object: False
DerefAliases: ALWAYS
DerefLink: False","[LDAP: error code 32 - No Such Object]",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1","","","SEARCH","FAILED","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.313","00 00:00:00.001","32",
"dn: serv=CSPS,MSISDN=4919018510112,dc=msisdn,ou=identities,dc=operator,dc=com
Search Scope: OBJECT
Search Filter: (objectClass=*)
Time Limit Milliseconds : 0
Count Limit: 0
Return Object: False
DerefAliases: ALWAYS
DerefLink: False","[LDAP: error code 32 - No Such Object]",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1","","","SEARCH","FAILED","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.315","00 00:00:00.001","32",
"dn: MSISDN=4919018510112,dc=msisdn,ou=identities,dc=operator,dc=com
Search Scope: OBJECT
Search Filter: (objectClass=*)
Time Limit Milliseconds : 0
Count Limit: 0
Return Object: False
DerefAliases: NEVER
DerefLink: False","[LDAP: error code 32 - No Such Object]",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1","","","HESUI","SUCCESSFUL","",
"vn-1","TELNET","FT_HLR","2015-07-09 13.26.37.317","00 00:00:00.001","0",
"HESUI:IMSI=264000000510112,MSISDN=4919018510112;","HLR LA PROVISIONING PROCESS MODIFIED DATA
PPID
0
MSISDN CDC
7
NEID VALUE
1.1 919458101101
1.2 046200000150FF11
3.1 2
28.14 0
10.2 4
10.3 1
7.52 7
7.28 0
7.39 1234
END",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.319","00 00:00:00.032","0",
"dn: MSISDN=4919018510112,dc=msisdn,ou=identities,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: MSISDN
aliasedObjectName: mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com","",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.352","00 00:00:00.002","0",
"dn: mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
objectClass: CUDBMultiServiceConsumer
objectClass: top
mscId: 4622814443244bbba03ffb88f586f4f3","",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.355","00 00:00:00.012","0",
"dn: serv=Identities,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
IMSI: 264000000510112
imsiMask: '0000000000000001'B
MSISDN: 4919018510112
msisdnMask: '0000000000000001'B
CDC: 0
objectClass: top
objectClass: CUDBService
objectClass: mscIdentities","",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.370","00 00:00:00.002","0",
"dn: serv=AAA,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
serv: AAA
objectClass: top
objectClass: CUDBService","",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.374","00 00:00:00.007","0",
"dn: IMSI=264000000510112,dc=imsi,ou=identities,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: IMSI
aliasedObjectName: mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com","",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.383","00 00:00:00.007","0",
"dn: serv=CSPS,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
CDC: 7
MSISDN: 491901851011
IMSI: 26400000051011
CSLOC: 2
GSMUEFEAT: 0
DEMLPP: 4
MEMLPP: 1
SOSDCF: 7
OFA: 0
PWD: 4321
objectClass: top
objectClass: CUDBServiceAuxiliary
objectClass: CP1
objectClass: CP2
objectClass: CP3
objectClass: CP4
objectClass: CP5
objectClass: CP6
objectClass: CP7
objectClass: CP8
objectClass: CP9
objectClass: CPA
objectClass: CPB
objectClass: CPC
objectClass: CPD
objectClass: CPE
objectClass: CPF
objectClass: CPG
objectClass: CPH
objectClass: CPI
objectClass: CPJ
objectClass: CPK
objectClass: CPM
objectClass: CPM1
objectClass: CPM2
objectClass: CPM3
objectClass: CPM4
objectClass: CPZ
objectClass: CP04s
objectClass: CP0A
objectClass: CP11
NAM: 0
CAT: 10
DBSG: 1
SOCB: 0
PWDC: 0
SOCFB: 0
SOCFNRC: 0
SOCFNRY: 0
SOCFU: 0
SODCF: 0
SOCLIP: 0
SOCLIR: 0
SOCOLP: 0","",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.392","00 00:00:00.006","0",
"dn: ei=gprs,serv=CSPS,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: CUDBExtensibleObject
aliasedObjectName: PDPCP=0,ou=gprsProfiles,serv=CSPS,ou=mscCommonData,dc=operator,dc=com","",""
"southbound","vn-11507091326370003CAI3G1_2","/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","SUCCESSFUL","",
"vn-1","LDAP","FT_CUDB","2015-07-09 13.26.37.401","00 00:00:00.002","0",
"dn: ei=camel,serv=CSPS,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: CUDBExtensibleObject
aliasedObjectName: CSP=0,ou=camelProfiles,serv=CSPS,ou=mscCommonData,dc=operator,dc=com","",""
"northbound","vn-11507091326370003CAI3G1_2","","","msisdn=4919018510112:imsi=264000000510112","CREATE",
"SUCCESSFUL","cai3guser","vn-1","CAI3G1_2","Subscription@http://schemas.ericsson.com/pg/hlr/13.5/",
"2015-07-09 13.26.37.226","00 00:00:00.183","0",
"<soapenv:Envelope xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/""
xmlns:cai3=""http://schemas.ericsson.com/cai3g1.2/"" xmlns:ns=""http://schemas.ericsson.com/pg/hlr/13.5/"">
<soapenv:Header><cai3:SessionId>99901944f90f4cb492c39aeff5e392c3</cai3:SessionId>
</soapenv:Header><soapenv:Body>
<cai3:Create>
<cai3:MOType>Subscription@http://schemas.ericsson.com/pg/hlr/13.5/</cai3:MOType>
<cai3:MOId>
<ns:msisdn>4919018510112</ns:msisdn>
<ns:imsi>264000000510112</ns:imsi>
</cai3:MOId>
<cai3:MOAttributes>
<ns:CreateSubscription imsi=""264000000510112"" msisdn=""4919018510112"">
<ns:msisdn>4919018510112</ns:msisdn>
<ns:imsi>264000000510112</ns:imsi>
</ns:CreateSubscription>
</cai3:MOAttributes>
</cai3:Create>
</soapenv:Body></soapenv:Envelope>","<S:Envelope xmlns:S=""http://schemas.xmlsoap.org/soap/envelope/"">
<S:Header><SessionId xmlns=""http://schemas.ericsson.com/cai3g1.2/"">99901944f90f4cb492c39aeff5e392c3
</SessionId><TransactionId xmlns=""http://schemas.ericsson.com/cai3g1.2/""
xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xsi:nil=""true""/>
<SequenceId xmlns=""http://schemas.ericsson.com/cai3g1.2/""
xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xsi:nil=""true""/>
</S:Header><S:Body><CreateResponse xmlns=""http://schemas.ericsson.com/cai3g1.2/"">
<MOId><ns:msisdn xmlns:cai3=""http://schemas.ericsson.com/cai3g1.2/""
xmlns:ns=""http://schemas.ericsson.com/pg/hlr/13.5/""
xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/"">4919018510112</ns:msisdn>
<ns:imsi xmlns:cai3=""http://schemas.ericsson.com/cai3g1.2/""
xmlns:ns=""http://schemas.ericsson.com/pg/hlr/13.5/""
xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/"">264000000510112</ns:imsi>
</MOId></CreateResponse></S:Body></S:Envelope>",""
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.313","00 00:00:00.001","68",
"dn: MSISDN=4919018510112,dc=msisdn,ou=id
entities,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: MSISDN
aliasedObjectName: mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com","
[LDAP: error code 68 - Entry Already Exists]","vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1"
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.315","00 00:00:00.001","68",
"dn: mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
objectClass: CUDBMultiServiceConsumer
objectClass: top
mscId: 4622814443244bbba03ffb88f586f4f3","[LDAP: error code 68 - Entry Already Exists]",
"vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1/1"
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.317","00 00:00:00.001","68",
"dn: serv=Identities,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
IMSI: 264000000510112
imsiMask: '0000000000000001'B
MSISDN: 4919018510112
msisdnMask: '0000000000000001'B
CDC: 0
objectClass: top
objectClass: CUDBService
objectClass: mscIdentities","[LDAP: error code 68 - Entry Already Exists]",
"vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1/1/1"
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.318","00 00:00:00.001","68",
"dn: serv=AAA,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
serv: AAA
objectClass: top
objectClass: CUDBService","[LDAP: error code 68 - Entry Already Exists]",
"vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1/1/1/1"
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.320","00 00:00:00.001","68",
"dn: IMSI=264000000510112,dc=imsi,ou=identities,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: IMSI
aliasedObjectName: mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com","
[LDAP: error code 68 - Entry Already Exists]",
"vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1/1/1/1/1"
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.323","00 00:00:00.004","68",
"dn: serv=CSPS,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
CDC: 7
MSISDN: 491901851011
IMSI: 26400000051011
CSLOC: 2
GSMUEFEAT: 0
DEMLPP: 4
MEMLPP: 1
SOSDCF: 7
OFA: 0
PWD: 4321
objectClass: top
objectClass: CUDBServiceAuxiliary
objectClass: CP1
objectClass: CP2
objectClass: CP3
objectClass: CP4
objectClass: CP5
objectClass: CP6
objectClass: CP7
objectClass: CP8
objectClass: CP9
objectClass: CPA
objectClass: CPB
objectClass: CPC
objectClass: CPD
objectClass: CPE
objectClass: CPF
objectClass: CPG
objectClass: CPH
objectClass: CPI
objectClass: CPJ
objectClass: CPK
objectClass: CPM
objectClass: CPM1
objectClass: CPM2
objectClass: CPM3
objectClass: CPM4
objectClass: CPZ
objectClass: CP04s
objectClass: CP0A
objectClass: CP11
NAM: 0
CAT: 10
DBSG: 1
SOCB: 0
PWDC: 0
SOCFB: 0
SOCFNRC: 0
SOCFNRY: 0
SOCFU: 0
SODCF: 0
SOCLIP: 0
SOCLIR: 0
SOCOLP: 0","[LDAP: error code 68 - Entry Already Exists]",
"vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1/1/1/1/1/1"
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.328","00 00:00:00.001","68",
"dn: ei=gprs,serv=CSPS,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: CUDBExtensibleObject
aliasedObjectName: PDPCP=0,ou=gprsProfiles,serv=CSPS,ou=mscCommonData,dc=operator,
dc=com","[LDAP: error code 68 - Entry Already Exists]",
"vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1"
"southbound","vn-11507091326380001REPLAY","/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1","","","ADD","FAILED","",
"vn-1","LDAP","FT_CUDB_REPLAY","2015-07-09 13.26.38.330","00 00:00:00.001","68",
"dn: ei=camel,serv=CSPS,mscId=4622814443244bbba03ffb88f586f4f3,ou=multiSCs,dc=operator,dc=com
changetype: add
objectClass: top
objectClass: alias
objectClass: CUDBExtensibleObject
aliasedObjectName: CSP=0,ou=camelProfiles,serv=CSPS,ou=mscCommonData,dc=operator,dc=com",
"[LDAP: error code 68 - Entry Already Exists]",
"vn-11507091326370003CAI3G1_2/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1"
"northbound","vn-11507091326380001REPLAY","","","timestamp=1436441194","AUTOREPLAY","SUCCESSFUL",
"replayer","vn-1","","","2015-07-09 13.26.38.076","00 00:00:00.299","0",
"REPLAY:STARTTIME=1436441194,ENDTIME=1436441198;","Automatic Replay operation successful",""
2.3.4.1 Cassandra
Processing logs from all nodes are saved in the Cassandra cluster upon provisioning. The logs are stored as an aggregated view, viewable in the Log Management subtab of the Dynamic Activation GUI. By default, each processing log is replicated into three copies that are saved on different nodes. The algorithm that decides where the replicas are placed is handled by Cassandra, and is therefore neither controlled nor visible from a Dynamic Activation perspective.
As processing logs are stored, Cassandra analyzes the data to determine how to compress it efficiently. This means that, after some usage, Cassandra compresses both new and old processing logs more efficiently.
2.3.4.2 Flat Processing Log File
The flat processing log files are named flatlog-<moduleName>.log_<hostName>, for example flatlog-nbia.log_node, and are stored in the default directory /var/log/dve/flatlog/ on each node. Each module corresponds to a specific Log Type, see Table 11.
When a flat processing log file reaches the maximum file size, or after a time interval, the file is rotated and backed up to a CSV-formatted file named <flatlogfileName>_<date>_<time>_<sequenceNumber>.csv, for example flatlog-nbia.log_node1_20160407_015025_00001.csv. This CSV file exists until the end of its lifecycle.
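The naming convention above can be sketched in shell; the module, host, and timestamp values below are taken from the example and are illustrative only:

```shell
# Illustrative sketch of the rotated CSV naming convention:
# <flatlogfileName>_<date>_<time>_<sequenceNumber>.csv
flatlog="flatlog-nbia.log_node1"   # flat log file name (module + host)
rdate="20160407"                   # rotation date, YYYYMMDD
rtime="015025"                     # rotation time, HHMMSS
seq="00001"                        # sequence number
csv="${flatlog}_${rdate}_${rtime}_${seq}.csv"
echo "$csv"
```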
The settings for the flat processing log files can be modified in bootloader, see Section 5.12.4 for all the configurable parameters and their default values.
| Module Name | Log Type | Log File |
|---|---|---|
| nbia-module | Northbound (CAI/CAI3G/EDIFACT) and Notification (for asynchronous CAI3G requests) | <FLATLOG_PATH_NBIA>/flatlog-nbia.log_<HostName> |
| dve-inbound-interfaces-application | Northbound (MML) | <FLATLOG_PATH_DVE_INBOUND_INTERFACES_APPLICATION>/flatlog-dve-inbound-interfaces-application.log_<HostName> |
| dve-application | Southbound and Internal | <FLATLOG_PATH_DVE_APPLICATION>/flatlog-dve-application.log_<HostName> |
| tomcat-server | Northbound (GUI operation) | <FLATLOG_PATH_TOMCAT_SERVER>/flatlog-tomcat-server.log_<HostName> |
| arh-module | Internal (Asynchronous CAI3G) | <FLATLOG_PATH_ARH>/flatlog-arh.log_<HostName> |
2.4 Ports
This section includes information about the different traffic and O&M ports used in Dynamic Activation.
2.4.1 Traffic Ports
The following table contains the traffic ports.
| Port | Protocol | Interface | Description | Module | Node |
|---|---|---|---|---|---|
| 8010 | Telnet | alb_1 | MML traffic | dve-inbound-interfaces | |
| 8111 | | alb_1 | MML traffic | dve-inbound-interfaces | |
| 3010 | | alb_1 | EDIFACT traffic | nbia-module | |
| 3300 | Telnet | alb_1 | CAI traffic | nbia-module | |
| 3301 | Telnet | alb_1 | | nbia-module | |
| 8080 | HTTP/TCP | alb_1 | | nbia-module | |
| 8181 | HTTPS/TCP | alb_1 | | nbia-module | |
| 8888 | HTTP/TCP | * (1) | | nbia-module | |
| 8989 | HTTPS/TCP | * (1) | | nbia-module | |

(1) Means all interfaces.
2.4.2 Operation and Maintenance Ports
The Operation and Maintenance ports are described in Table 13:
| Port | Protocol | Interface | Description | Module |
|---|---|---|---|---|
| 22 | | * (1) | Used for secure access and data transfer | - |
| 123 | | * (1) | Used for time synchronization | - |
| 161, 162 | | * (1) | | - |
| 2828 | Telnet | bond0 | SLR CLI | akka-cluster-service-lib |
| 8009 | | * (1) | JMX interface number 2 used for internal monitoring of application deployment and status. | - |
| 8023 | Telnet (2) | alb_0 | Massive CLI | dve-application |
| 8090 | - | | Load Balancer | - |
| 8099 | | * (1) | JMX interface number 1 for runtime configuration of Blocking CUDB Before Backup only. | cudb-block-proxy |
| 8100 | | * (1) | JMX interface number 1 for runtime configuration. | dve-application |
| 8282 (3) | | alb_0 | | tomcat-server |
| 8383 | | alb_0 | | tomcat-server |
| 8904 | | * (1) | JMX interface number 1 used for internal monitoring of application deployment and status. | - |
| 8992 | Telnet | alb_0 | | dve-application |
| 8994 | | * (1) | JMX interface number 2 for runtime configuration of Blocking CUDB Before Backup only. | cudb-block-proxy |
| 8995 | | * (1) | JMX interface number 2 for runtime configuration. | dve-application |
| 9042 | CQL | * (1) | Used by applications to obtain stored data from the Cassandra Query interface. Also used by Cassandra's CQLSH client to communicate with the Cassandra database. | All modules creating or modifying processing logs. |
| 9162 | HTTP/TCP | * (1) | | - |

(1) Means all interfaces.
(2) Configuration instructions are described in Section 6.2.1. PL nodes are only reachable from the SC nodes.
(3) The HTTP traffic is automatically redirected to HTTPS on port 8383 by Tomcat.
2.5 Version Identification
This section describes how to retrieve information about the different component versions and also how to check the hardware present in the system.
Information about Customer Adaptations on Dynamic Activation is presented with the Dynamic Activation component versions.
2.5.1 Software Version
Print all the installed Dynamic Activation RPMs.
From any node, locally:
$ sudo 3ppmon version
Print all installed Dynamic Activation components, and Customer Adaptation System Properties.
From an SC node:
$ bootloader.py system version
2.5.2 BSP Version
This section describes how to retrieve information about the installed BSP version.
Log in to BSP 8100:
$ ssh -p 2024 advanced@<BSP_NBI_IP>
> show ManagedElement=1,SystemFunctions=1,SwInventory=1
Look for CXP9025735-XXXXX, where XXXXX is the installed BSP version.
Example:
SwVersion=CXP9025735-R3A05
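As a small illustration of the version format, the following shell sketch extracts the BSP version from a SwVersion line like the example above (the line content is hardcoded here; on a live system it comes from the show command):

```shell
# Hypothetical sample line, matching the example above
line="SwVersion=CXP9025735-R3A05"
# Strip the prefix up to and including the product number to get the version
version="${line#SwVersion=CXP9025735-}"
echo "$version"
```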
2.6 Users
This section gives a brief description of the users of Dynamic Activation.
2.6.1 System Users
The following are the default system users for Dynamic Activation:
- actadm - belongs to the group activation and is used for administering Dynamic Activation.
- dvecli - belongs to the group activation and is used for fetching massive result-files.
- casadm - belongs to the group activation and is used for administering the Cassandra database.
- zooadm - belongs to the group activation and owns the ZooKeeper application.
- syncuser - belongs to the group activation and is used for synchronization of license counters and configurations by SFTP.
The system users can be administered with LDE operations. For more information about LDE, see LDE Product Description, Reference [5].
System Handling
The system user actadm owns most of the processes and files used by Dynamic Activation. For management of the system, administrative users need to be created, see Create Administrative User, Section 2.6.1.1. Administrators are members of the activation group and are allowed to perform operation and maintenance of the system.
Locking Mechanism
Failed login attempts for all system users are logged in /var/log/<hostname>/auth on the SC node and in /var/log/auth on each server.
If too many sequential login attempts fail, the user is locked and an event is raised. For more information about alarms and events, refer to Event and Alarm Handling, Reference [7].
To view the failed login attempts status of a user, enter the following command as user root:
# pam_tally2 --file /home/faillog
If a user is locked, unlock the user by entering the following command as user root:
# pam_tally2 --file /home/faillog --user <user_name> --reset
The number of unsuccessful login attempts required before an event is raised can be edited in the files /etc/pam.d/common-auth and /etc/pam.d/common-account.
Change the value of the parameter deny=<deny_value>.
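As a sketch of such an edit, the following changes the deny threshold in a temporary copy of a PAM configuration line (the line content is illustrative; on a real system the files are /etc/pam.d/common-auth and /etc/pam.d/common-account, and editing them requires root privileges):

```shell
# Work on a temporary copy, never directly on /etc/pam.d in a sketch
conf=$(mktemp)
echo "auth required pam_tally2.so deny=5 onerr=fail" > "$conf"
# Lower the deny threshold from 5 to 3
sed -i 's/deny=[0-9]*/deny=3/' "$conf"
new_value=$(grep -o 'deny=[0-9]*' "$conf")
echo "$new_value"
rm -f "$conf"
```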
2.6.1.1 Create Administrative User
This section contains information on how to create non-root users for administering purposes, such as log file reading, process monitoring, managing Dynamic Activation processes, installation of modules, and more.
To create an administrative user, log in as user root on one of the SC nodes and follow the instructions in the step-list:
- Create an administrative user that belongs to the global
group activation:
# addActivationUser <administrator_username>
- Set the password and activate the new user:
# passwd <administrator_username>
2.6.1.2 Delete Administrative User
To delete an administrative user, log in as user root on one of the SC nodes and follow the instructions in the step-list:
- # userdel -r <administrator_username>
- Edit the /cluster/etc/login.allow file. Remove the following line:
<administrator_username> all
# vi /cluster/etc/login.allow
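The edit can also be done non-interactively with sed instead of vi; the sketch below works on a temporary copy (the real file is /cluster/etc/login.allow, and olduser is a hypothetical user name):

```shell
# Remove the user's entry from a copy of login.allow
allow=$(mktemp)
printf '%s\n' 'actadm all' 'olduser all' > "$allow"
sed -i '/^olduser all$/d' "$allow"   # delete the exact line for the user
remaining=$(cat "$allow")
echo "$remaining"
rm -f "$allow"
```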
2.6.2 Dynamic Activation Users
Dynamic Activation users can be administered through the GUI. For the details, refer to User Guide for Resource Activation, Reference [8].
Failed login attempts for Dynamic Activation users are logged in different files, depending on which interface the login attempt was made for, see Section 2.6.2.1.
2.6.2.1 Failed Login Attempts
- Failed login attempts through the MML, CAI, and CAI3G
interfaces, are logged in the /var/log/dve/ema-authentication-module-audit.log file, residing on the SC nodes.
Example output:
2017-03-28 11:20:12.923 [acs-service-44] INFO AuditLog - [CAI] Login attempt with username [caiuser] has failed
2017-03-28 11:23:13.270 [acs-service-45] INFO AuditLog - [MML] Login attempt with username [mmltest] has failed
2017-03-28 11:23:44.797 [acs-service-45] INFO AuditLog - [CAI3G1_2] Login attempt with username [cai3gtest] has failed
- Failed login attempts through the CLI interface are
logged in the /var/log/dve/dve-application-audit.log file, residing on the PL nodes.
Example output:
2017-03-30 08:17:00,957 [AUTHENTICATIONLOG] User [clitest] failed to authenticate
2017-03-30 08:17:00,958 [AUTHENTICATIONLOG] loginAttemptsThreshold reached, locking user clitest
- Failed login attempts through the GUI interface are
logged in the /var/log/dve/tomcat-server-audit.log file, residing on the SC nodes.
Example output:
2017-03-30 08:24:43,779 [AUTHENTICATIONLOG] User [testuser] failed authentication
2017-03-30 08:24:54,767 [AUTHENTICATIONLOG] User [testuser] is locked
In addition to failed login attempts, the log files above also contain information about user locking when the maximum number of consecutive unsuccessful login attempts is reached.
An event is sent from ESA:
- For every failed login attempt
- When a user is locked (due to threshold reached)
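The audit log formats shown above are line-oriented, so failed attempts can be counted with standard tools; the sketch below uses a temporary file with lines copied from the GUI example (the real file is /var/log/dve/tomcat-server-audit.log):

```shell
# Count failed-authentication lines in a sample audit log
log=$(mktemp)
cat > "$log" <<'EOF'
2017-03-30 08:24:43,779 [AUTHENTICATIONLOG] User [testuser] failed authentication
2017-03-30 08:24:54,767 [AUTHENTICATIONLOG] User [testuser] is locked
EOF
failed=$(grep -c 'failed authentication' "$log")
echo "$failed"
rm -f "$log"
```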
2.7 Crontab
Dynamic Activation uses the following crontab jobs for housekeeping purposes:
- Note:
- All crontab jobs are applicable for all nodes, both SC and PL, except for sync_license.sh. This crontab job is only applicable on the PL nodes.
- auto_erase.sh - handles certain
log and response files.
For more information, see Section 5.12.5.
- proclog-retain-maximum-days.sh - retains the configured maximum number of days of processing logs. It removes the (N+1):th day, to keep a steady number of days with logs in the system. If there are logs older than N+1 days, they are not removed until the next run of the script. The script is run hourly on each node, and is only executed by the actadm user.
For more information, see Section 5.12.2.
- sync_license.sh - used for
synchronization of license counters. This crontab job is only executed
by the user syncuser.
For more information, see Section 5.17.6.
To list all crontab jobs, run the following command as user root:
# crontab -l -u actadm
3 User Privileges
Certain system administration commands require superuser privileges to run. It is highly recommended to set up system administrator users as part of the wheel group, and to run such commands with sudo instead of using the root account.
As an example, the pam_tally2 command:
- (High security risk) If logged in as user root:
CL22-SC-1:~ # pam_tally2 --file /home/faillog
- (Low security risk, better choice) If logged in as a system administrator:
<System Admin>@CL22-SC-1:~> sudo pam_tally2 --file /home/faillog
For security reasons, do not use the root user for Operation and Maintenance (O&M) of Dynamic Activation. Instead, create Dynamic Activation administrators that are part of the activation group to perform O&M operations. See Section 3.1.
It is highly recommended to disable the root user and replace this account with one or more System Administrator users that are part of the wheel group. See Section 3.2.
3.1 Administrator in activation Group
The activation group defines trusted system administrators that are allowed to manage Dynamic Activation.
System administrators in the activation group have privileges to perform the following O&M:
- Use all available bootloader commands, for example, manage application modules, set bootloader configuration, and so on.
- Manage 3PPs with 3ppmon commands by using sudo.
- Run scripts, for example, syncuser/sync_ssh_keys, replay-admin-tool, housekeeping, and so on.
Unlike users in the wheel group, the activation users cannot use sudo to execute other commands that require superuser privileges, except the 3ppmon commands.
For instruction on how to create activation users, see Section 2.6.1.1.
3.2 Administrator in wheel Group
System administrators in the wheel group can use sudo to execute all commands that require superuser privileges. They have privileges to perform:
- Initial deployment and installation
- User management of Linux users, for example, create new users, set passwords, and so on.
- Manage License Keys in Sentinel.
- License Agent administration
- Reboot cluster
- Health check (due to accessing system variables)
- Operating System level management
To keep certain operation procedures consistent in the Dynamic Activation documentation, some commands that require superuser privileges are specified to be run as user root with the # prompt. These commands can instead, and preferably, be executed as a wheel group user by adding sudo before the commands.
4 Operations
This section describes system operation commands, such as starting and stopping the system, and how to perform various system checks.
- Note:
- All the commands should be used depending on the state of the system. If the system is running traffic, some of the following commands can disturb the provisioning.
4.1 Operation Commands
This section describes the following commands which are used for checking system status and performing system operations.
- 3ppmon commands – Manage 3rd party applications and services that the Dynamic Activation application processes depend on.
- bootloader.py commands – Manage the Dynamic Activation application processes (also known as modules).
- systemctl commands – Manage system background processes (also known as daemons).
- Note:
- The 3ppmon commands must be run with superuser privileges.
| Command | Description |
|---|---|
| 3ppmon startesa \| stopesa | Used to start or stop the ESA processes. |
| 3ppmon startssm \| stopssm | Used to start or stop the SSM process. |
| 3ppmon startzookeeper \| stopzookeeper | Used to start or stop the ZooKeeper service. ZooKeeper runs by default on SC1, SC2, and PL3. |
| 3ppmon startcassandra \| stopcassandra | Used to start or stop Cassandra. |
| 3ppmon startlserv \| stoplserv | Used to start or stop the Sentinel License Server. |
| 3ppmon status \| status --host all \| status -H all \| status --host <hostname> \| status -H <hostname> | Used to view the status for all nodes, or the detailed status for one specified node (including port status, deploy status, process status, and alarm status for ESA, SSM, and Sentinel). It is possible to replace the hostname with an IP address in the command. |
| 3ppmon version | Used to view the installed version of the platform and other installed RPM packages on the system. |
- Note:
- bootloader.py commands are exclusively run on SC nodes.
| Command | Description |
|---|---|
| bootloader.py system version | Used to list system product and module version information for each node in the cluster. |
| bootloader.py node activate --host <ip/hostname/all> (1) | Used to activate, or re-activate, installed modules and submodules applicable for the given host. |
| bootloader.py node status --host <ip/hostname/all> --detailed | Used to list status information for modules and submodules. |
| bootloader.py node start/stop --host <ip/hostname/all> | Used to administer Dynamic Activation applications, except for ZooKeeper, ESA, and Sentinel. |
| bootloader.py module start\|stop\|restart --name/-n <name> --host <ip/hostname/all> | Used to start, stop, or restart the process corresponding to the given module. |
| bootloader.py module add --name/-n <name> --enabled/-e <true/false> --host <ip/hostname> | Used to install a specific module. (2) |
| bootloader.py module delete --name/-n <name> --host <ip/hostname> | Used to uninstall a specific module. (2) |
| bootloader.py module update --name/-n <module tar.gz file> --host <ip/hostname> | Used to update a specific module. If it does not exist, it is added. (2) |
| bootloader.py module getversion --name/-n <module tar.gz file> --host <ip/hostname> | Used to get the version of a specific module. |
| bootloader.py submodule add --name/-n <name> --type/-t <types allowed in parent module> --parent/-p <name> --host <ip/hostname> | Used to install a specific submodule. (2) |
| bootloader.py submodule delete --name/-n <name> --parent/-p <name> --host <ip/hostname> | Used to uninstall a specific submodule. (2) |
| bootloader.py submodule update --name/-n <submodule tar.gz file> --type/-t <types allowed in parent module> --parent/-p <name> --host <ip/hostname> | Used to update a specific submodule. If it does not exist, it is added. (2) |
| bootloader.py submodule deploy --name/-n <submodule tar.gz file> --type/-t <types allowed in parent module> --parent/-p <name> --host <ip/hostname/all> | Used to deploy a specific submodule (only supported for type jdv). |
| bootloader.py submodule undeploy --name/-n <submodule tar.gz file> --parent/-p <name> --host <ip/hostname/all> | Used to undeploy a specific submodule (only supported for type jdv). |
| bootloader.py config list [--show_all/-A] | Used to list configuration information for the system. |
| bootloader.py config set --parameter/-p <parameter> --value/-v <value> | Used to override the value of a parameter with a new customized value. (2) |
| bootloader.py config remove --parameter/-p <parameter> | Used to restore the default value of a parameter. (2) |
| bootloader.py logging deploy-log --name/-n <(sub)module name> --jarfile/-j <jarfile> --level/-l <loglevel> --host <ip/hostname/all> [--outputfile/-o <log file name>] | Used to deploy custom log settings for Dynamic Activation Application submodules. |
| bootloader.py logging undeploy-log --name/-n <(sub)module name> --jarfile/-j <jarfile> --host <ip/hostname/all> | Used to undeploy custom log settings for Dynamic Activation Application submodules. (3) |
| bootloader.py logging undeploy-log --name/-n all --host all | Used to quickly undeploy all custom log settings for all JAR files/modules on all hosts. |
| bootloader.py logging deploy-proclog --name/-n <module name> --category/-c <proclog category> | Used to deploy custom proclog logging settings on a module. (4) |
| bootloader.py logging undeploy-proclog --name/-n <(sub)module name> --category/-c <proclog category> | Used to undeploy custom proclog logging settings on a module. (4) |
| bootloader.py logging list [--type/-t <proclog/log/all>] [--name/-n <(sub)module name>] | Used to list all active custom log/proclog settings on the Dynamic Activation system. |

(1) all is only used for new installations. It should not be used in any other cases.
(2) For these commands to take effect, they need to be activated. A bootloader.py node activate --host <ip/hostname/all> command is required.
(3) This command undeploys per JAR file, on one or several hosts, depending on the parameter input.
(4) This is used in Customer Adaptation (CA) development.
| Command | Description |
|---|---|
| systemctl status \| start \| stop license_agent | Used to start or stop the License Agent process, or check its status. |
| systemctl poweroff | Used to power off the GEP blade. |
4.2 Dynamic Activation Property Configuration
Dynamic Activation properties are categorized as follows:
- Site-specific properties must be adapted to the specific site (such as IP addresses), as opposed to non-site-specific properties.
- Adaptable properties can be changed by the customer at operation time, as opposed to non-adaptable properties.
This implies the following:
- The configuration of non-adaptable site-specific properties is part of the Dynamic Activation installation procedure, handled by the installation scripts.
- Configuration of adaptable properties is performed at operation time, by use of the bootloader.py command.
To configure a property with a customized value, run the following commands from an SC node:
- To list the adaptable properties:
$ bootloader.py config list -A
- Configure the selected adaptable property:
$ bootloader.py config set --parameter @<Adaptable Property>@ --value <value>
- Activate the modification for each node, one by one:
Attention! Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
To restore a property to its default (initially installed) value, run the following commands from an SC node:
- To list the adaptable properties:
$ bootloader.py config list -A
- Restore the selected adaptable property:
$ bootloader.py config remove --parameter @<Adaptable Property>@
- Activate the modification for each node, one by one:
Attention! Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
- Note:
- Repeat the command for each node affected by this change.
To check which properties that have been customized, run the following command, from an SC node:
$ bootloader.py config list
Example 2 Config List Printout
Configuration attributes for nbia-module:
No user configuration exist
Configuration attributes for admin-tool:
No user configuration exist
Configuration attributes for akka-cluster-service-lib:
No user configuration exist
Configuration attributes for tomcat-server:
No user configuration exist
Configuration attributes for ema-faultmgmt-module:
No user configuration exist
Configuration attributes for NCA-Application:
No user configuration exist
Configuration attributes for ema-authentication-module:
No user configuration exist
Configuration attributes for dve-application:
No user configuration exist
Configuration attributes for lockservice-module:
No user configuration exist
Configuration attributes for activation-orchestration-module:
No user configuration exist
Configuration attributes for dve-inbound-interfaces-application:
No user configuration exist
Configuration attributes for jdv-cudb-config-assembly:
No user configuration exist
Configuration attributes for cudb-block-proxy:
@BLOCK_PROXY_SSLCERT_STORE_PASSWORD@
Configuration attributes for arh-module:
No user configuration exist
Configuration attributes for NotificationRules:
No user configuration exist
Configuration attributes for UDC-Application:
No user configuration exist
Configuration attributes for ema-batch-module:
No user configuration exist
In the example, no module has a customized parameter; all modules have default values for their properties. A module with a customized parameter, for example @BLOCK_PROXY_SSLCERT_STORE_PASSWORD@ for the cudb-block-proxy module, would list that parameter instead of "No user configuration exist".
4.3 Dynamic Activation System Status
The 3ppmon status command is used to view the status for the 3PP processes in all nodes, or the detailed status for one specified node.
The bootloader.py node status command is used to view the status of Dynamic Activation application processes for all nodes, or the status for one specified node.
4.3.1 All Nodes Status for 3PP Processes
Execute the following command on any node to check the deployment status for all 3PPs in all nodes:
$ sudo 3ppmon status --host all
Example 3 Output for a Dynamic Activation system
Host  ESAPrcs  LicSrv  ZooKpr  Cassandra  ActvAlrm
SC-1  UP       UP      UP      UP         0
SC-2  UP       UP      UP      UP         0
PL-3  UP       -       UP      UP         0
PL-4  UP       -       -       UP         0
| Attribute | Description |
|---|---|
| Host | The hostname of the node in the LDE cluster. |
| ESAPrcs | ESA processes. UP means that all ESA processes are running, DOWN means that one or more ESA processes are not running. |
| LicSrv | Sentinel License Server. UP means that the Sentinel License Server is running and DOWN means that the server is not running. A dash (-) means that there is no license server on that node. The Sentinel License Server only runs on SC nodes. |
| ZooKpr | ZooKeeper process. UP means that ZooKeeper is running on that node and DOWN means that it is down. A dash (-) means that there is no ZooKeeper server on that node. ZooKeeper runs on SC1, SC2, and PL3 by default. |
| Cassandra | Cassandra process. UP means that the process is running and DOWN means that the process is down. One Cassandra process runs on each node. |
| ActvAlrm | Number of active alarms for the node. If there are active alarms, run the command fmactivealarms on an SC node for detailed information about the alarms. Alarms for all nodes are listed when running this command. For more information about the specific alarms, refer to Event and Alarm Handling, Reference [7]. |
- Note:
- A question mark (?) indicates unknown status.
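The status table above is easy to post-process when supervising many systems. The following sketch parses the tabular output and flags nodes that need attention; the column layout is assumed from Example 3, and the parser itself is an illustration, not part of the product.

```python
# Sketch: parse the output of `3ppmon status --host all` and flag nodes
# that have a DOWN or unknown (?) process, or active alarms.
# Assumption: columns appear in the order shown in Example 3.

def parse_3ppmon_status(text):
    """Return a list of per-node dicts from the status table."""
    columns = ["Host", "ESAPrcs", "LicSrv", "ZooKpr", "Cassandra", "ActvAlrm"]
    rows = []
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0] == "Host":
            continue  # skip blanks and the header line
        rows.append(dict(zip(columns, fields)))
    return rows

def unhealthy(rows):
    """Hosts with a DOWN or unknown process state, or active alarms."""
    bad = []
    for row in rows:
        states = [row.get(c, "?") for c in ("ESAPrcs", "LicSrv", "ZooKpr", "Cassandra")]
        if any(s in ("DOWN", "?") for s in states) or row.get("ActvAlrm", "0") != "0":
            bad.append(row["Host"])
    return bad

sample = """\
Host ESAPrcs LicSrv ZooKpr Cassandra ActvAlrm
SC-1 UP UP UP UP 0
SC-2 UP UP UP DOWN 2
PL-3 UP - UP UP 0
"""
print(unhealthy(parse_3ppmon_status(sample)))  # -> ['SC-2']
```

A dash (-) is treated as healthy, since it only means the process is not expected on that node.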
4.3.2 Single Node Status for 3PP Processes
Execute the following command to check the details of the 3PP deployment status, locally:
$ sudo 3ppmon status
Execute the following command to check the details of the 3PP deployment status on a specific, remote node:
$ sudo 3ppmon status --host <hostname>
<hostname> is the hostname of the node that is to be checked.
Example output for the Dynamic Activation system:
Example 4 Output of the 3PP deployment status details on SC1
ESA
  Master agent            UP
  Fault mgmt agent        UP
  Performance mgmt agent  UP
  SSM Sub agent           UP
License Server
  Sentinel LM server      UP
ZooKeeper
  ZooKeeper process       UP
Cassandra
  Cassandra process       UP
Active Alarms
  Number of alarms        0
  Alarm list              -
| Part of the Output | Description |
|---|---|
| ESA | Shows the status of the Ericsson SNMP processes. For more detailed information on starting or stopping the ESA processes, refer to Section 4.7.2.2. |
| License Server | Shows the status of the Sentinel License Server process. For more detailed information on starting or stopping the Sentinel License Server on the SC nodes, refer to Section 4.7.2.1. (1) |
| ZooKeeper | Shows the status of the ZooKeeper processes. For more detailed information on starting or stopping the ZooKeeper processes, refer to Section 4.7.2.3. |
| Cassandra | Shows the status of the Cassandra process. For more detailed information on starting or stopping the Cassandra process, refer to Section 4.7.2.4. |
| Active Alarms | Shows the active alarm information for the node. |

(1) Shown as unavailable (-) for PL nodes.
- Note:
- A question mark (?) indicates unknown status.
4.3.3 Status of Dynamic Activation Applications
From an SC node, execute the following command to check the status of the Dynamic Activation applications:
$ bootloader.py node status --host all
Example 5 Printout for all nodes
Host        Module                              Version  Status          Bindings
=========== =================================== ======== =============== ===========
SC-1        akka-cluster-service-lib            3.3.1    Running         OK (2/2)
SC-1        admin-tool                          3.124
SC-1        activation-orchestration-module     2.2      Running
SC-1        ema-faultmgmt-module                1.15     Running         OK (2/2)
SC-1        ema-authentication-module           1.2.3    Running
SC-1        tomcat-server                       162.23   Running (18/0)
SC-1        -> DVE-OM-GUI                       162.23
SC-1        -> Log-Backend-Rest                 162.23
SC-1        -> Log-Consolidation-Web            162.23
SC-1        -> LEH-Backend                      162.23
SC-1        -> LooseErrorHandlingWeb            162.23
SC-1        -> BootloaderRest                   162.23
SC-1        -> Queue-Management                 162.23
SC-1        -> Launchpad                        162.23
SC-1        -> OrderManagement                  162.23
SC-1        -> OrderManagement-Backend          162.23
SC-1        -> SyncSingleEntryGuiFrontend       162.23
SC-1        -> SyncSingleEntryGuiBackend        162.23
SC-1        -> replayer                         162.23
SC-1        -> inbound-interfaces-rest          162.23
SC-1        -> Scm-Web                          162.23
SC-1        -> batchhandler-backend             2.4
SC-1        -> batchhandler-frontend            2.4
SC-1        cudb-block-proxy                    2.0.4    Running
SC-1        lockservice-module                  1.7.10   Running
SC-1        ema-batch-module                    2.4      Running
SC-2        akka-cluster-service-lib            3.3.1    Running         OK (2/2)
SC-2        admin-tool                          3.124
SC-2        activation-orchestration-module     2.2      Running
SC-2        ema-faultmgmt-module                1.15     Running         OK (2/2)
SC-2        ema-authentication-module           1.2.3    Running
SC-2        tomcat-server                       162.23   Running (18/0)
SC-2        -> DVE-OM-GUI                       162.23
SC-2        -> Log-Backend-Rest                 162.23
SC-2        -> Log-Consolidation-Web            162.23
SC-2        -> LEH-Backend                      162.23
SC-2        -> LooseErrorHandlingWeb            162.23
SC-2        -> BootloaderRest                   162.23
SC-2        -> Queue-Management                 162.23
SC-2        -> Launchpad                        162.23
SC-2        -> OrderManagement                  162.23
SC-2        -> OrderManagement-Backend          162.23
SC-2        -> SyncSingleEntryGuiFrontend       162.23
SC-2        -> SyncSingleEntryGuiBackend        162.23
SC-2        -> replayer                         162.23
SC-2        -> inbound-interfaces-rest          162.23
SC-2        -> Scm-Web                          162.23
SC-2        -> batchhandler-backend             2.4
SC-2        -> batchhandler-frontend            2.4
SC-2        lockservice-module                  1.7.10   Running
SC-2        ema-batch-module                    2.4      Running
PL-3        admin-tool                          3.124
PL-3        nbia-module                         3.124    Running         OK (2/2)
PL-3        dve-application                     3.124    Running (67/0)  OK (2/2)
PL-3        -> NCA-Application                  162.23
PL-3        -> jdv-nca-assembly                 162.23
PL-3        -> UDC-Application                  162.23
PL-3        -> jdv-udc-assembly                 162.23
PL-3        -> jdv-cudb-config-assembly         162.23
PL-3        -> NotificationRules                162.23
PL-3        arh-module                          3.124    Running         OK (2/2)
PL-3        dve-inbound-interfaces-application  162.23   Running
PL-4        admin-tool                          3.124
PL-4        nbia-module                         3.124    Running         OK (2/2)
PL-4        dve-application                     3.124    Running (67/0)  OK (2/2)
PL-4        -> NCA-Application                  162.23
PL-4        -> jdv-nca-assembly                 162.23
PL-4        -> UDC-Application                  162.23
PL-4        -> jdv-udc-assembly                 162.23
PL-4        -> jdv-cudb-config-assembly         162.23
PL-4        -> NotificationRules                162.23
PL-4        arh-module                          3.124    Running         OK (2/2)
PL-4        dve-inbound-interfaces-application  162.23   Running
| Attribute | Description |
|---|---|
| Host | The host it concerns. |
| Module | The Dynamic Activation module. |
| Version | The module version. |
| Status | If the module is a process, the process status is shown: Running or Stopped. If the module contains sub-modules, the deployment status of its artefacts is shown in parentheses: the first number is the count of fully deployed artefacts, and the second is the count of artefacts that failed deployment. For example, Running (46/0) indicates that 46 artefacts are deployed and none have failed. |
| Bindings | Shows whether the services of the module are registered successfully in the SLRs: OK (bound to all SLRs), WARN (bound in at least one SLR), or NOK (not bound to any SLR). |
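The "(deployed/failed)" convention in the Status column can be checked mechanically. The following sketch splits a status string into its parts; the format is taken from the table above, and the function is an illustration, not part of the product's API.

```python
# Sketch: interpret the Status column of `bootloader.py node status`,
# e.g. "Running (18/0)" -> state plus deployed/failed artefact counters.
# Assumption: the "(deployed/failed)" format matches the table above.
import re

def parse_status(status):
    """Split a status string into state and artefact counters."""
    m = re.match(r"(Running|Stopped)(?:\s*\((\d+)/(\d+)\))?", status)
    if not m:
        return None  # not a process status string
    state, deployed, failed = m.group(1), m.group(2), m.group(3)
    return {
        "state": state,
        "deployed": int(deployed) if deployed is not None else None,
        "failed": int(failed) if failed is not None else None,
    }

print(parse_status("Running (18/0)"))
# {'state': 'Running', 'deployed': 18, 'failed': 0}
```

A module without sub-modules simply reports `deployed` and `failed` as None.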
4.3.4 Status of License Agent
From an SC node, execute the following command to check the status of the License Agent:
$ sudo -u actadm lastat
- Note:
- It can take up to one hour for the License Agent process to reload new or updated license keys. Forced reload of license keys can be achieved by restarting the License Agent process. See the License Agent command listed in Table 16.
Example 6 Printout for all nodes
Node  ZooKeeper  Sentinel   Instances License  Cores License  Valid to Run
SC-1  CONNECTED  CONNECTED  INSTALLED          INSTALLED      YES
SC-2  CONNECTED  CONNECTED  INSTALLED          INSTALLED      YES
PL-3  CONNECTED  CONNECTED  INSTALLED          INSTALLED      YES
PL-4  CONNECTED  CONNECTED  INSTALLED          INSTALLED      YES
PL-5  CONNECTED  CONNECTED  INSTALLED          INSTALLED      YES
PL-6  CONNECTED  CONNECTED  INSTALLED          INSTALLED      YES
Example 7 Licenses are missing and License Agent is not running on PL-5
Node  ZooKeeper     Sentinel      Instances License  Cores License  Valid to Run
SC-1  DISCONNECTED  DISCONNECTED  NOT INSTALLED      NOT INSTALLED  NO
SC-2  DISCONNECTED  DISCONNECTED  NOT INSTALLED      NOT INSTALLED  NO
PL-3  DISCONNECTED  DISCONNECTED  NOT INSTALLED      NOT INSTALLED  NO
PL-4  DISCONNECTED  DISCONNECTED  NOT INSTALLED      NOT INSTALLED  NO
PL-5  -             -             -                  -              -
PL-6  DISCONNECTED  DISCONNECTED  NOT INSTALLED      NOT INSTALLED  NO

Required license key [FAT1023848/1] is missing. Retry after installing the required key.
Required license key [FAT1023848/3] is missing. Retry after installing the required key.
lastat: Error occurred while reading status from one or more nodes. Make sure License Agent service is running properly on all nodes.
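A quick way to triage a lastat printout is to look at the last column of each node row. The sketch below does this; it assumes node names start with SC- or PL- as in the examples, and is an illustration only.

```python
# Sketch: scan a `lastat` printout for nodes that are not valid to run.
# Assumptions: node rows start with "SC-" or "PL-" (as in Examples 6 and 7),
# and the last field on a node row is the "Valid to Run" value.

def nodes_not_valid(text):
    """Return the node names whose 'Valid to Run' value is not YES."""
    bad = []
    for line in text.splitlines():
        fields = line.split()
        if not fields or not fields[0].startswith(("SC-", "PL-")):
            continue  # header, error text, or blank line
        if fields[-1] != "YES":
            bad.append(fields[0])
    return bad

sample = """\
Node ZooKeeper Sentinel Instances License Cores License Valid to Run
SC-1 CONNECTED CONNECTED INSTALLED INSTALLED YES
SC-2 CONNECTED CONNECTED INSTALLED INSTALLED YES
PL-5 - - - - -
PL-6 DISCONNECTED DISCONNECTED NOT INSTALLED NOT INSTALLED NO
"""
print(nodes_not_valid(sample))  # -> ['PL-5', 'PL-6']
```

An unreachable node (all dashes, like PL-5 in Example 7) is reported as not valid, which matches the lastat error message for that case.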
4.4 GEP Blade Management
To perform a controlled shutdown or startup of a blade, follow the subchapters in this section.
- Note:
- Only one node at a time can be down, for a maximum period of three hours. Otherwise consistency in the Cassandra database will be impacted.
4.4.1 Blade Power Off
To power off the blade, execute the following command on the current node:
$ sudo systemctl poweroff
4.4.2 Blade Startup
To turn on the power of a GEP blade, follow the instructions in the step-list below:
- Note:
- Due to the sudo systemctl poweroff command previously executed, it is necessary to first lock the blade and then unlock it.
- Login with the following command:
$ ssh -p 2024 advanced@<BSP_NBI_IP>
- Enter configuration mode, lock the blade, and commit the
changes:
> configure
(config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=<slot position of the GEP blade>,Blade=1,administrativeState=LOCKED
(config)> commit -s
- Unlock the blade, and commit the changes:
(config)> ManagedElement=1,Equipment=1,Shelf=0,Slot=<slot position of the GEP blade>,Blade=1,administrativeState=UNLOCKED
(config)> commit -s
4.5 Controlled Blade Reboot
Reboot only one blade at a time; rebooting two or more blades concurrently causes Cassandra errors and disturbs the traffic.
To perform a controlled reboot of a blade, execute the following command as user root, from any of the SC nodes:
# cluster reboot -n <nodeId>
4.6 Enabling and Disabling Provisioning Traffic and CAI3G Test Traffic
This section describes how to enable and disable the provisioning traffic, and CAI3G test traffic.
4.6.1 Enabling and Disabling Provisioning Traffic
It is possible to enable and disable provisioning (CAI3G, MML, CAI) traffic on Dynamic Activation. This allows maintenance to be performed on a traffic-handling PL node without interfering with live traffic on the Dynamic Activation cluster.
To disable traffic, execute the following commands:
- From an SC node, run the following command to disable
traffic in the cluster:
$ bootloader.py config set --parameter @REGISTER_SERVICES@ --value false
- From an SC node, run the following command to activate
the configuration:
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
To enable traffic, execute the following commands:
- From an SC node, run the following command to enable traffic
in the cluster:
$ bootloader.py config set --parameter @REGISTER_SERVICES@ --value true
- From an SC node, run the following command to activate
the configuration:
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
4.6.2 Enabling and Disabling CAI3G Test Traffic Ports and Provisioning Traffic
It is possible to enable and disable CAI3G test traffic over HTTP or HTTPS on separate test ports, using different port numbers; see Table 12.
The test traffic is disabled by default. For a test scenario, one of the SC nodes and the PL node to be tested with CAI3G test traffic are switched into test traffic mode.
To disable provisioning traffic and enable test traffic ports on both nodes, execute the following commands:
- Test traffic can only be handled by one PL node at a time.
Before switching a PL node into test traffic mode, run the following
command on an SC node to check the status of the Dynamic Activation
applications, to make sure that there is no other PL node running
in test mode.
$ bootloader.py node status --host all
- From an SC node, run the following command to disable
traffic on both nodes:
$ bootloader.py config set --parameter @REGISTER_SERVICES@ --value false
- Enable CAI3G test traffic:
$ bootloader.py config set --parameter @REGISTER_TEST_SERVICES@ --value true
- From an SC node, run the following commands to activate
the configurations:
$ bootloader.py node activate --host <hostname of an SC-node>
$ bootloader.py node activate --host <hostname of a PL-node>
- Run test traffic on test ports 8888 or 8989 (secure).
To enable provisioning traffic and disable CAI3G test traffic ports on the nodes, execute the following commands:
- Note:
- This is performed once the test is finished and the nodes previously set to test traffic mode are to resume live traffic.
- From an SC node, run the following commands to disable
the CAI3G test traffic mode and enable provisioning traffic:
$ bootloader.py config remove --parameter @REGISTER_TEST_SERVICES@
$ bootloader.py config remove --parameter @REGISTER_SERVICES@
- From an SC node, run the following commands to activate
the configurations:
$ bootloader.py node activate --host <hostname of an SC-node>
$ bootloader.py node activate --host <hostname of a PL-node>
4.7 Starting and Stopping
The commands for starting and stopping processes in the Dynamic Activation system are listed in the following sections.
The different parts of the application-dependent processes are listed in separate subsections.
Dynamic Activation module processes and 3PP processes are supervised by SSM. If any of the processes goes down, SSM restarts it. Processes are added to or removed from SSM automatically when the following commands are used.
- Note:
- Stopping processes by using these commands does not trigger any alarms.
4.7.1 Dynamic Activation Application Processes
Starting the Dynamic Activation Application Processes
- Note:
- Before starting the application process, ensure that the
application-dependent 3PP processes (such as Cassandra or Zookeeper)
are started first.
For instructions, see Section 4.7.2.
- Perform the following command to start the Dynamic Activation
application processes on a complete system:
$ bootloader.py node start --host all
- Perform the following command to start the Dynamic Activation
application processes on a specific node <hostname>:
$ bootloader.py node start --host <hostname>
- Verify the status of all the nodes.
To check the status for all nodes, see Section 4.3.
Stopping the Dynamic Activation Application Processes
- Perform the following command to stop the Dynamic Activation
application processes on a complete system:
$ bootloader.py node stop --host all
- Perform the following command to stop the Dynamic Activation
application processes on a specific node <hostname>:
$ bootloader.py node stop --host <hostname>
- Verify the status of all the nodes.
To check the status for all nodes, see Section 4.3.
4.7.2 Starting and Stopping 3PP Processes
This section covers how to start and stop 3PP Processes.
The commands in this chapter apply only for the node they are executed on.
- Note:
- Before stopping an application-dependent 3PP process (such
as Cassandra or Zookeeper), ensure that the Dynamic Activation application
process is stopped first.
For an instruction, see Section 4.7.1.
4.7.2.1 Sentinel
This section covers how to start and stop Sentinel RMS. Applies only for SC nodes.
Starting Sentinel RMS (License Server)
To start the license server on an SC node, execute the following command:
$ sudo 3ppmon startlserv
Stopping Sentinel RMS (License Server)
To stop the license server on an SC node, execute the following command:
$ sudo 3ppmon stoplserv
4.7.2.2 Ericsson SNMP Agent
This section covers how to start and stop Ericsson SNMP Agent (ESA) and the System Service Monitors (SSM) agent (comes with ESA).
Starting Ericsson SNMP Agent
To start ESA, execute the following command:
$ sudo 3ppmon startesa
Stopping Ericsson SNMP Agent
To stop ESA, execute the following command:
$ sudo 3ppmon stopesa
Starting SSM
To start the SSM agent, execute the following command:
$ sudo 3ppmon startssm
Stopping SSM
To stop the SSM agent, execute the following command:
$ sudo 3ppmon stopssm
4.7.2.3 ZooKeeper
This section covers how to start and stop the ZooKeeper service.
- Note:
- These operations can only be performed on a ZooKeeper node. For an overview of which nodes have ZooKeeper installed, see Section 2.
Starting ZooKeeper
To start ZooKeeper, execute the following command:
$ sudo 3ppmon startzookeeper
Stopping ZooKeeper
To stop ZooKeeper, execute the following command:
$ sudo 3ppmon stopzookeeper
4.7.2.4 Cassandra
This section covers how to start and stop the Cassandra processes.
Only one Cassandra instance may be stopped at a time. For more information, see Section 5.8.1.1.
Starting Cassandra
To start Cassandra, execute the following command:
$ sudo 3ppmon startcassandra
Stopping Cassandra
To stop Cassandra, execute the following command:
$ sudo 3ppmon stopcassandra
4.7.3 Dynamic Activation Modules
This section covers how to start and stop Dynamic Activation modules.
For the modules that can be started and stopped on the SC and PL nodes respectively, see Example 5 in Section 4.3.3.
- Note:
- The status-output shows both modules and sub-modules. Sub-modules start with ->.
Starting Dynamic Activation Modules
A module can be started on a specific node, or on all nodes. To start one or all Dynamic Activation modules, execute the following command:
$ bootloader.py module start -n <Module> --host <hostname> | all
Stopping Dynamic Activation Modules
A module can be stopped on a specific node, or on all nodes. To stop one or all Dynamic Activation modules, execute the following command:
$ bootloader.py module stop -n <Module> --host <hostname> | all
4.7.4 Increasing Startup Time-out
When starting a system with many network elements, the startup sequence may fail with a time-out.
To increase the time-out value, execute the following commands:
- From an SC node, run the following command to set the time-out value:
$ bootloader.py config set --parameter @timeout@ --value <timeout_value>
- Note:
- The <timeout_value> must be lower than four minutes.
- From an SC node, run the following command to activate
the configuration:
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
4.8 System Checks
This section describes system tools and other utility programs used during problem tracing and for maintaining and verifying the Dynamic Activation system. It also describes how to perform a health check on the system.
4.8.1 Health Check on Dynamic Activation
Health checks are automatically run during installation, update, and upgrade, and check the following:
- Node status of Cassandra
- The ownership of drbd on SC1 and SC2
- Node status of ZooKeeper
- Status of Sentinel
- The ESA cluster status
- Active ESA alarms
- Dynamic Activation status on all nodes
- The eVIP interfaces alb_0 and alb_1 are shown on all nodes
- Active cluster alarms
- Note:
- The automatic health-check tool does not check the network setup for GEP3 or GEP5, whether unclear provisioning issues exist, or the /var/log/messages file for recent error messages.
It is possible to manually run a system health check at any time.
The following health-check options are available:
| Command | Description |
|---|---|
| healthcheck.py run -n <task or group of tasks to be executed> | Executes a health-check task or a group of tasks and displays the result in the console. |
| healthcheck.py get | Lists all possible tasks or groups of tasks, along with their descriptions, that can be executed by the healthcheck.py run command. |
| healthcheck.py show -r <time> -d/--detail | Shows the results of a previous health check that was run at a specific time. If no time is specified, the latest run is shown. -d/--detail is optional and shows the result with detailed information. |
| healthcheck.py show -l/--list | Shows all previous health-check results in the system. |
The step-list below shows an example of a manually run system health-check:
- Note:
- All health-check commands need to be run as user root, and can only be executed on SC1 or SC2.
- Verify the system status:
# healthcheck.py run -n <task or group of tasks to be executed>
Example:
# healthcheck.py run -n Activation_all_Native
Output Example:
==============================================
Name: Activation_all_Native
Description: Verify native system status is OK
Result stored as: 2017-05-08_14-20-54
==============================================
activation_cassandra    PASS
activation_drbd         PASS
activation_esa_cluster  PASS
activation_esa_alarms   PASS
activation_lde_cluster  PASS
activation_zookeeper    PASS
activation_sentinel     PASS
activation_network      PASS
activation_application  PASS
activation_esa          PASS
To get a list of all possible <task or group of tasks to be executed> options, run the following command:
# healthcheck.py get
Output:
Name                    Description
======================  ========================================================================
Activation_all_Native   Verify native system status is OK
Activation_all_Virtual  Verify virtual system status is OK
activation_application  Check that all modules is up and running
activation_cassandra    Check that cassandra is up and running properly
activation_drbd         Check ownership of drbd is correct (only for native)
activation_esa          Check that esa is up and running properly
activation_esa_alarms   Check if esa alarms exist
activation_esa_cluster  Check esa cluster status is correct
activation_lde_cluster  Check lde cluster status (only for native)
activation_network      Check that the network (evip) is up and running properly (only for native)
activation_platform     Check that all 3pp's is up and running
activation_sentinel     Check that sentinel is up and running properly
activation_zookeeper    Check that zookeeper is up and running properly
- Run the following command to get detailed information
on the health-check:
# healthcheck.py show -d
Output Example:
==============================================
Name: Activation_all_Native
Description: Verify native system status is OK
Result stored as: 2017-05-08_14-20-54
==============================================
activation_lde_cluster PASS
  Description : Check lde cluster status (only for native)
  Messages    : LDE cluster verified ok
activation_esa_cluster PASS
  Description : Check esa cluster status is correct
  Messages    : Esa cluster verified ok
activation_network PASS
  Description : Check that the network (evip) is up and running properly (only for native)
  Messages    : evip interface is set up properly on all nodes
activation_esa PASS
  Description : Check that esa is up and running properly
  Messages    : Esa is up and running on all nodes
activation_cassandra PASS
  Description : Check that cassandra is up and running properly
  Messages    : Cassandra is up and running on all nodes
activation_esa_alarms PASS
  Description : Check if esa alarms exist
  Messages    : Active alarms:
activation_drbd PASS
  Description : Check ownership of drbd is correct (only for native)
  Messages    : drbd verified OK
activation_application PASS
  Description : Check that all modules is up and running
  Messages    : All modules is up and running properly on all nodes
activation_zookeeper PASS
  Description : Check that zookeeper is up and running properly
  Messages    : Zookeeper is up and running on all nodes
activation_sentinel PASS
  Description : Check that sentinel is up and running properly
  Messages    : Sentinel is up and running on all nodes
- Note:
- If any of the statuses is not shown as PASS, check the logs in the /var/log/healthcheck/ directory.
Health-check logs store the executing flow of each task. Check the corresponding log if one or more tasks failed during the health-check.
The healthcheck.log file stores general information of the health-check execution flow.
- Make sure that no unclear provisioning issues exist.
- Note:
- This step is only applicable on a running system.
Check point: Check Dynamic Activation provisioning logs for any unclear provisioning issues, for example partly executed logs, see Section 2.3.2.
- Check the /var/log/messages file for recent error messages, all nodes:
- Note:
- This step is only applicable on a running system.
# cat /var/log/messages
Check point: No error messages should exist.
- Note:
- For a full history of error messages, check the log files according to Section 2.3.1.
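When health checks are run repeatedly, the PASS/FAIL lines in the printout can be scanned with a short script instead of reading the whole report. The sketch below does this for the task-result lines shown in the examples above; it is an illustration, not part of the healthcheck.py tool.

```python
# Sketch: scan a `healthcheck.py run` printout for tasks that did not PASS.
# Assumption: result lines have the form "<task> <RESULT>" and task names
# start with "activation_", as in the example output above.

def failed_tasks(text):
    """Return the names of tasks whose result is not PASS."""
    failed = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0].startswith("activation_"):
            task, result = fields
            if result != "PASS":
                failed.append(task)
    return failed

sample = """\
activation_cassandra PASS
activation_drbd FAIL
activation_zookeeper PASS
"""
print(failed_tasks(sample))  # -> ['activation_drbd']
```

Header lines and the indented Description/Messages lines of the detailed output are ignored, since they do not match the two-field task-result pattern.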
4.8.2 Network Setup Check
This section contains information on how to check the network setup for GEP3 or GEP5.
- Login from any SC node:
# ssh -p 2024 advanced@<BSP_NBI_IP>
- Enter configure mode:
> configure
- Set up SSH value to troubleshootingMode attribute:
(config)> ManagedElement=1,DmxcFunction=1,SystemInformation=1,troubleshootingMode=ssh
- Note:
- BSP allows opening of port 22 to enable a login to the BSP node by use of SSH over the NBI network. The port is automatically closed after 5 minutes and no new SSH sessions can be opened. Ongoing SSH sessions will be kept.
- Commit changes and exit configure mode:
(config)> commit -s
(config)> end
> exit
- From any SC node login to the NBI network:
# ssh advanced@<BSP_NBI_IP>
- Login to CMX-0-26:
advanced@SC-2-2:~# ssh advanced@169.254.0.2
- Run imish to connect to zebOS:
advanced@blade_0_26:~# imish
- Login to virtual router om_cn_sp on CMX-0-26:
blade_0_26> enable
blade_0_26# login virtual-router om_cn_sp
blade_0_26> enable
- Execute the following ping commands on virtual router om_cn_sp on CMX-0-26:
VRRP:
# ping <OM_CN_SP_VRRP_GW_IP>
BFD:
# ping <OM_CN_SP_CMX_0_26_CE0_GW_IP>
# ping <OM_CN_SP_CMX_0_28_CE1_GW_IP>
Check point: Verify that the ping is successful.
- Verify VRRP in om_cn_sp.
Verify that VRRP is master on CMX-0-26 by using command:
# show vrrp
- Note:
- If using the BFD setup, only vlan1.4054 and vlan1.184 are visible.
Example printout for vlan1.1162:
Address family IPv4
VRRP Id: 20 on interface: vlan1.1162
 State: AdminUp - Master
 Virtual Router MAC Address: 00-00-5E-00-01-14
 Number of virtual ip addresses configured: 1
 Virtual IP address: 10.44.166.92 Not-owner
 Operations Primary IP address: 10.44.166.93
 Operations Master IP address: 10.44.166.93
 Priority is 253
 Preempt Hold time: 90000 msec
 Advertisement interval: 30 centi sec
 Master Advertisement interval: 30 centi sec
 Skew time: 0 centi sec
 Accept mode: FALSE
 Preempt mode: TRUE
 Multicast membership on IPv4 interface vlan1.1162: JOINED
Example printout for vlan1.4054:
Address family IPv4
VRRP Id: 40 on interface: vlan1.4054
 State: AdminUp - Master
 Virtual Router MAC Address: 00-00-5E-00-01-28
 Number of virtual ip addresses configured: 1
 Virtual IP address: 10.44.178.193 Not-owner
 Operations Primary IP address: 10.44.178.194
 Operations Master IP address: 10.44.178.194
 Priority is 253
 Preempt Hold time: 90000 msec
 Advertisement interval: 30 centi sec
 Master Advertisement interval: 30 centi sec
 Skew time: 0 centi sec
 Accept mode: FALSE
 Preempt mode: TRUE
 Multicast membership on IPv4 interface vlan1.4054: JOINED
Example printout for vlan1.184:
Address family IPv4
VRRP Id: 50 on interface: vlan1.184
 State: AdminUp - Master
 Virtual Router MAC Address: 00-00-5E-00-01-32
 Number of virtual ip addresses configured: 1
 Virtual IP address: 10.44.186.65 Not-owner
 Operations Primary IP address: 10.44.186.66
 Operations Master IP address: 10.44.186.66
 Priority is 253
 Preempt Hold time: 90000 msec
 Advertisement interval: 30 centi sec
 Master Advertisement interval: 30 centi sec
 Skew time: 0 centi sec
 Accept mode: FALSE
 Preempt mode: TRUE
 Multicast membership on IPv4 interface vlan1.184: JOINED
Check point: Verify that the printout matches the example printout above.
- Verify BFD in om_cn_sp:
# show bfd session
Example printout:
Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
6         8            IPv4         Single-Hop  Up          16:14:25  192.168.209.10/32
11        3            IPv4         Single-Hop  Up          16:14:36  192.168.100.2/32
12        2            IPv4         Single-Hop  Up          16:05:05  192.168.100.3/32
13        2            IPv4         Single-Hop  Up          16:05:05  192.168.100.4/32
Number of Sessions: 4
Check point: Verify that the printout matches the example printout above.
If BFD is used towards an external network, the following printout is also visible:

Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
5         21           IPv4         Single-Hop  Up          19:04:06  10.44.166.73/32
6         21           IPv4         Single-Hop  Up          19:04:05  10.44.166.74/32
- Verify that three OSPF neighbors are visible for VLAN
204 in om_cn_sp:
# show ip ospf neighbor
Example printout:
Neighbor ID    Pri  State         Dead Time  Address         Interface  Instance ID
192.168.208.2  252  Full/Backup   00:00:08   192.168.209.10  vlan1.30   0
192.168.208.2  252  Full/Backup   00:00:08   192.168.100.2   vlan1.204  0
192.168.100.3  0    Full/DROther  00:00:06   192.168.100.3   vlan1.204  0
192.168.100.4  0    Full/DROther  00:00:06   192.168.100.4   vlan1.204  0
Check point: All OSPF neighbors should be in state Full.
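The Full-state check can be automated against a captured printout. The sketch below scans the neighbor rows; the column layout is assumed from the example above, and the 2-Way/DROther state in the sample is a hypothetical non-Full case used only for illustration.

```python
# Sketch: verify that every neighbor row in a `show ip ospf neighbor`
# printout is in state Full, as required by the check point above.
# Assumptions: neighbor rows start with a dotted-quad Neighbor ID and
# the State value is the third column, as in the example printout.

def non_full_neighbors(text):
    """Return (neighbor_id, state) pairs for neighbors not in state Full."""
    bad = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0].count(".") == 3:
            if not fields[2].startswith("Full"):
                bad.append((fields[0], fields[2]))
    return bad

sample = """\
Neighbor ID Pri State Dead Time Address Interface Instance ID
192.168.208.2 252 Full/Backup 00:00:08 192.168.209.10 vlan1.30 0
192.168.100.3 0 2-Way/DROther 00:00:06 192.168.100.3 vlan1.204 0
"""
print(non_full_neighbors(sample))  # -> [('192.168.100.3', '2-Way/DROther')]
```

An empty result means the check point is satisfied for that printout.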
- Verify that the VIP-OAM-IP address is advertised in om_cn_sp:
# show ip route <VIP-OAM-IP>
Example:
# show ip route 10.64.26.246
Printout:
Routing entry for 10.64.26.246/32
  Known via "static", distance 130, metric 0, tag 0
  192.168.209.10, via vlan1.30
Routing entry for 10.64.26.246/32
  Known via "ospf", distance 110, metric 20, tag 0, best
  Last update 00:33:17 ago
  * 192.168.100.3, via vlan1.204
  * 192.168.100.4, via vlan1.204

Check point: Verify that the printout matches the example printout above.
- Exit virtual router om_cn_sp:
blade_0_26# exit
- Login to virtual router prov_cn_sp on CMX-0-26:
blade_0_26# login virtual-router prov_cn_sp
blade_0_26> enable
- Execute the following ping commands on virtual router prov_cn_sp on CMX-0-26:
VRRP:
# ping <PROV_CN_SP_VRRP_GW_IP>
BFD:
# ping <PROV_CN_SP_CMX_0_26_CE0_GW_IP>
# ping <PROV_CN_SP_CMX_0_28_CE1_GW_IP>
Check point: Verify that the ping is successful.
- Verify VRRP in prov_cn_sp.
Verify that VRRP is master on CMX-0-26 by using command:
# show vrrp
- Note:
- If using the BFD setup, no VLAN is shown.
Example printout:
Address family IPv4
VRRP Id: 10 on interface: vlan1.1161
 State: AdminUp - Master
 Virtual Router MAC Address: 00-00-5E-00-01-0a
 Number of virtual ip addresses configured: 1
 Virtual IP address: 10.44.166.84 Not-owner
 Operations Primary IP address: 10.44.166.85
 Operations Master IP address: 10.44.166.85
 Priority is 253
 Preempt Hold time: 90000 msec
 Advertisement interval: 30 centi sec
 Master Advertisement interval: 30 centi sec
 Skew time: 0 centi sec
 Accept mode: FALSE
 Preempt mode: TRUE
 Multicast membership on IPv4 interface vlan1.1161: JOINED
Check point: Verify that the printout matches the example printout above.
- Verify BFD in prov_cn_sp:
# show bfd session
Example printout:
Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
9         11           IPv4         Single-Hop  Up          16:33:19  192.168.208.10/32
10        1            IPv4         Single-Hop  Up          16:33:43  192.168.101.2/32
14        1            IPv4         Single-Hop  Up          16:23:23  192.168.101.3/32
15        2            IPv4         Single-Hop  Up          16:23:15  192.168.101.4/32
Number of Sessions: 4
Check point: Verify that the printout matches the example printout above.
If BFD is used towards an external network, the following printout is also visible:

Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
8         19           IPv4         Single-Hop  Up          19:13:04  10.44.166.65/32
9         19           IPv4         Single-Hop  Up          19:13:04  10.44.166.66/32
- Verify that three OSPF neighbors are visible for VLAN
104 in prov_cn_sp:
# show ip ospf neighbor
Example printout:
OSPF process 1:
Neighbor ID     Pri  State         Dead Time  Address         Interface  Instance ID
192.168.208.14  252  Full/Backup   00:00:08   192.168.208.10  vlan1.60   0
192.168.208.14  252  Full/Backup   00:00:08   192.168.101.2   vlan1.104  0
192.168.101.3   0    Full/DROther  00:00:08   192.168.101.3   vlan1.104  0
192.168.101.4   0    Full/DROther  00:00:09   192.168.101.4   vlan1.104  0
Check point: All OSPF areas should be in state Full.
- Verify that the VIP-TRAFFIC-IP address is advertised in prov_cn_sp:
# show ip route <VIP-TRAFFIC-IP>
Example:
# show ip route 10.64.26.245
Printout:
Routing entry for 10.64.26.245/32
  Known via "static", distance 130, metric 0, tag 0
    192.168.208.10, via vlan1.60
Routing entry for 10.64.26.245/32
  Known via "ospf", distance 110, metric 20, tag 0, best
  Last update 00:39:41 ago
  * 192.168.101.3, via vlan1.104
  * 192.168.101.4, via vlan1.104
Check point: Verify that the printout matches the example printout above.
- Logout from CMX-0-26:
blade_0_26# exit
blade_0_26# exit
advanced@blade_0_26:~# exit
- Login to CMX-0-28:
advanced@SC-2-2:~# ssh advanced@169.254.0.4
- Run imish to connect to zebOS:
advanced@blade_0_28:~# imish
- Login to virtual router om_cn_sp on CMX-0-28:
blade_0_28> enable
blade_0_28# login virtual-router om_cn_sp
blade_0_28> enable
- Execute the following ping commands on virtual router om_cn_sp on CMX-0-28:
VRRP:
# ping <OM_CN_SP_VRRP_GW_IP>
BFD:
# ping <OM_CN_SP_CMX_0_26_CE0_GW_IP>
# ping <OM_CN_SP_CMX_0_28_CE1_GW_IP>
Check point: Verify that the ping is successful.
- Verify VRRP in om_cn_sp.
Verify that VRRP is backup on CMX-0-28 by using command:
# show vrrp
- Note:
- If using BFD setup, only vlan1.4054 and vlan1.184 are visible.
Example printout for vlan1.1162:
Address family IPv4
VRRP Id: 20 on interface: vlan1.1162
State: AdminUp - Backup
Virtual Router MAC Address: 00-00-5E-00-01-14
Number of virtual ip addresses configured: 1
Virtual IP address: 10.44.166.92 Not-owner
Operations Primary IP address: 10.44.166.94
Operations Master IP address: 10.44.166.93
Priority is 252
Advertisement interval: 30 centi sec
Master Advertisement interval: 30 centi sec
Skew time: 0 centi sec
Accept mode: FALSE
Preempt mode: TRUE
Multicast membership on IPv4 interface vlan1.1162: JOINED
Example printout for vlan1.4054:
Address family IPv4
VRRP Id: 40 on interface: vlan1.4054
State: AdminUp - Backup
Virtual Router MAC Address: 00-00-5E-00-01-28
Number of virtual ip addresses configured: 1
Virtual IP address: 10.44.178.193 Not-owner
Operations Primary IP address: 10.44.178.195
Operations Master IP address: 10.44.178.194
Priority is 252
Advertisement interval: 30 centi sec
Master Advertisement interval: 30 centi sec
Skew time: 0 centi sec
Accept mode: FALSE
Preempt mode: TRUE
Multicast membership on IPv4 interface vlan1.4054: JOINED
Example printout for vlan1.184:
Address family IPv4
VRRP Id: 50 on interface: vlan1.184
State: AdminUp - Backup
Virtual Router MAC Address: 00-00-5E-00-01-32
Number of virtual ip addresses configured: 1
Virtual IP address: 10.44.186.65 Not-owner
Operations Primary IP address: 10.44.186.67
Operations Master IP address: 10.44.186.66
Priority is 252
Advertisement interval: 30 centi sec
Master Advertisement interval: 30 centi sec
Skew time: 0 centi sec
Accept mode: FALSE
Preempt mode: TRUE
Multicast membership on IPv4 interface vlan1.184: JOINED
Check point: Verify that the printout matches the example printout above.
- Verify BFD in om_cn_sp:
# show bfd session
Example printout:
Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
8         6            IPv4         Single-Hop  Up          16:38:12  192.168.209.9/32
14        1            IPv4         Single-Hop  Up          16:28:52  192.168.100.3/32
15        1            IPv4         Single-Hop  Up          16:28:52  192.168.100.4/32
3         11           IPv4         Single-Hop  Up          16:38:23  192.168.100.1/32
Number of Sessions: 4
Check point: Verify that the printout matches the example printout above.
If BFD is used to external network, the following printout is also visible:
Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
4         22           IPv4         Single-Hop  Up          19:13:09  10.44.166.74/32
5         22           IPv4         Single-Hop  Up          19:13:09  10.44.166.73/32
- Verify that three OSPF neighbors are visible for VLAN 204 in om_cn_sp:
# show ip ospf neighbor
Example printout:
OSPF process 1:
Neighbor ID    Pri  State         Dead Time  Address        Interface  Instance ID
192.168.208.1  253  Full/DR       00:00:07   192.168.209.9  vlan1.30   0
192.168.208.1  253  Full/DR       00:00:07   192.168.100.1  vlan1.204  0
192.168.100.3  0    Full/DROther  00:00:07   192.168.100.3  vlan1.204  0
192.168.100.4  0    Full/DROther  00:00:07   192.168.100.4  vlan1.204  0
Check point: All OSPF neighbors should be in state Full.
- Verify that the VIP-OAM-IP address is advertised in om_cn_sp:
# show ip route <VIP-OAM-IP>
Example:
# show ip route 10.64.26.246
Printout:
Routing entry for 10.64.26.246/32
  Known via "ospf", distance 110, metric 20, tag 0, best
  Last update 00:40:58 ago
  * 192.168.100.4, via vlan1.204
  * 192.168.100.3, via vlan1.204
Routing entry for 10.64.26.246/32
  Known via "static", distance 130, metric 0, tag 0
    192.168.209.9, via vlan1.30
Check point: Verify that the printout matches the example printout above.
- Exit virtual router om_cn_sp:
blade_0_28# exit
- Login to virtual-router prov_cn_sp:
blade_0_28# login virtual-router prov_cn_sp
blade_0_28> enable
- Execute the following ping commands on virtual router prov_cn_sp on CMX-0-28:
VRRP:
# ping <PROV_CN_SP_VRRP_GW_IP>
BFD:
# ping <PROV_CN_SP_CMX_0_28_CE1_GW_IP>
# ping <PROV_CN_SP_CMX_0_26_CE0_GW_IP>
Check point: Verify that the ping is successful.
- Verify VRRP in prov_cn_sp.
Verify that VRRP is backup on CMX-0-28 by using command:
# show vrrp
- Note:
- If using BFD setup, no vlan is shown.
Example printout:
Address family IPv4
VRRP Id: 10 on interface: vlan1.1161
State: AdminUp - Backup
Virtual Router MAC Address: 00-00-5E-00-01-0a
Number of virtual ip addresses configured: 1
Virtual IP address: 10.44.166.84 Not-owner
Operations Primary IP address: 10.44.166.86
Operations Master IP address: 10.44.166.85
Priority is 252
Advertisement interval: 30 centi sec
Master Advertisement interval: 30 centi sec
Skew time: 0 centi sec
Accept mode: FALSE
Preempt mode: TRUE
Multicast membership on IPv4 interface vlan1.1161: JOINED
Check point: Verify that the printout matches the example printout above.
- Verify BFD in prov_cn_sp:
# show bfd session
Example printout:
Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
1         10           IPv4         Single-Hop  Up          16:44:16  192.168.101.1/32
11        9            IPv4         Single-Hop  Up          16:43:53  192.168.208.9/32
16        2            IPv4         Single-Hop  Up          16:33:57  192.168.101.3/32
17        1            IPv4         Single-Hop  Up          16:33:48  192.168.101.4/32
Number of Sessions: 4
Check point: Verify that the printout matches the example printout above.
If BFD is used to external network, the following printout is also visible:
Sess-Idx  Remote-Disc  Lower-Layer  Sess-Type   Sess-State  UP-Time   Remote-Addr
7         20           IPv4         Single-Hop  Up          19:15:43  10.44.166.66/32
8         20           IPv4         Single-Hop  Up          19:15:43  10.44.166.65/32
- Verify that three OSPF neighbors are visible for VLAN 104 in prov_cn_sp:
# show ip ospf neighbor
Example printout:
OSPF process 1:
Neighbor ID     Pri  State         Dead Time  Address        Interface  Instance ID
192.168.208.13  253  Full/DR       00:00:07   192.168.208.9  vlan1.60   0
192.168.208.13  253  Full/DR       00:00:07   192.168.101.1  vlan1.104  0
192.168.101.3   0    Full/DROther  00:00:06   192.168.101.3  vlan1.104  0
192.168.101.4   0    Full/DROther  00:00:07   192.168.101.4  vlan1.104  0
Check point: All OSPF neighbors should be in state Full.
- Verify that the VIP-TRAFFIC-IP address is advertised in prov_cn_sp:
# show ip route <VIP-TRAFFIC-IP>
Example:
# show ip route 10.64.26.245
Printout:
Routing entry for 10.64.26.245/32
  Known via "static", distance 130, metric 0, tag 0
    192.168.208.9, via vlan1.60
Routing entry for 10.64.26.245/32
  Known via "ospf", distance 110, metric 20, tag 0, best
  Last update 00:43:33 ago
  * 192.168.101.4, via vlan1.104
  * 192.168.101.3, via vlan1.104
Check point: Verify that the printout matches the example printout above.
- Logout from CMX-0-28 and SSH on port 22:
blade_0_28# exit
blade_0_28# exit
advanced@blade_0_28:~# exit
advanced@SC-2-2:~# exit
4.8.3 Maintaining SSL Certificates
Certificates for SSL verification are always time limited. Check the validity of all installed SSL certificates.
On the first SC node (SC-1), go to /home/bootloader/ssl and execute the following command as an administrator user:
$ /usr/java/latest/bin/keytool -list -v -keystore <key_filename> -storepass <password>
If necessary, configure new certificates. Refer to Section 5.5, and Section 6.2.1 for instructions.
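When checking validity, the "until:" date printed by keytool can be turned into a days-remaining figure. The following helper is an illustrative sketch, not part of the product; the function name days_until_expiry and the date format are assumptions, and keytool date output is locale-dependent, so adapt the format string to your system:

```python
# Sketch: compute days until certificate expiry from a keytool "until:" date.
# The date format shown is an assumption; keytool output is locale-dependent.
from datetime import datetime

def days_until_expiry(until_text, today):
    # Drop a timezone token such as "CEST" if present, e.g.
    # "Thu Dec 31 23:59:59 CEST 2020" -> "Thu Dec 31 23:59:59 2020".
    parts = until_text.split()
    if len(parts) == 6:
        parts.pop(4)
    expiry = datetime.strptime(" ".join(parts), "%a %b %d %H:%M:%S %Y")
    return (expiry - today).days

print(days_until_expiry("Thu Dec 31 23:59:59 CEST 2020",
                        datetime(2020, 12, 1)))
# -> 30
```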
4.9 Changing Timezone
When changing the timezone on a fully installed EDA system, the whole cluster needs to be rebooted. This is required for the crontab jobs to regain full functionality, because crontab jobs must run on local time.
Follow the step-list:
- Edit the cluster.conf file, residing in the /cluster/etc directory, and change the timezone parameter to the new timezone:
# vi /cluster/etc/cluster.conf
- Reload the cluster config:
# cluster config -a -r
- Reboot the cluster:
# cluster reboot -a
- Note:
- During reboot, there will be traffic disturbance.
5 System Administration
This section holds information on how to administer different parts of the system.
5.1 eVIP Cluster Configuration Administration
This section describes how to edit the eVIP configuration.
5.1.1 eVIP Configuration File
When the eVIP cluster is started, it uses the data in the evip.xml file that is located in the /cluster/storage/system/config/evip-apr9010467 directory. The eVIP configuration is composed of configuration elements that can be edited through a CLI.
5.1.2 Login to the eVIP CLI
To access the eVIP CLI, run the following command from SC1:
$ telnet `/opt/vip/bin/getactivecontrol` 25190
For information on how to change parameters through the CLI, refer to the eVIP on LSB Management Guide, Reference [6].
5.1.3 Logging
The eVIP component logs events to the system log, see Section 2.3.1 for details on how to find the system log output.
5.2 Configuring ESA
This section describes the configuration in ESA.
5.2.1 Alarms
The Ericsson SNMP Agent (ESA) manages the alarm handling in Dynamic Activation. Information about the different alarms can be found in the document Event and Alarm Handling, Reference [7].
5.2.1.1 Trap Destination
The trap destination decides where ESA sends the SNMP alarms. If SNMP v2c is used, the destination is identified with IP address and port number. If SNMP v3 is used, more SNMP v3 configuration is required.
For more information, see Reference [10].
The configuration files are stored in /usr/local/esa/conf
The following file is used for configuring the trap destination - trapDestCfg.xml
The value of the attribute active specifies whether the trap destination configuration is active or not:
- yes - The trap destination definition is active.
- no - The trap destination definition is inactive.
When a trap destination configuration is inactive, ESA skips reading it and does not send any traps to that destination. This setting is useful because the configuration can be kept as is, while the traps are stopped simply by setting active to no.
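The active attribute can also be toggled in a script. The sketch below uses a simplified, hypothetical layout for the trap destination entry; the trapDest element name and its children are illustrative only, so consult the actual trapDestCfg.xml for the real structure:

```python
# Sketch: flip the "active" attribute of a trap destination entry.
# The element layout below is illustrative, not the real trapDestCfg.xml schema.
import xml.etree.ElementTree as ET

sample = """<trapDestinations>
  <trapDest active="yes">
    <ip>10.0.0.1</ip>
    <port>162</port>
  </trapDest>
</trapDestinations>"""

root = ET.fromstring(sample)
for dest in root.findall("trapDest"):
    dest.set("active", "no")   # stop sending traps, keep the config in place

print(root.find("trapDest").get("active"))
# -> no
```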
5.2.1.2 SNMP Version
The configuration files are stored in /usr/local/esa/conf
The default SNMP version is v2c.
The use of SNMPv3 with multiple agents, such as in cluster mode with a virtual IP (VIP) configuration, is not a recommended setup. Since the OSS targets the VIP, which in turn randomly targets the agents, the OSS has issues handling the different engineIDs of the different agents. If VIP is used with SNMPv3, the OSS must be able to aggregate and correlate O&M data from multiple agents as well as keep track of multiple engineIDs for a single IP (the VIP).
For more information on how to change SNMP version, see Reference [10].
5.2.2 Fault Management Agent
During runtime, the Fault Management (FM) Agent is ready to accept alarm triggers and hold active alarms and alarm data, which is made available to the OSS using SNMP. The technician working locally with the system could benefit from looking at the data captured and produced by the FM Agent.
A number of CLI commands exist that make the setup and configuration of the FM Agent easier. They are also useful when troubleshooting faults in the system by inspecting the FM Agent alarm information.
The following CLI commands are available for the FM Agent:
- fmactivealarms – To view active alarms.
- fmsendmessage – To send message for triggering an alarm or event, or clearing an alarm.
5.2.2.1 Command: View Active Alarms
This command is used for viewing the active alarms that have been triggered in the FM Agent.
- Note:
- This command must be run on an SC node.
5.2.2.1.1 Command Format
The command format:
fmactivealarms
The command does not take any arguments.
5.2.2.1.2 Command Output
When running the command on an SC node, if there are active alarms, the command output is a complete list of all active alarms. The following output example shows three active alarms.
# fmactivealarms
Example 8 List of Active Alarms
Active alarms:
!---------------------------------------------------------------
Module             : ALARMMOD
Error Code         : 11
Resource Id        : 1.1.1.2.72
Timestamp          : Wed Sep 21 09:50:03 CEST 2011
Model Description  : This is fault number 1!
Active Description : The component XXX has an error.
Event Type         : 1
Probable Cause     : 1024
Severity           : minor
Orig Source IP     : 10.1.100.88
---------------------------------------------------------------!
!---------------------------------------------------------------
Module             : ALARMMOD
Error Code         : 22
Resource Id        : 1.1.1.2.9
Timestamp          : Wed Sep 21 09:52:31 CEST 2011
Model Description  : This is fault number 2!
Active Description : The component YYY has an error.
Event Type         : 1
Probable Cause     : 1024
Severity           : minor
Orig Source IP     : 10.1.100.88
---------------------------------------------------------------!
!---------------------------------------------------------------
Module             : ALARMMOD
Error Code         : 33
Resource Id        : 1.1.1.2.21
Timestamp          : Wed Sep 21 09:54:44 CEST 2011
Model Description  : This is fault number 3!
Active Description : The component ZZZ has an error.
Event Type         : 1
Probable Cause     : 1024
Severity           : minor
Orig Source IP     : 10.1.100.88
---------------------------------------------------------------!
If there are no active alarms, the command output is only a heading:
# fmactivealarms
Active alarms:
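The fmactivealarms output follows a regular "Field : value" layout, so it can be parsed into records for scripting. The parse_active_alarms helper below is a hypothetical sketch based only on the format shown in Example 8, not a product tool:

```python
# Sketch: turn fmactivealarms output into a list of dicts,
# assuming the "Field : value" layout shown in Example 8.
def parse_active_alarms(text):
    alarms, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line == "Active alarms:":
            continue
        if line.startswith("!---") or line.startswith("---"):
            if current:
                alarms.append(current)
                current = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        alarms.append(current)
    return alarms

sample = """Active alarms:
!---------------------------------------------------------------
Module            : ALARMMOD
Error Code        : 11
Severity          : minor
---------------------------------------------------------------!
"""
print([a["Error Code"] for a in parse_active_alarms(sample)])
# -> ['11']
```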
5.2.2.2 Command: Send Message
This command is used for triggering alarms and events.
5.2.2.2.1 Command Format
The command format:
fmsendmessage <action> <moduleId> <errorCode> <resourceId> ["<activeDescr>"] [<sourceIp>]
The command attributes are the following:

| Attribute | Description |
|---|---|
| action | The type of action to trigger. |
| moduleId | The module identity of the alarm to trigger. |
| errorCode | The error code of the alarm to trigger. |
| resourceId | The resource identity of the alarming object. |
| activeDescr | The text to send in the alarm message. |
| sourceIp | The IP address to use as the alarm originating source. |
For example, to clear an alarm with Dynamic Activation as module, error code 4001, and resource ID .1.1.1.1.2.2.192.168.0.3, issue the following command:
From originating node (the node that triggered the alarm):
# fmsendmessage -c PG 4001 .1.1.1.1.2.2.192.168.0.3 "manually cleared"
From any node:
# fmsendmessage -c PG 4001 .1.1.1.1.2.2.192.168.0.3 "manually cleared" 192.168.0.3
5.2.2.2.2 Command Output
If the specified alarm is successfully triggered, the command does not give any output.
If the specified alarm could not be triggered, the command returns an error message.
5.2.3 CPU Load Monitoring
Ericsson SNMP Agent (ESA) is also used for monitoring the CPU load. The Performance Management Agent (PMA) module in ESA can be used for this purpose.
Every five minutes, PMA saves a 3GPP XML file containing the CPU load for each core on each node. Saved files older than one day are automatically removed.
The configuration files are stored in the /usr/local/esa/conf directory.
All 3GPP files are stored in the /var/log/esa/pm3gppXml directory. The files can be fetched through SFTP.
For an example of a 3GPP file, see Example 9.
Example 9 3GPP XML File
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<measCollecFile xmlns="http://www.3gpp.org/ftp/specs/archive/32_series/32.435$measCollec">
  <fileHeader fileFormatVersion="3GPP_PM_32.435_XML_v11.0.0">
    <fileSender/>
    <measCollec beginTime="2014-01-06T19:45:00-05:00"/>
  </fileHeader>
  <measData>
    <managedElement/>
    <measInfo>
      <job jobId="CPULoadCounterNode3"/>
      <granPeriod duration="PT300S" endTime="2014-01-06T19:50:00-05:00"/>
      <repPeriod duration="PT300S"/>
      <measType p="1">CPULoadCounterNode3Core8.max</measType>
      <measType p="2">CPULoadCounterNode3Core5.min</measType>
      <measType p="3">CPULoadCounterNode3Core12.max</measType>
      <measType p="4">CPULoadCounterNode3Core9.actual</measType>
      <measType p="5">CPULoadCounterNode3Core12.actual</measType>
      <measType p="6">CPULoadCounterNode3Core1.actual</measType>
      <measType p="7">CPULoadCounterNode3Core12.min</measType>
      <measType p="8">CPULoadCounterNode3Core7.max</measType>
      <measType p="9">CPULoadCounterNode3Core5.max</measType>
      <measType p="10">CPULoadCounterNode3Core9.max</measType>
      <measType p="11">CPULoadCounterNode3Core9.min</measType>
      <measType p="12">CPULoadCounterNode3Core4.max</measType>
      <measType p="13">CPULoadCounterNode3Core10.max</measType>
      <measType p="14">CPULoadCounterNode3Core6.max</measType>
      <measType p="15">CPULoadCounterNode3Core3.min</measType>
      <measType p="16">CPULoadCounterNode3Core11.min</measType>
      <measType p="17">CPULoadCounterNode3Core6.min</measType>
      <measType p="18">CPULoadCounterNode3Core5.actual</measType>
      <measType p="19">CPULoadCounterNode3Core8.min</measType>
      <measType p="20">CPULoadCounterNode3Core7.actual</measType>
      <measType p="21">CPULoadCounterNode3Core7.min</measType>
      <measType p="22">CPULoadCounterNode3Core11.max</measType>
      <measType p="23">CPULoadCounterNode3Core6.actual</measType>
      <measType p="24">CPULoadCounterNode3Core2.min</measType>
      <measType p="25">CPULoadCounterNode3Core4.actual</measType>
      <measType p="26">CPULoadCounterNode3Core11.actual</measType>
      <measType p="27">CPULoadCounterNode3Core2.max</measType>
      <measType p="28">CPULoadCounterNode3Core4.min</measType>
      <measType p="29">CPULoadCounterNode3Core10.actual</measType>
      <measType p="30">CPULoadCounterNode3Core2.actual</measType>
      <measType p="31">CPULoadCounterNode3Core1.max</measType>
      <measType p="32">CPULoadCounterNode3Core8.actual</measType>
      <measType p="33">CPULoadCounterNode3Core10.min</measType>
      <measType p="34">CPULoadCounterNode3Core3.actual</measType>
      <measType p="35">CPULoadCounterNode3Core1.min</measType>
      <measType p="36">CPULoadCounterNode3Core3.max</measType>
      <measValue measObjLdn="">
        <r p="1">0</r> <r p="2">0</r> <r p="3">0</r> <r p="4">0</r> <r p="5">0</r> <r p="6">0</r>
        <r p="7">0</r> <r p="8">0</r> <r p="9">1</r> <r p="10">0</r> <r p="11">0</r> <r p="12">0</r>
        <r p="13">0</r> <r p="14">1</r> <r p="15">0</r> <r p="16">0</r> <r p="17">0</r> <r p="18">0</r>
        <r p="19">0</r> <r p="20">0</r> <r p="21">0</r> <r p="22">0</r> <r p="23">0</r> <r p="24">0</r>
        <r p="25">0</r> <r p="26">0</r> <r p="27">0</r> <r p="28">0</r> <r p="29">0</r> <r p="30">0</r>
        <r p="31">0</r> <r p="32">0</r> <r p="33">0</r> <r p="34">0</r> <r p="35">0</r> <r p="36">0</r>
      </measValue>
    </measInfo>
  </measData>
  <fileFooter>
    <measCollec endTime="2014-01-06T19:50:00-05:00"/>
  </fileFooter>
</measCollecFile>
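A 3GPP PM file of this kind links each counter name to its value through the p index shared by <measType> and <r>. The following sketch resolves that mapping with the Python standard library; it is an illustration of the file format, not a product tool, and read_counters is a hypothetical helper name:

```python
# Sketch: map counter names to values in a 3GPP PM XML file,
# joining <measType> names and <r> values on their shared "p" index.
import xml.etree.ElementTree as ET

NS = {"pm": "http://www.3gpp.org/ftp/specs/archive/32_series/32.435$measCollec"}

def read_counters(xml_text):
    root = ET.fromstring(xml_text)
    counters = {}
    for info in root.iter("{%s}measInfo" % NS["pm"]):
        names = {t.get("p"): t.text for t in info.findall("pm:measType", NS)}
        for value in info.findall("pm:measValue/pm:r", NS):
            counters[names[value.get("p")]] = int(value.text)
    return counters

sample = """<measCollecFile xmlns="http://www.3gpp.org/ftp/specs/archive/32_series/32.435$measCollec">
  <measData>
    <measInfo>
      <measType p="1">CPULoadCounterNode3Core8.max</measType>
      <measType p="2">CPULoadCounterNode3Core5.min</measType>
      <measValue measObjLdn="">
        <r p="1">0</r>
        <r p="2">1</r>
      </measValue>
    </measInfo>
  </measData>
</measCollecFile>"""

print(read_counters(sample))
# -> {'CPULoadCounterNode3Core8.max': 0, 'CPULoadCounterNode3Core5.min': 1}
```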
The following is to be performed on all nodes.
Configuration of PMA:
- Edit the /usr/local/esa/conf/mainCfg.xml file with the following command:
$ sudo -u actadm vi /usr/local/esa/conf/mainCfg.xml
To configure:
- The output directory.
Configure the pm3gppXml node to appropriate output directory. For example:
<output>
  <pm3gppXml>/var/log/esa/pm3gppXml</pm3gppXml>
</output>
- The time interval in days of how long the 3GPP files are saved.
Configure the fileRetentionPeriod node. For example:
<pm>
  ...
  <fileRetentionPeriod>1</fileRetentionPeriod>
  ...
</pm>
- The time interval in minutes of how often the system scans for old 3GPP files.
Configure the fileRetentionScan node. For example:
<pm>
  ...
  <fileRetentionScan>30</fileRetentionScan>
  ...
</pm>
- Edit the /usr/local/esa/conf/pmCounters/CPUloadCounterNode<nodeId>.xml file with the following command:
$ sudo -u actadm vi /usr/local/esa/conf/pmCounters/CPUloadCounterNode<nodeId>.xml
To configure:
- The time interval in seconds of how often the system reads the data.
Configure the interval attribute in all dataSource nodes. For example:
<dataSource interval="60">
The allowed values are 10, 15, 20, 30, 60, 120, 300, 600, 900, 1200, 1800, and 3600 seconds. For more details, see ESA SAG Performance Management.
- Edit the /usr/local/esa/conf/pmJobs/CPUloadJobNode<nodeid>.xml file with the following command:
$ sudo -u actadm vi /usr/local/esa/conf/pmJobs/CPUloadJobNode<nodeId>.xml
To configure:
- The time interval in minutes of how long the system collects data before a 3GPP file is written to file.
Configure the <defGranularityPeriod> node. For example:
<defGranularityPeriod>5</defGranularityPeriod>
The allowed values are 5, 15, 30, and 60 minutes. For more details, see ESA SAG Performance Management.
For the new configuration to take effect, restart ESA:
$ sudo 3ppmon stopesa
$ sudo 3ppmon startesa
5.2.4 Configuring Internal Heartbeat Monitoring
The PMA module in ESA is used for monitoring the internal heartbeat. Two PMAs are running, one on each SC node. They monitor all nodes and define counters and thresholds for how many SC nodes and PL nodes, respectively, are considered alive.
Counters are fetched by the PMA, by calling an external script, /opt/dve/bin/getNodesNumber.sh
Default interval is 60 seconds. Intervals allowed are 10, 15, 20, 30, 60, 120, 300, 600, 900, 1200, 1800, 3600 seconds.
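A value can be checked against the allowed set listed above before the file is edited. This is a minimal sketch; check_interval is a hypothetical helper name, not a product command:

```python
# Sketch: validate a PMA dataSource interval against the allowed values
# listed above before editing the configuration file.
ALLOWED_INTERVALS = {10, 15, 20, 30, 60, 120, 300, 600, 900, 1200, 1800, 3600}

def check_interval(seconds):
    if seconds not in ALLOWED_INTERVALS:
        raise ValueError("interval %d s not allowed; pick one of %s"
                         % (seconds, sorted(ALLOWED_INTERVALS)))
    return seconds

print(check_interval(120))
# -> 120
```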
Configuration of heartbeat interval:
- Note:
- Both steps are performed on both SC nodes.
- Edit the /usr/local/esa/conf/pmCounters/pmNodeCounter.xml file with the following command:
$ sudo -u actadm vi /usr/local/esa/conf/pmCounters/pmNodeCounter.xml
Configure the dataSource interval attribute. For example:
<dataSource interval="120">
- For the new configuration to take effect, restart ESA:
$ sudo 3ppmon stopesa
$ sudo 3ppmon startesa
If one of the PL or SC nodes goes down, an event is sent indicating which particular node that is not responding.
If a counter breaches the falling threshold level, for example the counter for PL nodes, the active ESA administrator (master) raises an alarm. Once the counter breaches the rising threshold level, the active ESA administrator (master) ceases the alarm. A threshold is also defined for the SC node counter: an alarm is raised if one SC node goes down, and ceased once the counter breaches the rising threshold level. The default counter thresholds are defined as follows:
| CounterId | Falling Threshold | Rising Threshold | Description |
|---|---|---|---|
| CONTROLNODECOUNTER | 1 | 2 | Alarm raised upon threshold breach (< 2). |
| PAYLOADNODECOUNTER | Half of total | >Half of total | Rounded upwards on odd count (4/7). Alarm raised upon threshold breach, for example <= 4/7. |
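The default payload rule can be expressed as a short calculation: the falling threshold is half the PL node count rounded up on odd counts (4 for 7 nodes), and the rising threshold is the next integer above it. The rise = fall + 1 step is inferred from the example threshold configuration (rise 2, fall 1), so treat it as an assumption rather than a documented rule:

```python
# Sketch of the default PAYLOADNODECOUNTER threshold rule: fall is half
# the PL node count rounded up; rise = fall + 1 is an inference from the
# example configuration (rise 2, fall 1), not a documented rule.
import math

def payload_thresholds(pl_nodes):
    fall = math.ceil(pl_nodes / 2)
    rise = fall + 1
    return fall, rise

print(payload_thresholds(7))
# -> (4, 5)
```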
Configuration of thresholds:
- Note:
- Both steps are performed on both SC nodes.
- Edit the /usr/local/esa/conf/pmThresholds/pmNodeThreshold.xml file with the following command:
$ sudo -u actadm vi /usr/local/esa/conf/pmThresholds/pmNodeThreshold.xml
Configure the default value for rise and fall for PAYLOADCOUNTER
For example:
<threshold active="yes">
  <counter>
    <groupId>CountersForMonitoringOfNodes</groupId>
    <counterId>PAYLOADNODECOUNTER</counterId>
  </counter>
  <type>
    <level invert="yes">
      <rise>2</rise>
      <fall>1</fall>
    </level>
  </type>
  <output type="alarm">
    <moduleId>PERFORMANCE</moduleId>
    <errorCode>6002</errorCode>
    <resourceId>.1.1.4.2</resourceId>
    <text>Span threshold breached.</text>
    <origSourceIp>10.61.235.10</origSourceIp>
  </output>
</threshold>
- Note:
- The <origSourceIp> attribute in the above example is a VIP address. The reason for this is that the alarm must be unique in case the raising of an alarm and the ceasing of the same alarm is handled by different PMAs. Appending a VIP address to a <resourceId> is not applicable since the counters are global for the whole cluster, not just for a node. For more details about the address and node name, check the event.
- If extra PL nodes are added to the Dynamic Activation cluster, the default threshold values can be updated by the following command:
$ sudo -u actadm /opt/dve/bin/updatePmThreshold.sh
- Note:
- The threshold definition file, /usr/local/esa/conf/pmThresholds/pmNodeThreshold.xml, is overwritten.
Each counter is published through the ERICSSON-ESA-PM-MIB and can be fetched through SNMP Get operations from OSS-RC.
5.3 Firewall
This section describes the basic firewall operations and settings.
5.3.1 Activating and Deactivating
- Note:
- The firewall is activated by default on all nodes. The firewall applies to all external interfaces parsed from cluster.conf and is automatically appended to iptables-rules.cfg
To activate the firewall on a node, enter the following command as user root:
# /etc/init.d/firewall start
To deactivate the firewall, enter the following command as user root:
# /etc/init.d/firewall stop
5.3.2 Custom Iptables Rules
Custom iptables rules are supplied by /home/actadm/config/iptables-rules-custom.cfg and are automatically appended to the existing iptables rules upon activation of the firewall.
Refer to /home/actadm/config/iptables-rules-custom.cfg.example
The following example shows how to add custom iptables rules.
As user root:
# /etc/init.d/firewall stop
# sudo -u actadm cp /home/actadm/config/iptables-rules-custom.cfg.example /home/actadm/config/iptables-rules-custom.cfg
Edit /home/actadm/config/iptables-rules-custom.cfg accordingly.
# /etc/init.d/firewall start
The following printout is displayed:
Starting iptables firewall
Appending custom iptables
5.4 DiffServ
The DiffServ framework defined by the IETF is used to implement a set of forwarding treatments and communicate the classification of packets between UDC nodes and backbone/site infrastructure.
Login as user root on appropriate nodes and perform the following:
- To activate DiffServ and DSCP, issue the following commands from the appropriate nodes:
- Default DiffServ Code Point is 16 (CS2):
# iptables -t mangle -A OUTPUT -j DSCP --set-dscp 16
- Default DiffServ Code Point for LDAP traffic towards CUDB is 40 (CS5):
# iptables -t mangle -A OUTPUT -p tcp --dport 389 -j DSCP --set-dscp 40
- Default DiffServ Code Point for SNMP traps is 32 (CS4):
# iptables -t mangle -A OUTPUT -p udp --dport 162 -j DSCP --set-dscp 32
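The class-selector code points used above follow the standard DiffServ rule: CSn places the class number n in the top three bits of the 6-bit DSCP field, so its numeric value is n shifted left by three. A one-line illustration:

```python
# DiffServ class selectors: CSn = n << 3, hence CS2 = 16, CS4 = 32, CS5 = 40.
def class_selector(n):
    return n << 3

for n in (2, 4, 5):
    print("CS%d = %d" % (n, class_selector(n)))
# -> CS2 = 16, CS4 = 32, CS5 = 40
```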
To make these rules persistent, do the following:
- Edit the /home/actadm/config/iptables-rules-custom.cfg file:
# sudo -u actadm vi /home/actadm/config/iptables-rules-custom.cfg
If the file does not exist, copy /home/actadm/config/iptables-rules-custom.cfg.example as /home/actadm/config/iptables-rules-custom.cfg and add the following lines between *mangle and COMMIT:
-A OUTPUT -j DSCP --set-dscp 16
-A OUTPUT -p tcp -m tcp --dport 389 -j DSCP --set-dscp 40
-A OUTPUT -p udp -m udp --dport 162 -j DSCP --set-dscp 32
- Save the file.
- Restart the firewall, as user root:
# /etc/init.d/firewall stop
# /etc/init.d/firewall start
5.5 Configuring SSL
The SSL administrative procedures described in this section are designated for SSL communication through the following external interfaces:
- tomcat-server (port 8383)
- cudb-block-proxy (ports 8099,8994)
- NBIA/CAI3G (port 8181)
- NBIA/CAI3G (test port 8989)
- CAI (port 3301)
- CLI (port 8992)
5.5.1 General Information
The following subsections provide a recommended way to configure SSL for CAI3G (HTTPS/SOAP), CAI (Telnet), CUDB Block Proxy (JMX over RMI), and the Dynamic Activation GUI (HTTPS).
Since Public Key Infrastructure (PKI) and SSL setups vary from one customer installation to another, this instruction is to be seen as a verified guide on how it can be done.
5.5.2 Remarks and Deviations
The following section contains information about critical points, deviations, and remarks regarding the different SSL configurations.
5.5.2.1 Tomcat-server Module
Only one private key and certificate pair is verified for this instruction.
5.5.2.2 Cudb-block-proxy Module
Only one private key and certificate pair is verified for this instruction.
5.5.2.3 NBIA Module
Only one private key and certificate pair is verified for this instruction.
5.5.3 Configuration Instruction
At a high level, a complete SSL configuration flow contains the following:
- Prerequisite configuration:
- Create keystore file.
- Create a Certificate Signing Request (CSR).
- Get Certificate Authority (CA) certificate and signed server certificate from CA organization.
- Import CA certificate and then the signed server certificate into the keystore file.
- Populate the system with the keystore files and SSL settings.
- Activate the SSL settings.
- Note:
- Two keystore files are maintained: the traffic keystore for the NBIA module, and the OAM keystore for the other modules.
Because several modules use legacy SSL configuration parameters without the keyalias parameter, only one key and certificate pair can be used for each module.
- Module/protocol specific configuration for every protocol:
- NBIA module (port 8181), NBIA module (test port 8989)
- tomcat-server module (port 8383)
- cudb-block-proxy module (ports 8099,8994)
- CLI module (port 8992), see Section 6.2.1.
- Note:
- The individual module/protocol configuration items are independent of each other.
A node-by-node activation is required before an individual configuration takes effect in the system. Therefore, if multiple modules require SSL communication, it is recommended to configure the items for all those modules first and then perform the activation and verification.
5.5.3.1 Prerequisite Configuration for NBIA Module
- Generate a new private key and add it to the pgexternal_trf_keystore.jks keystore.
The new key is to be used for all external communication.
- Note:
- The following is an example using example-specific values, such as -dname. Choose your own parameters according to the real situation.
$ sudo -u actadm /usr/java/latest/bin/keytool -genkey -keyalg RSA -alias pgexternaltrf -keystore /home/bootloader/ssl/pgexternal_trf_keystore.jks -validity 3600 -keysize 2048 -dname 'CN=eg8-vip-trf.ete.ka.sw.ericsson.se, OU=IT, O=Ericsson, L=Unknown, ST=Stockholm, C=SE'
When prompted for a password, set the keystore password.
- Generate a CSR.
$ sudo -u actadm /usr/java/latest/bin/keytool -keystore /home/bootloader/ssl/pgexternal_trf_keystore.jks -certreq -alias pgexternaltrf -keyalg RSA -file /home/bootloader/ssl/pgexternaltrf.csr
- Verify the creation of the CSR.
$ cat /home/bootloader/ssl/pgexternaltrf.csr
Output:
-----BEGIN NEW CERTIFICATE REQUEST-----
/.../
-----END NEW CERTIFICATE REQUEST-----
- Send the CSR to your Certificate Authority (CA). Your CA sends back a number of intermediate certificate and signed server certificate files.
- Note:
- The number of certificates differs depending on the CA vendor.
A certificate file looks like the following:
-----BEGIN CERTIFICATE----- [encoded data] -----END CERTIFICATE----- - Save all those files received from CA to /home/bootloader/ssl/.
- Use the keytool to import the
certificate files into the keystore, one by one.
- Note:
- The following example imports three certificates from CA.
- Import Root CA certificate file verisignRoot.crt:
$ sudo -u actadm /usr/java/latest/bin/keytool -import -keystore /home/bootloader/ssl/pgexternal_trf_keystore.jks -file /home/bootloader/ssl/verisignRoot.crt -alias verisignrootca
Enter keystore password:
Certificate was added to keystore
- Import the intermediate CA certificate (or certificates), in this example the file intermediate.crt:
$ sudo -u actadm /usr/java/latest/bin/keytool -import -keystore /home/bootloader/ssl/pgexternal_trf_keystore.jks -file /home/bootloader/ssl/intermediate.crt -alias intermediateca
Enter keystore password:
Certificate was added to keystore
- Import the server certificate file pgexternaltrf.crt:
$ sudo -u actadm /usr/java/latest/bin/keytool -importcert -keystore /home/bootloader/ssl/pgexternal_trf_keystore.jks -file /home/bootloader/ssl/pgexternaltrf.crt -alias pgexternaltrf
Enter keystore password:
Certificate was added to keystore
- Note:
- If the following error is shown:
keytool error: java.security.cert.CertificateException: java.io.IOException: Incomplete data
To solve the problem, change the certificate file from .crt to a .der file and then import. For example:
$ sudo -u actadm openssl x509 -outform der -in pgexternaltrf.crt -out pgexternaltrf.der
$ sudo -u actadm /usr/java/latest/bin/keytool -importcert -keystore /home/bootloader/ssl/pgexternal_trf_keystore.jks -file /home/bootloader/ssl/pgexternaltrf.der -alias pgexternaltrf
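The full keystore workflow above (generate the key, create the CSR, import the CA reply chain in order) can be condensed into one script. The following is a dry-run sketch: it only prints the keytool commands it would run, and the paths, aliases, and file names are this section's example values, not fixed product names.

```shell
#!/bin/sh
# Dry-run sketch of the NBIA keystore workflow. The run() helper prints
# each command instead of executing it; drop the echo for real use.
KEYTOOL=/usr/java/latest/bin/keytool
KEYSTORE=/home/bootloader/ssl/pgexternal_trf_keystore.jks
ALIAS=pgexternaltrf
CSR=/home/bootloader/ssl/$ALIAS.csr

run() { echo "sudo -u actadm $*"; }

OUT=$(
    # 1. Generate the private key (add your own -dname values here).
    run "$KEYTOOL" -genkey -keyalg RSA -alias "$ALIAS" -keystore "$KEYSTORE" \
        -validity 3600 -keysize 2048
    # 2. Create the CSR to send to the CA.
    run "$KEYTOOL" -keystore "$KEYSTORE" -certreq -alias "$ALIAS" -keyalg RSA -file "$CSR"
    # 3. Import the CA reply chain: root first, then intermediates, then the
    #    server certificate. Aliases here are illustrative.
    for cert in verisignRoot.crt intermediate.crt $ALIAS.crt; do
        run "$KEYTOOL" -importcert -keystore "$KEYSTORE" \
            -file "/home/bootloader/ssl/$cert" -alias "${cert%.crt}"
    done
)
echo "$OUT"
```

Review the printed commands, then run them one at a time as described in the steps above.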
5.5.3.2 Prerequisite Configuration for Other Modules
For modules other than the NBIA module, the prerequisite configuration is the same as for NBIA, except that those modules use the keystore pgexternal_oam_keystore.jks.
An example follows. The commands differ only in using the "oam" keystore and certificate files (the NBIA module uses "trf" instead). For more details, see Section 5.5.3.1.
- $ sudo -u actadm /usr/java/latest/bin/keytool -genkey -keyalg RSA -alias pgexternaloam -keystore /home/bootloader/ssl/pgexternal_oam_keystore.jks -validity 3600 -keysize 2048 -dname 'CN=eg8-vip-oam.ete.ka.sw.ericsson.se, OU=IT, O=Ericsson, L=Unknown, ST=Stockholm, C=SE'
When prompted for a password, set the keystore password.
- $ sudo -u actadm /usr/java/latest/bin/keytool -keystore /home/bootloader/ssl/pgexternal_oam_keystore.jks -certreq -alias pgexternaloam -keyalg RSA -file /home/bootloader/ssl/pgexternaloam.csr
- $ cat /home/bootloader/ssl/pgexternaloam.csr
- Send the CSR to your CA and get certificate files back.
In this example, there are three files:
- Root CA certificate: verisignRoot.crt
- Intermediate certificate: intermediate.crt
- Service certificate: pgexternaloam.crt
- Save the certificate files to /home/bootloader/ssl/.
- Import those certificate files to the keystore one by
one.
- $ sudo -u actadm /usr/java/latest/bin/keytool -import -keystore /home/bootloader/ssl/pgexternal_oam_keystore.jks -file /home/bootloader/ssl/verisignRoot.crt -alias verisignrootca
Enter keystore password:
Certificate was added to keystore
- $ sudo -u actadm /usr/java/latest/bin/keytool -import -keystore /home/bootloader/ssl/pgexternal_oam_keystore.jks -file /home/bootloader/ssl/intermediate.crt -alias intermediateca
Enter keystore password:
Certificate was added to keystore
- Import the server certificate file pgexternaloam.crt:
$ sudo -u actadm /usr/java/latest/bin/keytool -importcert -keystore /home/bootloader/ssl/pgexternal_oam_keystore.jks -file /home/bootloader/ssl/pgexternaloam.crt -alias pgexternaloam
Enter keystore password:
Certificate was added to keystore
- Note:
- If the java.io.IOException: Incomplete data error occurs, convert the certificate files from .crt to .der before importing them. For example:
$ sudo -u actadm openssl x509 -outform der -in pgexternaloam.crt -out pgexternaloam.der
- $ sudo -u actadm /usr/java/latest/bin/keytool -importcert -keystore /home/bootloader/ssl/pgexternal_oam_keystore.jks -file /home/bootloader/ssl/pgexternaloam.der -alias pgexternaloam
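If several files in the chain hit the Incomplete data error, the conversion can be scripted. The following is a dry-run sketch that only prints the openssl and keytool commands; the file names and aliases are this section's example values.

```shell
#!/bin/sh
# Print the convert-and-import command pair for each certificate file
# that failed to import as .crt (dry run; execute the printed commands
# manually, or remove the echo wrapping for real use).
KEYSTORE=/home/bootloader/ssl/pgexternal_oam_keystore.jks

CMDS=$(for crt in verisignRoot.crt intermediate.crt pgexternaloam.crt; do
    der="${crt%.crt}.der"
    echo "sudo -u actadm openssl x509 -outform der -in $crt -out $der"
    echo "sudo -u actadm /usr/java/latest/bin/keytool -importcert -keystore $KEYSTORE -file $der -alias ${crt%.crt}"
done)
echo "$CMDS"
```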
5.5.3.3 NBIA Module SSL Configuration and Activation
- Set keystore for NBIA module:
$ bootloader.py config set --parameter @NBIA_KEYSTOREEXT@ --value /home/bootloader/ssl/pgexternal_trf_keystore.jks
- Encrypt the clear text password:
$ java -cp /usr/java/latest/lib/missioncontrol/plugins/org.eclipse.jetty.util_<version>.jar org.eclipse.jetty.util.security.Password <password>
Example Output:
OBF:1u2a1toa1w8v1tok1u301u2a1toa1w8v1tok1u30
MD5:f6fdffe48c908deb0f4c3bd36c032e72
- Set keystore password for the NBIA module:
$ bootloader.py config set --parameter @NBIA_SSLKEYPASSWORD@ --value OBF:1u2a1toa1w8v1tok1u301u2a1toa1w8v1tok1u30
$ bootloader.py config set --parameter @NBIA_SSLPASSWORD@ --value OBF:1u2a1toa1w8v1tok1u301u2a1toa1w8v1tok1u30
- Verify that all parameters have been set:
$ bootloader.py config list
- Perform an activation for the configured parameters to take effect on all modules on the system, one by one:
Attention! Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
- Note:
- Repeat the command for each PL node in the cluster.
- Verify the certificate from an external node:
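The one-node-at-a-time activation used throughout these subsections can be scripted as a simple loop. This is a sketch: BOOTLOADER defaults to a dry-run echo so the commands can be previewed safely, and the hostnames are illustrative placeholders.

```shell
#!/bin/sh
# Activate nodes strictly one at a time, stopping on the first failure.
# BOOTLOADER defaults to "echo bootloader.py" (dry run); set
# BOOTLOADER=bootloader.py on a real SC node. Hostnames are examples.
BOOTLOADER="${BOOTLOADER:-echo bootloader.py}"
NODES="pl3 pl4"

OUT=$(for host in $NODES; do
    # Wait for each node to be fully activated before the next one.
    $BOOTLOADER node activate --host "$host" || exit 1
done)
echo "$OUT"
```

Never run the activations in parallel; traffic disturbances occur if a node is activated before the previous one has finished.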
5.5.3.4 Tomcat-server Module SSL Configuration and Activation
- Set keystore for tomcat-server module:
$ bootloader.py config set --parameter @TOMCAT_KEYSTORE_FILE@ --value /home/bootloader/ssl/pgexternal_oam_keystore.jks
- Set the keystore password:
$ bootloader.py config set --parameter @TOMCAT_KEYSTORE_PASSWORD@ --value <password>
- Set the private alias:
$ bootloader.py config set --parameter @AUTHN_PRIVATE_ALIAS@ --value pgexternaloam
- Set the public alias:
$ bootloader.py config set --parameter @AUTHN_PUBLIC_ALIAS@ --value pgexternaloam
- Perform an activation for the configured parameters to take effect on all modules on the system, one by one:
Attention! Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
- Note:
- Repeat the command for each SC node in the cluster.
- Verify the certificate from an external node:
$ openssl s_client -connect <VIP-OAM-IP>:8383 -showcerts -tls1
Expected response example:
CONNECTED(00000003)
depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/C=SE/ST=Stockholm/L=Stockholm/O=Ericsson/OU=IT/CN=eg8-vip-oam.ete.ka.sw.ericsson.se
   i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
-----BEGIN CERTIFICATE-----
MIIGCzCCBPOgAwIBAgIQXEKFiasv3kVZjna6Ms73NjANBgkqhkiG9w0BAQsFADB+
MQswCQYDVQQGEwJVUzEdMBsGA1UEChMUU3ltYW50ZWMgQ29ycG9yYXRpb24xHzAd
BgNVBAsTFlN5bWFudGVjIFRydXN0IE5ldHdvcmsxLzAtBgNVBAMTJlN5bWFudGVj
IENsYXNzIDMgU2VjdXJlIFNlcnZlciBDQSAtIEc0MB4XDTE3MDEwOTAwMDAwMFoX
DTE4MDExMDIzNTk1OVowgYExCzAJBgNVBAYTAlNFMRIwEAYDVQQIDAlTdG9ja2hv
bG0xEjAQBgNVBAcMCVN0b2NraG9sbTERMA8GA1UECgwIRXJpY3Nzb24xCzAJBgNV
/.../
5.5.3.5 Cudb-block-proxy Module SSL Configuration and Activation
- Set keystore for cudb-block-proxy module:
$ bootloader.py config set --parameter @BLOCK_PROXY_SSLCERT_STORE@ --value /home/bootloader/ssl/pgexternal_oam_keystore.jks
- Activate SSL of cudb-block-proxy module:
$ bootloader.py config set --parameter @BLOCK_PROXY_JMXREMOTE_SSL@ --value true
$ encrypt.sh '<clear_text_password>'
- Note:
- The password must be enclosed in single quotation marks (').
Output:
pwqrl5Un3PemL4v88evQ+Q==
$ bootloader.py config set --parameter @BLOCK_PROXY_SSLCERT_STORE_PASSWORD@ --value pwqrl5Un3PemL4v88evQ+Q==
- Perform an activation for the configured parameters to take effect on all modules on the system, one by one:
Attention! Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
- Note:
- Repeat the command for each SC node in the cluster.
- Verify the certificate from an external node:
$ openssl s_client -connect <VIP-OAM-IP>:8994 -showcerts -tls1
Example Output:
CONNECTED(00000003) /.../ <Certificate output>
5.5.3.6 CAI SSL Configuration and Activation
CAI SERVER2 on port 3301 runs with SSL by default. To change the SSL configuration for CAI SERVER2, run the following commands:
- $ bootloader.py config set --parameter @CAI_SERVER_2_ENABLE_SSL@ --value <boolean value>
Set <boolean value> to true to enable the SSL configuration for CAI SERVER2 with port 3301.
Set <boolean value> to false to disable the SSL configuration for CAI SERVER2 with port 3301.
- Perform an activation for the configured parameters to take effect on all modules on the system, one by one:
Attention! Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
- Note:
- Repeat the command for each PL node in the cluster.
- Verify the certificate from an external node:
$ openssl s_client -connect <VIP-OAM-IP>:3301 -showcerts -tls1
Output:
CONNECTED(00000003) /.../ <Certificate output>
5.5.3.7 General Cleanup
Backup all SSL working files, including the SSL keystore passwords, from the system to a secure storage area.
The files to be backed up are the following:
- Keystore files
- CSR request file
- CA authority certificate files
5.6 Configuring Authentication Rules of GUI Users
This section describes how to configure authentication rules for GUI users, including:
- Password rules that must be fulfilled when setting a password for GUI users.
- Input parameters of Brute Force Protection Algorithm.
5.6.1 Brute Force Protection Algorithm
The Brute Force Protection Algorithm is applied to the default admin GUI user because this user cannot be permanently locked out of the system.
Once the AUTHN_MAX_LOGIN_ATTEMPTS is reached for the default admin GUI user, the user is locked out for a period. The Brute Force Protection Algorithm determines the locking period as follows:
- When the admin user is locked for the 1st time, the locking period is AUTHN_BRUTE_FORCE_INITIAL_LOCK_PERIOD (5 seconds by default).
- From the 2nd time, the locking period is
doubled each time, until the AUTHN_BRUTE_FORCE_MAX_LOCK_PERIOD is reached. For example:
- 2nd locking period is 10 seconds.
- 3rd locking period is 20 seconds.
- 4th locking period is 40 seconds, and so on.
- Later on, every time when the user is locked out, the locking period is AUTHN_BRUTE_FORCE_MAX_LOCK_PERIOD (15 minutes by default).
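The locking-period scheme above can be checked with a small calculation. The following sketch uses the default values (5000 ms initial, 900000 ms maximum); it is plain arithmetic mirroring the description, not code taken from the product.

```shell
#!/bin/sh
# Compute the lock period (in ms) for the n-th lockout, using the defaults
# AUTHN_BRUTE_FORCE_INITIAL_LOCK_PERIOD=5000 and
# AUTHN_BRUTE_FORCE_MAX_LOCK_PERIOD=900000.
lock_period_ms() {
    n=$1; period=5000; max=900000
    i=1
    while [ "$i" -lt "$n" ]; do
        period=$((period * 2))                  # doubled on each lockout
        [ "$period" -gt "$max" ] && period=$max # capped at the maximum
        i=$((i + 1))
    done
    echo "$period"
}

lock_period_ms 1    # prints 5000   (5 s)
lock_period_ms 3    # prints 20000  (20 s)
lock_period_ms 20   # prints 900000 (capped at 15 min)
```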
5.6.2 View Authentication Parameters
To list all the configuration information for the system, run the following command from an SC node:
$ bootloader.py config list --show_all
Then find the authentication parameters in the tomcat section.
Table 20 lists all the parameters for authentication rule configuration.
| Parameter | Description | Type | Default Value |
|---|---|---|---|
| AUTHN_MIN_PASSWD_LENGTH | Minimum password length (1) | Integer | 12 |
| AUTHN_MIN_DIGIT_CHARS | Minimum number of digits [0-9] within password | Integer | 1 |
| AUTHN_MIN_UPPER_CASE_CHARS | Minimum number of upper case characters [A-Z] within password | Integer | 1 |
| AUTHN_MIN_LOWER_CASE_CHARS | Minimum number of lower case characters [a-z] within password | Integer | 1 |
| AUTHN_MIN_SPECIAL_CHARS | Minimum number of special characters [!"#$%&'()*+,-./:;<=>?@[\]^_`{\|}~] within password | Integer | 1 |
| AUTHN_MAX_DAYS_TO_REMEMBER | Number of days that a password cannot be reused | Integer | 365 |
| AUTHN_MAX_DAYS_TO_LIVE | Maximum days before a password needs to be changed | Integer | 90 |
| AUTHN_MAX_LOGIN_ATTEMPTS | Maximum number of failed login attempts before the user is locked out | Integer | 3 |
| AUTHN_BRUTE_FORCE_INITIAL_LOCK_PERIOD | Initial locking period (in milliseconds) of the default admin GUI user after reaching AUTHN_MAX_LOGIN_ATTEMPTS | Integer | 5000 |
| AUTHN_BRUTE_FORCE_MAX_LOCK_PERIOD | Maximum locking period (in milliseconds) for the default admin GUI user | Integer | 900000 |
(1) The password must consist of a minimum of 12 characters.
5.6.3 Modify An Authentication Parameter
To modify the default values of these parameters, do the following from an SC node:
- Run the bootloader.py command
to set a new value for a parameter.
$ bootloader.py config set --parameter <parameterName> --value <newValue>
The new value is saved in the user_config file.
Example of setting a new value for AUTHN_MAX_DAYS_TO_REMEMBER as 30:
$ bootloader.py config set --parameter @AUTHN_MAX_DAYS_TO_REMEMBER@ --value 30
INFO - *** @AUTHN_MAX_DAYS_TO_REMEMBER@=30 inserted in user configuration
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
Check user_config for verification.
$ cat /home/bootloader/config/user_config
@AUTHN_MAX_DAYS_TO_REMEMBER@=30
- From each SC node, one by one, run the following command to activate the modification:
Attention! Wait for each SC node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the SC node that is to be activated.
- Note:
- The new value takes effect after the bootloader.py node activate command has been executed.
GUI users can keep using their current password until expiration.
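The verification step (checking user_config for the new value) can be wrapped in a small helper. This is a sketch: the grep logic is generic, and a temporary file stands in here for the real /home/bootloader/config/user_config.

```shell
#!/bin/sh
# Print the override value of a parameter from user_config, failing if
# the parameter has no override. A temp file stands in for the real
# /home/bootloader/config/user_config in this illustration.
CONFIG=$(mktemp)
echo '@AUTHN_MAX_DAYS_TO_REMEMBER@=30' > "$CONFIG"

get_override() {
    grep "^$1=" "$CONFIG" | cut -d= -f2
    grep -q "^$1=" "$CONFIG"    # exit status: 0 if an override exists
}

get_override '@AUTHN_MAX_DAYS_TO_REMEMBER@'   # prints 30
```

A parameter that is absent from user_config is running with its default value, which is what Section 5.6.4 restores.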
5.6.4 Restore An Authentication Parameter
A parameter is restored to its default value by removing the modified value from the user_config file.
To restore a parameter, do the following from an SC node:
- Run the bootloader.py command
to remove the modified value.
$ bootloader.py config remove --parameter <parameterName>
Example of restoring AUTHN_MAX_DAYS_TO_REMEMBER to default value:
$ bootloader.py config remove --parameter @AUTHN_MAX_DAYS_TO_REMEMBER@
INFO - *** Removed @AUTHN_MAX_DAYS_TO_REMEMBER@ from user configuration
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
- From each SC node, one by one, run the following command to activate the modification:
Attention! Wait for each SC node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the SC node that is to be activated.
- Note:
- The restored value takes effect after the bootloader.py node activate command has been executed.
GUI users can keep using their current password until expiration.
5.7 License Keys Administration
This section holds information on how to manually request and install a new license key and how to perform a rollback.
- Note:
- There is no downtime when a new license is installed or a rollback is performed. The Sentinel server is accessed by Dynamic Activation once a day which means that Dynamic Activation has all licenses in cache. To speed up the process, it is possible to trigger an update through the Dynamic Activation GUI. Also, provisioning to the Dynamic Activation is not required to be stopped.
5.7.1 Requesting New License Key
Do the following as user root to request the new license key:
- Obtain the license code from all SC nodes where the license
server is installed:
# cd /opt/sentinel/bin
# ./echoid
Sentinel RMS Development Kit 8.6.2.0053 Host Locking Code Information Utility
Copyright (C) 2014 SafeNet, Inc.
Locking Code 1 : 2008-*1MS LHEN 9GMR X8EQ
Locking Code 1 (Old Style) : 2008-BA44D
- Note:
- The echoid command needs to be run, as user root, in the /opt/sentinel/bin directory. Otherwise the license provided in Ericsson License Information System (ELIS) will not work.
- Provide all Locking Codes to ELIS and get the license file.
Example 10 A License File
13 FAT1022833/5 Ni LONG NORMAL NETWORK EXCL 100000 INFINITE_KEYS 15 NOV 2013 11 NOV 2014 NO_SHR SLM_CODE 1 NON_COMMUTER NO_GRACE NO_OVERDRAFT CL_ND_LCK NON_REDUNDANT AB99F2008,FB97E2008 NO_HLD 5 M2M_Start,_ PS T,fzJ:wrGlhYlTlWOoaZ3VyN5wPB1aJd4HVM505BvjfWZAcemFO1DYYtplY:y90yLgTU2Vw2Z7C1x FWRupieI93p#AID=fe665bf5-a24a-4c3c-910b-e882f41cb146
5.7.2 Installing License Key
To install a new license key, follow the procedure:
- Back up the existing licenses if there are any. See Section 5.7.3.
- Set the environment variable LSHOST to the license server:
# export LSHOST=nfs
- Import the new licenses by using a license file received from ELIS. See Section 5.7.1.
The license can be imported either from the file directly, or from the string found in the file.
- Import a license file directly
# /opt/sentinel/bin/lslic -F <file_name>
Example printout
Sentinel RMS Development Kit 8.6.2.0053 License Addition/Deletion Utility
Copyright (C) 2014 SafeNet, Inc.
License code "13 FAT1022833/5 Ni LONG NORMAL NETWORK EXCL 100000 INFINITE_KEYS 15 NOV 2013 11 NOV 2014 NO_SHR SLM_CODE 1 NON_COMMUTER NO_GRACE NO_OVERDRAFT CL_ND_LCK NON_REDUNDANT AB99F2008,FB97E2008 NO_HLD 5 M2M_Start,_PS T,fzJ:wrGlhYlTlWOoaZ3VyN5wPB1aJd4HVM505BvjfWZAcemFO1DYYtplY:y90yLgTU2Vw2Z7C1xFWRupieI93p$AID=fe665bf5-a24a-4c3c-910b-e882f41cb146" added on nfs
- Import from a string in the license file
- Copy the entire text in the license file as a license
string.
- Note:
- The license string cannot contain any word-wrapping and the whole string needs to be on a single line for Sentinel to be able to read it.
- Import the licences from the license string.
# /opt/sentinel/bin/lslic -A "<license_string>"
Example printout
License Addition/Deletion Utility
Copyright (C) 2008 SafeNet, Inc.
License code 13 FAT1022833/5 Ni LONG NORMAL NETWORK EXCL 100000 INFINITE_KEYS 15 NOV 2013 11 NOV 2014 NO_SHR SLM_CODE 1 NON_COMMUTER NO_GRACE NO_OVERDRAFT CL_ND_LCK NON_REDUNDANT AB99F2008,FB97E2008 NO_HLD 5 M2M_Start,_PS T,fzJ:wrGlhYlTlWOoaZ3VyN5wPB1aJd4HVM505BvjfWZAcemFO1DYYtplY:y90yLgTU2Vw2Z7C1xFWRupieI93p$AID=fe665bf5-a24a-4c3c-910b-e882f41cb146
- Restart the license server:
# 3ppmon stoplserv
# 3ppmon startlserv
- Login as user root, and verify
on the localhost, SC1, that the license is installed:
# /opt/sentinel/bin/lsmon localhost
- Note:
- Make sure that the response does not contain:
There is no license in the server
- Login to the SC2 node as an administrator.
- Restart the license server:
# 3ppmon stoplserv
# 3ppmon startlserv
- Login as user root, and verify
on the localhost, SC2, that the license is installed:
# /opt/sentinel/bin/lsmon localhost
- Note:
- Make sure that the response does not contain:
There is no license in the server
- Verify that the license server NFS contains licenses:
# /opt/sentinel/bin/lsmon nfs
- Note:
- Make sure that the response does not contain:
There is no license in the server
- For the new license to take effect, an update on the Dynamic
Activation applications needs to be performed in the GUI.
Login to the Dynamic Activation web GUI. For detailed information, refer to User Guide for Resource Activation, Reference [8].
Go to the System > Licenses tab.
Click the update arrows for the appropriate Dynamic Activation application to load the license to the system.
- If the new license is replacing a license, follow Section 5.7.5 to remove the old license.
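Before importing a license string with lslic, it is worth verifying that it has not been word-wrapped (see the note in step 3), since Sentinel cannot read a wrapped string. The following sketch checks that a license file is a single line; a stand-in temp file is used here for illustration, so point it at the real file received from ELIS.

```shell
#!/bin/sh
# Fail before import if a license file is word-wrapped: Sentinel requires
# the whole license string on a single line.
check_license_file() {
    n=$(awk 'END { print NR }' "$1")
    if [ "$n" -gt 1 ]; then
        echo "WARNING: $1 has $n lines; join them before running lslic" >&2
        return 1
    fi
    echo "$1 is a single line: OK"
}

# Illustration with a stand-in file (a truncated example string):
f=$(mktemp)
printf '13 FAT1022833/5 Ni LONG NORMAL NETWORK EXCL 100000\n' > "$f"
check_license_file "$f"
```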
5.7.3 Back up License Keys
To back up a license key, follow this procedure:
- Login to SC1 and create a backup folder.
# mkdir /home/actadm/licenses/backup
- Back up the existing license.
# cp /home/actadm/licenses/lservrc /home/actadm/licenses/backup
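The two steps above can be combined and timestamped so that repeated backups do not overwrite each other. This is a sketch: a temporary directory stands in here for the real /home/actadm/licenses on SC1.

```shell
#!/bin/sh
# Timestamped variant of the license backup step. A temp directory stands
# in for the real /home/actadm/licenses; point LICDIR at the real path
# on the SC node.
LICDIR=$(mktemp -d)
echo "dummy license data" > "$LICDIR/lservrc"   # stand-in for the real lservrc

stamp=$(date +%Y%m%d-%H%M%S)
mkdir -p "$LICDIR/backup"
cp "$LICDIR/lservrc" "$LICDIR/backup/lservrc.$stamp"
echo "Backed up lservrc to $LICDIR/backup/lservrc.$stamp"
```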
5.7.4 Restoring a Backup of License Keys
To restore a backup of a license key, follow this procedure:
- Login as user root, and stop
the Sentinel server on both SC nodes:
# 3ppmon stoplserv
- Copy the backed up file lservrc from directory /home/actadm/licenses/backup to the original directory, /home/actadm/licenses/:
# cp /home/actadm/licenses/backup/lservrc /home/actadm/licenses/
- Start the Sentinel server on both SC nodes:
# 3ppmon startlserv
- For this procedure to take effect, an update on the Dynamic
Activation applications needs to be performed in the GUI.
Login to the Dynamic Activation web GUI. For detailed information, refer to User Guide for Resource Activation, Reference [8].
Go to the System > Licenses tab.
Click the update arrows for the appropriate Dynamic Activation application to load the license on the system.
5.7.5 Removing a License Key
To remove a license key, follow the procedure:
- Login to SC1 as user root,
and set the environment variable LSHOST to the license server:
# export LSHOST=nfs
- Print existing licenses.
# /opt/sentinel/bin/lsmon nfs
Example printout
Sentinel RMS Development Kit 8.6.2.0053 Application Monitor
Copyright (C) 2014 SafeNet, Inc.
[Contacting Sentinel RMS Development Kit server on host "nfs"]
|- Feature Information
|- Feature name : "FAT1022833/1"
|- Feature version : ""
|- License type : "Normal License"
|- License Version : 0x08400000
|- Commuter license allowed : NO
|- Maximum concurrent user(s) : Unlimited
|- Unreserved tokens in use : 0
|- Available unreserved : Unlimited
|- Reserved tokens in use : 0
|- Available reserved : 0
|- Soft limit on users : Unlimited
|- License start date : Fri Nov 15 00:00:00 2013
|- Expiration date : Tue Nov 11 23:59:59 2014
|- Log encryption level : 1
|- Check time tamper : Yes
|- Application-server locking : Client-locked license.
|- Client $1 locking code : 2008-AB99F
|- Client $2 locking code : 2008-FB97E
|- Additive/exclusive/aggregate : Exclusive license (overrides additive licenses).
|- Token lifetime (heartbeat) : 300 secs (5 min(s))
|- Public vendor information : SW_Basic,_PS
|- Allowed on VM : YES
|- License Information
|- License Hash : 1181152AD32E7EBE
|- License type : "Normal License"
|- License Version : 0x08400000
|- License storage name : /home/actadm/licenses/lservrc
|- License status : Active
|- Commuter license allowed : NO
|- Maximum concurrent user(s) : Unlimited
|- Soft limit on users : Unlimited
|- License start date : Fri Nov 15 00:00:00 2013
|- Expiration date : Tue Nov 11 23:59:59 2014
|- Log encryption level : 1
|- Check time tamper : Yes
|- Application-server locking : Client-locked license.
|- Client $1 locking code : 2008-AB99F
|- Client $2 locking code : 2008-FB97E
|- Additive/exclusive/aggregate : Exclusive license (overrides additive licenses).
|- Token lifetime (heartbeat) : 300 secs (5 min(s))
|- Allowed on VM : YES
- Set the environment variable LSFORCEHOST to the license server:
# export LSFORCEHOST=nfs
- Remove the specific License Key:
# /opt/sentinel/bin/lslic -df "<Feature name>" "<Feature Version>" "<License Hash>"
Example
# /opt/sentinel/bin/lslic -df "FAT1022833/1" "" "1181152AD32E7EBE"
Sentinel RMS Development Kit 8.6.2.0053 License Addition/Deletion Utility
Copyright (C) 2014 SafeNet, Inc.
This will delete license(s) from the server, do you want to continue? (Y/N):
Press Y.
license FAT1022833/1 are deleted.
- Restart the license server:
# 3ppmon stoplserv
# 3ppmon startlserv
- Verify on the localhost, SC1, that a license is still
installed:
# /opt/sentinel/bin/lsmon localhost
- Note:
- Make sure that the response does not contain:
There is no license in the server
- Login to the SC2 node as an administrator.
- Restart the license server:
# 3ppmon stoplserv
# 3ppmon startlserv
- Verify on the localhost, SC2, that a license is still
installed:
# /opt/sentinel/bin/lsmon localhost
- Note:
- Make sure that the response does not contain:
There is no license in the server
- Verify that the license server NFS contains licenses:
# /opt/sentinel/bin/lsmon nfs
Make sure that the response does not contain the text:
There is no license in the server
- For this procedure to take effect, an update on the Dynamic
Activation applications needs to be performed in the GUI.
Login to the Dynamic Activation web GUI. For detailed information, refer to User Guide for Resource Activation, Reference [8].
Go to the System > Licenses tab.
Click the update arrows to load the license to the system.
5.8 Cassandra Administration
This section outlines useful nodetool commands to administer the Cassandra database.
5.8.1 Cassandra Data Consistency
5.8.1.1 Cassandra Node Outage
A Dynamic Activation cluster consisting of 4-12 nodes, with a replication factor of 3 and read/write at consistency level QUORUM, can tolerate only one node down at a time.
- When a node goes down, Cassandra saves incoming writes for that node, according to how the max_hint_window_ms parameter is configured (three hours by default). When the node is up and running again, the cluster starts replicating the missed data.
During node downtime, the cluster is still able to serve consistent reads and writes, though Ericsson recommends getting the node back up and running as soon as possible.
- If the downtime is longer than the max_hint_window_ms configuration, the node requires manual repair after it is up and running again. For instructions, see Section 5.8.1.2.
5.8.1.2 Repair Cassandra Node
When a blade is replaced, or a node goes down for an extended period of time (more than three hours by default), inconsistency occurs among the Cassandra nodes. To synchronize Cassandra data, the replaced or downed node must be repaired manually.
To repair a Cassandra node:
- Log on to the node to be repaired.
- Run the repair command.
$ nodetool repair
- Note:
- Repairing a Cassandra node is a time-consuming task. Depending on how much data is in Cassandra, the nodetool repair command can continue running in the background for up to several hours.
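Because the repair can run for hours, it is safer to detach it from the terminal session and capture its output in a log. This is a sketch: NODETOOL defaults to a dry-run echo here so the block is safe to preview; set NODETOOL=nodetool on the node being repaired.

```shell
#!/bin/sh
# Start nodetool repair in the background, detached from the terminal,
# with its output captured in a log file. NODETOOL defaults to a
# dry-run echo for illustration.
NODETOOL="${NODETOOL:-echo nodetool}"
LOG=$(mktemp)

nohup $NODETOOL repair > "$LOG" 2>&1 &
wait $!
echo "Repair command finished; log in $LOG"
```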
5.8.2 Cassandra Performance Monitoring
The following commands can be used to monitor the Cassandra cluster:
- nodetool status logconsolidation - Displays information about the Cassandra cluster.
- nodetool compactionstats - Displays the total column and the total number of uncompressed bytes of SSTables being compacted.
- nodetool cfstats - Displays statistics on each table and keyspace.
- nodetool netstats - Displays statistics on network operations and connections within the Cassandra cluster.
- nodetool tpstats - Displays statistics about the number of active, pending, and completed tasks for each stage of Cassandra operations by thread pool.
For more information about nodetool commands, see https://docs.datastax.com/en/cassandra/2.2/cassandra/tools/toolsNodetool.html, Reference [15].
5.9 PC Keys
This section contains information on how to install a new PC key and how to perform a rollback.
- Note:
- Each node experiences downtime when a new key is installed or a rollback is performed, because the application packages are redeployed. Dynamic Activation reads the keys from the key directory every 5 seconds and redeploys when changes are detected. Provisioning does not need to be stopped if keys are added to one node at a time in the cluster, allowing time for each node to come back up.
5.9.1 Installing a PC Key
To install a PC key, follow the steps:
- Transfer the PC key to a temporary folder and pack it
into a .tar.gz file:
$ tar cvzf PCkey-<Feature_Name>-<Rev>.tar.gz <PcKey>
- Copy the packed PC key file to the /opt/ericsson/activation/bootloader/repository/ directory:
$ sudo -u actadm cp PCkey-<Feature_Name>-<Rev>.tar.gz /opt/ericsson/activation/bootloader/repository/
- If upgrading an old PC key, repeat the following command
for all PL nodes to uninstall the old PC key.
From an SC node:
$ bootloader.py submodule delete -n PCkey-<Feature_Name> -p dve-application --host <hostname>
<hostname> is the hostname of the PL node on which the old PC key is to be uninstalled.
- Repeat the following command for all PL nodes to install
the PC key.
From an SC node:
$ bootloader.py submodule add -n PCkey-<Feature_Name>-<Rev>.tar.gz -t key -p dve-application --host <hostname>
<hostname> is the hostname of the PL node on which the PC key is to be installed.
- From an SC node, run the following command for all PL nodes, one by one, to activate the PC key:
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
The keys are active as soon as the application has been redeployed; no other means of activation are necessary unless stated by the PC documentation.
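Steps 4 and 5 above (install the key on every PL node, then activate the nodes one at a time) can be scripted. This is a sketch: BOOTLOADER defaults to a dry-run echo, and the key file name and hostnames are illustrative placeholders.

```shell
#!/bin/sh
# Install a PC key on all PL nodes, then activate them one at a time.
# BOOTLOADER defaults to "echo bootloader.py" (dry run); the key file
# name and hostnames are placeholders.
BOOTLOADER="${BOOTLOADER:-echo bootloader.py}"
KEY="PCkey-ExampleFeature-R1A.tar.gz"
PL_NODES="pl3 pl4 pl5"

OUT=$(
    for host in $PL_NODES; do
        $BOOTLOADER submodule add -n "$KEY" -t key -p dve-application --host "$host"
    done
    # Activate strictly one node at a time, waiting for each node to
    # come back up before the next one.
    for host in $PL_NODES; do
        $BOOTLOADER node activate --host "$host"
    done
)
echo "$OUT"
```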
5.9.2 Rolling Back a PC Key
To roll back a PC key, follow the steps:
- Repeat the following command for all PL nodes to remove
the PC key.
From an SC node:
$ bootloader.py submodule delete -n PCkey-<Feature_Name> -p dve-application --host <hostname>
<hostname> is the hostname of the PL node on which the PC key is to be removed.
- If the current PC key is to be replaced by a previous
key, ensure that the previous packed PC key file exists in the /opt/ericsson/activation/bootloader/repository/, and
reinstall the previous key.
From an SC node:
$ bootloader.py submodule add -n PCkey-<Feature_Name>-<Rev>.tar.gz -t key -p dve-application --host <hostname>
<hostname> is the hostname of the PL node on which the PC key is to be reinstalled.
- From an SC node, run the following command for all PL nodes, one by one, to activate the PC key:
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
- Remove the packed PC key file from the /opt/ericsson/activation/bootloader/repository/ directory:
$ cd /opt/ericsson/activation/bootloader/repository/
$ sudo -u actadm rm PCkey-<Feature_Name>-<Rev>.tar.gz
The keys are deactivated as soon as the application has been redeployed; no other means of deactivation are necessary unless stated by the PC documentation.
5.10 Processing Log Admin Tool
The Processing Log Admin Tool provides maintenance functionality for exporting processing logs from the data storage. This tool is also used by crontab in the Dynamic Activation cluster. When necessary, to release data storage space, it removes log entries in the Cassandra database (starting with the oldest).
The Processing Log Admin Tool can be run locally on one of the nodes within Dynamic Activation. It can also be run from an external node.
5.10.1 Using Processing Log Admin Tool
5.10.1.1 Exporting Logs
The following are ways to export logs using the Processing Log Admin Tool.
- Logs can be exported as days in UTC using the -e option. Then all log entries stored during a day are exported where a day is defined as YYYY-MM-DD 00:00:00.000UTC to YYYY-MM-DD 23:59:59.999UTC. Logs exported as a day are unsorted to optimize export speed for backup purposes.
- Logs can be exported for a given time interval using the -et option. Then all complete logs (a northbound request with all belonging southbound requests) stored during the time interval are exported. The time interval is provided in the system local timezone. The output is sorted by the northbound entry timestamp.
- Complete logs can be exported sorted with the latest log first, using the -el option. This option searches the data store backwards until the provided limit of logs has been exported. When exporting logs, a path for storing the export files must be provided.
Export location
The exported log files are compressed into *.gz files to minimize disk use. During export, a compressed file is created for approximately every 100000 logged entries. Take into account that when exporting huge amounts of data, these files can fill up the target partition, and the export of provisioning data can take a few hours to finish.
When exporting log files, choose the export location as follows:
- Whenever possible, export the log files to an external disk that is mounted onto the server, or a local path on an external computer that has access to the cluster.
- If only an internal disk is available, the /var/log directory can be used when the size of the log files is small.
- Note:
- The free space of the /var/log directory can be checked by executing the df -h command.
Do not export the logs to /tmp on an internal disk, because in the current setup /tmp is part of the / directory, which is not suitable for log export.
5.10.1.1.1 Running Processing Log Admin Tool
Table 21 shows the arguments to use when exporting processing logs:
| Argument | Description |
|---|---|
| -e | Starts the export for a given day, written as YYYY-MM-DD, where a day is defined as YYYY-MM-DD 00:00:00.000 UTC to YYYY-MM-DD 23:59:59.999 UTC. Must be combined with the -p argument. This export is time prioritized and unsorted. Example: sudo -u actadm ./proclog-admin-tool.sh -e 2015-06-20 -p /mnt/exportdisk |
| -el | Starts the export from the last log saved in the database, and exports the given number of logs. Example: sudo -u actadm ./proclog-admin-tool.sh -el 100 -p /mnt/exportdisk |
| -et | Starts the export with a start date and an end date, written as one parameter. This export is sorted, and not time prioritized. Example: sudo -u actadm ./proclog-admin-tool.sh -et "2015-06-20 12:00:00 2015-06-23 13:00:00" -p /mnt/exportdisk |
| -h | Help. Lists all applicable commands and how to use them. |
| -l | Lists all the days for which a log has been stored in the database. |
| -p | The location to which the files are exported. |
| -rc | Removes the oldest days from the database so that only the configured number of days remain. |
| -ro | Removes the oldest day from the database. |
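For instance, a typical housekeeping sequence based on the arguments in Table 21 first lists the stored days and then removes the oldest one; the directory shown follows the installation path used elsewhere in this document:

```shell
# List the days for which processing logs are stored, then remove the
# oldest day from the Cassandra database.
cd /usr/local/pgngn/admin-tool-<version>/bin
sudo -u actadm ./proclog-admin-tool.sh -l
sudo -u actadm ./proclog-admin-tool.sh -ro
```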
Run the Processing Log Admin Tool from LDE:
For local use, the Processing Log Admin Tool executable shell script, proclog-admin-tool.sh, is located in the /usr/local/pgngn/admin-tool-<version>/bin directory.
The following example shows how to export the processing logs created during a specified time interval:
Example 1:
$ cd /usr/local/pgngn/admin-tool-<version>/bin
$ sudo -u actadm ./proclog-admin-tool.sh -et "2014-06-20 12:00:00 2014-06-23 13:00:00" -p <export_dir_path>
5.10.2 Configuring Processing Log Admin Tool
To update the Processing Log Admin Tool configuration, follow the instructions in Section 4.2.
- Note:
- The configuration change is node specific.
Log information derived by using the Processing Log Admin Tool is saved in the /var/log/dve/proclog-admin-tool.log file. The following properties are configurable:
| Property | Description |
|---|---|
| Cassandra.IP | The IP address of the node where Cassandra resides. Multiple addresses can be specified by using a comma as a delimiter, for example: 169.254.100.1,169.254.100.2,169.254.100.3 |
| Cassandra.Port | The port for the Cassandra instance. Default is 9042. |
| export.file.numLogEntries | Number of log entries per .csv file when exporting. Default is 100000. |
| cassandra.keyspace | The keyspace used when searching. Default is logconsolidation. |
| cassandra.consistencyLevel | The number of Cassandra nodes that must reply to a save request before proceeding. Default is QUORUM. |
| cassandra.replication | The number of proclog data copies that need to be saved. Default is 3. |
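As a sketch, a configuration using these default values might look as follows; the exact file location and format follow the configuration procedure in Section 4.2, and the IP addresses are examples:

```
Cassandra.IP=169.254.100.1,169.254.100.2,169.254.100.3
Cassandra.Port=9042
export.file.numLogEntries=100000
cassandra.keyspace=logconsolidation
cassandra.consistencyLevel=QUORUM
cassandra.replication=3
```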
5.11 Daily Export of Processing Logs
This section describes the procedures for exporting processing log data from all nodes on a daily basis.
The recommendation is to run the export script once a day. The script exports the processing logs that were provisioned during the previous day.
The export process can be executed with the following script and flags.
- For exporting logs relative to the UTC time zone:
$ sudo -u actadm /usr/local/pgngn/admin-tool-<version>/bin/proclog-admin-tool-auto-export.sh utc <export_dir_path>
- For exporting logs relative to the time zone configured on the server:
$ sudo -u actadm /usr/local/pgngn/admin-tool-<version>/bin/proclog-admin-tool-auto-export.sh serverTimeZone <export_dir_path>
The script can be added as a crontab job.
- Note:
- This script should only be executed on one node.
By default, this script is not added to crontab, due to the limited disk space for processing logs on file.
The path to where this script is exporting the data is configurable, and must be specified as an argument when running the script. It can for example be an internal disk, or an external disk that is mounted onto the server. Logs can also be exported to a local path on an external computer that has access to the cluster. The exported log files are compressed to a .gz file, to minimize disk use.
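If the script is added as a crontab job, an entry along the following lines could be used; the schedule, version directory, and export path are illustrative examples, not product defaults:

```
# Example crontab entry (actadm crontab): run the daily export at 01:30,
# exporting the previous UTC day's processing logs to a mounted export disk.
30 1 * * * /usr/local/pgngn/admin-tool-<version>/bin/proclog-admin-tool-auto-export.sh utc /mnt/exportdisk/proclogs
```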
5.12 Log File Maintenance
This section contains information on maintenance of processing logs and other log files.
5.12.1 Removal of Old Processing Logs
The proclog-cleanup.sh script is used by ESA to remove old processing logs. When disk usage exceeds a limit specified within ESA, the script removes logs, starting with the oldest, until disk usage is down to the 70% limit.
The script can be executed manually with different settings.
5.12.2 Configuration of the Retained Amount of Processing Log Days
To configure the retained amount of processing log days, perform the following steps:
- From an SC node, run the following command to configure the retention value:
$ bootloader.py config set --parameter @CASSANDRA_NUMBER_OF_DAYS_STORED@ --value <retention_value>
The default retention value is 30.
- From an SC node, run the following command, for all PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
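Because the activation command must be run once per PL node, the sequence can be sketched as a simple loop, assuming the command blocks until the activation of that node completes; the hostnames are placeholders:

```shell
# Activate each PL node in turn; one activation finishes before the
# next starts (hostnames are illustrative placeholders).
for host in pl-3 pl-4 pl-5; do
    bootloader.py node activate --host "$host"
done
```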
5.12.3 Configuration of Processing Logs
Dynamic Activation provides log level settings.
| Log Level | Value | Description |
|---|---|---|
| 0 | ERROR | Only logs access control related administrative operations, for example, create a user, create authority context. |
| 1 | WARN | |
| 2 (Default) | INFO | |
| 3 (1) | DEBUG | |
| - | OFF | Used to turn off proclog. |
(1) After configuration updates, Dynamic Activation distributes the detailed information between the components in the Dynamic Activation system. Since a large amount of log data can affect the provisioning traffic and the log disk size, be cautious when selecting this level.
5.12.3.1 Configuration Steps of Processing Logs for Southbound Interfaces
To configure the processing log, run the following commands:
- From an SC node, run the following command to configure the log level setting for the processing log:
$ bootloader.py config set --parameter @LOG_LEVEL@ --value <log_level_value>
- From an SC node, run the following command, for all PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
5.12.3.2 Configuration Steps of Processing Logs for CAI3G and CAI Interfaces
To configure the processing log, run the following commands:
- From an SC node, run the following command to configure the log level setting for the processing log:
$ bootloader.py config set --parameter @PROCLOG_LEVEL@ --value <log_level_value>
- From an SC node, run the following command, for all PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
5.12.3.3 Configuration Steps of Processing Logs for MML Interfaces
To configure the processing log, run the following commands:
- From an SC node, run the following command to configure the log level setting for the processing log:
$ bootloader.py config set --parameter @MML_PROCLOG_LEVEL@ --value <log_level_value>
- Note:
- When setting the log level to ERROR, the faults of MML commands are not recorded in the processing log. For detailed information about MML faults, refer to Layered HLR AUC Provisioning over MML, Reference [14].
- From an SC node, run the following command, for all PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
5.12.4 Configuration of Flat Processing Log Files
Flat processing log files share the log level settings with the processing log, see Section 5.12.3.
Table 24 shows the configurable parameters of flat processing log files.
| Parameter | Description | Type | Default Value(1) |
|---|---|---|---|
| FLATLOG_ENABLE_FLATLOGGING | Enables or disables the flat processing log files. | Boolean | true |
| FLATLOG_PATH_NBIA | The location of the flat processing log file generated by the nbia-module. | String | /var/log/dve/flatlog/ |
| FLATLOG_PATH_ARH | The location of the flat processing log file generated by the arh-module. | String | /var/log/dve/flatlog/ |
| FLATLOG_PATH_DVE_APPLICATION | The location of the flat processing log files generated by the dve-application module. | String | /var/log/dve/flatlog/ |
| FLATLOG_PATH_TOMCAT_SERVER | The location of the flat processing log file generated by the tomcat-server module. | String | /var/log/dve/flatlog/ |
| FLATLOG_PATH_DVE_INBOUND_INTERFACES_APPLICATION | The location of the flat processing log file generated by the dve-inbound-interfaces-application module. | String | /var/log/dve/flatlog/ |
| FLATLOG_TIMEINTERVAL | The time interval in minutes between two flat processing log files. | Integer, value range 1 to 5 inclusive | 1 |
| FLATLOG_LIFECYCLE | The time in minutes that a flat processing log file exists. | Integer, value range 5 to 60 inclusive | 10 |
| FLATLOG_MAXFILESIZE | The maximum size of the flat processing log file. | String, 1MB to 20MB | 2MB |
| FLATLOG_LOGGING_COLUMNS | Customizes the fields to be logged in the flat processing log files. Field names are comma separated and case sensitive. The available fields are: LogType, RootLogId, SubLogId, TransactionId, Instance, User, Target, Operation, Status, Hostname, Protocol, StartTime, ExecuteTime, ResponseCode, FullRequest, FullResponse, ReplayLogId | String | LogType,RootLogId,SubLogId,TransactionId,Instance,Operation,Status,User,Hostname,Protocol,Target,StartTime,ExecuteTime,ResponseCode |
(1) If an invalid value has been configured, the flat processing log uses the default value.
To configure the flat processing log file parameters, run the following commands:
- From an SC node, run the following command to modify the value of a parameter:
$ bootloader.py config set --parameter <flat_processing_log_file_parameter> --value <flat_processing_log_parameter_value>
For example:
$ bootloader.py config set --parameter @FLATLOG_TIMEINTERVAL@ --value 2
- From an SC node, run the following command, for all PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
5.12.5 Other Log Files
Apart from the processing logs, most other log files are maintained by the script auto_erase.sh that is located in /opt/dve/bin.
The script handles all log files for which this is mentioned in Section 2.3.2. The script also handles the CLI Response files in /var/dve/cli, which might contain personal data. The script regularly checks the time stamps on those logs and result files through a cron job. All files older than the system's retention time are deleted. The default retention time is 30 days.
The script can also be executed manually with different settings, see the following help text:
$ /opt/dve/bin/auto_erase.sh --help
Usage: auto_erase.sh [OPTION...]
The script auto_erase.sh handles certain log and response files.
See System Administrators Guide for PG NGN for more information.
Examples:
auto_erase.sh            # Clean log files that are older than 30 days
auto_erase.sh -days 60   # Clean log files that are older than 60 days, values between 1 and 365 allowed
Options:
-days    change the number of days after which certain log files are deleted
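For example, to clean the handled files with a longer retention than the default:

```shell
# Remove the handled log and response files older than 60 days
# (the allowed range for -days is 1 to 365).
/opt/dve/bin/auto_erase.sh -days 60
```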
5.13 Automatic Queue Cleanup for Resilient Activation
The number of sub-requests in the NE Abstraction Queues can grow over time and fill the queues to capacity. Dynamic Activation provides an automatic cleanup function that cancels obsolete expired sub-requests.
5.13.1 How the Cleanup Function Works
When the cleanup function is activated, Dynamic Activation performs a scheduled job once a day to cancel all sub-requests that have been expired for longer than a specified retention period. Those sub-requests are treated as failed, and final notifications are sent to the client if the corresponding asynchronous requests are in the Executing state.
- Note:
- The cleanup function takes effect on all NE Abstraction Queues.
5.13.2 Configure the Cleanup Function
To configure the cleanup function for Resilient Activation, do as follows from an SC node:
- Specify the number of days to keep an expired sub-request.
$ bootloader.py config set --parameter @EXPIRED_REQUEST_RETENTION_PERIOD@ --value <number of days>
- Note:
- The default <number of days> is 0, which means that the cleanup function is deactivated by default.
- Schedule a time (0–23) for when to start the cleanup on a daily basis.
$ bootloader.py config set --parameter @EXPIRED_REQUEST_CLEANER_START_HOUR@ --value <hour>
- Note:
- The <hour> is 2 by default, and is specified in local time.
- Activate all PL nodes, one by one, to apply the configuration.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname of a PL node>
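Putting the steps together, a configuration that keeps expired sub-requests for 7 days and starts the daily cleanup at 03:00 local time could look as follows; the retention and hour values are examples:

```shell
# Keep expired sub-requests for 7 days and run the daily cleanup at
# 03:00 local time (both values are illustrative).
bootloader.py config set --parameter @EXPIRED_REQUEST_RETENTION_PERIOD@ --value 7
bootloader.py config set --parameter @EXPIRED_REQUEST_CLEANER_START_HOUR@ --value 3
# Then activate each PL node, one at a time.
bootloader.py node activate --host <hostname of a PL node>
```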
5.14 Enabling and Disabling Hostname in CAI3G Error Details
It is possible to enable or disable the hostname in CAI3G error details.
To disable the hostname, execute the following commands:
- From an SC node, run the following command to disable the hostname in CAI3G error details:
$ bootloader.py config set --parameter @CAI3G_DISABLE_HOSTNAME_IN_ERRORDETAILS@ --value true
- From an SC node, run the following command for all the PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
To enable the hostname, execute the following commands:
- From an SC node, run the following command to enable the hostname in CAI3G error details:
$ bootloader.py config set --parameter @CAI3G_DISABLE_HOSTNAME_IN_ERRORDETAILS@ --value false
- From an SC node, run the following command for all the PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
5.15 Configuring CAI3G Max Sessions
- Note:
- Overload protection in Dynamic Activation is based on the configured number of incoming sessions. The number of sessions is pre-configured from the start.
The maximum number of simultaneous CAI3G session IDs can be changed from the default of 25000 to suit customer needs, by using the bootloader.py commands, as follows:
- From an SC node, run the following command to configure Max Sessions:
$ bootloader.py config set --parameter @CAI3G_MAX_SESSION@ --value <MaxSession>
- From an SC node, run the following command for the PL nodes, one by one, to activate the change.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
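For example, raising the limit to 40000 sessions (an illustrative value, not a recommendation) would look like this:

```shell
# Raise the maximum number of simultaneous CAI3G sessions from the
# default 25000 to 40000 (example value), then activate a PL node.
bootloader.py config set --parameter @CAI3G_MAX_SESSION@ --value 40000
bootloader.py node activate --host <hostname>
```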
5.16 Configuring MDV Timeout
- Note:
- The MDV timeout indicates the maximum time within which the MDV response must be returned. If MDV does not respond within that time, a timeout error is sent to the client.
The MDV timeout is 300 seconds by default, but it can be set to a larger value when the customer has a complicated MDV, by using the bootloader.py command:
- From an SC node, run the following command to specify the MDV timeout value:
$ bootloader.py config set --parameter @MDV_TIMEOUT@ --value <MDV Timeout>
- From an SC node, run the following command for the nodes, one by one, to activate the change:
- Note:
- Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
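For instance, doubling the default timeout for a complicated MDV could look as follows; 600 is an example value in seconds:

```shell
# Set the MDV timeout to 600 seconds (example value), then activate a node.
bootloader.py config set --parameter @MDV_TIMEOUT@ --value 600
bootloader.py node activate --host <hostname>
```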
5.17 Synchronization Among Clusters
A Dynamic Activation cluster has license counters for itself and its synchronized clusters. Each counter represents one cluster. The sum of those counters is synchronized among the clusters and shown in the GUI.
Do not change the name of a cluster that is synchronized with other clusters. Otherwise it causes license-counter-mismatch problems.
- Note:
- The clusters synchronize license counters every hour. Between synchronizations, the sums of the counters on the clusters can differ from one another.
- Note:
- The hardware-specific licenses, for example Max CPUs and Max VMs, are not distributed among clusters and are specific to each cluster. For example, if two synchronized clusters have 8 CPUs in total, then each cluster must have a 4-CPU capacity license. This is shown in the GUI as 4 CPUs for the system.
To be able to synchronize clusters, a setup must first be performed on each cluster. During setup, SSH keys are generated and exchanged between the clusters, because the scripts that synchronize license counters and configuration must work without passwords.
The following sections contain information on how to generate and exchange SSH keys between Dynamic Activation clusters. The keys are stored in /home/syncuser/.ssh.
The configuration file is located in the /home/syncuser/config directory. It contains the IP addresses of the clusters to synchronize with, and is populated when running the script for exchanging SSH keys in Section 5.17.2.
Figure 3 is a conceptual picture of cluster synchronization, where Cluster-A represents the currently used provisioning cluster with correct configuration, while Cluster-B and Cluster-C represent newly installed clusters.
5.17.1 Verification of Connections
Follow these instructions to check that the ssh|sftp|scp connection to the cluster with which SSH keys are to be exchanged is working.
- Enter the following command to check that the connection to the other cluster is working, and that the port for ssh|sftp|scp is open.
$ ssh actadm@<VIP-OAM cluster>
- Check the output of the command.
- If the following output is shown, the connection is working.
The authenticity of host '<VIP-OAM IP-address cluster> (<VIP-OAM IP-address cluster>)' can't be established. RSA key fingerprint is <autogenerated_key_fingerprint>. Are you sure you want to continue connecting (yes/no)?
Enter no to interrupt.
- If the following output is shown, the connection is not working.
ssh: connect to host <VIP-OAM IP-address cluster> port 22: Connection refused
5.17.2 Exchanging SSH Key between Clusters
This section contains information on how to generate and send SSH keys from the newly installed clusters.
- Make sure that all clusters are using the same software version.
- Make sure that communication between the clusters is working, see Section 5.17.1.
- Make sure a password for syncuser is set on all clusters that are synchronized.
By default, no password is set for syncuser.
Log in as user root on SC-1 and run the following command on all involved clusters (Cluster-A, Cluster-B, and Cluster-C according to Figure 3) to set a password for syncuser:
# passwd syncuser
- Make sure that there are no old keys in the /home/syncuser/.ssh directory on any of the clusters that are synchronized:
$ sudo -u syncuser rm -f /home/syncuser/.ssh/*
- Log in to SC1 on the first cluster (Cluster-A according to Figure 3).
- Run the setup script:
$ sudo -u syncuser /home/syncuser/scripts/sync_ssh_keys.sh
A printout is shown:
Set up the communication for synchronization to one or two other clusters. Before continuing, make sure the IP addresses to the other clusters and the password for syncuser are available.
Press any key to continue...
Press Enter to continue or interrupt with Ctrl+C if not all information specified above is available.
Specify the name of the cluster, the VIP-OAM IP addresses of the clusters to synchronize with, and the username and password of a provisioning client user. Leave cluster 2 (Cluster-C according to Figure 3) blank if connecting to only one cluster.
Example:
When the script is executed on Cluster-A, the following values are used:
- Enter a name for this cluster: Cluster-A
- Enter VIP-OAM IP-address of cluster 1: VIP-OAM-IP of Cluster-B
- Enter VIP-OAM IP-address of cluster 2: VIP-OAM-IP of Cluster-C
When the script is executed on Cluster-B, the following values are used:
- Enter a name for this cluster: Cluster-B
- Enter VIP-OAM IP-address of cluster 1: VIP-OAM-IP of Cluster-A
- Enter VIP-OAM IP-address of cluster 2: VIP-OAM-IP of Cluster-C
When the script is executed on Cluster-C, the following values are used:
- Enter a name for this cluster: Cluster-C
- Enter VIP-OAM IP-address of cluster 1: VIP-OAM-IP of Cluster-A
- Enter VIP-OAM IP-address of cluster 2: VIP-OAM-IP of Cluster-B
Enter a name for this cluster: <name>
Enter VIP-OAM IP-address of cluster 1: <VIP-OAM IP-address cluster 1>
Enter VIP-OAM IP-address of cluster 2: <VIP-OAM IP-address cluster 2>
Enter username for access: <user name of a provisioning client user>
Enter password for access: <password of the provisioning client user>
Encrypting password...
Password encypted!
Current Cluster name and VIP-OAM IP addresses of other clusters:
THIS_CLUSTER_NAME<name>
CL1_VIP_OAM_IP <VIP-OAM IP-address cluster 1>
CL2_VIP_OAM_IP <VIP-OAM IP-address cluster 2>
JMX_USER admin <user name of a provisioning client user>
JMX_PASSWORD admin <password of the provisioning client user>
Are the configurations above correct? (y/n) y
Transfering ssh keys to remote cluster <VIP-OAM IP-address cluster 1> as syncuser
The authenticity of host '<VIP-OAM IP-address cluster 1> (<VIP-OAM IP-address cluster 1>)'can't be established. RSA key fingerprint is <autogenerated_key_fingerprint>. Are you sure you want to continue connecting (yes/no)?
Answer yes to keep the connection to cluster 1.
Warning: Permanently added '<VIP-OAM IP-address cluster 1>' (RSA) to the list of known hosts.
Specify SSH key:
Enter the password for user syncuser
Password: <syncuser_password>
authorized_keys 100% 400 0.4KB/s 00:00
- Note:
- The password required in this step is the syncuser password set in Step 3.
- Log in to SC1 on the next cluster (Cluster-B, according to Figure 3), and repeat Step 6.
- Log in to each cluster and verify that no password is needed:
$ sudo -u syncuser ssh syncuser@<VIP-OAM_IP-address_cluster_1_or_2>
5.17.3 Forward SSH Key to New Cluster
This section contains information on how to generate and exchange SSH keys when adding a second cluster (cluster2).
The following instructions are to be performed on the first cluster. On cluster2, which is being added, perform the instructions specified in Section 5.17.2.
- Make sure that all clusters are using the same software version.
- Make sure that communication to the new cluster is working, see Section 5.17.1.
- Log in to SC1.
- Run the setup script:
$ cd /home/syncuser/scripts
$ sudo -u syncuser ./sync_ssh_keys.sh
- A printout is shown:
Set up the communication for synchronization to one or two other clusters. Before continuing, make sure the IP addresses to the other clusters and the password for syncuser are available. Press any key to continue.
Press Enter to continue or interrupt with Ctrl+C if not all information specified above is available.
- The IP address of cluster 1 is shown:
Current VIP-OAM IP addresses of other clusters:CL1_VIP_OAM_IP < VIP-OAM IP-address cluster 1> CL2_VIP_OAM_IP
Are the above IP addresses correct? (y/n)
Answer n
- Enter the IP address for cluster 1 and the new IP address for cluster 2.
- Forward the SSH key to cluster 2:
The authenticity of host '<VIP-OAM IP-address cluster 2> (<VIP-OAM IP-address cluster 2>)' can't be established. RSA key fingerprint is <autogenerated_key_fingerprint>. Are you sure you want to continue connecting (yes/no)?
Answer yes to keep the connection to cluster 2.
Warning: Permanently added '<VIP-OAM IP-address cluster 2>' (RSA) to the list of known hosts.
Then the SSH key is sent to cluster 2.
5.17.4 Exchanging SSH Key When Replacing a Cluster
This section contains information on how to generate and exchange SSH keys when replacing a cluster. Exchange of SSH keys is needed because the scripts for synchronization cannot handle passwords.
This is only needed for synchronization of license counters and configuration and is to be performed on the cluster already available.
- Make sure that all clusters are using the same software version.
- Make sure that communication to the new cluster is working, see Section 5.17.1.
- Log in to SC1.
- Run the setup script:
$ cd /home/syncuser/scripts
$ sudo -u syncuser ./sync_ssh_keys.sh
- A printout is shown:
Set up the communication for synchronization to one or two other clusters. Before continuing, make sure the IP addresses to the other clusters and the password for syncuser are available. Press any key to continue.
Press Enter to continue or interrupt with Ctrl+C if not all information specified above is available.
- The current IP addresses of the clusters are shown:
Current VIP-OAM IP addresses of other clusters:CL1_VIP_OAM_IP < VIP-OAM IP-address cluster 1> CL2_VIP_OAM_IP < VIP-OAM IP-address cluster 2>
Are the above IP addresses correct? (y/n)
Answer n
- Enter the IP addresses for the clusters. Change the address that is new.
- Forward the SSH key to the new cluster:
The authenticity of host '<VIP-OAM IP-address cluster "new cluster"> (<VIP-OAM IP-address cluster "new cluster">)' can't be established. RSA key fingerprint is <autogenerated_key_fingerprint>. Are you sure you want to continue connecting (yes/no)?
Answer yes to keep the connection to the new cluster.
Warning: Permanently added '<VIP-OAM IP-address cluster "new cluster">' (RSA) to the list of known hosts.
Then the SSH key is sent to the new cluster.
5.17.5 Update Password for Cluster Synchronization
This section contains information on how to update the password for the cluster synchronization.
Whenever the password of the provisioning client selected in Section 5.17.2 is changed in the Operation & Management > Access Control GUI, the same password must be updated on the cluster for cluster synchronization. Otherwise, the synchronization breaks.
- Run the setup script by using the "-changePassword" parameter.
$ cd /home/syncuser/scripts
$ sudo -u syncuser ./sync_ssh_keys.sh -changePassword
- Confirm to update the password by entering y and then pressing Enter.
Do you want to update the password for syncing (y/n)
y
- The following printout is displayed.
Enter the new password for access:
- Enter the same password as that in the GUI, and then press Enter.
5.17.6 License Counters Synchronization
The script sync_license.sh, located in /home/syncuser/scripts, extracts license counters and synchronizes them with the other clusters. The script is run with syncuser authority by a crontab job once every hour on each PL node in the cluster, staggered at five-minute intervals. The staggering prevents the PL nodes from executing the job at the same time.
This means that the script starts on the first PL node at minute 00, continues on the second node at minute 05, on the third at minute 10, and so on.
The temporary files containing the license counters that are transferred between the clusters are located in /home/syncuser.
The temporary file containing the configuration that is transferred between the clusters is located in /home/syncuser/scripts.
Log files for the synchronization function are located in /home/syncuser/logs.
These logs describe the license counters that were sent to and received from another cluster.
5.17.7 Remove Clusters from Synchronization
To remove clusters from synchronization, the synchronization must be stopped and the synchronized counters must be cleaned.
- Note:
- The same procedures must be performed on every cluster in the synchronization.
Stop the synchronization
Perform the following procedure on each cluster in the synchronization:
- Back up the property file for synchronization. The file is:
/home/syncuser/config/sync.properties
- In the current property file (/home/syncuser/config/sync.properties), remove the addresses of the clusters that it should no longer synchronize with.
For example, when cluster_1, cluster_2 and cluster_3 are synchronized, and cluster_3 is to be removed from synchronization:
Change cluster_1 properties to:
CL1_VIP_OAM_IP=cluster_2
CL2_VIP_OAM_IP=
Change cluster_2 properties to:
CL1_VIP_OAM_IP=cluster_1
CL2_VIP_OAM_IP=
Change cluster_3 properties to:
CL1_VIP_OAM_IP=
CL2_VIP_OAM_IP=
Clean the counters
- Get all cluster names by running the following command once for each cluster, each time using that cluster's IP address.
$ ssh <cluster IP> less /home/syncuser/config/sync.properties | grep THIS_CLUSTER_NAME
- Perform the following procedure on a PL node on each cluster in the synchronization:
- Remove all pending imports of counters on the current cluster.
$ sudo -u syncuser rm /home/syncuser/logs/*.cc
- Clean the counters of the other clusters that the current cluster should no longer synchronize with.
$ /opt/dve/tools/licenseclient/licenseclient.sh deleteexternalusedcapacity -u=<provisioning client user> -e=<provisioning client user password> -cid=<cluster name>
- Note:
- The <cluster name> was obtained in Step 1.
For example, when cluster_1, cluster_2, and cluster_3 are synchronized, and cluster_3 is to be removed from synchronization:
- On cluster_1 and cluster_2, clean the counter by using the name of cluster_3.
- On cluster_3, clean two counters by using the names of cluster_1 and cluster_2.
5.18 Dynamic Activation Application Custom Log4j Settings
By use of the bootloader.py command, see Section 4.1, it is possible to set up logging with a custom log level for modules in the Dynamic Activation Application. These custom logs will be redirected to a user-provided log output file, located in the /var/log/dve/customlogs/ directory.
New log settings are applied through the bootloader.py logging deploy-log command and persist over both a reboot and an activate of the system. The deploy-log command can deploy custom logging for a sub-module on one host at a time, or on all hosts where the specific sub-module resides.
This feature is mainly used for debugging and troubleshooting. It should generally not be enabled on a live deployment. Depending on the amount of custom logging deployed, it may have a severe impact on application performance.
The commands in Section 5.18.1 through Section 5.18.3 should all be executed from the first control node in the cluster (SC1).
All the custom logs in /var/log/dve/customlogs/ have a maximum size of 15 MB before they are rotated, and a maximum of 10 log files are stored before the oldest logs are deleted.
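With those limits, the worst-case disk footprint of a single custom log is easy to bound:

```shell
# Worst-case disk usage for one custom log: up to 10 rotated files of 15 MB each.
MAX_FILE_MB=15
MAX_FILES=10
echo "worst case per custom log: $(( MAX_FILE_MB * MAX_FILES )) MB"
```

That is roughly 150 MB per deployed custom log output file, which is worth keeping in mind when enabling several custom logs on the same node.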
5.18.1 Deploy Custom Log Settings
This section describes the procedure of deploying custom log settings on Dynamic Activation Application modules.
- To deploy a custom log setting, execute the following
command from SC1:
$ bootloader.py logging deploy-log -n <(sub)module name> -j <jarfile> -l <loglevel> --host <ip/hostname/all> -o <log file name>
- Note:
- To list all the available JAR files for a specific sub-module, use the following command from SC1: bootloader.py logging list -n <module name>.
- Verify that the new settings are applied.
From SC1:
$ bootloader.py logging list -t log
Make sure the new log setting exists in the list.
The log output file can be found in the /var/log/dve/customlogs/ directory, on the node where custom logging was enabled.
- Note:
- The output file flag (-o) in the bootloader.py logging deploy-log command is optional. If no output file name is provided, the output file is named <module name>.log.
The following example enables DEBUG on all the classes in DVE-Core-2.359.jar, contained in the dve-application module. The settings are deployed to host cluster-host-1, and the custom logging is stored in /var/log/dve/customlogs/example.log:
$ bootloader.py logging deploy-log -n dve-application -j DVE-Core-2.359.jar -l DEBUG --host cluster-host-1 -o example.log
- Note:
- The host cluster-host-1 can be replaced with all to enable debug on all the hosts in the cluster, where dve-application exists.
5.18.2 Undeploy Custom Log Settings
This section describes the procedure of undeploying active log settings through the bootloader. This is performed by use of the bootloader.py logging undeploy-log command.
- To undeploy a custom log setting, execute the following
command from SC1:
$ bootloader.py logging undeploy-log -n <(sub)module name> -j <jarfile> --host <ip/hostname/all> -o <log file name>
- To verify that the log settings were correctly removed, execute the following command from SC1:
$ bootloader.py logging list -t log
Make sure that the list no longer contains any custom settings.
The following shows an example of disabling the settings applied in Step 2 in Section 5.18.1:
$ bootloader.py logging undeploy-log -n dve-application -j DVE-Core-2.359.jar --host cluster-host-1
- Note:
- The output log is not removed during undeploy, and will be saved on disk until manually removed. Some cleanup may be necessary when old debug logs are no longer needed.
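Such cleanup can be sketched as follows. The directory and the example.log name come from the deploy example in this section; adjust the name to match your own -o value, and review the listing before deleting.

```shell
# Remove a no-longer-needed custom log and its rotated copies.
# LOGDIR and the example.log name are from this section; adjust to your setup.
LOGDIR=${LOGDIR:-/var/log/dve/customlogs}
ls "$LOGDIR"/example.log* 2>/dev/null || true   # review what will be deleted
rm -f "$LOGDIR"/example.log*
```

The glob also matches rotated copies such as example.log.1, so one command removes the whole rotation set.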
5.18.3 List Active Custom Log Settings
This section describes how to list all the active custom log settings in the cluster.
- To list all the active log settings in the cluster, execute
the following command from SC1:
$ bootloader.py logging list -t log
- To show a list of available JAR files in a specific sub-module,
execute the following command from SC1:
$ bootloader.py logging list -n <(sub)module name>
5.19 Asynchronous Request Handler Setting
5.19.1 Configuration of Asynchronous CAI3G 1.2 Interface
This section contains information on how the Asynchronous CAI3G 1.2 interface configuration works the first time it gets enabled and operated, as well as how to modify the parameters.
The first time the Asynchronous CAI3G 1.2 interface is deployed, the parameters in Table 25 are initiated with default values.
To modify the default values of these parameters, do the following on one of the SC nodes:
- Run the bootloader.py command
to list the configuration information for the system:
$ bootloader.py config list --show_all
Then find the Asynchronous CAI3G 1.2 interface configuration in arh-module.
- Run the bootloader.py command
to modify the values of a parameter.
- To set the new value, do the following:
The new value is saved in the user_config file.
$ bootloader.py config set --parameter <parameter> --value <value>
Example of setting a new value for ARHJOB_MAX_QUEUE_SIZE:
$ bootloader.py config set --parameter @ARHJOB_MAX_QUEUE_SIZE@ --value 10000
INFO - *** @ARHJOB_MAX_QUEUE_SIZE@=10000 inserted in user configuration
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
Check user_config for the changed parameter:
$ cat /home/bootloader/config/user_config
@ARHJOB_MAX_QUEUE_SIZE@=10000
- To restore the default value, do the following:
The value being restored is removed from the user_config file.
$ bootloader.py config remove --parameter <parameter>
Example of restoring the value for ARHJOB_MAX_QUEUE_SIZE:
$ bootloader.py config remove --parameter @ARHJOB_MAX_QUEUE_SIZE@
INFO - *** Removed @ARHJOB_MAX_QUEUE_SIZE@ from user config
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
- From an SC node, run the following command to activate the modification for each PL node, one by one:
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
- Verify the changes by logging in to the Asynchronous CAI3G
1.2 interface.
| Parameter | Description | Type | Default Value |
|---|---|---|---|
| ARHJOB_MAX_QUEUE_SIZE | Maximum number of requests that can be stored in the Asynchronous Request Handler. | Integer | 10,000,000 |
| CAPACITY_CHECK_INTERVAL | The interval, in milliseconds, for checking the maximum number of requests in the asynchronous request queue. | Integer | 60,000 |
| ARHNOTIFICATION_MAX_RETRY_TIME | The maximum retry time, in milliseconds, for re-sending the final failed notification. | Integer | 3,600,000 |
| ARHNOTIFICATION_RETRY_FACTOR | The retry factor of the interval for retrying the final failed notification. | Integer | 2 |
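The exact retry semantics are not spelled out here, but one plausible reading of the two notification parameters (the interval is multiplied by ARHNOTIFICATION_RETRY_FACTOR on each attempt, and retrying stops once ARHNOTIFICATION_MAX_RETRY_TIME would be exceeded) can be sketched as follows. The 60,000 ms starting interval is an assumption for illustration only.

```shell
# Hypothetical backoff sketch: each retry interval is multiplied by the retry
# factor; retrying stops once the accumulated time would exceed the cap.
FACTOR=2                 # ARHNOTIFICATION_RETRY_FACTOR (default)
MAX_RETRY_TIME=3600000   # ARHNOTIFICATION_MAX_RETRY_TIME in ms (default)
interval=60000           # assumed initial interval, for illustration
elapsed=0
retries=0
while [ $(( elapsed + interval )) -le "$MAX_RETRY_TIME" ]; do
  elapsed=$(( elapsed + interval ))
  interval=$(( interval * FACTOR ))
  retries=$(( retries + 1 ))
done
echo "retries=$retries elapsed_ms=$elapsed"
```

Under these assumptions the factor of 2 keeps the number of retries small even with a one-hour cap, which is the usual motivation for exponential backoff.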
5.20 Batch Handler Settings
This section contains information on how the batch handler configuration works the first time it gets enabled and operated, as well as how to modify the parameters.
The first time the batch handler is deployed, the parameters in Table 26 are initiated with default values.
To modify the default values of these parameters, do the following on one of the SC nodes:
- Run the bootloader.py command
to list the configuration information for the system:
$ bootloader.py config list --show_all
Then find the batch handler configuration in ema-batch-module.
- Run the bootloader.py command
to modify the values of a parameter.
- To set the new value, do the following:
The new value is saved in the user_config file.
$ bootloader.py config set --parameter <parameter> --value <value>
Example of setting a new value for ACS_RETRY_TIMES:
$ bootloader.py config set --parameter @ACS_RETRY_TIMES@ --value 3
INFO - *** @ACS_RETRY_TIMES@=3 inserted in user configuration
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
Check user_config for the changed parameter:
$ cat /home/bootloader/config/user_config
@ACS_RETRY_TIMES@=3
- To restore the default value, do the following:
The value being restored is removed from the user_config file.
$ bootloader.py config remove --parameter <parameter>
Example of restoring the value for ACS_RETRY_TIMES:
$ bootloader.py config remove --parameter @ACS_RETRY_TIMES@
INFO - *** Removed @ACS_RETRY_TIMES@ from user configuration
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
- From an SC node, run the following command to activate the modification for each SC node, one by one:
Attention! Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
- Verify the changes by logging in to the Batch Handler.
| Parameter | Description | Type | Default Value |
|---|---|---|---|
| ACS_RETRY_TIMES | The number of times an Akka Communication Service (ACS) request is retried. | Integer | 0 (a value of 0 or empty means that no retry is made) |
| ACS_RETRY_INTERVAL | The time, in milliseconds, to wait before another retry. | Integer | 5000 (a value of 0 or empty means that there is no interval between retries) |
| ACS_TIMEOUT | The time, in seconds, to wait for a response after sending an ACS request. | Integer | 90 |
| MAX_JOB | The maximum number of batch jobs. | Integer | 1000 |
| MAX_BATCH_FILE | The maximum number of batch files. | Integer | 100 |
| MAX_SESSION | The maximum number of sessions. | Integer | 200 |
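When tuning the retry parameters, a quick sanity check is the worst-case extra wait a retried ACS request can add, which follows directly from ACS_RETRY_TIMES and ACS_RETRY_INTERVAL. The values below are the example value 3 and the default interval:

```shell
# Worst-case additional wait introduced by ACS retries (illustrative arithmetic).
RETRY_TIMES=3        # ACS_RETRY_TIMES, as in the example above
RETRY_INTERVAL=5000  # ACS_RETRY_INTERVAL in ms (default)
echo "worst-case added wait: $(( RETRY_TIMES * RETRY_INTERVAL )) ms"
```

Keeping this product well below ACS_TIMEOUT (90 seconds by default) avoids requests that are still retrying when the caller has already timed out.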
5.21 Configuring External OpenID Connect Provider
This section describes how to configure Dynamic Activation to connect to an external OpenID Connect Provider.
Table 27 lists all the configuration parameters for connecting to an external OpenID Connect Provider.
| Parameter | Description | Default Value | Occurrence |
|---|---|---|---|
| AUTHN_AUTHENTICATION_URI (1) | Authorization Endpoint URL of the OpenID Connect Provider | | Mandatory |
| AUTHN_TOKEN_URI (1) | Token Endpoint URI of the OpenID Connect Provider | | Mandatory |
| CORE_CLIENT_ID (1) | OAuth 2.0 client identifier of Dynamic Activation defined at the Authorization Server | | Mandatory |
| CORE_CLIENT_SECRET (1) | OAuth 2.0 client secret of Dynamic Activation defined at the Authorization Server | | Mandatory |
| CORE_REDIRECT_URI | Redirection URI of Dynamic Activation to which the authentication response is sent. Format: https://<VIP-OAM-IP>:8383/oauth/v1/callback | | Mandatory |
| AUTHN_PUBLIC_ALIAS (2) | The alias of the certificate in the Dynamic Activation keystore used to validate the signature of ID Tokens received from the OpenID Connect Provider | slr_server | Mandatory |
| AUTHN_ISSUER (1) | Issuer Identifier for the OP that the RP is to send the Authentication Request to. Its value MUST be a URL using the https scheme. | | Mandatory |
| AUTHN_LOGOUT_URI (1) | End Session Endpoint URI of the OpenID Connect Provider for RP-initiated logout | | Mandatory |
| POST_LOGOUT_REDIRECT_URI | Redirection URI of Dynamic Activation to which the end user's User Agent is redirected after a logout has been performed. Format: https://<VIP-OAM-IP>:8383 | | Mandatory |
(1) The value of these parameters depends on the OpenID Connect Provider product.
(2) The alias name is defined when setting the OpenID Connect Provider configuration in Dynamic Activation. See the example in Section 5.21.2.
5.21.1 Prerequisites for External OpenID Connect Provider
To utilize OpenID Connect services for an end user, Dynamic Activation must be registered with the external OpenID Connect Provider, both to provide information about itself and to obtain required information, including the OAuth 2.0 client ID and client secret. In general, the following information about Dynamic Activation must be registered in the external OpenID Connect Provider:
- Redirect URI to Dynamic Activation for authentication response
- Redirect URI to Dynamic Activation for post logout
Dynamic Activation supports authentication using the Authorization Code Flow. Dynamic Activation uses the profile scope to request specific sets of claims. All required claims must be returned to Dynamic Activation in the ID_Token. Dynamic Activation requires the following standard claims to be returned in the ID_Token:
- given_name
- family_name
- preferred_username
- phone_number
The procedures of client registration and configuration of ID_Token as well as claims depend on the OpenID Connect Provider product. For more information about OpenID Connect specifications, refer to OpenID Connect Core 1.0 and OpenID Connect Session Management 1.0.
Dynamic Activation always checks the time of the iat claim in the ID_Token. To avoid an invalid iat time, Dynamic Activation and the external OpenID Connect Provider must be time-synchronized through NTP.
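The effect of clock drift on the iat check can be illustrated with a small sketch. The 300-second tolerance is an assumption for illustration; the actual tolerance applied by Dynamic Activation is not specified here.

```shell
# Sketch: reject a token whose issued-at ("iat") time, in seconds since the
# epoch, drifts too far from local time. With NTP-synchronized clocks the
# skew stays near zero.
iat=$(date +%s)            # stand-in for the iat claim of a received ID_Token
now=$(date +%s)
skew=$(( now - iat ))
if [ "$skew" -lt 0 ]; then skew=$(( -skew )); fi
if [ "$skew" -le 300 ]; then
  echo "iat accepted (skew ${skew}s)"
else
  echo "iat rejected (skew ${skew}s)"
fi
```

Without NTP, a drift larger than the tolerance on either side makes otherwise valid tokens fail this check, which is why time synchronization is listed as a prerequisite.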
5.21.2 Setting External OpenID Connect Provider Configuration in Dynamic Activation
To set external OpenID Connect Provider configuration in Dynamic Activation:
- Export the certificate for ID Token signature from external OpenID Connect Provider server.
- Import the certificate file for ID
Token signature validation into the keystore of Dynamic Activation.
For example, run the following commands on an SC node to import ID Token signature certificate from Step 1:
- Upload the certificate file to directory /home/bootloader/ssl/.
- Import the certificate file into keystore:
$ sudo -u actadm /usr/java/latest/bin/keytool -import -keystore /home/bootloader/ssl/keystore_tomcat.jks -file /home/bootloader/ssl/id_token_signature.crt -alias test
Enter keystore password:
Certificate was added to keystore
- Note:
- The password for keystore_tomcat.jks can be found in /usr/local/pgngn/tomcat-server-<version>/config/oauth.properties.
- Export certificate file for SSL connection from external OpenID Connect Provider server.
- Import certificate file for SSL connection into the keystore
of Dynamic Activation.
For example, run the following command on an SC node to import SSL connection certificate from Step 3:
- Upload the certificate file to directory /home/bootloader/ssl/.
- Import SSL connection certificate file into keystore:
$ sudo -u actadm /usr/java/latest/bin/keytool -import -keystore /home/bootloader/ssl/keystore_tomcat.jks -file /home/bootloader/ssl/ssl_connection.crt -alias tomcat
Enter keystore password:
Certificate was added to keystore
- Note:
- The password for keystore_tomcat.jks can be found in /usr/local/pgngn/tomcat-server-<version>/config/oauth.properties.
- Configure the external OpenID Connect Provider parameters in Table 27 for Dynamic Activation.
For example, when integrating with ForgeRock OpenAM server, run the following commands as the actadm user on an SC node to add external OpenID Connect Provider configuration:
$ bootloader.py config set --parameter @AUTHN_AUTHENTICATION_URI@ --value https://openam.com:8383/openam/oauth2/authorize
$ bootloader.py config set --parameter @AUTHN_TOKEN_URI@ --value https://openam.com:8383/openam/oauth2/access_token
$ bootloader.py config set --parameter @CORE_CLIENT_ID@ --value edaAgent
$ bootloader.py config set --parameter @CORE_CLIENT_SECRET@ --value edaagent000
$ bootloader.py config set --parameter @CORE_REDIRECT_URI@ --value https://10.170.13.62:8383/oauth/v1/callback
$ bootloader.py config set --parameter @AUTHN_PUBLIC_ALIAS@ --value test
- Note:
- The alias name is the same as the one in Step 2.
$ bootloader.py config set --parameter @AUTHN_ISSUER@ --value https://openam.com:8383/openam/oauth2
$ bootloader.py config set --parameter @POST_LOGOUT_REDIRECT_URI@ --value https://10.170.13.62:8383
$ bootloader.py config set --parameter @AUTHN_LOGOUT_URI@ --value https://openam.com:8383/openam/oauth2/connect/endSession
- From an SC node, run the following command to activate
the modification for each node, one by one:
- Note:
- Wait for each node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter can only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
5.21.3 Removing External OpenID Connect Provider Configuration
To remove external OpenID Connect Provider configuration:
- Roll back to the Ericsson certificates in Dynamic Activation.
- Note:
- The following example removes the aliases of the ID Token signature certificate and the SSL certificate.
The password for keystore_tomcat.jks can be found in /usr/local/pgngn/tomcat-server-<version>/config/oauth.properties.
$ sudo -u actadm /usr/java/latest/bin/keytool -delete -alias test -keystore /home/bootloader/ssl/keystore_tomcat.jks -storepass <password>
$ sudo -u actadm /usr/java/latest/bin/keytool -delete -alias tomcat -keystore /home/bootloader/ssl/keystore_tomcat.jks -storepass <password>
- From an SC node, run the following command to remove external
OpenID Connect Provider configuration in the cluster.
$ bootloader.py config remove --parameter @AUTHN_AUTHENTICATION_URI@
$ bootloader.py config remove --parameter @AUTHN_TOKEN_URI@
$ bootloader.py config remove --parameter @CORE_CLIENT_ID@
$ bootloader.py config remove --parameter @CORE_CLIENT_SECRET@
$ bootloader.py config remove --parameter @CORE_REDIRECT_URI@
$ bootloader.py config remove --parameter @AUTHN_PUBLIC_ALIAS@
$ bootloader.py config remove --parameter @AUTHN_ISSUER@
$ bootloader.py config remove --parameter @POST_LOGOUT_REDIRECT_URI@
$ bootloader.py config remove --parameter @AUTHN_LOGOUT_URI@
- From an SC node, run the following command to activate
the modification for each node, one by one:
- Note:
- Wait for each node to be activated before starting with the
next one, otherwise traffic disturbances occur. The all parameter can only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the node that is to be activated.
6 UDC Specific Information
6.1 UDC Specific Operations
6.1.1 Installing and Upgrading HSS Validator Plug-in
This section contains information on how to install or upgrade the HSS validator plug-in software.
- Note:
- Make sure to have the correct plug-in available.
Also make sure that the file is named according to the syntax <HSS Plugin Name>-<R-state>.tar.gz
For example: HssProvisioningValidator-R4A.tar.gz
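A file name can be sanity-checked against that syntax before copying it, sketched here with the example name. The pattern below is a loose check for illustration, not a full validation of R-state formats.

```shell
# Check that a plug-in file name matches <HSS Plugin Name>-<R-state>.tar.gz.
# The "-R*" pattern is a loose stand-in for a real R-state such as R4A.
f=HssProvisioningValidator-R4A.tar.gz
case "$f" in
  *-R*.tar.gz) echo "name OK: $f" ;;
  *)           echo "name does NOT match the expected syntax: $f" ;;
esac
```

Running such a check before Step 1 of the installation avoids a failed submodule add caused by a misnamed archive.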
6.1.1.1 Installing
- Copy the HSS Validator Plug-in software to the /home/bootloader/repository directory on one of the SC nodes.
- Change owner and group of the HSS Plug-in file.
# chown actadm:activation /home/bootloader/repository/<HSS Plugin>-<R-state>.tar.gz
- From an SC node, run the following command for all PL
nodes in the cluster, one by one, to add the plug-in as submodule.
For example, on a four node cluster it is the <hostname> of nodeId 3 and 4, on a six node cluster it is the <hostname> of nodeId 3, 4, 5 and 6:
$ bootloader.py submodule add -n <HSS Plugin Name>-<R-state>.tar.gz -t lib-ext -p dve-application --host <hostname>
<hostname> is the hostname of the PL node to which the submodule is being added.
- From an SC node, run the following command for all PL nodes, one by one, to activate the plug-in as a submodule.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node on which the submodule is activated.
6.1.1.2 Upgrading
- Remove the old HSS Validator Plug-in software from the /home/bootloader/repository directory, residing on
the SC node.
- Note:
- Save the file for a possible rollback, if needed later.
- From the SC node, run the following command for all PL
nodes in the cluster, one by one, to remove the plug-in as submodule.
For example, on a four node cluster it is the <hostname> of nodeId 3 and 4, on a six node cluster it is the <hostname> of nodeId 3, 4, 5 and 6:
$ bootloader.py submodule delete -n <HSS Plugin Name> -p dve-application --host <hostname>
<hostname> is the hostname of the PL node from which the submodule is deleted.
- Note:
- Make sure that the file is named according to the syntax <HSS Plugin Name>
For example: HssProvisioningValidator
- Copy the HSS Validator plug-in software to the /home/bootloader/repository directory on one of the SC nodes.
- Change owner and group of the HSS Plug-in file.
# chown actadm:activation /home/bootloader/repository/<HSS Plugin>-<R-state>.tar.gz
- From the SC node, run the following command for all PL
nodes in the cluster, one by one, to add the plug-in as submodule.
For example, on a four node cluster it is the <hostname> of nodeId 3 and 4, on a six node cluster it is the <hostname> of nodeId 3, 4, 5 and 6:
$ bootloader.py submodule add -n <HSS Plugin Name>-<R-state>.tar.gz -t lib-ext -p dve-application --host <hostname>
<hostname> is the hostname of the PL node to which the submodule is being added.
- From the SC node, run the following command for all PL nodes, one by one, to activate the plug-in as a submodule.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node on which the submodule is activated.
6.1.2 Uninstalling and Rolling Back HSS Validator Plug-in
This section contains information on how to uninstall and roll back the HSS Validator plug-in.
6.1.2.1 Uninstalling
- Remove the HSS Validator
plug-in software from the /home/bootloader/repository directory, residing on one of the SC nodes.
- Note:
- Save the file for a possible rollback, if needed later.
- From an SC node, run the following command for all PL
nodes in the cluster, one by one, to remove the plug-in as submodule.
For example, on a four node cluster it is the <hostname> of nodeId 3 and 4, on a six node cluster it is the <hostname> of nodeId 3, 4, 5 and 6:
$ bootloader.py submodule delete -n <HSS Plugin Name> -p dve-application --host <hostname>
<hostname> is the hostname of the PL node from which the submodule is deleted.
- Note:
- Make sure that the file is named according to the syntax <HSS Plugin Name>
For example: HssProvisioningValidator
- From the SC node, run the following command for all PL nodes, one by one, to activate the plug-in as a submodule.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node on which the submodule is activated.
6.1.2.2 Rolling Back
- Remove the new HSS Validator plug-in software from the /home/bootloader/repository directory, residing on one of the SC nodes.
- From an SC node, run the following command for all PL
nodes, one by one, to remove the plug-in as submodule.
For example, on a four node cluster it is the <hostname> of nodeId 3 and 4, on a six node cluster it is the <hostname> of nodeId 3, 4, 5 and 6:
$ bootloader.py submodule delete -n <HSS Plugin Name> -p dve-application --host <hostname>
<hostname> is the hostname of the PL node from which the submodule is deleted.
- Note:
- Make sure that the file is named according to the syntax <HSS Plugin Name>
For example: HssProvisioningValidator
- Copy the previously saved HSS Validator Plug-in software, see Step 1 in Section 6.1.2.1, to the /home/bootloader/repository directory, on one of the SC nodes.
- Change owner and group of the HSS Plug-in file.
# chown actadm:activation /home/bootloader/repository/<HSS Plugin>-<R-state>.tar.gz
- From the SC node, run the following command for all PL
nodes, one by one, to add the plug-in as submodule.
For example, on a four node cluster it is the <hostname> of nodeId 3 and 4, on a six node cluster it is the <hostname> of nodeId 3, 4, 5 and 6:
$ bootloader.py submodule add -n <HSS Plugin Name>-<R-state>.tar.gz -t lib-ext -p dve-application --host <hostname>
<hostname> is the hostname of the PL node to which the submodule is being added.
- From the SC node, run the following command for all PL nodes, one by one, to activate the plug-in as a submodule.
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node on which the submodule is activated.
6.2 UDC Specific System Administration
6.2.1 Activating CLI
This section covers how to activate and configure CLI functions.
- For secure CLI connection (through port 8992), SSL must be configured and activated on the CLI module.
- Unsecure CLI connection is through port 8023.
6.2.1.1 CLI Module SSL Configuration and Activation
- Note:
- Before configuring SSL for CLI module, ensure that the keystore pgexternal_oam_keystore.jks is created as described in Section 5.5.3.2.
- Set the keystore for CLI.
Edit the /home/bootloader/config/module_config_files/UDC-Application/cli.properties file. Uncomment the line $keyStore=keystore_external.jks, and edit it to: keyStore=/home/bootloader/ssl/pgexternal_oam_keystore.jks
- Obtain the encrypted password.
- If the same keystore file is used as for the cudb-block-proxy
module, do as follows:
$ bootloader.py config list -A | grep @BLOCK_PROXY_SSLCERT_STORE_PASSWORD@
<encrypted_password>
- If not, obtain the password for the above keystore file
and encrypt it by issuing the following command:
$ encrypt.sh '<clear_text_password>'
- Note:
- The password needs to be inside the single quotation marks (').
Example output:
pwqrl5Un3PemL4v88evQ+Q==
- If the same keystore file is used as for the cudb-block-proxy
module, do as follows:
- Set keystore password for the CLI module.
Edit /home/bootloader/config/module_config_files/UDC-Application/cli.properties file. Uncomment the line $keyStorePassword= and set the encrypted password. For example:
keyStorePassword=pwqrl5Un3PemL4v88evQ+Q==
- From an SC node, activate the modifications by running the following command for each affected PL node, one by one:
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
- Verify the certificate for SSL communication from an external
node.
$ openssl s_client -connect <VIP-OAM-IP>:8992 -showcerts -tls1
- Note:
- The output below is an example of how the openssl s_client output can look when a self-signed certificate negotiation is working.
Example Output:
CONNECTED(00000003)
depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
0 s:/C=SE/ST=Stockholm/L=Stockholm/O=Ericsson/OU=IT/CN=eg8-vip-oam.ete.ka.sw.ericsson.se
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
-----BEGIN CERTIFICATE-----
MIIGCzCCBPOgAwIBAgIQXEKFiasv3kVZjna6Ms73NjANBgkqhkiG9w0BAQsFADB+
- Login as provisioning user, defined in the Dynamic Activation
GUI:
login as: <provisioning user>
password: <password of the provisioning user>
CLI Prompt Output:
>
6.2.1.2 Activating CLI without SSL
- Enable unsecure CLI connection through port 8023.
Edit the /home/bootloader/config/module_config_files/UDC-Application/cli.properties file. Modify the allowUnsecureConnection= line and set the value to true.
- Note:
- For secure connections, the value is set to false:
allowUnsecureConnection=false
- From an SC node, activate the modifications by running the following command for each affected PL node, one by one:
Attention! Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
- Connect to an unsecure CLI, without
SSL, and login as provisioning user, defined in the Dynamic Activation
GUI:
$ telnet <VIP-OAM-IP> 8023
login as: <provisioning user>
password: <password of the provisioning user>
CLI Prompt Output:
>
6.2.2 Notification Rules File Administration
The configurable notification rules files are used by Dynamic Activation to trigger notification messages towards configured notification consumers. These files contain information regarding provisioning changes of user-related data in the external database (CUDB).
Each provisioning notification rules file contains the following information:
- Modification Information: The modification part enables configuration of data that needs to be included when Dynamic Activation has created, changed, or deleted a subscriber profile in the CUDB.
- Additional Information: The additional part enables configuration of data that needs to be included in addition to those that are configured in the modification part. Typical data that is configured is subscriber identities and addresses to network entities that need to be informed about changes of the subscriber profile in the CUDB. This part is optional.
- Send Information: The send part enables definition of conditions that trigger the sending of the notification message to a configured notification consumer.
A check is made when a new notification is to be validated against a rule file. Once every minute, the time stamp of each rule file is checked. If a rule file was updated during the last minute, the file on disk is loaded into the cache and used. The cache removes configuration rules whose files no longer exist on disk. The application server domain process does not need to be restarted.
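The freshness check described above can be approximated from the shell. This sketch uses a scratch file and GNU find's -newermt test purely to illustrate the idea of comparing a rule file's modification time against "one minute ago"; it is not how Dynamic Activation implements the check internally.

```shell
# Illustration of a once-a-minute freshness check: has the rule file been
# modified within the last minute? (Requires GNU find for -newermt.)
F=/tmp/NotificationRulesIms.xml.example   # scratch stand-in for a rule file
touch "$F"
if [ -n "$(find "$F" -newermt '-1 minute' 2>/dev/null)" ]; then
  echo "rule file changed in the last minute: reload it into the cache"
else
  echo "rule file unchanged: keep the cached rules"
fi
```

The practical consequence for administrators is that an edited rule file takes effect within about a minute, with no process restart.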
Dynamic Activation provides notification support for IMS, EPS, and DAE. The IMS and EPS files are included in the HSS-FE delivery, and the DAE file is included in the DAE-FE delivery. These files may have different names and notification purposes. The IMS, EPS, and DAE files must be named and stored according to the list below. For example, a received IMS file must be renamed to NotificationRulesIms.xml and stored in the /home/bootloader/config/module_config_files/NotificationRules/ directory.
The Dynamic Activation delivery includes templates for the following notification rule files:
- /home/bootloader/config/module_config_files/NotificationRules/NotificationRulesDae.xml
- /home/bootloader/config/module_config_files/NotificationRules/NotificationRulesEps.xml
- /home/bootloader/config/module_config_files/NotificationRules/NotificationRulesIms.xml
Use the following procedure to update the notification rules:
- Edit or replace the following notification rules configuration
files for EPS, IMS, and DAE notifications, respectively:
/home/bootloader/config/module_config_files/NotificationRules/NotificationRulesDae.xml
/home/bootloader/config/module_config_files/NotificationRules/NotificationRulesEps.xml
/home/bootloader/config/module_config_files/NotificationRules/NotificationRulesIms.xml
- Note:
- Folders and files cannot be renamed.
- From an SC node, run the following command to deploy the
customized notification rule files:
$ bootloader.py submodule deploy -n NotificationRules-<version>.tar.gz -t notificationrules -p dve-application --host all
<version> is the version of the NotificationRules module. This version can be found by running:
$ bootloader.py system version
6.2.3 Error Mapping File Administration
The error mapping is used by Dynamic Activation to map subordinate error codes found in CAI3G errors.
The configuration is performed in a file named ErrorMappings.properties, which is used to define the error mapping.
Dynamic Activation provides error mapping support for AVG, EPS and IMS Provisioning.
6.2.3.1 Enabling or Updating Error Mapping
To enable or update error mapping, follow the step-list:
- Create (or if updating, modify) a file named ErrorMappings.properties, and add the desired error
mappings into the file.
- Note:
- If a mapping is not found in this file, the original error code and description will be used.
The mapping syntax of this file is as follows:
<AdditionalClassifiedError_code>#<ApplicationErrorDetails_localpart>#<Message> = <AdditionalClassifiedError_code>
| Parameter | Description |
|---|---|
| <AdditionalClassifiedError_code> | The error code of the additional classified error that should be mapped from. |
| <ApplicationErrorDetails_localpart> | The application error details localpart that should be mapped from. |
| <Message> | The exception message that should be mapped from. Spaces are not allowed; replace them with underscores ('_'). A wildcard ('.*'), matching any message, or an empty string ('') can also be used. |
| <AdditionalClassifiedError_code> | The new error code for the additional classified error. |
The following are examples of error mappings for EPS:
13001#EPSFault#.* = 19001
13002#EPSFault#IMSI_defined_and_EPS_service_exists = 19002
13003#EPSFault#MSISDN_joint_to_other_IMSI.* = 19003
13003#EPSFault#MSISDN_already_provided = 19004
13003#EPSFault#IMSI_changeover_ongoing = 19005
13006#EPSFault#.* = 19006
For more examples, including error mappings for AVG and IMS, see the README.txt file located in the /usr/local/pgngn/dve-application-<version>/config/errormapping directory.
- Pack the ErrorMappings.properties file into a .tar.gz file:
$ tar zcvf ErrorMapping-<version>.tar.gz ErrorMappings.properties
- Copy the .tar.gz file to the /home/bootloader/repository directory:
$ sudo -u actadm cp ErrorMapping-<version>.tar.gz /home/bootloader/repository
- From an SC node, run the following command to deploy the error mapping as a submodule:
$ bootloader.py submodule deploy -n ErrorMapping-<version>.tar.gz -t errormapping -p dve-application --host all
6.2.3.2 Disabling Error Mapping
To disable error mapping, after having enabled it, run the following command from an SC node:
$ bootloader.py submodule undeploy -n ErrorMapping-<version>.tar.gz -p dve-application --host all
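The matching semantics of ErrorMappings.properties lines described in this section can be sketched in shell. This is an illustrative lookup written for this document, not the product's actual implementation: it treats the <Message> part of each rule as a regular expression (so '.*' matches any message) and falls back to the original error code when no rule matches, as the note above states.

```shell
# Illustrative lookup (not the product code): resolve a mapped error code from
# ErrorMappings.properties-style lines. Spaces in the incoming message are
# replaced with '_' before matching, and the <Message> part of each rule is
# treated as a regular expression, so '.*' matches any message.
map_error() {
    # $1: mapping file  $2: error code  $3: localpart  $4: exception message
    local msg code lhs ecode localpart pattern
    msg=${4// /_}
    while IFS='=' read -r lhs code; do
        lhs=$(echo "$lhs" | tr -d ' ')
        code=$(echo "$code" | tr -d ' ')
        IFS='#' read -r ecode localpart pattern <<< "$lhs"
        if [ "$ecode" = "$2" ] && [ "$localpart" = "$3" ] && \
           [[ $msg =~ ^${pattern}$ ]]; then
            echo "$code"
            return 0
        fi
    done < "$1"
    echo "$2"   # no mapping found: keep the original error code
}
```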
6.2.4 Configuring HTTP Max Connections
NEs using HTTP as protocol have a limit on how many connections can be kept alive simultaneously. This can have the undesired effect of connections being opened and closed more often than necessary. If many southbound operations to HTTP services are used, the attribute for maximum HTTP connections, http.maxConnections, should be changed from the default 32 to suit the customer's needs. This is changed with the following procedure:
- From an SC node, run the following command to enable configuration
of Max Connections:
$ bootloader.py config set --parameter @HTTP_MAX_NE_CONNECTIONS@ --value <MaxNumber>
- From an SC node, run the following command, for all PL nodes, one by one, to activate the change:
- Attention!
- Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
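The one-node-at-a-time activation pattern used here (and in the other activation procedures in this document) can be sketched as a small wrapper. This is an illustration written for this document; the run() helper echoes commands for a dry run instead of executing bootloader.py, and the assumption is that bootloader.py blocks until activation finishes.

```shell
# Sketch: activate PL nodes strictly one at a time. bootloader.py is assumed
# to block until activation finishes, so a sequential loop provides the
# required wait between nodes. run() echoes the command for a dry run;
# replace its body with "$@" to execute for real on an SC node.
run() { echo "$@"; }

activate_pl_nodes() {
    local host
    for host in "$@"; do
        run bootloader.py node activate --host "$host" || return 1
    done
}
```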
6.2.5 Configuration of MML Interface
This section is divided into two configuration parts, MML Interface configuration and MML Command configuration.
6.2.5.1 MML Interface Configuration
This section contains information on how the MML interface configuration works the first time it is enabled and operated, as well as the parameters that can be changed at runtime.
There are template files located in /opt/dve/tools/jmxbatchclient/templates
Use the DVE-Inbound-MML-EJB-get.xml template file to get the settings for the Inbound-MML-EJB.
- Note:
- Output from commands is placed in the result folder.
The first time the MML interface is deployed, the parameters in Table 28 are initiated with default values.
To be able to find and change these parameter values through Jconsole, use the following path: com.ericsson.Configuration.DVE-Inbound-MML-EJB
To modify these parameters using the JMX Batch Client, do the following:
- Use the JMX Batch Client to get all current values:
$ sudo -u actadm /opt/dve/tools/jmxbatchclient/jbc-rmi.sh -A<provisioning client user>:<provisioning client user password> service:jmx:rmi://<PS_JMX_Server_IP>:8995/jndi/rmi://<PS_JMX_Server_IP>:8100/connector /opt/dve/tools/jmxbatchclient/templates/<XML-file with JMX data>
- Note:
- When running the JMX batch client, the user and password must be specified as the first parameter: -A<user>:<password>
- Copy the output file to DVE-Inbound-MML-EJB-set.xml or use the template file and edit it to modify parameters:
$ cp <outputfile> DVE-Inbound-MML-EJB-set.xml
$ vi DVE-Inbound-MML-EJB-set.xml
- Use the JMX Batch Client with DVE-Inbound-MML-EJB-set.xml to set the new values:
$ sudo -u actadm /opt/dve/tools/jmxbatchclient/jbc-rmi.sh -A<provisioning client user>:<provisioning client user password> service:jmx:rmi://<PS_JMX_Server_IP>:8995/jndi/rmi://<PS_JMX_Server_IP>:8100/connector /opt/dve/tools/jmxbatchclient/templates/DVE-Inbound-MML-EJB-set.xml
- Note:
- When running the JMX batch client, the user and password must be specified as the first parameter: -A<user>:<password>
- Verify the changes by logging in to the MML interface.
- Note:
- Runtime configuration changes to the parameters in Table 28 take effect at the next login session. Already connected MML clients do not see the changes.
| Parameter | Data Type | Description |
|---|---|---|
| BindAddress | String | Local IP address to bind to. The bindAddress argument can be used on a multihomed host for a ServerSocket that will only accept connect requests to one of its addresses. Default value: 0.0.0.0 |
| Echo | boolean | Set to true by default. If set to false, the server asks the client to use echo at startup. Default value: true |
| FirstMessage | String | The first welcome message received after a successful login procedure. Default value: MA MML PAGE 1 |
| LineTerminationCharacter | String | Defines what the client receives for each line break. Possible values: CRNL = \r\n, CR = \r, NL = \n, NONE = "". Default value: CRNL |
| NotAcceptMessage | String | Fault message sent by the server to the client when the simultaneous session resources for logging in to the MML are exceeded. Default value: Max number of connections exceeded |
| Port | Integer | The port number listening for incoming MML traffic. Default value: 8010 |
| Prompt | String | The prompt visible to the client after a successful login attempt. Default value: < To add special characters in the prompt output in the XML (template) document, the following syntax must be used: " = \03" ' = \03&apos < = \03< > = \03> & = \03& |
| SessionTimeout | Integer | The number of seconds a session is idle until it times out. Default value: 300 |
| SimultaneousSession | Integer | Maximum number of simultaneous session resources that can be connected. Default value: 160 |
| SupressGoAhead | boolean | Set to DO by default. Inbound Telnet requests the client to use this command. Default value: true |
- Note:
- If the port number is changed, the VIP configuration needs to be updated to enable the new port in the firewall.
6.2.5.2 MML Command Configuration
The MML commands can be configured by use of the following parameters:
ENABLE_EOT - Certain MML commands have an 'End of Transmission' feature enabled on their response. Setting the ENABLE_EOT parameter to false disables that feature for all MML commands.
ENABLE_CHECK_PRINTOUT - Certain MML commands have a 'Check Printout' feature enabled on their response. Setting the ENABLE_CHECK_PRINTOUT parameter to false disables that feature for all MML commands.
<COMMAND>_EOT - Each command can be configured to use EOT. This is only applicable if ENABLE_EOT is set to true.
<COMMAND>_CONFIRM - Each command can be configured to use Check Printout. This is only applicable if ENABLE_CHECK_PRINTOUT is set to true.
Default configuration:
# This file is used to configure the behavior of MML commands
# If the command is to require a CTRL-D specify it with <command>_EOT=true, i.e. AGAAC_EOT=true
# If the command should require a check printout (semicolon after the command to execute the command) specify it with
# <command>_CONFIRM=true
#
# These settings are only valid if the global settings are set to true.
# Global setting for EOT, if this is false no command could have this behavior, default is true.
ENABLE_EOT=true
#Require EOT
AGAAC_EOT=true
AGSUC_EOT=true
HGAPE_EOT=true
HGLDP_EOT=true
HGMSP_EOT=true
HGPDP_EOT=true
HGSGP_EOT=true
#Global setting for check printout, if this is false no command could have this behavior, default is true.
ENABLE_CHECK_PRINTOUT=true
#Requires check printout
AGAAC_CONFIRM=true
HGICC_CONFIRM=true
HGICE_CONFIRM=true
HGICI_CONFIRM=true
HGSLR_CONFIRM=true
HGIRI_CONFIRM=true
HGCSC_CONFIRM=true
HGCUC_CONFIRM=true
HGCUE_CONFIRM=true
HGCUI_CONFIRM=true
The mml.properties file is located in the /cluster/home/bootloader/config/module_config_files/dve-inbound-interfaces-application directory.
To configure the MML commands, follow the procedure:
- Edit the mml.properties file to add specific customizations:
$ sudo -u actadm vi /home/bootloader/config/module_config_files/dve-inbound-interfaces-application/mml.properties
- From all PL nodes, one by one, run the following command to activate the customized mml.properties file:
- Attention!
- Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
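The interaction between the global ENABLE_EOT gate and the per-command settings in mml.properties can be illustrated with a small sketch. It is written for this document; the product's own parser may treat comments and whitespace differently.

```shell
# Sketch: decide whether an MML command requires End of Transmission (CTRL-D)
# according to an mml.properties-style file. A per-command <COMMAND>_EOT=true
# setting only takes effect when the global ENABLE_EOT gate is also true;
# the ENABLE_CHECK_PRINTOUT/<COMMAND>_CONFIRM pair works the same way.
requires_eot() {
    # $1: mml.properties file  $2: command name (for example AGAAC)
    grep -q '^ENABLE_EOT=true$' "$1" && grep -q "^${2}_EOT=true$" "$1"
}
```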
6.2.6 Configuration of MML over SSH Interface
This section contains information on how the MML over SSH interface configuration works the first time it is enabled and operated, as well as the parameters that can be changed at runtime.
There are template files located in /opt/dve/tools/jmxbatchclient/templates
Use the DVE-Inbound-MML-EJB-SSH-get.xml template file to get the settings for the Inbound-MML-EJB-SSH.
- Note:
- Output from commands is placed in the result folder.
The first time the MML SSH interface is deployed, the parameters in Table 29 are initiated with default values.
To be able to find and change these parameter values through Jconsole, use the following path: com.ericsson.Configuration.DVE-Inbound-MML-EJB-SSH
To modify these parameters using the JMX Batch Client, do the following:
- Use the JMX Batch Client to get all current values:
$ /opt/dve/tools/jmxbatchclient/jbc-rmi.sh -A<provisioning client user>:<provisioning client user password> service:jmx:rmi://<PS_JMX_Server_IP>:8995/jndi/rmi://<PS_JMX_Server_IP>:8100/connector /opt/dve/tools/jmxbatchclient/templates/<XML-file with JMX data>
- Note:
- When running the JMX batch client, the user and password must be specified as the first parameter: -A<user>:<password>
- Copy the output file to DVE-Inbound-MML-EJB-SSH-set.xml or use the template file and edit it to modify parameters:
$ cp <outputfile> DVE-Inbound-MML-EJB-SSH-set.xml
$ vi DVE-Inbound-MML-EJB-SSH-set.xml
- Use the JMX Batch Client with DVE-Inbound-MML-EJB-SSH-set.xml to set the new values:
$ /opt/dve/tools/jmxbatchclient/jbc-rmi.sh -A<provisioning client user>:<provisioning client user password> service:jmx:rmi://<PS_JMX_Server_IP>:8995/jndi/rmi://<PS_JMX_Server_IP>:8100/connector /opt/dve/tools/jmxbatchclient/templates/DVE-Inbound-MML-EJB-SSH-set.xml
- Note:
- When running the JMX batch client, the user and password must be specified as the first parameter: -A<user>:<password>
- Verify the changes by logging in to the MML interface.
- Note:
- Runtime configuration changes to the parameters in Table 29 take effect at the next login session. Already connected MML clients do not see the changes.
| Parameter | Data Type | Description |
|---|---|---|
| BindAddress | String | Local IP address to bind to. The bindAddress argument can be used on a multihomed host for a ServerSocket that will only accept connect requests to one of its addresses. Default value: 0.0.0.0 |
| FirstMessage | String | The first welcome message received after a successful login procedure. Default value: MA MML PAGE 1 |
| LineTerminationCharacter | String | Defines what the client receives for each line break. Possible values: CRNL = \r\n, CR = \r, NL = \n, NONE = "". Default value: CRNL |
| NotAcceptMessage | String | Fault message sent by the server to the client when the simultaneous session resources for logging in to the MML are exceeded. Default value: Max number of connections exceeded |
| Port | Integer | The port number listening for incoming MML traffic. Default value: 8111 |
| Prompt | String | The prompt visible to the client after a successful login attempt. Default value: < To add special characters in the prompt output in the XML (template) document, the following syntax must be used: " = \03" ' = \03&apos < = \03< > = \03> & = \03& |
| SessionTimeout | Integer | The number of seconds a session is idle until it times out. Default value: 300 |
| SimultaneousSession | Integer | Maximum number of simultaneous session resources that can be connected. Inherits the same value as for Telnet, which by default is 160. |
| LoginDomain | String | Defines whether DOMAIN is shown in the login prompt or not. Default value: false |
- Note:
- If the port number is changed, the eVIP configuration needs to be updated to enable the new port in the firewall.
6.2.7 Configuration of CAI Interface
This section contains information on how the CAI interface configuration works the first time it gets enabled and operated, as well as how to modify the parameters.
The first time the CAI interface is deployed, the parameters in Table 30 are initiated with default values.
To modify the default values of these parameters, do the following on one of the SC nodes:
- Run the bootloader.py command to list the configuration information for the system:
$ bootloader.py config list --show_all
Then find the CAI configuration in nbia-module.
- Run the bootloader.py command to modify the values of the parameters.
- To set the new value, do the following:
The new value is saved in the user_config file.
$ bootloader.py config set --parameter <parameter> --value <value>
Example of setting a new value for CAI_IDLE_TIMEOUT_SERVER1:
$ bootloader.py config set --parameter @CAI_IDLE_TIMEOUT_SERVER1@ --value 200
INFO - *** @CAI_IDLE_TIMEOUT_SERVER1@=200 inserted in user configuration
INFO - *** To activate the changed parameter, execute "bootloader node activate" command
Check user_config for the changed parameter:
$ cat /home/bootloader/config/user_config
@CAI_IDLE_TIMEOUT_SERVER1@=200
- To restore the default value, do the following:
The value being restored is removed from the user_config file.
$ bootloader.py config remove --parameter <parameter>
Example of restoring the value for CAI_IDLE_TIMEOUT_SERVER1:
$ bootloader.py config remove --parameter @CAI_IDLE_TIMEOUT_SERVER1@
INFO - *** Removed @CAI_IDLE_TIMEOUT_SERVER1@ from user configuration
INFO - *** To activate the changed parameter, execute "bootloader node activate" command
- From an SC node, run the following command to activate the modification for each PL node, one by one:
- Attention!
- Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
- Verify the changes by logging in to the CAI interface.
- Note:
- The new value takes effect after executing the bootloader node activate command.
- When the CAI port number is changed, VIP configurations also need to be updated. For details on updating VIP configurations, refer to Hardening Guideline for Native Deployment, Reference [11].
| Parameter | Description | Type | Default Value |
|---|---|---|---|
| CAI_SERVER_CONNECTED_MESSAGE_SERVER1 | Set of characters displayed when the connection is accepted by the CAI server | String | CONNECTING TO CAI...\n\nPROCESS CaiServer1/2 CONNECTED... |
| CAI_SERVER_CONNECTED_MESSAGE_SERVER2 | | | |
| CAI_PORT_NUMBER_SERVER1 | | Integer. Value range: inclusively from 1024 to 65534 | 3300/3301 |
| CAI_SSL_PORT_NUMBER_SERVER2 | | | |
| CAI_MAX_CONNECTIONS_EXCEEDED_SERVER1 | Set of characters displayed when the maximum number of CAI connections is exceeded | String | Max number of connections exceeded. |
| CAI_MAX_CONNECTIONS_EXCEEDED_SERVER2 | | | |
| CAI_MAX_CONNECTIONS_SERVER1 | Maximum number of CAI connections | Integer. Value range: inclusively from 1 to 1200 | 160 |
| CAI_MAX_CONNECTIONS_SERVER2 | | | |
| CAI_WORKER_THREADS1 | Maximum number of CAI worker threads | Integer. Value range: inclusively from 1 to 1200 | 32 |
| CAI_WORKER_THREADS2 | | | |
| CAI_SHOW_EMPTY_ARG_SERVER1 | Shows an argument in a CAI response when it does not have a value (1: to show; 0: not to show) | Enumeration value: 0 or 1 | 0 |
| CAI_SHOW_EMPTY_ARG_SERVER2 | | | |
| CAI_PROMPT_SERVER1 | CAI prompt displayed to the user | String | Enter command: |
| CAI_PROMPT_SERVER2 | | | |
| CAI_REQUEST_CONFIRMATION_SERVER1 | Set of characters that confirm the request has been sent | Enumeration value | CRNL |
| CAI_REQUEST_CONFIRMATION_SERVER2 | | | |
| CAI_MAX_LOGIN_RETRY_SERVER1 | Maximum number of retries when login fails (-1 means unlimited) | Integer. Value range: inclusively from -1 to 100 | -1 |
| CAI_MAX_LOGIN_RETRY_SERVER2 | | | |
| CAI_IDLE_TIMEOUT_SERVER1 | Maximum time a CAI session is active when there is no interaction (0 means infinite) | Integer (in seconds). Value range: inclusively from 0 to 3600 | 300 |
| CAI_IDLE_TIMEOUT_SERVER2 | | | |
| CAI_CONNECTION_TYPE_SERVER1 | Type of connection that the CAI server uses (read only) | String | TCPIP |
| CAI_CONNECTION_TYPE_SERVER2 | | | |
(1) When the CAI port number has been changed, the new port number needs to be added to the Customer Iptables Rules. For more information, see Section 5.3.2
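The effect of config set and config remove on the user_config file, as shown in the examples above, can be modeled with a small sketch. This is an illustration of the file semantics only; on a real system, always use bootloader.py rather than editing user_config directly.

```shell
# Sketch of the user_config semantics described above: "config set" records a
# @PARAMETER@=value line in the file (replacing any earlier line for the same
# parameter), and "config remove" deletes the line so the default applies
# again. Parameter names are assumed to contain no regex metacharacters
# beyond '@', which is literal in grep patterns.
config_set()    { config_remove "$1" "$3"; echo "$1=$2" >> "$3"; }
config_remove() { grep -v "^$1=" "$2" > "$2.tmp" || true; mv "$2.tmp" "$2"; }
```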
6.2.8 Configuration of EDIFACT Interface
This section contains information on how the EDIFACT interface configuration works the first time it gets enabled and operated, as well as how to modify the parameters.
The first time the EDIFACT interface is deployed, the parameters in Table 31 are initiated with default values.
To modify the default values of these parameters, do the following on one of the SC nodes:
- Run the bootloader.py command to list the configuration information for the system:
$ bootloader.py config list --show_all
Then find the EDIFACT configuration in nbia-module.
- Run the bootloader.py command to modify the values of a parameter.
- To set the new value, do the following:
The new value is saved in the user_config file.
$ bootloader.py config set --parameter <parameter> --value <value>
Example of setting a new value for EDIFACT_SERVER_STATUS:
$ bootloader.py config set --parameter @EDIFACT_SERVER_STATUS@ --value disable
INFO - *** @EDIFACT_SERVER_STATUS@=disable inserted in user configuration
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
Check user_config for the changed parameter:
$ cat /home/bootloader/config/user_config
@EDIFACT_SERVER_STATUS@=disable
- To restore the default value, do the following:
The value being restored is removed from the user_config file.
$ bootloader.py config remove --parameter <parameter>
Example of restoring the value for EDIFACT_SERVER_STATUS:
$ bootloader.py config remove --parameter @EDIFACT_SERVER_STATUS@
INFO - *** Removed @EDIFACT_SERVER_STATUS@ from user configuration
INFO - *** To activate the changed parameter, execute "bootloader.py node activate --host <hostname>" command
- From an SC node, run the following command to activate the modification for each PL node, one by one:
- Attention!
- Wait for each PL node to be activated before starting with the next one, otherwise traffic disturbances occur. The all parameter should only be used when no provisioning traffic is running.
$ bootloader.py node activate --host <hostname>
<hostname> is the hostname of the PL node that is to be activated.
- Verify the changes by logging in to the EDIFACT interface.
- Note:
- The new value takes effect after executing the bootloader node activate command.
- When the EDIFACT port number is changed, VIP configurations also need to be updated. For details on updating VIP configurations, refer to Hardening Guideline for Native Deployment, Reference [11].
| Parameter | Description | Type | Default Value |
|---|---|---|---|
| EDIFACT_WORKER_THREADS | Maximum number of Dynamic Activation worker threads that receive EDIFACT requests. | Integer. Value range: inclusively 0 to 100 | 20 |
| EDIFACT_PORT_NUMBER | | Integer. Value range: inclusively 1024 to 65535 | 3010 |
| EDIFACT_SERVER_STATUS | The status of the EDIFACT server. | String. Value range: disable or enable | enable |
| EDIFACT_NOTIFICATIONACK_TIMEOUT | Response timeout in seconds to wait for a response from BSCS after EDIFACT sent a notification. | Integer. Value range: inclusively 0 to 300 | 3 |
| EDIFACT_NOTIFY_THREADS | Maximum number of notify threads that EDIFACT can process. | Integer. Value range: inclusively 0 to 100 | 20 |
(1) When the EDIFACT port number has been changed, the new port number needs to be added to the Customer Iptables Rules. For more information, see Section 5.3.2
6.2.8.1 Configuring bscs.xml for EDIFACT interface
bscs.xml is used to configure the notification receivers on the BSCS node; the configurable parameters are listed in Table 32. The execution result of EDIFACT requests is sent to the notification receivers based on the senderId in the request.
To configure the parameters of the EDIFACT receiver:
- Edit the file /home/bootloader/config/module_config_files/nbia-module/bscs.xml. The bscs.xml file supports multiple EDIFACT receivers.
Below is an example of configuring two notification receivers in bscs.xml:
<p:bscs senderId="sender01">
  <p:receiverHost>127.0.0.1</p:receiverHost>
  <p:receiverPort>60006</p:receiverPort>
  <p:usernameInNGN>example_user</p:usernameInNGN>
</p:bscs>
<p:bscs senderId="sender02">
  <p:receiverHost>127.0.0.1</p:receiverHost>
  <p:receiverPort>60007</p:receiverPort>
  <p:usernameInNGN>example_user</p:usernameInNGN>
</p:bscs>
- From an SC node, run the following command for all PL nodes in the cluster, one by one, to activate:
$ bootloader.py node activate --host <hostname>
| Parameter | Description | Value Range |
|---|---|---|
| senderId | Name of the EDIFACT receiver to which the EDIFACT response is forwarded. | Any effective string |
| receiverHost | Host address of the EDIFACT notification receiver. | Inclusively 0.0.0.0 to 255.255.255.255 |
| receiverPort | Port of the EDIFACT notification receiver. | Inclusively 1024 to 65535 |
| usernameInNGN | Dynamic Activation user who processes the request. | Any effective string - a Dynamic Activation user name with sufficient authority. |
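Since EDIFACT responses are routed by senderId, each senderId in bscs.xml should be distinct. The following sketch, written for this document, lists the configured senderId values and fails if any occurs twice; it uses plain grep/sed for illustration and assumes the attribute layout shown in the example above, not arbitrary XML.

```shell
# Sketch: list the senderId values configured in a bscs.xml file and check
# that no senderId occurs twice. Not an XML parser; assumes one
# senderId="..." attribute per <p:bscs> opening tag, as in the example.
list_sender_ids() {
    grep -o 'senderId="[^"]*"' "$1" | sed 's/senderId="\(.*\)"/\1/'
}

check_unique_sender_ids() {
    # succeeds only when uniq -d finds no duplicated senderId
    [ -z "$(list_sender_ids "$1" | sort | uniq -d)" ]
}
```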
6.2.9 Load Default NE Groups and Routing Methods
This section describes how to load the default NE groups and routing methods.
- Log in to one of the PL nodes.
- Load the default NE groups and routing methods:
$ sudo -u actadm /usr/local/pgngn/dve-application-<version>/bin/installDefaultConfig.sh <provisioning client user> <provisioning client user password>
Are you sure you want to load the default configuration? This will replace existing configuration (y/n)
Enter y and press Enter.
6.3 UDC Data Durability
This section contains information regarding manual replay operations, and operational instructions in the event of a replay failure.
6.3.1 Manual Replay Operation
This section contains information on how to run a manual replay operation using the Replay Admin Tool.
- Note:
- Before issuing a manual replay operation, make sure that the consolidated processing logs are in a consistent state for the indicated time span.
6.3.1.1 Running Replay Admin Tool
The commands and respective command options to use for manual replay operations are shown in Table 33.
| Command | Command Options | Description | Example |
|---|---|---|---|
| executeReplay | -u=<username>, --username=<username> | Username for the Replay Administrator user. | replay-admin-tool.sh executeReplay -u=user -p=password -c='<replay-command>' or replay-admin-tool.sh executeReplay -u=user -p=password -c='<replay-command>' -pb=true |
| | -p=<password>, --password=<password> | Password for the Replay Administrator user. | |
| | -c=<replay cmd>, --cmd=<replay cmd> | Manual replay command to execute. | |
| | -pb=<true\|false>, --provisioningblock=<true\|false> | Set provisioning block to <true\|false> until replay is complete (false by default). | |
| getReplaystate | -u=<username>, --username=<username> | Username for the Replay Administrator user. | replay-admin-tool.sh getReplaystate -u=user -p=password |
| | -p=<password>, --password=<password> | Password for the Replay Administrator user. | |
| provisioningBlock | -u=<username>, --username=<username> | Username for the Replay Administrator user. | replay-admin-tool.sh provisioningBlock -u=user -p=password -a=true |
| | -p=<password>, --password=<password> | Password for the Replay Administrator user. | |
| | -a=<true\|false>, --active=<true\|false> | Set provisioning block to <true\|false> until changed (false by default). | |
The replay command is as follows:
REPLAY:STARTTIME=<POSIX time start>,ENDTIME=<POSIX time end>,[SOUTHBOUND=<PROTOCOL>],[TARGET=<NE-GROUP>],[ROOTLOGIDS="<RootLogId1, RootLogId2, ...>"];
Where:
- " []" indicates an optional parameter that can be omitted.
- All parameters, mandatory or optional ones, can be put in any chosen order.
- Note:
- To check if an executed replay operation is successful or failed, check for posted events or alarms for the system, see Event and Alarm Handling, Reference [7].
Run the Replay Admin Tool from SC-1 or SC-2 on LDE:
For local use, the Replay Admin Tool executable shell script, replay-admin-tool.sh, is located in the /opt/dve/bin directory.
The following example shows how to run a manual replay operation for a specific time span:
Example 1
$ replay-admin-tool.sh executeReplay -u=replayuser -p=replaypassword -c='REPLAY:SOUTHBOUND=LDAP,STARTTIME=1423652362,ENDTIME=1423652555;'
Output:
Do you want to continue without setting provisioning block?
1) Yes
2) No, activate provisioning block for me
3) Don't execute manual replay
input: #? 1
continued output:
Manual replay accepted
.............................................................
Replay is completed.
The following example shows how to run a manual replay operation for a specific time span, specifying a target NE Group:
Example 2
$ replay-admin-tool.sh executeReplay -u=replayuser -p=replaypassword -c='REPLAY:SOUTHBOUND=LDAP,TARGET=CUDB-GROUP,STARTTIME=1423652362,ENDTIME=1423652555;'
Output:
Do you want to continue without setting provisioning block?
1) Yes
2) No, activate provisioning block for me
3) Don't execute manual replay
input: #? 1
continued output:
Manual replay accepted
.............................................................
Replay is completed.
The following example shows how to run a manual replay operation for a specific time span, with selective replay using list of RootLogId(s):
Example 3
$ replay-admin-tool.sh executeReplay -u=replayuser -p=replaypassword -c='REPLAY:SOUTHBOUND=LDAP,TARGET=CUDB-GROUP,STARTTIME=1423652362,ENDTIME=1423652555,ROOTLOGIDS="PL-31503301827491798CAI3G1_2,PL-31503301827491799CAI3G1_2";'
Output:
Do you want to continue without setting provisioning block?
1) Yes
2) No, activate provisioning block for me
3) Don't execute manual replay
input: #? 1
continued output:
Manual replay accepted
.............................................................
Replay is completed.
The following example shows how to run a manual replay operation for a specific time span, with northbound provisioning being blocked for this command only:
Example 4
$ replay-admin-tool.sh executeReplay -u=replayuser -p=replaypassword -c='REPLAY:SOUTHBOUND=LDAP,STARTTIME=1423652362,ENDTIME=1423652555;' -pb=true
Output:
Provisioning blocked
Manual replay accepted
.............................................................
Replay is completed.
Provisioning block removed
The following example shows how to query replay state for an ongoing replay operation:
Example 5
$ replay-admin-tool.sh getReplaystate -u=replayuser -p=replaypassword
Output:
Replay state: REPLAY
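The replay command strings used in the examples above can be assembled programmatically. The following is a sketch written for this document, not a tool shipped with the product; it always emits STARTTIME and ENDTIME first and appends the optional parameters only when supplied, which is valid because parameter order is free.

```shell
# Sketch: assemble a REPLAY command string following the syntax described in
# this section. STARTTIME and ENDTIME are mandatory; SOUTHBOUND, TARGET and
# ROOTLOGIDS are optional and are skipped when the argument is empty.
build_replay_cmd() {
    # $1: start (POSIX time)  $2: end (POSIX time)
    # $3: southbound protocol  $4: target NE group  $5: comma-separated RootLogIds
    local cmd="REPLAY:STARTTIME=$1,ENDTIME=$2"
    [ -n "$3" ] && cmd="$cmd,SOUTHBOUND=$3"
    [ -n "$4" ] && cmd="$cmd,TARGET=$4"
    [ -n "$5" ] && cmd="$cmd,ROOTLOGIDS=\"$5\""
    echo "$cmd;"
}
```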
6.3.2 Operational Instruction in the Event of a Failed Replay Operation
Failed replay operations result in replay alarms and events being raised, indicating the corresponding time span that is subject to a manual replay operation.
- Note:
- It is recommended to always block northbound provisioning in Dynamic Activation before using the following methodology to perform a manual replay. This avoids race conditions between replay operations and new service orders that affect the same set of LDAP entries in CUDB.
In the event of replay failure, use the following steps to secure data durability for the indicated time span:
The following definitions are used in the step-list below for greater readability.

| T0 | Requested timestamp from when to start replaying. |
| T1 | Timestamp when receiving the replay notification from CUDB. |
| Tnow | Current time. |
| T0-T1 | Indicated time span. |
- Log in on SC1 or SC2.
- Run the replay-admin-tool.sh provisioningBlock -u=<user> -p=<password> -a=true command to block northbound provisioning.
For example:
$ replay-admin-tool.sh provisioningBlock -u=replayuser -p=replaypassword -a=true
- Run the replay-admin-tool.sh command for the time span indicated in the alarm, with northbound
provisioning block set.
For example:
$ replay-admin-tool.sh executeReplay -u=replayuser -p=replaypassword -c='REPLAY:SOUTHBOUND=LDAP,STARTTIME=1423652362,ENDTIME=1423652555;'
- Run the proclog-admin-tool.sh command and export processing logs to CSV file format for the indicated
time span (T0-T1). Convert POSIX time stamp to human readable format
for the local time zone.
For example:
$ cd /usr/local/pgngn/admin-tool-<version>/bin
$ sudo -u actadm ./proclog-admin-tool.sh -et "2015-02-11 10:59:22 2015-02-11 11:02:35" -p <export_dir_path>
For more information on the command, refer to Table 21.
- Run the proclog-admin-tool.sh command and export processing logs to CSV file format ranging after
the indicated time span until current time (T1-Tnow).
For example:
$ sudo -u actadm ./proclog-admin-tool.sh -et "2015-02-11 11:02:36 2015-02-11 12:00:00" -p <export_dir_path>
For more information on the command, refer to Table 21.
- Compare the two CSV files and identify any subsequent
log records using the same instance values, as for the indicated time
span that is subject to replay.
If any log records were identified using the same Instance values, note the corresponding RootLogId(s) and proceed with Step 7.
- Run the replay-admin-tool.sh command ranging from indicated time span until current time (T1-Tnow),
with selective replay using list of RootLogId(s).
For example:
$ replay-admin-tool.sh executeReplay -u=replayuser -p=replaypassword -c='REPLAY:SOUTHBOUND=LDAP,TARGET=CUDB-GROUP,STARTTIME=1423652556,ENDTIME=1423656000,ROOTLOGIDS="PL-31502111103491798CAI3G1_2,PL-31502111103491799CAI3G1_2";'
- Run the replay-admin-tool.sh provisioningBlock -u=<user> -p=<password> -a=false command to manually remove the block for northbound provisioning.
For example:
$ replay-admin-tool.sh provisioningBlock -u=replayuser -p=replaypassword -a=false
7 Maintenance Routines
This section describes the maintenance routines recommended to be performed on a daily, weekly, and monthly basis.
7.1 Daily Maintenance
This section holds an overview of the maintenance routines recommended to be performed daily.
- Check /var/log/messages.
For information about how to perform this check, see Section 4.8.
- Check for active alarms.
For information about how to perform this check, see Section 4.8.
- Check processing logs.
- Removal of old processing logs is performed by the proclog-retain-maximum-days.sh crontab script.
Because the oldest logs are deleted, it is recommended to export the processing logs according to the instructions in Section 5.11.
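The /var/log/messages check can be partly scripted. A minimal sketch using a self-contained sample log and illustrative match patterns (on an SC node, point LOGFILE at /var/log/messages instead; the patterns are not an exhaustive list of problem indicators):

```shell
# Count log lines that look like problems (patterns are illustrative only).
LOGFILE=sample-messages.log    # on a live node: LOGFILE=/var/log/messages
printf '%s\n' \
  'Feb 11 10:59:22 SC-1 sshd[123]: Accepted password for actadm' \
  'Feb 11 11:00:01 SC-1 kernel: I/O error, dev sda, sector 1234' \
  'Feb 11 11:01:15 SC-1 clusterd[88]: failover triggered for node PL-3' \
  > "$LOGFILE"
grep -ciE 'error|fail|crit' "$LOGFILE"
```

A count above zero indicates lines that warrant a closer look with grep without the -c option.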
7.2 Weekly Maintenance
This section holds an overview of the maintenance routines recommended to be performed on a weekly basis.
- Perform a system check according to instructions in Section 4.8.
7.3 Monthly Maintenance
This section holds an overview of the maintenance routines recommended to be performed on a monthly basis.
- For information about maintaining SSL certificates, see Section 4.8.3.
- For information about backup of Dynamic Activation Configuration, see Backup and Restore Guideline for Native Deployment, Reference [12].
- For information about log files, see Section 2.3.4.
Reference List
| Ericsson Documents |
|---|
| [1] Library Overview, 18/1553-CSH 109 628 Uen |
| [2] Glossary of Terms and Acronyms, 0033-CSH 109 628 Uen |
| [3] Hardware Specification, 1/2135-CSH 109 628 Uen |
| [4] CUDB Subscription Repair and Remove Procedures, 4/1553-CSH 109 628 Uen |
| [5] LDE Product Description, 1/1551-APR 901 0551/1 |
| [6] eVIP on LSB Management Guide, 3/1553-APR 901 0467/1 Uen |
| [7] Event and Alarm Handling, 3/1553-CSH 109 628 Uen |
| [8] User Guide for Resource Activation, 1/1553-CSH 109 628 Uen |
| [9] Configuration Manual for Resource Activation, 2/1543-CSH 109 628 Uen |
| [10] ESA Setup and Configuration, 1/1543-FAM 901 455 Uen |
| [11] Hardening Guideline for Native Deployment, 1/154 43-CSH 109 628 Uen |
| [12] Backup and Restore Guideline for Native Deployment, 2/1553-CSH 109 628 Uen |
| [13] LDE Management Guide, 1/1553-CAA 901 2978/1 Uen |
| [14] Layered HLR AUC Provisioning over MML, 5/155 19-CSH 109 628 Uen |
| Online References |
|---|
| [15] The nodetool utility, https://docs.datastax.com/en/cassandra/2.2/cassandra/tools/toolsNodetool.html. |
| [16] OpenID Connect Core 1.0, http://openid.net/specs/openid-connect-core-1_0.html. |
| [17] OpenID Connect Session Management 1.0, http://openid.net/specs/openid-connect-session-1_0.html. |
