Data Migration from IPWorks HP 15B FD1 CP3 to IPWorks 2

Contents

1     Introduction
2     Timeline
3     Prerequisites
3.1   Prepare scripts
4     Backup
4.1   SS Backup
4.2   PS Backup
5     Migration
5.1   Common Migration
5.2   DNS Service Migration
5.3   ENUM Service Migration
5.4   AAA Diameter Service Configuration Data Migration
5.5   AAA Radius Service Migration
6     Troubleshooting

Reference List

1   Introduction

This document describes the data migration from IPWorks HP to IPWorks 2, covering the DNS, ENUM, AAA Diameter, and AAA Radius services.

Other services and scenarios are out of scope.

If any issue occurs during data migration, consult the next level of maintenance support.

2   Timeline

The results in Table 1 are based on lab tests and are provided for reference only. The actual duration of the IPWorks data migration varies with the volume of user data.

Table 1 lists the timeline for each part of the migration.

Table 1    Timeline for IPWorks Data Migration

Service        Duration (mins)   Section/Action                 Comments
DNS            2                 Common Backup                  Includes SS configuration
               2                 Database Backup
               2                 DNS Backup
               2                 Common Migration
               8                 DNS Database Migration         2 million arecord entries
               2                 DNS Migration
ENUM           2                 Common Backup                  Includes SS configuration
               2                 Database Backup
               2                 ENUM Backup
               2                 Common Migration
               84                ENUM Database Migration        24 million arecord entries
               2                 ENUM Migration
DNS + ENUM     2                 Common Backup                  Includes SS configuration
               2                 Database Backup
               4                 DNS, ENUM Backup
               2                 Common Migration
               8                 DNS, ENUM Database Migration   1 million arecord and 1 million naptrrecord entries
               4                 DNS, ENUM Migration
AAA Diameter   2                 Common Backup                  Includes SS configuration
               2                 Diameter Database Backup
               2                 Diameter Backup
               2                 Common Migration
               15                Diameter Database Migration    2 million aaauser entries
               2                 Diameter Migration
AAA Radius     2                 Common Backup                  Includes SS configuration
               2                 Radius Database Backup
               2                 Radius Backup
               2                 Common Migration
               15                Radius Database Migration      2 million aaauser entries
               2                 Radius Migration

 
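As a rough planning aid, the per-service durations in Table 1 can be totalled with a short script. This is an illustrative sketch only; the figures are the lab reference values from the table, and actual times vary with data volume.

```python
# Per-service step durations (minutes) taken from Table 1 (lab reference values).
TIMELINE = {
    "DNS": [2, 2, 2, 2, 8, 2],
    "ENUM": [2, 2, 2, 2, 84, 2],
    "DNS + ENUM": [2, 2, 4, 2, 8, 4],
    "AAA Diameter": [2, 2, 2, 2, 15, 2],
    "AAA Radius": [2, 2, 2, 2, 15, 2],
}

def total_minutes(service: str) -> int:
    """Return the total lab-measured duration for one service complex."""
    return sum(TIMELINE[service])

if __name__ == "__main__":
    for service in TIMELINE:
        print(f"{service}: {total_minutes(service)} min")
```

The ENUM total is dominated by the 84-minute database migration of 24 million records, which is the step most sensitive to the actual data volume.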

3   Prerequisites

This section states the prerequisites that must be fulfilled. It assumes that readers have the required knowledge and experience.

Before executing the migration, the IPWorks VNF deployment must be completed. For the deployment, refer to the IPWorks Deployment Guide.

To execute the data migration, root access is needed.

3.1   Prepare scripts

The IPWorks software packages can be retrieved from SW Gateway. For specific information, see the product release notes.

The IPWorks software package CXP9029034_3_Ux_<Revision Number>.tar.gz is used for the data migration.

CXP9029034_3_Ux_K.tar.gz is used as an example in this document.

4   Backup

This section describes the IPWorks data backup steps performed on IPWorks HP for the data migration.

To perform the backup, see Table 2:

Table 2    Backup Operations per Service Complex

Service Complex   Operation                     Deployment Scenario   Reference
DNS               SS Backup                     For all deployments   Section 4.1
                  DNS Service Backup                                  Section 4.2.1
ENUM              SS Backup                     For all deployments   Section 4.1
                  ENUM Service Backup                                 Section 4.2.2
DNS + ENUM        SS Backup                     For all deployments   Section 4.1
                  DNS Service Backup                                  Section 4.2.1
                  ENUM Service Backup                                 Section 4.2.2
AAA Diameter      SS Backup                     For all deployments   Section 4.1
                  AAA Diameter Service Backup                         Section 4.2.3
AAA Radius        SS Backup                     For all deployments   Section 4.1
                  Radius Service Backup                               Section 4.2.4

4.1   SS Backup

Back up the configuration and database on the active SS node.

  1. Prepare the backup scripts.

    Log on to the active SS node.

    Copy the package CXP9029034_3_Ux_K.tar.gz to /tmp on the active SS node.

    # cd /tmp

    # tar -zxvf CXP9029034_3_Ux_K.tar.gz

    # cd datamigrationtool/

    # cd backup/

    # chmod +x ipwbackup_for_datamigration.sh ipw_get_service_conf.py ipwbackup_serverinfo.sh server_info.py

  2. Back up the IPWorks common configuration data.

    # ./ipwbackup_for_datamigration.sh COMMON

    Note:  
    It generates a file called "ipwbackup_<time>_<date>_<hostname>_for_COMMON.tar.gz" under /tmp/dest.

    <time>_<date> displays Coordinated Universal Time (UTC).


    Expected result:

    --------------------------------------------

    IPWORKS Backup/Migration Utility

    --------------------------------------------

    ...

    File Created:

    ../dest/ipwbackup_025152_04102017_ipwm2ss-01_for_COMMON.tar.gz

    Copy the backup file to another machine or an external device.

  3. Back up the IPWorks database.

    # ./ipwbackup_for_datamigration.sh DB

    Note:  
    It generates a file called "ipwbackup_<time>_<date>_<hostname>_for_DB.tar.gz" under /tmp/dest.

    <time>_<date> displays Coordinated Universal Time (UTC).


    Expected result:

    --------------------------------------------

    IPWORKS Backup/Migration Utility

    --------------------------------------------

    ...

    File Created:

    ../dest/ipwbackup_025152_04102017_ipwm2ss-01_for_DB.tar.gz

    Note:  
    If no backup is generated, refer to section 6.

    Copy the backup file to another machine or an external device.

  4. Back up the network deployment information.

    # ./ipwbackup_serverinfo.sh

    A file named "server_address.csv" will be generated under /tmp/dest.

    For example:

    # cat server_address.csv

    server,name,host,ip
    AAAServer,aaaserver01,ci-wsqr2fwytfsy-ipworks28,192.168.23.79
    DNSServer,dns1,ci-wsqr2fwytfsy-ipworks28,192.168.23.79

    Copy the backup file to another machine or an external device.
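The server_address.csv generated in step 4 can also be inspected programmatically, for example to cross-check node addresses before the migration. A minimal sketch, where `load_server_info` is an illustrative helper (not part of the delivered tooling) and the sample rows copy the example output above:

```python
import csv
import io

# Sample content in the format of the generated server_address.csv
# (header: server,name,host,ip; rows as in the example above).
SAMPLE = """\
server,name,host,ip
AAAServer,aaaserver01,ci-wsqr2fwytfsy-ipworks28,192.168.23.79
DNSServer,dns1,ci-wsqr2fwytfsy-ipworks28,192.168.23.79
"""

def load_server_info(text: str) -> list[dict]:
    """Parse server_address.csv content into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_server_info(SAMPLE)
for row in rows:
    print(row["server"], row["name"], row["ip"])
```

On the SS node itself, the same helper would be fed the contents of /tmp/dest/server_address.csv.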

4.1.1   Backup the Configuration on the Standby SS Node

Back up the configuration on the standby SS node, if one exists.

  1. Prepare the backup scripts.

    Log on to the standby SS node.

    Copy the package CXP9029034_3_Ux_K.tar.gz to /tmp on the standby SS node.

    # cd /tmp

    # tar -zxvf CXP9029034_3_Ux_K.tar.gz

    # cd datamigrationtool/

    # cd backup/

    # chmod +x ipwbackup_for_datamigration.sh ipw_get_service_conf.py ipwbackup_serverinfo.sh server_info.py

  2. Back up the IPWorks common configuration data.

    # ./ipwbackup_for_datamigration.sh COMMON

    Note:  
    It generates a file called "ipwbackup_<time>_<date>_<hostname>_for_COMMON.tar.gz" under /tmp/dest.

    <time>_<date> displays Coordinated Universal Time (UTC).


    Expected result:

    --------------------------------------------

    IPWORKS Backup/Migration Utility

    --------------------------------------------

    ...

    File Created:

    ../dest/ipwbackup_025152_04102017_ipwm2ss-02_for_COMMON.tar.gz

    Copy the backup package to another machine or an external device.
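Backup file names encode the UTC timestamp, host name, and backup type, which is useful when collecting files from several nodes. The helper below is an illustrative sketch (not part of the delivered tooling) that splits a name into its parts:

```python
import re

# Backup files are named ipwbackup_<time>_<date>_<hostname>_for_<TYPE>.tar.gz,
# where <time>_<date> is in UTC, e.g.
# ipwbackup_025152_04102017_ipwm2ss-01_for_COMMON.tar.gz
PATTERN = re.compile(
    r"^ipwbackup_(?P<time>\d{6})_(?P<date>\d{8})_(?P<hostname>.+)_for_(?P<type>[\w-]+)\.tar\.gz$"
)

def parse_backup_name(filename: str) -> dict:
    """Return the time, date, hostname, and backup type encoded in the name."""
    match = PATTERN.match(filename)
    if match is None:
        raise ValueError(f"not a recognised backup file name: {filename}")
    return match.groupdict()

info = parse_backup_name("ipwbackup_025152_04102017_ipwm2ss-01_for_COMMON.tar.gz")
print(info["hostname"], info["type"])
```

The type group also covers the hyphenated AAA types (AAA-Dia, AAA-Rad) produced in Section 4.2.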

4.2   PS Backup

The PS backup contains the DNS, ENUM, AAA Diameter, and AAA Radius service backups, as described in the following sections.

4.2.1   DNS Service Backup

To back up the DNS service, perform the following steps on each PS node:

  1. Prepare the backup scripts

    Copy the package CXP9029034_3_Ux_K.tar.gz to /tmp.

    # cd /tmp

    # tar -zxvf CXP9029034_3_Ux_K.tar.gz

    # cd datamigrationtool/

    # cd backup/

    # chmod +x ipwbackup_for_datamigration.sh ipw_get_service_conf.py

  2. Back up the DNS configuration.

    # ./ipwbackup_for_datamigration.sh DNS

    Note:  
    It generates a file called "ipwbackup_<time>_<date>_<hostname>_for_DNS.tar.gz" under /tmp/dest.

    <time>_<date> displays Coordinated Universal Time (UTC).


    Expected result:

    --------------------------------------------

    IPWORKS Backup/Migration Utility

    --------------------------------------------

    ...

    File Created:

    ../dest/ipwbackup_025152_04102017_ipwm2ps-01_for_DNS.tar.gz

    Note:  
    If no backup is generated, refer to section 6.

    Copy the backup file to another machine or an external device.

4.2.2   ENUM Service Backup

To back up the ENUM service, perform the following steps on each PS node:

  1. Prepare the backup scripts

    Copy the package CXP9029034_3_Ux_K.tar.gz to /tmp in PS node.

    # cd /tmp

    # tar -zxvf CXP9029034_3_Ux_K.tar.gz

    # cd datamigrationtool/

    # cd backup/

    # chmod +x ipwbackup_for_datamigration.sh ipw_get_service_conf.py

  2. Back up the ENUM configuration.

    # ./ipwbackup_for_datamigration.sh ENUM

    Note:  
    It generates a file called "ipwbackup_<time>_<date>_<hostname>_for_ENUM.tar.gz" under /tmp/dest.

    <time>_<date> displays Coordinated Universal Time (UTC).


    Expected result:

    --------------------------------------------

    IPWORKS Backup/Migration Utility

    --------------------------------------------

    ...

    File Created:

    ../dest/ipwbackup_025152_04102017_ipwm2ps-01_for_ENUM.tar.gz

    Note:  
    If no backup is generated, refer to section 6.

    Copy the backup file to another machine or an external device.

4.2.3   AAA Diameter Service Backup

To back up the AAA Diameter service, perform the following steps on each PS node:

  1. Prepare the backup scripts

    Copy the package CXP9029034_3_Ux_K.tar.gz to /tmp in PS node.

    # cd /tmp

    # tar -zxvf CXP9029034_3_Ux_K.tar.gz

    # cd datamigrationtool/

    # cd backup/

    # chmod +x ipwbackup_for_datamigration.sh ipw_get_service_conf.py

  2. Back up the AAA Diameter configuration.

    # ./ipwbackup_for_datamigration.sh AAA-Dia

    Note:  
    It generates a file called "ipwbackup_<time>_<date>_<hostname>_for_AAA-Dia.tar.gz" under /tmp/dest.

    <time>_<date> displays Coordinated Universal Time (UTC).


    Expected result:

    --------------------------------------------

    IPWORKS Backup/Migration Utility

    --------------------------------------------

    ...

    File Created:

    ../dest/ipwbackup_025152_04102017_ipwm2ps-01_for_AAA-Dia.tar.gz

    If no backup is generated, refer to section 6.

    Copy the backup file to another machine or an external device.

4.2.4   AAA Radius Service Backup

To back up the AAA Radius service, perform the following steps on each PS node:

  1. Prepare the backup scripts.

    Copy the package CXP9029034_3_Ux_K.tar.gz to /tmp in PS node.

    # cd /tmp

    # tar -zxvf CXP9029034_3_Ux_K.tar.gz

    # cd datamigrationtool/

    # cd backup/

    # chmod +x ipwbackup_for_datamigration.sh ipw_get_service_conf.py

  2. Back up the AAA Radius configuration.

    # ./ipwbackup_for_datamigration.sh AAA-Rad

    Note:  
    It generates a file called "ipwbackup_<time>_<date>_<hostname>_for_AAA-Rad.tar.gz" under /tmp/dest.

    <time>_<date> displays Coordinated Universal Time (UTC).


    Expected result:

    --------------------------------------------

    IPWORKS Backup/Migration Utility

    --------------------------------------------

    ...

    File Created:

    ../dest/ipwbackup_025152_04102017_ipwm2ps-01_for_AAA-Rad.tar.gz

    If no backup is generated, refer to section 6.

    Copy the backup file to another machine or an external device.

5   Migration

This section describes the migration steps for each service complex.

Before executing the migration, make sure that the backups described in Section 4 have been completed and copied to the IPWorks 2 system.

To perform the migration, see Table 3:

Table 3    Migration Operations per Service Complex

Service Complex   Operation                        Deployment Scenario             Reference
DNS               Common Migration                 For 2+2 and 2+2+2 deployments   Section 5.1
                  DNS Service Migration                                            Section 5.2
ENUM              Common Migration                 For 2+2 and 2+2+2 deployments   Section 5.1
                  ENUM Service Migration                                           Section 5.3
DNS + ENUM        Common Migration                 For 2+2 and 2+2+2 deployments   Section 5.1
                  DNS Service Migration                                            Section 5.2
                  ENUM Service Migration                                           Section 5.3
AAA Diameter      Common Migration                 For 2+2 and 2+2+2 deployments   Section 5.1
                  AAA Diameter Service Migration                                   Section 5.4
AAA Radius        Common Migration                 For 2+2 and 2+2+2 deployments   Section 5.1
                  AAA Radius Service Migration                                     Section 5.5

5.1   Common Migration

The Common migration contains the configuration data migration (Section 5.1.1) and the database migration (Section 5.1.2).

5.1.1   Configuration Data Migration

To execute the configuration data migration, do the following:

  1. Log on to the active SC node.

    # ssh <Username>@<MIP_OAM_IP>

    Password: <Password>

  2. Copy the migration tool to /cluster on the SC node.

    The migration tool is:

    CXP9029034_3_Ux_K.tar.gz

  3. Unpack the migration tool.

    SC-x:~ # tar -zxvf CXP9029034_3_Ux_K.tar.gz

    # cd datamigrationtool/

    SC-x:~ # ls

    backup/ dest/ migration/ ruleconf/ util/

  4. Copy the common backup file to /cluster/datamigrationtool/migration/.

    The backup file is:

    ipwbackup_<time>_<date>_<hostname>_for_COMMON.tar.gz

  5. Run the migration tool.

    SC-x:~ # cd /cluster/datamigrationtool/migration

    SC-x:~ # chmod +x ipw_migrate_service.py

    SC-x:~ # ./ipw_migrate_service.py COMMON SC-1 ipwbackup_<time>_<date>_<hostname>_for_COMMON.tar.gz

    A netconf XML file, "common_netconf_set_SC-1.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    Note:  
    For an AAA migration, make sure that the tables of ipw_prov_aaa are empty before the migration.

  6. Execute netconf command to import the netconf configuration.

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/common_netconf_set_SC-1.xml

Note:  
If any error, refer to section 6.
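Before importing a generated netconf file, a quick well-formedness check can catch truncated or corrupted output early. A minimal sketch using Python's standard library; the sample payload below is hypothetical and only exercises the check, and well-formedness does not validate the netconf semantics:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text: str) -> bool:
    """Return True if the given XML payload parses as XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# Hypothetical minimal payload, just to exercise the check; the real file
# is /cluster/datamigrationtool/dest/common_netconf_set_SC-1.xml.
sample = "<rpc><edit-config><target><running/></target></edit-config></rpc>"
print(is_well_formed(sample))        # parseable document
print(is_well_formed("<rpc><bad>"))  # truncated, not well-formed
```

On the SC node, the same function would be applied to the contents of the generated file read from /cluster/datamigrationtool/dest/.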

5.1.2   Database Migration

To execute the database migration, do the following:

  1. Log on to the active SC node.

    # ssh <Username>@<MIP_OAM_IP>

    Password: <Password>

  2. Check the status of the NDB Cluster Node as follows:

    SC-x:~ # ipw-ctr status all

    on SC-1 :

    ...

    sqlnodemgr is running as standby role.

    on SC-2 :

    ...

    sqlnodemgr is running as active role.

    The sqlnodemgr must be running on at least one SC node. If it is not, start sqlnodemgr as follows:

    SC-x:~ # ipw-ctr start sqlnodemgr

    Start sqlnodemgr ==> success.

  3. Copy the DB backup file to /cluster/datamigrationtool/migration/.

    The backup file is:

    ipwbackup_<time>_<date>_<hostname>_for_DB.tar.gz

  4. Run the migration tool

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # chmod +x ipw_migrate_db.py

    1. Database migration for Common part:

      SC-x:~ # ./ipw_migrate_db.py COMMON ipwbackup_<time>_<date>_<hostname>_for_DB.tar.gz

    2. Database migration for service:

      SC-x:~ # ./ipw_migrate_db.py <SERVICE_NAME> ipwbackup_<time>_<date>_<hostname>_for_DB.tar.gz

      The <SERVICE_NAME> can be set as DNS, ENUM or AAA according to the actual environment.

      If the <SERVICE_NAME> is set as AAA, a netconf XML file, "db_netconf_set_aaa.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

      Note:  
      Currently, the netconf XML file is generated only for AAA Radius services.

      If any error, refer to section 6.

  5. In the case of AAA Radius, execute the netconf command to import the netconf configuration.

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/db_netconf_set_aaa.xml

    Note:  
    If any error, refer to section 6.

  6. Configure the UserName and Password for the Storage Server.

    When migrating from IPWorks HP to IPWorks 2, configure the Storage Server UserName and Password with the same values as used in IPWorks HP.

    SC-x:~ # /opt/com/bin/cliss

    For information about how to use ECLI, refer to Ericsson Command-Line Interface User Guide.

    >configure

    (config)>ManagedElement=<Node Name>,IpworksFunction=1,IpworksCommonRoot=1,StorageServer=1,SSInterface=1

    (config-SSInterface=1)> username=<Storage Server UserName>

    (config-SSInterface=1)> password="<Storage Server Password>" cleartext

    (config-StorageServer=1)> commit

    (config-StorageServer=1)> exit

  7. Restart the Storage Server by ipw-ctr.

    SC-x:~ # ipw-ctr restart ss SC-1

    SC-x:~ # ipw-ctr restart ss SC-2
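The sqlnodemgr status check in step 2 can be mechanised by scanning the ipw-ctr output. A sketch; `sqlnodemgr_running` is a hypothetical helper, and the sample text copies the output shown in step 2 (the real input would come from running ipw-ctr status all):

```python
def sqlnodemgr_running(status_output: str) -> bool:
    """True if at least one SC node reports sqlnodemgr running."""
    return any(
        "sqlnodemgr is running" in line
        for line in status_output.splitlines()
    )

# Sample output in the shape shown in step 2 of the database migration.
sample = """\
on SC-1 :
sqlnodemgr is running as standby role.
on SC-2 :
sqlnodemgr is running as active role.
"""
print(sqlnodemgr_running(sample))
```

If the check returns False, sqlnodemgr must be started with ipw-ctr start sqlnodemgr before the database migration proceeds.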

5.2   DNS Service Migration

There are two types of DNS service; see Table 4.

Table 4    DNS Service Types

DNS Service Type            Reference
Scenario 1: DNS             See section 5.2.1
Scenario 2: iDNS and eDNS   See section 5.2.2

5.2.1   DNS Service Migration for Scenario 1: DNS

To execute the DNS service configuration data migration, do the following:

  1. Log on to the active SC node.

    # ssh <Username>@<MIP_OAM_IP>

    Password: <Password>

  2. Copy the DNS backup packages to /cluster/datamigrationtool/migration/.

    For example:

    The backup packages are:

    ipwbackup_<time>_<date>_<hostname 1>_for_DNS.tar.gz

    ipwbackup_<time>_<date>_<hostname 2>_for_DNS.tar.gz

    ...

    Because the DNS service needs two PLs for the migration, choose only two backup packages.

    For example:

    In case of 2+2 deployment: ipwbackup_<time>_<date>_<hostname 1>_for_DNS.tar.gz and ipwbackup_<time>_<date>_<hostname 2>_for_DNS.tar.gz for PL-3 and PL-4

    In case of 2+2+2 deployment: ipwbackup_<time>_<date>_<hostname 1>_for_DNS.tar.gz and ipwbackup_<time>_<date>_<hostname 2>_for_DNS.tar.gz for PL-5 and PL-6

  3. Run the migration tool on the SC for the two PLs.

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # ./ipw_migrate_service.py DNS <PL Node hostname> ipwbackup_<time>_<date>_<hostname n>_for_DNS.tar.gz

    Example:

    SC-x:~ # ./ipw_migrate_service.py DNS PL-3 ipwbackup_091108_04012017_ipwm2ps1_for_DNS.tar.gz

    A netconf XML file, "dns_netconf_set_PL-3.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    SC-x:~ # ./ipw_migrate_service.py DNS PL-4 ipwbackup_091108_04012017_ipwm2ps2_for_DNS.tar.gz

    A netconf XML file, "dns_netconf_set_PL-4.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    Note:  
    If any error, refer to section 6.

  4. Execute netconf command to import the netconf configuration.

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/dns_netconf_set_<PL node>.xml

    For example:

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/dns_netconf_set_PL-3.xml

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/dns_netconf_set_PL-4.xml

    Note:  
    If any error, refer to section 6.

  5. Change DNS server address for the two PLs.

    In case of a 2+2 deployment:

    # ipwcli

    IPWorks> modify dnsserver <dnsserver name 1> -set Address=169.254.100.3

    IPWorks> modify dnsserver <dnsserver name 2> -set Address=169.254.100.4

    IPWorks> exit

    In case of a 2+2+2 deployment:

    IPWorks> modify dnsserver <dnsserver name 3> -set Address=169.254.100.5

    IPWorks> modify dnsserver <dnsserver name 4> -set Address=169.254.100.6

    IPWorks> exit

  6. Restart the DNS service for each PL.

    Restart the DNS Server Manager.

    SC-x:~ #ipw-ctr restart dnssm <PL node>

    For example: ipw-ctr restart dnssm PL-3

    Restart the DNS service.

    SC-x:~ #ipw-ctr restart dns <PL node>

    For more information on how to start and stop DNS service, refer to the Section Starting and Stopping DNS Service in Configure DNS and ENUM.

  7. Update the DNS server.

    SC-x:~ #ipwcli

    IPWorks>update dnsserver <server name> -rebuild=true

    The result of performing the export is:

    • Exported the zone [MasterZone iptelco.com]
    • Exported configuration for [DnsServer dns1]
    • Updated the configuration for 'DNS' server 'dns1'

    IPWorks>exit

  8. Start ASDNS monitor

    Check whether the ASDNS monitor function is configured.

    SC-x:~ #ipwcli

    IPWorks>list monitor

    If no monitor is listed, skip this step. Otherwise, do the following:

    Modify ASDNS monitor address to internal IP address.

    • For 2+2 deployment:

      SC-x:~ #ipwcli

      IPWorks> modify monitor <monitor name 1> -set address=169.254.100.3

      IPWorks> modify monitor <monitor name 2> -set address=169.254.100.4

      IPWorks> exit

    • For 2+2+2 deployment:

      SC-x:~ #ipwcli

      IPWorks> modify monitor <monitor name 1> -set address=169.254.100.5

      IPWorks> modify monitor <monitor name 2> -set address=169.254.100.6

      IPWorks> exit

    Restart ASDNS monitor.

    • For 2+2 deployment:

      SC-x:~ # ipw-ctr restart asdnssm PL-3

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-3

      Stop asdns ==> success.

      Start asdns ==> success.

      SC-x:~ # ipw-ctr restart asdnssm PL-4

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-4

      Stop asdns ==> success.

      Start asdns ==> success.

    • For 2+2+2 deployment:

      SC-x:~ # ipw-ctr restart asdnssm PL-5

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-5

      Stop asdns ==> success.

      Start asdns ==> success.

      SC-x:~ # ipw-ctr restart asdnssm PL-6

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-6

      Stop asdns ==> success.

      Start asdns ==> success.

    Update ASDNS monitor.

    Update all ASDNS monitors based on the actual environment.

    SC-x:~ # ipwcli

    IPWorks> update monitor asdnsmon1

    IPWorks> exit
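The internal addresses assigned in steps 5 and 8 follow a fixed pattern: PL-3 maps to 169.254.100.3, PL-4 to 169.254.100.4, and so on. A small helper (illustrative only; the mapping is taken from the commands above) makes the pairing explicit:

```python
def internal_dns_address(pl_node: str) -> str:
    """Map a PL node name (e.g. "PL-3") to its internal address on the
    169.254.100.0 link-local subnet, as used in the modify dnsserver
    and modify monitor commands above."""
    index = int(pl_node.split("-")[1])
    return f"169.254.100.{index}"

# 2+2 deployments use PL-3/PL-4; 2+2+2 deployments use PL-5/PL-6.
for node in ("PL-3", "PL-4", "PL-5", "PL-6"):
    print(node, internal_dns_address(node))
```

This is why a 2+2 deployment uses addresses .3 and .4 while a 2+2+2 deployment uses .5 and .6: the last octet simply follows the PL node number.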

5.2.2   DNS Service Migration for Scenario 2: iDNS and eDNS

To execute the iDNS and eDNS service configuration data migration, do the following:

  1. Log on to the active SC node.

    # ssh <Username>@<MIP_OAM_IP>

    Password: <Password>

  2. Copy the DNS backup packages including iDNS and eDNS to /cluster/datamigrationtool/migration/.

    For example:

    The backup packages are:

    iDNS: ipwbackup_<time>_<date>_<hostname 1>_for_DNS.tar.gz

    iDNS: ipwbackup_<time>_<date>_<hostname 2>_for_DNS.tar.gz

    ...

    eDNS: ipwbackup_<time>_<date>_<hostname 3>_for_DNS.tar.gz

    eDNS: ipwbackup_<time>_<date>_<hostname 4>_for_DNS.tar.gz

    ...

    Because the iDNS or eDNS service needs two PLs for the migration, choose only two backup packages of a single type (iDNS or eDNS) per IPWorks 2 system.

    For example:

    In case of 2+2 deployment: ipwbackup_<time>_<date>_<hostname 1>_for_DNS.tar.gz and ipwbackup_<time>_<date>_<hostname 2>_for_DNS.tar.gz for PL-3 and PL-4, or ipwbackup_<time>_<date>_<hostname 3>_for_DNS.tar.gz and ipwbackup_<time>_<date>_<hostname 4>_for_DNS.tar.gz for PL-3 and PL-4

    In case of 2+2+2 deployment: ipwbackup_<time>_<date>_<hostname 1>_for_DNS.tar.gz and ipwbackup_<time>_<date>_<hostname 2>_for_DNS.tar.gz for PL-5 and PL-6, or ipwbackup_<time>_<date>_<hostname 3>_for_DNS.tar.gz and ipwbackup_<time>_<date>_<hostname 4>_for_DNS.tar.gz for PL-5 and PL-6

  3. Run the migration tool on the SC for the two PLs.

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # ./ipw_migrate_service.py DNS <PL Node hostname> ipwbackup_<time>_<date>_<hostname n>_for_DNS.tar.gz

    Example:

    SC-x:~ # ./ipw_migrate_service.py DNS PL-3 ipwbackup_091108_04012017_ipwm2ps1_for_DNS.tar.gz

    A netconf XML file, "dns_netconf_set_PL-3.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    SC-x:~ # ./ipw_migrate_service.py DNS PL-4 ipwbackup_091108_04012017_ipwm2ps2_for_DNS.tar.gz

    A netconf XML file, "dns_netconf_set_PL-4.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    Note:  
    If any error, refer to section 6.

  4. Execute netconf command to import the netconf configuration.

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/dns_netconf_set_<PL node>.xml

    For example:

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/dns_netconf_set_PL-3.xml

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/dns_netconf_set_PL-4.xml

    Note:  
    If any error, refer to section 6.

  5. Change DNS server address for the two PLs.

    In case of a 2+2 deployment:

    # ipwcli

    IPWorks> modify dnsserver <dnsserver name 1> -set Address=169.254.100.3

    IPWorks> modify dnsserver <dnsserver name 2> -set Address=169.254.100.4

    IPWorks> exit

    In case of a 2+2+2 deployment:

    IPWorks> modify dnsserver <dnsserver name 3> -set Address=169.254.100.5

    IPWorks> modify dnsserver <dnsserver name 4> -set Address=169.254.100.6

    IPWorks> exit

  6. Delete the DNS server whose IP address is not changed.

    SC-x:~ # ipwcli

    IPWorks> list dnsserver

    If there are more than two DNS servers, delete the DNS server whose IP address is not changed:

    For example: [DnsServer dns3] (169.254.100.5) is not currently available. Operation update is interrupted.

    IPWorks> delete dnsserver dns3

    Working on 1 object(s).

    The [DnsServer dns3] has been marked for deletion. If it is deleted then all the objects contained within this object will also be deleted as well. If there is a large number of objects this operation could take a while. Do you want to continue? [yes] > yes

    The [View default] has been marked for deletion. If it is deleted then all the objects contained within this object will also be deleted as well. If there is a large number of objects this operation could take a while. Do you want to continue? [yes] > yes

    The [TSIGKey dns3-default-smkey] has been marked for deletion. If it is deleted then the Server Manager for the associated server may not be able to communicate correctly with the server. Do you want to continue? [yes] > yes

    1 object(s) were updated.

    After deleting all DNS servers whose IP address is not changed, do:

    IPWorks>update dnsserver -rebuild=true

    IPWorks>exit

  7. Restart the DNS service for each PL.

    Restart the DNS Server Manager.

    SC-x:~ # ipw-ctr restart dnssm <PL node>

    For example: ipw-ctr restart dnssm PL-3

    Restart the DNS service.

    SC-x:~ #ipw-ctr restart dns <PL node>

    For more information on how to start and stop DNS service, refer to the Section Starting and Stopping DNS Service in Configure DNS and ENUM.


  8. Start ASDNS monitor

    Check whether the ASDNS monitor function is configured.

    SC-x:~ # ipwcli

    IPWorks> list monitor

    If no monitor is listed, skip this step. Otherwise, do the following:

    Modify ASDNS monitor address to internal IP address.

    • For 2+2 deployment:

      SC-x:~ # ipwcli

      IPWorks> modify monitor <monitor name 1> -set address=169.254.100.3

      IPWorks> modify monitor <monitor name 2> -set address=169.254.100.4

      IPWorks> exit

    • For 2+2+2 deployment:

      SC-x:~ # ipwcli

      IPWorks> modify monitor <monitor name 1> -set address=169.254.100.5

      IPWorks> modify monitor <monitor name 2> -set address=169.254.100.6

      IPWorks> exit

    If there are more than two monitors, delete the monitor whose IP address is not changed.

    For example:

    IPWorks> delete monitor monitor3

    Working on 1 object(s).

    The [Monitor monitor3] has been marked for deletion. If it is deleted then all the objects contained within this object will also be deleted as well. If there is a large number of objects this operation could take a while. Do you want to continue? [yes] > yes

    1 object(s) were updated.

    Restart ASDNS monitor.

    • For 2+2 deployment:

      SC-x:~ # ipw-ctr restart asdnssm PL-3

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-3

      Stop asdns ==> success.

      Start asdns ==> success.

      SC-x:~ # ipw-ctr restart asdnssm PL-4

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-4

      Stop asdns ==> success.

      Start asdns ==> success.

    • For 2+2+2 deployment:

      SC-x:~ # ipw-ctr restart asdnssm PL-5

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-5

      Stop asdns ==> success.

      Start asdns ==> success.

      SC-x:~ # ipw-ctr restart asdnssm PL-6

      Stop asdnssm ==> success.

      Start asdnssm ==> success.

      SC-x:~ # ipw-ctr restart asdns PL-6

      Stop asdns ==> success.

      Start asdns ==> success.

    Update all ASDNS monitors based on the actual environment.

    SC-x:~ # ipwcli

    IPWorks> update monitor asdnsmon1

    IPWorks> exit

5.3   ENUM Service Migration

To execute the ENUM service configuration data migration, do the following:

  1. Log on to the active SC node.

    # ssh <Username>@<MIP_OAM_IP>

    Password: <Password>

  2. Copy the ENUM backup packages to /cluster/datamigrationtool/migration/.

    For example:

    The backup packages are:

    ipwbackup_<time>_<date>_<hostname 1>_for_ENUM.tar.gz

    ipwbackup_<time>_<date>_<hostname 2>_for_ENUM.tar.gz

    ...

    Because the ENUM service needs two PLs for the migration, choose only two backup packages.

    For example:

    In case of 2+2 deployment: ipwbackup_<time>_<date>_<hostname 1>_for_ENUM.tar.gz and ipwbackup_<time>_<date>_<hostname 2>_for_ENUM.tar.gz for PL-3 and PL-4

    In case of 2+2+2 deployment: ipwbackup_<time>_<date>_<hostname 1>_for_ENUM.tar.gz and ipwbackup_<time>_<date>_<hostname 2>_for_ENUM.tar.gz for PL-5 and PL-6

  3. Run the migration tool for the two PLs

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # ./ipw_migrate_service.py ENUM <PL Node> ipwbackup_<time>_<date>_<hostname n>_for_ENUM.tar.gz

    For example:

    SC-x:~ # ./ipw_migrate_service.py ENUM PL-3 ipwbackup_091108_04012017_ipwm2ps1_for_ENUM.tar.gz

    A netconf XML file, "enum_netconf_set_PL-3.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    SC-x:~ # ./ipw_migrate_service.py ENUM PL-4 ipwbackup_091108_04012017_ipwm2ps2_for_ENUM.tar.gz

    A netconf XML file, "enum_netconf_set_PL-4.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

  4. Execute netconf command to import the netconf configuration.

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/enum_netconf_set_<PL Node>.xml

    For example:

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/enum_netconf_set_PL-3.xml

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/enum_netconf_set_PL-4.xml

    Note:  
    If any error, refer to section 6.

  5. Initial Configuration of ENUM Number Portability

    The data migration tool does not currently cover the configuration of ENUM Number Portability. If needed, perform the following procedure manually.

    For more information, refer to the following sections in IPWorks Initial Configuration:

    Section 3.4.2 Initial Configuration of ENUM Number Portability via SS7

    Section 3.4.3 Initial Configuration of ENUM Number Portability via LDAP

  6. Delete the unused ENUM servers.

    Delete each ENUM server whose id is not set to 1.

    IPWorks> delete enumserver <id>

    For example:

    IPWorks> delete enumserver 2

    Working on 1 object(s).

    1 object(s) were updated.

  7. Restart the ENUM service for the two PLs.

    For example:

    SC-x:~ # ipw-ctr restart enum PL-3

    For more information on how to start and stop ENUM service, refer to the Section Starting and Stopping ENUM Server in Configure DNS and ENUM.

5.4   AAA Diameter Service Configuration Data Migration

The AAA Diameter service configuration data migration contains the service configuration data migration (Section 5.4.1) and the EPC AAA configuration (Section 5.4.2).

5.4.1   Service Configuration Data Migration

To execute the AAA Diameter service configuration data migration, do the following:

  1. Log on to the active SC node.

    # ssh <Username>@<MIP_OAM_IP>

    Password: <Password>

  2. Copy the AAA diameter backup packages to /cluster/datamigrationtool/migration/.

    The backup files are:

    ipwbackup_<time>_<date>_<hostname 1>_for_AAA-Dia.tar.gz

    ipwbackup_<time>_<date>_<hostname 2>_for_AAA-Dia.tar.gz

    Because the AAA Diameter service needs two PLs for the migration, choose only two backup packages.

    For example:

    In case of 2+2 and 2+2+2 deployments: ipwbackup_<time>_<date>_<hostname 1>_for_AAA-Dia.tar.gz and ipwbackup_<time>_<date>_<hostname 2>_for_AAA-Dia.tar.gz for PL-3 and PL-4

  3. Run the migration tool for PL-3 and PL-4.

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # ./ipw_migrate_service.py DIAMETER <PL Node hostname> ipwbackup_<time>_<date>_<hostname n>_for_AAA-Dia.tar.gz

    For example:

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # ./ipw_migrate_service.py DIAMETER PL-3 ipwbackup_091108_04012017_ipwm2ps1_for_AAA-Dia.tar.gz

    A netconf XML file, "diameter_netconf_set_PL-3.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    SC-x:~ # ./ipw_migrate_service.py DIAMETER PL-4 ipwbackup_091108_04012017_ipwm2ps2_for_AAA-Dia.tar.gz

    A netconf XML file, "diameter_netconf_set_PL-4.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    Note:  
    If any error occurs, refer to section 6.

  4. Execute the netconf command to apply the configuration.

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/diameter_netconf_set_PL-3.xml

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/diameter_netconf_set_PL-4.xml

    Note:  
    If any error occurs, refer to section 6.
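The backup package names used in the steps above follow a fixed pattern, ipwbackup_<time>_<date>_<hostname>_for_<service>.tar.gz. A small sketch that splits one example name into its fields can help when matching packages to PLs; the regular expression is an assumption based only on the names shown in this document:

```python
import re

# Sketch: split a backup package name into its <time>, <date>, <hostname>
# and service fields. The pattern is inferred from the example names above
# (6-digit time, 8-digit date, underscore-free hostname).
PATTERN = re.compile(
    r"ipwbackup_(?P<time>\d{6})_(?P<date>\d{8})_(?P<hostname>[^_]+)"
    r"_for_(?P<service>.+)\.tar\.gz$"
)

def parse_backup_name(name):
    m = PATTERN.match(name)
    return m.groupdict() if m else None

print(parse_backup_name("ipwbackup_091108_04012017_ipwm2ps1_for_AAA-Dia.tar.gz"))
```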

5.4.2   Configure EPC AAA

The prerequisites in section 1.1 in Configure EPC AAA must be fulfilled first.

Because the data migration tool currently does not migrate parts of the EPC AAA configuration automatically, the following steps must be done manually.

To configure EPC AAA, do:

  1. Configuring EPC AAA Session Capacity License Type

    Refer to section 2.1.3 in Configure EPC AAA.

  2. Configuring HSSHOST for EPC AAA

    If the HSSHOST for EPC AAA configuration is not used in HP, skip to the next step.

    Refer to section 2.2.7 in Configure EPC AAA.

  3. Configuring EPC AAA Certification and OCSP

    To configure the EPC AAA PKI authentication, EPC AAA Certification and OCSP must be configured.

    1. Unpack ipwbackup_<time>_<date>_<hostname n>_for_AAA-Dia.tar.gz

      SC-x:~ # tar -zxvf ipwbackup_<time>_<date>_<hostname n>_for_AAA-Dia.tar.gz

    2. Migrate the certificate files from backup file

      Refer to the certificate files from HP in the backup package ipwbackup_<time>_<date>_<hostname n>_for_AAA-Dia.tar.gz, as listed in the following table.

      Certificate File             Directory

      CA Root Certificate          /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/ca_cert_path

      Certificate for the Server   /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/serv_cert

      Private Key of the Server    /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/serv_key

      Refer to section 2.4.1 in Configure EPC AAA.

    3. Configuring OCSP in ECLI

      For configuring OCSP CA server name and responder URL, refer to /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/aaa_ocsp.xml in the backup package.

      For example:

      The contents of /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/aaa_ocsp.xml are as follows:

      <ocsp softfail="yes">

        <server use_nonce="yes" name="/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=CA">

          <responder url="http://127.0.0.1:80/ocsp/"/>

        </server>

      </ocsp>

      Configure CA server name and responder URL according to /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/aaa_ocsp.xml.

      # ssh <username>@<MIP_OAM_IP> -t -s cli

      >configure

      (config)>dn ManagedElement=<Node Name>,IpworksFunction=1,IPWorksAAARoot=1,IPWorksDiameterAAARoot=1,DiameterAAAService=1,AAAPKIService=1,OCSPMgr=1

      (config-OCSPMgr=1)>enableOcspCheck=true

      (config-OCSPMgr=1)>OCSPServer=1

      (config-OCSPServer=1)>name="/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=CA"

      (config-OCSPServer=1)>responderUrl="http://127.0.0.1:80/ocsp/"

      (config-OCSPServer=1)>commit

      (OCSPServer=1)>exit

    4. Configuring APNs for EPC AAA PKI User

      To configure APNs for the EPC AAA PKI user according to /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/aaa_apn.xml in the backup package, refer to step 5 in section 2.4.4 in Configure EPC AAA.

  4. Configuring Wi-Fi Mobility Management
    1. Configuring Geography IP Data

      This step describes how to configure the geography IP data according to the defined CSV format for Wi-Fi Mobility Management.

      Refer to section 2.5.5 in Configure EPC AAA.

    2. Configuring ISOCC to MCC Mapping Dictionary

      Refer to section 2.5.6 in Configure EPC AAA.

      Note:  
      Refer to /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/isocc_to_mcc.csv in the backup package.

    3. Configuring E164CC to MCC Mapping Dictionary

      Refer to section 2.5.7 in Configure EPC AAA.

      Note:  
      Refer to /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/e164cc_to_mcc.csv in the backup package.
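The OCSP values needed for the ECLI commands in step 3.3 (the CA server name and the responder URL) can be read straight out of aaa_ocsp.xml. A minimal sketch, using the example file contents shown above (the file layout beyond that example is an assumption):

```python
import xml.etree.ElementTree as ET

# Example contents of aaa_ocsp.xml, copied from the backup-file example
# earlier in this section.
OCSP_XML = """\
<ocsp softfail="yes">
  <server use_nonce="yes" name="/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=CA">
    <responder url="http://127.0.0.1:80/ocsp/"/>
  </server>
</ocsp>
"""

def ocsp_settings(xml_text):
    # Extract the values to feed into the ECLI name= and responderUrl=
    # attributes of OCSPServer.
    root = ET.fromstring(xml_text)
    server = root.find("server")
    responder = server.find("responder")
    return server.get("name"), responder.get("url")

name, url = ocsp_settings(OCSP_XML)
print(name)
print(url)
```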

5.4.3   Configure the Diameter Stack

Because the data migration tool currently does not support automatic configuration of the Diameter Stack, the following steps must be done manually.

To configure the Diameter Stack, refer to the Diameter Stack Configuration Guide and to the /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/aaa_diameter.xml file in the backup package to set the parameters.

5.4.4   Change server address to internal IP address for PL-3 and PL-4

To change the server address to the internal IP address for PL-3 and PL-4, do:

SC-x:~ # ipwcli

IPWorks>modify aaaserver <aaadiameterserv1> -set Address=169.254.100.3

IPWorks>modify aaaserver <aaadiameterserv2> -set Address=169.254.100.4

IPWorks>exit
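The address changes above follow a simple PL-to-internal-IP mapping (PL-n maps to 169.254.100.n). A dry-run sketch that only assembles the ipwcli command strings; the server names are hypothetical placeholders, matching the angle-bracket placeholders above:

```python
# Dry-run sketch: build the ipwcli "modify aaaserver" commands from the
# PL-to-internal-address mapping PL-n -> 169.254.100.n.
# The aaaserver names below are hypothetical placeholders.
def modify_commands(servers):
    # servers: list of (aaaserver_name, pl_index) pairs
    return [
        "modify aaaserver %s -set Address=169.254.100.%d" % (name, pl)
        for name, pl in servers
    ]

for cmd in modify_commands([("aaadiameterserv1", 3), ("aaadiameterserv2", 4)]):
    print(cmd)
```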

5.4.5   Restart the EPC AAA Service

The payload (PL) hostnames vary according to the actual configuration environment.

To restart the EPC AAA Service, do:

  1. Restart the AAA Server Manager.

    SC-x:~ # ipw-ctr restart aaasm PL-3

    SC-x:~ # ipw-ctr restart aaasm PL-4

  2. Restart the EPC AAA service.

    SC-x:~ # ipw-ctr restart aaa_diameter PL-3

    SC-x:~ # ipw-ctr restart aaa_diameter PL-4

    For more information on how to start and stop the EPC AAA service, refer to the section Starting and Stopping AAA Server in Configure EPC AAA.

5.5   AAA Radius Service Migration

The AAA Radius service migration consists of the following subsections:

5.5.1   AAA Radius Service Configuration Data Migration

To migrate the AAA Radius service configuration data, do the following:

  1. Log on to the active SC node.

    # ssh <Username>@<MIP_OAM_IP>

    Password:<Password>

  2. Copy the AAA Radius backup file to /cluster/datamigrationtool/migration/.

    The backup files are:

    ipwbackup_<time>_<date>_<hostname 1>_for_AAA-Rad.tar.gz

    ipwbackup_<time>_<date>_<hostname 2>_for_AAA-Rad.tar.gz

    ...

    Because the AAA Radius service needs two PLs for the migration, choose only two backup packages.

  3. Run the migration tool for PL-3 and PL-4.

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # ./ipw_migrate_service.py RADIUS <PL Node hostname> ipwbackup_<time>_<date>_<hostname n>_for_AAA-Rad.tar.gz

    For example:

    SC-x:~ # cd /cluster/datamigrationtool/migration/

    SC-x:~ # ./ipw_migrate_service.py RADIUS PL-3 ipwbackup_091108_04012017_ipwm2ps1_for_AAA-Rad.tar.gz

    A netconf XML file, "radius_netconf_set_PL-3.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    SC-x:~ # ./ipw_migrate_service.py RADIUS PL-4 ipwbackup_091108_04012017_ipwm2ps2_for_AAA-Rad.tar.gz

    A netconf XML file, "radius_netconf_set_PL-4.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    SC-x:~ # chmod +x ipw_migrate_conf_db.py

    SC-x:~ # ./ipw_migrate_conf_db.py ipwbackup_<time>_<date>_<hostname>_for_DB.tar.gz ipwbackup_<time>_<date>_<hostname n>_for_AAA-Rad.tar.gz

    A netconf XML file, "conf_db_netconf_set.xml", is generated in /cluster/datamigrationtool/dest/ after the execution of the migration tool.

    Note:  
    If any error occurs, refer to section 6.

  4. Execute the netconf command to import the configuration.

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/radius_netconf_set_PL-3.xml

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/radius_netconf_set_PL-4.xml

    SC-x:~ # netconf < /cluster/datamigrationtool/dest/conf_db_netconf_set.xml

    Note:  
    If any error occurs, refer to section 6.
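Steps 3 and 4 above form a fixed sequence: one migration-tool run per PL, then the per-PL netconf imports, then the common conf_db import. A dry-run sketch that only assembles the command strings, using the backup file names from the example (the ipw_migrate_conf_db.py generation step is left out, since its DB backup name is deployment-specific):

```python
# Dry-run sketch: assemble the RADIUS migration and netconf import
# command sequence for each PL (backup names taken from the example above).
DEST = "/cluster/datamigrationtool/dest"

def radius_migration_plan(backups):
    # backups: mapping of PL hostname -> service backup package
    plan = ["./ipw_migrate_service.py RADIUS %s %s" % (pl, pkg)
            for pl, pkg in backups.items()]
    plan += ["netconf < %s/radius_netconf_set_%s.xml" % (DEST, pl)
             for pl in backups]
    # The common configuration database import runs last.
    plan.append("netconf < %s/conf_db_netconf_set.xml" % DEST)
    return plan

for cmd in radius_migration_plan({
        "PL-3": "ipwbackup_091108_04012017_ipwm2ps1_for_AAA-Rad.tar.gz",
        "PL-4": "ipwbackup_091108_04012017_ipwm2ps2_for_AAA-Rad.tar.gz"}):
    print(cmd)
```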

5.5.2   Configure AAA Radius

Because the data migration tool currently does not migrate parts of the AAA Radius configuration automatically, the following steps must be done manually.

To configure AAA Radius, do:

  1. Configuring FTP Server

    Refer to section 3.5.2.2 Configuring FTP Server in Configure Radius AAA.

  2. Configuring Dictionaries

    Unpack ipwbackup_<time>_<date>_<hostname n>_for_AAA-Rad.tar.gz

    SC-x:~ # tar -zxvf ipwbackup_<time>_<date>_<hostname n>_for_AAA-Rad.tar.gz

    Copy the directory /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/dict from the backup package ipwbackup_<time>_<date>_<hostname n>_for_AAA-Rad.tar.gz over the directory /etc/ipworks/aaa_radius/dict on IPWorks 2.

    For more information, refer to section 3.6 Configuring Dictionaries in Configure Radius AAA.

  3. Configuring CUDB Connection Pool

    The CUDB connection configuration varies with the actual environment in which the AAA-FE connects to the nodes.

    If AAA-FE is not used in HP, skip this step.

    For the parameter values, refer to the backup file /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/cudb_connection_pool.xml in ipwbackup_<time>_<date>_<hostname n>_for_AAA-Rad.tar.gz.

    Refer to section 7.2 Configuring CUDB Connection Pool in Configure Radius AAA.

  4. Configuring Wi-Fi AAA

    If Wi-Fi AAA is not used in HP, skip this step.

    IPWorks 2 supports two EAP authentication methods: EAP-SIM and EAP-AKA.

    Before the configuration, the SS7 stack must be configured for Wi-Fi AAA if the use case is 2G/3G USIM (SIM) based UE authentication with the HLR.

    For more information, refer to Configure SS7 for AAA.

    1. Configuring Subscription Authorization Mode

      The authorization mode can be APN_MODE or ODB_MODE; set it according to the actual configuration.

      Refer to section 8.3 Configuring Subscription Authorization Mode in Configure Radius AAA.

      For configuring or modifying the parameter values, refer to the backup file /cluster/datamigrationtool/migration/tmp/ipwbackup/conf/aaa_wifi_data.xml in ipwbackup_<time>_<date>_<hostname n>_for_AAA-Rad.tar.gz.

    2. Configuring GT Convert

      Sometimes, customers use the Mobile Global Title (MGT, E.214) to address the HLR. The MGT is a result of IMSI Series Analysis.

      Refer to section 8.7 Configuring GT Convert in Configure Radius AAA.

5.5.3   Radius AAA Initial Configuration

To perform the Radius AAA initial configuration, do:

  1. Configure the eVIP flow policy for Radius AAA.
    Note:  
    This step can be skipped when the new IPWorks is deployed with Radius AAA.

    1. Check the instance of MO RadiusInterface to find the port numbers.

      For example:

      SC-X:~ # cliss

      >ManagedElement=<Node Name>,IpworksFunction=1,IPWorksAAARoot=1,IPWorksRadiusAAARoot=1,RadiusStack=1,RadiusInterface=1

      (RadiusInterface=1)>show -v

      RadiusInterface=1

      acctAddress="any" <default>

      acctPort=1813 <default>

      authAuthzAddress="any" <default>

      authAuthzPort=1812 <default>

      dmCoaPort=3799 <default>

      localhostBindIPType=IPV4 <default>

      proxyAddress="any" <default>

      proxyBindIPType=IPV4 <default>

      proxyPortsNumEachPL=100

      proxyStartPort=10000 <default>

      radiusInterfaceId="1"

      The example shows that the default Radius listening ports 1812, 1813, and 3799 are configured.

    2. Check the instances of MO EvipFlowPolicy to see whether the Radius listening ports (1812, 1813, and 3799) are already configured. If they are, skip to section 5.5.4.

      For example:

      SC-X:~ # cliss

      >dn ManagedElement=<Node Name>,Transport=1,Evip=1,EvipAlbs=1,EvipAlb=ipw_sig_sp,EvipFlowPolicies=1

      (EvipFlowPolicies=1)>show -v

      EvipFlowPolicies=1

      evipFlowPoliciesId="1"

      EvipFlowPolicy=4diameter_port_3869_1

      EvipFlowPolicy=4diameter_port_3869_2

      EvipFlowPolicy=4sip_alb_tcp_fe_port_53

      EvipFlowPolicy=4sip_alb_udp_fe_port_53

      EvipFlowPolicy=4sctp_1

      EvipFlowPolicy=4sctp_2

      EvipFlowPolicy=4sctp_3

      EvipFlowPolicy=4sctp_4

      EvipFlowPolicy=4diameter_port_3868_1

      EvipFlowPolicy=4diameter_port_3868_2

    3. Add the instances of MO EvipFlowPolicy for the 1812, 1813, and 3799 ports.

      For example:

      (EvipFlowPolicies=1)>configure

      (config-EvipFlowPolicies=1)> EvipFlowPolicy=4sip_alb_udp_fe_port_1812

      (config-EvipFlowPolicy=4sip_alb_udp_fe_port_1812)>addressFamily=ipv4

      (config-EvipFlowPolicy=4sip_alb_udp_fe_port_1812)>dest="<VIP_TRF_IP1>"

      (config-EvipFlowPolicy=4sip_alb_udp_fe_port_1812)>destPort="1812"

      (config-EvipFlowPolicy=4sip_alb_udp_fe_port_1812)>protocol=udp

      (config-EvipFlowPolicy=4sip_alb_udp_fe_port_1812)>targetPool=SIG_pools

      (config-EvipFlowPolicy=4sip_alb_udp_fe_port_1812)>commit

      (EvipFlowPolicy=4sip_alb_udp_fe_port_1812)>show -v

      ...

      Where: <VIP_TRF_IP1> represents the Radius AAA traffic eVIP address.

      Repeat the step for ports 1813 and 3799; only the port number differs.

    4. Verify the configuration.

      For example:

      (EvipFlowPolicies=1)>show -v

      EvipFlowPolicies=1

      evipFlowPoliciesId="1"

      EvipFlowPolicy=4diameter_port_3869_1

      EvipFlowPolicy=4diameter_port_3869_2

      EvipFlowPolicy=4sip_alb_tcp_fe_port_53

      EvipFlowPolicy=4sip_alb_udp_fe_port_53

      EvipFlowPolicy=4sctp_1

      EvipFlowPolicy=4sctp_2

      EvipFlowPolicy=4sctp_3

      EvipFlowPolicy=4sctp_4

      EvipFlowPolicy=4diameter_port_3868_1

      EvipFlowPolicy=4diameter_port_3868_2

      EvipFlowPolicy=4sip_alb_udp_fe_port_1812

      EvipFlowPolicy=4sip_alb_udp_fe_port_1813

      EvipFlowPolicy=4sip_alb_udp_fe_port_3799
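The check in step 1.2 above amounts to comparing the port numbers embedded in the EvipFlowPolicy instance names against the required Radius ports. A small sketch, using the instance names from the example output (the `port_<n>` naming convention is inferred from those names):

```python
import re

# Sketch: given the existing EvipFlowPolicy instance names, compute which
# of the required Radius listening ports still need a flow policy.
REQUIRED_PORTS = {1812, 1813, 3799}

def missing_ports(policy_names):
    configured = set()
    for name in policy_names:
        m = re.search(r"port_(\d+)", name)  # naming convention assumed
        if m:
            configured.add(int(m.group(1)))
    return sorted(REQUIRED_PORTS - configured)

# Instance names from the example "show -v" output above.
existing = [
    "4diameter_port_3869_1", "4diameter_port_3869_2",
    "4sip_alb_tcp_fe_port_53", "4sip_alb_udp_fe_port_53",
    "4sctp_1", "4sctp_2", "4sctp_3", "4sctp_4",
    "4diameter_port_3868_1", "4diameter_port_3868_2",
]
print(missing_ports(existing))
```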

5.5.4   Change server address to internal IP address for PL-3 and PL-4

To change the server address to the internal IP address for PL-3 and PL-4, do:

SC-x:~ # ipwcli

IPWorks>modify aaaserver <aaaserv1> -set Address=169.254.100.3

IPWorks>modify aaaserver <aaaserv2> -set Address=169.254.100.4

IPWorks>exit

5.5.5   Restart AAA Radius

To restart AAA Radius, do:

  1. Log on to an SC node (SC-1 or SC-2).

    # ssh <Username>@<MIP_OAM_IP>

  2. Restart the AAA Server Manager.

    SC-x:~ # ipw-ctr restart aaasm PL-3

    SC-x:~ # ipw-ctr restart aaasm PL-4

  3. Restart Radius Stack.

    SC-X:~ # ipw-ctr restart aaa_radius_stack PL-3

    SC-X:~ # ipw-ctr restart aaa_radius_stack PL-4

  4. Restart Radius Backend.

    SC-X:~ # ipw-ctr restart aaa_radius_backend PL-3

    SC-X:~ # ipw-ctr restart aaa_radius_backend PL-4

  5. Restart CSV Engine.

    SC-X:~ # ipw-ctr restart csvengine SC-1

    SC-X:~ # ipw-ctr restart csvengine SC-2
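The restart order above matters: the AAA Server Manager is restarted before the Radius stack and backend, and the CSV engine last. Capturing the sequence as data makes it easy to review before execution; this dry-run sketch only assembles the ipw-ctr command strings shown in the steps above:

```python
# Dry-run sketch: the section 5.5.5 restart order as data, one
# (service, node) pair per ipw-ctr invocation.
RESTART_SEQUENCE = [
    ("aaasm", "PL-3"), ("aaasm", "PL-4"),
    ("aaa_radius_stack", "PL-3"), ("aaa_radius_stack", "PL-4"),
    ("aaa_radius_backend", "PL-3"), ("aaa_radius_backend", "PL-4"),
    ("csvengine", "SC-1"), ("csvengine", "SC-2"),
]

def restart_commands(sequence):
    return ["ipw-ctr restart %s %s" % (svc, node) for svc, node in sequence]

for cmd in restart_commands(RESTART_SEQUENCE):
    print(cmd)
```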

6   Troubleshooting

If any issue occurs during backup and migration, refer to the IPWorks Troubleshooting Guideline [10].


Reference List

Ericsson Online References
[1] CEE Technical Description, 221 02-FGC 101 3095 UEN
[2] IPWorks Deployment Guide, 21/1553-AVA 901 33/2 UEN
[3] Ericsson Command-Line Interface User Guide
[4] Configure DNS and ENUM
[5] IPWorks Initial Configuration, 5/1553-AVA 901 33/2 UEN
[6] Configure EPC AAA
[7] Diameter Stack Configuration Guide
[8] Configure Radius AAA
[9] Configure SS7 for AAA
[10] IPWorks Troubleshooting Guideline


Copyright

© Ericsson AB 2018. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document Trademark Information.
