1 NSP 6.1 Network Configuration Guide Introduction
This document describes the network configuration needed to run the SAPC in a Network Server Platform (NSP).
2 NSP 6.1 Network Configuration Guide Overview
This section provides an overview of the hardware and software components used to configure the SAPC internal and external networks, as well as a general network description.
The configuration described here applies to the NSP 6.1 Ericsson Blade System (EBS). For blade systems from other vendors, similar hardware functional elements must be considered:
- Hardware components
- Gateway routers between the different blades and the external network.
- System Control Switches (SCXs) constitute the cluster backplane.
- Blade system with at least eight blades.
- Software components
A blade system is a hardware system running one complete SAPC. Each blade has a specific role, with the following distribution:
- SC-1 and SC-2 are the System Controllers (SC). The Operation and Maintenance (OAM) is done through these blades. These blades are virtualized.
- PL-x is a traffic payload blade. Policy and Charging Control (PCC) deployment traffic (such as Gx and Rx) is handled through these blades. These machines are not virtualized and run directly on the blade hardware.
The blade system can have a variable number of blades. This network configuration guide explains two scenarios. The first is a single subrack deployment with 12 blades, where OAM runs on the SC blades. The second is a cabinet scenario with two or three subracks (24 or 36 blades, respectively). Depending on the number of blades and the delivery needs (external database, geographical redundancy, or traffic separation), follow the most suitable scenario. A Gigabit interface is required for internal and external connectivity.
For a detailed description of NSP 6.1, refer to SAPC NSP 6.1 Hardware Description.
2.1 NSP 6.1 Single Subrack Network Configuration
In this first scenario, the fifth and sixth blades (slots 9 and 11, respectively) are SCs and the remaining blades are traffic payloads. Each blade takes a different role depending on the needs.
2.1.1 System Controller Blades
Figure 1 SCs in Single Subrack Scenario
SCs are virtualized, so virtual bridges are defined.
- Bridge br_mgmt is used both for hypervisor management, through the virtual interface mgmt0, and for SAPC Node management, connecting the eth2 of the virtual machine with the eth2 of the physical blade.
- Bridges br_bp0 and br_bp1 are the backplane bridges which connect eth0 and eth1 interfaces of the Virtual Machines with the corresponding interfaces of the physical blades. A bond is created between eth0 and eth1 at Virtual Machine level to provide high availability for the backplane.
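The bridge and bond layout above can be sketched with iproute2 commands. This is only an illustration of the topology; the interface and bridge names follow this guide, and in practice the setup is performed by the platform installation, not entered manually:

```shell
# Sketch only: backplane bridges on the hypervisor and the VM-level bond.
# Names (eth0, eth1, mgmt0, br_bp0, br_bp1, br_mgmt, bond0) follow this
# guide; the real provisioning is done by the platform installation.

# On the hypervisor: one backplane bridge per physical backplane interface
ip link add br_bp0 type bridge
ip link add br_bp1 type bridge
ip link set eth0 master br_bp0
ip link set eth1 master br_bp1

# Management bridge carrying the hypervisor management interface mgmt0
ip link add br_mgmt type bridge
ip link set mgmt0 master br_mgmt

# Inside the virtual machine: bond eth0 and eth1 for backplane high availability
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```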
SCs are connected to the external network through VIP FEEs. These connections are used for load balancing through a VIP; the OAM VLANs are used for this purpose. SCs also provide an external OAM IP address independent of the VIP-OAM.
2.1.2 Payload Blades
Payload blades follow different network configurations depending on customer needs. This chapter describes a scenario with all functionality enabled. Configure the payload blades according to customer needs, taking the following as the recommended distribution:
- PL-7 and PL-8 are used for traffic purposes in this scenario. All external Diameter traffic is received through these two blades.
- If an external database is configured, PL-3 and PL-4 are used for this purpose.
- If GeoRed is configured, PL-5 and PL-6 are used for this purpose.
- If traffic separation is configured, PL-9 and PL-10 are used for this purpose.
- The remaining PLs have no external communication.
Payloads are not virtualized, so no virtual bridges are defined. A bond is created between eth0 and eth1 to provide high availability for the backplane.
Payloads are connected to the external network through VIP FEEs. Four VIPs are defined, for Traffic, External Database, GeoRed (Replication), and Traffic Separation, when the corresponding traffic exists, and additional FEEs can be defined. These connections are used for load balancing through a VIP.
2.1.2.1 Traffic Payload Blades
2.1.2.2 External Database Payload Blades
2.1.2.3 GeoRed Payload Blades
2.1.2.4 Traffic Separation Payload Blades
2.1.2.5 Remaining Payload Blades
Remaining payloads are not virtualized, so no virtual bridges are defined. A bond is created between eth0 and eth1 to provide high availability for the backplane.
2.1.2.6 Logical Traffic Separation
If physical traffic separation is not possible or not chosen but traffic separation is still required, it can be achieved by defining different ALBs and FEEs per traffic type over the same physical connection.
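As an illustration of the shared physical connection, per-traffic-type VLAN subinterfaces could be stacked on eth2 as follows. The VLAN IDs are taken from the example allocation in Section 3.3, and the ALB and FEE definitions themselves are application-level configuration, not shown here:

```shell
# Sketch only: logical traffic separation over a single physical interface.
# Each traffic type gets its own tagged VLAN subinterface on eth2; a
# dedicated ALB and FEE per traffic type then rides on each subinterface.
ip link add link eth2 name eth2.120 type vlan id 120   # sapc_sig1_sp, signaling
ip link add link eth2 name eth2.122 type vlan id 122   # sapc_sig4_sp, separated traffic
ip link set eth2.120 up
ip link set eth2.122 up
```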
2.1.3 TIPC Networks
Two separate VLANs are defined for TIPC communication. Each VLAN is assigned to a different interface on all blades (VLAN sapc_tipc_pdl on eth0 and VLAN sapc_tipc_pdr on eth1) and does not use bonding.
Figure 8 Subrack Configuration. TIPC Networks
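The non-bonded TIPC VLAN layout can be sketched as follows. The VLAN IDs 110 and 111 are placeholders to be agreed with the customer; the VLAN names follow this guide:

```shell
# Sketch only: the two TIPC VLANs, one per physical backplane interface,
# deliberately configured outside the bond. IDs 110/111 are placeholders.
ip link add link eth0 name eth0.110 type vlan id 110   # sapc_tipc_pdl (left)
ip link add link eth1 name eth1.111 type vlan id 111   # sapc_tipc_pdr (right)
ip link set eth0.110 up
ip link set eth1.111 up
```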
2.2 NSP 6.1 Extra Subracks Network Configuration
The fifth and sixth blades are SCs and the other blades are traffic payloads. The installation described in Section 2.1 must be done for the first subrack. This chapter covers the additional networking for the second and third subracks.
Additional FEEs are needed for each traffic type in the second and third subracks. In a scenario with External Database, GeoRed, and Traffic Separation, apart from normal Diameter traffic:
- Second subrack: PL-19 and PL-20 are used for Traffic FEEs, PL-15 and PL-16 for External Database FEEs, PL-17 and PL-18 for GeoRed FEEs, and PL-21 and PL-22 for Traffic Separation FEEs.
- Third subrack: PL-31 and PL-32 are used for Traffic FEEs, PL-27 and PL-28 for External Database FEEs, PL-29 and PL-30 for GeoRed FEEs, and PL-33 and PL-34 for Traffic Separation FEEs.
2.2.1 Traffic Blades
2.2.1.1 All Blades
For all blades, the following extra networking must be configured.
Payloads are not virtualized, so no virtual bridges are defined. A bond is created between eth0 and eth1 to provide high availability for the backplane.
2.2.1.2 FEE Payload Blades
Additional payload configuration is needed in the new subracks, where new FEEs are created as the figure shows.
Figure 10 Extra Subracks Configuration. FEE Payloads
2.2.2 TIPC Networks
Two separate VLANs are defined for TIPC communication. Each VLAN is assigned to a different interface on all blades (VLAN sapc_tipc_pdl on eth0 and VLAN sapc_tipc_pdr on eth1) and does not use bonding.
Figure 11 Extra Subracks Configuration. TIPC Networks
3 NSP 6.1 Networks Allocation
This section specifies how the SAPC is connected to the network, detailing all the VLANs and networks. The examples in this section are based on IPv4, but IPv6 is also supported for the external network. Before configuring the SAPC network, agree with the customer on all the details (IP addresses, networks, VLAN tags, and so on) referenced in this section. Although most references in this document mention only one additional traffic separation network, there is no logical restriction on the number of traffic networks and corresponding ALBs that can be defined.
All VLANs are tagged unless explicitly stated.
3.1 NSP 6.1 DMX Network Allocation
| Address Type | Name/Tag | Example |
|---|---|---|
| Collapsed northbound IP address(1) | %{cnb} | 172.21.20.186 |
| Collapsed northbound default gateway IP address | %{cnb_defgw} | 172.21.20.185 |
| Collapsed northbound network netmask | %{cnb_netmask} | 255.255.255.248 |
| Collapsed northbound network VLAN identity | %{cnb_vlanid} | 3122 |
| External NTP server 1 for the DMX | %{ntp1_net} | 9.9.9.9 |
| External NTP server 2 for the DMX | %{ntp2_net} | 9.9.9.10 |
| External NTP server 3 for the DMX(2) | %{ntp3_net} | 9.9.9.11 |

(1) For TSP legacy migrations, see Section 3.4 for network reuse considerations.
(2) Optional. The DMX can be configured with up to three time references; at least two time references must be configured.
3.2 NSP 6.1 VLANs
| VLAN Name | Interface | Ports | Comments |
|---|---|---|---|
| cnb_vlanid | N/A | | Northbound Interface |
| sapc_internal_sp | bond0 | SCX-0-0: GE2(4) | Cluster Internal |
| sapc_om2_sp | Blade: mgmt0 | N/A | SCs only. Hypervisor Management |
| sapc_mgmt_sp | Blade: mgmt1, mgmt2; VM: bond0 | SCX-0-X: BP9, BP11, E3 | SCs only. System Management |
| sapc_tipc_pdl | eth0 | SCX-0-0: BPn | Left TIPC |
| sapc_tipc_pdr | eth1 | SCX-0-25: BPn | Right TIPC |
| sapc_om1_sp | eth2 | N/A | VIP Router Link O&M Traffic |
| sapc_sig1_sp | eth2 | N/A | VIP Router Link Signaling Traffic |
| sapc_sig2_sp | eth2 | N/A | VIP Router Link LDAP Traffic |
| sapc_sig3_sp | eth2 | N/A | VIP Router Link Replication Traffic |
| sapc_sig4_sp | eth2 | N/A | VIP Router Link Signaling Traffic Separation |
| sapc_sig5_sp | eth2 | N/A | VIP Router Link Signaling Traffic Separation |

(1) LOCALHOST, link to own SCXB host processor.
(2) REMOTEHOST, cross-link to the host processor on the other SCXB.
(4) Temporary untagged VLAN, set for installing the hypervisor.
The lower subrack uses the same VLAN configuration as the single subrack scenario, extended on the lower subrack ports that connect to the other subracks. Extend the lower subrack and configure the other subracks according to the following tables.
| VLAN Name | Interface | Ports | Comments |
|---|---|---|---|
| sapc_tipc_pdl | eth0 | SCX-0-0: E1, E2; SCX-m-0: BPn, E1 | Left TIPC |
| sapc_tipc_pdr | eth1 | SCX-0-25: E1, E2; SCX-m-25: BPn, E1 | Right TIPC |
| sapc_sig1_sp | eth2 | N/A | VIP Router Link Signaling Traffic |
| sapc_sig2_sp | eth2 | N/A | VIP Router Link LDAP Traffic |
| sapc_sig3_sp | eth2 | N/A | VIP Router Link Replication Traffic |
| sapc_sig4_sp | eth2 | N/A | VIP Router Link Signaling Traffic Separation |
| sapc_sig5_sp | eth2 | N/A | VIP Router Link Signaling Traffic Separation |
3.3 NSP 6.1 IP Addressing Example
Each SAPC Node requires a set of IP addresses agreed with the customer before configuring the SAPC Node.
| Network Address | Mask | Type | Usage | VLAN Name | VLAN ID |
|---|---|---|---|---|---|
| 192.168.218.0 | /29 | Private | VIP Router Link for O&M Traffic | sapc_om1_sp | 130 |
| 192.168.216.0 | /28 | Private | VIP Router Link for Signaling Traffic | sapc_sig1_sp | 120 |
| 192.168.217.0 | /28 | Private | VIP Router Link for LDAP Traffic | sapc_sig2_sp | 140 |
| 192.168.219.0 | /28 | Private | VIP Router Link for Replication Traffic | sapc_sig3_sp | 150 |
| 192.168.220.0 | /28 | Private | VIP Router Link for Signaling Traffic Separation | sapc_sig4_sp | 122 |
| 192.168.221.0 | /28 | Private | VIP Router Link for Signaling Traffic Separation | sapc_sig5_sp | 124 |
| 192.168.100.0(1) | /24 | Private | System Management Network | sapc_mgmt_sp | 138 |
| sapc_om2_sp | /29 | Public | Hypervisor Management Network | sapc_om2_sp | 137 |
| sapc_sig_cn_1_vip | /32 | Public | VIP Signaling Address | N/A | N/A |
| sapc_om_cn_vip1 | /32 | Public | VIP O&M Address | N/A | N/A |
| sapc_om_cn_vip2 | /32 | Public | VIP Provisioning Address | N/A | N/A |
| sapc_sig_data_1_vip | /32 | Public | VIP LDAP Address | N/A | N/A |
| sapc_sig_data_2_vip | /32 | Public | VIP Replication Address | N/A | N/A |
| sapc_sig_cn_2_vip | /32 | Public | VIP Signaling Separation Address | N/A | N/A |
| sapc_sig_cn_3_vip | /32 | Public | VIP Signaling Separation Address | N/A | N/A |

(1) This network can be reused, since it is private and internal.
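As a quick sanity check of the plan, the usable address count per mask can be derived with shell arithmetic. A /29 router link leaves six usable addresses and a /28 fourteen, which bounds how many gateway and FEE addresses fit on each link:

```shell
# Usable host addresses per prefix length (network and broadcast excluded).
for prefix in 29 28; do
  hosts=$(( (1 << (32 - prefix)) - 2 ))
  echo "/$prefix -> $hosts usable addresses"
done
```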
| Network | Gateways | VLAN | OSPF Area | Comments |
|---|---|---|---|---|
| 192.168.218.0/29 | 192.168.218.1, 192.168.218.2 | 130 | 0.1.1.1 | O&M Traffic |
| 192.168.216.0/28 | 192.168.216.1, 192.168.216.2 | 120 | 0.0.1.1 | Signaling Traffic |
| 192.168.217.0/28 | 192.168.217.1, 192.168.217.2 | 140 | 0.0.1.2 | LDAP Traffic |
| 192.168.219.0/28 | 192.168.219.1, 192.168.219.2 | 150 | 0.0.1.3 | Replication Traffic |
| 192.168.220.0/28 | 192.168.220.1, 192.168.220.2 | 122 | 0.0.1.4 | Signaling Traffic Separation |
| 192.168.221.0/28 | 192.168.221.1, 192.168.221.2 | 124 | 0.0.1.5 | Signaling Traffic Separation |
- Note:
- In OSPF, a backbone must be defined when routing packets between two non-backbone areas. The OSPF backbone is the special OSPF Area 0 (written as Area 0.0.0.0, since OSPF Area IDs are typically formatted as IP addresses). The OSPF backbone always contains all Area Border Routers (ABRs) and is responsible for distributing routing information between non-backbone areas. This is mandatory here because VIPs are published to the ABR, and thus visible in the backbone, only if OSPF Area 0 is defined.
Figure 12 OSPF Backbone Area
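For orientation only, the area layout could look like the following FRR/Quagga-style fragment on an external router. The backbone network 10.0.0.0/30 is a placeholder, and the real router configuration is vendor-specific:

```
router ospf
 ! Backbone (Area 0.0.0.0) is mandatory so that VIPs published to the
 ! ABR are distributed between the non-backbone areas
 network 10.0.0.0/30 area 0.0.0.0
 ! Per-traffic router-link networks from the example allocation
 network 192.168.216.0/28 area 0.0.1.1
 network 192.168.217.0/28 area 0.0.1.2
 network 192.168.219.0/28 area 0.0.1.3
```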
3.3.1 NSP 6.1 IP Addresses of External Elements
This section covers all the IP addresses in the customer network that do not belong to the SAPC Node but are needed when configuring it.
| IP Address | Network | Use |
|---|---|---|
| <NTP-SERVER> | <NTP-NETWORK>/<NTP-NETMASK> | NTP Server |
| <SNMP-SERVER> | <SNMP-NETWORK>/<SNMP-NETMASK> | SNMP Server |
| <DNS-SERVER>(1) | <DNS-NETWORK>/<DNS-NETMASK> | DNS Server |

(1) Optional.
There can be several NTP servers.
SNMP servers are configured for Fault Management. For security reasons, using Create SNMPv3 Target is highly recommended; the legacy versions, Create SNMPv2C Target and Create SNMPv1 Target, can also be used.
Optionally, DNS servers can be defined in the SAPC.
3.3.2 NSP 6.1 Internal IP Addresses
This section covers the internal IP addresses of the SAPC Node that are needed when configuring it.
| IP Address | Node | Interface | Comment |
|---|---|---|---|
| 172.16.100.0/24 | Network | N/A | VLAN sapc_internal_sp. Cluster internal network (backplane) |
| .1 | SCX-0-0 | LOCALHOST(1) | IP address assigned on SCX-0-0 for ARP target and set by DMX |
| .2 | SCX-0-25 | LOCALHOST(1) | IP address assigned on SCX-0-25 for ARP target and set by DMX |
| .121 | SC-1 | bond0 | |
| .122 | SC-2 | bond0 | |
| .200 - .232 | N/A | N/A | Temporary IP address pool for scaled blades |
| .n | PL-n | bond0 | |
| .105 | SC-1 or SC-2 | bond0:3 | LA-LDAP movable IP address |
| .241 | SC-1 or SC-2 | bond0:4 | uetrace movable IP address |
| .242 | SC-1 or SC-2 | bond0:2 | NFS movable IP address |
| .243 | SC-1 or SC-2 | bond0:1 | Boot movable IP address |
| .244 | One of the PL nodes | bond0:1 | SS7CAF CPM movable IP address |
| 192.168.100.0/24 | Network | N/A | VLAN sapc_mgmt_sp. System management network |
| .1 | Hypervisor 1 | mgmt1 | IP address assigned on Host 1 mgmt1 interface for system management |
| .2 | Hypervisor 2 | mgmt1 | IP address assigned on Host 2 mgmt1 interface for system management |
| .3 | Hypervisor 1 | mgmt2 | IP address assigned on Host 1 mgmt2 interface for system management |
| .4 | Hypervisor 2 | mgmt2 | IP address assigned on Host 2 mgmt2 interface for system management |
| .126 | SC-1 | bond0 | IP address assigned on SC-1 for system management |
| .127 | SC-2 | bond0 | IP address assigned on SC-2 for system management |

(1) LOCALHOST, link to own SCXB host processor.
3.4 NSP 6.1 TSP Legacy Considerations
To maximize reuse of existing elements, the DMX collapsed northbound IP address can be a free IP address in the sapc_om2_sp subnetwork, so that no additional networks need to be provisioned and routed on top of the existing ones in TSP configurations.
| TSP Network | PNF Network | Use |
|---|---|---|
| IO Management | sapc_om2_sp | Hypervisor 1 and 2, DMX northbound, gateway router 1, gateway router 2, and gateway router VRRP |
| OAM VIP | sapc_om1_sp | |
| Traffic VIP | sapc_sig1_sp | |
4 NSP 6.1 Network Configuration Guide Annex
4.1 NSP 6.1 VLANs and Ports, Overview
This section shows the switch configuration, VLANs, and ports graphically. Readability is improved when printed in color.
Figure 13 VLANs and Ports Overview. SCX-0-0
Figure 14 VLANs and Ports Overview. SCX-0-25
Figure 15 VLANs and Ports Overview. SCX-1-0
Figure 16 VLANs and Ports Overview. SCX-1-25
Figure 17 VLANs and Ports Overview. SCX-2-0
Figure 18 VLANs and Ports Overview. SCX-2-25

Contents

















