SAPC VNF Network Configuration Guide

1 Introduction
This document provides information to define the hardware, software, network components, and network configuration needed to run the SAPC in the supported Cloud environments:
- OpenStack
- VMware vSphere
- VMware vCloudDirector
2 Overview
This section provides an overview of the hardware and software components used for the SAPC internal and external networks in a Cloud deployment, as well as a general network description. By design, a Cloud deployment unties the SAPC from the hardware; therefore, all hardware components, and some of the software components, are part of the Cloud-provided platform rather than of the SAPC itself.
Even though those components are not part of the SAPC and are out of the scope of this document, they are briefly described in this section for clarity.
From the point of view of network configuration, consider the following elements:
- Hardware components
- Gateway (GW), specified as part of the Cloud Infrastructure, also called a site router
- Compute hosts on which the virtual machines execute, specified as part of the Cloud Infrastructure
- Controller hosts on which the CEE Cloud Infrastructure Controller executes, specified as part of the Cloud Infrastructure for CEE deployments
- Hosts on which the Cloud Manager executes: Ericsson Cloud Manager (ECM)/Atlas for CEE deployments, and vCenter/vCloudDirector for VMware deployments; specified as part of the Cloud Infrastructure
- Software components
- CEE: based on Mirantis OpenStack, includes Ubuntu Linux as the Operating System (OS), Kernel-based Virtual Machine (KVM) as the hypervisor, and Open vSwitch (OVS) as the virtual switch on the compute hosts
- VMware: vSphere virtualization platform including ESXi as the OS and hypervisor, and Distributed Virtual Switches (vDS) for the switching configuration
- Virtual routers (optional part of the virtual SAPC)
- The SAPC software
Figure 1 shows the SAPC network model used in a Cloud environment. There are four virtual machines, each with a different role.
- SC-1 and SC-2 are the System Controllers (SC)
- PL-3 and PL-4 are the traffic payloads
There are some scenarios where the customer can choose to deploy virtual routers. For such cases, Figure 2 shows the corresponding network model. There are four additional virtual machines, which are directly connected to the GWs.
- SC-1 and SC-2 are the SCs
- PL-3 and PL-4 are the traffic payloads
- VR-1 and VR-2 are the virtual routers providing access to the SAPC Operation and Maintenance (OAM) Virtual IP (VIP)
- VR-3 and VR-4 are the virtual routers providing access to the SAPC Traffic VIP
2.1 Hardware Components
2.1.1 Gateway
The GW is the connection point between the physical world and the Cloud virtual infrastructure. It can also be referred to as a site router.
2.1.2 Compute Hosts
Compute hosts provide infrastructure resources (CPU, RAM, and Disk) to the Cloud environment and they make connectivity possible among different virtual machines deployed in the Cloud environment.
2.1.3 Controller Hosts
Controller hosts manage infrastructure resources (CPU, RAM, and Disk) in the Cloud environment and provide the OpenStack Application Programming Interfaces (APIs) towards the upper layer, that is, the ECM application.
2.1.4 Cloud Manager Hosts
Hosts where the Cloud Manager is deployed. The Cloud Manager is the central point for managing the Cloud infrastructure and the Virtual Applications (vAPPs) deployed on top of it.
2.2 Software Components
2.2.1 Mirantis OpenStack (MOS)
The OpenStack delivery included in the CEE, provided by Mirantis.
2.2.2 Ubuntu Hypervisor (KVM)
Ubuntu is a Linux distribution used on the physical hosts as part of the CEE. Both compute and controller hosts run this Linux distribution as the OS to provide the required services as part of the OpenStack Cloud Manager Platform.
However, the Kernel-based Virtual Machine (KVM) modules are installed only on compute hosts; controllers do not require them, since they do not provide any computing or infrastructure resources to the cloud environment. The Ubuntu OS and KVM modules are included in the Mirantis OpenStack delivery (CEE) and are automatically deployed on the physical hosts depending on the role assigned during deployment.
The KVM is a full virtualization solution for x86 processors supporting hardware virtualization (Intel VT or AMD-V). It consists of two main components:
- A set of Kernel modules (kvm.ko, kvm-intel.ko, and kvm-amd.ko) providing the core virtualization infrastructure and processor-specific drivers
- A userspace program (qemu-kvm) that provides emulation for virtual devices and control mechanisms to manage VM Guests (virtual machines)
The term KVM properly refers to the kernel-level virtualization functionality, but in practice it is more commonly used to refer to the userspace component. VM Guests (virtual machines), virtual storage, and networks can be managed with libvirt-based and QEMU tools.
Libvirt is a library that provides an API to manage VM Guests based on different virtualization solutions, for example, KVM and Xen. Tools built on it offer both a graphical user interface and a command line program. The QEMU tools are KVM/QEMU-specific and are only available on the command line.
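As a rough sketch of the module split described above, the following Python snippet decides which KVM kernel modules a compute host needs from its CPU vendor string (as reported in /proc/cpuinfo). This is illustrative only; the helper names are ours, not part of the CEE delivery.

```python
# Illustrative sketch: which KVM kernel modules a compute host needs,
# given its CPU vendor. The module names (kvm, kvm_intel, kvm_amd) are
# the ones listed above; the functions themselves are our own helpers.

def required_kvm_modules(cpu_vendor: str) -> list:
    """Return the KVM kernel modules needed for the given CPU vendor."""
    common = ["kvm"]                      # core virtualization infrastructure
    vendor_specific = {
        "GenuineIntel": "kvm_intel",      # Intel VT support
        "AuthenticAMD": "kvm_amd",        # AMD-V support
    }
    driver = vendor_specific.get(cpu_vendor)
    if driver is None:
        raise ValueError(f"unsupported CPU vendor: {cpu_vendor}")
    return common + [driver]

def kvm_ready(loaded_modules: set, cpu_vendor: str) -> bool:
    """True if all required KVM modules are loaded (e.g. parsed from /proc/modules)."""
    return all(m in loaded_modules for m in required_kvm_modules(cpu_vendor))
```

On a controller host, where no KVM modules are deployed, `kvm_ready` would simply return False, matching the role split described above.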
2.2.3 Open Virtual Switch
Similarly to KVM, the Open vSwitch modules provide network connectivity among all members of the OpenStack Cloud Manager Platform, and are the basis for virtual machine connectivity in the Cloud Infrastructure.
The Open vSwitch is also included in the Mirantis OpenStack delivery (CEE) and is also deployed at the time of CEE installation.
2.2.4 vSphere ESXi
It is a VMware hypervisor for deploying and serving virtual machines.
2.2.5 vSphere Distributed Virtual Switch
The vSphere Distributed Switch (VDS) provides a centralized interface to configure, monitor, and administer virtual machine access switching for the entire data center.
2.2.6 Virtual Router
When virtual routers are used in active-active geographical redundancy or standalone deployments, they remove the need to manually configure the Open Shortest Path First (OSPF) mechanism during SAPC installation, as well as the need for OSPF protocol support or license handling in the physical routers where the SAPC is finally deployed.
They also provide redundant access to the SAPC from the Cloud environment through the Virtual Router Redundancy Protocol (VRRP).
The virtual routers use the Vyatta Software for this purpose.
2.2.7 VIP addresses
The eVIP Component makes the VIP addresses accessible from the customer network and isolates the SAPC cluster from the outside network.
- Note:
- In this document, VIP means a moveable IP address that can be found in one Virtual Machine (VM) of a high-availability VM cluster. It has no relation to the concept of a virtual IP address in OpenStack.
There are two ways to achieve this:
- Using the OSPF v2 protocol by creating adjacency with the OSPF neighbors, either with the virtual routers or with site routers
- Using static routing with a set of predefined IP addresses
2.3 Network
2.3.1 General Overview
The following networks are used for SAPC connectivity:
- A network for internal communication purposes among all the virtual machines in the SAPC cluster
- A network for Transparent Inter-Process Communication (TIPC) purposes among all the virtual machines belonging to the SAPC cluster
- A network for OAM VIPs between the SC virtual machines and either the site routers or the virtual routers (VR-1 and VR-2)
- A network or networks for Traffic VIPs between the Traffic Payload virtual machines with external access (PL-3 and PL-4) and either the site routers or the virtual routers (VR-3 and VR-4)
In addition, when virtual routers are part of the deployment:
- External OAM networks to provide external access to the VR-1 and VR-2 routers.
- External Traffic network or networks to provide external access to the VR-3 and VR-4 routers.
2.3.2 External Connectivity
The SAPC provides several VIP addresses to serve other Network Virtualization Functions: VIPs for OAM serve Operation and Maintenance functions, and VIPs for Traffic provide policy controller functions to the rest of the nodes.
VIPs for OAM are provided by both SCs, while VIPs for Traffic are provided by a maximum of 6 Payloads (PLs) at the same time. Those VIP addresses are reachable from the outside world either directly with static routing, or discovered using the OSPF protocol, with or without virtual routers (VR-1, VR-2, VR-3, and VR-4).
2.3.2.1 SAPC Networks Designated for Static Routing
The networks used in the SAPC for interconnection between the so-called Node Front-Ends (SC-1 and SC-2 providing VIPs for OAM, and some of the PLs providing VIPs for Traffic) and the site routers are called OAMVip<x> and TrafficVip<y>.
The site routers have to be configured with static routes to all front-ends for reaching each specific VIP. The SAPC provides a redundancy mechanism to make sure that all defined IP destination addresses (front-end IP addresses) remain available even if a front-end element goes down.
When virtual routers are deployed, there is a particular case where static routing from site routers is set up, but internally in the SAPC the OSPF is used. For more information on this solution, see Section 2.3.2.2.
2.3.2.2 SAPC Networks Designated for OSPF Discovery
When OSPF is used, the OAMVip<x> and TrafficVip<y> networks are used for distributing the VIP addresses, but in this case, they are part of OSPF areas.
Because the OSPF protocol is activated between these virtual machines and the virtual routers, all VIP addresses are automatically included in the routing tables of the virtual routers, and the SAPC automatically learns its default routes.
These networks also provide redundancy to the SAPC to guarantee the availability of the VIP addresses in case of failure.
When virtual routers are deployed, these networks are internal and not visible or routable from the customer network.
With these virtual routers, the connection to a customer network is done through the ExtOam<x> and ExtTraffic<y> networks. Two scenarios are possible:
- The customer has got an OSPF backbone area, adjacent to the SAPC virtual routers. In this case, the ExtOam<x> and ExtTraffic<y> networks can be included in the backbone, or they can be included in the corresponding OSPF area together with the OAMVip<x> and TrafficVip<y> networks.
- There is no backbone area in the customer network. In this case, the area 0.0.0.0 is created internally between the virtual routers for VIP propagation. The routing to a VIP address from the site routers is done statically. A VRRP address for each external network is also provided for this purpose.
- Note:
- Although the VIP addresses are propagated internally in the SAPC using OSPF, they are actually accessed externally from the site routers using static routing.
2.3.3 Internal Connectivity
In the minimal configuration, the SAPC cluster consists of four processors:
- SC-1
- SC-2
- PL-3
- PL-4
They require connectivity among them to exchange the different types of traffic needed for different purposes. This is possible through two internal subnets:
- One subnet for TIPC communication
- One subnet for other internal communication
All VMs are connected internally to these subnets through the eth0 and eth1 interfaces.
2.3.4 Preconfigured Values
The particular values of the network configuration described in this document are the ones preconfigured in the SAPC. Although IPv6 is also supported for the External Networks, all the preconfigured values are IPv4 addresses. Some of them can be changed, based on operator needs, during deployment. To check which values can be modified, refer to SAPC VNF Descriptor Generator Tool.
3 Network Configuration Solutions
This section specifies how the SAPC is connected to the network. All the external networks and IP addresses described in this section are reachable through the customer network after a successful SAPC deployment in the Cloud environment. Since the Cloud based SAPC node uses previously installed images for the virtual machines, all the details (IP addresses, networks, and GWs) referred to in this section are already configured in the SAPC by default.
3.1 Solution to Define Unique OAM and Traffic Networks
This section describes the SAPC configuration to support one OAM network to serve OAM functions and one traffic network to provide policy controller functions to the rest of the nodes.
Figure 3 shows the general network overview:
3.1.1 Internal Network Configuration
3.1.1.1 IP Addressing
The Internal0 and Internal1 networks, shown in Figure 3, provide network connectivity between processors in the SAPC cluster. Each processor has two interfaces, eth0 and eth1, connected to the internal networks composing the SAPC backplane.
| IP Address | Assign To |
|---|---|
| 172.16.100.0/24 | Network |
| .1 | SC-1 |
| .2 | SC-2 |
| .3 | PL-3 |
| .4 | PL-4 |
| .x | PL-x if more traffic payloads are necessary |
| TIPC Node Address | Assign To |
|---|---|
| 1.1.1 | SC-1 |
| 1.1.2 | SC-2 |
| 1.1.3 | PL-3 |
| 1.1.4 | PL-4 |
| 1.1.x | PL-x if more traffic payloads are necessary |
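The addressing pattern in the two tables above can be sketched with a short script. This is illustrative only; the helper names are ours, not part of any SAPC tooling.

```python
import ipaddress

# Sketch of the addressing pattern in the tables above: node number N
# gets host address .N on 172.16.100.0/24 and TIPC node address 1.1.N.

INTERNAL_NET = ipaddress.ip_network("172.16.100.0/24")

def internal_ip(node_number):
    """Internal0 host address of node N (e.g. SC-1 is node 1)."""
    return str(INTERNAL_NET.network_address + node_number)

def tipc_address(node_number):
    """TIPC node address of node N (zone 1, cluster 1)."""
    return f"1.1.{node_number}"
```

For example, adding a fifth payload (PL-5) would follow the same pattern: host address .5 and TIPC node address 1.1.5.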
3.1.1.2 Extra Services over the Internal0 Network
Every service (NFS, and so on) is offered on a different IP address and is provided by the SC acting as the primary one:
| IP Address | Assign To |
|---|---|
| 172.16.100.0/24 | Network |
| .100 (1) | SC-1, SC-2 |
| .105 (2) | SC-1, SC-2 |
| .200 (3) | SC-1, SC-2 |
| .241 (4) | SC-1, SC-2 |
| .244 (5) | PL-3 (6), PL-4 (6) |
| .245 to .254 (7) | Scalability temporary pool for any added payload |
| .255 | Broadcast |

(1) NFS movable IP. eth0:1 alias interface
(2) la-ldap movable IP. eth0:3 alias interface
(3) boot movable IP. eth0:2 alias interface
(4) uetrace movable IP. eth0:4 alias interface
(5) SCTP movable IP. eth0:1 alias interface
(6) In the minimal configuration of the SAPC node
(7) Scalability temporary pool
3.1.2 External/VIP Networks Configuration
3.1.2.1 eVIP Configuration Overview
Traffic is separated into two networks through which the VIP-OAM and VIP-GX VIPs are reachable. The OAM-VIP network encloses the SCs and the Traffic-VIP network encloses the PLs. From the eVIP point of view, one Front-End Element (FEE) manages each kind of traffic on each processor it runs on.
Figure 4 shows how to configure the VIP-OAM.
In deployments that require a provisioning address that is different from the OAM Address, the SAPC requires a new VIP for handling provisioning, that is VIP-PROVISIONING. This new VIP is published to the external network through the same FEEs as the VIP-OAM.
Figure 5 shows how to configure the VIP-GX (traffic VIP).
Additional VIPs can be also made available through the same network. For instance, in deployments with an external database like the Ericsson Centralized User Database (CUDB), the SAPC requires a new VIP for handling the Lightweight Directory Access Protocol (LDAP) traffic and the Simple Object Access Protocol (SOAP) notifications traffic with the external database.
For both OAM and traffic VIPs, static routes are created from the GWs to the FEEs, and the VRRP address between the GWs is the default GW for the eVIP front-ends.
The FEE network is mapped to a virtual interface on all PLs belonging to a deployment, whether they host a front-end or not. A wide netmask is required when setting up this network in the infrastructure so that it can contain the maximum number of PLs allowed in a deployment.
Therefore in OpenStack deployments, where the IP address assignment and the netmask are required together at virtual network creation, the networks created in the infrastructure at deployment time can differ from the ones that appear in the adapt_cluster.cfg configuration file. That is, the network address defined in the infrastructure does not match the network address used for the eVIP front-ends in the adapt_cluster.cfg file.
In addition, a wider netmask is required for future upgrade purposes.
When the external networks are created automatically in the OpenStack infrastructure together with the SAPC deployment (the vnets_exist or external_networks parameter is set to false or empty in the SAPC.cfg file, respectively), the deployment scripts handle this consideration.
If the networks are created manually in the infrastructure (the vnets_exist or external_networks parameter is set to true or to <net_id> in the SAPC.cfg file, respectively), set the netmask to /27 for OAM networks and /24 for traffic networks.
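The sizing rule above can be checked with the standard Python ipaddress module. Only the prefix lengths (/27 for OAM, /24 for traffic) come from this section; the network addresses below are placeholders.

```python
import ipaddress

# Sketch of the netmask sizing rule: the manually created networks must
# leave room for the maximum number of front-ends in a deployment.

def usable_hosts(cidr):
    """Number of usable host addresses in an IPv4 subnet."""
    return ipaddress.ip_network(cidr).num_addresses - 2  # minus network and broadcast

oam_capacity = usable_hosts("10.41.30.0/27")      # 30 usable addresses
traffic_capacity = usable_hosts("10.41.70.0/24")  # 254 usable addresses
```

By comparison, a /29 such as the preconfigured OAMVip network offers only 6 usable addresses, which is why the wider /27 is required when the network must also cover future scaling and upgrades.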
3.1.2.2 SAPC VIP Addresses
| VIP Description | VIP | Use |
|---|---|---|
| VIP-OAM | 10.58.31.7/32 | |
| VIP-PROVISIONING (1) | N/A | |
| VIP-GX | 10.58.31.137/32 | SAPC Traffic VIP address. All the payload traffic from all the available interfaces (Gx, Rx, Sy, and so on) is handled through this VIP. |
| VIP-ExtDB (2) | N/A | VIP address for handling the access to the external database |

(1) Only for deployments that require a provisioning address that is different from the OAM VIP address.
(2) Only in deployments with an external database.
3.1.2.3 External Networks
The following networks are configured in the configuration file delivered as a template to interconnect the SAPC with the customer network. The provided network IDs and IP addresses are examples only.
| Network Name | Network | Default GW | Use |
|---|---|---|---|
| OAMVip | 10.41.30.224/29 | | |
| TrafficVip | 10.41.70.224/28 | | |

(1) VRRP address with static routing
(2) Only used in GeoRed deployments with OSPF
3.1.2.4 IP Addressing of External Elements
This section covers all the IP addresses in the customer network that do not belong to the SAPC but are configured in the SAPC to interoperate with other nodes. No default values are configured for them, since they are customer dependent.
| IP Address | Network | Use |
|---|---|---|
| <NTP-SERVER> | <NTP-NETWORK>/<NTP-NETMASK> | NTP server |
| <SNMP-SERVER> | <SNMP-NETWORK>/<SNMP-NETMASK> | SNMP server |
| <DNS-SERVER> | <DNS-NETWORK>/<DNS-NETMASK> | DNS server |
Network Time Protocol (NTP) servers are configured by the adapt_cluster tool during deployment. For further details, see SAPC VNF Descriptor Generator Tool.
Simple Network Management Protocol (SNMP) servers are configured for fault management. For security reasons, follow the instructions as written in Create SNMPv3 Target. Legacy versions can also be used as written in Create SNMPv2C Target and Create SNMPv1 Target.
Optionally, Domain Name System (DNS) servers can also be defined in the SAPC. For further details on their configuration, refer to LDE Management Guide.
3.1.3 Gateway Router Configuration
This section covers all the IP routes to be configured in the GW routers to interoperate with each SAPC:
- Configuration in OAM GWs: to get to the OAM VIP, define static routes for the FEE IP addresses configured in the SCs
- Configuration in Traffic GWs: to get to the Traffic VIP, define static routes for the FEE IP addresses configured in the PLs
Equal-Cost Multipath (ECMP) is configured for traffic distribution among FEE IP addresses for each traffic type.
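As a rough illustration of how ECMP spreads flows across the static routes to the FEEs, the following sketch hashes a flow 5-tuple to pick a next hop. Real GW routers implement this in hardware; the FEE addresses and the function are illustrative only, not the routers' actual algorithm.

```python
import zlib

# Illustrative sketch of hash-based ECMP next-hop selection: flows are
# spread across the static routes to the FEE addresses by hashing the
# flow 5-tuple, so packets of one flow always take the same path.

FEE_NEXT_HOPS = ["172.16.113.3", "172.16.113.4"]  # placeholders, e.g. PL-3 and PL-4 FEEs

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto="sctp"):
    """Deterministically pick one FEE next hop for a flow 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return FEE_NEXT_HOPS[zlib.crc32(key) % len(FEE_NEXT_HOPS)]
```

The deterministic hash keeps each flow on a single FEE while distributing the aggregate load across all of them, which matches the per-traffic-type distribution described above.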
3.2 Solution to Define Unique OAM and Traffic Networks with Virtual Routers
This section describes the SAPC configuration to support one OAM Network to serve OAM functions and one traffic network to provide policy controller functions to the rest of the nodes.
Figure 6 shows the general network overview.
3.2.1 Internal Network Configuration
See Section 3.1.1.
3.2.2 VIP Networks Configuration
3.2.2.1 Networks for OSPF v2
Table 7 shows the networks allocated inside the SAPC node images, in which the OSPF protocol is enabled. They are already defined in the SAPC and in the configuration of the virtual routers by default to ensure proper operation after the SAPC deployment in the Cloud environment.
| Network Name | Subnet | Use |
|---|---|---|
| OamVip0 | 172.16.213.0/29 | OSPFv2 attachment between SCs and VR-1, VR-2 |
| TrafficVip0-0 | 172.16.113.0/28 | OSPFv2 attachment between PL-3 and PL-4, and VR-3, VR-4 |
3.2.2.2 SAPC VIP Addresses
| VIP Description | VIP | Use |
|---|---|---|
| VIP-OAM | 10.58.31.7/32 | |
| VIP-PROVISIONING (1) | N/A | |
| VIP-GX | 10.58.31.137/32 | SAPC Traffic VIP address. All the payload traffic from all the available interfaces (Gx, Rx, Sy, and so on) is handled through this VIP. |
| VIP-ExtDB (2) | N/A | VIP address for handling the access to the external database |

(1) Only for deployments that require a provisioning address that is different from the OAM VIP address.
(2) Only in deployments with an external database.
3.2.2.3 eVIP Configuration
This section describes the mapping of networks to Virtual Network Interface Cards (vNICs) in the different pieces of networking equipment related to Ericsson Evolved Virtual IP (eVIP) components.
This section describes the eVIP configuration defined in the SAPC images for the Cloud environment. The evip.xml configuration file included in the SAPC images holds many parameters, however, this document describes only the ones that are key to the design.
3.2.2.3.1 eVIP Configuration Overview
Traffic is separated into four networks through which the VIP-OAM and VIP-GX VIPs are propagated. The OAM-VIP networks enclose the SCs and the Traffic-VIP networks enclose the PLs. From the eVIP point of view, one FEE manages each kind of traffic on each processor it runs on.
Figure 7 shows how to configure the VIP-OAM that is specified in Section 3.2.2.3.3.
In deployments that require a provisioning VIP address that is different from the OAM VIP Address, the SAPC requires a new VIP for handling provisioning, that is VIP-PROVISIONING. This new VIP is published to the external network through the same FEEs as the VIP-OAM.
Figure 8 shows how to configure the VIP-GX that is specified in Section 3.2.2.3.3.
In deployments with an external database like the CUDB, the SAPC requires a new VIP for handling the LDAP traffic and the SOAP notifications traffic with the external database. This new VIP is published to the external network through the same FEEs as the VIP-GX.
3.2.2.3.2 eVIP Elements
The table below lists the distribution of eVIP elements. The location of the eVIP front-ends (FEEs) requires a corresponding configuration in the network, that is, in the virtual routers. This configuration is already made by default and no adjustment is required.
| Abstract Load Balancer (ALB) | Virtual IP (VIP) | Front-End Element (FEE) | Load Balancer Element (LBE) | Security Element (SE) |
|---|---|---|---|---|
| alb_oam | <VIP-OAM> 10.58.31.7/32 | SC-1 (fee_1), SC-2 (fee_2) | lbe_1, lbe_2 | se_1, se_2 |
| alb_tr | <VIP-GX> 10.58.31.137/32, <VIP-ExtDB> (1) | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6 |

(1) Only in deployments with an external database.
3.2.2.3.3 OSPF v2 Areas
The traffic is separated into two OSPF v2 areas and ALBs. Each ALB has links to the IP addresses defined for the FEEs and to the remote GWs, which are the virtual routers in this design. Table 10 shows how the network IPs are defined in this Cloud configuration.
| ALB | FEE | Network | FEE IP | FEE Interface | Virtual Router IP |
|---|---|---|---|---|---|
| alb_oam (Area=10.1.13.1, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0) | fee_1 | 172.16.213.0/29 | .3 | SC-1 eth2 | .1, .2 |
| | fee_2 | | .4 | SC-2 eth2 | |
| alb_tr (Area=10.1.13.2, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0) | fee_1 | 172.16.113.0/28 | .3 | PL-3 eth2 | .1, .2 |
| | fee_2 | | .4 | PL-4 eth2 | |
| | fee_3 | | .5 | PL-5 eth2 | |
| | fee_4 | | .6 | PL-6 eth2 | |
| | fee_5 | | .7 | PL-7 eth2 | |
| | fee_6 | | .8 | PL-8 eth2 | |
3.2.2.4 Virtual Router Configuration
Virtual router configurations are part of their images, similarly to other VMs composing the SAPC. Apart from the OSPF-related configuration previously described in Section 3.2.2.3.3, the following configuration is set up in the respective images and is part of the SAPC delivery.
| OSPF Area | Router IDs | OSPF Parameters | Use |
|---|---|---|---|
| 10.1.13.1 | 172.16.213.1 (Virtual Router 1), 172.16.213.2 (Virtual Router 2) | Hello=3 seconds, Dead=9 seconds, Retransmit=5 seconds, Delay=1 second, Priority=1 | |
| 10.1.13.2 | 172.16.113.1 (Virtual Router 3), 172.16.113.2 (Virtual Router 4) | | |
3.2.3 External Network Configuration
3.2.3.1 External Networks
The following networks are configured to interconnect the SAPC with the customer network.
| Network Name | Network | Default GW | Use |
|---|---|---|---|
| External-OAM | 10.41.30.224/29 | | |
| External-Traffic | 10.41.70.224/29 | | Traffic network for the SAPC node (VR-3, VR-4) |

(1) External VRRP address between GWs
(2) Internal VRRP address between virtual routers
3.2.3.2 IP Addressing
Each SAPC node includes a set of preconfigured IP addresses.
3.2.3.2.1 Virtual Routers IP Addresses
| IP Address | Network | Value | Use |
|---|---|---|---|
| VR-1 OAM | 10.41.30.224/29 | 10.41.30.227/29 | IP address of VR-1 on the ExtOAM Network |
| VR-2 OAM | | 10.41.30.228/29 | IP address of VR-2 on the ExtOAM Network |
| OAM VRRP | | 10.41.30.226/29 | |
| VR-3 Traffic | 10.41.70.224/29 | 10.41.70.227/29 | IP address of VR-3 on the ExtTraffic Network |
| VR-4 Traffic | | 10.41.70.228/29 | IP address of VR-4 on the ExtTraffic Network |
| Traffic VRRP | | 10.41.70.226/29 | |
3.2.3.2.2 IP Addresses of External Elements
See Section 3.1.2.4.
3.2.3.3 Virtual Router Configuration
Virtual router configurations are part of their images, similarly to other VMs composing the SAPC. The following configuration is set up in the respective images and is part of the SAPC delivery.
| OSPF Area | OSPF Parameters | Use |
|---|---|---|
| Backbone area (0.0.0.0) | Dead Interval: 9 seconds, Hello Interval: 3 seconds, Retransmit: 5 seconds, Delay: 1 second, Priority=1 | OSPF backbone |
| VRRP Group | Virtual Router | VRRP Parameters | Use |
|---|---|---|---|
| 10 | Virtual Router 1 | Priority=150 | |
| | Virtual Router 2 | Priority=100 | |
| 20 | Virtual Router 3 | Priority=150 | External Traffic VRRP (10.41.70.226) |
| | Virtual Router 4 | Priority=100 | |
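The VRRP priorities in the table above (150 for the preferred router, 100 for its peer) drive a simple master election: within a group, the alive router with the highest priority owns the VRRP address, and the backup takes over on failure. The sketch below models this behavior only; it is not Vyatta configuration syntax.

```python
# Illustrative model of VRRP master election within one VRRP group:
# the alive router with the highest priority becomes master and answers
# for the group's virtual (VRRP) address.

def vrrp_master(priorities, alive):
    """Return the alive router with the highest priority, or None if all are down."""
    candidates = {name: prio for name, prio in priorities.items() if name in alive}
    return max(candidates, key=candidates.get) if candidates else None

# Group 20 (External Traffic VRRP), priorities taken from the table above
group_20 = {"Virtual Router 3": 150, "Virtual Router 4": 100}
```

With both routers up, Virtual Router 3 is master; if it fails, Virtual Router 4 takes over the VRRP address, so the static routes from the site routers keep working.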
3.3 Solution to Define Multiple OAM and Traffic Networks
The traffic separation configurations can be applied separately, that is, it is possible to configure a unique OAM network with multiple traffic networks, or to choose multiple OAM networks with a single traffic network.
This section describes the SAPC configuration to support two OAM networks to serve general OAM functions and provisioning function, and several traffic networks to provide policy controller functions to the rest of the nodes.
These traffic functions can be separated as follows:
- A network for Rx and Sy traffic support, which can also be separated into different networks
- A network for the rest of the policy controller functions supported, mainly Gx traffic support
- A network for connection to an external database
- A network for connection to a redundant SAPC in the GeoRed scenario
- More networks can be used for other traffic purposes not explicitly mentioned above
Figure 9 shows the general network overview with two OAM networks and two traffic networks. The same concept is applicable when more traffic networks are to be created, with an additional eth interface in the PL elements for each extra network.
3.3.1 Internal Network Configuration
In this section the same configuration applies as described in Section 3.1.1.
3.3.2 External/VIP Network Configuration
3.3.2.1 Networks for eVIP
Table 16 shows a proposal for the networks to be allocated, in which VIP addresses are made available. They are customer dependent, since some of the addresses have to be defined in elements acting as GWs.
| Network Name | Subnet | Use |
|---|---|---|
| OamVip0 | 172.16.213.0/29 | Between OAM0 FEEs and GWs |
| OamVip1 | 172.16.213.8/29 | Between OAM1 FEEs and GWs |
| TrafficVip0 | 172.16.113.0/28 | Between Traffic0 FEEs and GWs |
| TrafficVip1 | 172.16.113.32/28 | Between Traffic1 FEEs and GWs |
| TrafficVipn | 172.16.113.xx/28 | Between Trafficn FEEs and GWs |
3.3.2.2 SAPC VIP Addresses
Additional VIP addresses can be configured in the SAPC. Each VIP address is assigned to a traffic network. Several VIP addresses can share a traffic network for communication purposes.
| VIP Description | VIP | Use |
|---|---|---|
| VIP-OAM | 10.58.31.7/32 | |
| VIP-PROVISIONING (1) | 10.58.32.7/32 | |
| VIP-GX | 10.58.31.137/32 | |
| VIP-RX | 10.58.32.142/32 | |
| VIP-ExtDB (2) | N/A | VIP address for handling the access to the external database |

(1) Only for deployments that require a provisioning address that is different from the OAM VIP address. Traffic to this address goes through the second OAM network.
(2) Only in deployments with an external database.
3.3.2.3 eVIP Configuration
This section describes the mapping of networks to vNICs in the different pieces of networking equipment related to eVIP components.
This section describes the eVIP configuration defined in the SAPC images for the Cloud environment. The evip.xml configuration file included in the SAPC images holds many parameters, however, this document describes only the ones that are key to the design.
3.3.2.3.1 eVIP Configuration Overview
Traffic is separated into different networks through which the VIP-OAM, VIP-GX, VIP-RX, and other traffic VIPs are propagated. The VIP-OAM networks enclose the SCs and the traffic VIPs enclose the PLs. From the eVIP point of view, one FEE manages each kind of traffic on each processor it runs on.
Figure 10 shows how to configure the VIP-OAM as specified in Section 3.3.2.3.2.
In deployments that require a provisioning VIP address that is different from the OAM VIP Address, the SAPC requires a new VIP for handling provisioning, that is VIP-PROVISIONING. This new VIP is published to the external network either through the same FEEs as the VIP-OAM (unique OAM network) or through separated FEEs in an additional OAM network as described in this chapter.
Figure 11 shows how to configure several traffic VIPs as specified in Section 3.3.2.3.2.
In deployments with an external database like the CUDB, the SAPC requires a new VIP for handling the LDAP traffic and the SOAP notifications traffic with the external database. This new VIP can be published to the external network through the same FEEs as the existing VIP addresses, or through an extra separated traffic channel.
For both OAM and traffic VIPs, static routes are created from the GWs to the FEEs, and the VRRP address between the GWs is the default GW for the eVIP front-ends.
3.3.2.3.2 eVIP Elements
The table below lists the distribution of eVIP elements. The location of the eVIP FEEs requires the corresponding configuration in the network, that is, in the site routers.
| ALB | VIP | FEE | LBE | SE |
|---|---|---|---|---|
| alb_oam | <VIP-OAM> 10.58.31.7/32 | SC-1 (fee_1), SC-2 (fee_2) | lbe_1, lbe_2 | se_1, se_2 |
| alb_prov | <VIP-PROVISIONING> 10.58.32.7/32 | SC-1 (fee_1), SC-2 (fee_2) | lbe_1, lbe_2 | se_1, se_2 |
| alb_trf_1 | <VIP-GX> 10.58.31.137/32, <VIP-ExtDB> (2) | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6 |
| alb_trf_2 | <VIP-RX> 10.58.32.142/32 | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6 |

(1) When there are fewer FEE elements than indicated here, the corresponding IP addresses are automatically distributed among the existing FEEs.
(2) Only in deployments with an external database. It can also be separated on its own ALB.
3.3.2.4 IP Addressing
Each SAPC includes a set of preconfigured IP addresses.
3.3.2.4.1 IP Addresses of External Elements
In this section the same configuration applies as described in Section 3.1.2.4.
3.3.3 Gateway Router Configuration
This section covers all the IP routes to be configured in the GW routers to interoperate with each SAPC.
- Configuration in OAM GWs:
- Configuration in Traffic GWs: ECMP has to be configured for traffic distribution among the FEE IP addresses for each traffic type.
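As an illustration only, on a Linux-based GW such ECMP static routes could be expressed as iproute2 multipath routes. The VIP and FEE next-hop addresses below follow the example numbering used in the eVIP tables of this document, and the exact syntax depends on the actual router vendor:

```shell
# ECMP static route on the traffic GW for the Gx VIP:
# one next hop per FEE address, so flows are spread across all FEEs.
ip route add 10.58.31.137/32 \
    nexthop via 172.16.113.3 \
    nexthop via 172.16.113.4 \
    nexthop via 172.16.113.5 \
    nexthop via 172.16.113.6 \
    nexthop via 172.16.113.7 \
    nexthop via 172.16.113.8
```

An equivalent route is created per traffic VIP (for example, the Rx VIP) pointing at the FEE addresses of its own network.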
3.4 Solution to Define Multiple OAM and Traffic Networks with Virtual Routers
The traffic separation configurations can be applied independently, that is, it is possible to configure a single OAM network with multiple traffic networks, or multiple OAM networks with a single traffic network.
This section describes the SAPC configuration, with virtual routers included, to support two OAM networks, serving the general OAM functions and the provisioning function, and several traffic networks providing the policy controller functions to the rest of the nodes.
These traffic functions can be separated as follows:
- A network for Rx and Sy traffic support, which can be also separated into different networks
- A network for the rest of the policy controller functions supported, mainly Gx traffic support
- A network for connection to an external database
- A network for connection to a redundant SAPC in the GeoRed scenario
- More networks can be used for other traffic purposes not explicitly mentioned above.
Figure 12 shows the general network overview with two OAM networks and two traffic networks. The same concept applies when more traffic networks are created, with an additional eth interface for each extra network (one in the payload elements, two on the virtual routers).
3.4.1 Internal Network Configuration
In this section the same configuration applies as described in Section 3.1.1.
3.4.2 VIP Network Configuration
3.4.2.1 Networks for OSPF v2
The following table shows the networks allocated inside the SAPC node images in which the OSPF protocol is enabled. They are already defined in the SAPC and in the configuration of the virtual routers by default, to ensure proper operation after the SAPC deployment in the Cloud environment.
| Network Name | Subnet | Use |
|---|---|---|
| OamVip0 | 172.16.213.0/29 | Between OAM0 FEEs and GWs |
| OamVip1 | 172.16.213.8/29 | Between OAM1 FEEs and GWs |
| TrafficVip0 | 172.16.113.0/28 | Between Traffic0 FEEs and GWs |
| TrafficVip1 | 172.16.113.16/28 | Between Traffic1 FEEs and GWs |
| TrafficVipn | 172.16.113.xx/28 | Between Trafficn FEEs and GWs |
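The OSPF enablement itself ships preconfigured in the SAPC images and virtual routers, so no operator action is needed. Purely as a hedged illustration of what an equivalent configuration might look like on an FRRouting-style router (the area IDs are the ones used in the eVIP area table later in this section), the networks above would be announced as:

```
router ospf
 network 172.16.213.0/29 area 10.1.13.1
 network 172.16.213.8/29 area 10.1.13.4
 network 172.16.113.0/28 area 10.1.13.2
 network 172.16.113.16/28 area 10.1.13.3
```

This fragment is illustrative only; the SAPC-delivered configuration is the authoritative one.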
3.4.2.2 SAPC VIP Addresses
See Section 3.3.2.2.
3.4.2.3 eVIP Configuration
This section describes the eVIP configuration defined in the SAPC images for the Cloud environment, including the mapping of networks to vNICs in the different pieces of networking equipment related to the eVIP components. The evip.xml configuration file included in the SAPC images holds many parameters; this document describes only the ones that are key to the design.
3.4.2.3.1 eVIP Configuration Overview
Traffic is separated into several pairs of networks through which the VIP-OAM, VIP-PROVISIONING, VIP-GX, VIP-RX, and other traffic VIPs are propagated. In each pair of networks there is an internal network between the FEEs and the virtual routers, and an external network between the virtual routers and the site routers. The VIP-OAM and VIP-PROVISIONING networks enclose the SCs; the VIP-GX and VIP-RX networks enclose the PLs. From the eVIP point of view, one FEE manages each kind of traffic on each processor it runs on.
Figure 13 shows how to configure the VIP-OAM as specified in Section 3.4.2.3.3.
In deployments that require a provisioning VIP address different from the OAM VIP address, the SAPC requires a new VIP address for handling provisioning, that is, VIP-PROVISIONING. This new VIP is published to the external network either through the same FEEs as the VIP-OAM (single OAM network) or through separate FEEs in an additional OAM network, as described in this chapter.
Figure 14 shows how to configure the VIP-GX and VIP-RX as specified in Section 3.4.2.3.3. The same principle is applicable when additional traffic VIPs need to be configured.
In deployments with an external database like the CUDB, the SAPC requires a new VIP for handling the LDAP traffic and the SOAP notifications traffic with the external database. This new VIP can be published to the external network through the same FEEs as the VIP-GX, or through a separated ALB, with its corresponding FEEs.
3.4.2.3.2 eVIP Elements
The table below lists the distribution of the eVIP elements. The location of the eVIP FEEs requires the corresponding configuration in the network, that is, in the virtual routers. This configuration is already in place by default and requires no adjustment.

| ALB | VIP | FEE | LBE | SE |
|---|---|---|---|---|
| alb_oam | <VIP-OAM> 10.58.31.7/32 | SC-1 (fee_1), SC-2 (fee_2) | lbe_1, lbe_2 | se_1, se_2 |
| alb_prov | <VIP-OAMPROVISIONING> 10.58.32.7/32 | SC-1 (fee_1), SC-2 (fee_2) | lbe_1, lbe_2 | se_1, se_2 |
| alb_trf_1 | <VIP-GX> 10.58.31.137/32, <VIP-ExtDB>(1) | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6 |
| alb_trf_2 | <VIP-RX> 10.58.32.142/32 | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6 |

(1) Only in deployments with an external database.
3.4.2.3.3 OSPF v2 Areas
In the example with two traffic ALBs, the traffic is separated into four OSPF v2 areas and ALBs. If additional VIPs are to be separated, the corresponding OSPF area and ALB are configured following the same principles as in the provided example. Each ALB has links with IPs defined for the FEEs and for the remote GW, which in this design are the virtual routers. Table 21 shows how the network IPs are defined in this Cloud configuration.
| ALB | FEE | Network | FEE IP | FEE Interface | Virtual Router IP |
|---|---|---|---|---|---|
| alb_oam (Area=10.1.13.1, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0) | fee_1 | 172.16.213.0/29 | .3 | SC-1 eth2 | .1, .2 |
| | fee_2 | | .4 | SC-2 eth2 | |
| alb_prov (Area=10.1.13.4, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0) | fee_1 | 172.16.213.8/29 | .11 | SC-1 eth2 | .9, .10 |
| | fee_2 | | .12 | SC-2 eth2 | |
| alb_trf_1 (Area=10.1.13.2, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0) | fee_1 | 172.16.113.0/28 | .3 | PL-3 eth2 | .1, .2 |
| | fee_2 | | .4 | PL-4 eth2 | |
| | fee_3 | | .5 | PL-5 eth2 | |
| | fee_4 | | .6 | PL-6 eth2 | |
| | fee_5 | | .7 | PL-7 eth2 | |
| | fee_6 | | .8 | PL-8 eth2 | |
| alb_trf_2 (Area=10.1.13.3, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0) | fee_1 | 172.16.113.16/28 | .19 | PL-3 eth4 | .17, .18 |
| | fee_2 | | .20 | PL-4 eth4 | |
| | fee_3 | | .21 | PL-5 eth4 | |
| | fee_4 | | .22 | PL-6 eth4 | |
| | fee_5 | | .23 | PL-7 eth4 | |
| | fee_6 | | .24 | PL-8 eth4 | |
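The per-ALB timers in the table (Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0) map onto standard OSPF interface parameters. The SAPC and the virtual routers already carry this configuration; purely as an illustrative FRRouting-style sketch of the equivalent interface settings:

```
interface eth2
 ip ospf hello-interval 3
 ip ospf dead-interval 9
 ip ospf retransmit-interval 5
 ip ospf transmit-delay 1
 ip ospf priority 0
```

Priority 0 prevents the FEE interfaces from ever being elected OSPF designated router on the link.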
3.4.3 External Network Configuration
3.4.3.1 External Networks
The following networks are configured to interconnect the SAPC with the customer network. Example values are provided.
| Network Name | Network | Default GW | Use |
|---|---|---|---|
| ExtOAM0 | 10.41.30.224/29 | 10.41.30.225 | Network for general OAM functions for the SAPC (VR-1, VR-2) |
| ExtOAM1 | 10.41.50.224/29 | 10.41.50.225 | Network for the provisioning function for the SAPC (VR-1, VR-2) |
| ExtTraffic0 | 10.41.70.224/29 | 10.41.70.225 | Traffic network for Gx traffic for the SAPC (VR-3, VR-4) |
| ExtTraffic1 | 10.41.90.224/29 | 10.41.90.225 | Traffic network for Rx and Sy traffic for the SAPC (VR-3, VR-4) |
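A quick way to sanity-check such an addressing plan is with Python's standard ipaddress module. The sketch below (illustrative only, not part of the SAPC) confirms that each default GW in the table is the first usable host of its /29:

```python
import ipaddress

# Example external networks from the table above.
networks = {
    "ExtOAM0": "10.41.30.224/29",
    "ExtOAM1": "10.41.50.224/29",
    "ExtTraffic0": "10.41.70.224/29",
    "ExtTraffic1": "10.41.90.224/29",
}

for name, cidr in networks.items():
    net = ipaddress.ip_network(cidr)
    gw = next(net.hosts())  # first usable address of the subnet
    print(f"{name}: default GW = {gw}")
```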
3.4.3.2 IP Addressing
Each SAPC is configured with a set of IP addresses.
3.4.3.2.1 Virtual Router IP Addresses
This is an example of how the IP addresses can be assigned, based on the networks provided above.
| IP Address | Network | Value | Use |
|---|---|---|---|
| VR-1 OAM0 | 10.41.30.224/29 | 10.41.30.227/29 | IP address of VR-1 on the ExtOAM0 Network |
| VR-2 OAM0 | | 10.41.30.228/29 | IP address of VR-2 on the ExtOAM0 Network |
| OAM0 VRRP | | 10.41.30.226/29 | VRRP address shared by VR-1 and VR-2 on the ExtOAM0 Network |
| VR-1 OAM1 | 10.41.50.224/29 | 10.41.50.227/29 | IP address of VR-1 on the ExtOAM1 Network |
| VR-2 OAM1 | | 10.41.50.228/29 | IP address of VR-2 on the ExtOAM1 Network |
| OAM1 VRRP | | 10.41.50.226/29 | VRRP address shared by VR-1 and VR-2 on the ExtOAM1 Network |
| VR-3 Traffic0 | 10.41.70.224/29 | 10.41.70.227/29 | IP address of VR-3 on the ExtTraffic0 Network |
| VR-4 Traffic0 | | 10.41.70.228/29 | IP address of VR-4 on the ExtTraffic0 Network |
| Traffic0 VRRP | | 10.41.70.226/29 | VRRP address shared by VR-3 and VR-4 on the ExtTraffic0 Network |
| VR-3 Traffic1 | 10.41.90.224/29 | 10.41.90.227/29 | IP address of VR-3 on the ExtTraffic1 Network |
| VR-4 Traffic1 | | 10.41.90.228/29 | IP address of VR-4 on the ExtTraffic1 Network |
| Traffic1 VRRP | | 10.41.90.226/29 | VRRP address shared by VR-3 and VR-4 on the ExtTraffic1 Network |
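Each VRRP address in the table is shared between the paired virtual routers. If a virtual router were built on a stock Linux stack, a keepalived-style fragment for the OAM0 pair could look like the sketch below; the interface name, virtual_router_id, and priority values are assumptions for illustration, not SAPC-delivered values:

```
# On VR-1 (intended master); VR-2 runs the same block with
# state BACKUP and a lower priority.
vrrp_instance ExtOAM0 {
    state MASTER
    interface eth1          # assumed external-facing interface
    virtual_router_id 30    # assumed VRID; must match on both VRs
    priority 150
    advert_int 1
    virtual_ipaddress {
        10.41.30.226/29     # OAM0 VRRP address from the table above
    }
}
```

One such instance per external network (OAM0, OAM1, Traffic0, Traffic1) yields the four shared VRRP addresses listed in the table.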
3.4.3.2.2 IP Addresses of External Elements
In this section the same configuration applies as described in Section 3.1.2.4.