SAPC VNF Network Configuration Guide
Ericsson Service-Aware Policy Controller

Contents

1   Introduction
1.1   Document Purpose and Scope

2   Overview
2.1   Hardware Components
2.2   Software Components
2.3   Network

3   Networks Configuration Solutions
3.1   Solution to Define Unique OAM and Traffic Networks
3.2   Solution to Define Unique OAM and Multiple Traffic Networks

Abstract

The purpose of this document is to give detailed information about the network configuration of the SAPC.


1   Introduction

1.1   Document Purpose and Scope

This document provides the information needed to define the hardware, software, and network components, and the network configuration, required to run the SAPC in the supported Cloud environments.

2   Overview

This section provides an overview of the hardware and software components used for the SAPC internal and external networks in a Cloud deployment, as well as a general network description. A Cloud deployment, by its nature, unties the SAPC Node from the hardware; therefore, all hardware components and some of the software components are not part of the SAPC Node but of the Cloud-provided platform.

Although these components are not part of the SAPC and are out of the scope of this document, they are briefly described in this chapter for clarity.

From the point of view of network configuration, the following elements must be considered:

Figure 1 describes the SAPC network model used in the current Cloud environment. There are eight virtual machines, each with a different role. External access to the application is only possible through the included Virtual Routers VR-1, VR-2, VR-3, and VR-4, which abstract the SAPC Node from the Cloud environment and make its deployment much more flexible and independent.

Figure 1   SAPC Node deployed in Cloud Environment

2.1   Hardware Components

2.1.1   Border Gateway

The Border Gateway is the connection point between the physical world and the Cloud virtual infrastructure.

2.1.2   Compute Hosts

Compute hosts provide infrastructure resources (CPU, RAM, and disk) to the Cloud environment and enable connectivity among the different virtual machines deployed in it.

2.1.3   Controller Hosts

Controller hosts manage the infrastructure resources (CPU, RAM, and disk) of the Cloud environment and provide the OpenStack APIs towards the upper layer (that is, the Ericsson Cloud Manager application).

2.1.4   Cloud Manager Hosts

Hosts where the Cloud Manager is deployed. The Cloud Manager is the central point for managing the Cloud infrastructure and the vAPPs deployed on top of it.

2.2   Software Components

2.2.1   Mirantis OpenStack (MOS)

The OpenStack distribution included in CEE, provided by Mirantis.

2.2.2   Ubuntu Hypervisor (KVM)

Linux distribution used on the physical hosts as part of CEE. Both compute and controller hosts run this Linux distribution as their operating system to provide the required services as part of the OpenStack Cloud Manager Platform.

However, the Kernel-Based Virtual Machine (KVM) modules are only installed on compute hosts; controllers do not require them, since they do not provide any compute or infrastructure resources to the cloud environment. The Ubuntu OS and the KVM modules are included in the Mirantis OpenStack delivery (CEE) and are automatically deployed to the specific physical hosts depending on the role assigned during deployment.

As far as KVM is concerned, KVM is a full virtualization solution for x86 processors supporting hardware virtualization (Intel VT or AMD-V). It consists of two main components: a set of kernel modules (kvm.ko, kvm-intel.ko, and kvm-amd.ko) providing the core virtualization infrastructure and processor-specific drivers, and a userspace program (qemu-kvm) that provides emulation for virtual devices and control mechanisms to manage VM Guests (virtual machines). The term KVM properly refers to the kernel-level virtualization functionality, but in practice it is more commonly used to refer to the userspace component.

VM Guests (virtual machines), virtual storage, and networks can be managed with libvirt-based and QEMU tools. Libvirt is a library that provides an API to manage VM Guests based on different virtualization solutions, among them KVM and Xen; it offers both a graphical user interface and a command-line program. The QEMU tools are KVM/QEMU specific and are only available from the command line.
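
For illustration only, the following minimal Python sketch (assuming the libvirt Python bindings are available on the compute host; it is not part of the SAPC delivery) shows how VM Guests can be listed through the libvirt API mentioned above:

    # Illustrative sketch: list the VM Guests known to the local KVM/QEMU hypervisor.
    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name()}: {state}")
    finally:
        conn.close()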

2.2.3   Open Virtual Switch

Similarly to KVM, Open vSwitch modules are used to provide network connectivity among all members of the OpenStack Cloud Manager Platform, and they are the basis for virtual machine connectivity in the Cloud infrastructure.

Open vSwitch is also included in the Mirantis OpenStack delivery (CEE) and is deployed at the time of CEE installation.

2.2.4   vSphere ESXi

VMware hypervisor for deploying and serving virtual machines.

2.2.5   vSphere Distributed Virtual Switch

vSphere Distributed Switch (VDS) provides a centralized interface to configure, monitor, and administer virtual machine access switching for the entire data center.

2.2.6   Virtual Router

Virtual routers are deployed with the SAPC Node and their main purpose is to abstract it from the specific Cloud infrastructure in which it is deployed. They eliminate the need for manual OSPF configuration, and for OSPF protocol support or license handling, in the physical routers behind which the SAPC Node is finally deployed during SAPC instantiation.

With the virtual routers in place, OSPF is no longer a prerequisite for the Cloud infrastructure, and all VIPs for OAM and Traffic are reachable from any other node, provided that proper routing has been established to interconnect both nodes.

They also provide redundant access to the SAPC Node from the Cloud environment through the VRRP protocol.

Virtual routers use Vyatta Software for this purpose.

2.2.7   eVIP

The eVIP component is used to announce an IP address and to isolate the SAPC cluster from the outside network. The address is announced using the OSPF v2 protocol by creating adjacencies with the OSPF neighbors, in this case the virtual routers.

2.3   Network

2.3.1   General Overview

The following networks are used for SAPC connectivity:

2.3.2   External Connectivity

In the SAPC Node, every virtual machine composing the cluster, as well as the virtual routers, is connected through vNICs. The virtual routers are also connected to the external world by Ethernet interfaces through the External OAM and Traffic networks, which are the only subnets with external exposure in the SAPC Node.

The SAPC Node exposes several VIP addresses to provide service to other network functions: VIPs for OAM to serve Operations and Management functions, and VIPs for Traffic to provide Policy Controller functions to the rest of the nodes.

VIPs for OAM are provided by both SCs, while VIPs for Traffic are provided by a maximum of 6 PLs at the same time. Those VIP addresses are reachable from the outside world through the Virtual Routers (VR-1, VR-2, VR-3, and VR-4), which discover them through the OSPF protocol.

2.3.2.1   SAPC Networks Designated for OSPF Discovery

Several networks are used in the SAPC Node for OSPF interconnection between the so-called Node Front Ends (SC-1 and SC-2 providing VIPs for OAM, and some of the PLs providing VIPs for Traffic) and the virtual routers. The OamVip<x> and TrafficVip<y> networks are used for this purpose and are part of OSPF areas.

Because the OSPF protocol is active between these virtual machines and the virtual routers, all VIP addresses are automatically included in the routing tables of the virtual routers, and the SAPC Node automatically learns its default routes.

These networks also provide redundancy to the SAPC Node to guarantee VIP availability in the event of a failure (one SC down, one traffic payload down, or one virtual router down). As previously mentioned, these networks are internal and are not visible or routable from the customer network.

An OSPF Backbone Area (Area 0) is configured in the Virtual Routers to interconnect with the OSPF backbone of the customer network, and learn the routes needed to communicate with the neighbor nodes.

2.3.3   Internal Connectivity

In the minimal configuration of the node, the SAPC cluster consists of four processors, SC-1, SC-2, PL-3, and PL-4, which require connectivity among them for the different types of traffic that need to be exchanged. This is provided through two internal subnets, one for TIPC communication and another for the remaining internal communication. All VMs are connected internally to those subnets through the eth0 and eth1 interfaces.

2.3.4   Preconfigured Values

The particular values of the networking configuration described in this document are the ones preconfigured in the SAPC. Some of them can be changed during deployment, based on operator needs. To check which values can be modified, see SAPC VNF Descriptor Generator Tool.

3   Networks Configuration Solutions

This section specifies how the SAPC Node is connected to the network. All the external networks and IP addresses described in this chapter are reachable through the customer network after a successful SAPC deployment in the Cloud environment. Since the Cloud-based SAPC Node uses fully preinstalled images for its virtual machines, all the details (IP addresses, networks, and gateways) referenced in this section are already configured by default in the SAPC Node.

3.1   Solution to Define Unique OAM and Traffic Networks

This section describes the SAPC configuration to support one OAM Network to serve Operations and Management functions and one Traffic Network to provide Policy Controller functions to the rest of the nodes.

The next figure shows the general network overview:

Figure 2   SAPC node solution with Unique OAM and Traffic Networks

3.1.1   Internal Network Configuration

3.1.1.1   IP Addressing

The Internal0 and Internal1 networks (see Figure 2) provide network connectivity between the processors in the SAPC cluster. Each processor has two interfaces, eth0 and eth1, connected to the internal networks composing the SAPC backplane.

Table 1    Internal0 (Cluster Internal Network)

IP Address       | Assign To
172.16.100.0/24  | Network
.1               | SC-1
.2               | SC-2
.3               | PL-3
.4               | PL-4
.x               | PL-X in case more traffic payloads are needed

Table 2    Internal Network 1 (TIPC Network)

TIPC Node Address | Assign To
1.1.1             | SC-1
1.1.2             | SC-2
1.1.3             | PL-3
1.1.4             | PL-4
1.1.x             | PL-x in case more traffic payloads are needed

3.1.1.2   Extra Services over Internal0 Network

Every service (NFS, and so on) is offered on a different IP address by the SC acting as primary.

Table 3    Extra IPs on 172.16.100.0/24 Network

IP Address        | Assign To
172.16.100.0/24   | Network
.100 (1)          | SC-1 / SC-2
.200 (2)          | SC-1 / SC-2
.244 (3)          | PL-3 (4) / PL-4 (4)
.245 to .254 (5)  | Scalability temporary pool for any added payload
.255              | Broadcast

(1)  NFS movable IP. eth0:1 alias interface
(2)  Boot movable IP. eth0:3 alias interface
(3)  SCTP movable IP. eth0:1 alias interface
(4)  In the minimal configuration of the SAPC node
(5)  Scalability temporary pool


3.1.2   VIP Networks Configuration

3.1.2.1   Networks for OSPF v2

The following table shows the networks allocated inside the SAPC Node images in which the OSPF protocol is enabled. They are defined by default in the SAPC Node and in the configuration of the virtual routers to ensure proper operation after the SAPC deployment in the Cloud environment.

Table 4    Private Networks Allocated Inside the SAPC Node for SAPC VIP Population Through the OSPF Protocol

Network Name  | Subnet           | Use
OamVip0       | 172.16.213.0/29  | OSPFv2 Attachment between SCs and VR-1
OamVip1       | 172.16.213.16/29 | OSPFv2 Attachment between SCs and VR-2
TrafficVip0-0 | 172.16.113.0/28  | OSPFv2 Attachment between PL-3 and PL-4, and VR-3
TrafficVip0-1 | 172.16.113.16/28 | OSPFv2 Attachment between PL-3 and PL-4, and VR-4

3.1.2.2   SAPC VIP Addresses

Table 5    VIP Addresses

VIP Description      | VIP             | Use
VIP-OAM              | 10.58.31.7/32   | SAPC OAM VIP Address
VIP-PROVISIONING (1) | N/A             | SAPC Provisioning VIP Address
VIP-GX               | 10.58.31.137/32 | SAPC Traffic VIP Address. All the payload traffic from all the available interfaces (Gx, Rx, Sy, and so on) is handled through this VIP
VIP-ExtDB (2)        | N/A             | VIP address for handling the access to the external database

(1)  Only for deployments that require a Provisioning Address different from the OAM Address.
(2)  Only in deployments with an external database.


3.1.2.3   eVIP Configuration

This section describes the mapping of networks to vNICs in the different pieces of networking equipment related to the eVIP components, as well as the eVIP configuration defined in the SAPC Node images for the Cloud environment. The evip.xml configuration file included in the SAPC Node images holds many parameters; however, this document describes only the ones that are key to the design.

3.1.2.3.1   eVIP Configuration Overview

Traffic is separated into four networks through which the VIP-OAM and VIP-GX VIPs are propagated. The OAM VIP networks enclose the SCs, and the Traffic VIP networks enclose the PLs. From the eVIP point of view, one FEE manages each kind of traffic on each processor it runs on.

The following figure shows how VIP-OAM is configured, as specified in Section 3.1.2.3.3.

Figure 3   eVIP VIP-OAM Overview

In deployments that require a Provisioning Address different from the OAM Address, the SAPC requires an additional Virtual IP, VIP-PROVISIONING, to handle provisioning. This new VIP is published to the external network through the same FEEs as VIP-OAM.

The following figure shows how VIP-GX is configured, as specified in Section 3.1.2.3.3.

Figure 4   eVIP VIP-GX Overview

In deployments with an external database, such as CUDB, the SAPC requires an additional Virtual IP to handle the LDAP traffic and the SOAP notification traffic with the external database. This new VIP is published to the external network through the same FEEs as VIP-GX.

3.1.2.3.2   eVIP Elements

The table below lists the distribution of the eVIP elements. The location of the eVIP front ends (FEEs) requires corresponding configuration in the network, that is, in the virtual routers. This configuration is already made by default and no adjustment is required.

Table 6    Distribution of eVIP Elements

Abstract Load Balancer (ALB) | VIP | Front-End Element (FEE) | Load Balancer Element (LBE) | Security Element (SE)
alb_oam | <VIP-OAM> 10.58.31.7/32 | SC-1 (fee_1), SC-2 (fee_2), SC-1 (fee_3), SC-2 (fee_4) | lbe_1, lbe_2 | se_1, se_2
alb_tr | <VIP-GX> 10.58.31.137/32, <VIP-ExtDB> (1) | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6), PL-3 (fee_7), PL-4 (fee_8), PL-5 (fee_9), PL-6 (fee_10), PL-7 (fee_11), PL-8 (fee_12) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6

(1)  Only in deployments with an external database.


3.1.2.3.3   OSPF v2 Areas

The traffic is separated into two OSPF v2 areas and ALBs. Each ALB has links with IP addresses defined for the FEEs and for the remote gateways, which are the virtual routers in this design. The next table shows how the network IP addresses are defined in this Cloud configuration.

Table 7    FEEs and OSPF v2 Configuration

alb_oam (Area=10.1.13.1, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0)

Front-End Element (FEE) | Network           | FEE IP | FEE Interface | Virtual Router IP
fee_1                   | 172.16.213.0/29   | .2     | SC-1 eth2     | .1
fee_2                   | 172.16.213.0/29   | .3     | SC-2 eth2     | .1
fee_3                   | 172.16.213.16/29  | .18    | SC-1 eth3     | .17
fee_4                   | 172.16.213.16/29  | .19    | SC-2 eth3     | .17

alb_tr (Area=10.1.13.2, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0)

Front-End Element (FEE) | Network           | FEE IP | FEE Interface | Virtual Router IP
fee_1                   | 172.16.113.0/28   | .2     | PL-3 eth2     | .1
fee_2                   | 172.16.113.0/28   | .3     | PL-4 eth2     | .1
fee_3                   | 172.16.113.0/28   | .4     | PL-5 eth2     | .1
fee_4                   | 172.16.113.0/28   | .5     | PL-6 eth2     | .1
fee_5                   | 172.16.113.0/28   | .6     | PL-7 eth2     | .1
fee_6                   | 172.16.113.0/28   | .7     | PL-8 eth2     | .1
fee_7                   | 172.16.113.16/28  | .18    | PL-3 eth3     | .17
fee_8                   | 172.16.113.16/28  | .19    | PL-4 eth3     | .17
fee_9                   | 172.16.113.16/28  | .20    | PL-5 eth3     | .17
fee_10                  | 172.16.113.16/28  | .21    | PL-6 eth3     | .17
fee_11                  | 172.16.113.16/28  | .22    | PL-7 eth3     | .17
fee_12                  | 172.16.113.16/28  | .23    | PL-8 eth3     | .17
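
For illustration only, the following Python sketch (not part of the SAPC delivery; the data is transcribed from Table 7) checks that every FEE and virtual router address is a valid host address inside its OamVip or TrafficVip subnet:

    import ipaddress

    # (subnet, last octet of the virtual router IP, last octets of the FEE IPs)
    OSPF_LINKS = [
        ("172.16.213.0/29",  1,  [2, 3]),                    # OamVip0 (VR-1, SCs eth2)
        ("172.16.213.16/29", 17, [18, 19]),                  # OamVip1 (VR-2, SCs eth3)
        ("172.16.113.0/28",  1,  [2, 3, 4, 5, 6, 7]),        # TrafficVip0-0 (VR-3, PLs eth2)
        ("172.16.113.16/28", 17, [18, 19, 20, 21, 22, 23]),  # TrafficVip0-1 (VR-4, PLs eth3)
    ]

    for subnet, vr_octet, fee_octets in OSPF_LINKS:
        net = ipaddress.ip_network(subnet)
        prefix = str(net.network_address).rsplit(".", 1)[0]  # e.g. "172.16.213"
        for octet in [vr_octet] + fee_octets:
            addr = ipaddress.ip_address(f"{prefix}.{octet}")
            assert addr in net, f"{addr} is outside {net}"
            assert addr not in (net.network_address, net.broadcast_address)
        print(f"{net}: VR .{vr_octet} and FEEs {fee_octets} are valid host addresses")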

3.1.2.4   Virtual Router Configuration

The virtual router configurations are part of their images, as for the other virtual machines composing the SAPC Node. Apart from the OSPF-related configuration described in Section 3.1.2.3.3, the following notable configuration is set up in the respective images and is part of the SAPC delivery.

Table 8    OSPF Areas for Internal Networks Configuration

OSPF Area | Router IDs                                                         | OSPF Parameters                                                                   | Use
10.1.13.1 | 172.16.213.1 (Virtual Router 1), 172.16.213.17 (Virtual Router 2)  | Hello=3 seconds, Dead=9 seconds, Retransmit=5 seconds, Delay=1 second, Priority=1 | SAPC OAM and Provisioning VIP Addresses
10.1.13.2 | 172.16.113.1 (Virtual Router 3), 172.16.113.17 (Virtual Router 4)  | Hello=3 seconds, Dead=9 seconds, Retransmit=5 seconds, Delay=1 second, Priority=1 | SAPC Traffic VIP Addresses

3.1.3   External Networks Configuration

3.1.3.1   External Networks

The following networks are configured to interconnect the SAPC Node with the customer network.

Table 9    External Networks

Network Name     | Network          | Default Gateway | Use
External-OAM     | 10.41.30.224/29  | 10.41.30.225    | OAM network for the SAPC Node
External-Traffic | 10.41.70.224/29  | 10.41.70.225    | Traffic network for the SAPC Node (VR-3, VR-4)

3.1.3.2   IP Addressing

Each SAPC Node includes a set of preconfigured IP addresses.

3.1.3.2.1   Virtual Routers IP Addresses
Table 10    IP Addresses

IP Address    | Network          | Value            | Use
VR-1 OAM      | 10.41.30.224/29  | 10.41.30.229/29  | IP Address of VR-1 on ExtOAM Network
VR-2 OAM      | 10.41.30.224/29  | 10.41.30.230/29  | IP Address of VR-2 on ExtOAM Network
OAM VRRP      | 10.41.30.224/29  | 10.41.30.226/29  | IP Address for OAM VRRP (Virtual Router Redundancy Protocol)
VR-3 Traffic  | 10.41.70.224/29  | 10.41.70.229/29  | IP Address of VR-3 on ExtTraffic Network
VR-4 Traffic  | 10.41.70.224/29  | 10.41.70.230/29  | IP Address of VR-4 on ExtTraffic Network
Traffic VRRP  | 10.41.70.224/29  | 10.41.70.226/29  | IP Address for Traffic VRRP (Virtual Router Redundancy Protocol)
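
For illustration only, the following Python sketch (not part of the SAPC delivery) lists the usable host addresses of the External-OAM /29 network and confirms that the default gateway, virtual router, and VRRP addresses from Table 9 and Table 10 fall within it:

    import ipaddress

    # External-OAM network and addresses from Tables 9 and 10
    ext_oam = ipaddress.ip_network("10.41.30.224/29")
    assigned = {
        "Default Gateway": "10.41.30.225",
        "OAM VRRP":        "10.41.30.226",
        "VR-1 OAM":        "10.41.30.229",
        "VR-2 OAM":        "10.41.30.230",
    }

    usable = list(ext_oam.hosts())   # .225 .. .230 for a /29
    print("Usable hosts:", [str(h) for h in usable])

    for name, ip in assigned.items():
        addr = ipaddress.ip_address(ip)
        status = "OK" if addr in usable else f"NOT in {ext_oam}"
        print(f"{name}: {ip} -> {status}")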

3.1.3.2.2   IP Addresses of External Elements

This section covers the IP addresses in the customer network that do not belong to the SAPC Node but are configured in it to interoperate with other nodes. No default values are configured for them, since they are customer dependent.

Table 11    IP Addresses of External Elements

IP Address     | Network                        | Use
<NTP-SERVER>   | <NTP-NETWORK>/<NTP-NETMASK>    | NTP Server
<SNMP-SERVER>  | <SNMP-NETWORK>/<SNMP-NETMASK>  | SNMP Server
<DNS-SERVER>   | <DNS-NETWORK>/<DNS-NETMASK>    | DNS Server

There can be several NTP, SNMP, and DNS servers.

NTP servers are configured by the adapt_cluster tool during deployment. For further details, see SAPC VNF Descriptor Generator Tool.

SNMP servers are configured for Fault Management. For security reasons, it is highly recommended to use Create SNMPv3 Target. Legacy versions can also be used through Create SNMPv2C Target and Create SNMPv1 Target.

For DNS server configuration, refer to the LDE Management Guide.

3.1.3.3   Virtual Router Configuration

The virtual router configurations are part of their images, as for the other virtual machines composing the SAPC Node. The following notable configuration is set up in the respective images and is part of the SAPC delivery.

Table 12    OSPF Backbone Area Configuration

OSPF Area               | OSPF Parameters                                                                                           | Use
Backbone area (0.0.0.0) | Dead Interval: 9 seconds, Hello Interval: 3 seconds, Retransmit: 5 seconds, Delay: 1 second, Priority: 1 | OSPF backbone

 
Table 13    VRRP Configuration

VRRP Group | Virtual Router   | VRRP Parameters | Use
10         | Virtual Router 1 | Priority=150    | External OAM VRRP (10.41.30.226)
10         | Virtual Router 2 | Priority=100    | External OAM VRRP (10.41.30.226)
20         | Virtual Router 3 | Priority=150    | External Traffic VRRP (10.41.70.226)
20         | Virtual Router 4 | Priority=100    | External Traffic VRRP (10.41.70.226)
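
For illustration only, the following Python sketch (not part of the SAPC delivery, and a simplification of the actual VRRP election) shows how the priorities in Table 13 determine which virtual router acts as master for each VRRP group:

    # Illustrative only: the router with the highest priority wins the VRRP
    # election; the peer stays as backup and takes over the virtual IP if the
    # master stops sending advertisements. Values are taken from Table 13.
    VRRP_GROUPS = {
        10: {"Virtual Router 1": 150, "Virtual Router 2": 100},  # External OAM VRRP 10.41.30.226
        20: {"Virtual Router 3": 150, "Virtual Router 4": 100},  # External Traffic VRRP 10.41.70.226
    }

    for group, routers in VRRP_GROUPS.items():
        master = max(routers, key=routers.get)
        backups = [r for r in routers if r != master]
        print(f"VRRP group {group}: master={master}, backup={', '.join(backups)}")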

3.2   Solution to Define Unique OAM and Multiple Traffic Networks

This section describes the SAPC configuration to support one OAM Network to serve Operations and Management functions and two Traffic Networks to provide Policy Controller functions to the rest of the nodes. These functions are separated as follows:

The next figure shows the general network overview:

Figure 5   SAPC Node solution with Unique OAM and two Traffic Networks

3.2.1   Internal Network Configuration

The same configuration described in Section 3.1.1 applies in this section.

3.2.2   VIP Networks Configuration

3.2.2.1   Networks for OSPF v2

The following table shows the networks allocated inside the SAPC Node images in which the OSPF protocol is enabled. They are defined by default in the SAPC Node and in the configuration of the virtual routers to ensure proper operation after the SAPC deployment in the Cloud environment.

Table 14    Private Networks Allocated Inside the SAPC Node for SAPC VIP Population Through the OSPF Protocol

Network Name  | Subnet           | Use
OamVip0       | 172.16.213.0/29  | OSPFv2 Attachment between OAM FEEs and VR-1
OamVip1       | 172.16.213.16/29 | OSPFv2 Attachment between OAM FEEs and VR-2
TrafficVip0-0 | 172.16.113.0/28  | OSPFv2 Attachment between Traffic-1 FEEs and VR-3
TrafficVip0-1 | 172.16.113.16/28 | OSPFv2 Attachment between Traffic-1 FEEs and VR-4
TrafficVip1-0 | 172.16.113.32/28 | OSPFv2 Attachment between Traffic-2 FEEs and VR-3
TrafficVip1-1 | 172.16.113.48/28 | OSPFv2 Attachment between Traffic-2 FEEs and VR-4
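
For illustration only, the following Python sketch (not part of the SAPC delivery) verifies that the six OSPF interconnection subnets in Table 14 do not overlap:

    import itertools
    import ipaddress

    # OSPF interconnection subnets from Table 14
    subnets = {
        "OamVip0":       "172.16.213.0/29",
        "OamVip1":       "172.16.213.16/29",
        "TrafficVip0-0": "172.16.113.0/28",
        "TrafficVip0-1": "172.16.113.16/28",
        "TrafficVip1-0": "172.16.113.32/28",
        "TrafficVip1-1": "172.16.113.48/28",
    }

    networks = {name: ipaddress.ip_network(net) for name, net in subnets.items()}
    for (a, na), (b, nb) in itertools.combinations(networks.items(), 2):
        assert not na.overlaps(nb), f"{a} overlaps {b}"
    print("All OamVip/TrafficVip subnets are disjoint")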

3.2.2.2   SAPC VIP Addresses

Table 15    VIP Addresses

VIP Description      | VIP             | Use
VIP-OAM              | 10.58.31.7/32   | SAPC OAM VIP Address
VIP-PROVISIONING (1) | N/A             | SAPC Provisioning VIP Address
VIP-GX               | 10.58.31.137/32 | SAPC Traffic VIP Address, mainly for Gx
VIP-RX               | 10.58.32.142/32 | SAPC Traffic VIP Address for Rx/Sy
VIP-ExtDB (2)        | N/A             | VIP address for handling the access to the external database

(1)  Only for deployments that require a Provisioning Address different from the OAM Address.
(2)  Only in deployments with an external database.


3.2.2.3   eVIP Configuration

This section describes the mapping of networks to vNICs in the different pieces of networking equipment related to the eVIP components, as well as the eVIP configuration defined in the SAPC Node images for the Cloud environment. The evip.xml configuration file included in the SAPC Node images holds many parameters; however, this document describes only the ones that are key to the design.

3.2.2.3.1   eVIP Configuration Overview

Traffic is separated into six networks through which the VIP-OAM, VIP-GX, and VIP-RX VIPs are propagated. The VIP-OAM networks enclose the SCs, while the VIP-GX and VIP-RX networks enclose the PLs. From the eVIP point of view, one FEE manages each kind of traffic on each processor it runs on.

The following figure shows how VIP-OAM is configured, as specified in Section 3.2.2.3.3.

Figure 6   eVIP VIP-OAM Overview

In deployments that require a Provisioning Address different from the OAM Address, the SAPC requires an additional Virtual IP, VIP-PROVISIONING, to handle provisioning. This new VIP is published to the external network through the same FEEs as VIP-OAM.

The following figure shows how VIP-GX and VIP-RX are configured, as specified in Section 3.2.2.3.3.

Figure 7   eVIP VIP-GX Overview

In deployments with an external database, such as CUDB, the SAPC requires an additional Virtual IP to handle the LDAP traffic and the SOAP notification traffic with the external database. This new VIP is published to the external network through the same FEEs as VIP-GX.

3.2.2.3.2   eVIP Elements

The table below lists the distribution of the eVIP elements. The location of the eVIP front ends (FEEs) requires corresponding configuration in the network, that is, in the virtual routers. This configuration is already made by default and no adjustment is required.

Table 16    Distribution of eVIP Elements

Abstract Load Balancer (ALB) | VIP | Front-End Element (FEE) | Load Balancer Element (LBE) | Security Element (SE)
alb_oam | <VIP-OAM> 10.58.31.7/32 | SC-1 (fee_1), SC-2 (fee_2), SC-1 (fee_3), SC-2 (fee_4) | lbe_1, lbe_2 | se_1, se_2
alb_trf_1 | <VIP-GX> 10.58.31.137/32, <VIP-ExtDB> (1) | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6), PL-3 (fee_7), PL-4 (fee_8), PL-5 (fee_9), PL-6 (fee_10), PL-7 (fee_11), PL-8 (fee_12) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6
alb_trf_2 | <VIP-RX> 10.58.32.142/32 | PL-3 (fee_1), PL-4 (fee_2), PL-5 (fee_3), PL-6 (fee_4), PL-7 (fee_5), PL-8 (fee_6), PL-3 (fee_7), PL-4 (fee_8), PL-5 (fee_9), PL-6 (fee_10), PL-7 (fee_11), PL-8 (fee_12) | lbe_1, lbe_2, lbe_3, lbe_4, lbe_5, lbe_6 | se_1, se_2, se_3, se_4, se_5, se_6

(1)  Only in deployments with an external database.


3.2.2.3.3   OSPF v2 Areas

The traffic is separated into three OSPF v2 areas and ALBs. Each ALB has links with IP addresses defined for the FEEs and for the remote gateways, which are the virtual routers in this design. The next table shows how the network IP addresses are defined in this Cloud configuration.

Table 17    FEEs and OSPF v2 Configuration

alb_oam (Area=10.1.13.1, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0)

Front-End Element (FEE) | Network           | FEE IP | FEE Interface | Virtual Router IP
fee_1                   | 172.16.213.0/29   | .2     | SC-1 eth2     | .1
fee_2                   | 172.16.213.0/29   | .3     | SC-2 eth2     | .1
fee_3                   | 172.16.213.16/29  | .18    | SC-1 eth3     | .17
fee_4                   | 172.16.213.16/29  | .19    | SC-2 eth3     | .17

alb_trf_1 (Area=10.1.13.2, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0)

Front-End Element (FEE) | Network           | FEE IP | FEE Interface | Virtual Router IP
fee_1                   | 172.16.113.0/28   | .2     | PL-3 eth2     | .1
fee_2                   | 172.16.113.0/28   | .3     | PL-4 eth2     | .1
fee_3                   | 172.16.113.0/28   | .4     | PL-5 eth2     | .1
fee_4                   | 172.16.113.0/28   | .5     | PL-6 eth2     | .1
fee_5                   | 172.16.113.0/28   | .6     | PL-7 eth2     | .1
fee_6                   | 172.16.113.0/28   | .7     | PL-8 eth2     | .1
fee_7                   | 172.16.113.16/28  | .18    | PL-3 eth3     | .17
fee_8                   | 172.16.113.16/28  | .19    | PL-4 eth3     | .17
fee_9                   | 172.16.113.16/28  | .20    | PL-5 eth3     | .17
fee_10                  | 172.16.113.16/28  | .21    | PL-6 eth3     | .17
fee_11                  | 172.16.113.16/28  | .22    | PL-7 eth3     | .17
fee_12                  | 172.16.113.16/28  | .23    | PL-8 eth3     | .17

alb_trf_2 (Area=10.1.13.3, Hello=3, Dead=9, Retransmit=5, Delay=1, Priority=0)

Front-End Element (FEE) | Network           | FEE IP | FEE Interface | Virtual Router IP
fee_1                   | 172.16.113.32/28  | .34    | PL-3 eth4     | .33
fee_2                   | 172.16.113.32/28  | .35    | PL-4 eth4     | .33
fee_3                   | 172.16.113.32/28  | .36    | PL-5 eth4     | .33
fee_4                   | 172.16.113.32/28  | .37    | PL-6 eth4     | .33
fee_5                   | 172.16.113.32/28  | .38    | PL-7 eth4     | .33
fee_6                   | 172.16.113.32/28  | .39    | PL-8 eth4     | .33
fee_7                   | 172.16.113.48/28  | .50    | PL-3 eth5     | .49
fee_8                   | 172.16.113.48/28  | .51    | PL-4 eth5     | .49
fee_9                   | 172.16.113.48/28  | .52    | PL-5 eth5     | .49
fee_10                  | 172.16.113.48/28  | .53    | PL-6 eth5     | .49
fee_11                  | 172.16.113.48/28  | .54    | PL-7 eth5     | .49
fee_12                  | 172.16.113.48/28  | .55    | PL-8 eth5     | .49

3.2.2.4   Virtual Router Configuration

The virtual router configurations are part of their images, as for the other virtual machines composing the SAPC Node. Apart from the OSPF-related configuration described in Section 3.2.2.3.3, the following notable configuration is set up in the respective images and is part of the SAPC delivery.

Table 18    OSPF Areas for Internal Networks Configuration

OSPF Area | Router IDs                                                         | OSPF Parameters                                                                   | Use
10.1.13.1 | 172.16.213.1 (Virtual Router 1), 172.16.213.17 (Virtual Router 2)  | Hello=3 seconds, Dead=9 seconds, Retransmit=5 seconds, Delay=1 second, Priority=1 | SAPC OAM and Provisioning VIP Addresses
10.1.13.2 | 172.16.113.1 (Virtual Router 3), 172.16.113.17 (Virtual Router 4)  | Hello=3 seconds, Dead=9 seconds, Retransmit=5 seconds, Delay=1 second, Priority=1 | SAPC VIP Addresses for the rest of the traffic, mainly Gx traffic
10.1.13.3 | 172.16.113.33 (Virtual Router 3), 172.16.113.49 (Virtual Router 4) | Hello=3 seconds, Dead=9 seconds, Retransmit=5 seconds, Delay=1 second, Priority=1 | SAPC VIP Address for Rx/Sy traffic

3.2.3   External Networks Configuration

3.2.3.1   External Networks

The following networks are configured to interconnect the SAPC Node with the customer network.

Table 19    External Networks

Network Name       | Network          | Default Gateway | Use
External-OAM       | 10.41.30.224/29  | 10.41.30.225    | OAM network for the SAPC Node
External-Traffic-1 | 10.41.70.224/29  | 10.41.70.225    | Traffic network for Gx traffic for the SAPC Node (VR-3, VR-4)
External-Traffic-2 | 10.41.90.224/29  | 10.41.90.225    | Traffic network for Rx and Sy traffic for the SAPC Node (VR-3, VR-4)

3.2.3.2   IP Addressing

Each SAPC Node includes a set of preconfigured IP addresses.

3.2.3.2.1   Virtual Routers IP Addresses
Table 20    IP Addresses

IP Address     | Network          | Value            | Use
VR-1 OAM       | 10.41.30.224/29  | 10.41.30.229/29  | IP Address of VR-1 on ExtOAM Network
VR-2 OAM       | 10.41.30.224/29  | 10.41.30.230/29  | IP Address of VR-2 on ExtOAM Network
OAM VRRP       | 10.41.30.224/29  | 10.41.30.226/29  | IP Address for OAM VRRP (Virtual Router Redundancy Protocol)
VR-3 Traffic-1 | 10.41.70.224/29  | 10.41.70.229/29  | IP Address of VR-3 on ExtTraffic-1 Network
VR-4 Traffic-1 | 10.41.70.224/29  | 10.41.70.230/29  | IP Address of VR-4 on ExtTraffic-1 Network
Traffic-1 VRRP | 10.41.70.224/29  | 10.41.70.226/29  | IP Address for Traffic-1 VRRP (Virtual Router Redundancy Protocol)
VR-3 Traffic-2 | 10.41.90.224/29  | 10.41.90.229/29  | IP Address of VR-3 on ExtTraffic-2 Network
VR-4 Traffic-2 | 10.41.90.224/29  | 10.41.90.230/29  | IP Address of VR-4 on ExtTraffic-2 Network
Traffic-2 VRRP | 10.41.90.224/29  | 10.41.90.226/29  | IP Address for Traffic-2 VRRP (Virtual Router Redundancy Protocol)

3.2.3.2.2   IP Addresses of External Elements

The same configuration described in Section 3.1.3.2.2 applies in this section.

3.2.3.3   Virtual Router Configuration

The virtual router configurations are part of their images, as for the other virtual machines composing the SAPC Node. The following notable configuration is set up in the respective images and is part of the SAPC delivery.

Table 21    OSPF Backbone Area Configuration

OSPF Area               | OSPF Parameters                                                                                           | Use
Backbone area (0.0.0.0) | Dead Interval: 9 seconds, Hello Interval: 3 seconds, Retransmit: 5 seconds, Delay: 1 second, Priority: 1 | OSPF backbone

 
Table 22    VRRP Configuration

VRRP Group | Virtual Router   | VRRP Parameters | Use
10         | Virtual Router 1 | Priority=150    | External OAM VRRP (10.41.30.226)
10         | Virtual Router 2 | Priority=100    | External OAM VRRP (10.41.30.226)
20         | Virtual Router 3 | Priority=150    | External Traffic-1 VRRP (10.41.70.226)
20         | Virtual Router 4 | Priority=100    | External Traffic-1 VRRP (10.41.70.226)
30         | Virtual Router 3 | Priority=150    | External Traffic-2 VRRP (10.41.90.226)
30         | Virtual Router 4 | Priority=100    | External Traffic-2 VRRP (10.41.90.226)