1 Introduction
The Network Impact Report (NIR) describes the changes made in the Cloud Execution Environment (CEE) 6 release, compared to CEE 16A IRP releases, and the impact on the overall network of the operator, including affected products and functions.
The intention is to cover the delta from the latest CEE 16A IRP up to the latest CEE 6 release. This document revision covers CEE 6 releases up to CEE 6.6.
1.1 Purpose
The purpose of this document is to provide information for Ericsson system operators at an early stage, and help them plan the introduction of new products and upgrades to their networks.
2 General Impact
This section provides information on how the changes introduced in CEE 6 affect general product behavior and characteristics.
2.1 Capacity and Performance
2.1.1 Capacity and Dimensioning Guidelines
The new features introduced in CEE 6 compared to CEE 16A impact capacity and performance.
- ScaleIO uses data mirroring in order to maintain data availability and protect against single point of failure scenarios. The mirroring of data needs to be considered when calculating the usable capacity versus raw capacity. The usable capacity is much less than the raw capacity (somewhere between one third and one half of total raw capacity), depending on the number of ScaleIO servers and how many faults can be tolerated without data loss.
- The tight SDN integration feature added in CEE 6 has an impact on vCIC capacity and dimensioning.
- In case of tight SDN integration, data plane performance and latency are impacted due to the use of VXLAN VTEPs.
- In case of tight SDN integration, delay and delay variation can be impacted because data plane packets must be sent to the CSC in some cases.
- In case of tight SDN integration, the CSC southbound uses part of the 10GE traffic NICs.
- In case of tight SDN integration, it is not recommended to run VNFs on the compute hosts dedicated to vCICs.
- There are changes in the Multi-Server System Dimensioning Guide, CEE 6 and the Single Server System Dimensioning Guide, CEE 6 related to resource reservation. In particular, the amount of RAM, CPU, and disk has changed or follows new formulas.
- The default allocation of CPUs and memory for vCIC, CPUs for host OS and CSS at CEE installation has changed. For details, refer to the Configuration File Guide.
- Dimensioning guideline for CPU requirements on systems running ScaleIO Data Clients (SDC) has been changed to be more specific. For details, refer to the Multi-Server System Dimensioning Guide, CEE 6.
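As a rough illustration of the mirroring arithmetic described above, the following sketch estimates usable capacity from raw capacity. The helper name, the spare-capacity fraction, and the server counts are hypothetical examples, not values from the dimensioning guides:

```python
def usable_capacity_gb(raw_per_server_gb, num_servers, spare_fraction):
    """Estimate ScaleIO usable capacity under two-copy mirroring.

    Hypothetical illustration only: total raw capacity minus a reserved
    spare fraction, divided by 2 because every block is stored twice.
    """
    raw_total = raw_per_server_gb * num_servers
    return raw_total * (1.0 - spare_fraction) / 2.0

# Example: 10 servers x 2000 GB raw, 10% reserved as rebuild spare.
print(usable_capacity_gb(2000, 10, 0.10))  # 9000.0 (45% of raw)
```

With a larger spare reservation (to tolerate more faults without data loss), the result moves toward one third of raw capacity, matching the range stated above.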
2.1.2 Performance
In CEE 6, a new version of Open vSwitch (OVS), 2.6.2, is introduced. The Data Plane Development Kit (DPDK) version used in CEE 6 is DPDK 16.11.4. In CEE 16A, OVS 2.4 and DPDK 2.0 were used. The user can select the configuration of CSS matching different performance levels at CEE installation.
2.2 Hardware
CEE 6 is verified on the following hardware platforms: HDS 8000, BSP 8100 and Dell R630.
2.3 Implementation
The following update paths are supported:
The following changes are made in the CEE 6 release that have an impact on implementation:
- The Atlas installation has changed in CEE 6, and now uses localrc for setting variables instead of using arguments and variables as parameters for the Atlas installation script. For more details, refer to the Atlas SW Installation document.
- Ansible is replaced by Fuel plugins for CEE deployment.
- Separation of CSS plugin:
The Cloud SDN Switch (CSS) package is not part of the CEE SW release tarball, but it is mandatory for CEE installation. For more information, refer to the Preparation of Kickstart Server document.
2.4 Interface
The changes in the interfaces of CEE 6 are mostly related to the following features:
- OpenStack Mitaka support. OpenStack Kilo was used in CEE 16A.
- ScaleIO introduced
- Unmanaged ScaleIO support introduced (ScaleIO as CEE-External Software Defined Block Storage)
- SDN tight integration
- IPv6 support for dedicated Neutron APIs: L2, Security Groups
- IPv6 basic address management: manual configuration and Stateless Automatic Address Configuration (SLAAC)
2.4.1 API Interface
For a summary of the major Application Programming Interface (API) impacts, see Table 1.
| Interface | Nodes | Protocol | Impact | CEE Feature |
|---|---|---|---|---|
| ScaleIO | ScaleIO Gateway | | | ScaleIO integration |
| Vi-Vnfm (VIM-VNFM) and Or-Vi (NFVO-VIM) | NFVO/VNFM | | The behavior of CEE 6 is aligned to the behavior of OpenStack Mitaka. | OpenStack Mitaka support |
| Vi-Vnfm (VIM-VNFM) and Or-Vi (NFVO-VIM) | NFVO/VNFM | | Partial support for IPv6 in Neutron API | Neutron APIs: L2, Security Groups; IPv6 basic address management: manual configuration and SLAAC |
See the subsections of the specific features for more details on the major API impact: OpenStack Mitaka in Section 4.1, Secure Communication over TLS in Section 4.6, Resource Management for Virtual Machines in Section 4.14, and Ericsson Neutron in Section 4.18.
2.4.2 Man–Machine Interface
The changes in the Man–Machine interface are mostly related to the OpenStack Command Line Interface (CLI) changes in the OpenStack Mitaka release.
The following changes are made in the CEE 6 release that are related to the Man-Machine interface:
- The CEE-managed ScaleIO feature introduces a new CLI interface called ScaleIO Command Line Interface (SCLI) to interact with the ScaleIO components. See the subsections of the specific feature for other changes in the CLI.
- The ScaleIO GUI is not part of the CEE GUI in Atlas, but is part of the Man-Machine interface.
- SDN tight integration introduces the CSC CLI as a Man-Machine interface.
2.4.3 Other Interfaces
For information on other interfaces, check the information on the feature that implements the corresponding interface. For example, for changes related to the SNMP interface, see Section 4.21.
2.5 Operation
See the subsections of the specific features in Section 4 for the impact on the CLI and for other changes related to the operation and maintenance of CEE.
2.6 New Features
New features are features introduced for the first time in the current release.
The following features and functionalities are new in this release:
- Features introduced in the latest CEE 6.6 release:
- Features introduced in previous releases of CEE 6:
- OpenStack Mitaka
- Distributed Block Storage
- Swift on ScaleIO
- SDN tight integration enabling the following features:
- SR-IOV VLAN
- Fencing CM-HA
- Bandwidth Management
- Atlas: Mistral support, deployment wizard, and enhanced stack panels
- ScaleIO as CEE-External Software Defined Block Storage
- PCI Passthrough support
- IPv6 support for dedicated Neutron APIs: L2, Security Groups
- IPv6 basic address management: manual configuration and SLAAC
- L3 fabric support
- Skylake support
- Support for Fortville NICs
- Multi-Server Host Network Configuration without Storage Switching Domain
- Software RAID
- Glance Image Transfer Traffic on Storage Switching Domain
- PCI Passthrough: SR-IOV physical function passthrough functionality
2.7 Deprecated Features
Deprecated features still provide their functionality in the current release, but their use should be avoided in favor of the alternative features.
The following features and functionalities are deprecated in this release:
- The persistency of the hints same_host and different_host. It is recommended to use ServerGroup instead.
- Proprietary Neutron APIs, for example, static route extensions are deprecated.
2.8 Obsolete Features
Obsolete features are features that are discontinued in favor of new features that replace them.
2.8.1 Obsolete Features in the Current Release
| Feature | Replaced by |
|---|---|
| CEE Fuel backup and restore | Fuel synchronization |
| CEE infrastructure backup and restore | CIC Domain Data Backup and Restore |
| NoMigration and OfflineMigration migration policies | HA policies |
| Evacuation and NoEvacuation evacuation policies | HA policies |
Certain functionalities of CEE features are made obsolete. See the subsections for the specific features in Section 4.
2.8.2 Obsolete Features in Upcoming Releases
| Feature | Replaced by |
|---|---|
| Trunk Port API | |
| same_host and different_host scheduler hints | ServerGroup |
2.9 Other Network Elements
Information on other network elements affected by CEE 6 is provided in Section 3.
2.10 Other Impacts
2.10.1 Changes in IP Addressing
There are changes in the IP addressing for CEE 6 compared to what is required for CEE 16A.
- Four IP addresses from the storage network are needed for the ScaleIO feature if CEE-managed ScaleIO is used.
- Two IP addresses from the storage network are needed for ScaleIO as CEE-External Software Defined Block Storage.
- The existing CEE management VIP IP address is used for HA access of the ScaleIO gateway if CEE managed ScaleIO is used.
- A new VLAN and IP are added for the HDS monitoring agent.
- A new public IP from cee_om_sp allocated to the Fuel host is needed for HDS.
- There are changes in IP addressing related to tight SDN integration.
- Floating IP functionality is supported with SDN tight integration.
2.10.2 Changes in VLAN Allocation
A new VLAN and IP are added for the HDS monitoring agent.
There are changes in the VLAN allocation related to SDN tight integration.
Four new VLANs are needed if using CEE-managed ScaleIO, two front-end and two back-end. The customer has to assign the VLAN ID.
Two new VLANs are needed if using CEE-external ScaleIO for front-end traffic. The customer has to assign the VLAN ID.
Glance VLAN can be configured either on Control Network domain (1 GE) or on Traffic Network domain (10 GE).
2.10.3 Glance Image Cache
There are no changes in the Glance image cache on CEE 6 compared to 16A.
2.10.4 Neutron DHCP Agents
With SDN tight integration, the Neutron DHCP agent is not used anymore. The DHCP server functionality is part of CSC instead.
2.10.5 Logging Configuration
There are no changes in the logging configuration.
2.10.6 RADIUS Server
There are no changes related to RADIUS server support.
2.10.7 Service Supervision Changes
CSC is now also supervised by CEE mechanisms.
2.10.8 Hostname Changes
The restrictions related to shelf ID and blade ID in the hostname, such as compute-<shelf id>-<blade id>, have been removed. The shelf and blade IDs can be any unique pair of natural numbers.
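A minimal sketch of the relaxed naming rule; the parsing helper below is hypothetical and only illustrates that the two IDs are free-form numbers:

```python
import re

# compute-<shelf id>-<blade id>: the IDs need only be natural numbers;
# uniqueness of the (shelf, blade) pair must be ensured separately.
HOSTNAME_RE = re.compile(r"^compute-(\d+)-(\d+)$")

def parse_compute_hostname(name):
    match = HOSTNAME_RE.match(name)
    return (int(match.group(1)), int(match.group(2))) if match else None

print(parse_compute_hostname("compute-17-3"))  # (17, 3)
```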
2.10.9 Deployment Changes
In the CEE 6 release, the Ericsson component deployment is changed from being Ansible-based to using Fuel plugins.
3 Summary of Impacts
For features that existed in the previous release, "major/minor" refers to the impact on this feature in this release. For new features, the "major/minor" description of an impact is provided depending on how the feature is considered to affect CEE functionality. See Table 4 for a summary of the impacts in CEE 6, and see Table 5 for a summary of the impacts in Atlas R6.
| Feature | Impact | Feature Impact | Other Nodes |
|---|---|---|---|
| OpenStack Mitaka | Major | New | NFVO/VNFM |
| L2GW | Major | New | |
| BGPVPN | Major | New | |
| SDN tight integration | Major | New | |
| Small Footprint Deployment | None | None | |
| Single Server Deployment | None | None | |
| Ericsson Neutron | Major | Enhanced | NFVO/VNFM |
| | Major | Enhanced | NFVO/VNFM |
| Bandwidth Management | Major | Enhanced | |
| | Major | Enhanced | NFVO/VNFM |
| Resource Management for Virtual Machines | Major | Enhanced | NFVO/VNFM |
| CEE Reference Configurations | Major | Enhanced | NFVO/VNFM |
| Equipment management | Minor | Enhanced | NFVO/VNFM |
| Distributed Block Storage | Major | New | |
| ScaleIO as CEE-External Software Defined Storage | Major | New | |
| Swift on ScaleIO | Major | New | |
| Backup and Restore | Major | Enhanced | |
| Upgrade and Rollback | Major | Enhanced | |
| Fault Management | Minor | Enhanced | |
| Performance Management | Minor | Enhanced | |
| Centralized Identity Management | Minor | Enhanced | |
| Secure Communication over TLS | Minor | Enhanced | NFVO/VNFM |
| Security and Audit Trail Logging | Minor | Enhanced | |
| IPv6 support for dedicated Neutron APIs (L2, Security Groups) | Major | New | NFVO/VNFM |
| IPv6 basic address management (manual configuration and SLAAC) | Major | New | NFVO/VNFM |
| PCI Passthrough support | Major | New | |
| L3 fabric support | Minor | New | |
| Skylake support | Minor | New | |
| Support for Fortville NICs | Minor | New | |
| Multi-Server Host Network Configuration without Storage Switching Domain | Major | New | |
| Software RAID | Minor | New | |
| Glance Image Transfer Traffic on Storage Switching Domain | Minor | New | |
| Region Scale-in | Major | New | |
| | Minor | New | |
| Cinder Backup in CEE | Minor | New | |
| License management | Minor | New | |
| Automated Health Check | Minor | Enhanced | |
| Automated Data Collection | Minor | | |
|
Feature |
Impact |
Feature Impact |
Other Nodes |
|---|---|---|---|
|
OpenStack Newton |
Major |
New |
NFVO/VNFM |
|
None |
N/A |
NFVO/VNFM | |
|
On-Demand Application Scaling |
None |
N/A |
NFVO/VNFM |
|
Multi-Region Dashboard and Placement Zones |
None |
N/A |
|
|
Application Template Export |
None |
N/A |
NFVO/VNFM |
|
TOSCA Support |
Minor |
Enhanced |
NFVO/VNFM |
|
Mistral Support Deployment |
Major |
New |
NFVO/VNFM |
|
Deployment Wizard |
Major |
New |
NFVO/VNFM |
|
Enhanced Stack Panels |
Major |
New |
NFVO/VNFM |
|
Atlas Backup |
Major |
Enhanced |
NFVO/VNFM |
4 Impact on CEE Features
This section describes the changes made in the different CEE features in the CEE 6 release.
4.1 OpenStack Mitaka
4.1.1 Description
In CEE 6 Mirantis OpenStack (MOS) 9 release is used, which includes OpenStack Mitaka. Mirantis OpenStack 7 with OpenStack Kilo was used in CEE 16A.
4.1.2 Impact
There is impact on CEE features due to OpenStack Mitaka:
- CEE 6 adds support for the optional allocation of realtime vCPUs of the VMs. For more details, refer to the Openstack Compute API in CEE document.
4.1.2.1 Capacity and Performance
There is no information available at this time.
4.1.2.2 Hardware
There is no hardware impact for this feature.
4.1.2.3 Implementation
No specific steps are needed for the implementation of this feature.
4.1.2.4 Interface
The impact of this feature on interfaces is described in the following subsections.
4.1.2.4.1 API
The APIs of the OpenStack components are backwards compatible within an API version. Additional functionality that is available can be used with the same API version.
For more information, refer to the OpenStack API Complete Reference and CEE-specific API documents referenced below.
A high-level overview of the API versions to be used in CEE is provided below:
- Compute API V2.1 (OpenStack component Nova) –
mandatory
Refer to the OpenStack Compute API in CEE. - Networking API V2 (OpenStack component Neutron) –
mandatory
The Networking API in CEE must be checked for changes related to supported functionality and extensions. Refer to the OpenStack Networking API in CEE in Dell Multi-Server Deployment, OpenStack Networking API in CEE in Single Server Deployment, and OpenStack Networking API in CEE in BSP Deployment documents. - Identity API V2 (OpenStack component Keystone) –
mandatory
Refer to the OpenStack Identity API in CEE. - Block Storage API V2 (OpenStack component Cinder) –
mandatory
Refer to the OpenStack Block Storage API in CEE. - Image Service API V2 (OpenStack component Glance) –
recommended
Refer to the OpenStack Image Service API in CEE. - Telemetry API V2 (OpenStack component Ceilometer) –
mandatory
Refer to the OpenStack Telemetry API in CEE.
The Object Storage API is used only internally in CEE; it is not available for tenants. The Orchestration API is covered as part of Atlas, see Section 5.
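As an example of addressing one of the versioned endpoints listed above, the sketch below builds a standard Identity API v2.0 token request; the endpoint URL and credentials are placeholders:

```python
import json

def keystone_v2_token_request(auth_url, tenant, user, password):
    """Build the URL and JSON body for a Keystone Identity API v2.0
    token request (placeholder endpoint and credentials)."""
    url = auth_url.rstrip("/") + "/v2.0/tokens"
    body = {
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {"username": user, "password": password},
        }
    }
    return url, json.dumps(body)

url, body = keystone_v2_token_request(
    "https://cee.example.com:5000", "demo", "admin", "secret")
print(url)  # https://cee.example.com:5000/v2.0/tokens
```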
4.1.2.4.2 Man–Machine Interface (CLI)
The impact is related to the following:
- Changes due to new or changed functionality: changes are described in API documents referenced in Section 4.1.2.4.1.
- Migration from OpenStack component specific to OpenStack CLI client.
4.1.2.5 Obsolete Features
Details on obsolete features are mostly given in the sections for the specific features within this document.
4.2 Cloud SDN Switch
4.2.1 Description
In CEE 6, the Cloud Software Defined Networking Switch (CSS) is used in the traffic switching domain. It is based on Open vSwitch (OVS) and Intel DPDK: network packets are processed in user space, and high performance and low latency are achieved by the use of poll mode drivers. Due to DPDK and the user-space packet processing, the network performance is higher compared to a conventional (kernel) OVS datapath. DPDK acceleration for OVS is only used in the vSwitch of the compute host (host OS) and not in vCIC nodes or vFuel.
DPDK acceleration for OVS is for the tenant network only, for control network the kernel datapath is used.
4.2.2 Impact
A newer version of Cloud Software Defined Networking Switch, CSS is delivered with CEE 6.
CSS2 has been replaced by CSS version 4 on the compute hosts. The supported port types for the DPDK datapath are still dpdk and dpdkvhostuser for NIC and Virtio interfaces, respectively.
4.2.2.1 Capacity and Performance
CSS performance is calculated based on the managed-switch solution. The unmanaged-switch solution is not recommended, because traffic floods the system, resulting in packet drops.
4.2.2.2 Hardware
There is no hardware impact for this feature.
4.2.2.3 Implementation
For implementation information, refer to the Configuration File Guide.
4.2.2.4 Interface
There is no interface impact for this feature.
4.2.2.4.1 API
The enhanced vSwitch feature does not have an API interface.
4.2.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.2.2.5 Obsolete Features
The port type dpdkvhostcuse should no longer be used, as it will be removed in the next release of CSS.
4.3 SDN Tight Integration
4.3.1 Description
With CEE SDN Tight Integration, the components of the CSC are integrated into the vCIC cluster, utilizing CEE's high availability functionality.
CSC consists of two main components, E-ODL and QBGP, running as separate processes in the vCICs. The SDN controller serves as a Neutron back end and provides an extended set of L2 and L3 functions. Neutron manages the SDN controller via the Networking-ODL ML2 mechanism driver and a set of ODL L3 service plugins.
4.3.2 Impact
This is a new feature.
4.3.2.1 Capacity and Performance
The CSC is integrated into the vCIC and thus consumes additional resources, limiting the resources available for tenant VMs on compute blades hosting vCICs. For more details, refer to OpenStack Networking API in CEE with SDN.
4.3.2.2 Hardware
SDN tight integration is supported on Hyperscale Datacenter System 8000 (HDS) platform.
4.3.2.3 Implementation
For implementation information, refer to the Configuration File Guide.
4.3.2.4 Interface
4.3.2.4.1 API
For impacts on the Neutron API, refer to OpenStack Networking API in CEE with SDN.
BGP L3VPN is controlled outside Neutron using a REST API provided by CSC. Refer to BGP L3VPN Service, Reference [2] for more information.
4.3.2.4.2 Man–Machine Interface (CLI)
CSC provides a CLI for direct command line access to the SDN controller. Refer to Using the CLI, Reference [5] and CSC Application Command List, Reference [3] for more information.
4.3.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.4 Small Footprint Deployment
4.4.1 Description
Small Footprint Deployment is a feature that implements vCIC and vFuel and allows the decrease of the number of servers needed for the CEE infrastructure. In CEE 6, some compute hosts can be used as hosts for vCIC and vFuel as well as tenant VMs (the exact allocation depends on the available capacity).
The redundant deployment should contain at least three servers with three vCIC nodes running on different servers called vCIC hosts (compute hosts hosting vCIC nodes). It is possible to collocate Atlas and vFuel on some of these servers if there is enough capacity.
4.4.2 Impact
There is no impact on this feature in this release.
4.4.2.1 Capacity and Performance
There is no information available at this time.
4.4.2.2 Hardware
Small Footprint deployment is used with all hardware types.
4.4.2.3 Implementation
For implementation information, refer to the Configuration File Guide and the installation documentation.
4.4.2.4 Interface
4.4.2.4.1 API
There is no impact on API related to this feature.
4.4.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.4.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.5 Single Server Deployment
4.5.1 Description
Single Server solution runs on a single physical server and includes the following:
- One vCIC
- Atlas VM, in a configuration that requires less CPU, RAM and local disk resources
- Tenant VMs
In a Single Server configuration vFuel runs on the Kickstart Server and is not migrated to the CEE Region (however, some of the Fuel services are migrated to the vCIC).
4.5.2 Impact
In CEE 6 there are changes in the Single Server System Dimensioning Guide, CEE 6 concerning resource reservation. The amount of RAM, CPU, and disk has changed or follows new formulas and rules to allocate CSS cores.
4.5.2.1 Capacity and Performance
There is no impact on capacity and performance for this feature in this release. Single Server has limitations in capacity and performance. Refer to the documentation of a respective VNF to decide if Single Server deployment is supported.
4.5.2.2 Hardware
Dell R630 is the only verified hardware for Single Server deployment. Extreme switches as ToR switches, and block storage (Cinder, provided by centralized storage) cannot be used with this feature.
4.5.2.3 Implementation
For implementation information, refer to the Configuration File Guide and the installation documentation.
4.5.2.4 Interface
The handling of NUMA-related aspects is different in Single Server deployments – different flavor keys are used for Single Server to allocate RAM and CPU resources.
4.5.2.4.1 API
Refer to the OpenStack Compute API in CEE for information on CPU and memory allocation in Single Server deployments.
4.5.2.4.2 Man–Machine Interface (CLI)
The changes in the CLI interface correspond to the changes in API interface described above.
4.5.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.6 Secure TLS Communication
4.6.1 Description
The Secure TLS Communication feature changes the configuration of OpenStack endpoints: HTTPS is used now instead of HTTP for communication with OpenStack REST API. TLS 1.2 is the recommended protocol version.
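On the client side, the recommended protocol floor can be enforced with, for example, Python's standard ssl module; a minimal sketch (the CA file path is a placeholder):

```python
import ssl

# Client context that refuses anything below the recommended TLS 1.2
# and requires a CA-signed server certificate for the OpenStack endpoints.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
# context.load_verify_locations("cee_region_ca.pem")  # placeholder CA file
```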
4.6.2 Impact
There are changes for this feature in this release, as a result of the Mitaka uplift.
In the CEE 6 release, haproxy is configured to add an additional header (X-Forwarded-Proto: https) to the HTTP request when the original request used HTTPS. Services that support this option use the setting in the URL scheme of the response.
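In haproxy configuration terms, this behavior corresponds to a rule like the following sketch; the frontend name, bind line, and certificate path are illustrative, not the CEE-delivered configuration:

```
frontend openstack_api
    bind :443 ssl crt /etc/haproxy/cee.pem   # illustrative path
    # Tell back-end services that the original request used HTTPS
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend openstack_services
```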
4.6.2.1 Capacity and Performance
There is no information available at this time.
4.6.2.2 Hardware
There is no hardware impact for this feature.
4.6.2.3 Implementation
HTTPS is secure only if the certificates are compliant with security recommendations. It is necessary to get certificates for each CEE region being installed. Certificates must be signed by a valid Certification Authority – self-signed certificates are not accepted. Two separate certificate files are needed for Atlas and for the vCIC, and two certificate files for the respective Certification Authorities (CAs). For more details, refer to the following documents:
- SW Installation in Multi-Server Deployment and SW Installation in Single Server Deployment
- Configuration File Guide
- Security User Guide
4.6.2.4 Interface
There are no interface changes for this feature in this release.
4.6.2.4.1 API
There is impact on tools that could be used for connectivity to OpenStack API since HTTPS is used instead of HTTP.
4.6.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.6.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.7 Distributed Block Storage
4.7.1 Description
Distributed block storage is a new feature in CEE 6 that implements Cinder functionality by distributing and scaling the available local disk capacity using EMC ScaleIO. EMC ScaleIO is a software-only solution that turns local storage devices into shared block storage.
4.7.2 Impact
This is a new feature.
4.7.2.1 Capacity and Performance
The ScaleIO system raw capacity can range between 300 GB and 16 PB. ScaleIO uses data mirroring in order to maintain data availability and protect against a single-point failure, so the usable capacity is half of the system raw capacity.
4.7.2.2 Hardware
The device size can be between 100 GB and 8 TB.
4.7.2.3 Implementation
CEE 6 supports a two-layer ScaleIO architecture. Dedicated blades, in addition to the already existing compute nodes in CEE, are needed for the ScaleIO components. The deployment of ScaleIO is triggered automatically during CEE installation and is configurable.
4.7.2.4 Interface
Details of interface changes are described in the following subsections.
4.7.2.4.1 API
CEE 6 uses REST API through HTTP(S) to communicate with the ScaleIO gateway.
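For illustration, the sketch below builds such a gateway request; the login path and HTTP Basic authentication follow the ScaleIO gateway convention, while the gateway address and credentials are placeholders:

```python
import base64

def scaleio_login_request(gateway, user, password):
    """Build the URL and headers for a ScaleIO gateway REST login
    over HTTPS (placeholder address and credentials)."""
    url = "https://" + gateway + "/api/login"
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return url, {"Authorization": "Basic " + credentials}

url, headers = scaleio_login_request("gateway.example.com", "admin", "secret")
print(url)  # https://gateway.example.com/api/login
```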
4.7.2.4.2 Man-Machine Interface (CLI)
CEE 6 uses the SCLI to interact with the ScaleIO components.
4.7.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.8 ScaleIO as CEE-External Software Defined Block Storage
4.8.1 Description
ScaleIO as CEE-External Software Defined Block Storage is a new feature in CEE 6 that allows CEE to use an external ScaleIO cluster as block storage back end.
4.8.2 Impact
This is a new feature.
4.8.2.1 Capacity and Performance
Not applicable.
4.8.2.2 Hardware
There is no hardware impact for this feature.
4.8.2.3 Implementation
CEE 6 supports ScaleIO cluster as external block storage over existing L2 storage network connectivity.
4.8.2.4 Interface
Details of interface changes are described in the following subsections.
4.8.2.4.1 API
CEE 6 uses REST API through HTTP(S) to communicate with the ScaleIO gateway.
4.8.2.4.2 Man-Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.8.2.4.3 Obsolete Features
There are no obsolete features related to this feature.
4.9 Swift on ScaleIO
4.9.1 Description
By default, Glance uses Swift as its storage back end, and the Swift store is located on the local disks of the vCIC hosts. This storage has capacity limitations for Glance images. With this feature, capacity located on the ScaleIO cluster can be used for the Swift store.
4.9.2 Impact
This is a new feature in CEE 6.
4.9.2.1 Capacity and Performance
The capacity for Glance is increased. Because Glance on ScaleIO implies a connection over the network, its performance may be slightly lower than with Glance on a directly attached local disk.
4.9.2.2 Hardware
There is no hardware impact for this feature.
4.9.2.3 Implementation
This feature can be automatically activated during installation of CEE if configured in the config.yaml file.
The feature can be manually activated after installation, by following the Swift Store on ScaleIO Activation and Swift Store on ScaleIO Expansion operating instructions.
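A sketch of what the automatic activation might look like in config.yaml; the key names below are hypothetical, so refer to the Configuration File Guide for the actual attributes:

```yaml
# config.yaml (hypothetical keys; see the Configuration File Guide)
storage:
  swift_on_scaleio: true   # place the Swift store on the ScaleIO cluster
```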
4.9.2.4 Interface
There are no interface changes for this feature in this release.
4.9.2.4.1 API
There is no impact on API related to this feature.
4.9.2.4.2 Man-Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.9.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.10 SR-IOV
4.10.1 Description
SR-IOV is a technology that allows the isolation of PCI Express resources for manageability and performance reasons. SR-IOV allows different VMs to share a single PCI Express hardware interface (as of now, a NIC). A NIC is seen as a Physical Function; it is shared between VMs, but each VM still has direct access to the network interface, which enhances performance.
There are two types of SR-IOV:
- SR-IOV flat — the virtual ports managed by Neutron can be configured to not provide VLAN segmentation.
- SR-IOV vlan — the virtual ports managed by Neutron can be configured to provide VLAN segmentation.
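In Neutron API terms, requesting an SR-IOV port uses the standard binding:vnic_type attribute with the value direct; a minimal sketch of the port-create body, where the network ID and port name are placeholders:

```python
import json

# Neutron v2.0 port-create body requesting an SR-IOV virtual function.
port_request = {
    "port": {
        "network_id": "NET_UUID",        # placeholder
        "name": "sriov-port-0",          # placeholder
        "binding:vnic_type": "direct",   # SR-IOV direct passthrough VF
    }
}
print(json.dumps(port_request, sort_keys=True))
```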
4.10.2 Impact
In the CEE 6 release, support for SR-IOV vlan and SR-IOV flat on Dell is added. Automated configuration for permanent SR-IOV support in the host operating system and at host network setup is also added in this release.
Neutron API can be used to configure L2 connectivity between VMs using SR-IOV.
4.10.2.1 Capacity and Performance
There is no impact on capacity and performance compared to CEE 16A.
4.10.2.2 Hardware
There is no hardware impact for this feature.
4.10.2.3 Implementation
For information about the implementation of SR-IOV, refer to the Configuration File Guide, OpenStack Networking API in CEE in Dell Multi-Server Deployment and OpenStack Networking API in CEE in HDS Deployment documents.
4.10.2.4 Interface
There is no impact on the interfaces in the CEE 6 release.
4.10.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.11 Bandwidth Management
4.11.1 Description
The Bandwidth Management feature provides the possibility to schedule VMs with respect to available network resources, such as bandwidth. Nova ensures that the total amount of allocated bandwidth on an interface does not exceed the capacity of the interface.
4.11.2 Impact
This feature was part of CEE 15B and it has been reintroduced in CEE 6.
4.11.2.1 Capacity and Performance
No impact on capacity and performance.
4.11.2.2 Hardware
There is no hardware impact for this feature.
4.11.2.3 Implementation
Bandwidth management can be configured using the config.yaml file. For more information, refer to the Configuration File Guide.
4.11.2.4 Interface
There are no interface changes for this feature in this release.
4.11.2.4.1 API
There is no impact on API related to this feature.
4.11.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.11.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.12 Backup and Restore
4.12.1 Description
Backup and Restore in the CEE 6 release has changed.
Fuel backup is replaced by Fuel synchronization. Fuel synchronization is a manual procedure that copies the active Fuel VM to a cold-standby Fuel VM on another compute host. CIC Domain Data Backup and Restore replaces CEE infrastructure backup. The CIC domain data backup and restore procedures are manual, and are used in cases of corruption or misconfiguration in the vCICs. The following are included in the CIC Domain Data Backup:
- OpenLDAP databases
- MySQL databases
- Openstack and SDN configuration files from the vCICs
The restore of OpenLDAP databases, MySQL databases, Openstack and SDN configuration files can be partial or complete.
- The SDN backup operation is performed only for SDN enabled nodes.
- CIC domain data backup and restore are not applicable for Single Server.
Disaster recovery has been added in CEE 6. Disaster Recovery backs up environment configuration files only, which can be used for redeployment after a man-made or natural disaster.
4.12.2 Impact
The CEE Fuel backup and restore are replaced by Fuel synchronization. The CEE infrastructure backup and restore are replaced by CIC Domain Data Backup and Restore.
The CIC domain backup and restore are performed using scripts that can be run from any of the vCICs.
The disaster recovery backup includes environment configuration files (for example: CEE version, configuration files under the /mnt/cee_config folder, user files of OpenStack deployment). The disaster recovery backup file must be stored at a different location, outside the CEE region. The restore process of CEE includes the redeployment of CEE on a similar hardware topology to the one the backup was performed on, regarding parameters described in the configuration files. Refer to Configuration File Guide for more information.
For more details regarding disaster recovery, refer to Disaster Recovery.
4.12.2.1 Capacity and Performance
Disaster recovery times are reduced.
4.12.2.2 Hardware
CIC Domain Backup and Restore and Fuel synchronization are not applicable for Single Server deployment.
Disaster Recovery is not available on HDS platform.
4.12.2.3 Implementation
CIC Domain Backup and Restore are implemented as scripts that can be run from any of the vCICs. No other configuration is needed.
4.12.2.4 Interface
4.12.2.4.1 API
The Backup and Restore features have no API interface.
4.12.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI in this release.
4.12.2.5 Obsolete Features
No information is available.
4.13 VLAN Restore in CEE on BSP
4.13.1 Description
The BSP backup and restore feature creates backups of the BSP software and configuration. For CEE as a BSP tenant, a new global configuration attribute, port_list_restorable, is introduced, with the default value True. This default enables the restoration of the VLAN port subscriptions in CMX for the tenants of CEE, to reflect the BSP configuration backup.
- Note:
- The BSP backup and restore feature is specific to BSP hardware platforms, and is implemented by the BSP software.
4.13.2 Impact
This is a new feature in CEE 6.
4.13.2.1 Capacity and Performance
No impact on capacity and performance.
4.13.2.2 Hardware
There is no hardware impact for this feature.
4.13.2.3 Implementation
A new global attribute is introduced in neutron_ericsson_cmx.yaml.
For implementation, refer to the Configuration File Guide.
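As an illustration, the attribute could appear in neutron_ericsson_cmx.yaml as follows; the surrounding structure of the file is deployment-specific, so treat this fragment as a sketch only and refer to the Configuration File Guide for the authoritative format:

```yaml
# Illustrative fragment only; placement within neutron_ericsson_cmx.yaml
# is deployment-specific. Set to False to skip restoring the VLAN port
# subscriptions in CMX after a BSP configuration restore.
port_list_restorable: True
```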
4.13.2.4 Interface
There are no interface changes for this feature in this release.
4.13.2.4.1 API
There is no impact on API related to this feature.
4.13.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI in this release.
4.13.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.14 Resource Management for Virtual Machines
4.14.1 Description
The Resource Management for Virtual Machines feature includes the following:
- Handling of NUMA architecture for memory allocation and huge pages memory backing for VMs
- Handling of NUMA architecture for CPU allocation and CPU pinning for VMs
- Exposure of the NUMA/CPU topology to the guest OS
- NUMA balancing – the possibility to disable the automatic NUMA balancing kernel feature on the compute nodes
4.14.2 Impact
The implementation of the features related to the Resource Management for Virtual Machines is changed due to the use of OpenStack Mitaka in CEE 6. The main changes are as follows:
- Hypervisor support for NUMA
4.14.2.1 Capacity and Performance
The performance of virtualized guests is improved. Guests can be optimized to use specific NUMA nodes when provisioning resources. By exposing the NUMA topology to the VM and pinning vCPU to a specific core, VM performance can be improved by ensuring that the access to memory will always be local in terms of NUMA topology.
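For illustration, the standard OpenStack Mitaka flavor extra specs that control NUMA placement, CPU pinning, and huge page backing can be represented as follows; the values shown are examples, not CEE defaults:

```python
# Standard Nova flavor extra specs (OpenStack Mitaka) controlling NUMA
# placement, CPU pinning, and huge page memory backing for a guest.
# The values below are illustrative examples, not CEE defaults.
numa_flavor_extra_specs = {
    "hw:numa_nodes": "2",          # spread the guest over two NUMA nodes
    "hw:cpu_policy": "dedicated",  # pin each vCPU to a dedicated host core
    "hw:mem_page_size": "large",   # back guest RAM with huge pages
}

def is_pinned(extra_specs):
    """Return True if the flavor requests dedicated (pinned) vCPUs."""
    return extra_specs.get("hw:cpu_policy") == "dedicated"
```

A flavor carrying these extra specs causes the scheduler to place the guest so that its memory accesses stay local to the chosen NUMA nodes.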
4.14.2.2 Hardware
There is no hardware impact of this feature.
4.14.2.3 Implementation
For information about the allocation of RAM and CPU during installation of CEE, refer to the Configuration File Guide.
4.14.2.4 Interface
There are no interface changes for this feature in this release.
4.14.2.4.1 API
There is no impact on API related to this feature.
4.14.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.14.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.15 CEE Reference Configurations
4.15.1 Description
CEE is a software product that can be installed on several hardware configurations, and optionally configured with the following:
- HDS and Ericsson Cloud SDN Controller (CSC) as networking back end
- ScaleIO distributed block storage as Cinder back end
4.15.2 Impact
4.15.2.1 Capacity and Performance
Improved installation procedures and vCIC optimizations allow increasing the number of managed physical servers in the CEE region.
4.15.2.2 Hardware
A new hardware matrix is introduced in this release of CEE. For detailed information, refer to the CEE Technical Description document.
4.15.2.3 Implementation
Different CEE installation prerequisites are added in CEE 6, depending on the following:
- HDS configuration with SDN tight integration and with ScaleIO
- HDS configuration with SDN tight integration and without ScaleIO
- HDS configuration without SDN tight integration and with ScaleIO
- HDS configuration without SDN tight integration and without ScaleIO
The CEE installation flow is described in the CEE Installation document.
4.15.2.4 Interface
The templates for all files (config.yaml, configuration, switching scheme and cabling scheme YAML files) are updated and must be used when preparing deployment-specific configuration files.
4.15.2.4.1 API
There is no API interface related to this feature.
4.15.2.4.2 Man–Machine Interface (CLI)
There are changes in the commands used to install vFuel.
4.15.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.16 Equipment Management
4.16.1 Description
The Equipment Management feature allows the replacement of servers and ToR switches used in CEE.
4.16.2 Impact
Documents on the expansion and replacement or repair have been updated.
4.16.2.1 Capacity and Performance
Not applicable.
4.16.2.2 Hardware
There is no hardware impact for this feature.
4.16.2.3 Implementation
Not applicable.
4.16.2.4 Interface
Not applicable.
4.16.2.4.1 API
This feature has no API interface.
4.16.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.16.2.5 Obsolete Features
There are no obsolete features due to this feature.
4.17 Centralized Identity Management
4.17.1 Description
The purpose of Identity and Access Management (CEE IdAM) is to manage identities and credentials for cloud users, and to provide authentication and access control services for user access.
4.17.2 Impact
There is no impact.
4.17.2.1 Capacity and Performance
No information is available.
4.17.2.2 Hardware
There is no hardware impact for this feature.
4.17.2.3 Implementation
Refer to the Security User Guide and the Configuration File Guide for information about the implementation of IdAM.
4.17.2.4 Interface
No information is available.
4.17.2.4.1 API
This feature has no API.
4.17.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.17.2.5 Obsolete Features
No information is available.
4.18 Ericsson Neutron
4.18.1 Description
Ericsson Neutron provides most of the functionality provided by OpenStack Neutron. The following functionality is added in the CEE 6 release:
- Flat networking with Neutron SR-IOV
- VLAN-based Neutron SR-IOV by uplift to Mitaka
- The Neutron DHCP service is handled by the CSC SDN controller for SDN deployments. For non-SDN deployments, the DHCP service is provided under the supervision of the neutron-dhcp-agent.
- CEE provides a performant implementation of the security group and allowed address pair APIs in the form of CSS.
4.18.2 Impact
The impact is listed below:
- The changes in Ericsson Neutron are related to the changes in OpenStack Mitaka. See the sections below for more information.
- In CEE 6, the security group and allowed address pair APIs are supported; their functionality is subject to the firewall driver chosen at deployment time.
- Note:
- Depending on the deployment-time configuration, setting up the allowed address pairs and security groups may or may not affect the connectivity to the VMs. For more details, refer to the Configuration File Guide.
- The Trunk port v1 API is deprecated in this release. This API is fully supported in this release, but will be discontinued in the following CEE releases, as it will be replaced by the Trunk port v4 API. There are no plans for CEE to support both versions of this API in parallel.
- In CEE 6, neutron-server is managed only through Pacemaker; it is no longer managed through upstart. For any neutron-server management operation, for example a restart of neutron-server, use crm commands, for example: crm resource restart neutron-server
- With SDN tight integration, SDN works as the back end for Neutron via ML2 driver and Service plugins.
4.18.2.1 Capacity and Performance
When VXLAN is used instead of VLAN in SDN tight integration, more tenant networks are available.
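The gain comes from the size of the segment identifier: a VLAN ID is 12 bits, while a VXLAN Network Identifier (VNI) is 24 bits. A quick calculation illustrates the difference:

```python
# VLAN IDs are 12-bit; IDs 0 and 4095 are reserved, leaving 4094 usable.
max_vlan_networks = 2**12 - 2

# VXLAN VNIs are 24-bit, allowing roughly 16.7 million segments.
max_vxlan_networks = 2**24
```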
4.18.2.2 Hardware
There is no hardware impact for this feature.
4.18.2.3 Implementation
The description of certain Neutron configuration information is covered in the Configuration File Guide.
4.18.2.4 Interface
4.18.2.4.1 API
The changes in the API are related to the support of the API provided by Neutron in OpenStack Mitaka, refer to OpenStack Networking API in CEE in Dell Multi-Server Deployment, OpenStack Networking API in CEE in Single Server Deployment, and OpenStack Networking API in CEE in BSP Deployment.
4.18.2.4.2 Man–Machine Interface (CLI)
The changes in the CLI correspond to the changes in the API specified above.
4.18.2.5 Obsolete Features
There are no obsolete features in this release related to Ericsson Neutron.
4.19 Continuous Monitoring High Availability (CM-HA)
4.19.1 Description
CM-HA is a functionality that periodically checks the status of compute hosts, vCIC nodes and vFuel.
CM-HA issues alarms related to the failure of a vCIC, compute host (including VM-related alarms), and Fuel, and provides alerts on restart of these components.
If a compute host is detected to have failed, then CM-HA initiates one of the following cases, depending on the HA-policy of the VM:
- Evacuation of the VMs from the affected compute host
- Restart of the VM on the same compute host
- No actions are performed in case there is no policy defined
In all three cases there are certain alarms triggered by CM-HA.
4.19.2 Impact
The impact related to CM-HA in this release is as follows:
- CM-HA fencing is introduced: when a compute node is detected as faulty by CM-HA, a failure of the control network does not cause duplication of tenant VMs when they are evacuated by CM-HA. CM-HA fencing is configurable and can be turned off; refer to the Configuration File Guide.
4.19.2.1 Capacity and Performance
There is no impact of this feature on capacity and performance.
4.19.2.2 Hardware
There is no hardware impact for this feature.
4.19.2.3 Implementation
There are no specific steps related to the implementation of the CM-HA feature in general. VMs must have metadata defined to use CM-HA.
4.19.2.4 Interface
4.19.2.4.1 API
The HA Policy can be set in the metadata of the VM, for example:
- ha-policy=unmanaged
- ha-policy=managed-on-host
- ha-policy=ha-offline
The above HA policies replace the following removed evacuation policies: NoEvacuation and Evacuation. Refer to the OpenStack Compute API in CEE for details.
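A minimal sketch of how the metadata could drive the recovery decision follows. The policy names are from this document, but the mapping of each policy to a specific action and the action names themselves are assumptions for illustration, not CEE internals:

```python
# Illustrative dispatch of CM-HA recovery actions based on the ha-policy
# VM metadata key. The policy-to-action mapping and the action names are
# assumptions for this sketch, not the CEE implementation.
RECOVERY_ACTIONS = {
    "ha-offline": "evacuate",            # move the VM to another compute host
    "managed-on-host": "local-restart",  # restart the VM on the same host
    "unmanaged": "none",                 # take no action, only raise alarms
}

def recovery_action(vm_metadata):
    """Return the assumed recovery action for a VM on a failed host."""
    policy = vm_metadata.get("ha-policy", "unmanaged")
    return RECOVERY_ACTIONS.get(policy, "none")
```

In all three cases, alarms are still raised by CM-HA regardless of the chosen action.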
4.19.2.4.2 Man–Machine Interface (CLI)
The changes in the CLI correspond to the changes in the API described above.
4.19.2.5 Obsolete Features
The following evacuation policies have been removed: NoEvacuation and Evacuation.
4.20 Performance Management
4.20.1 Description
Performance Management is a feature that includes the collection of performance data related to the virtual resources (by Ceilometer) and to the host environment (by Zabbix), and the provision of this data through the northbound interfaces. This functionality provides 3GPP-compliant XML report files and access via a REST API.
4.20.2 Impact
There is impact on this feature in this release due to the OpenStack uplift to Mitaka.
New SDN-related counters are added.
4.20.2.1 Capacity and Performance
There is no impact on capacity and performance in this release.
4.20.2.2 Hardware
Ceilometer is not implemented in the Single Server solution.
4.20.2.3 Implementation
There are no specific implementation steps for this feature.
4.20.2.4 Interface
4.20.2.4.1 API
Refer to the following documents: OpenStack Telemetry API in CEE, Preconfigured Key Performance Indicators, and the Telemetry API section in OpenStack API Complete Reference
4.20.2.4.2 Man–Machine Interface (CLI)
There is no impact on this feature in this release.
4.20.2.5 Obsolete Features
There are no obsolete features or functionality related to this feature.
4.21 Fault Management
4.21.1 Description
The Fault Management feature provides alarms and alerts from the different software and hardware components used in the CEE region. Fault Management provides REST API and SNMP as northbound interfaces.
4.21.2 Impact
There is a new feature in CEE 6 related to fault management. The alarms can now be filtered based on tenant.
4.21.2.1 Capacity and Performance
There is no impact on capacity and performance.
4.21.2.2 Hardware
There is no hardware impact for this feature.
4.21.2.3 Implementation
The Fault Management Northbound API interwork description and the Fault Management Configuration Guide describe the implementation of Fault Management.
4.21.2.4 Interface
The impact on the interface is described in the subsections below.
4.21.2.4.1 API
There are no API interface changes in this release.
4.21.2.4.2 Man–Machine Interface (CLI)
Alarm and alert history can be retrieved using the CLI client of Watchmen (watchmen-client alarm-history).
4.21.2.5 Obsolete Features
There are no obsolete features related to this feature in this release.
4.22 Upgrade And Rollback
4.22.1 Description
The Upgrade (update) and Rollback features allow performing a software upgrade or update of the vCIC nodes, compute hosts, and vFuel.
Major version upgrade from CEE 16A or earlier product versions is not provided.
4.22.2 Impact
CEE 6.5 can be updated to the latest CEE 6 version, CEE 6.5.1.
4.22.2.1 Capacity and Performance
There is no foreseen impact on capacity and performance.
4.22.2.2 Hardware
There is no hardware impact for this feature.
4.22.2.3 Implementation
There is no specific configuration to implement this feature.
4.22.2.4 Interface
4.22.2.4.1 API
This feature has no API.
4.22.2.4.2 Man–Machine Interface (CLI)
There is no impact on CLI related to this feature.
4.22.2.5 Obsolete Features
There are no obsolete features related to this feature.
4.23 Security and Audit Trail Logging
4.23.1 Description
The audit trail log contains detailed information about system configuration changes. This audit tool enables the service provider to check who carried out specific operations in the system, and when.
The security log records the security events on the node. The purpose of this is to record security events, for example, failed logins and attempts to access the node with valid or invalid credentials.
4.23.2 Impact
There is no information available as of now.
4.23.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
4.23.2.2 Hardware
There is no hardware impact for this feature.
4.23.2.3 Implementation
The configuration of this functionality is included in the config.yaml template and does not need to be changed in most cases.
4.23.2.4 Interface
4.23.2.4.1 API
Not applicable.
4.23.2.4.2 Man–Machine Interface (CLI)
Not applicable.
4.23.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.24 IPv6 Support for Dedicated Neutron APIs
4.24.1 Description
Support for IPv6 addresses in Security Group rules and in Allowed Address Pairs.
4.24.2 Impact
There is impact on the CEE features Security Groups and Allowed Address Pairs. For more details, refer to OpenStack Networking API in CEE with SDN.
4.24.2.1 Capacity and Performance
No foreseen impact.
4.24.2.2 Hardware
There is no hardware impact for this feature.
4.24.2.3 Implementation
No specific steps are needed for the implementation of this feature.
4.24.2.4 Interface
The impact of this feature on interfaces is described in the following subsections.
4.24.2.4.1 API
Refer to the OpenStack Networking API in CEE with SDN document.
4.24.2.4.2 Man–Machine Interface (CLI)
The impact is related to the following:
- Changes due to new or changed functionality:
Changes are described in the OpenStack Networking API in CEE with SDN document.
4.24.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.25 IPv6 Basic Address Configuration: Manual Configuration and SLAAC
4.25.1 Description
Support for Neutron IPv6 address management using manual configuration and SLAAC.
4.25.2 Impact
There is impact on configuration of Neutron subnets. For more details, refer to the OpenStack Networking API in CEE with SDN document.
4.25.2.1 Capacity and Performance
No foreseen impact.
4.25.2.2 Hardware
In one mode of SLAAC, an external router is required for providing IPv6 address prefixes via Router Advertisement (RA) messages. For more information, refer to the OpenStack Networking API in CEE with SDN document.
4.25.2.3 Implementation
No specific steps are needed for the implementation of this feature.
4.25.2.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.25.2.4.1 API
Refer to the OpenStack Networking API in CEE with SDN document.
4.25.2.4.2 Man-Machine Interface (CLI)
The impact is related to the following:
- Changes due to new or changed functionality:
Changes are described in the OpenStack Networking API in CEE with SDN document.
4.25.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.26 PCI Passthrough Support
4.26.1 Description
Support for Nova PCI passthrough is added, allowing full access to and direct control of a physical PCI device in guests.
SR-IOV physical function passthrough functionality is added. It can also be used for NICs on which the SR-IOV virtualization mode cannot be disabled, and on which the PCI passthrough feature is therefore not otherwise configurable.
4.26.2 Impact
Initial configuration for PCI passthrough has to be performed; refer to the Configuration File Guide. A Nova flavor requesting PCI devices has to be created. For more details, refer to the OpenStack documentation.
4.26.2.1 Capacity and Performance
The guest VM can utilize the full line rate of the physical NIC.
4.26.2.2 Hardware
SR-IOV and PCI passthrough cannot be configured on the same compute host.
4.26.2.3 Implementation
For implementation and configuration, refer to the Configuration File Guide.
4.26.2.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.26.2.4.1 API
Usage of existing functionality in Nova is enabled. Refer to the OpenStack Compute API.
4.26.2.4.2 Man-Machine Interface (CLI)
The impact is related to the following:
- Changes due to new or changed functionality:
Changes are described in the OpenStack Compute API document.
4.26.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.27 L3 Fabric Support
4.27.1 Description
This feature provides the possibility to deploy CEE and use an L2 overlay service on an HDS L3 switch fabric.
4.27.2 Impact
For CEE with SDN on HDS, the hw_vtep_ip and hw_vtep_gw parameters must be configured in config.yaml in order to add a HW-VTEP route on the compute hosts for HW-VTEP IP reachability.
For more information on the changes in CEE installation process on HDS hardware platform, refer to CEE on HDS Installation.
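As an illustration, the two parameters might appear in config.yaml as shown below; the placement within the file and the address values are assumptions for this sketch, so refer to the Configuration File Guide for the authoritative format:

```yaml
# Illustrative config.yaml fragment; placement and address values are
# assumptions. The route makes the HW-VTEP IPs reachable from the
# compute hosts across the L3 fabric.
hw_vtep_ip: 10.0.100.0/24   # HW-VTEP subnet read from the fabric APIs
hw_vtep_gw: 192.168.1.1     # next-hop gateway for the HW-VTEP route
```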
4.27.2.1 Capacity and Performance
It is possible to deploy CEE on larger vPODs in HDS.
4.27.2.2 Hardware
In HDS L3 fabric configuration, the DC-GWs are connected to leaf switches and not to spines as in the L2 fabric.
4.27.2.3 Implementation
HDS provides an L3 router API to read the HW-VTEP IPs from the fabrics. The IPs can be obtained from the CCM APIs and have to be entered in config.yaml. Configuration templates are updated for HDS with SDN. For configuration and implementation, refer to the CEE on HDS Installation and Configuration File Guide documents.
4.27.2.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.27.2.4.1 API
There are no API interface changes in relation to this feature.
4.27.2.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.27.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.28 Skylake Support
4.28.1 Description
Skylake support allows deploying CEE on HDS servers based on the Skylake architecture.
4.28.2 Impact
There is no impact on this feature in this release.
4.28.2.1 Capacity and Performance
The capacity and performance depend on the HDS server which uses the Skylake CPU and the corresponding chipset.
4.28.2.2 Hardware
CEE 6 uses the HDS CSU0201 as its Skylake-based server.
4.28.2.3 Implementation
The HDS BIOS contains the necessary BIOS-level drivers and fixes for the safe operation of Skylake processors; for example, the Skylake/Kaby Lake hyper-threading issue is already fixed on HDS.
Moreover, CEE uses Ubuntu 14.04 as the host operating system, and CSU02 certification for Ubuntu 14.04 and 16.04 has been completed successfully. The results are summarized on the following page: https://certification.ubuntu.com/hardware/201706-25565/.
CSU0201 hardware configurations contain NVMe disks as boot devices, which are supported by CEE.
4.28.2.4 Interface
CEE does not introduce any impact on any interfaces for this feature.
Before CEE deployment or during hardware management, the interface of the underlying hardware (for example, HDS CSU0201) might be used, but these are not applicable for CEE.
4.28.2.4.1 API
Not applicable.
4.28.2.4.2 Man-Machine Interface (CLI)
Not applicable.
4.28.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.29 Support for Fortville NICs 10/25/40G
4.29.1 Description
Since CEE uses Fuel and MOS 9.2 for life cycle management and Ubuntu 14.04 as the host operating system with the v4.4.0 Linux kernel, network card support in CEE has been limited by these two components. The feature for supporting Fortville network cards is delivered with CEE 6 by using the most recent i40e Linux driver both in Fuel and in the host operating system.
4.29.2 Impact
Fortville cards in a host can be used as tenant traffic and storage network interfaces in CEE.
For the tenant traffic network, a Fortville card can be configured for SR-IOV. PCI passthrough is not possible, because the virtualization mode cannot be disabled on Fortville NICs, and disabling it is a prerequisite of the PCI passthrough feature in OpenStack Mitaka.
Using a Fortville card for DPDK may impact the resilience and performance due to limitations and issues in Fortville card hardware and i40e driver.
4.29.2.1 Capacity and Performance
The raw physical network port capacity and performance depend on whether a 10G, 25G, or 40G Fortville card is installed.
4.29.2.2 Hardware
The feature can be used when a Fortville card supported by the i40e driver is mounted in the host.
4.29.2.3 Implementation
- Note:
- Ensure that the i40e driver and the Non-Volatile Memory (NVM) image versions match according to the section NVM and Software Compatibility in the Intel® Ethernet Controller 710 Series Feature Support Matrix, Reference [6].
CEE uses the following driver version as a DKMS package:
- Version: 2.1.26
- License: GPL
- Description: Intel® 40-10 Gigabit Ethernet Connection Network Driver
4.29.2.4 Interface
The physical interfaces (ports) on Fortville cards are specific to those cards. The Fuel GUI is able to display the ports of a Fortville card.
4.29.2.4.1 API
Not applicable.
4.29.2.4.2 Man-Machine Interface (CLI)
Not applicable.
4.29.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.30 Multi-Server Host Network Configuration without Storage Switching Domain
4.30.1 Description
CEE on multi-server platforms supports configurations without storage interfaces (storage switching domain) defined on compute hosts that are not hosting vCICs. This configuration is applicable to the whole CEE region.
- Note:
- This feature requires a global configuration at CEE installation. All compute hosts that are not hosting vCICs have no storage interfaces when using this feature.
4.30.2 Impact
The freed storage interfaces can be left out, left unconnected, or used for other purposes, such as PCI passthrough or SR-IOV functionality, if supported. However, only local storage can be used on the compute servers in this configuration, as it is not possible to attach remote storage volumes to the tenant VMs.
4.30.2.1 Capacity and Performance
There is no impact on capacity and performance.
4.30.2.2 Hardware
No additional NICs are required for the storage interfaces on compute hosts (except the vCIC hosts), and the physical switch must have storage network ports available only for the three vCIC hosts. A physical switch with fewer available ports can suffice for the same number of compute hosts, compared to the case when SAN is implemented on every compute host.
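The port saving can be sketched as follows; the assumption of two storage ports per host (a redundant pair) is illustrative and not from this document:

```python
def storage_ports_needed(vcic_hosts=3, ports_per_host=2):
    """Storage switch ports needed without a storage switching domain.

    Only the vCIC hosts keep storage interfaces. ports_per_host is an
    illustrative assumption (e.g. a redundant pair per host).
    """
    return vcic_hosts * ports_per_host

def storage_ports_saved(total_hosts, vcic_hosts=3, ports_per_host=2):
    """Ports freed compared to wiring storage on every compute host."""
    return (total_hosts - vcic_hosts) * ports_per_host
```

For example, under these assumptions a 20-host region needs only 6 storage ports on the switch instead of 40.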
4.30.2.3 Implementation
For implementation and configuration, refer to the Configuration File Guide.
4.30.2.4 Interface
There is no impact on the CEE interfaces. To configure the feature, config.yaml must be used and modified accordingly.
4.30.2.4.1 API
There are no API interface changes in relation to this feature.
4.30.2.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.30.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.31 Software RAID
4.31.1 Description
This feature enables software RAID configuration in CEE. The array is created utilizing local storage disk space, requiring two physical disks with a minimum space of 15 GiB on each. After the feature is enabled on the corresponding compute blade, the local storage disk capacity is reduced by half according to RAID 1 logic.
- Note:
- Only RAID 1 configuration is supported.
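The usable-capacity arithmetic for a two-disk RAID 1 array can be sketched as follows; the 15 GiB minimum is from this document, while the helper function itself is illustrative:

```python
MIN_DISK_GIB = 15  # minimum space required on each of the two disks

def raid1_usable_gib(disk_a_gib, disk_b_gib):
    """Usable capacity of a two-disk RAID 1 array in GiB.

    RAID 1 mirrors all data, so the usable capacity equals the smaller
    disk, i.e. half of the raw capacity when both disks are equal.
    """
    if min(disk_a_gib, disk_b_gib) < MIN_DISK_GIB:
        raise ValueError("each disk needs at least 15 GiB of space")
    return min(disk_a_gib, disk_b_gib)
```

For example, two equal 500 GiB disks yield 500 GiB usable out of 1000 GiB raw, matching the halving described above.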
4.31.2 Impact
This is a new feature in CEE 6.
4.31.2.1 Capacity and Performance
The capacity of the local storage disks is reduced by half on the corresponding compute blades.
4.31.2.2 Hardware
To enable the feature, a compute blade with two physical disks, each with a minimum of 15 GiB free space, is required.
4.31.2.3 Implementation
This is an optional feature. RAID 1 can be configured during CEE installation. For more details and implementation, refer to the Configuration File Guide.
4.31.2.4 Interface
There is no impact on the CEE interfaces. To configure the feature, config.yaml must be used and modified accordingly.
4.31.2.4.1 API
There are no API interface changes in relation to this feature.
4.31.2.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.31.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.32 Glance Image Transfer Traffic on Storage Switching Domain
4.32.1 Description
A new configuration option allows the Glance API and Swift internal and administration endpoints to be placed on a dedicated network on top of the NIC interfaces dedicated to storage. For configuration and dimensioning details, refer to the Configuration File Guide and the Multi-Server System Dimensioning Guide, CEE 6.
4.32.2 Impact
This is a new feature in CEE 6.
4.32.2.1 Capacity and Performance
Instability of OpenStack agents can be avoided when booting VMs from images in large configurations. Image upload and download operations are faster compared to when the Glance VLAN is configured on the control switching domain (1 GE).
4.32.2.2 Hardware
There is no hardware impact for this feature.
4.32.2.3 Implementation
This is an optional feature. For implementation details, refer to the Configuration File Guide.
4.32.2.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.32.2.4.1 API
There are no API interface changes in relation to this feature.
4.32.2.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.32.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.33 Region Scale-in
4.33.1 Description
The region scale-in feature enables the removal of one or a set of compute host nodes from a CEE region. The servers are decommissioned from the CEE cluster and the corresponding Nova compute and Neutron agent resources are removed.
- Note:
- This feature is applicable for compute hosts not hosting vCICs or vFuel.
4.33.2 Impact
This is a new feature in CEE 6.
4.33.2.1 Capacity and Performance
There is no impact on capacity and performance.
4.33.2.2 Hardware
The CEE Region Scale-in feature is not applicable for CEE deployments on single server hardware platforms.
4.33.2.3 Implementation
For details on implementation and configuration, refer to the Region Scale-in document.
4.33.2.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.33.2.4.1 API
There are no API interface changes in relation to this feature.
4.33.2.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.33.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.34 Cinder Backup in CEE
The Cinder backup feature provides backup of volumes managed by Cinder to a backup storage provider, using a driver architecture. In CEE 6, the cinder-backup service can use Swift or an external NFS storage server as its storage back end.
4.34.1 Impact
This is a new feature in CEE 6.
4.34.1.1 Capacity and Performance
Connectivity between the service and the external NFS storage is established through the public VLAN network cee_om_sp, configured on the traffic switching domain.
The traffic network bandwidth is shared with the tenant networks. The effects of data transfer related to Cinder backup on tenant network bandwidth must be considered.
4.34.1.2 Hardware
There is no hardware impact for this feature.
4.34.1.3 Implementation
The feature is enabled and configured by the ericsson_openstack_config Fuel plugin in config.yaml.
For more information, refer to the Configuration File Guide, Fuel Plugin Configuration Guide and the Runtime Configuration Guide documents.
4.34.1.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.34.1.4.1 API
For information on the API calls of the Cinder backup feature, refer to the relevant section of OpenStack API Complete Reference.
4.34.1.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.34.1.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.35 License Management
This feature enables license management in CEE using Ericsson NeLS.
4.35.1 Impact
This is a new feature in CEE 6.
4.35.1.1 Capacity and Performance
Connectivity between the service and the NeLS server is established through the public VLAN network cee_om_sp, configured on the traffic switching domain. The connection is permanent and is implemented using TLS.
4.35.1.2 Hardware
There is no hardware impact for this feature.
4.35.1.3 Implementation
The service is deployed as a mandatory Fuel plugin in config.yaml. For configuration details, refer to the Configuration File Guide document.
4.35.1.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.35.1.4.1 API
There are no API interface changes in relation to this feature.
4.35.1.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.35.1.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.36 Automated Health Check
In the current release of CEE 6, manual health check is replaced by a script-based on-demand procedure.
4.36.1 Impact
This is an enhanced feature in CEE 6.
4.36.1.1 Capacity and Performance
Health check execution time has been reduced.
For more information, refer to the Health Check Procedure document.
4.36.1.2 Hardware
There is no hardware impact for this feature.
4.36.1.3 Implementation
Automation is implemented using the script healthcheck.py, available in vFuel.
For execution details, available parameters and for further information, refer to the Health Check Procedure document.
4.36.1.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.36.1.4.1 API
There are no API interface changes in relation to this feature.
4.36.1.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.36.1.5 Obsolete Features
There are no obsolete features in relation to this feature.
4.37 Automated Data Collection
In the current release of CEE 6, manual data collection is replaced by a script-based on-demand procedure.
4.37.1 Impact
This is an enhanced feature in CEE 6.
4.37.1.1 Capacity and Performance
Data collection execution time has been reduced.
For more information, refer to the Data Collection Guideline document.
4.37.1.2 Hardware
There is no hardware impact for this feature.
4.37.1.3 Implementation
Automation is implemented using the script ACDC.py, available in vFuel.
For execution details, available parameters, and further information, refer to the Data Collection Guideline document.
4.37.1.4 Interface
The impact of this feature on interfaces is described in the following subsection.
4.37.1.4.1 API
There are no API interface changes in relation to this feature.
4.37.1.4.2 Man-Machine Interface (CLI)
There are no Man-Machine Interface changes in relation to this feature.
4.37.1.5 Obsolete Features
There are no obsolete features in relation to this feature.
5 Ericsson Atlas
5.1 OpenStack Newton
5.1.1 Description
Ericsson Atlas includes the OpenStack Newton release. The OpenStack Heat, Mistral and Horizon services, which are part of Atlas, are uplifted to Newton as a consequence.
5.1.2 Impact
The impact of this feature is described in the following subsections.
5.1.2.1 Capacity and Performance
The performance of Heat operations is improved due to improvements in the Heat-engine in OpenStack Newton.
5.1.2.2 Hardware
There is no hardware impact.
5.1.2.3 Implementation
For information on the implementation of Atlas, refer to the Atlas SW Installation document.
5.1.2.4 Interface
5.1.2.4.1 API
Refer to the OpenStack Heat Interwork Description for details on the Heat template.
5.1.2.4.2 Man-Machine Interface (CLI)
The generic OpenStack CLI client is present in Atlas. In this release, the OpenStack CLI client is only to be used for functionality provided by Keystone.
The Graphical User Interface (GUI) of Atlas is based on the OpenStack dashboard (Horizon). For information on the Atlas GUI, refer to the following documents: Atlas Dashboard End User Guide and Atlas Dashboard Administrator User Guide.
5.1.2.5 Obsolete Features
There are no obsolete features related to this feature.
5.2 OVFT
5.2.1 Description
Ericsson Atlas includes the Ericsson OVF Translation (OVFT) component, which allows use of the OVF (OVF 2.1) format for VNF deployment through translation to Heat Orchestration Template (HOT) format.
5.2.2 Impact
In Atlas for CEE 6, the CM-HA metadata has been changed. For more details, refer to the Atlas OVF to HOT Mapping and Atlas OVFT API documents.
5.2.2.1 Capacity and Performance
There is no capacity or performance impact for this feature in this release.
5.2.2.2 Hardware
There is no hardware impact for this feature.
5.2.2.3 Implementation
OVFT is included in Ericsson Atlas.
5.2.2.4 Interface
5.2.2.4.1 API
For information on the Open Virtualization Format Translator (OVFT) API, refer to the Atlas OVF to HOT Mapping and Atlas OVFT API documents.
5.2.2.4.2 Man-Machine Interface (CLI)
Refer to the Atlas CLI End User Guide for information on the CLI interface of OVFT.
The Graphical User Interface (GUI) of Atlas is based on the OpenStack dashboard (Horizon). For information on the Atlas GUI, refer to the following documents: Atlas Dashboard End User Guide and Atlas Dashboard Administrator User Guide.
5.2.2.5 Obsolete Features
There are no obsolete features related to this feature.
5.3 Application Template Export
5.3.1 Description
The Application Template Export enables the user to export application templates in OVF or HOT format from Atlas.
5.3.2 Impact
There is no impact for this feature in this release.
5.3.2.1 Capacity and Performance
There is no impact on capacity and performance related to this feature.
5.3.2.2 Hardware
There is no hardware impact for this feature.
5.3.2.3 Implementation
There are no specific implementation procedures for this feature.
5.3.2.4 Interface
The impact of this feature on interfaces is described in the following subsections.
5.3.2.4.1 API
The Orchestration API is supported according to the OpenStack Newton release of the Heat component. Refer to the OpenStack Orchestration API in CEE and the Orchestration API section in the OpenStack API Complete Reference.
This functionality is supported with the OVFT API; refer to the Atlas OVFT API document.
5.3.2.4.2 Man-Machine Interface (CLI)
There are new commands to support the Application Template Export; refer to the Atlas CLI End User Guide.
The Graphical User Interface (GUI) of Atlas is based on the OpenStack dashboard (Horizon). For information on the Atlas GUI, refer to the following documents: Atlas Dashboard End User Guide and Atlas Dashboard Administrator User Guide.
5.3.2.5 Obsolete Features
There are no obsolete features related to this feature.
5.4 Multi-Region Dashboard and Placement Zones
5.4.1 Description
Atlas enables support of multiple CEE regions: it can manage the configuration and provide a common dashboard for a group of sites or regions.
5.4.2 Impact
In Atlas for CEE 6, there is a change related to the filters used in the compute environment: server groups filters are used instead of the same host / different host filters.
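As a sketch of the new placement mechanism (resource, flavor, and image names are illustrative), a HOT template can place servers using a server group instead of the same host / different host filters:

```yaml
heat_template_version: 2016-10-14

resources:
  placement_group:
    type: OS::Nova::ServerGroup
    properties:
      policies: [anti-affinity]   # or affinity, replacing different host / same host

  server_1:
    type: OS::Nova::Server
    properties:
      flavor: m1.small            # illustrative
      image: app-image            # illustrative
      scheduler_hints:
        group: {get_resource: placement_group}
```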
5.4.2.1 Capacity and Performance
There is no impact on capacity and performance related to this feature.
5.4.2.2 Hardware
There is no hardware impact for this feature.
5.4.2.3 Implementation
Refer to the Atlas Multi-Region Configuration User Guide for details on how to implement this feature.
5.4.2.4 Interface
There is no impact on the interface due to this feature.
5.4.2.4.1 API
Not applicable.
5.4.2.4.2 Man-Machine Interface (CLI)
Not applicable.
5.4.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.5 On-Demand Application Scaling
5.5.1 Description
In addition to the Heat-based Autoscaling, it is possible to perform scale in and scale out on application demand.
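A minimal HOT sketch of a scaling group with a policy that can be triggered on application demand (flavor and image names are illustrative):

```yaml
heat_template_version: 2016-10-14

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          flavor: m1.small     # illustrative
          image: app-image     # illustrative

  scale_out:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      scaling_adjustment: 1

outputs:
  scale_out_url:
    value: {get_attr: [scale_out, alarm_url]}
```

The policy can then be signaled on demand, for example with `openstack stack resource signal <stack> scale_out`, or by calling the alarm_url from the application itself.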
5.5.2 Impact
There is no impact in this release for this feature.
5.5.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.5.2.2 Hardware
There is no hardware impact due to this feature.
5.5.2.3 Implementation
There are no specific steps to implement this feature.
5.5.2.4 Interface
5.5.2.4.1 API
Refer to the Atlas OVFT API document for information on the API of this feature.
5.5.2.4.2 Man-Machine Interface (CLI)
Refer to the Atlas CLI End User Guide for information on how to use this feature through the CLI.
The Graphical User Interface (GUI) of Atlas is based on the OpenStack dashboard (Horizon). For information on the Atlas GUI, refer to the following documents: Atlas Dashboard End User Guide and Atlas Dashboard Administrator User Guide.
5.5.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.6 TOSCA Support
5.6.1 Description
The Topology and Orchestration Specification for Cloud Applications (TOSCA) provides an interoperable description of services and applications hosted on the cloud and elsewhere, including their components, relationships, dependencies, requirements, and capabilities. The support of TOSCA format complements the support of OVF and HOT formats in Atlas.
5.6.2 Impact
This feature is impacted in CEE 6.5: the Cloud Service Archive (CSAR) package format is also supported in Atlas. The CSAR is a package defined by OASIS TOSCA. It is a compressed file that includes a TOSCA template of a Network Service, and all the scripts or files that a VNF needs throughout its lifecycle, from creation to termination.
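As defined by OASIS TOSCA, a CSAR is a ZIP archive containing a TOSCA-Metadata/TOSCA.meta manifest that points at the entry service template. The following Python sketch builds a minimal CSAR (file names and contents are illustrative):

```python
import zipfile

def build_csar(path, template_name, template_body):
    # Build a minimal CSAR archive: a ZIP file containing a TOSCA service
    # template plus the TOSCA-Metadata/TOSCA.meta manifest that points at it.
    meta = (
        "TOSCA-Meta-File-Version: 1.0\n"
        "CSAR-Version: 1.1\n"
        "Created-By: example\n"                    # illustrative creator
        f"Entry-Definitions: {template_name}\n"    # entry service template
    )
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("TOSCA-Metadata/TOSCA.meta", meta)
        z.writestr(template_name, template_body)

build_csar("demo.csar", "service_template.yaml",
           "tosca_definitions_version: tosca_simple_yaml_1_0\n")
```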
5.6.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.6.2.2 Hardware
There is no hardware impact due to this feature.
5.6.2.3 Implementation
There are no specific steps to implement this feature.
5.6.2.4 Interface
Information on interface impact is provided below.
5.6.2.4.1 API
Refer to the Atlas OVFT API document for information on the API of this feature.
5.6.2.4.2 Man-Machine Interface (CLI)
Refer to the Atlas CLI End User Guide for information on how to use this feature through the CLI.
The Graphical User Interface (GUI) of Atlas is based on the OpenStack dashboard (Horizon). For information on the Atlas GUI, refer to the following documents: Atlas Dashboard End User Guide and Atlas Dashboard Administrator User Guide.
5.6.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.7 Mistral Support Deployment
5.7.1 Description
Mistral is a workflow service. Most business processes consist of multiple distinct, interconnected steps that need to be executed in a particular order in a distributed environment. Such a process can be described as a set of tasks and task relations, and the description can be uploaded to Mistral, which then takes care of state management, correct execution order, parallelism, synchronization, and high availability. Mistral also provides flexible task scheduling, allowing the user to run a process according to a specified schedule (for example, every Sunday at 4:00 PM) instead of running it immediately. Such a set of tasks, together with the relations between them, is called a workflow.
In CEE 6 releases, the vanilla Mistral implementation of the Newton OpenStack release is used. For more information, refer to the Mistral OpenStack Documentation, Reference [4].
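A minimal example of such a workflow in the Mistral v2 DSL (workflow and task names are illustrative; the std.echo action stands in for real actions):

```yaml
version: '2.0'

backup_then_notify:
  description: Two ordered tasks; Mistral manages state and execution order.
  type: direct
  tasks:
    run_backup:
      action: std.echo output="backup done"
      on-success:
        - notify
    notify:
      action: std.echo output="operator notified"
```

Once uploaded, the workflow can be executed immediately or attached to a cron trigger for scheduled runs.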
5.7.2 Impact
This is a new feature.
5.7.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.7.2.2 Hardware
There is no hardware impact due to this feature.
5.7.2.3 Implementation
There are no specific steps to implement this feature.
5.7.2.4 Interface
Information on interface impact is provided below.
5.7.2.4.1 API
Refer to the Atlas OVFT API for information on the API of this feature.
5.7.2.4.2 Man-Machine Interface (CLI)
Refer to the OpenStack Mistral CLI for information on how to use this feature through the CLI.
Refer to the Atlas Dashboard End User Guide for information on how to use this feature through the GUI.
5.7.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.8 Deployment Wizard
5.8.1 Description
The Deployment Wizard allows the user to update the following options for server resources in the HOT template stored in the Catalog before deploying the stack:
- User data
- Metadata
- File injection
- Availability zone
- extra_specs for flavor or the use of existing flavor
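The options above map to standard properties of the OS::Nova::Server resource in a HOT template, as in the following sketch (resource, image, and file names are illustrative):

```yaml
heat_template_version: 2016-10-14

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      flavor: m1.small                  # existing flavor, or one defined with extra_specs
      image: app-image                  # illustrative
      availability_zone: nova
      user_data: |
        #!/bin/sh
        echo "first-boot configuration"
      metadata:
        role: app
      personality:                      # file injection
        /etc/app/app.conf: "mode=production\n"
```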
5.8.2 Impact
This is a new feature.
5.8.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.8.2.2 Hardware
There is no hardware impact due to this feature.
5.8.2.3 Implementation
There are no specific steps to implement this feature.
5.8.2.4 Interface
Information on interface impact is provided below.
5.8.2.4.1 API
Refer to the Atlas OVFT API document for information on the API of this feature.
5.8.2.4.2 Man-Machine Interface (CLI)
Refer to the Atlas CLI End User Guide for information on how to use this feature through the CLI.
5.8.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.9 Enhanced Stack Panels
5.9.1 Description
The enhanced stack panels feature provides the possibility to display the overview, topology, resources, and events pages of a deployed stack, for a better understanding of its resource usage.
5.9.2 Impact
This is a new feature.
5.9.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.9.2.2 Hardware
There is no hardware impact due to this feature.
5.9.2.3 Implementation
There are no specific steps to implement this feature.
5.9.2.4 Interface
Information on interface impact is provided below.
5.9.2.4.1 API
Not applicable.
5.9.2.4.2 Man-Machine Interface (CLI)
Refer to the Atlas Dashboard End User Guide for information on how to use this feature through the GUI.
5.9.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.10 Atlas Backup
5.10.1 Description
The Atlas backup feature provides the possibility to create an encrypted backup of the Atlas configuration for CEE.
5.10.2 Impact
This feature is impacted in the CEE 6 release: for security reasons, the Atlas backup must be encrypted with a password.
The Atlas Backup feature provides the following:
- Automatically uploads the backup to Swift, if available
- Supports deletion of existing backups
- Logs operations when an Atlas backup or restore is performed
- Allows restore from an Atlas backup without reboot
The Atlas backup contains key configuration files. The backup is needed if the Atlas configuration must be restored to a previous state.
5.10.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.10.2.2 Hardware
There is no hardware impact due to this feature.
5.10.2.3 Implementation
There are no specific steps to implement this feature.
5.10.2.4 Interface
Information on interface impact is provided below.
5.10.2.4.1 API
Not applicable.
5.10.2.4.2 Man-Machine Interface (CLI)
Not applicable.
5.10.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.11 Atlas UI Plugins
5.11.1 Description
The Atlas UI plugins feature enables plugins dynamically, based on the services enabled in the underlying OpenStack. Atlas UI features are added as plugins, without forking the OpenStack Horizon source code.
5.11.2 Impact
This is a new feature.
5.11.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.11.2.2 Hardware
There is no hardware impact due to this feature.
5.11.2.3 Implementation
There are no specific steps to implement this feature.
5.11.2.4 Interface
Information on interface impact is provided below.
5.11.2.4.1 API
Not applicable.
5.11.2.4.2 Man-Machine Interface (CLI)
For information on using the feature, refer to Atlas Dashboard End User Guide and Atlas Dashboard Administrator User Guide.
5.11.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
5.12 Support for Multiple HOT Templates
5.12.1 Description
Atlas supports applications that consist of nested templates, for example, multiple HOT templates that refer to each other. The nested template structure can be viewed, and the templates can be updated, from the Atlas UI.
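A sketch of how nested HOT templates refer to each other (file and parameter names are illustrative); the parameters of the child template become the properties of the corresponding resource in the parent:

```yaml
# parent.yaml
heat_template_version: 2016-10-14

resources:
  app_tier:
    type: child.yaml          # nested template, resolved relative to parent.yaml
    properties:
      instance_count: 2

# child.yaml
heat_template_version: 2016-10-14

parameters:
  instance_count:
    type: number
```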
5.12.2 Impact
This is a new feature.
5.12.2.1 Capacity and Performance
There is no impact on capacity and performance due to this feature.
5.12.2.2 Hardware
There is no hardware impact due to this feature.
5.12.2.3 Implementation
There are no specific steps to implement this feature.
5.12.2.4 Interface
Information on interface impact is provided below.
5.12.2.4.1 API
For information on API changes, refer to the Atlas OVFT API document.
5.12.2.4.2 Man-Machine Interface (CLI)
For information on using the feature, refer to Atlas Dashboard End User Guide.
5.12.2.5 Obsolete Features
There are no obsolete features in relation to this feature.
Reference List
[1] Troubleshooting Guideline, 3/1553-AZE 102 01
[2] BGP L3VPN Service, 5/155 34-HSD 101 048/2
[3] CSC Application Command List, 2/190 77-AXD 101 08/6-V1
[4] Mistral OpenStack Documentation, http://docs.openstack.org/developer/mistral/
[5] Using the CLI, 1/190 80-AXD 101 08/6-V1
[6] Intel® Ethernet Controller 710 Series Feature Support Matrix, https://www.intel.com/content/dam/www/public/us/en/documents/release-notes/xl710-ethernet-controller-feature-matrix.pdf
