Configuration File Guide
Cloud Execution Environment

Contents

1   Introduction
1.1   Scope
1.2   Target Groups
1.3   Prerequisites
1.4   Generated and Prefilled Passwords
1.5   Note on YAML Syntax

2   Basic Parameter Settings
2.1   Region Name
2.2   Neutron Configuration
2.3   Hardware Switches
2.4   Cloud Management
2.5   Server Configuration
2.6   Networks
2.7   Configure NTP
2.8   Legal Text Presented at Logon
2.9   Storage
2.10   Local Disks
2.11   IdAM
2.12   LDAP Users
2.13   VNX Users
2.14   Glance Image Service
2.15   Swift Configuration Options
2.16   SDN Standard Integration on HDS
2.17   CM-HA
2.18   Fuel Plugins

3   Post-Configuration Activities

4   Advanced Parameter Settings
4.1   Advanced CPU Allocation
4.2   NIC Information
4.3   SR-IOV
4.4   Bandwidth-Based Scheduling
4.5   Neutron Configuration Options
4.6   Nova Configuration Options
4.7   Hardware Switch Configuration Options
4.8   Multiple Border Gateways
4.9   Change of Border Gateway Settings
4.10   Change of Linux I/O Scheduler
4.11   Time Zone
4.12   Secure NBI API Endpoints
4.13   Fuel Administration Network
4.14   Location of Logs
4.15   Link Monitoring for CEE on BSP
4.16   Reduced Footprint Monitoring Data Collection
4.17   Zabbix CEE User
4.18   Initial Memory Amount of vCIC

Reference List

1   Introduction

This document describes how to prepare the site-specific configuration used when installing Cloud Execution Environment (CEE).

Before installation, the CEE configuration files must be edited. The configuration file templates are included in the installation tarball. The files provide configuration for the following areas:

1.1   Scope

Configurations are described in both the Configuration File Guide and the System Dimensioning Guides. The configuration has been verified against the Cloud Execution Environment reference configurations. See the following documents:

The guide is focused on mandatory parameters that must be set according to the certified configuration.

There are parameters that can be changed for other configurations. They are either described in Section 4, or outside the scope of this document.

Storage is measured in gibibyte (GiB), tebibyte (TiB), and mebibyte (MiB) in this document. 1 GiB is equal to 2³⁰ bytes.

Note:  
Ericsson Hyperscale Datacenter System (HDS), Hewlett Packard (HP), Blade Server Platform (BSP), Dell multi-servers and Dell single server are the supported hardware configurations in CEE R6.3.

This guide describes the initial configurations needed before CEE deployment. For post-deployment configuration options, see the Runtime Configuration Guide.

1.1.1   Hardware Terminology

In the configuration file templates for all hardware configurations, <blade> and <shelf> are used. <blade> and <shelf> map to the following hardware terms:

Table 1    Hardware Terminology

Ericsson Cloud Term | Definition | Dell Rack Servers Term | BSP (EBS) Term | HP BladeSystem c7000 Term | HDS Term
CEE Region | Several blade servers or rack-mounted servers | - | - | - | A vPOD with a set of ComputerSystems
Shelf | A collection of servers that shares a control switch | Servers connecting to the same X440 Extreme switches | Subrack | Enclosure | -
Server | A basic compute engine to host cloud applications in CEE Region | Server | Blade | Blade | ComputerSystem
Blade (only in context of config.yaml) | Logical/physical location of the server | Logical definition that typically corresponds to the iDRAC IP addressing plan | Defined by slot of Generic Ericsson Processor (GEP) blade in subrack | Bay | Logical definition that corresponds to a ComputerSystem identified with a UUID

1.2   Target Groups

This document is aimed at skilled professionals from the following groups:

1.3   Prerequisites

This section describes the site-specific configuration parameters that are to be collected and defined before the installation. The overall workflow is described in CEE Installation. Ensure that the following is available:

For more information on the required tarball contents, see Section 1.3.1.

Have a local copy of config.yaml, <switch_model>_switch.yaml, and neutron_ericsson_<neutrontemplate>.yaml available. In a Single Server deployment, the neutron_ericsson_user_spec.yaml is used. In HDS deployment, neutron_ericsson_sdn_standard.yaml is used. Compare the examples in the sections to the relevant part of the files. A file editor that can render indentations well (for example, UltraEdit) is recommended.

Note:  
The EMC storage username and password, and <switch_model>_switch.yaml, are not applicable to HDS or Single Server deployments.

Note:  
Use the config.yaml template that matches the hardware, that is, HP, Dell multi-server, Dell Single Server, HDS, or BSP.

1.3.1   CEE Software Release Tarball

The CEE software release tarball is required for CEE installation and can be downloaded from the SW Gateway. The tarball consists of the following files:

Below is an example of the release tarball contents:

Note:  
The contents of the release tarball may vary according to the release.

cee-CXC1737883_4-<release>.iso
CEE_RELEASE/cabling_scheme/4_x770_hp.yaml
CEE_RELEASE/cabling_scheme/4_x670v_dell.yaml
CEE_RELEASE/cabling_scheme/4_x670v_hp.yaml
CEE_RELEASE/cabling_scheme/2_x670v_hp.yaml
CEE_RELEASE/cabling_scheme/2_x670v_dell.yaml
CEE_RELEASE/switch_config/4_x670v_switch.yaml
CEE_RELEASE/switch_config/4_x770_switch.yaml
CEE_RELEASE/switch_config/cmx_switch.yaml
CEE_RELEASE/switch_config/2_x670v_switch.yaml
CEE_RELEASE/config.yaml.dell
CEE_RELEASE/config.yaml.bsp
CEE_RELEASE/neutron/neutron_ericsson_basic.yaml
CEE_RELEASE/neutron/neutron_ericsson_user_spec.yaml
CEE_RELEASE/neutron/neutron_ericsson_extreme.yaml
CEE_RELEASE/neutron/neutron_ericsson_cmx.yaml
CEE_RELEASE/neutron/neutron_ericsson_sdn_standard.yaml
CEE_RELEASE/neutron/neutron_ericsson_sdn_tight.yaml
CEE_RELEASE/config.yaml.dell-single_server
CEE_RELEASE/config.yaml.hds
CEE_RELEASE/config.yaml.hp
CEE_RELEASE/scripts/parseyaml.rb
CEE_RELEASE/scripts/migrate_fuel.sh
CEE_RELEASE/scripts/install_vfuel.sh
CEE_RELEASE/host_net_templates/host_nw_hp.yaml
CEE_RELEASE/host_net_templates/host_nw_dell-single_server.yaml
CEE_RELEASE/host_net_templates/host_nw_dell.yaml
CEE_RELEASE/host_net_templates/host_nw_bsp.yaml
CEE_RELEASE/host_net_templates/host_nw_hds.yaml

1.4   Generated and Prefilled Passwords

Some generated passwords are stored in a separate file, see /etc/openstack_deploy/user_secrets.yml on the Fuel node. The user_secrets.yml file contains passwords queried from Fuel, for example rabbitmq_password, galera_root_password.

As shown in Example 1, other passwords used by Ericsson components must be prefilled in /etc/openstack_deploy/user_secrets.yml on the Fuel node, or generated by /usr/share/ericsson-orchestration/scripts/pw-token-gen.py:

Example 1   user_secrets.yml

rabbitmq_password: "{{ fuel_generated.rabbit.password }}" 
galera_root_password: "{{ fuel_generated.mysql.root_password }}" 
keystone_admin_password: "{{ fuel_settings.editable.access.password.value }}" 
keystone_admin_token: "{{ fuel_generated.keystone.admin_token }}" 
neutron_galera_password: "{{ fuel_generated.quantum_settings.database.passwd }}" 
neutron_service_password: "{{ fuel_generated.quantum_settings.keystone.admin_password }}" 
nova_service_password: '{{ fuel_generated.nova.user_password }}' 
nova_metadata_proxy_secret: '{{ fuel_generated.quantum_settings.metadata.metadata_proxy_shared_secret }}' 
cinder_password: "{{ fuel_generated.cinder.user_password }}" 

cmha_galera_password: 
cmha_service_password: 
watchmen_galera_password: 
watchmen_service_password: 
idam_ldap_root_password: 
idam_ldap_anonymous_bind_password: 
idam_ldap_manager_bind_password: 
idam_ldap_sync_bind_password: 
idam_user_vnxlf_vnx_key: 
idam_user_vnxlf_galera_password:
zabbix_cee_user_password:

1.5   Note on YAML Syntax

The CEE configuration file is a YAML file. The YAML standard defines anchors and aliases, see section Anchors and Aliases in Reference [4]. An anchor is used to attach a label to a section of the data structure, so that the section can be referred to by an alias. An anchor consists of a label prefixed with an & character: &label. An alias consists of a label prefixed with a * character: *label.

Example 2   YAML Anchors and Aliases

 ...
  presets:
    - &label1
      key1: value11
      key2: value12
    - &label2
      key1: value21
      key2: value22
 ...
  actual_use: *label2
 ...

An alias can be used to reference the data structure defined by the anchor.

The CEE configuration templates define the recommended settings for several parts of the configuration. Each of these settings is marked with an anchor that can be used to reference them with an alias at the place where they are used. For example, settings such as the Network Interface Controller (NIC) assignment are expected to be identical for each server. With the use of aliases, the same NIC assignment is referenced by each server definition.
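
As a minimal sketch, assuming the anchor name HP_GEN8_nic_assignment used in the later examples of this guide, a NIC assignment defined once under nic_assignments can be reused by several blade definitions through aliases:

ericsson:
  ...
  nic_assignments:
    - &HP_GEN8_nic_assignment
      ...
  shelf:
    -
      ...
      blade:
        -
          id: 2
          nic_assignment: *HP_GEN8_nic_assignment
        -
          id: 3
          nic_assignment: *HP_GEN8_nic_assignment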

2   Basic Parameter Settings

This chapter describes how to update config.yaml with site-specific parameters. The updated configuration is used as input to the automated installation. The placeholder <variables> in the config.yaml must be replaced with valid values based on the information in this section.

Note:  
The indentation in the template files must be kept. Use the SPACE key (blanks) to make the indentation. TAB must not be used.

2.1   Region Name

The region_name parameter refers to the CEE region name. The parameter is included in config.yaml as follows:

Example 3   CEE Region Name

ericsson:
  ...
  region_name: <CEE Region Name>
  ...

<CEE Region Name> must be replaced by an OpenStack region name, with a maximum length of 14 characters. The CEE region name must not contain the underscore "_" character.

Note:  
The CEE region name specified in the configuration file is only applied to the components deployed by Ansible. Most of the OpenStack endpoints are deployed by Fuel, where the region name cannot be specified, so those endpoints are configured with the default region name RegionOne. This difference does not affect the operation of CEE. If the region name must be changed manually after installation, contact the next level of maintenance support.

The CEE region name and an underscore (_) are prepended to the system names of the hardware switches, if the switches are configured. For hardware switch configuration, see Section 2.3, and Section 4.7.

Note:  
Each switch name is a maximum of 17 characters long, and the resulting total string length must not exceed 32 characters.
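
For example, assuming an illustrative region name cloud1 and a switch system name tor_a1, the resulting switch name is cloud1_tor_a1, which is 13 characters and therefore within the 32-character limit.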

2.2   Neutron Configuration

The neutron section in the configuration file templates enables configuration of the OpenStack networking module (Neutron) in CEE. The section is included in config.yaml as follows:

Example 4   Neutron Configuration

ericsson:
  ...
  neutron:
    mgmt_vip: 192.168.2.15
    mgmt_subnetmask: 25
    l2_vlan_start: <L2.VLAN.START>
    l2_vlan_end: <L2.VLAN.END>
    neutron_config_yaml_file: <neutron_CONFIG.yaml>
  ...
Note:  
For SDN configuration, the parameters l2_vlan_start and l2_vlan_end are not applicable but currently need to be configured with the value 0 (zero).

Change the configuration template values to reflect the site specific values:

mgmt_vip

mgmt_vip is the common IP address for the Neutron server process. This value must be a valid IP address in the static subrange of the cee_ctrl_sp network.

mgmt_subnetmask

mgmt_subnetmask must be consistent with the subnet mask size of the cee_ctrl_sp network.

l2_vlan_start and l2_vlan_end

l2_vlan_start is the first element of the range of tenant VLAN IDs used for Neutron network segments.

l2_vlan_end is the last element of the range of tenant VLAN IDs used for Neutron network segments.

Note:  
This VLAN range is only for CEE tenants; it cannot include any of the CEE internal VLANs.

neutron_config_yaml_file

neutron_config_yaml_file is the name of the file containing the Neutron configuration parameters. The following templates can be used for Neutron:

The following example is a Neutron BSP configuration:

Example 5   Neutron Configuration for BSP

ericsson:
  ...
  neutron:
    mgmt_vip: 192.168.2.15
    mgmt_subnetmask: 25
    l2_vlan_start: 130
    l2_vlan_end: 3999
    neutron_config_yaml_file: neutron_ericsson_cmx.yaml
  ...

For more information on the Neutron configuration file configuration, see Section 4.5.

2.3   Hardware Switches

The hw_switches section describes the configuration to be deployed to the hardware switches during CEE region installation. The section can contain the following parameters:

Example 6   Hardware Switch Configuration

ericsson:
  ...
  hw_switches:
    initial_setup: <INITIAL_SETUP>
    switching_scheme_yaml_file: <SWITCH.CONFIGURATION_FILE.yaml>
    cabling_scheme_yaml_file: <CABLING.SCHEME_FILE.yaml>

initial_setup

initial_setup specifies the initial setup of the hardware switches. The following values are valid:

switching_scheme_yaml_file

Note:  
This keyword is only applicable if initial_setup is cmx or extreme.

The value of <SWITCH.CONFIGURATION_FILE.yaml> specifies the switch setup used:

For information on how to configure the selected <switch_model>_switch.yaml file, see Section 2.3.1 (Extreme) or Section 2.3.2 (CMX).

cabling_scheme_yaml_file

Note:  
This keyword is only applicable if initial_setup is extreme.

This parameter specifies the cabling schema used. The following values are valid:

For advanced hardware switch configuration options, see Section 4.7.

2.3.1   Configuring Extreme Switches

To configure the Extreme switches in multi-server deployment, the following networks need to be configured in the <switch_model>_switch.yaml file with site-specific values for both traffic switch A and B:

For more information on these networks, see Section 2.6.

The structure of the configuration options is as follows:

Example 7   Update Switch IP Addresses

Switch A

  switching:
    -
      ...
      mgmt_vrrp_config:
        -
          vip: 10.0.3.1/25
          is_master: true
          vlan: subrack_ctrl_sp
        -
          vip: VIP.OF.THE.VRRP/PREFIXSIZE
          is_master: true
          vlan: subrack_om_sp
      ...    
      mgmt_config:
        vlans:
          -
            name: cee_ctrl_sp
            tagged: true
          -
            name: subrack_ctrl_sp
            ip: 10.0.3.2/25
            tagged: true
      bgw_config:
        -
          id: 1
          vlans:
            -
              name: cee_om_sp
              tagged: true
              ip: <IP.OF.SWITCH-A/PREFIXSIZE>
            -
              name: subrack_om_sp
              tagged: true
              ip: IP.OF.THE.SWITCH/PREFIXSIZE
      ...


Switch B

  switching:
    -
      ...
      mgmt_vrrp_config:
        -
          vip: 10.0.3.1/25
          is_master: false
          vlan: subrack_ctrl_sp
        -
          vip: VIP.OF.THE.VRRP/PREFIXSIZE
          is_master: false
          vlan: subrack_om_sp
      ...
      mgmt_config:
        vlans:
          -
            name: cee_ctrl_sp
            tagged: true
          -
            name: subrack_ctrl_sp
            ip: 10.0.3.3/25
            tagged: true
      bgw_config:
        -
          id: 1
          vlans:
            -
              name: cee_om_sp
              tagged: true
              ip: <IP.OF.SWITCH-B/PREFIXSIZE>
            -
              name: subrack_om_sp
              tagged: true
              ip: IP.OF.THE.SWITCH/PREFIXSIZE
    ...

mgmt_vrrp_config

The following parameters are needed:

mgmt_config

The following parameters are needed:

bgw_config

The following parameters are needed:

2.3.1.1   Limitation

Dell R620 and R630 Top of Rack (ToR) traffic switch ports are not configured when a VM is booted in a Neutron network used for SR-IOV. Even if CEE Neutron is installed with managed Extreme switches, the Modular Layer 2 (ML2) mechanism driver does not cover SR-IOV based Neutron ports.

To create one or more SR-IOV networks, the following command can be used. The parameter values can be different from the ones used in this example:

neutron net-create --provider:physical_network=pool_0000_41_00_0 --provider:network_type=vlan --provider:segmentation_id=3666 sriov_net1

Manual configuration of the Extreme switch can be done by using the following commands. The value of segmentation_id must be used for the tag parameter.

create vlan <sriov_VLAN_NAME>
configure vlan <sriov_vlan_name> tag 3666
configure vlan <sriov_vlan_name> add ports <sriov_port_1> <sriov_port_2> tagged

The configuration of the ToR switch can be done when the CEE region has been installed, if the VLAN specifications are available at that time. It is suggested to use a VLAN tag for the Neutron network that is not included in the range used for the default physical network. If external connectivity must be configured as well, add the port or ports that connect to the Border Gateway (BGW) as shown below:

configure vlan <sriov_vlan_name> add ports <sriov_port_1> <sriov_port_2> <port_to_BGW> tagged

Configuration changed manually on the ToR switches can be saved on each ToR switch. However, if the consistency checker function is triggered for other reasons, it restores the previous configuration, which does not contain the SR-IOV settings.

2.3.2   Configuring CMX Switches on BSP

This section describes CEE on BSP hardware. The configuration file template contains the following section:

Example 8   Hardware Switches

ericsson:
  ...
  hw_switches:
    initial_setup: cmx
    switching_scheme_yaml_file: cmx_switch.yaml
      ...   

The switch configuration template cmx_switch.yaml contains the following section:

Example 9   Switch Configuration

switching:
-
  model: cmx
  provider_vlan_start: 50
  provider_vlan_end: 129
  provider_name_prefix: provider_
switch_config:
  restore_suffix: preCEETenant
  initial_backup: postCEETenant
  migrated_backup: postMigrateCEETenant
 

2.3.3   Unmanaged Switch

This section describes unmanaged CEE on HDS hardware or in Single Server deployment. The configuration file template contains the following section:

Example 10   Unmanaged HW Switch

ericsson:
  ...
  hw_switches:
    initial_setup: none
  ...   

2.4   Cloud Management

2.4.1   General Configuration

The Atlas northbound and southbound networks need to be configured in the cloud_mgmt section of config.yaml. Both the Atlas Northbound Interface (NBI) and the Atlas Southbound Interface (SBI) have to be configured with site-specific information. See the structure of the nbi and sbi subsections:

Example 11   Cloud Management Configuration

ericsson:
  ...
  cloud_mgmt:
    nbi:
      name: <NETWORK.NAME>
      cidr: <NETWORK.IP/PREFIXSIZE>
      start: <FIRST.IP.TO.USE>
      end: <LAST.IP.TO.USE>
      gateway: <IP.OF.THE.GW>
      ip: <NBI.IP.OF.ATLAS>
    sbi:
      name: <NETWORK.NAME>
      cidr: <NETWORK.IP/PREFIXSIZE>
      start: <FIRST.IP.TO.USE>
      end: <LAST.IP.TO.USE>
      gateway: <IP.OF.THE.GW>
      ip: <SBI.IP.OF.ATLAS>
  ....

The following parameters are to be configured with unique values for NBI and SBI, respectively:

During the Atlas installation, there are two options available on how IP addresses are assigned to Atlas:

Note:  
<FIRST.IP.TO.USE>, <LAST.IP.TO.USE>, <NBI.IP.OF.ATLAS>, and <SBI.IP.OF.ATLAS> must always be defined, regardless of how Atlas is installed.
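
For illustration only, a hedged sketch of the nbi subsection with assumed placeholder addresses (not recommendations) could look as follows:

ericsson:
  ...
  cloud_mgmt:
    nbi:
      name: atlas_nbi
      cidr: 10.10.10.0/24
      start: 10.10.10.10
      end: 10.10.10.20
      gateway: 10.10.10.1
      ip: 10.10.10.11
    ...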

2.4.2   Additional Configuration for VLAN (Non-SDN) deployments

In VLAN deployments, additional tag parameters need to be configured for both nbi and sbi.

Example 12   Cloud Management VLAN Configuration

ericsson:
  ...
  cloud_mgmt:
    nbi:
      name: <NETWORK.NAME>
      tag: <VLAN.TAG>
      ...
    sbi:
      name: <NETWORK.NAME>
      tag: <VLAN.TAG>
      ...

The tag parameter is the unique VLAN ID of the Atlas Northbound or Southbound Network.

2.4.3   Additional Configuration for SDN deployments

In SDN deployment, the following extra cloud_mgmt parameters need to be configured:

Example 13   Cloud Management SDN Configuration

  cloud_mgmt:
    network_type: <NETWORK.TYPE>
    vpn:
      name: <VPN.NAME>
      rd: <ROUTE.DISTINGUISHER>
      export_rt: <EXPORT.ROUTE.TARGET>
      import_rt: <IMPORT.ROUTE.TARGET>

network_type

The mechanism through which the Atlas Northbound and Southbound networks are implemented. Allowed values are vlan and vxlan.

vpn

A VPN has to be configured to provide connectivity for the Atlas VM. The following parameters are needed to create the VPN entity in the CSC:
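
For illustration, a hedged sketch of the vpn entry with assumed placeholder values (the route distinguisher and route targets must be replaced with site-specific values):

  cloud_mgmt:
    network_type: vxlan
    vpn:
      name: atlas_mgmt_vpn
      rd: 64512:100
      export_rt: 64512:100
      import_rt: 64512:100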

2.5   Server Configuration

2.5.1   Shelf and Blade Management

The shelf or blade management sections must be edited with site-specific information. Remove the unused shelves (including definitions for the unused shelf) from config.yaml. Some shelves are excluded from the following example for readability. The exact structure of the shelf or blade management information is hardware dependent. The hardware-specific details are shown in the sections below the example.

Note:  
The passwd and username parameters configured in this section are also used by fencing for out-of-band management access.

Example 14   Shelf Management of an HP-based CEE

ericsson:
  ...
  shelf:
    -
      id: 0
      shelf_mgmt:
        ip: 10.0.3.100
        name: subrack_ctrl_sp
        passwd: firstPassword
        username: firstUsername
      blade:	
        ...
    -
      id: 1
      shelf_mgmt:
        ip: 10.0.3.102
        name: subrack_ctrl_sp
        passwd: secondPassword
        username: secondUsername
      blade:
        ...
    -
      id: 2
      shelf_mgmt:
        ip: 10.0.3.104
        name: subrack_ctrl_sp
        passwd: thirdPassword
        username: thirdUserName
      blade:
        ...

Zero must be used as the ID of the first shelf, and the shelf ID must increase monotonically for subsequent shelves.

HP

The shelf configuration options for HP deployment are as follows:

Example 15   HP Shelf Configuration

ericsson:
  ...
  shelf:
    -
      id: 0
      shelf_mgmt:
        ip: <IP.FIRST.SHELF>
        name: subrack_ctrl_sp
        passwd: <PASSWORD.FIRST.SHELF>
        username: <USERNAME.FIRST.SHELF>
      blade:
      ...
      id: 1
      shelf_mgmt:
        ip: <IP.SECOND.SHELF>
        name: subrack_ctrl_sp
        passwd: <PASSWORD.SECOND.SHELF>
        username: <USERNAME.SECOND.SHELF>
      blade:
      ...

The following shelf manager information has to be updated for each configured shelf:

HP uses the term "enclosure" instead of "shelf". Zero must be used as the ID of the first shelf, and the shelf ID must increase monotonically in steps of one for subsequent shelves.

Note:  
Values of 01, or 001 are not acceptable as shelf IDs. The installation stops if these values are used.

The blade ID is the position of the blade. For example:

BSP

The shelf configuration options for BSP deployment are as follows:

Example 16   BSP Shelf Configuration

ericsson:
  ...
  shelf:
    -
      id: 0
      shelf_mgmt:
        ip: <IP.SHELF.MANAGER>
        name: cee_ctrl_sp
        lct_ip: 10.0.10.2
        passwd: <PASSWORD-SHELF.MANAGER>
        username: <USERNAME.SHELF.MANAGER>
      blade:
        -
          id: 1

The following shelf manager information has to be updated:

The blade ID is the position of the blade, blade ID=(slot ID+1)/2. For example:
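
Applying this formula, the GEP blade in slot 1 has blade ID (1+1)/2 = 1, the blade in slot 3 has blade ID 2, and the blade in slot 7 has blade ID 4.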

Dell

The shelf configuration options for Dell deployment are as follows:

Example 17   Dell Multi-Server Blade Configuration

ericsson:
  ...
  shelf:
    -
      id: 0
      blade:
        -
          id: 1
          blade_mgmt:
            ip: <IP.FIRST.SERVER>
            name: subrack_ctrl_sp
            passwd: <PASSWORD.FIRST.SERVER>
            username: <USERNAME.FIRST.SERVER>
          ...
        -
          id: 2
          blade_mgmt:
            ip: <IP.SECOND.SERVER>
            name: subrack_ctrl_sp
            passwd: <PASSWORD.SECOND.SERVER>
            username: <USERNAME.SECOND.SERVER>
          ...

The following blade manager information has to be updated:

The blade ID is the position of the Dell blade manager. For example:

Unmanaged Server

Note:  
HDS deployment uses unmanaged CEE.

The blade configuration options for unmanaged CEE are as follows:

Example 18   Unmanaged Server Configuration

shelf:
     -
       id: 0
       cee_managed: false
       blade:
         -
           id: 1
           hw_uuid: <SERVER.UUID>
           mac_assignment:
             control0: <LEFT.MAC.ADDRESS>
             control1: <RIGHT.MAC.ADDRESS>

Unmanaged server mode requires the servers to be discovered and preconfigured before running the installation. The cee_managed shelf configuration option needs to be set to false. The following blade configuration options have to be updated:

SDN Blade Configuration

For HDS SDN deployment, two extra parameters need to be present in the blade section to indicate the VTEP network to which the blade belongs:

Example 19   SDN Blade Configuration

  shelf:
  ...
    blade:
    ...
      vtep_net: sdn_underlay_sp_<net_id>
      vtep_ip: <IP.OF.THE.VTEP>

This information can be fetched from the vPOD definition of HDS. If the blade IP is missing, any free IP can be set for the blade in the IP range defined for the VTEP, see Section 2.6.

ScaleIO Blade Configuration

To dedicate a blade to be part of the ScaleIO cluster, the role of the blade in the cluster has to be defined in the scaleio subsection of blade configuration. This section is optional and only has effect if the global scaleio parameters are defined in the storage section, see Section 2.9.3. The following blade.scaleio parameters are available:

Example 20   ScaleIO Blade Configuration

ericsson:
  shelf:
    blade:
      - id:  ..
        scaleio:
          mode: dedicated
          roles:
            mdm:
            gw:
            sds:
              - protection_domain: domain1
                devices:
                  - name: /dev/sdb
                    pool: pool1

2.5.2   Compute Hosts

Compute hosts are defined as a list of blades within each shelf. Each Compute host must have an ID that corresponds to the physical/logical location of the server, see Section 1.1.1. Assignment of physical NIC devices to different CEE networks must be defined for each server. Memory (huge pages), CPU, and storage must also be allocated to different resource owners. The allocation of these resources is controlled through the configuration file.

A Compute host can contain vCIC and/or vFuel VMs. The allocation of resources depends on whether the host contains these infrastructure VMs or not.

hw_uuid

Certain CEE alarms contain the UUID of the compute host associated with the alarm. The UUID can be used to correlate CEE alarms with alarms generated by the lower-layer server management tools. By default, the UUID of each compute blade is obtained automatically during CEE install by running the following command:

dmidecode --string system-uuid

hw_uuid is an optional parameter for servers. If it is defined in config.yaml, the applicable alarms show the specified UUID instead of the default, automatically obtained one. If hw_uuid is used, it must be defined for all servers. If hw_uuid is not defined in config.yaml, the default auto value is assumed. The UUID can be assigned as follows:

Example 21   Hardware System UUID Assignment

ericsson:
    ...
    blade:
      -
        id: 2
        hw_uuid: <SERVER.UUID>
        ...
      -
        id: 3
        hw_uuid: <SERVER.UUID>
        ...

The value of the hw_uuid parameter must be either a valid UUID or auto.

2.5.3   NIC Assignment

Each blade must have a nic_assignment section to define which physical NIC to use for control, storage, and data traffic. Each NIC is defined by its PCI address. The actual mapping depends on the cabling of the server.

The configuration template contains a list of predefined NIC assignments for supported hardware assuming the Ericsson recommended cabling scheme. Predefined NIC assignments are listed in nic_assignments and each setting is labeled by an anchor. Normally, it is sufficient to refer to the appropriate predefined NIC assignment by using an alias in the blade definition.

Example 22   NIC Assignment of an HP Server

ericsson:
      ...
      blade:
        -
          id: 2
          ...
          nic_assignment: *HP_GEN8_nic_assignment
          ....

Example 23   NIC Assignment of an Unmanaged Server

shelf:
  cee_managed : false
  blade:
    id: 1
    hw_uuid: 4c4c4544-0030-3310-8039-b8c04f423232  # For alarm handling only
    nic_assignment: *DELL_620_nic_assignment
    mac_assignment:
      control0: ec:f4:bb:c1:27:d4
      control1: ec:f4:bb:c1:27:d5
Note:  
If cee_managed is true, the CEE is managed and the mac_assignment fields are ignored.

If the alias for a blade used in nic_assignment is not appropriate for the hardware, change the alias to refer to the relevant setting from the predefined values. If the hardware is not listed in nic_assignments:, a new NIC mapping must be added to the list. See Section 4.2 for information on how to define a new NIC mapping.

2.5.4   Memory Allocation

The physical memory of a server is partitioned into memory pages. Pages can have three different sizes: 4 KiB, 2 MiB, and 1 GiB. The 2 MiB and 1 GiB pages are called huge pages. In CEE, the memory is used as follows:

The memory to be allocated in huge pages is defined in config.yaml for each blade. The remainder of the physical memory that is not allocated in huge pages is accessible in 4 KiB pages. If using the keyword auto for the VM huge page allocation, the installer calculates the huge page count.

To reserve huge pages on a Compute host, the reservedHugepages section of the corresponding blade must be filled in.

Example 24   Huge Page Reservations for Three Different Blades in BSP

ericsson:
      ...
      blade:
        -
          id: 1
          nic_assignment: *BSP_GEP7_nic_assignment
          reservedHugepages: *BSP_GEP7_reservedHugepages_with_vcic_and_vfuel
          reservedCPUs: *auto_reservedCPUs_with_vcic_and_vfuel
          reservedDisk: *reservedDisk_for_vcic_and_vfuel
          cfm_role: active
          virt:
            cic:
              id: 1
        -
          id: 2
          nic_assignment: *BSP_GEP5_nic_assignment
          reservedHugepages: *BSP_GEP5_reservedHugepages_with_vcic_and_vfuel
          reservedCPUs: *auto_reservedCPUs_with_vcic_and_vfuel
          reservedDisk: *reservedDisk_for_vcic_and_vfuel
          cfm_role: active
          virt:
            cic:
              id: 2
        -
          id: 3
          nic_assignment: *BSP_GEP5_nic_assignment
          reservedHugepages: *BSP_GEP5_reservedHugepages_with_vcic
          reservedCPUs: *auto_reservedCPUs_with_vcic
          reservedDisk: *reservedDisk_for_vcic
          cfm_role: active
          virt:
            cic:
              id: 3
        -
          id: 4
          nic_assignment: *BSP_GEP5_nic_assignment
          reservedHugepages: *BSP_GEP5_reservedHugepages
          reservedCPUs: *auto_reservedCPUs
          cfm_role: passive
        ...

The amount of memory to reserve in huge pages depends on whether the Compute host contains a vCIC and/or vFuel. The examples above illustrate different alternatives. Recommended huge page reservations are listed in the reservedHugepages section. Refer to one of the available predefined settings by using an alias as shown in the examples.

There is a lower limit on the amount of memory for the host OS. In other words, this is the amount of physical memory that must not be allocated to huge pages. The actual value depends on whether a vCIC is hosted on the Compute host. The limits are defined by the compute_os_reserved_mem and compute_with_vcic_os_reserved_mem configuration parameters. The values are expressed in MiB:

compute_os_reserved_mem: 8192
compute_with_vcic_os_reserved_mem: 14336

The example above shows the minimum required values. These parameters are defined globally and are valid for all blades.

The default vCIC swap space is 512 MiB. The swap space can be changed by setting the vcic_swap_size optional parameter. The value is expressed in MiB. For example, the following setting increases the swap space to 5 GiB:

vcic_swap_size: 5120

2.5.5   CPU Allocation

The CPUs of a Compute host can be reserved for different purposes, such as OVS, tenant VMs, and infrastructure VMs (vCIC and vFuel). Reservation of CPUs means that the reserved CPUs are used exclusively by the owner of the reservation. Reserved CPUs are isolated, which means that the kernel scheduler does not schedule processes to run on these CPUs by itself. The CPUs not reserved for these owners remain non-isolated, and regular processes of the host OS are scheduled on these CPUs.

To reserve CPUs on a Compute host, the reservedCPUs section of the corresponding blade must be filled in.

Example 25   CPU Allocation

ericsson:
        ...
        -
          id: 3
          blade_mgmt:
            ...
          nic_assignment: *DELL_630_OEM_nic_assignment
          reservedHugepages: *DELL_630_OEM_reservedHugepages_with_vcic
          reservedCPUs: *auto_reservedCPUs_with_vcic
          reservedDisk: *reservedDisk_for_vcic
          virt:
            cic:
              id: 3
        -
          id: 4
          blade_mgmt:
            ...
          nic_assignment: *DELL_630_OEM_nic_assignment
          reservedHugepages: *DELL_630_OEM_reservedHugepages
          reservedCPUs: *auto_reservedCPUs
        -
          id: 5
          blade_mgmt:
            ...
          nic_assignment: *DELL_630_OEM_nic_assignment
          reservedHugepages: *DELL_630_OEM_reservedHugepages_with_vfuel
          reservedCPUs: *auto_reservedCPUs_with_vfuel
          reservedDisk: *reservedDisk_for_vfuel
          vfuel: ""
          ...
            

If the Compute host contains a vCIC and/or vFuel, CPUs must be reserved for those as well. The examples above illustrate different alternatives. Recommended CPU reservations are listed at the beginning of the configuration template in the reservedCPUs section. Refer to one of the available predefined settings by using an alias as shown in the examples.

Note:  
The default CPU reservation uses automatic CPU allocation in Multi-Server configurations but uses manual CPU allocation for Single Server deployments. See Section 4.1 for details on the supported CPU allocation methods.

2.5.6   Disk Reservation

Storage space must be reserved on Compute hosts that contain vCIC or vFuel. Use the reservedDisk section for reserving disk storage on a blade. Such disk reservation is not needed on Compute hosts that do not contain infrastructure VMs.

Example 26   Disk reservation for vCIC and vFuel

ericsson:
        ...
        -
          id: 1
          ...
          nic_assignment: *HP_GEN9_nic_assignment
          reservedHugepages: *HP_GEN9_reservedHugepages_with_vcic
          reservedCPUs: *auto_reservedCPUs_with_vcic
          reservedDisk: *reservedDisk_for_vcic
          virt:
            cic:
              id: 2
        -
          id: 2
          ...
          nic_assignment: *HP_GEN9_nic_assignment
          reservedHugepages: *HP_GEN9_reservedHugepages_with_vfuel
          reservedCPUs: *auto_reservedCPUs_with_vfuel
          reservedDisk: *reservedDisk_for_vfuel
          vfuel: ""
          ...

The value set for vFuel disk size is 50 GB in the config.yaml templates. The recommendation is to increase it to 70 GB if there is enough disk space on the vFuel host. A vFuel disk size of 50 GB might not be enough for two generations of CEE software to be installed simultaneously, which is needed during an update procedure.

Example 27   vFuel Disk Size 70 GB

ericsson:
  ...
  - &reservedDisk_for_vfuel
    - owner: vfuel
      size: 70G
      ...
 
Or...

  - &reservedDisk_for_vcic_and_vfuel
    - owner: vfuel
      size: 70G
      ...

2.5.7   Cloud Infrastructure Controller (CIC)

In a Multi-Server deployment, three Compute hosts must be configured to host a vCIC. In a Single Server deployment, the single Compute host must be configured to host a vCIC.

To contain a vCIC, the corresponding blade must contain the virt key with a data structure that defines a vCIC as shown in the examples in the previous subsections. In addition, the blade definition must contain resource reservations for huge pages, CPU, and disk storage suitable for vCIC.

2.5.8   Host Networking

The host networking YAML file used must correspond to the hardware used.

Example 28   Host Networking, HP

  host_networking:
    template_yaml_file: host_nw_hp.yaml

2.5.9   Virtual Fuel

In a Multi-Server deployment, two Compute hosts must be configured to be able to host a vFuel. In a Single Server deployment, no vFuel is used.

To be able to contain a vFuel, the blade definition must contain resource reservations for huge pages, CPU, and disk storage suitable for vFuel. In addition, one of the two blade definitions must contain the vfuel key with an empty string as its value (see the previous sections for examples).

2.6   Networks

The networks section must be edited with site-specific information. See the following example for network configurations:

Example 29   Network Configuration

ericsson:
  ...
  networks:
    -
      name: cee_om_sp
      mos_name: public
      tag: <VLAN.TAG>
      enable_ntp: true
      cidr: <NETWORK.IP/PREFIXSIZE>
      start: <FIRST.IP.TO.USE>
      end: <LAST.IP.TO.USE>
      gateway: <IP.OF.THE.GW>

The network fuel_ctrl_sp is used for PXE boot of Compute Hosts (host OS) and vCIC nodes. For information on how to change this network, see Section 4.13.

The following networks have to be configured based on hardware deployment:

Table 2    Hardware-dependent Network Configuration

Network name | HDS | Dell multi-server | HP | BSP | Single server
subrack_ctrl_sp | N/A | to be configured | to be configured | N/A | to be configured
subrack_om_sp | N/A | to be configured | to be configured | N/A | N/A
cee_om_sp | to be configured | to be configured | to be configured | to be configured | to be configured
cee_ctrl_sp | to be configured | preconfigured | preconfigured | preconfigured | preconfigured
iscsi_san_pda | to be configured | preconfigured | preconfigured | preconfigured | N/A
iscsi_san_pdb | to be configured | preconfigured | preconfigured | preconfigured | N/A
swift_san_sp | to be configured | preconfigured | preconfigured | preconfigured | N/A
migration_san_sp | to be configured | preconfigured | preconfigured | preconfigured | N/A
hds_agent | to be configured | N/A | N/A | N/A | N/A
sdn_underlay_sp_<net_id> | to be configured | N/A | N/A | N/A | N/A

subrack_ctrl_sp

Note:  
The use of subrack_ctrl_sp is specific to setups using Extreme switches.

The network subrack_ctrl_sp is used for subrack management. The VLAN is used to monitor and manage the server blades. This is done regardless of whether the blade is powered on, or if an operating system is installed or functional. Update tag with the correct VLAN tag, and cidr with the IP address in the subrack_ctrl_sp network used by Fuel.

Example 30   Subrack Management

ericsson:
  ...
  networks:
    ...
    -
      name: subrack_ctrl_sp
      mos_name: subrack_ctrl_sp
      tag: <VLAN.TAG>
      vr: subrack_om
      ipforwarding: true
      cidr: <IP.IN.FUEL/PREFIXSIZE>
      start: <FIRST.IP.TO.USE>
      end: <LAST.IP.TO.USE>
    ...	

Additional parameters must be set for subrack_ctrl_sp in the specific <switch_model>_switch.yaml file, see Section 2.3.1.

subrack_om_sp

Note:  
The use of subrack_om_sp is specific to multi-server deployments using Extreme switches.

The network subrack_om_sp is used as a link network between the traffic switches and the border gateways. Subrack management traffic is routed over this network.

Example 31   Network subrack_om_sp

ericsson:
  ...
  networks:
    ...
    -
      name: subrack_om_sp
      tag: <VLAN.TAG>
      vr: subrack_om
      cidr: <NETWORK.IP/PREFIXSIZE>
      gateway: <IP.OF.THE.GW>
      ipforwarding: true
    ...

Additional parameters must be set for subrack_om_sp in the specific <switch_model>_switch.yaml file, see Section 2.3.1.

cee_om_sp

The network cee_om_sp is used for CIC northbound communication:

Example 32   CIC Northbound Communication

ericsson:
  ...
  networks:
    ...
    -
      name: cee_om_sp
      mos_name: public
      tag: <VLAN.TAG>
      enable_ntp: true
      cidr: <NETWORK.IP/PREFIXSIZE>
      start: <FIRST.IP.TO.USE>
      end: <LAST.IP.TO.USE>
      gateway: <IP.OF.THE.GW>
    ...

The number of dynamically assigned IP addresses inside cee_om_sp in the range FIRST.IP.TO.USE to LAST.IP.TO.USE must be at least 7 including the endpoints:

cee_ctrl_sp

Network used for CEE internal system functions. cee_ctrl_sp VLAN needs to be configured on HDS.

iscsi_san_pda

Network used for persistent block storage (Cinder) in a centralized storage array. The VLAN needs to be configured on HDS.

iscsi_san_pdb

Network used for persistent block storage (Cinder) in a centralized storage array. The VLAN needs to be configured on HDS.

swift_san_sp

Network used for Swift traffic on the storage switching domain. The VLAN needs to be configured on HDS.

migration_san_sp

Network used for live migration traffic on the storage switching domain. The VLAN needs to be configured on HDS.

hds_agent

Network used for HDS In-band Metrics Collection. The following parameters need to be configured:

sdn_underlay_sp

The network sdn_underlay_sp is used for SDN managed networks to handle the tenant VM traffic. The network is declared in HDS per (pair of) leaf switch for every vPOD.

Example 33   SDN Managed Networks for VTEP Handling

ericsson:
  ...
  networks:
    ...
    -
      name: sdn_underlay_sp_<net_id>
      tag: <VLAN.TAG>
      cidr: <NETWORK.IP/PREFIXSIZE>
      gateway: <IP.OF.THE.GW>
   ...

The <net_id> and cidr IP range is different for each sdn_underlay_sp network.

The gateway must be an existing, reachable gateway of this network, and its value comes from HDS.
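
As a sketch with assumed placeholder values, two such networks, each with its own <net_id>, VLAN tag, and subnet, could be declared as follows:

ericsson:
  ...
  networks:
    ...
    -
      name: sdn_underlay_sp_1
      tag: 401
      cidr: 172.16.1.0/24
      gateway: 172.16.1.1
    -
      name: sdn_underlay_sp_2
      tag: 402
      cidr: 172.16.2.0/24
      gateway: 172.16.2.1
   ...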

2.7   Configure NTP

The ntp_config section contains NTP servers accessible by the vCIC. The section has to be updated with site-specific information.

Example 34   NTP Configuration

ericsson:
  ...
  ntp_config:
    servers: [<NTP.SERVER.1.IP>, <NTP.SERVER.2.IP>, <NTP.SERVER.3.IP>]
    orphan_mode_stratum: <ORPHAN.MODE.STRATUM>
  ...

Define up to four external NTP server IP addresses. A minimum of one NTP server IP address is required, and at least three are recommended. For example, servers: [IP-ADDRESS.FOR.NTP.SERVER-1, IP-ADDRESS.FOR.NTP.SERVER-2, IP-ADDRESS.FOR.NTP.SERVER-3].

When the NTP server running on one of the Compute nodes enters NTP Orphan mode, the value of the parameter orphan_mode_stratum is used as stratum by that NTP server. To configure a correct value, the following criteria must be fulfilled:

2.7.1   NTP Authentication

The ntp_config section contains NTP authentication setup. The authentication can be set up between the Compute host hosting the vCIC and the CIC upstream NTP servers.

Example 35   NTP Authentication

ericsson:
   ...
   ntp_config:
     ...
     authentication: None
  ...

To enable authentication towards the NTP servers upstream of the Compute host that hosts the vCIC, set the authentication parameter to md5. Currently, md5 is the only supported authentication method.

Example 36   NTP Authentication, MD5

ericsson:
  ...
  ntp_config:
      ...
      authentication: md5
  ...

To enable authentication between the controllers in CEE and the upstream NTP servers, configure the following:

Configure a group key and a group password for the upstream NTP servers, and for the controllers in CEE. The group key is a decimal number from 1 to 65534, inclusive. The group password is a printable ASCII string of at most 16 characters.

Example 37   Group Key and Password

ericsson:
  ...
  ntp_config:
     ...
     authentication_upstream_group_key: 1
     authentication_upstream_group_password: upstream_group_password
     authentication_ericsson_group_key: 10
     authentication_ericsson_group_password: 4|BoCdm;S-A1]a>o
  ...
Note:  
Enabling the NTP authentication requires the support and configuration of the upstream NTP servers.

2.8   Legal Text Presented at Logon

There are predefined messages in the config.yaml template that are shown at logon. These can be changed if needed.

If some specific legal text is to be displayed before login attempts, update the predefined text in the section legaltext:

Example 38   Predefined Legal Text

ericsson:
   ...
  legaltext:
    local: "Attention! Unauthorized local access is strictly prohibited!\n"
    remote: "\nAttention! Unauthorized remote access is strictly prohibited!\n\n"
  ...

Legal text has two items: local and remote:

Text can be formatted using normal C-style escape sequences (for example, \t for tab, \n for new line).

2.9   Storage

The storage section must be edited with site-specific information.

2.9.1   Centralized storage

Note:  
Centralized storage is not applicable to Single Server CEE, BSP and HDS.

Centralized storage is available for CEE hardware including EMC storage solution (VNX5400). The structure of the centralized storage subsection is as follows:

Example 39   Storage Pool Type

ericsson:
  ...
  storage:
    centralized:
      type: <CENTRALIZED.STORAGE>
      cee_managed: true
      hw_type: VNX5400
      storagepool_name: <STORAGE.POOL.NAME>
      mgmt_ip_A: 192.168.2.12
      mgmt_ip_B: 192.168.2.13
      emc_admin:
        user: <EMC.USER>
        passwd: <EMC.PASSWORD>
      ...

type

To set up centralized storage, set the type parameter for <CENTRALIZED.STORAGE> to EMC2. If centralized storage is not available or not wanted, set the type to None.

cee_managed

Set cee_managed to true to enable CEE to manage the centralized storage array. Only set it to false after careful consideration: if the parameter is set to false, the scripts that set up and clean up the VNX prior to an installation do not run, and LDAP is not configured for authentication on the VNX. Contact Ericsson support for more information.

storagepool_name

The variable <STORAGE.POOL.NAME> must match the name of the Storage Pool for Cinder, created on the VNX as described in VNX5400 SW Installation, Reference [2]. The admin user <EMC.USER> created during the VNX5400 installation has the password <EMC.PASSWORD>.

emc_admin

The EMC admin refers to the admin user created during the VNX5400 installation. <EMC.USER> has to be replaced with the admin username and <EMC.PASSWORD> with the password.

2.9.2   Adding iSCSI Ports to Centralized Storage

In the config.yaml template, four iSCSI ports are defined, two for each of the Storage Processors (A and B).

See the pre-defined iSCSI ports:

Example 40   Pre-defined iSCSI Ports

ericsson:
  ...
  storage:
    centralized:
      iscsi_ports:
        -
          port: 0
          SP: A
          name: iscsi_san_pda
          target_ip: 192.168.11.1
          tagged: true
        -
          port: 1
          SP: A
          name: iscsi_san_pdb
          target_ip: 192.168.12.1
          tagged: true
        -
          port: 0
          SP: B
          name: iscsi_san_pda
          target_ip: 192.168.11.2
          tagged: true
        -
          port: 1
          SP: B
          name: iscsi_san_pdb
          target_ip: 192.168.12.2
          tagged: true
      ...

Up to 12 additional ports can be added in the iscsi_ports section using this structure:

Example 41  

          port: <PORT.NUMBER>
          vport: <PORT.NUMBER>
          SP: <STORAGE.PROCESSOR>
          name: <NETWORK.NAME>
          target_ip: <TARGET.IP>
          tagged: true

port

The iSCSI port number depends on the slot used for the interface card, and the port used on the interface card. Valid values are 0–7.

vport

The optional parameter vport is only to be used if the virtual port number configured on the VNX is non-zero. This is normally not the case, but can be relevant if CEE does not manage the centralized storage array (that is if parameter cee_managed is set to false).

SP

The Storage Processor connected. Valid values are A and B.

name

The name of the network used on this port. Valid values depend on the names defined in the networks section of config.yaml, see Section 2.6.

target_ip

The iSCSI target IPs must be in the CIDR but outside the specified range in iscsi-left and iscsi-right.

tagged

The tagged parameter enables VLAN tagging and must be set to true.
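
Putting these parameters together, a hedged example of one additional port entry with illustrative values only (the target IP must be chosen according to the rules above) could be:

          port: 2
          SP: A
          name: iscsi_san_pda
          target_ip: 192.168.11.3
          tagged: true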

2.9.3   ScaleIO Configuration

ScaleIO block storage is configured in the scaleio storage subsection in config.yaml. If the scaleio section exists during deployment, ScaleIO is deployed and configured as storage backend. The structure of the scaleio section is as follows:

Example 42   Configuring ScaleIO

ericsson:
  storage:
    scaleio:
      license:
      protection_domains:
          - name: <PROTECTION_DOMAIN_NAME_1>
            pools:
              - name: <STORAGE_POOL_NAME_1>
                type: <PROVISIONING_TYPE_1>
      cluster_name: <SCALEIO CLUSTER NAME>
      frontend_networks:
          - <FRONTEND_NETWORK_1>
          - <FRONTEND_NETWORK_2>
      backend_networks:
          - <BACKEND_NETWORK_1>
          - <BACKEND_NETWORK_2>
      password: <SCALEIO_PASSWORD>
      gateway_admin_password: <SCALEIO_PASSWORD>
      lia_token: <SCALEIO_PASSWORD>
      gateway_port: 443
      gateway_user: admin
      round_volume_capacity: True
      verify_server_certificate: False

The following parameters can be configured for ScaleIO:

license

This parameter is not in use in R6.3.

protection_domains

Each protection domain has a name and a list of storage pools. Each entry in pools has two parameters:

cluster_name

Cluster name

frontend_networks

List of ScaleIO frontend networks

backend_networks

List of ScaleIO backend networks

password

password is the password for the admin user.

gateway_admin_password

gateway_admin_password is the password for the gateway admin user. It has the same value as password.

lia_token

lia_token is the LIA password for node management. It has the same value as password.

gateway_port

Gateway port. The value must be 443.

gateway_user

Gateway user. The value must be admin.

To dedicate a blade to be part of the ScaleIO cluster, the role of the blade in the cluster has to be defined in the blade section, see ScaleIO Blade Configuration.

2.10   Local Disks

The localdisks section defines the partition sizes of the local disk (for Compute hosts) and of the virtual disk (for vCIC). The sizes are given in mebibytes (MiB) and are applied to all partitioning for the vCIC (guest OS) and the host OS. The default settings in config.yaml are the minimum requirement and cannot be decreased. On Compute nodes, the remaining disk space is allocated to /var/lib/nova (for data that includes local storage and ephemeral disks).

On vCIC nodes, the remaining disk space is allocated to /var/lib/glance and is used for Glance, Swift, and CEE Backup.

The partition sizes must be dimensioned according to the site-specific needs. For example, if the data retention times for Zabbix are to be increased, then the size for the MySQL partition (parameter mysql_size) must be increased. For information on MongoDB and MySQL (database size), refer to the System Dimensioning Guides:

Note:  
Consider the size of virtual disk available to vCIC.

Example 43   Local Disks

ericsson:
  ...
  storage:
    localdisks:
      cic:
        os_size: 51200
        logs_size: 40960
        mysql_size: 40960
        mongo_size: 40960
      compute:
        os_size: 51200
        logs_size: 40960
  ...
Note:  
Do not define mongo_size in a Single Server deployment.

2.11   IdAM

IdAM is used for managing the system administrator accounts. The idam: section includes credentials for the IdAM component. It allows setting initial passwords for the predefined accounts needed for CEE to operate.

The ldap section includes credentials of LDAP entities used exclusively by infrastructure applications. The recommendation is to leave these credentials blank to enable the system to use generated passwords.

It is possible to set the initial passwords of the ceeadm and ceebackup users in the users section:

Example 44   Password for the ceeadm Predefined User

ericsson:
  ...
  users:
     ceebackup:
       passwd: '<IDAM.CEEBACKUP.PASSWD>'
     ceeadm:
       passwd: '<IDAM.CEEADM.PASSWD>'
     ...

Passwords must be quoted using single quotes and must be compliant with the CEE password policy, otherwise the deployment fails. The minimum accepted password length is 12 characters. There is no factory default password.

Note:  
ceebackup is not applicable to a Single Server deployment.

2.12   LDAP Users

The LDAP section includes credentials used by infrastructure applications. If these credentials are left blank in config.yaml, the entries from the idam_ldap_*_password section in /etc/openstack_deploy/user_secrets.yml are used.

Example 45   LDAP

ericsson:
  idam:
    ldap:
      basedn: <IDAM.LDAP.BASE>
      rootdn: <IDAM.LDAP.ROOTRDN>
      rootpw: ''
      anonymous_binddn: <IDAM.LDAP.ANONRDN>
      anonymous_bindpwd: ''
      manager_binddn: <IDAM.LDAP.MNGRRDN>
      manager_bindpwd: ''
      sync_binddn: <IDAM.LDAP.SYNCRDN>

2.13   VNX Users

vnx-log-fetcher is set up when a managed VNX (EMC VNX storage) is present. If these credentials are left blank in config.yaml, the entries in /etc/openstack_deploy/user_secrets.yml are used.

Example 46   VNX Users

ericsson:
   vnx-log-fetcher:
     mysql:
       password: ''
     vnx:
       password: ''
   ...

2.14   Glance Image Service

The Glance API server can be configured to have an optional local image cache. A local image cache stores a copy of image files, essentially enabling multiple API servers to serve the same image file. This increases scalability due to an increased number of endpoints serving an image file.

The Glance image cache is disabled in the configuration file templates.

Enable the local image cache for CEE deployments where there is sufficient disk space available for the Glance image cache on local disks. For CEE deployments where the local disk space is limited, it is preferable to disable the function or to reduce the disk space used for the local image cache on the /var/lib/glance disk partition.

Two parameters are available in the storage section of config.yaml to manipulate the Glance image cache function:

2.15   Swift Configuration Options

The swift section in the configuration file template allows Swift to be configured to use the backend storage system, thereby moving the Swift store from the local disks to the backend storage. Currently, the supported storage backend for Swift is either EMC VNX or ScaleIO.

Note:  
The prerequisite to configure Swift on backend storage is as follows:
  • To use EMC VNX as storage backend: a properly configured centralized storage, see Section 2.9.1.
  • To use ScaleIO as storage backend: a properly configured ScaleIO block storage, see Section 2.9.3.

If the prerequisite is not fulfilled, the swift_on_backend_storage section is not applicable and will be ignored.


The following configuration options are available:

Example 47   Swift Configuration Options

ericsson:
  ...
  swift:
    swift_on_backend_storage:
      type: <SWIFT.SWIFT_ON_BACKEND_STORAGE.TYPE>
      activation_mode: <SWIFT.SWIFT_ON_BACKEND_STORAGE.MODE>
      lun_size: <SWIFT.SWIFT_ON_BACKEND_STORAGE.SIZE>

type

The type of the backend storage system. The supported values are centralized and scaleio:

activation_mode

The supported values are manual and automatic. The value set in the config.yaml templates is manual.

To deploy Swift on VNX manually after installation, see the Swift Store on VNX Activation operating instructions. To deploy Swift on ScaleIO manually after installation, see the Swift Store on ScaleIO Activation operating instructions.

The automatic activation mode can be used to deploy Swift on backend storage during installation.

lun_size

lun_size specifies the LUN size of Swift on the storage backend. The value must be given as an integer value followed by the unit (GiB or TiB). The value set in the config.yaml templates is 0GiB in combination with activation_mode set to manual.

If activation_mode is set to automatic, a value different from 0GiB must be used. The unit has to be given in GiB or TiB. The minimum value is 1GiB. The maximum initial size of the Swift store on EMC VNX backend is 6000 GiB / 6 TiB.

Note:  
ScaleIO only supports volumes with a granularity of 8 GiB. As a result, the physical size of the LUN is always rounded up to the nearest multiple of 8 GiB, while Cinder is not aware of this rounding and uses the given size. Therefore, it is recommended to set a lun_size that is a multiple of 8 GiB.
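
As an illustration, the following sketch deploys Swift automatically on a ScaleIO backend during installation. The 512GiB size is an assumed value, chosen only because it is a multiple of 8 GiB; size the LUN according to the actual deployment.

ericsson:
  ...
  swift:
    swift_on_backend_storage:
      type: scaleio
      activation_mode: automatic
      lun_size: 512GiB
  ...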

2.16   SDN Standard Integration on HDS

To enable the SDN integration feature of HDS, the sdn section has to be defined in config.yaml. This feature enables the user to deploy CEE alongside a remote SDN controller without any other extension package. The remote SDN controller is used to manage the tenant networks and to create tunnels on the compute blades for tenant network isolation. Software VTEPs need to be created manually at the CSC CLI after CEE deployment. The parameters needed for SDN configuration are as follows:

Example 48   SDN Configuration

  sdn:
    type: standard
    sdnc_sbi_vip: <SBI.VIP.OF.SDNC>
    sdnc_nbi_vip: <NBI.VIP.OF.SDNC>
    sdnc_sbi_gw_ip: <SBI.GW.IP>
    sdnc_admin_username: <SDNC.ADMIN.USERNAME>
    sdnc_admin_password: <SDNC.ADMIN.PASSWORD>
    remote_gre_term: <REMOTE.GRE.TERM.IP>

type

The SDN integration type. Only the standard type is supported. In standard mode, CSC is installed on dedicated nodes.

sdnc_sbi_vip

The IP of the southbound interface used for CSS-CSC communication

sdnc_nbi_vip

The IP of the northbound interface used for Neutron-CSC communication

sdnc_sbi_gw_ip

The IP used to set the static route on the compute blades in order to let CSS access CSC SBI IP

sdnc_admin_username

Username for CSC authentication

sdnc_admin_password

Password for CSC authentication

remote_gre_term

The remote GRE termination IPs. If there are multiple IPs, this should be a YAML list, for example: ['10.33.215.30', '10.33.215.31']

Additional configuration is needed for SDN in the following sections of config.yaml:

2.17   CM-HA

The cmha section of config.yaml is used to configure the Continuous Monitoring High Availability (CM-HA) service, and includes the following parameters:

Note:  
The parameters below are optional. If these parameters are not included in config.yaml, the default values apply.

fence_compute_before_evacuation

If fence_compute_before_evacuation is true, CM-HA fences the compute host before evacuating the VMs. This prevents VM duplication in case of a partial compute failure. Default value: true

try_to_recover_compute_after_evacuation

If try_to_recover_compute_after_evacuation is true, CM-HA attempts to power on the compute host after finishing the evacuation of the VMs. This can help recover the failed compute host. Default value: true
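
A minimal sketch of the cmha section with both parameters set to their default values (include the section only if the defaults have to be changed or stated explicitly):

ericsson:
  ...
  cmha:
    fence_compute_before_evacuation: true
    try_to_recover_compute_after_evacuation: true
  ...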

2.18   Fuel Plugins

The fuel-plugins section of config.yaml is used for the installation and configuration of Fuel plugins. For more information on Fuel plugins, including the plugin name, configuration attributes, and the list of mandatory Fuel plugins, refer to the Fuel Plugin Configuration Guide.

The fuel-plugins section includes the following parameters:

Example 49   Fuel Plugin Configuration

ericsson:
  ...   
  fuel-plugins:
    -
       name: <PLUGIN-NAME>
       config_attributes:
         <ATTRIBUTE1>: <VALUE1>
         <ATTRIBUTE2>: <VALUE2>
    -
       name: <PLUGIN-NAME>
  ...  

name

The name of the Fuel plugin to be installed.

Note:  
The variable <PLUGIN-NAME> must match the name mentioned in the metadata.yaml file of the Fuel plugin.

config_attributes (optional)

Any configuration attributes needed for the plugin are added in the config_attributes section using the following structure:

       ...
       config_attributes:
         <ATTRIBUTE1>: <VALUE1>
         <ATTRIBUTE2>: <VALUE2>
       ...

3   Post-Configuration Activities

If the configuration file is edited in Windows, it is likely that the file contains CRLF line endings. To remove the CR characters (Linux uses only LF), run the following command after transferring the file to the Fuel Master node:

$> sed -i.bak -e 's/\r//g' <CONFIG.FILE.NAME>

A backup of the original file with the name <CONFIG.FILE.NAME>.bak is also created.

4   Advanced Parameter Settings

This chapter describes parameters that can be changed for non-certified configurations. The settings described in this section are considered to work, but they have not necessarily been formally verified by CEE Integration and Verification.

4.1   Advanced CPU Allocation

To reserve CPUs on a specific Compute host, the reservedCPUs section of the corresponding blade must be filled in with information on the reservation. Reservation for a server consists of a list of CPU reservations. Each item in the list represents the CPU reservation for a specific system component (owner). The reservation for a system component is defined by a mapping (also called hash or dictionary in programming languages) containing the keys owner, count, and cpus.

The owner key is mandatory. It specifies the component for which the reservation is intended. Supported owners are vm, ovs, vcic, and vfuel.

Note:  
It is possible to reserve CPUs for unsupported owners. The CPUs reserved for such owners remain idle, since they are added to the list of isolated CPUs and none of the supported system components use them.

Either the count or the cpus key must be defined, but not both. Deviations from this rule are explained later.

The value of the count key is either the string auto or an integer. When the value auto is used, the CEE installer determines how many CPUs to reserve for the given owner and which ones. If the value is an integer, it defines the number of CPU cores to reserve. Due to hyperthreading, one core corresponds to two CPUs. When a CPU core is reserved for an owner, both CPUs on that core are dedicated to that owner. The System Dimensioning Guides provide information about the relation between CPUs and cores for supported hardware models.

Refer to the following documents:

The value of the cpus key is a list of CPU IDs or ID ranges. The ranges are inclusive. By using the cpus key, the cloud administrator can directly control the allocation of CPUs to owners. Although it is possible to mix reservations that use the count key with reservations that use the cpus key, this practice is not recommended. If the goal is a CPU allocation that works well for a wide range of workloads, all reservations must use the count key with the value auto; optionally, the default number of CPU cores can be overruled by specifying the exact number of cores to allocate. If the goal is to optimize the performance for a specific workload or configuration, all reservations must use the cpus key, which gives the cloud administrator full control. The automatic CPU allocation scheme is not recommended for a Single Server deployment.
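
As an illustration of the recommended automatic scheme, the following sketch reserves CPUs for every owner with count: auto. The set of owners is an assumption and depends on which system components run on the blade; for example, vfuel is only added on the blade hosting vFuel.

ericsson:
  ...
  reservedCPUs:
    - owner: vm
      count: auto
    - owner: ovs
      count: auto
    - owner: vcic
      count: auto
  ...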

4.1.1   Allocating CPUs for OVS

OVS requires CPUs for different purposes. Some of the CPUs are used for poll-mode driver (PMD) threads. The PMD threads continuously poll the physical and virtual NICs to check if there is new data. For performance reasons, it is crucial that no other tasks are scheduled on the CPUs assigned to OVS PMD threads. In addition, one CPU must be specified for the OVS control process. This process does not have special requirements on its CPU; it can share one of the non-isolated CPUs with the other host OS processes.

OVS has two constraints on the CPU allocation related to the NUMA topology of the server:

Failure to fulfill these constraints leads to system malfunction.

CPUs reserved for the owner ovs in config.yaml are used for PMD threads. If the count key is used for reserving the OVS PMD CPUs, at least one CPU core is allocated to OVS PMD threads on each NUMA node. Therefore, at least two cores are required on Dell and HP blades, while only one is required on BSP. No more cores are allocated if the CPU reservation is count: auto. If the number of CPU cores to allocate is specified explicitly, then the given value must not be lower than the number of NUMA nodes. Any additional cores are allocated sequentially from the available cores starting with NUMA node 0 and then continuing with NUMA node 1, if necessary.

Note:  
Providing two CPUs located on the same physical core for OVS can result in decreased throughput capacity. As such, it is recommended to allocate one CPU for OVS (or one per NUMA node, depending on the hardware). Refer to the relevant System Dimensioning Guides.

To allocate a single CPU per core for OVS, new definitions have to be created under reservedCPUs for each compute host and deployment scenario (with or without vCIC and vFuel). An example is:

ericsson:
  ...
  reservedCPUs:
  - &OVS_single_thread_DELL_630_reservedCPUs
    - owner: vm
      cpus: 1,3,5-22,25,27,29-46
    - owner: ovs
      cpus: 2,23
      cpus_nonpmd: 0
    - owner: idle
      cpus: 26,47
      notick: true


The actual CPU IDs depend on the server and CPU model.

The hyper-thread siblings of OVS PMD CPUs are allocated to the owner idle and the notick allocation parameter is set to the value true.

The appropriate CPU reservation has to be set for each blade in the blade section, depending on whether it hosts vCIC or vFuel. An example is:

ericsson:
  ...
  shelf:
    ...
    blade:
      -
        id: 1
        blade_mgmt:
            ...
        nic_assignment: *DELL_630_nic_assignment
        reservedHugepages: *DELL_630_reservedHugepages
        reservedCPUs: *OVS_single_thread_DELL_630_reservedCPUs


After CEE deployment, interrupt handling has to be adjusted for the CPUs configured as idle. Refer to SW Installation in Multi-Server Deployment or SW Installation in Single Server Deployment.


By default, the OVS control process uses the first CPU allocated for the host OS. This can be overruled by defining the cpus_nonpmd key. The recommended value is a single CPU ID. Only non-isolated CPUs are allowed. That is, CPUs reserved for other purposes (including OVS PMD threads) are not allowed. Make sure that the constraints listed above are fulfilled.

Note:  
Although it is possible to define a set of CPUs for the OVS control process, it is currently not recommended to assign more than one CPU.

OVS is configured to use DPDK only if both CPUs and huge pages are reserved for the owner ovs. OVS is configured to run without DPDK if there are no such reservations. It is a configuration error if only one of CPUs or huge pages is reserved. VMs must also use huge pages to be able to communicate over OVS/DPDK. Therefore, it is mandatory to reserve 1 GiB huge pages for VMs if OVS/DPDK is used.
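
As a hedged sketch, the following reservedHugepages fragment reserves 1 GiB huge pages for both the ovs owner and the VMs so that OVS/DPDK can be enabled. The counts are illustrative assumptions only; take the actual values from the System Dimensioning Guides for the used hardware.

ericsson:
  ...
  reservedHugepages:
    - owner: ovs
      size: 1GB
      count: 2
    - owner: vm
      size: 1GB
      count: 64
  ...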

Note:  
It is not supported to configure a CEE Region without DPDK acceleration for traffic network (provided by the corresponding OVS bridges).

4.1.2   Resource Allocation for vCIC

The CPU allocation for vCIC (owner: vcic) supports the boolean key isolated, which controls whether dedicated CPUs have to be allocated for the vCIC VM. Its value is either true (the default) or false.

If isolated: true is configured, the vCIC uses dedicated physical CPUs, and the VM's virtual CPUs are mapped one-to-one to the reserved physical CPUs. If isolated: false is configured, the count parameter only defines the number of virtual CPUs for the VM. The number of vCPUs to use is calculated as count × HTs/core. These vCPUs can float over the pool of non-isolated physical CPUs. If isolated: false is defined, the reservation can only use count; the use of cpus is not supported in this case.

Example 50   Use non-isolated CPUs for a vCIC with 4 vCPUs

ericsson:
  ...
  reservedCPUs:
    - owner: vcic
      count: 2
      isolated: false
  ...
Note:  
Using the isolated: false CPU allocation is not recommended on multi-server systems.

In the memory allocation for the vcic owner, the memnode key can specify the NUMA node ID on the physical host. If specified, then the memory of the VM is allocated from the given NUMA node. If the memnode key is not given, the VM memory is allocated from NUMA node 0.

Example 51   Reserve 10 GiB RAM for a vCIC using physical memory on NUMA node 1

ericsson:
  ...
  reservedHugepages:
    - owner: vcic
      size: 1GB
      count: 10
      memnode: 1
  ...

4.1.3   Automatic CPU Allocation Rules

The CEE installer allocates CPU resources using the rules described here when only the count key is used in CPU reservations.

Note:  
These rules are not applicable when the CPUs are reserved manually using the cpus key.

Two CPU cores are reserved for the host OS (that is, left as non-isolated) when the reservation count: auto is used for the owner: vm. If the number of CPU cores to be reserved for VMs is explicitly set, then all remaining CPU cores are used by the host OS. It is a configuration error if there are no CPU cores left for the host OS.

Four CPU cores are reserved for vCIC, and one CPU core is reserved for vFuel, if count: auto is used.

The first CPU core on NUMA node 0 is assigned to the host OS (left non-isolated). Then one CPU core is reserved for OVS PMD threads on each NUMA node. On NUMA node 0, the first available core is reserved for OVS PMD threads. On NUMA node 1, the last available CPU core is reserved for OVS PMD if necessary. Then the remaining cores are reserved for the host OS. After that, the CPUs are reserved for vFuel, vCIC, and tenant VMs in this order.

The first CPU used by the host OS is also assigned to the OVS control process unless it is explicitly defined. See Section 4.1.1 for details.

4.1.4   CPU Allocation for Single Server Deployment

The following example shows the recommended CPU allocation for a Single Server deployment:

Example 52   Allocating vCPUs Reserved for VMs

ericsson:
  ...
  reservedCPUs:
  
    - owner: vm
      cpus: 0,24,2,26,4,28,5,29,7,31,9,33,11,35,13,37,15,39,17,41,19,43
    - owner: ovs
      cpus: 3,27
      cpus_nonpmd: 1
    - owner: vcic
      count: 2
      isolated: false
    ...

This example illustrates some of the concepts discussed before:

For more information on the Single Server configuration, refer to Single Server System Dimensioning Guide, CEE R6.

4.2   NIC Information

The section nic_assignments defines role-based PCI addresses for all NICs in the system, for the control, traffic and storage networks. If the relevant hardware is not listed in nic_assignments, add or modify the relevant element in the list. This is mandatory.

The PCI addresses must be defined in the following format: XXXX:YY:ZZ.W where:
XXXX = domain, must always be 0000
YY = bus
ZZ = slot
W = function
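
For example, the PCI address 0000:06:00.1 denotes domain 0000, bus 06, slot 00, and function 1.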

Each assignment must be labeled with an anchor. See Section 1.5 for the use of anchors and aliases.

Example 53   Information on Available NIC Assignments

ericsson:
  nic_assignments:
  - &HP_GEN9_nic_assignment
    control0: "0000:06:00.0"
    control1: "0000:06:00.1"
    data0:    "0000:08:00.0"
    data1:    "0000:08:00.1"
    storage0: "0000:87:00.0"
    storage1: "0000:87:00.1"
  - &HP_GEN8_nic_assignment
    control0: "0000:04:00.0"
    control1: "0000:04:00.1"
    data0:    "0000:05:00.0"
    data1:    "0000:05:00.1"
    storage0: "0000:21:00.0"
    storage1: "0000:21:00.1"
  - &DELL_630_OEM_nic_assignment
    control0: "0000:01:00.0"
    control1: "0000:01:00.1"
    data0:    "0000:81:00.0"
    data1:    "0000:03:00.0"
    storage0: "0000:81:00.1"
    storage1: "0000:03:00.1"
  - &DELL_630_OEM_nic_assignment_with_sriov
    control0: "0000:01:00.0"
    control1: "0000:01:00.1"
    data0:    "0000:81:00.0"
    data1:    "0000:04:00.0"
    storage0: "0000:81:00.1"
    storage1: "0000:04:00.1"
  - &DELL_630_nic_assignment
    control0: "0000:01:00.0"
    control1: "0000:01:00.1"
    data0:    "0000:81:00.0"
    data1:    "0000:04:00.0"
    storage0: "0000:81:00.1"
    storage1: "0000:04:00.1"
  - &DELL_620_nic_assignment
    control0: "0000:01:00.0"
    control1: "0000:01:00.1"
    data0:    "0000:42:00.0"
    data1:    "0000:04:00.0"
    storage0: "0000:42:00.1"
    storage1: "0000:04:00.1"
  - &BSP_GEP5_nic_assignment
    control0: "0000:02:00.1"
    control1: "0000:02:00.2"
    data0:    "0000:01:00.0"
    data1:    "0000:01:00.1"
    storage0: "0000:04:00.0"
    storage1: "0000:04:00.1"
  ...

Example 54   Single Server NIC Assignment

ericsson:
  nic_assignments:
  - &DELL_630_OEM_single_server_nic_assignment
    control0: "0000:01:00.0"
    data0: "0000:81:00.0"
  ...

4.3   SR-IOV

The sriov key is to be configured individually for each blade that will have the SR-IOV feature enabled. The key is not to be configured for blades not using SR-IOV. To define SR-IOV on a specific blade, include the sriov key in the blade section, with the vf (number of virtual functions) and devices properties. The devices can be referenced with aliases that point to pre-defined anchors in sriov_configs at the beginning of the configuration file.

Additionally, the configuration of an arbitrary physical network for each SR-IOV Physical Function (PF) is supported. This makes the PF resource management significantly more dynamic.

More anchors can be added in sriov_configs if the used hardware configuration is different from the provided ones. See also Section 1.5 about YAML syntax.

See the configuration example below:

Example 55   8 VFs with 2 SR-IOV devices in blade 2

ericsson:
  ...
  sriov_configs:
  - &DELL_620_sriov_info
    - pci_address: "0000:41:00.0"
      bandwidth: 10000000
      physical_network: "PHY1"
    - pci_address: "0000:41:00.1"
      bandwidth: 10000000
      physical_network: "PHY1"
  - &DELL_630_OEM_sriov_info
    - pci_address: "0000:83:00.0"
      bandwidth: 10000000
      physical_network: "PHY1"
    - pci_address: "0000:83:00.1"
      bandwidth: 10000000
      physical_network: "PHY1"
  - &DELL_630_sriov_info
    - pci_address: "0000:84:00.0"
      bandwidth: 10000000
      physical_network: "PHY1"
    - pci_address: "0000:84:00.1"
      bandwidth: 10000000
      physical_network: "PHY1"
  ...
  shelf:
    -
      blade:
        -
          id: 2
          sriov:
            devices: *DELL_630_sriov_info
            vf: 8
      ...

sriov_configs

The dictionaries containing the PCI addresses of SR-IOV NICs (Physical Functions) have to be defined in sriov_configs. The N-th SR-IOV PF on a blade is the N-th listed item in the dictionary for the blade. Each listed item must contain the pci_address and physical_network parameters.

SR-IOV Blade Configuration

Note:  
The sriov section is optional, and only configurable on Dell hardware platforms.

sriov_segmentation_type

The global SR-IOV segmentation needs to be set if the above sriov configuration is enabled on at least one of the blades:

Example 56   SR-IOV Segmentation

ericsson:
  ...
  sriov_segmentation_type: vlan

The value of sriov_segmentation_type can be vlan or flat.

The following configuration is needed:

The cabling scheme yaml file must be updated according to the allocation of SR-IOV ports in the ToR switch. The SR-IOV ports have the value usage: sriov.

Example 57   SR-IOV Cabling Scheme Configuration

cabling_scheme:
  shelves:
    - blades:
      - blade_id: 1
        network interfaces:
        - {nic_id: 1, switch_id: 1, switch_port: 1, usage: data}
        - {nic_id: 2, switch_id: 2, switch_port: 1, usage: data}
        - {nic_id: 3, switch_id: 1, switch_port: 2, usage: storage}
        - {nic_id: 4, switch_id: 2, switch_port: 2, usage: storage}
        - {nic_id: 5, switch_id: 1, switch_port: 40, usage: sriov}
        - {nic_id: 6, switch_id: 2, switch_port: 40, usage: sriov}

4.3.1   Limitation

Dell R620 and R630 ToR traffic switch ports are not configured when a VM is booted in a Neutron network used for SR-IOV. See Section 2.3.1.1 for more information.

4.4   Bandwidth-Based Scheduling

4.4.1   Neutron Networks

The neutron_networks section describes the characteristics of the physical Neutron networks on a host. It defines the bond interfaces, Neutron name, and bandwidth capacity.

Format:

neutron_networks:
  <neutron-physical-network-name>:
    devices:
      - <device>
      …
    bandwidth: <capacity-of-network>

devices: lists the interfaces used to bond the network. bandwidth: defines the capacity of the network in kbit per second.

Note:  
Currently the only supported Neutron physical network is the default network. devices: is not used.

Example 58   Neutron_networks

ericsson:
  ...
  neutron_networks:
    - &neutron_networks_std_limit
      control:
        devices:
        - control0
        - control1
        bandwidth: 1000000
      default:
        devices:
        - data0
        - data1
        bandwidth: 10000000 
  ...
  shelf:
    -
      ...
      blade:
        ...
        -
          id: 3
          nic_assignment: *HP_GEN9_nic_assignment
          reservedHugepages:
            ...
          reservedCPUs:
            ...
          vswitch_capacity: <vswitch capacity>
          neutron_networks: *neutron_networks_std_limit
    

4.4.2   vSwitch Capacity

The vswitch_capacity attribute defines the capacity of the virtual switch on each host. The capacity is given in kilopackets per second. It is used for bandwidth-based scheduling.

Example 59   vswitch_capacity

ericsson:
  ...
  shelf:
    -
      ...
      blade:
        ...
        -
          id: 3
          nic_assignment: *HP_GEN9_nic_assignment
          reservedHugepages:
            ...
          reservedCPUs:
            …
          vswitch_capacity: <vswitch capacity>
Note:  
The value of vswitch_capacity is documented in the System Dimensioning Guides:

4.5   Neutron Configuration Options

The Neutron configuration file template (selected by using the information provided in Section 2.2) can be modified from the Ericsson default parameters and values. Such a change must be part of a system integration activity that includes CEE verification.

Note:  
CEE was verified only with unchanged Neutron configuration files.

Example 60   neutron_ericsson_user_spec.yaml Template File

# Without further configuration ericsson_user_spec is equivalent to
# ericsson_basic, but ericsson_user_spec is not locked down.
conf_type: ericsson_user_spec
# The .deb files must be included into the Fuel .iso.
# Will be installed on the given target groups.
# Multiple groups are allowed in the target list with comma separation.
# The following groups are usually enough to install the additional package:
# all - this means the package will be installed on all compute and
# controller nodes
# compute - this means that the package will be installed on all compute nodes
# controller - this means that the package will be installed on all controllers
additional_packages:
    - name: "<DEBIAN.PACKAGE>"
      target: ["<TARGET.NODE.GROUP>"]
    - name: "<DEBIAN.PACKAGE>"
      target: ["<TARGET.NODE.GROUP>"]
neutron_configuration_files:
    -
      name: neutron.conf
      option:
        # default if next line is not present: no service plugins
        DEFAULT/service_plugins: <COMMA.SEPARATED.LIST.OF.SERVICE.PLUGINS>
    -
      # It is possible to list multiple .ini files here and they will get
      # merged into a single plugin.ini.
      name: ml2_conf.ini
      option:
        # default if next line is not present: openvswitch
        ml2/mechanism_drivers: <COMMA.SEPARATED.LIST.OF.MECHANISM.DRIVERS>

Example 61   neutron_ericsson_cmx.yaml Template File

conf_type: ericsson_user_spec
additional_packages: ["neutron-plugin-bsp"]
neutron_configuration_files:
  -
    name: ml2_conf.ini
    option:
      ml2/mechanism_drivers: openvswitch,l2population,bsp
  -
    name: ml2_conf_bsp.ini
    option:
      ml2_bsp/management_ip: 192.168.2.2
      ml2_bsp/bsp_tenant: CEE
Note:  
Do not modify the initial indentation when editing the configuration files.

Example 62   neutron_ericsson_sdn_standard.yaml Template File

      option:
        # Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges
        # of VXLAN VNI IDs that are available for tenant network allocation
        ml2_type_vxlan/vni_ranges: 100:10000

Different <vni_min>:<vni_max> ranges need to be chosen for the different vPODs, with the following limitations:

4.5.1   Prevent ARP Spoofing

Note:  
CEE was verified only with unchanged Neutron configuration files.

The ARP spoofing rules can be enabled or disabled globally per deployment. If unspecified, prevention of ARP spoofing is enabled by default. If the IP addresses of the VNFs are manually configured, the ARP spoofing rules must be switched off. For more information, refer to the section about port security in the document CEE Network Infrastructure, Reference [5].

To disable the ARP spoofing prevention function, add the prevent_arp_spoofing parameter to the Neutron configuration file template with the value False:

conf_type: <NEUTRON_CONFIG.YAML>
neutron_configuration_files:
   -
     name: ml2_conf.ini
     option:
       agent/prevent_arp_spoofing: False

This workaround is only applicable when the neutron_ericsson_user_spec.yaml template is used.

4.6   Nova Configuration Options

The default values in the nova section are used for a regular deployment scenario. The parameters below are optional. If these parameters are not included in config.yaml, the default values apply. Do not modify the config.yaml file unless the listed use cases are required.

Example 63   Nova Configuration Options

ericsson:
  ...
  nova:
    disk_cachemodes: file=directsync,block=none
    enable_nova_quotas: True
    force_config_drive: True
    vms_use_raw_images: False
  ...   

4.7   Hardware Switch Configuration Options

The hw_switches: section in the config.yaml template provides the initial_setup: parameter. According to the settings in this section, the CEE installation deploys the initial configuration relevant to the used switch type, that is, Extreme traffic and storage switches in HP and Dell multi-server deployments, and CMXB in BSP hardware. No initial hardware switch configuration is used for the Single Server CEE and for user specific Neutron options.

The setting of initial_setup: in the hw_switches: section must be aligned with the setting of the neutron_config_yaml_file: in the neutron: section. Table 3 shows the settings that can be used together.

Table 3    Neutron Configuration File and Initial Setup Values

Hardware Deployment       neutron: neutron_config_yaml_file:     hw_switches: initial_setup:
BSP                       neutron_ericsson_cmx.yaml              cmx
HP, Dell multi-server     neutron_ericsson_extreme.yaml          extreme
Single Server             neutron_ericsson_user_spec.yaml        none
HDS                       neutron_ericsson_sdn_standard.yaml     none
Other, user specific      neutron_ericsson_user_spec.yaml        none

See the following sections for more information:

Example 64   Neutron Configuration File and Hardware Switch Settings for CMX, BSP

ericsson:
  ...
  neutron:
  ...
    neutron_config_yaml_file: neutron_ericsson_cmx.yaml
  ...
  hw_switches:
    initial_setup: cmx
    switching_scheme_yaml_file: cmx_switch.yaml
      ...

Example 65   No Hardware Switch Configuration, Single Server

ericsson:
  ...
  neutron:
  ...
    neutron_config_yaml_file: neutron_ericsson_user_spec.yaml
  ...
  hw_switches:
    initial_setup: none
  ...   

4.8   Multiple Border Gateways

Two Border Gateways (BGWs) are specified in the cabling schema, 4_x770_hp.yaml, 4_x670V_hp.yaml, or 2_x670V_hp.yaml:

Example 66   Two Border Gateways in Cabling Schema

    external_components:
      border_gateways:
        - id: 1
          name: BGW-1
          switch_id: 1
          ports: [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96]
          master: 85
          partition: 4x10G
        - id: 2
          name: BGW-2
          switch_id: 2
          ports: [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96]
          master: 85
          partition: 4x10G
      ...

The BGWs are also specified in the switch configuration file:

Example 67   Border Gateways in Switch Configuration File

  switching:
    -
      name: TRAFFIC_SWA_X770
      device_id: 1
      ...
      bgw_config:
        -
          id: 1
          vlans:
            -
              name: cee_om_sp
              tagged: true
              ip: <IP.OF.SWITCH-A/PREFIXSIZE>
            -
              name: subrack_om_sp
              tagged: true
              ip: <IP.OF.THE.SWITCH/PREFIXSIZE>
      ...
    -
      name: TRAFFIC_SWB_X770
      device_id: 2
      ...
      bgw_config:
        -
          id: 2
          vlans:
            -
              name: cee_om_sp
              tagged: true
              ip: <IP.OF.SWITCH-B/PREFIXSIZE>
            -
              name: subrack_om_sp
              tagged: true
              ip: <IP.OF.THE.SWITCH/PREFIXSIZE>
      ...

Adding further border gateways requires the following:

Example 68   Additional Border Gateway in Cabling Schema

  external_components:
    border_gateways: 
       - id: x
         name: BGW-x
         switch_id: <1 or 2>
         ports: [y, y+1, y+2, y+3, ... , y+n]
         master: y
         partition: 4x10G
Note:  
A partition of 1x40G is also possible, if 40G connections are used towards the BGW.

Switch ID is 1 or 2, depending on the switch to which the BGW is connected.


The switch configuration file can be updated as shown in the following example:

Example 69   Switch Configuration File Update for Multiple BGWs

  switching:
    -
      name: TRAFFIC_SWA_X770
      device_id: <1 or 2>
      ...
      bgw_config:
        -
          id: z
          vlans:
            -
              name: cee_om_sp
              tagged: true
              ip: <IP.OF.SWITCH-A/PREFIXSIZE>   # same as the other BGW, in the same switch
            -
              name: subrack_om_sp
              tagged: true
              ip: <IP.OF.THE.SWITCH/PREFIXSIZE>   # same as the other BGW, in the same switch

4.9   Change of Border Gateway Settings

To configure BGW with settings different from the default in CEE, the startup configuration of the Extreme switches must be handled differently for traffic and storage.

Global process:

  1. If the system is already deployed, change the configuration version in the following file: /mnt/cee_config/<switch_model>_switch.yaml
    switch_config:
    restore_conf_version: 15B_R10 <---- must be replaced with a higher number, for example 15B_R11
  2. Change the default configuration of the switches in the following file:
    /opt/ecs-fuel-utils/python_libdir/extreme_conf/sw_conf_XXX.xsf
  3. If the hardware configuration contains dedicated storage switches, make sure that the storage-specific default configuration file is named as follows: sw_conf_XXX_storage.xsf. Tip: Make a copy of the original traffic-specific file and add "_storage" to the name of the new file.
  4. Modify the default traffic and storage switch configuration in the files: /opt/ecs-fuel-utils/python_libdir/extreme_conf/sw_conf_XXX.xsf
    and
    /opt/ecs-fuel-utils/python_libdir/extreme_conf/sw_conf_XXX_storage.xsf

Finally, continue with the normal installation process.

4.10   Change of Linux I/O Scheduler

The user can select the I/O scheduler for the Compute hosts. This changes the strategy used for scheduling I/O requests.

Example 70   Selecting IO Scheduler Options

ericsson:
  ...   
  timezone: Etc/UTC
  compute_io_scheduler: deadline
  ...
  neutron:
  ...

The following values can be configured for compute_io_scheduler:

4.11   Time Zone

The time zone to be used on Fuel and the deployed nodes in the CEE region is stated in the config.yaml template. The time zone configured in the templates is UTC (Etc/UTC).

Example 71   Time Zone

ericsson:
  ...   
  timezone: Etc/UTC
  ...

For a list of available time zone settings, execute the following command on a Linux system, for example on the Kickstart Server:
ls -R /usr/share/zoneinfo
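
As an illustration, the region could be configured with a different time zone by replacing the default value with one of the listed zone names; Europe/Stockholm below is only an assumed example.

ericsson:
  ...
  timezone: Europe/Stockholm
  ...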

4.12   Secure NBI API Endpoints

NBI API endpoints are exposed over HTTP secured with SSL/TLS. To establish the needed trust on the client side, a set of CA certificates must exist.

The options are in the Ericsson namespace, so each option is prefixed with "ericsson".

Example 72   Secure NBI API Options

ericsson:
  security:
    openssl:
      ciphersuites: 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:
                     ECDHE-RSA-AES256-GCM-SHA384'
      protocols: TLSv1.2
    gnutls:
      priority: 'NONE:+SUITEB128:+SUITEB192:+VERS-TLS-1.2'
    haproxy:
      sslprotocols: no-sslv3 no-tlsv10 no-tlsv11
      sslrate: 100
      sslconns: 40
    nbi:
      atlas:
        hostname: <ATLAS_HOSTNAME>
        certfilename: <ATLAS_CERTFILENAME>
        cafilename: <ATLAS_CAFILENAME>
      controller:
        hostname: <CONTROLLER_HOSTNAME>
        certfilename: <CONTROLLER_CERTFILENAME>
        cafilename: <CONTROLLER_CAFILENAME>
    ...


security.openssl.ciphersuites

The string containing the list of allowed OpenSSL cipher suites. Validate the support of cipher suites with the external hosts that use the REST API and change the specified values if needed.

Examples of external hosts using the REST API:

Recommended setting:
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384

security.openssl.protocols

The string containing the list of allowed SSL/TLS protocols.

Recommended setting: TLSv1.2

security.gnutls.priority

The string containing gnuTLS protocol/ciphersuite settings.

Recommended setting:
NONE:+SUITEB128:+SUITEB192:+VERS-TLS-1.2

security.haproxy.sslprotocols

The string containing accepted/disabled SSL/TLS protocols to offer.

Recommended setting:
no-sslv3 no-tlsv10 no-tlsv11

security.haproxy.sslrate

The number of SSL sessions allowed to be established per second.

Recommended setting: 100
Recommended setting for Dell Single server: 20

security.haproxy.sslconns

The number of SSL connections to allow per listener.

Recommended setting: 40

Note:  
One session requires two connections.

security.nbi.controller.hostname

The string containing the hostname by which the CIC is referenced through the NBI.

security.nbi.controller.certfilename

The string containing the filename (relative to /mnt/cee_config) of the CIC certificate.

security.nbi.controller.cafilename

The string containing the filename (relative to /mnt/cee_config) of the CA certificate signing the CIC certificate.

security.nbi.atlas.hostname

The string containing the hostname of the Atlas VM that is reachable from the CICs. Only needed if Atlas is installed.

security.nbi.atlas.certfilename

The string containing the filename (relative to /mnt/cee_config) of the Atlas certificate when Atlas is installed.

security.nbi.atlas.cafilename

The string containing the filename (relative to /mnt/cee_config) of the CA certificate signing the Atlas certificate if Atlas is installed.

Hostnames for the controllers and Atlas are user-supplied. End users must make sure that their hostnames are unique and that the hostnames used for cloud endpoints resolve to the proper IP addresses.

Note:  
The certificate files are either acquired from a third party or generated by the operator's own CA, and are out of the scope of this document. Refer to SW Installation in Single Server Deployment and SW Installation in Multi-Server Deployment for more information on the certificate for the Northbound Interface (NBI) required for secure HTTPS access to CEE.

4.13   Fuel Administration Network

fuel_ctrl_sp is used for PXE boot of Compute hosts (host OS) and vCIC nodes.

Note:  
Make sure that the Fuel IP address is not included in the dynamic IP range defined by dhcp_pool_start and dhcp_pool_end.

If the IP address of the Fuel administration network is different from the network specified in config.yaml and the IP and VLAN plan Reference [1], update fuel_ctrl_sp before running CEE_RELEASE/scripts/install_vfuel.sh:

Example 73   Fuel Administration Network

ericsson:
  ...
  networks:
    ...
    -
      name: fuel_ctrl_sp
      mos_name: fuelweb_admin
      cidr: 192.168.0.11/24
      dhcp_pool_start: 192.168.0.20
      dhcp_pool_end: 192.168.0.253
      gateway: 192.168.0.254
      dns: 10.51.40.100
    ...

4.14   Location of Logs

Note:  
Not applicable to HDS.

The location of core and crash dump logs can be changed using the following parameters:

Example 74   Crashes Local

ericsson:
  ...
  logging:
    crashes: local
    forward_to_fuel: false
    forward_to_controller: true
    forward_to_external: false
    external_server_ip:
    external_server_port:
    local_on_controller: true
    local_on_compute: false
  ...

Example 75   Setting Server IP and Port for External Server

ericsson:
  ...
  logging:
    forward_to_fuel: false
    forward_to_controller: true
    forward_to_external: true
    external_server_ip: 1.2.3.4
    external_server_port: 5678
    local_on_controller: true
    local_on_compute: false
  ...
Note:  
The boolean parameters of logging must be included in the config.yaml. The default values mentioned below refer to the values originally set in the template.

crashes

The destination of crashes (core and kernel crash dumps). crashes can have the value local or cics. HP, Dell, and Single Server store crashes locally on each blade/server by default. BSP stores most crashes in the CICs by default, since disk space is scarce in most BSP installations, but still saves crashes locally where needed. If cics is selected for Single Server, local crashes are still used when needed, for example at a Compute host kernel crash or a core dump of qemu.
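
A minimal sketch of selecting CIC-based crash storage; the remaining logging parameters, which must still be present in config.yaml, are omitted here for brevity:

ericsson:
  ...
  logging:
    crashes: cics
    ...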

forward_to_fuel

Indicates whether to forward logs from both controller and Compute to Fuel or not. Boolean parameter, default value: false

forward_to_controller

Indicates whether to forward logs from Compute to controller or not. Boolean parameter, default value: false on HP/Dell/Single Server, true on BSP

forward_to_external

Indicates whether to forward logs from both controller and Compute to external log server or not. Boolean parameter, default value: false

external_server_ip

The IP address of an external log server. Mandatory if forward_to_external is set to true

external_server_port

The port of an external log server. Mandatory if forward_to_external is set to true

local_on_controller

Enable or disable local logging on controller. Boolean parameter, default value: true

local_on_compute

Enable or disable local logging on Compute. Boolean parameter, default value: true on HP/Dell/Single Server, false on BSP

4.15   Link Monitoring for CEE on BSP

4.15.1   Control Network

4.15.1.1   Link Redundancy Based on ARP Monitoring

ARP setup is only applicable to BSP. To enable link monitoring, configure the ARP settings as follows:

Example 76   ARP Link Redundancy

ericsson:
  ...
  arp_setup:
    control:
      schema: <schema>
      network: fw-admin
      [router: <cmx_virtual_router>]
      [target_ips:]
        [-<ip_1>]
        [- ...]
        [-<ip_n>]    
    ...

ARP monitoring is only implemented for control networks with the following parameters:

Example 77   ARP Monitoring Example Setup

ericsson:
  ...
  arp_setup:
    control:
      schema: open-single
      network: fw-admin
      router: om_sibb_vr
      target_ips:
        - 192.168.9.111
        - 192.168.9.112
  ...

4.15.2   Traffic Network

4.15.2.1   Enable Link Redundancy Based on CFM in CEE Region

Connectivity Fault Management (CFM) link redundancy is only implemented for traffic networks. To enable CFM link redundancy, configure CFM settings based on the following:

Example 78   CFM Link Redundancy

ericsson:
  ...
  cfm_setup:
    traffic:
      enabled: true
      ccm_interval: 100
  ...

Valid ccm_interval values are 3, 10, 100, 1000, 10000, 60000, and 600000 ms. The recommended value is 100 ms.

To configure blades with CFM, see Section 4.15.2.2.

Set ccm_interval to 100 to make sure that the failover is quick enough.

4.15.2.2   Configure CFM Roles

Link monitoring based on CFM is only applicable to BSP. To enable CFM on the traffic network of the blades, set the following:

Example 79   Active role

ericsson:
  ...
      blade:
        -
          id: 2
          cfm_role: active
        ...
 

Example 80   Passive role

ericsson:
  ...
      blade:
        -
          id: 2
          cfm_role: passive
        ...

Enable CFM on all blades in a BSP deployment. Configure three blades as active and ensure that all these blades are located in subrack 0. Configure all remaining blades as passive.
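
As an illustration, assuming that blades 1 to 3 are located in subrack 0, the roles could be assigned as in the following sketch (the blade IDs are examples only):

ericsson:
  ...
      blade:
        -
          id: 1
          cfm_role: active
        -
          id: 2
          cfm_role: active
        -
          id: 3
          cfm_role: active
        -
          id: 4
          cfm_role: passive
        ...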

4.16   Reduced Footprint Monitoring Data Collection

reduced_footprint enables reduced KPI data collection in Zabbix to save storage, computing, and network resources. It uses a set of alternative KPI/metric lists for Zabbix that gather and store less measurement data.

The default value of monitoring_data_collection is false.

Both keys are optional; if omitted, the default values apply.

Example 81   Reduced Footprint

ericsson:
  ...
  reduced_footprint:
    monitoring_data_collection: false
  ...

4.17   Zabbix CEE User

The zabbix_cee_user section contains the options to configure the user group, username, and password of the read-only user in Zabbix. All keys are optional. If the keys are not present, the following default values are used for the user group, user, and password, respectively:

Example 82   Zabbix CEE User

ericsson:
  ...
  zabbix_cee_user:
    zabbix_user_group: <ZABBIX_USER_GROUP>
    zabbix_user: <ZABBIX_USER>
    zabbix_password: <ZABBIX_PASSWORD>
  ...

zabbix_user_group: String, the name of the user group.

zabbix_user: String, the name of the user.

zabbix_password: String, the password of the user.

Use single quote marks as shown in the example below:

Example 83   Zabbix CEE User Example

ericsson:
  ...
  zabbix_cee_user:
    zabbix_user_group: 'CEEUserGroup'
    zabbix_user: 'ceeuser'
    zabbix_password: 'examplepassword'
  ...

4.18   Initial Memory Amount of vCIC

Fuel handles the virt role as follows:

8 GiB is set as the initial memory, which is consumed from the compute host memory solely for the creation of the VM. In some cases it is advised to set a different value for the initial memory, for example:

The initial vCIC memory amount can be changed by adding the initial_vcic_memory parameter to config.yaml. The value is in MiB.

Example 84   Configure Initial vCIC Memory

ericsson:
  ...
  initial_vcic_memory: 8192

Reference List

[1] IP and VLAN plan, 2/102 62-CRA 119 1862/5 Uen
[2] VNX5400 SW Installation, 3/1531-CSA 113 125/5 Uen
[3] BSP External Network Connectivity, 2/1553-APP 111 01 Uen
[4] YAML Specification. http://www.yaml.org/spec/1.2/spec.html
[5] CEE Network Infrastructure, 1/102 62-CRA 119 1862/5 Uen


Copyright

© Ericsson AB 2016. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document Trademark Information.
