CEE on HDS Installation
Cloud Execution Environment

Contents

1       Introduction
1.1     Prerequisites
1.1.1   Documents
1.1.2   Hardware and Software Required
1.1.3   Installation Data

2       Network Configuration
2.1     Control Network Configuration for CEE vPOD
2.1.1   Agent Network for CEE vPOD
2.1.2   Configuration of CEE Control Networks Using CCM
2.2     Data Network Configuration for CEE vPOD for CEE without SDN
2.2.1   Create HDS L2 GWs
2.2.2   Configure L2 Networks
2.2.3   Create HDS L2 GW Interfaces for L2 Networks
2.3     Data Network Configuration for CEE vPOD with Tightly Integrated SDN
2.3.1   Create HDS L2 GWs
2.3.2   Configure L2 Networks
2.3.3   Create HDS L2 GW Interfaces for L2 Networks

3       Fuel Host Preparation
3.1     Install Ubuntu Host
3.2     Establish Fuel Host Server Connectivity
3.3     Install Dependent Packages

4       Fuel Installation
4.1     Connect Fuel Host to CCM

5       CEE Deployment
5.1     Temporary pre-Installation Steps
5.1.1   CEE Installation Fails If a GRE Tunnel ID or VXLAN VNI Above 65535 Is Used
5.1.2   Audit Log Contains Several Unfiltered CM-HA Related Events
5.2     Creation of OVSDB Interface
5.3     NIC Firmware Version Check and Upgrade
5.4     CEE Installation
5.5     Temporary Installation Steps
5.5.1   Deployment Can Fail If Data PCI Slot For Blades Cannot Be Read
5.6     Temporary post-Installation Steps
5.6.1   ml2_conf_sriov.ini Not Properly Populated After CEE 6.6 Installation on HDS
5.7     Configuration of OVSDB Interface for HW-VTEP Access
5.8     vFuel Migration into the CEE Region
5.9     Region Expansion

6       Concluding Steps

Appendix

7       CA and NBI Certificates for Secure HTTPS Access

Reference List

1   Introduction

This document is part of the installation flow for CEE on HDS 8000 deployment and describes how to create a CEE region on a Hyperscale Datacenter System (HDS) Virtualized Performance Optimized Datacenter (vPOD), including CEE software installation. Complete this procedure when directed here from CEE Installation:

  1. Start the procedure in CEE Installation.
  2. Continue with this document when directed here from CEE Installation.
  3. Return to CEE Installation and carry out the remaining steps.

For the complete installation flow, refer to section Installation Flow in CEE Installation.

Component Names

In this document, the terms L2 Gateway (GW), External L2 GW, and L2 GW Interface refer to HDS functions, unless explicitly stated otherwise.

In this document, certain components are named to reflect that the CEE vPOD installed in this procedure is the only CEE vPOD of the data center customer, for example, fuelhost and cee_om_sp. In the case of multiple CEE vPODs, it is recommended to use the naming convention <component_name><x>, where x identifies the CEE vPOD, such as 1 for the first CEE vPOD, 2 for the second CEE vPOD, and so on.

This naming convention is applicable for the following:

The following are examples for this naming convention:

1.1   Prerequisites

This section describes the prerequisites that must be fulfilled before CEE software can be installed.

1.1.1   Documents

Activities in the following documents must be performed before the steps in this instruction are performed:

1.1.2   Hardware and Software Required

A dedicated vPOD must be created for CEE. The CEE vPOD must have the required dedicated servers assigned to it. For more information, refer to the corresponding step in the document CEE Installation.

The required SW can be downloaded from SW Gateway. If you have problems with the download procedure, contact the next level of support.

The following software is always required:

Server BIOS

Intel Virtualization Technology for Directed I/O (VT-d) must be enabled in the BIOS settings for all servers of the CEE vPOD.

Fuel Host

A server must be selected as dedicated Fuel host server.

The server designated as Fuel host must not contain any data or volume groups from previous CEE deployments.

The selected server must meet the following minimum requirements:

Note:  
If the server selected as dedicated Fuel host server during installation is to be integrated into the CEE region through region expansion after installation, the server must also meet the minimum requirements for compute hosts, refer to Multi-Server System Dimensioning Guide, CEE 6.

Access is required to one of the following:

HDS External L2 GWs

For the successful deployment of CEE, one HDS external Layer 2 Gateway (L2 GW) must be present for the external connectivity of cee_om_sp, atlas_sbi_sp, and similar networks. This HDS external L2 GW can be shared among CEE vPODs.

Depending on the used networking solution, additional HDS external L2 GWs must be present:

NIC Firmware

When using servers with Intel X710 NICs assigned to DPDK, the firmware version of the X710 NICs must be 6.0.1 or above. To verify the firmware version and, if applicable, upgrade it, see Section 5.3.

1.1.3   Installation Data

The following data is needed:

Table 1    Installation Data

  Data Type       Description

  Passwords       Initial vFuel server root user password is r00tme (used for installation only)
                  Username and password for the CCM as DC Customer user

  Certificates    Certificates for the vCIC and Atlas Northbound Interfaces (NBIs), see Section 7

  yaml files      Site-specific config.yaml in /mnt/cee_config, refer to Configuration File Guide
                  Neutron configuration file (HW- and configuration-specific, see CEE_RELEASE/neutron/)
                  Host networking configuration file (HW-specific file from CEE_RELEASE/host_net_templates/)

  IP addresses    The local version of the IP and VLAN Plan updated with customer- and site-specific values
                  IP addresses of the CCM and the CCM GUI

2   Network Configuration

This section contains CEE networking requirements. The exact steps to be executed on HDS are out of the scope of this document. The steps described in this section must be executed as described in the respective workflow of Hyperscale Datacenter System 8000 Customer Documentation, Reference [2].

For detailed information on CEE Network requirements, refer to the local IP and VLAN Plan and CEE Network Infrastructure, Reference [1].

2.1   Control Network Configuration for CEE vPOD

By moving the untagged network from the hds-equipment-mgmt network to the fuel_ctrl_sp network, the servers use the DHCP service from Fuel instead of the DHCP service from CCM. To achieve this host networking requirement of CEE, the following changes must be performed on the control network interfaces of the servers:

Configure L2 connectivity according to the respective workflows in Hyperscale Datacenter System 8000 Customer Documentation, Reference [2].

2.1.1   Agent Network for CEE vPOD

For the HDS agent in the CEE vPOD to communicate to the CCM virtual machine (VM), additional interfaces need to be configured on the CCM VM.

Order the configuration of additional interfaces from the DC Owner for the network hds-agent according to the local IP and VLAN Plan.

2.1.2   Configuration of CEE Control Networks Using CCM

Control networks must be configured on EAS, including agent network VLAN creation and port assignment.

2.2   Data Network Configuration for CEE vPOD for CEE without SDN

Note:  
Before the configuration of data networks, make sure that the necessary HDS external L2 GWs are available in the data center and are assigned to the CEE vPOD. For more information, see HDS External L2 GWs.

Data network configuration consists of the following:

2.2.1   Create HDS L2 GWs

CEE on HDS without SDN requires two HDS L2 gateways defined for each vPOD for the external connection of, for example, the cee_om_sp and atlas_sp networks. These gateways must be attached to the external HDS L2 GWs in CCM.

2.2.2   Configure L2 Networks

For the successful installation of CEE, it is required to configure VLANs appropriately on the data network of the CEE vPOD. This can be done from both CCM GUI and CLI. The following VLANs need to be configured:

To set up the data network of the CEE vPOD region, do the following in CCM:

  1. Assign the CEE VLANs to the vPOD.
  2. Create a LAG interface for the Ethernet interfaces of the compute servers used for CEE traffic domain on all the compute servers within the CEE vPOD.
  3. Assign VLANs to the CEE server interfaces on all servers within the CEE vPOD. The VLANs to be assigned and the respective interfaces are the following:

    VLAN                CEE Server Interface
    cee_om_sp           LAG for traffic domain
    iscsi_san_pda       storage0
    iscsi_san_pdb       storage1
    swift_san_sp        storage0 and storage1
    migration_san_sp    storage0 and storage1
    atlas_nbi_sp        LAG for traffic domain
    atlas_sbi_sp        LAG for traffic domain
    glance_san_sp       storage0 and storage1

2.2.3   Create HDS L2 GW Interfaces for L2 Networks

Create HDS L2 GW interfaces to the HDS L2 GWs created in Section 2.2.1 for the following L2 networks:

2.3   Data Network Configuration for CEE vPOD with Tightly Integrated SDN

Note:  
Before the configuration of data networks, make sure that the necessary HDS external L2 GWs are available in the data center and are assigned to the CEE vPOD. For more information, see HDS External L2 GWs.

Data network configuration consists of the following:

2.3.1   Create HDS L2 GWs

CEE on HDS with SDN requires two HDS L2 gateways defined for each vPOD for the external connection of, for example, the cee_om_sp and atlas_sp networks.

Two additional HDS L2 GWs are required for each vPOD for HW-VTEP, which are attached to OVSDB Interface at a later step of the installation process.

These HDS L2 GWs are used for CEE tenant external connectivity (OpenStack L2 GW function). These GWs need to be attached in the CCM to the HDS external L2 GW listed in HDS External L2 GWs.

2.3.2   Configure L2 Networks

For the successful installation of CEE, it is required to configure VLANs appropriately on the data network of the CEE vPOD. This can be done from both CCM GUI and CLI. The following VLANs need to be configured:

To set up the data network of the CEE vPOD region, do the following in CCM:

  1. Do one of the following depending on the fabric used:
    • In the case of L2 fabric, create sdnc_sbi_sp with NetworkType: Data and ProviderNetworkType: Vlan.
    • In the case of L3 fabric, create sdnc_sbi_sp with NetworkType: Data. ProviderNetworkType has to be left empty, without value.

  2. For both L2 and L3 fabrics, create sdn_ul_sp with NetworkType: Data and ProviderNetworkType: VxlanUnderlay.
  3. Do one of the following depending on the fabric used:
    • In the case of L2 fabric, create the remaining networks with NetworkType: Data and ProviderNetworkType: Vlan.
    • In the case of L3 fabric, create the remaining networks with NetworkType: Data. ProviderNetworkType has to be left empty, without value.

2.3.3   Create HDS L2 GW Interfaces for L2 Networks

Create HDS L2 GW interfaces to the HDS L2 GWs created in Section 2.3.1 for the following L2 networks:

3   Fuel Host Preparation

The preparation of the host designated as kickstart server includes the following:

3.1   Install Ubuntu Host

Order the installation of standard Ubuntu 14.04 from the data center owner on the server designated as kickstart server in the CEE vPOD, using virtual media. The following values must be used during the installation:

In Not Installed Packages the following packages must be selected:

The GRUB boot loader must be installed to the master boot record.

3.2   Establish Fuel Host Server Connectivity

The data center owner must establish external connectivity to the Fuel host server through the DC-GW on the cee_om_sp network.

The data center owner must add a route on the CCM VM so that CCM knows where to send replies to Fuel. This route must point toward the cee_om_sp network through a corresponding route on the DC-GW. The route also needs to be persistent, that is, it must not disappear when the CCM reboots or fails over.
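
A minimal sketch of such a route is shown below, assuming a Linux-based CCM VM; the destination network, prefix length, and next-hop values are placeholders that must be taken from the local IP and VLAN Plan, and the exact persistence mechanism on the CCM VM is determined by the DC Owner:

    # Add the route at runtime:
    ip route add <fuel_ctrl_sp_network>/<prefix_length> via <dc_gw_address_on_cee_om_sp>
    # Example of making the route persistent on a Debian/Ubuntu style system,
    # as a post-up line in /etc/network/interfaces:
    #   post-up ip route add <fuel_ctrl_sp_network>/<prefix_length> via <dc_gw_address_on_cee_om_sp>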

3.3   Install Dependent Packages

The following dependent packages have to be installed separately:

Do the following:

  1. Log on to fuelhost through SSH using the IP address set on the data network (bond_address_ip).
  2. Transfer the above Debian packages to fuelhost.
  3. Start a terminal and switch to sudo:

    sudo -i

  4. Change to the directory of the packages.
  5. Install the packages:

    dpkg -i genext2fs_1.4.1-4build1_amd64.deb

    dpkg -i ruby1.9.1_1.9.3.484-2ubuntu1.2_amd64.deb ruby_1.9.3.4_all.deb libruby1.9.1_1.9.3.484-2ubuntu1.2_amd64.deb

    dpkg -i sshpass_1.05-1_amd64.deb

    dpkg -i python-libxml2_2.9.1+dfsg1-3ubuntu4.7_amd64.deb python-libvirt_1.2.2-0ubuntu2_amd64.deb virtinst_0.600.4-3ubuntu2_all.deb
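
After installation, it can be useful to confirm that the dependent packages are in the installed state. A minimal check, assuming the package names used above:

    dpkg -l genext2fs ruby1.9.1 sshpass python-libxml2 python-libvirt virtinst | grep ^ii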

4   Fuel Installation

Do the following:

  1. Transfer the release tarball to the Fuel host server root directory.
  2. Extract the contents of the tarball.
  3. Update the configuration files with the previously prepared ones.
  4. Transfer the certificates required for CEE to the certs directory.
  5. Clean up the unused configuration files.

    Example of list of files needed for the CEE installation:

    root@fuelhost:~/CEE_RELEASE# ls -lR
    .:
    total 56
    drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:00 cabling_scheme
    drwxrwxr-x 2 sysadmin sysadmin  4096 Jul 22 13:01 certs
    -rw-r--r-- 1 sysadmin sysadmin 13746 Jul 22 17:41 config.yaml
    drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:01 host_net_templates
    drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:01 neutron
    drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 12:46 scripts
    drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:00 switch_config
    
    ./cabling_scheme:
    total 0
    
    ./certs:
    total 16
    -rw-r--r-- 1 sysadmin sysadmin    4433 Jul 22 13:01 cacert.pem
    -rw-r--r-- 1 sysadmin sysadmin    3825 Jul 22 13:01 dc315atlas.pem
    -rw-r--r-- 1 sysadmin sysadmin    3825 Jul 22 13:01 dc315nbi.pem
    
    ./host_net_templates:
    total 12
    -rw-rw-r-- 1 sysadmin sysadmin  8236 Jul 22 13:01 host_nw_hds.yaml

    ./neutron:
    total 4
    -rw-rw-r-- 1 sysadmin sysadmin   814 Jul 22 13:01 neutron_ericsson_user_spec.yaml

    ./scripts:
    total 44
    -rwxr-xr-x 1 sysadmin sysadmin 16409 Jul 22 13:01 install_vfuel.sh
    -rwxr-xr-x 1 sysadmin sysadmin 16938 Jul 22 13:01 migrate_fuel.sh
    -rw-r--r-- 1 sysadmin sysadmin  2014 Jul 22 13:01 parseyaml.rb

    ./switch_config:
    total 0

  6.  
    Note:  
    Only perform this step if CEE is to be deployed with tightly integrated SDN.

    Edit the following files:

    • /etc/cee/openstack_config/compute_multi_server.yaml
    • /etc/cee/openstack_config/controller_multi_server.yaml

    Add the following in both files under the section nova_config:

      DEFAULT/default_schedule_zone:
        value: 'nova'
    

  7. Make sure that the time and timezone on the Fuel host server are in accordance with the settings in config.yaml (a verification sketch is given after this list):

    date

  8. Install vFuel as described in Preparation of Kickstart Server, in the section on vFuel installation in a Libvirt-managed VM.
  9. Change the Fuel password as described in the respective section of Preparation of Kickstart Server.
  10. Add the relevant Fuel plugin packages as described in the mandatory and optional Fuel plugin sections of Preparation of Kickstart Server.
    Note:  
    The HDS Agent Fuel plugin is mandatory in a CEE on HDS deployment.
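
A hedged verification sketch for the time and timezone check above, assuming an Ubuntu 14.04 Fuel host; the timezone value shown is only a placeholder and must match the value in config.yaml:

    date                  # current date, time, and timezone abbreviation
    cat /etc/timezone     # configured timezone of the Fuel host
    # If the timezone does not match config.yaml, it can be changed, for example:
    # echo "Europe/Stockholm" > /etc/timezone && dpkg-reconfigure -f noninteractive tzdata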

4.1   Connect Fuel Host to CCM

Establish a permanent route between the Fuel host and the CCM VM. Do the following:

  1. Log on to fuelhost through SSH using the IP address set on the data network (bond_address_ip).
  2. Create an SNAT rule by executing the following command:

    iptables -t nat -A POSTROUTING -s <network_ip_address>/<prefix_length> -o <interface> -j SNAT --to-source <fuelhost_ip_address>

    where the variables correspond to the following:

    • <network_ip_address>/<prefix_length> is the network address and prefix length of the fuel_ctrl_sp network
    • <interface> is the name of the tagged interface defined as described in Section 3.2
    • <fuelhost_ip_address> is the static IP address of the Fuel host on the cee_om_sp network
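
The following is a worked example of the SNAT rule; the addresses and interface name are hypothetical and must be replaced with values from the local IP and VLAN Plan:

    # Hypothetical values: fuel_ctrl_sp = 192.168.0.0/24, tagged interface = vlan101,
    # Fuel host address on cee_om_sp = 10.10.10.5
    iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o vlan101 -j SNAT --to-source 10.10.10.5

    # Verify that the rule is present in the NAT table:
    iptables -t nat -L POSTROUTING -n -v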

5   CEE Deployment

5.1   Temporary pre-Installation Steps

This section describes the temporary pre-installation workarounds that are needed for this release. Carry out these workarounds before starting the installation.

5.1.1   CEE Installation Fails If a GRE Tunnel ID or VXLAN VNI Above 65535 Is Used

If the tunnel_id_start and tunnel_id_end parameters are configured with values above 65535 in the neutron section of the config.yaml file, CEE installation fails with the error "given ID is greater than the maximum of 65535".

Associated trouble report: HW61835.

Workaround: Define tunnel IDs below 65536 in config.yaml.

Example:

neutron:
  mgmt_vip: 192.168.2.15
  mgmt_subnetmask: 24
  tunnel_id_start: 22000
  tunnel_id_end: 31999

5.1.2   Audit Log Contains Several Unfiltered CM-HA Related Events

Excessive audit logging is triggered when CM-HA logs on to the infrastructure nodes, because all program executions during shell initialization are logged, not only the session start and end events. The information in these logs is not useful and is therefore not intended for the audit trail.

Associated trouble report: HW74686.

Workaround: Do the following:

Before CEE deployment, adjust the audit configuration template /var/www/nailgun/plugins/ericsson_logging-1.0/deployment_scripts/puppet/modules/ericsson_audit_logging/templates/auditd/audit.rules.erb on vFuel:

  1. Insert the below lines before the line that begins with # Monitoring for all:

    -a exit,never -F auid=1100 -F arch=b64 -S execve

    -a exit,never -F auid=1100 -F arch=b32 -S execve

This excludes auditing of program executions for the CM-HA user (UID 1100) on all CEE systems.
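
After CEE deployment, a quick way to confirm on a host that the exclusion rules were loaded is to list the active audit rules. This is only a sketch; the exact rule formatting printed by auditctl can vary:

    auditctl -l | grep 'auid=1100'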

5.2   Creation of OVSDB Interface

Note:  
Only perform the instructions described in this section if CEE is deployed with tightly integrated SDN. Only one OVSDB interface can be created for each CEE vPOD.

Configure OVSDB Interface for the sdnc_sbi_sp network as described in the relevant topic of Hyperscale Datacenter System 8000 Customer Documentation, Reference [2], using the following values:

Parameter           Value

L2 Network ID       sdnc_sbi_sp

IP Addresses        <switch_ip>
                    In the case of L2 fabric, <switch_ip> refers to all spine and leaf switches.
                    In the case of L3 fabric, <switch_ip> refers to the leaf switches.

Prefix Length       Contact the DC Owner for this information.

Number of VxLANs    Contact the DC Owner for this information.

IP addresses of spine and leaf switches are available in the local copy of IP and VLAN Plan.

5.3   NIC Firmware Version Check and Upgrade

To check the firmware version of any X710 NICs assigned to DPDK, do the following on each compute host:

  1. Log on to the compute host as root using SSH. For more information, refer to the CEE Connectivity User Guide.
  2. Check NIC driver binding and record the PCI address and device name of any X710 NIC assigned to DPDK using the following command:

    dpdk-devbind.py -s

    An example of the printout is the following:

    root@compute-0-3:~#  dpdk-devbind.py -s
    Network devices using DPDK-compatible driver
    ============================================
    0000:83:00.0 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=
    0000:83:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=
     
    Network devices using kernel driver
    ===================================
    0000:01:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb unused=vfio-pci 
    0000:01:00.1 'I350 Gigabit Network Connection' if=eth1 drv=igb unused=vfio-pci 
    0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth2 drv=ixgbe unused=vfio-pci 
    0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth3 drv=ixgbe unused=vfio-pci 
    0000:83:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=eth5 drv=i40e unused=vfio-pci 
    0000:83:00.2 'Ethernet Controller X710 for 10GbE SFP+' if=eth6 drv=i40e unused=vfio-pci 
     
    Other network devices
    =====================
    <none>
     
    Crypto devices using DPDK-compatible driver
    ===========================================
    <none>
     
    Crypto devices using kernel driver
    ==================================
    <none>
     
    Other crypto devices
    ====================
    
  3. Check the firmware version of the NICs:

    Follow the steps described in the topic on checking NIC firmware upgrade necessity in Hyperscale Datacenter System 8000 Customer Documentation, Reference [2]. A complementary ethtool sketch is given after this procedure.

  4. If the NIC firmware version is lower than 6.0.1, update the firmware version according to the procedure described by the NIC manufacturer. Refer to Reference [3].
    Note:  
    In the procedure provided by the NIC manufacturer, the following step must be changed:

    Instead of the chmod 755 nvmupdate.cfg command, chmod 755 nvmupdate64e must be used.


  5. After the firmware update, restart the server to activate the NIC firmware by executing the following command:

    shutdown -r now
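
As a complementary sanity check for step 3, assuming that at least one port of the same X710 NIC is still bound to the kernel i40e driver (eth5 and eth6 in the example printout above), the firmware version can typically also be read with ethtool; the authoritative procedure remains the one in Reference [2]:

    ethtool -i eth5 | grep firmware-version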

5.4   CEE Installation

  1. Change the working directory to /opt/ecs-fuel-utils with the following command:

    cd /opt/ecs-fuel-utils

  2. Set up a screen session to ensure that the installation process is not interrupted:

    # screen -S installcee -L

    If the connection to vFuel is lost, log on to vFuel again and reattach the screen session with the below command:

    # screen -r installcee

    Note:  
    The nohup option can cause installation failure and must not be used.

  3. Install CEE by running the installcee script on Fuel:
    ./installcee.sh

The time required for command execution is approximately two to three hours for a system with 10 compute servers.
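If the SSH connection to vFuel is lost while installcee.sh is running, the progress can also be followed from the screen log. With the -L option, GNU screen typically writes its output to screenlog.0 in the directory where the session was started, assumed here to be /opt/ecs-fuel-utils:

    tail -f /opt/ecs-fuel-utils/screenlog.0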

Check that the printout is the following:

Ericsson CEE installed successfully.

5.5   Temporary Installation Steps

This section describes the temporary installation workaround that is needed for this release. Carry out this workaround if there are problems during the installation process and the installation does not complete.

5.5.1   Deployment Can Fail If Data PCI Slot For Blades Cannot Be Read

CEE deployment can fail during config.yaml validation if the PCI slot addresses of the blades cannot be read by the system, and the following error message is displayed: "AssertionError: NIC role 'data1' is assigned to pci slot '0000:af:00.0' which does not exists in blade 7 in shelf 0".

Associated trouble report: HW71245

Workaround: Perform the below steps:

  1. Restart the blade found in the error message.
  2. Re-run installcee.sh.

5.6   Temporary post-Installation Steps

This section describes temporary procedures that must be executed in this release after a successful installation. Carry out these workarounds after the installation script has run successfully.

5.6.1   ml2_conf_sriov.ini Not Properly Populated After CEE 6.6 Installation on HDS

It is possible that the ml2_conf_sriov.ini file is not populated on the vCICs for SR-IOV after CEE installation. For example, the 'supported_pci_vendor_devs' field is empty.

Associated trouble report: HW74332.

Workaround:

  1. After CEE installation is completed, run the eri_sriov_controller plugin from vFuel to correctly update the ml2_conf_sriov.ini file:

    fuel node --node <vcic_nodes> --tasks eri_sriov_controller --force

  2. Replace <vcic_nodes> with the comma-separated list of the node IDs for all three vCICs.
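
To confirm the result, the SR-IOV configuration can be inspected on each vCIC. The path below is the standard Neutron ML2 SR-IOV configuration location and is an assumption for this deployment:

    grep supported_pci_vendor_devs /etc/neutron/plugins/ml2/ml2_conf_sriov.ini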

5.7   Configuration of OVSDB Interface for HW-VTEP Access

Note:  
Only perform the instructions described in this section if CEE is deployed with tightly integrated SDN.

Do the following:

  1. In the OVSDB Interface, create a new OVSDB Controller with the following values:

    Parameter     Value
    IP Address    <csc_vip>
    Port          6640

    For the VIP of the Cloud SDN Controller (CSC) on sdnc_sbi_sp, fetch astute.yaml from any of the vCICs; a retrieval sketch is given after this procedure. The CSC VIP is listed under the vips section of astute.yaml. An example of the relevant section is the following:

    sdnc_sbi_vip:
      ipaddr: 192.168.41.27
      is_user_defined: false
      namespace: haproxy
      network_role: sdnc-sbi-vip
      node_roles:
      - controller

  2. Attach the HDS L2 GWs created in Section 2.2.1 or Section 2.3.1 to the OVSDB Interface.
  3. If SR-IOV is used, attach one additional Ethernet Interface per switch physical interface to the OVSDB Interface.
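
A retrieval sketch for step 1, assuming astute.yaml is available at /etc/astute.yaml on the vCIC, which is the usual location in Fuel-based deployments:

    grep -A 6 'sdnc_sbi_vip:' /etc/astute.yaml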

5.8   vFuel Migration into the CEE Region

To migrate vFuel into the CEE region, do the following:

  1. Log on to the kickstart server.
  2. Execute the following script:

    CEE_RELEASE/scripts/migrate_fuel.sh

    An example of the output is:

./migrate_fuel.sh
migrate_fuel.sh.info: Checking current Fuel state
migrate_fuel.sh.info: Preparing to migrate Fuel
migrate_fuel.sh.info: Fuel will be migrated to compute-0-4 (192.168.0.23)
migrate_fuel.sh.info: The vFuel image and the Domin XML will also be prepared on compute-0-6 (192.168.0.25)
migrate_fuel.sh.info: Shutting down current Fuel
migrate_fuel.sh.info: Waiting for Fuel to complete shutdown
migrate_fuel.sh.info: Copying Fuel disk image to compute-0-4 (172.30.160.1)
sending incremental file list
fuel_br3160.qcow2
 68,730,224,640 100%  107.69MB/s    0:10:08 (xfr#1, to-chk=0/1)
migrate_fuel.sh.info: Copying Fuel disk image to compute-0-6 (172.30.160.2)
sending incremental file list
fuel_br3160.qcow2
 68,730,224,640 100%  103.17MB/s    0:10:35 (xfr#1, to-chk=0/1)
migrate_fuel.sh.info: Starting new vFuel inside CEE region on compute-0-4 (192.168.0.23)
migrate_fuel.sh.info: Waiting for new vFuel to start up
migrate_fuel.sh.info: Waiting for new vFuel to be ready
migrate_fuel.sh.info: New vFuel ready
migrate_fuel.sh.info: Performing post migrate actions
migrate_fuel.sh.info: Post migrate actions done
migrate_fuel.sh.info: Fuel is successfully migrated to compute-0-4 (192.168.0.23)

5.9   Region Expansion

Expand the CEE region to include the compute server previously used as the kickstart server. To expand the CEE region, follow the instructions of the document Region Expansion.

6   Concluding Steps

  1. Assign the Atlas_nbi and Atlas_sbi VLANs to the LAG interfaces of the CEE compute server which is planned to host the Atlas VM.
  2. In addition to the above networks, the CEE neutron VLANs need to be configured for all the compute server ports of the CEE region. This can be done at any time before VM instantiation in the CEE region and is not required during CEE deployment.
  3. Segregate the three compute hosts hosting vCIC. Do the following on any of the vCICs:
    1. List the current host aggregates:

      nova aggregate-list

      An example of the printout:

      +----+------+-------------------+
      | Id | Name | Availability Zone |
      +----+------+-------------------+
      +----+------+-------------------+
      
    2. Create the new host aggregate:

      nova aggregate-create infra_HA infra_AZ

      An example of the printout:

      +----+----------+-------------------+-------+------------------------------+
      | Id | Name     | Availability Zone | Hosts | Metadata                     |
      +----+----------+-------------------+-------+------------------------------+
      | 5  | infra_HA | infra_AZ          |       | 'availability_zone=infra_AZ' |
      +----+----------+-------------------+-------+------------------------------+
      
    3. Add each compute host hosting vCIC to the host aggregate:

      nova aggregate-add-host infra_HA compute-<shelf_id>-<blade_id>.domain.tld

      An example of the command:

      nova aggregate-add-host infra_HA compute-2-1.domain.tld
      nova aggregate-add-host infra_HA compute-2-2.domain.tld
      nova aggregate-add-host infra_HA compute-2-3.domain.tld

      An example of the printout:

Host compute-2-3.domain.tld has been successfully added for aggregate 5
+----+------------+-------------------+------------------------------------------------------------------------------+------------------------------+
| Id | Name       | Availability Zone | Hosts                                                                        |           Metadata           |
+----+------------+-------------------+------------------------------------------------------------------------------+------------------------------+
| 5  | infra_HA   | infra_AZ          | 'compute-2-1.domain.tld', 'compute-2-2.domain.tld', 'compute-2-3.domain.tld' | 'availability_zone=infra_AZ' |
+----+------------+-------------------+------------------------------------------------------------------------------+------------------------------+
  4. After installation, there is an active NeLS Server Communication Problem alarm, because the NeLS server is not configured and not available.
    1. To configure the connection to the NeLS server, follow the instructions in the Runtime Configuration Guide.
    2. If the alarm does not clear, follow the instructions in the NeLS Server Communication Problem alarm OPI.
  5. If the customized QEMU with increased VirtIO queue size was configured in config.yaml, isolate the compute hosts according to VirtIO queue size. For more information, see SW Installation in Multi-Server Deployment.
  6. Continue with the relevant section of the document CEE Installation.

Appendix

7   CA and NBI Certificates for Secure HTTPS Access

Certification Authority (CA) and Northbound Interface (NBI) certificates are required for secure HTTPS access to CEE.

Make sure to perform the following tasks before starting the installation process:

  1. Choose a unique hostname for the vCIC NBI.
  2. Choose a unique hostname for the Atlas NBI.
  3. Obtain certificates for the NBIs from an authorized Certification Authority (CA).

    The following certificate files are needed:

    • CA certificate (or chain of certificates) of the organization issuing the Atlas NBI
    • CA certificate (or chain of certificates) of the organization issuing the vCIC NBI
    • Atlas NBI certificate
    • vCIC NBI certificate
    Note:  
    Atlas and vCIC certificates can be issued by the same CA, or by two separate CAs.

    The Common Name (CN) and at least one DNS entry in the Subject Alternative Name (SAN) attribute must contain the publicly known hostname chosen for the NBI, so that the certificate refers to this publicly known hostname. The private key belonging to the certificate must not be encrypted.

  4. Concatenate the vCIC NBI certificate and private key into a single PEM file under /mnt/cee_config on vFuel. Perform the same for the Atlas NBI. A concatenation sketch is given after this list.

    ASCII format is preferred for the individual certificates.

    Note:  
    The pkcs12 binary format is commonly used. This output format contains multiple entities in a single binary file and uses encryption. Issue the following command to convert it to PEM format:

    openssl pkcs12 -in <inputfile> -out <outputfile> -nodes

    The -nodes option is needed to save the private key in unencrypted format.

    In case other binary formats need to be converted, refer to Reference [5] or Reference [6].


  5. Update the config.yaml file with the necessary information. Refer to the Configuration File Guide for updating the publicly known hostname and other relevant options in the config.yaml file.
  6. Update the DNS resolver to contain the hostname and IP address pairs for the NBI.
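
A minimal concatenation sketch for step 4, using hypothetical input and output file names; the actual certificate and key file names are site-specific:

    cat vcic_nbi_cert.pem vcic_nbi_key.pem > /mnt/cee_config/vcic_nbi.pem
    cat atlas_nbi_cert.pem atlas_nbi_key.pem > /mnt/cee_config/atlas_nbi.pem

    # Optionally check that the certificate subject refers to the publicly known hostname:
    openssl x509 -in /mnt/cee_config/vcic_nbi.pem -noout -subject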

Reference List

[1] CEE Network Infrastructure, 1/102 62-CRA 119 1862/5
[2] Hyperscale Datacenter System 8000 Customer Documentation, 2/1551-LZN 901 5032
[3] Non-Volatile Memory (NVM) Update Utility for Intel® Ethernet Adapters—Linux. https://downloadcenter.intel.com/download/25791/Ethernet-Non-Volatile-Memory-NVM-Update-Utility-for-Intel-Ethernet-Adapters-Linux-?product=82947
[4] Limitations and Workarounds for Cloud Execution Environment (CEE) 6.5.1, 5/109 21-AZE 102 01/5-11
[5] SSL Support. https://support.ssl.com/Knowledgebase/Article/View/19/0/der-vs-crt-vs-cer-vs-pem-certificates-and-how-to-convert-them
[6] Thawte Licensing. https://search.thawte.com/support/ssl-digital-certificates/index?page=content&id=SO26449