1 Introduction
This document is part of the installation flow for CEE on HDS 8000 deployment and describes how to create a CEE region on a Hyperscale Datacenter System (HDS) Virtualized Performance Optimized Datacenter (vPOD), including CEE software installation. Complete this procedure when directed here from CEE Installation:
- Start the procedure in CEE Installation.
- Continue with this document when directed here from CEE Installation.
- Return to CEE Installation and carry out the remaining steps.
For the complete installation flow, refer to section Installation Flow in CEE Installation.
Component Names
In this document, the terms L2 Gateway (GW), External L2 GW, and L2 GW Interface refer to HDS functions, unless explicitly stated otherwise.
In this document, certain components are named to reflect that the CEE vPOD installed in this procedure is the only CEE vPOD of the data center customer, for example, fuelhost and cee_om_sp. In the case of multiple CEE vPODs, it is recommended to use the naming convention <component_name><x>, where x identifies the CEE vPOD, such as 1 for the first CEE vPOD, 2 for the second CEE vPOD, and so on.
This naming convention is applicable for the following:
- cee_om_sp
- atlas_nbi
- atlas_sbi
- fuelhost
For example, with multiple CEE vPODs the names become cee_om_sp1 and fuelhost1 for the first CEE vPOD, cee_om_sp2 and fuelhost2 for the second, and so on.
1.1 Prerequisites
This section describes the prerequisites that must be fulfilled before CEE software can be installed.
1.1.1 Documents
Activities in the following documents must be performed before the steps in this instruction are performed:
- Relevant sections of CEE Installation
- Configuration File Guide
- The limitations in Limitations and Workarounds for Cloud Execution Environment (CEE), Reference [4], must be known and considered
1.1.2 Hardware and Software Required
A dedicated vPOD must be created for CEE. The CEE vPOD must have the required dedicated servers assigned to it. For more information, refer to the corresponding step in the document CEE Installation.
The required SW can be downloaded from SW Gateway. If you have problems with the download procedure, contact the next level of support.
The following software is always required:
- CEE SW release tarball:
- CXC1737883_4-<release>.tar
- CXC1737883_4-<release>.tar.md5
- CXC1737883_4-<release>.tar.sha1
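The .md5 and .sha1 files can be used to verify the integrity of the downloaded tarball. As a hedged sketch, compute the checksums locally and compare them with the downloaded values:
md5sum CXC1737883_4-<release>.tar
cat CXC1737883_4-<release>.tar.md5
sha1sum CXC1737883_4-<release>.tar
cat CXC1737883_4-<release>.tar.sha1
The computed digests must match the values in the corresponding .md5 and .sha1 files before proceeding.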
Server BIOS
Intel Virtualization Technology for Directed I/O (VT-d) must be enabled in the BIOS settings for all servers of the CEE vPOD.
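Whether VT-d is active can also be cross-checked informally from a running Linux system; this is only a sketch, and the BIOS setting remains authoritative:
dmesg | grep -e DMAR -e IOMMU
If the kernel reports DMAR/IOMMU initialization, VT-d is enabled; an empty printout typically indicates that the BIOS setting is disabled.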
Fuel Host
A server must be selected as dedicated Fuel host server.
The server designated as Fuel host must not contain any data or volume groups from previous CEE deployments.
The selected server must meet the following minimum requirements:
- 8 core 64-bit x86 CPU
- 16 GB RAM
- 128 GB root disk
- Note:
- If the server selected as dedicated Fuel host server during installation is to be integrated to the CEE region through region expansion after installation, the server must also meet the minimum requirements for compute hosts, refer to Multi-Server System Dimensioning Guide, CEE 6.
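Once an operating system is running on the candidate server, the minimum requirements above can be sanity-checked with standard commands; the following is a hedged sketch only:
nproc
free -g
lsblk -d -o NAME,SIZE
nproc must report at least 8 cores, free -g at least 16 GB of RAM, and lsblk a root disk of at least 128 GB.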
Access is required to one of the following:
- The remote console, accessed from the CCM GUI. This is the preferred option.
- iDRAC or equivalent access.
For the successful deployment of CEE, one HDS external Layer 2 Gateway (L2 GW) must be present for the external connectivity of cee_om_sp, atlas_sbi_sp, and similar networks. This HDS external L2 GW can be shared among CEE vPODs.
Depending on the used networking solution, additional HDS external L2 GWs must be present:
- In the case of tightly integrated SDN with SR-IOV, one HDS external L2 GW per CEE vPOD for the spine switch cluster, used for HW-VTEP.
- In the case of L3 fabric, one HDS external L2 GW per CEE vPOD for the Border Leaf Cluster, used for HW-VTEP. The Border Leaf Cluster consists of the leaf switches connected to the DC-GW.
NIC Firmware
When using servers with Intel X710 NICs assigned to DPDK, the firmware version of the X710 NICs must be 6.0.1 or above. To verify the firmware version and, if applicable, update it, see Section 5.3.
1.1.3 Installation Data
The following data is needed:
| Data Type | Description |
|---|---|
| Passwords | Initial vFuel server root user password is r00tme (used for installation only) |
| Certificates | Certificates for the vCIC and Atlas Northbound Interfaces (NBIs), see Section 7 |
| yaml files | Site-specific config.yaml in /mnt/cee_config, refer to Configuration File Guide |
| | Neutron configuration file (HW and configuration-specific, see CEE_RELEASE/neutron/) |
| | Host networking configuration file (HW-specific file from CEE_RELEASE/host_net_templates/) |
| IP addresses | The local version of IP and VLAN Plan updated with customer and site-specific values |
2 Network Configuration
This section contains CEE networking requirements. The exact steps to be executed on HDS are out of the scope of this document. The steps described in this section must be executed as described in the respective workflow of Hyperscale Datacenter System 8000 Customer Documentation, Reference [2].
For detailed information on CEE Network requirements, refer to the local IP and VLAN Plan and CEE Network Infrastructure, Reference [1].
2.1 Control Network Configuration for CEE vPOD
By moving the untagged network from the hds-equipment-mgmt network to the fuel_ctrl_sp network, the servers use the DHCP service from Fuel instead of the DHCP service from CCM. To meet this host networking requirement of CEE, the following changes must be made to the control network interfaces of the servers:
- The untagged VLAN must be changed to fuel_ctrl_sp on the switch ports to which the control network (1G) interfaces of all servers to be used as compute hosts in the CEE vPOD are connected.
- The cee_ctrl_sp VLAN and the <hds_agent> VLAN must be added as members on the switch ports to which the control network (1G) interfaces of all servers to be used as compute hosts in the CEE vPOD are connected.
Configure L2 connectivity according to the respective workflows in Hyperscale Datacenter System 8000 Customer Documentation, Reference [2].
2.1.1 Agent Network for CEE vPOD
For the HDS agent in the CEE vPOD to communicate to the CCM virtual machine (VM), additional interfaces need to be configured on the CCM VM.
Order the configuration of additional interfaces from the DC Owner for the network hds-agent according to the local IP and VLAN Plan.
2.1.2 Configuration of CEE Control Networks Using CCM
Control networks must be configured on EAS, including agent network VLAN creation and port assignment.
2.2 Data Network Configuration for CEE vPOD for CEE without SDN
- Note:
- Before the configuration of data networks, make sure that the necessary HDS external L2 GWs are available in the data center and are assigned to the CEE vPOD. For more information, see HDS External L2 GWs.
Data network configuration consists of the following:
- Creating HDS L2 GWs referring to HDS external L2 GWs
- Creating L2 networks and assigning Ethernet interfaces
- HDS L2 Gateway configuration
2.2.1 Create HDS L2 GWs
CEE on HDS without SDN requires two HDS L2 gateways defined for each vPOD for the external connection of, for example, the cee_om_sp and atlas_sp networks. These gateways must be attached to the external HDS L2 GWs in CCM.
2.2.2 Configure L2 Networks
For the successful installation of CEE, it is required to configure VLANs appropriately on the data network of the CEE vPOD. This can be done from both CCM GUI and CLI. The following VLANs need to be configured:
- cee_om_sp
- swift_san_sp
- migration_san_sp
- iscsi_san_pda
- iscsi_san_pdb
- atlas_nbi_sp
- atlas_sbi_sp
- glance_san_sp
To set up the data network of the CEE vPOD region, do the following in CCM:
- Assign the CEE VLANs to the vPOD.
- Create a LAG interface for the Ethernet interfaces of the compute servers used for the CEE traffic domain on all the compute servers within the CEE vPOD.
- Assign VLANs to the CEE server interfaces on all servers within the CEE vPOD. The VLANs to be assigned and the respective interfaces are the following:

| VLAN | CEE Server Interface |
|---|---|
| cee_om_sp | LAG for traffic domain |
| iscsi_san_pda | storage0 |
| iscsi_san_pdb | storage1 |
| swift_san_sp | storage0 and storage1 |
| migration_san_sp | storage0 and storage1 |
| atlas_nbi_sp | LAG for traffic domain |
| atlas_sbi_sp | LAG for traffic domain |
| glance_san_sp | storage0 and storage1 |
2.2.3 Create HDS L2 GW Interfaces for L2 Networks
Create HDS L2 GW interfaces to the HDS L2 GWs created in Section 2.2.1 for the following L2 networks:
- cee_om_sp
- atlas_nbi_sp
- atlas_sbi_sp
2.3 Data Network Configuration for CEE vPOD with Tightly Integrated SDN
- Note:
- Before the configuration of data networks, make sure that the necessary HDS external L2 GWs are available in the data center and are assigned to the CEE vPOD. For more information, see HDS External L2 GWs.
Data network configuration consists of the following:
- Creating HDS L2 GWs referring to HDS external L2 GWs
- Creating L2 networks
- Assigning Ethernet interfaces
- HDS L2 Gateway configuration
2.3.1 Create HDS L2 GWs
CEE on HDS with SDN requires two HDS L2 gateways defined for each vPOD for the external connection of, for example, the cee_om_sp and atlas_sp networks.
Two additional HDS L2 GWs are required for each vPOD for HW-VTEP; these are attached to the OVSDB Interface at a later step of the installation process.
These HDS L2 GWs are used for CEE tenant external connectivity (OpenStack L2 GW function). These GWs need to be attached in the CCM to the HDS external L2 GW listed in HDS External L2 GWs.
2.3.2 Configure L2 Networks
For the successful installation of CEE, it is required to configure VLANs appropriately on the data network of the CEE vPOD. This can be done from both CCM GUI and CLI. The following VLANs need to be configured:
- cee_om_sp
- swift_san_sp
- migration_san_sp
- iscsi_san_pda
- iscsi_san_pdb
- sdnc_sbi_sp
- sdn_ul_sp
- sdnc_internal_sp
- sdnc_sig_sp
- glance_san_sp
- Note:
- Configure glance_san_sp network only in cases when Glance is set up on the storage switching domain. For dimensioning and configuration details, refer to Multi-Server System Dimensioning Guide, CEE 6 and Configuration File Guide documents.
To set up the data network of the CEE vPOD region, do the following in CCM:
- Do one of the following depending on the fabric used:
- In the case of L2 fabric, create sdnc_sbi_sp with NetworkType: Data and ProviderNetworkType: Vlan.
- In the case of L3 fabric, create sdnc_sbi_sp with NetworkType: Data. ProviderNetworkType has to be left empty, without value.
- For both L2 and L3 fabrics, create sdn_ul_sp with NetworkType: Data and ProviderNetworkType: VxlanUnderlay.
- Do one of the following depending on the fabric used:
- In the case of L2 fabric, create the remaining networks with NetworkType: Data and ProviderNetworkType: Vlan.
- In the case of L3 fabric, create the remaining networks with NetworkType: Data. ProviderNetworkType has to be left empty, without value.
2.3.3 Create HDS L2 GW Interfaces for L2 Networks
Create HDS L2 GW interfaces to the HDS L2 GWs created in Section 2.3.1 for the following L2 networks:
- cee_om_sp
- atlas_nbi_sp
- atlas_sbi_sp
- sdnc_sig_sp
3 Fuel Host Preparation
The preparation of the host designated as kickstart server includes the following:
- Installation of standard Ubuntu 14.04 on the designated host (done by the data center owner)
- Establishing connectivity of the host designated as
kickstart server (Fuel host), including the following:
- Establishing external connectivity of the designated host (done by the data center owner)
- Establishing persistent route between designated host and CCM (done by the data center owner)
- Installation of dependent packages (done by the data center customer)
3.1 Install Ubuntu Host
Order the installation of standard Ubuntu 14.04 from the data center owner on the server designated as kickstart server in the CEE vPOD using virtual media. The following values must be used at the installation:
- Detect keyboard layout: No
- Primary network interface: eth0: Intel Corporation I350 Gigabit Backplane Connection
- Hostname: fuelhost
- Full name for the new user: sysadmin
- Username for your account: sysadmin
- Encrypt your home directory: No
- Unmount partitions that are in use: Yes
- Partitioning method: Guided – use entire disk and set up LVM
- Write the changes to disks: Yes
- Amount of volume group to use for guided partitioning: Continue (accept default value)
- Write the changes to disks: Yes
- HTTP proxy information (blank for none): Continue (leave blank)
- How do you want to manage upgrades on this system: No automatic updates
- Choose software to install. Select:
- OpenSSH server
- VM host
- Manual package selection
In Not Installed Packages the following packages must be selected:
- misc\main\vlan
- net\main\ifenslave
- python\main\python-netaddr
- python\main\python-pycurl
- python\main\python-urlgrabber
- python\main\python-yaml
The GRUB boot loader must be installed to the master boot record.
3.2 Establish Fuel Host Server Connectivity
The data center owner must establish external connectivity to the Fuel host server through the DC-GW on the cee_om_sp network.
The data center owner must add a route on the CCM VM, so that CCM knows where to send the reply to Fuel. Traffic on this route must reach the cee_om_sp network through a route on the DC-GW. The route on the CCM VM must also be persistent, that is, it must not disappear when the CCM reboots or does a failover.
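The exact route configuration on the CCM VM is owned by the data center owner and depends on the local IP and VLAN Plan. As an illustration only, with the placeholders <cee_om_sp_network>/<prefix_length> and <next_hop_towards_dc_gw> standing for site-specific values, the route could look like:
ip route add <cee_om_sp_network>/<prefix_length> via <next_hop_towards_dc_gw>
A route added this way is not persistent on its own; persistence is typically achieved by adding the route to the CCM network configuration according to the local policy.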
3.3 Install Dependent Packages
The following dependent packages have to be installed separately:
- genext2fs
- ruby
- sshpass
- virtinst
Do the following:
- Log on to fuelhost through SSH using the IP address set on the data network (bond_address_ip).
- Transfer the above Debian packages to fuelhost.
- Start a terminal and switch to sudo:
sudo -i
- Change to the directory of the packages.
- Install the packages:
dpkg -i genext2fs_1.4.1-4build1_amd64.deb
dpkg -i ruby1.9.1_1.9.3.484-2ubuntu1.2_amd64.deb ruby_1.9.3.4_all.deb libruby1.9.1_1.9.3.484-2ubuntu1.2_amd64.deb
dpkg -i sshpass_1.05-1_amd64.deb
dpkg -i python-libxml2_2.9.1+dfsg1-3ubuntu4.7_amd64.deb python-libvirt_1.2.2-0ubuntu2_amd64.deb virtinst_0.600.4-3ubuntu2_all.deb
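After the installation, the presence of the packages can be verified; a quick hedged check:
dpkg -l genext2fs ruby sshpass virtinst | grep ^ii
Each of the four packages must be listed with the ii (installed) state.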
4 Fuel Installation
Do the following:
- Transfer the release tarball to the Fuel host server root directory.
- Extract the contents of the tarball.
- Update the configuration files with the previously prepared ones.
- Transfer the certificates required for CEE to the certs directory.
- Clean up the unused configuration files.
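As an illustration of the first two steps, assuming the tarball was downloaded to a workstation and the Fuel host is reachable over SSH at a hypothetical address 10.10.10.5, the transfer and extraction could look like the following; adjust names, addresses, and paths to the actual release and environment:
scp CXC1737883_4-<release>.tar root@10.10.10.5:/root/
ssh root@10.10.10.5
cd /root
tar -xf CXC1737883_4-<release>.tar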
Example of list of files needed for the CEE installation:
root@fuelhost:~/CEE_RELEASE# ls -lR
.:
total 56
drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:00 cabling_scheme
drwxrwxr-x 2 sysadmin sysadmin  4096 Jul 22 13:01 certs
-rw-r--r-- 1 sysadmin sysadmin 13746 Jul 22 17:41 config.yaml
drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:01 host_net_templates
drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:01 neutron
drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 12:46 scripts
drwxr-xr-x 2 sysadmin sysadmin  4096 Jul 22 13:00 switch_config

./cabling_scheme:
total 0

./certs:
total 16
-rw-r--r-- 1 sysadmin sysadmin 4433 Jul 22 13:01 cacert.pem
-rw-r--r-- 1 sysadmin sysadmin 3825 Jul 22 13:01 dc315atlas.pem
-rw-r--r-- 1 sysadmin sysadmin 3825 Jul 22 13:01 dc315nbi.pem

./host_net_templates:
total 12
-rw-rw-r-- 1 sysadmin sysadmin 8236 Jul 22 13:01 host_nw_hds.yaml

./neutron:
total 4
-rw-rw-r-- 1 sysadmin sysadmin 814 Jul 22 13:01 neutron_ericsson_user_spec.yaml

./scripts:
total 44
-rwxr-xr-x 1 sysadmin sysadmin 16409 Jul 22 13:01 install_vfuel.sh
-rwxr-xr-x 1 sysadmin sysadmin 16938 Jul 22 13:01 migrate_fuel.sh
-rw-r--r-- 1 sysadmin sysadmin  2014 Jul 22 13:01 parseyaml.rb

./switch_config:
total 0
- Edit the following files:
- /etc/cee/openstack_config/compute_multi_server.yaml
- /etc/cee/openstack_config/controller_multi_server.yaml
Add the following in both files under the section nova_config:
DEFAULT/default_schedule_zone:
  value: 'nova'
- Make sure that the time and time zone in the Fuel host server are in accordance with the setting in config.yaml:
date
- Install vFuel as described in Preparation of Kickstart Server, in the section on vFuel installation in a Libvirt-managed VM.
- Change Fuel password as described in the respective section of Preparation of Kickstart Server.
- Add the relevant Fuel plugin packages as described in
the mandatory and optional Fuel plugin sections of Preparation of Kickstart Server.
4.1 Connect Fuel Host to CCM
Establish a permanent route between the Fuel host and the CCM VM. Do the following:
- Log on to fuelhost through SSH using the IP address set on the data network (bond_address_ip).
- Create an SNAT rule by executing the following command:
iptables -t nat -A POSTROUTING -s <network_ip_address>/<prefix_length> -o <interface> -j SNAT --to-source <fuelhost_ip_address>
where the variables correspond to the following:
- <network_ip_address>/<prefix_length> is the IP address and subnet mask of the fuel_ctrl_sp network
- <interface> is the name of the tagged interface defined as described in Section 3.2
- <fuelhost_ip_address> is the static IP address of the Fuel host on the cee_om_sp network.
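For illustration only, with a hypothetical fuel_ctrl_sp network of 10.20.0.0/24, a hypothetical tagged interface named vlan305, and a hypothetical Fuel host cee_om_sp address of 10.10.10.5, the command would look like this:
iptables -t nat -A POSTROUTING -s 10.20.0.0/24 -o vlan305 -j SNAT --to-source 10.10.10.5
Note that iptables rules are not persistent across reboots by default; make the rule persistent according to the local policy, for example with the iptables-persistent package on Ubuntu.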
5 CEE Deployment
5.1 Temporary pre-Installation Steps
This section describes the temporary pre-installation workaround that is needed for this release. Carry out this workaround before starting the installation.
5.1.1 CEE Installation Fails If a GRE Tunnel ID or VXLAN VNI Above 65535 Is Used
If the tunnel_id_start and tunnel_id_end parameters are configured with values above 65535 in the neutron section of the config.yaml file, CEE installation fails with the error " given ID is greater than the maximum of 65535".
Associated trouble report: HW61835.
Workaround: Define tunnel IDs below 65536 in config.yaml.
Example:
neutron:
  mgmt_vip: 192.168.2.15
  mgmt_subnetmask: 24
  tunnel_id_start: 22000
  tunnel_id_end: 31999
5.1.2 Audit Log Contains Several Unfiltered CM-HA Related Events
Excessive audit logging is triggered when CM-HA logs to the infrastructure nodes, because all program executions during shell initialization are logged, not only the session start / end events. The information in these logs is not useful, and therefore not intended for the audit trail.
Associated trouble report: HW74686.
Workaround: Do the following:
Before CEE deployment, adjust the audit configuration template /var/www/nailgun/plugins/ericsson_logging-1.0/deployment_scripts/puppet/modules/ericsson_audit_logging/templates/auditd/audit.rules.erb on vFuel:
- Insert the below lines before the line that begins with # Monitoring for all :
-a exit,never -F auid=1100 -F arch=b64 -S execve
-a exit,never -F auid=1100 -F arch=b32 -S execve
This will exclude auditing program executions for the CM-HA user having UID of 1100 on all CEE systems.
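A minimal sketch of this edit, assuming the # Monitoring for all line occurs once in the template, is to insert the two rules with sed on vFuel and then verify the result:
cd /var/www/nailgun/plugins/ericsson_logging-1.0/deployment_scripts/puppet/modules/ericsson_audit_logging/templates/auditd
sed -i '/^# Monitoring for all/i -a exit,never -F auid=1100 -F arch=b64 -S execve' audit.rules.erb
sed -i '/^# Monitoring for all/i -a exit,never -F auid=1100 -F arch=b32 -S execve' audit.rules.erb
grep -B 2 '^# Monitoring for all' audit.rules.erb
The grep printout must show the two new rules immediately before the # Monitoring for all line.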
5.2 Creation of OVSDB Interface
- Note:
- Only perform the instructions described in this section if CEE is deployed with tightly integrated SDN. Only one OVSDB interface can be created for each CEE vPOD.
Configure OVSDB Interface for the sdnc_sbi_sp network as described in the relevant topic of Hyperscale Datacenter System 8000 Customer Documentation, Reference [2], using the following values:
| Parameter | Value |
|---|---|
| L2 Network ID | sdnc_sbi_sp |
| IP Addresses | <switch_ip> In the case of L2 fabric, <switch_ip> refers to all spine and leaf switches. In the case of L3 fabric, <switch_ip> refers to the leaf switches. |
| Prefix Length | Contact the DC Owner for this information. |
| Number of VxLANs | Contact the DC Owner for this information. |
IP addresses of spine and leaf switches are available in the local copy of IP and VLAN Plan.
5.3 NIC Firmware Version Check and Upgrade
To check the firmware version of any X710 NICs assigned to DPDK, do the following on each compute host:
- Log on to the compute host as root using SSH. For more information, refer to the CEE Connectivity User Guide.
- Check NIC driver binding and record the PCI address and
device name of any X710 NIC assigned to DPDK using the following command:
dpdk-devbind.py -s
An example of the printout is the following:
root@compute-0-3:~# dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:83:00.0 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=
0000:83:00.3 'Ethernet Controller X710 for 10GbE SFP+' drv=vfio-pci unused=

Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb unused=vfio-pci
0000:01:00.1 'I350 Gigabit Network Connection' if=eth1 drv=igb unused=vfio-pci
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth2 drv=ixgbe unused=vfio-pci
0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth3 drv=ixgbe unused=vfio-pci
0000:83:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=eth5 drv=i40e unused=vfio-pci
0000:83:00.2 'Ethernet Controller X710 for 10GbE SFP+' if=eth6 drv=i40e unused=vfio-pci

Other network devices
=====================
<none>

Crypto devices using DPDK-compatible driver
===========================================
<none>

Crypto devices using kernel driver
==================================
<none>

Other crypto devices
====================
- Check the firmware version of the NICs:
Follow the steps described in the topic on checking NIC firmware upgrade necessity of Hyperscale Datacenter System 8000 Customer Documentation, Reference [2].
- If the NIC firmware version is lower than 6.0.1, update
the firmware version according to the procedure described by the NIC
manufacturer. Refer to Reference [3].
- Note:
- In the procedure provided by the NIC manufacturer, the following
step must be changed:
Instead of the chmod 755 nvmupdate.cfg command, chmod 755 nvmupdate64e must be used.
- After firmware update, restart the server to activate
the NIC firmware by executing the following command:
shutdown -r
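As an informal cross-check, X710 ports that are still bound to the kernel i40e driver (for example eth5 and eth6 in the printout above) report their firmware version through ethtool; the HDS procedure referenced above remains the authoritative check:
ethtool -i eth5 | grep firmware-version
The reported version must be 6.0.1 or above.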
5.4 CEE Installation
- Change the working directory to /opt/ecs-fuel-utils with the following command:
cd /opt/ecs-fuel-utils
- Set up a screen session to ensure that the installation
process is not interrupted:
# screen -S installcee -L
If the connection to vFuel is lost, log on to vFuel again and reattach the screen session with the below command:
# screen -r installcee
- Note:
- The nohup option can cause installation failure and must not be used.
- Install CEE by running the installcee script on Fuel:
./installcee.sh
The time required for command execution is approximately two to three hours for a system with 10 compute servers.
Check that the printout is the following:
Ericsson CEE installed successfully.
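Because the screen session was started with the -L option, the installation output is also captured in a screenlog file in the working directory; a hedged way to confirm the final status afterwards:
tail -n 20 /opt/ecs-fuel-utils/screenlog.0
The last lines must contain the Ericsson CEE installed successfully message.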
5.5 Temporary Installation Steps
This section describes the temporary installation workaround that is needed for this release. Carry out this workaround if there are problems during the installation process and the installation does not complete.
5.5.1 Deployment Can Fail If Data PCI Slot For Blades Cannot Be Read
CEE deployment can fail during config.yaml validation if the PCI slot addresses of the blades cannot be read by the system, and the following error message is displayed: "AssertionError: NIC role 'data1' is assigned to pci slot '0000:af:00.0' which does not exists in blade 7 in shelf 0".
Associated trouble report: HW71245
Workaround: Perform the below steps:
- Restart the blade found in the error message.
- Re-run installcee.sh.
5.6 Temporary post-Installation Steps
This section describes temporary procedures that must be executed in this release after a successful installation. Carry out these workarounds after the installation script has run successfully.
5.6.1 ml2_conf_sriov.ini Not Properly Populated After CEE 6.6 Installation on HDS
It is possible that the ml2_conf_sriov.ini file is not populated on the vCICs for SR-IOV after CEE installation. For example, the 'supported_pci_vendor_devs' field is empty.
Associated trouble report: HW74332.
Workaround:
- After CEE installation is completed, run the eri_sriov_controller plugin from vFuel to correctly
update the ml2_conf_sriov.ini file:
fuel node --node <vcic_nodes> --tasks eri_sriov_controller --force
- Replace <vcic_nodes> with the comma-separated list of the node IDs for all three vCICs.
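To confirm the result, the file can be inspected on each vCIC afterwards; the path below is the usual location of the ML2 SR-IOV configuration and is given here as an assumption:
grep supported_pci_vendor_devs /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
The field must now list the PCI vendor and device IDs instead of being empty.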
5.7 Configuration of OVSDB Interface for HW-VTEP Access
- Note:
- Only perform the instructions described in this section if CEE is deployed with tightly integrated SDN.
Do the following:
- In the OVSDB Interface, create a new OVSDB Controller with the following values:

| Parameter | Value |
|---|---|
| IP Address | <csc_vip> |
| Port | 6640 |
For the VIP of the Cloud SDN Controller (CSC) on sdnc_sbi_sp, fetch astute.yaml from any of the vCICs. The CSC VIP is listed under the vips section of astute.yaml. An example of the relevant section is the following:
sdnc_sbi_vip:
  ipaddr: 192.168.41.27
  is_user_defined: false
  namespace: haproxy
  network_role: sdnc-sbi-vip
  node_roles:
  - controller
- Attach the HDS L2 GWs created in Section 2.2.1 or Section 2.3.1 to the OVSDB Interface.
- If SR-IOV is used, attach one additional Ethernet Interface per switch physical interface to the OVSDB Interface.
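A hedged sketch of the CSC VIP lookup described in the first step above, assuming astute.yaml is available at its usual /etc/astute.yaml location on the vCIC:
grep -A 5 sdnc_sbi_vip /etc/astute.yaml
The ipaddr value under sdnc_sbi_vip is the <csc_vip> to use for the OVSDB Controller.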
5.8 vFuel Migration into the CEE Region
To migrate vFuel into the CEE region, do the following:
- Log on to the kickstart server.
- Execute the following script:
CEE_RELEASE/scripts/migrate_fuel.sh
An example of the output is:
./migrate_fuel.sh
migrate_fuel.sh.info: Checking current Fuel state
migrate_fuel.sh.info: Preparing to migrate Fuel
migrate_fuel.sh.info: Fuel will be migrated to compute-0-4 (192.168.0.23)
migrate_fuel.sh.info: The vFuel image and the Domin XML will also be prepared on compute-0-6 (192.168.0.25)
migrate_fuel.sh.info: Shutting down current Fuel
migrate_fuel.sh.info: Waiting for Fuel to complete shutdown
migrate_fuel.sh.info: Copying Fuel disk image to compute-0-4 (172.30.160.1)
sending incremental file list
fuel_br3160.qcow2
 68,730,224,640 100% 107.69MB/s 0:10:08 (xfr#1, to-chk=0/1)
migrate_fuel.sh.info: Copying Fuel disk image to compute-0-6 (172.30.160.2)
sending incremental file list
fuel_br3160.qcow2
 68,730,224,640 100% 103.17MB/s 0:10:35 (xfr#1, to-chk=0/1)
migrate_fuel.sh.info: Starting new vFuel inside CEE region on compute-0-4 (192.168.0.23)
migrate_fuel.sh.info: Waiting for new vFuel to start up
migrate_fuel.sh.info: Waiting for new vFuel to be ready
migrate_fuel.sh.info: New vFuel ready
migrate_fuel.sh.info: Performing post migrate actions
migrate_fuel.sh.info: Post migrate actions done
migrate_fuel.sh.info: Fuel is successfully migrated to compute-0-4 (192.168.0.23)
5.9 Region Expansion
Expand the CEE region to include the compute server previously used as the kickstart server. To expand the CEE region, follow the instructions of the document Region Expansion.
6 Concluding Steps
- Assign the Atlas_nbi and Atlas_sbi VLANs to the LAG interfaces of the CEE compute server that is planned to host the Atlas VM.
- In addition to the above networks, the CEE Neutron VLANs need to be configured for all compute server ports of the CEE region. This can be done anytime before VM instantiation in the CEE region and is not required during the CEE deployment.
- Segregate the three compute hosts hosting vCIC. Do the
following on any of the vCICs:
- List the current host aggregates:
nova aggregate-list
An example of the printout:
+----+------+-------------------+
| Id | Name | Availability Zone |
+----+------+-------------------+
+----+------+-------------------+
- Create the new host aggregate:
nova aggregate-create infra_HA infra_AZ
An example of the printout:
+----+----------+-------------------+-------+------------------------------+
| Id | Name     | Availability Zone | Hosts | Metadata                     |
+----+----------+-------------------+-------+------------------------------+
| 5  | infra_HA | infra_AZ          |       | 'availability_zone=infra_AZ' |
+----+----------+-------------------+-------+------------------------------+
- Add each compute host hosting vCIC to the host aggregate:
nova aggregate-add-host infra_HA compute-<shelf_id>-<blade_id>.domain.tld
An example of the command:
nova aggregate-add-host infra_HA compute-2-1.domain.tld
nova aggregate-add-host infra_HA compute-2-2.domain.tld
nova aggregate-add-host infra_HA compute-2-3.domain.tld
An example of the printout:
Host compute-2-3.domain.tld has been successfully added for aggregate 5
+----+----------+-------------------+-------------------------------------------------------------------------------+------------------------------+
| Id | Name     | Availability Zone | Hosts                                                                         | Metadata                     |
+----+----------+-------------------+-------------------------------------------------------------------------------+------------------------------+
| 5  | infra_HA | infra_AZ          | 'compute-2-1.domain.tld', 'compute-2-2.domain.tld', 'compute-2-3.domain.tld' | 'availability_zone=infra_AZ' |
+----+----------+-------------------+-------------------------------------------------------------------------------+------------------------------+
- After installation, there is an active NeLS Server Communication Problem alarm, because the NeLS server is not configured and not available.
- To configure the connection to the NeLS server, follow the instructions in the Runtime Configuration Guide.
- If the alarm does not clear, follow the instructions in the NeLS Server Communication Problem alarm OPI.
- If the customized QEMU with increased VirtIO queue size was configured in config.yaml, isolate the compute hosts according to VirtIO queue size. For more information, see SW Installation in Multi-Server Deployment.
- Continue with the relevant section of the document CEE Installation.
Appendix
7 CA and NBI Certificates for Secure HTTPS Access
Certification Authority (CA) and Northbound Interface (NBI) certificates are required for secure HTTPS access to CEE.
Make sure to perform the following tasks before starting the installation process:
- Choose a unique hostname for the vCIC NBI.
- Choose a unique hostname for the Atlas NBI.
- Obtain certificates for the NBIs from an authorized Certification
Authority (CA).
The following certificate files are needed:
- CA certificate (or chain of certificates) of the organization issuing the Atlas NBI
- CA certificate (or chain of certificates) of the organization issuing the vCIC NBI
- Atlas NBI certificate
- vCIC NBI certificate
The Common Name (CN) and at least one DNS entry in the Subject Alternative Name (SAN) attribute must contain the publicly known hostname chosen for the NBI, so that the certificate refers to this publicly known hostname. The private key belonging to the certificate must not be encrypted.
- Concatenate the vCIC NBI certificate and its private key into a single PEM file under /mnt/cee_config on vFuel, and do the same for the Atlas NBI; see the example sketch after this list.
ASCII format is preferred for the individual certificates.
- Note:
- The pkcs12 binary format is commonly
used. This output format contains multiple entities in a single binary
file and uses encryption. Issue the following command to convert it
to PEM format:
openssl pkcs12 -in <inputfile> -out <outputfile> -nodes
-nodes is needed to save the private key in unencrypted format.
In case other binary formats need to be converted, refer to Reference [5] or Reference [6].
- Update the config.yaml file with the necessary information. Refer to the Configuration File Guide for updating the publicly known hostname and other relevant options in the config.yaml file.
- Update the DNS resolver to contain the hostname and IP address pairs for the NBI.
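A minimal sketch of the concatenation step, using hypothetical file names for the NBI certificates and their private keys:
cat vcic_nbi_cert.pem vcic_nbi_key.pem > /mnt/cee_config/vcic_nbi.pem
cat atlas_nbi_cert.pem atlas_nbi_key.pem > /mnt/cee_config/atlas_nbi.pem
Each resulting PEM file must contain the certificate followed by the unencrypted private key; the actual file names referenced by config.yaml are defined in the Configuration File Guide.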
Reference List
[1] CEE Network Infrastructure, 1/102 62-CRA 119 1862/5
[2] Hyperscale Datacenter System 8000 Customer Documentation, 2/1551-LZN 901 5032
[3] Non-Volatile Memory (NVM) Update Utility for Intel® Ethernet Adapters—Linux, https://downloadcenter.intel.com/download/25791/Ethernet-Non-Volatile-Memory-NVM-Update-Utility-for-Intel-Ethernet-Adapters-Linux-?product=82947
[4] Limitations and Workarounds for Cloud Execution Environment (CEE) 6.5.1, 5/109 21-AZE 102 01/5-11
[5] SSL Support, https://support.ssl.com/Knowledgebase/Article/View/19/0/der-vs-crt-vs-cer-vs-pem-certificates-and-how-to-convert-them
[6] Thawte Licensing, https://search.thawte.com/support/ssl-digital-certificates/index?page=content&id=SO26449