OpenStack Compute API in CEE
Cloud Execution Environment

Contents

1   Introduction
1.1   API Version
1.2   Document References
1.2.1   API Design Base Reference
2   Supported Operations
2.1   Basic OpenStack Operations
2.2   OpenStack Extensions
3   Ericsson Extended Functions
3.1   forcemove
3.1.1   API Operation
3.1.2   VM Migration and Evacuation
3.1.3   Scheduling
3.1.4   Hints for VM Affinity and Antiaffinity
3.2   Bandwidth Based Scheduling
3.2.1   API Operation
3.2.2   VMs with and without Bandwidth Requirements
4   Limitations
4.1   Volumes of a Deleted VM Not Removed if nova-compute is Not Running
4.2   Limitations for Multi-Server Deployment
4.3   Limitations for Single Server Deployment

1   Introduction

This document serves as an introduction to the use of the Application Programming Interface (API) of the OpenStack component "Compute" in the Cloud Execution Environment (CEE).

While the main aim of the document is to present the Compute API in CEE, it also contains descriptive information about the features of CEE Compute.

1.1   API Version

By default, the external Virtualized Network Function Manager (VNFM) and Network Function Virtualization Orchestrator (NFVO) use the CEE Compute API microversion based on OpenStack Compute API v2.1.

Note:  
At deployment, OpenStack Compute API v2 can also be selected (however, v2.1 is recommended). In this case, some functions — including forcemove — are available as extensions.

1.2   Document References

This section contains the official OpenStack API reference.

1.2.1   API Design Base Reference

For the description of the API operations and extensions of Compute, refer to section "OpenStack Compute API v2.1" in the OpenStack API Complete Reference.

This is a stored copy of the OpenStack API Reference document version that was the base for the development of this version of CEE.

2   Supported Operations

The following sections contain the API operations and API extensions that are supported in the CEE.

2.1   Basic OpenStack Operations

For the detailed description of basic Compute API operations, refer to section "OpenStack Compute API v2.1" in the OpenStack API Complete Reference.

CEE Compute supports basic Compute API operations, with the limitations listed in Section 4.

2.2   OpenStack Extensions

Not applicable for OpenStack Compute API v2.1.

3   Ericsson Extended Functions

This section presents the extended API functions that are specific to the CEE.

3.1   forcemove

forcemove is an API function that supports the migration and evacuation of Virtual Machines (VMs) from a compute host.

forcemove uses Nova rescheduling, honoring the same_host and different_host hints, the server_groups affinity filter, and the High Availability (HA) policy.

Note:  
The HA policy is configured in the metadata field of the VM.

3.1.1   API Operation

In CEE, the standard OpenStack API is extended with a call for forcemove:

POST v2/{tenant_id}/servers/{server_id}/action

Request body:

{
  "forcemove": {
      "ignore_hints": false,
      "ignore_broken_dependencies": false,
      "block_migrate": false,
      "disk_over_commit": false
  }
}

Note:  
All four forcemove parameters must be passed, even though, in most cases, they are false (the OpenStack default).

If the request succeeds, the response body is the following:

{
  "needs_start": false
}

In case of an error, the response body is the following:

{
  "badRequest": {
       "message": "No valid host was found.", 
       "code": 507
  }
}

Note:  
The message and the code can vary.
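As an illustration, the request and response shapes above can be wrapped in small helpers. This is a sketch only: the function names are ours, and a real client would send the body over an authenticated HTTP session to the endpoint shown above.

```python
# Sketch of building and interpreting forcemove payloads.
# Helper names are illustrative; only the JSON shapes come from the API above.

def build_forcemove_body(ignore_hints=False, ignore_broken_dependencies=False,
                         block_migrate=False, disk_over_commit=False):
    # All four parameters are mandatory in the request body;
    # False matches the OpenStack defaults.
    return {
        "forcemove": {
            "ignore_hints": ignore_hints,
            "ignore_broken_dependencies": ignore_broken_dependencies,
            "block_migrate": block_migrate,
            "disk_over_commit": disk_over_commit,
        }
    }

def forcemove_url(tenant_id, server_id):
    # POST target for the forcemove action.
    return "v2/{}/servers/{}/action".format(tenant_id, server_id)

def parse_forcemove_response(body):
    """Return (ok, detail): detail is needs_start on success,
    or the error message on a badRequest response."""
    if "badRequest" in body:
        return False, body["badRequest"]["message"]
    return True, body["needs_start"]
```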

3.1.2   VM Migration and Evacuation

CEE allows the configuration of HA policy for each VM. This policy is honored by the forcemove function.

The CEE 15B policies VM Migration and VM Evacuation were removed in CEE R6 and replaced by ha-policies. For more information, see Section 3.1.2.1.

3.1.2.1   HA Policy

The user can define the level of High Availability (HA) needed on a specific VM. Three levels of HA can be achieved by defining a HA Policy on the VM.

The possible configuration values for ha-policy are the following:

unmanaged: CEE does not try to manage the VM after it is started. No action is performed by CEE on this VM. This is the default if no HA policy is provided.
managed-on-host: The VM starts up with the host and shuts down with it. In case of failure, the VM is not moved to another host, but is restarted when the node is restarted. On forcemove, the VM is shut down.
ha-offline: High Availability with offline migration. The VM is evacuated in case of failure and moved to another host on forcemove.

Note:  
ha-policy is case-sensitive.

3.1.2.2   Policy Configuration

During the creation of a VM, configure policies in the metadata field in the following way:

{ 
  "server":{ 
     "flavorRef":"http://openstack.example.com/openstack/flavors/1", 
     "imageRef":"http://openstack.example.com/openstack/images/70a599e0-31e7-49b7-b260-868f441e862b", 
     "metadata":{ 
        "ha-policy":"managed-on-host" 
     }, 
     "name":"new-server-test" 
  } 
} 

Here, the migration policy is set to managed-on-host.
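The metadata-based policy configuration can be sketched as a small helper that validates the (case-sensitive) policy value before building the request body. The function name and the validation approach are our illustration, not part of the CEE API:

```python
# Allowed values as listed in Section 3.1.2.1; ha-policy is case-sensitive.
VALID_HA_POLICIES = {"unmanaged", "managed-on-host", "ha-offline"}

def server_create_body(name, image_ref, flavor_ref, ha_policy=None):
    """Build a server-create request body, optionally setting ha-policy.

    Omitting ha_policy leaves the VM unmanaged (the default).
    """
    server = {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}
    if ha_policy is not None:
        if ha_policy not in VALID_HA_POLICIES:
            raise ValueError("invalid ha-policy: {!r}".format(ha_policy))
        server["metadata"] = {"ha-policy": ha_policy}
    return {"server": server}
```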

3.1.3   Scheduling

The default OpenStack migration call contains a parameter for the destination compute host to which the VM is moved.

For evacuation, <host> is optional.

Syntax:

nova evacuate [--password <password>] [--on-shared-storage] <server> [<host>]

In CEE, a VM can be moved simply from one compute host to another compute host in the cluster using Compute rescheduling to choose the destination compute host.

This process is supported by the forcemove operation.

3.1.4   Hints for VM Affinity and Antiaffinity

In order to have high availability and redundancy, the application layer needs to guarantee that some VMs are on different compute hosts.

To achieve this, OpenStack provides a feature called "scheduler hints". For example, if vm1 is booted, and, after that, vm2 is booted using -–hint different_host=vm1, then OpenStack guarantees that vm2 will be on a different node than vm1.

The forcemove functionality of CEE adds a patch to OpenStack to save hints used during boot.

3.1.4.1   Supported Hints

In CEE, the server_group hint is supported and recommended. Refer to the section "Server groups (os-server-groups)" in the OpenStack API Complete Reference.

The hints same_host and different_host are also supported, but deprecated.

3.1.4.2   Prerequisites

In order to use scheduler hints, the corresponding filters must be enabled in nova.conf through the scheduler_default_filters option.

Note:  
This prerequisite is automatically met during installation, and does not require further action.

3.1.4.3   Recommended Setup

The following scheduling filters are configured with CEE:

3.1.4.4   Hint Configuration

During the boot of a VM, configure scheduler hints in the following way:

{
  "os:scheduler_hints": {
    "different_host": "f2e31dcd-927b-4231-a652-3ceb42c9182e"
  },
  "server": {
    "name": "test-server",
    "imageRef": "e5bb056f-af7e-4d10-9b85-fca2519a74a0",
    "flavorRef": "1",
    "max_count": 1,
    "min_count": 1,
    "networks": [{"uuid": "d67ccfaf-0de5-4ae6-9cbb-765882d1c895"}]
  }
}
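A boot body like the one above can also be composed programmatically. The following sketch (the helper name is ours) merges scheduler hints into an existing server-boot body without modifying the original:

```python
def with_scheduler_hints(boot_body, **hints):
    """Return a copy of a server-boot body with os:scheduler_hints added.

    Example hints: different_host=<uuid>, same_host=<uuid>, group=<uuid>.
    """
    body = dict(boot_body)  # shallow copy; the caller's body is untouched
    merged = dict(body.get("os:scheduler_hints", {}))
    merged.update(hints)
    body["os:scheduler_hints"] = merged
    return body
```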

3.1.4.5   Broken Dependencies

forcemove always tries to take into consideration the saved scheduler hints, but, in certain cases, it may not be able to follow the saved hints.

Example 1   Broken Dependencies 1

$ nova boot vm1 --image ... --flavor ... # this vm got id 8088e8b6-fd1a-4bf2-bff5-e2debb668a3a
$ nova boot vm2 --image ... --flavor ... --hint same_host=8088e8b6-fd1a-4bf2-bff5-e2debb668a3a
$ nova delete vm1
$ nova forcemove vm2
+--------------------------------------+---------------+---------------------------+-------------+
| Server UUID                          | Move accepted | Error Message             | Needs Start |
+--------------------------------------+---------------+---------------------------+-------------+
| 59961962-fc06-4d90-82e4-366aaa3a9b38 | False         | No valid host was found.  | False       |
+--------------------------------------+---------------+---------------------------+-------------+

Here, forcemove tries to use the saved hint, but cannot succeed, since vm1 has been deleted.

However, deleted VMs can be ignored during the handling of hints:

Example 2   Broken Dependencies 2

$ nova forcemove vm2 --ignore-broken-dependencies
+--------------------------------------+---------------+---------------+-------------+
| Server UUID                          | Move accepted | Error Message | Needs Start |
+--------------------------------------+---------------+---------------+-------------+
| 59961962-fc06-4d90-82e4-366aaa3a9b38 | True          |               | False       |
+--------------------------------------+---------------+---------------+-------------+

Here, the hint same_host=8088e8b6-fd1a-4bf2-bff5-e2debb668a3a is ignored.

Note:  
If the server_groups affinity filter is used, hints can still be ignored, but forcemove does not check for deleted VMs, so broken dependencies have no effect on forcemove.

3.1.4.6   Ignore Hints

forcemove can be configured to ignore all hints:

Example 3   Ignore Hints

$ nova forcemove vm1 --ignore-hints
+--------------------------------------+---------------+---------------+-------------+
| Server UUID                          | Move accepted | Error Message | Needs Start |
+--------------------------------------+---------------+---------------+-------------+
| 59961962-fc06-4d90-82e4-366aaa3a9b38 | True          |               | False       |
+--------------------------------------+---------------+---------------+-------------+

3.2   Bandwidth Based Scheduling

Bandwidth based scheduling is an extension that enables VM scheduling based on free network bandwidth. The user requests the required bandwidth using attributes in flavor. Nova scheduler uses these attributes when scheduling VMs with that specific flavor. Nova keeps track of the reserved bandwidth on each network interface controller (NIC), on every host. Nova ensures that no NIC will be overprovisioned. Both bit rate and packet rate capacity are taken into consideration.

3.2.1   API Operation

This extension extends the extra_specs attribute in flavor. Two new attributes are added to extra_specs: bandwidth:vif_inbound_average and bandwidth:vif_outbound_average.

The values of the new attributes are in JSON format and contain the requested bandwidth and the average packet size. The rate is the requested bandwidth in kbytes per second. The size is the average packet size, in bytes, of the VM frames on this interface. The size is used together with the byte rate to calculate the requested frames per second.
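The rate-to-packet-rate conversion can be illustrated with simple arithmetic. This sketch assumes a plain conversion with 1 kbyte = 1000 bytes; the exact rounding Nova applies internally is not specified here:

```python
def requested_packet_rate(rate_kbyte_per_s, avg_packet_size_bytes):
    """Frames per second implied by a byte rate and an average packet size.

    rate is in kbyte/s (as in the flavor attributes), size is in bytes.
    Assumes 1 kbyte = 1000 bytes.
    """
    return rate_kbyte_per_s * 1000.0 / avg_packet_size_bytes
```

For example, a rate of 10000 kbyte/s with 512-byte packets implies 19531.25 frames per second.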

Format:

{
  '<device name of sriov hwnic>': [<minimum-bandwidth-vf1>, <minimum-bandwidth-vf2>, ...],
  '<device name of neutron network hwnic>': {
    "rate": [<rate-vnic1>, <rate-vnic2>, ...],
    "size": [<avg-packet-size-vnic1>, <avg-packet-size-vnic2>, ...]
  },
  ...
}

The physical interfaces can be both Neutron physical networks and SR-IOV NICs. Currently only one Neutron physical network is supported. This network has the name 'default'. SR-IOV NICs have names defined as 'pool_' + <PCI-bus-address of device>.

Example 4   bandwidth:vif_*bound_average Attribute

{
   'pool_41_00_0': [ 20000, 10000 ],
   'pool_41_00_1': [ 10000 ],
   'default':
      {
         'rate': [ 10000, 20000, 30000 ],
         'size': [ 512, 1024, 1024 ]
      }
}

Example 5   Complete extra_specs Definition

{
  "extra_specs": {
    "bandwidth:vif_inbound_average": "{ 'pool_41_00_0': [20000, 10000], 'pool_41_00_1': [10000], 'default': { 'rate': [10000, 20000, 30000], 'size': [512, 1024, 1024] } }",
    "bandwidth:vif_outbound_average": "{ 'pool_41_00_0': [20000, 10000], 'pool_41_00_1': [10000], 'default': { 'rate': [10000, 20000, 30000], 'size': [512, 1024, 1024] } }",
    "pci_passthrough:alias": "pool_41_00_0:2, pool_41_00_1:1"
  }
}

The vNICs specified in the flavor have to match the vNICs specified on "nova boot". To use this feature, the user has to specify bandwidth for all NICs on the VM, including SR-IOV interfaces. For example, it is not allowed to specify bandwidth for one vNIC in the flavor and then boot the VM with two vNICs. It is acceptable not to use the extension by not specifying any "bandwidth:*" attributes in the flavor.
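The matching rule can be sketched as a client-side consistency check. The helper and its data shapes are our illustration, not a CEE API: every NIC requested at boot must have a corresponding bandwidth entry, and the per-device entry counts must match.

```python
def check_bandwidth_spec(bandwidth_spec, nic_counts):
    """Check that every NIC on the VM has a matching bandwidth entry.

    bandwidth_spec: decoded value of a bandwidth:vif_*_average attribute,
      mapping a device name to either a list (SR-IOV VFs) or a
      {"rate": [...], "size": [...]} dict (Neutron physical network).
    nic_counts: mapping of device name -> number of vNICs/VFs requested
      at boot.
    """
    for dev, count in nic_counts.items():
        spec = bandwidth_spec.get(dev)
        if spec is None:
            return False  # a NIC on the VM has no bandwidth entry
        n = len(spec["rate"]) if isinstance(spec, dict) else len(spec)
        if n != count:
            return False  # entry count does not match the vNIC count
    return True
```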

3.2.2   VMs with and without Bandwidth Requirements

This section provides information about handling VMs with and without bandwidth requirements in the same CEE Region.

VMs with unspecified bandwidth can consume all bandwidth. VMs with unspecified bandwidth cannot be scheduled on the same hosts as VMs with specified bandwidth. This can lead to fragmentation issues.

In CEE, the default ram_weight_multiplier is set to 1. This makes the scheduler spread VMs on hosts. In a worst-case scenario, this results in all hosts having VMs without specified bandwidth. This makes it impossible to schedule VMs with specified bandwidth as illustrated in Figure 1.

Figure 1   Bandwidth Based Scheduling Fails

To avoid this issue, divide the compute hosts into two groups: VMs with bandwidth requirements, and VMs without bandwidth requirements as shown in Figure 2. Use host aggregates. When a flavor is created, the host aggregate must be set accordingly.

Figure 2   Using Host Aggregates to Manage Bandwidth Requirements

4   Limitations

Note:  
In addition to the limitations listed in this section, refer to Section 4.2 for limitations specific to CEE in multi-server deployment, and to Section 4.3 for limitations specific to CEE in single server deployment.


The following limitations exist in CEE Compute:

4.1   Volumes of a Deleted VM Not Removed if nova-compute is Not Running

If a VM is removed while nova-compute is not running, the attached volumes of the VM are not detached and removed. There are no errors indicating that the volumes are not removed.

Workaround

To remove the attached volumes, execute the following commands:

cinder reset-state --state available <volume_uuid>
cinder reset-state --attach-status detached <volume_uuid>
cinder delete <volume_uuid>
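For scripting the cleanup across several volumes, the workaround can be generated per volume UUID. This is a convenience sketch only; the commands themselves are exactly those listed above:

```python
def stale_volume_cleanup_commands(volume_uuid):
    """Return the cinder commands that release and delete a stale volume."""
    return [
        "cinder reset-state --state available {}".format(volume_uuid),
        "cinder reset-state --attach-status detached {}".format(volume_uuid),
        "cinder delete {}".format(volume_uuid),
    ]
```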

4.2   Limitations for Multi-Server Deployment

In addition to the limitations described in Section 4, the following limitations apply to CEE in multi-server deployment:

4.3   Limitations for Single Server Deployment

In addition to the limitations described in Section 4, the following limitations apply to CEE in single server deployments:

Single server installation requires special flavor metadata (extra specs) settings.



Copyright

© Ericsson AB 2016. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document Trademark Information.
