Swift Store on VNX Expansion
Cloud Execution Environment

Contents

1   Introduction
1.1   Description
1.2   Prerequisites
1.3   Procedure Overview
2   Preparation
3   Check Available Capacity on the VNX
4   Check OpenStack Quotas
5   Expand Storage on the VNX for Swift
6   Concluding Expansion

Appendix

7   Additional Information
7.1   List CIC and Compute Nodes
7.2   Swift Store on VNX Feature Check
7.3   Check Write Cache Status
7.4   Configurable Storage Connector Parameters

Reference List

1   Introduction

This Operational Instruction (OPI) describes how to expand the existing Swift store on the VNX.

1.1   Description

After the storage expansion, each Cloud Infrastructure Controller (CIC) has one additional Logical Unit Number (LUN), with the size of the expansion, in the storage pool used for Cinder on the VNX.

1.2   Prerequisites

This section describes the prerequisites for this instruction.

1.2.1   Documents

Ensure that the following documents have been read:

1.2.2   Tools and Equipment

A computer is required that can connect to the Cloud Execution Environment (CEE) using the Secure Shell (SSH) protocol.

1.2.3   Conditions

Before starting this procedure, ensure that the following conditions are met:

1.2.4   Installation Data

Before starting this procedure, make sure that the following data is available:

Variable               Value

<sp_ip>                192.168.2.12 for Storage Processor A (SPA) (1)
                       192.168.2.13 for Storage Processor B (SPB) (1)

<STORAGE.POOL.NAME>    The name of the storage pool used for Cinder, as
                       defined during the VNX5400 SW installation. Use the
                       same name that was noted down for the Configuration
                       File Guide.

<additional_size>      The planned additional size of the Swift store on
                       the VNX in GiB. The maximum size that can be
                       specified here is 6000 GiB. Larger Swift store sizes
                       must be created in steps of at most 6000 GiB each.

(1)  These values are valid if the certified configuration with the default
setup is used. Otherwise, the corresponding customer- and site-specific
values must be used according to the IP and VLAN plan [1].


1.3   Procedure Overview

Figure 1 gives an overview of the procedures covered by this OPI.

Figure 1   Procedure Overview

2   Preparation

This section describes how to prepare for the expansion of the Swift store on the VNX.

Perform the following steps:

  1. If the VNX expansion is performed from a remote location, log onto one of the CICs using IdAM credentials, change to user ceeadm using su - ceeadm, and log onto the Fuel node. Otherwise, continue with the next step.
  2. From the Fuel node, log onto one of the CIC nodes as user ceeadm.
  3. Print the properties of the logical volume for the Swift store by issuing the following command:

    sudo lvdisplay image

Note:  
The path is located in the line starting with "LV Path", the size of the Swift store is located in the line starting with "LV Size". Note down these values for later use.

Example 1   Printout of the sudo lvdisplay image Command

ceeadm@cic-1:~# sudo lvdisplay image
  --- Logical volume ---
  LV Path                /dev/image/glance
  LV Name                glance
  VG Name                image
  LV UUID                crQyfr-r8y9-99qE-Iydo-7LPw-rhso-LDTeEC
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                649.91 GiB
  Current LE             20797
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:4
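The two noted values can also be picked out programmatically. The following is a minimal sketch: the here-document stands in for the two relevant lines of Example 1, and on a live CIC it would be replaced by the output of `sudo lvdisplay image`.

```shell
# Sketch: parse "LV Path" and "LV Size" out of lvdisplay output.
# The here-document stands in for `sudo lvdisplay image` on a CIC.
lv_info=$(cat <<'EOF'
  LV Path                /dev/image/glance
  LV Size                649.91 GiB
EOF
)
lv_path=$(printf '%s\n' "$lv_info" | awk '/LV Path/ {print $3}')
lv_size=$(printf '%s\n' "$lv_info" | awk '/LV Size/ {print $3, $4}')
echo "LV Path: $lv_path"
echo "LV Size: $lv_size"
```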
  4. Check that the file /etc/backend_storage_connector.conf exists, and that it contains the default values described in Section 7.4.
  5. Repeat Step 2, Step 3, and Step 4 in Section 2 for the other two CIC nodes.
    Note:  
    The displayed volume sizes must be the same on all three CIC nodes. If they are not equal, note down the highest value and keep it for later use.

  6. List the available Cinder volumes by issuing the following command:

    cinder list

Example 2   cinder list Example Printout

ceeadm@cic-1:~# cinder list
+----------+--------+---------------------------------+------+-[...]-+
|    ID    | Status |           Display Name          | Size | [...] |
+----------+--------+---------------------------------+------+-[...]-+
| 19[...]9 | in-use |  CEE+cic-1+/dev/image/glance    | 100  | [...] |
| 66[...]2 | in-use |  CEE+cic-3+/dev/image/glance    | 100  | [...] |
| 7c[...]2 | in-use |  CEE+cic-2+/dev/image/glance    | 100  | [...] |
+----------+--------+---------------------------------+------+-[...]-+
Note:  
Note down the listed Cinder volumes for later use. This makes it easier to verify the creation of the new images in Section 6.
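The later verification can be made mechanical by diffing sorted volume-name lists taken before and after the expansion. A sketch with sample display names (on a live CIC the lists would be extracted from `cinder list` output instead):

```shell
# Sketch: find volumes present after the expansion but not before.
# The sample lists mimic the "Display Name" column of `cinder list`.
before=$(mktemp)
after=$(mktemp)
printf '%s\n' \
  'CEE+cic-1+/dev/image/glance' \
  'CEE+cic-2+/dev/image/glance' | sort > "$before"
printf '%s\n' \
  'CEE+cic-1+/dev/image/glance' \
  'CEE+cic-1+/dev/image/glance+1' \
  'CEE+cic-2+/dev/image/glance' | sort > "$after"
new_volumes=$(comm -13 "$before" "$after")   # lines only in "$after"
echo "new volumes: $new_volumes"
rm -f "$before" "$after"
```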

  7. Continue with the procedures in Section 3.

3   Check Available Capacity on the VNX

This section describes how to check if there is enough capacity on the VNX for the expansion.

  1. Ensure that you are logged onto one of the CIC nodes as ceeadm user.
  2. Check the "Available Capacity" of the storage pool used by the "cinder" service on the VNX by issuing the following command:

    /opt/Navisphere/bin/naviseccli -h <sp_ip>
    storagepool -list -name <STORAGE.POOL.NAME>

    Note:  
    For the <sp_ip> variable use the IP address of any of the two storage processors.

    Example 3 shows a partial printout with the relevant "Available Capacity" information:

Example 3   Available Capacity on the VNX

ceeadm@cic-1:~# /opt/Navisphere/bin/naviseccli⇒
 -h 192.168.2.12 storagepool -list
Pool Name:  cinderpool
[...]
Raw Capacity (GBs):  19506.114
[...]
User Capacity (GBs):  15296.818
[...]
Consumed Capacity (GBs):  4639.293
[...]
Available Capacity (GBs):  10657.525
Percent Full:  30.328
[...]
Total Subscribed Capacity (GBs):  4639.293
Percent Subscribed:  30.328
Oversubscribed by (Blocks):  0
Oversubscribed by (GBs):  0.000
[...]
Note:  
Although the printout displays the available capacity in "GBs", the actual unit of measure for the displayed capacity is GiB.

  3. Check if the "Available Capacity" is sufficient, and proceed according to the result as follows:
    • If the "Available Capacity" is at least three times the planned additional size of the Swift storage for one CIC, continue with the procedures in Section 4. Three times the additional size is required because each CIC must be extended by the same amount of storage capacity.
    • If the "Available Capacity" is less than three times the planned additional size of the Swift storage for one CIC, the available capacity on the VNX must be increased first. This is outside the scope of this document. Consult the next level of maintenance support.
  4. Continue with the procedures in Section 4.
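The capacity comparison can be sketched in shell. The "Available Capacity" line below is taken from Example 3 (on a live CIC it would come from the naviseccli command above), and `additional_size` is an assumed example value:

```shell
# Sketch: is the pool's Available Capacity at least 3 x <additional_size>?
additional_size=1000   # example planned size per CIC, in GiB
capacity_line='Available Capacity (GBs):  10657.525'
available=$(printf '%s\n' "$capacity_line" | awk -F': *' '{print $2}')
required=$((3 * additional_size))
if awk -v a="$available" -v r="$required" 'BEGIN {exit !(a >= r)}'; then
  capacity_check=sufficient
else
  capacity_check=insufficient
fi
echo "available=$available GiB, required=$required GiB: $capacity_check"
```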

4   Check OpenStack Quotas

This section describes how to check the OpenStack quotas for the "admin" project and how to expand them if needed.

  1. Ensure that you are logged onto one of the CIC nodes as ceeadm user.
  2. Print the project list by issuing the following command:

    openstack project list

Example 4   OpenStack Project List

ceeadm@cic-2:~# openstack project list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 3052cafca2e14f85b8f02263025d2a8f | admin    |
| c16e42d79bfb406fb4730fa8bdbb6d8f | services |
+----------------------------------+----------+    
Note:  
Identify the project with the name "admin" and note down its ID (<project_id>) for later use.

  3. Print the quota usage by issuing the following command:

    cinder quota-usage <project_id>

    Example 5 is a printout of the following cinder quota-usage command:

    cinder quota-usage 3052cafca2e14f85b8f02263025d2a8f

Example 5   quota-usage Command Printout

ceeadm@cic-1:~# cinder quota-usage ⇒
3052cafca2e14f85b8f02263025d2a8f
+----------------+--------+----------+-------+
|      Type      | In_use | Reserved | Limit |
+----------------+--------+----------+-------+
|     [...]      |  [...] |   [...]  | [...] |
|   gigabytes    |  3100  |    0     | 10000 |
|     [...]      |  [...] |   [...]  | [...] |
|    volumes     |   4    |    0     |  100  |
|     [...]      |  [...] |   [...]  | [...] |
+----------------+--------+----------+-------+
Note:  
In the cinder command printouts, the capacity values are labeled "GBs", but the actual unit of measure for these values is GiB.

  4. Check the types "gigabytes" and "volumes" as follows:
    • For the type "volumes", the difference between "Limit" and "In_use" must be at least 3.
    • For the type "gigabytes", the difference between "Limit" and "In_use" must be at least three times the intended additional Swift store size for a single CIC. For example, if the intended additional Swift store size for one CIC is 1000 GiB, the difference between "Limit" and "In_use" must be at least 3000 GiB.

    The example printout in Step 3 in Section 4 shows an available margin of 96 volumes (100 - 4) and 6900 GiB (10000 - 3100) of capacity.

    After calculating the quotas, perform the relevant one of the following actions:

    • If enough quota is available for the expansion, continue with the procedures in Section 5.
    • If the "gigabytes" quota must be increased, continue with Step 5 in Section 4.
    • If the "volumes" quota must be increased, continue with Step 6 in Section 4.
  5. If the quota "gigabytes" needs to be increased, issue the following command:

    cinder quota-update <project_id> --gigabytes <new_g_limit>

    where

    • <new_g_limit> must be equal to or greater than:

      (3 * <additional_size>) + <current_use>

      where <additional_size> is the planned additional size of the Swift storage per CIC, and <current_use> is the "In_use" value of "gigabytes" from Step 3 in Section 4.
    • <project_id> is the project ID of the project admin.

Example 6 shows how to increase the gigabytes quota by 99, from 10000 GiB to 10099 GiB for the project admin and an example printout:

Example 6   Cinder quota-update — 'gigabytes'

ceeadm@cic-1:~# cinder quota-update  ⇒
3052cafca2e14f85b8f02263025d2a8f  --gigabytes 10099
+----------------------+-------+
|       Property       | Value |
+----------------------+-------+
|   backup_gigabytes   |  1000 |
|       backups        |   10  |
|      gigabytes       | 10099 |
| per_volume_gigabytes |   -1  |
|      snapshots       |   10  |
|       volumes        |  100  |
+----------------------+-------+
  6. If the quota "volumes" needs to be increased, issue the following command:

    cinder quota-update ⇒
     <project_id> --volumes <new_vol_limit>

    where

    • <new_vol_limit> is at least three volumes more than the current value,
    • <project_id> is the project ID of the project admin.

Example 7 shows how to increase the volumes quota by 3, from 100 to 103, for the project admin:

Example 7   Cinder quota-update — volumes

ceeadm@cic-1:~# cinder quota-update ⇒
 3052cafca2e14f85b8f02263025d2a8f --volumes 103
  7. Continue with the procedures in Section 5.

5   Expand Storage on the VNX for Swift

This section describes how to expand the storage on the VNX for Swift.

  1. Ensure that you are logged onto one of the CIC nodes as ceeadm user.
  2. Start a "screen" session with the following command:

    sudo -E screen

  3. Press Space or Return after you have read the instruction on the screen.
    Note:  
    The screen command starts a session, which is independent from the current terminal window. Even if the terminal window crashes or is ended by other means, it is possible to reconnect to the session by using the command sudo -E screen -x from another terminal on the same host.

  4. To expand the Swift storage on the VNX issue the following command:

    backend-storage-connector expand <path>
    <additional_size>

    The variables are described as follows:

    Variable             Description

    <path>               The path of the logical volume (LV Path) noted
                         down in Step 3 in Section 2.

    <additional_size>    The planned additional size of the Swift store
                         on the VNX in GiB. The maximum size that can be
                         specified here is 6000 GiB. Larger Swift store
                         sizes must be created in steps of at most
                         6000 GiB each. (1)

    (1)  All CIC nodes must have the same storage capacity for Swift.
    Use the same value of <additional_size> for each CIC.

    Successful execution is indicated by the final status message "Success.", as shown in Example 8 below:

Example 8   Expand the Swift Store on the VNX

root@cic-1:~# backend-storage-connector expand⇒
 /dev/image/glance 50
[...]
<ts> INFO: Updated fstab with option "_netdev,...
<ts> INFO: Extending volume group: image with...
<ts> INFO: Move device /dev/sda7. This may take a while.
<ts> INFO: Move device /dev/sdb2. This may take a while.
<ts> INFO: Successfully disconnected all local...
<ts> INFO: Extending logical volume /dev/imag...
<ts> INFO: Growing xfs filesystem of logical...
<ts> INFO: Success.
root@cic-1:~#
  5. If the execution fails, the file /var/backend-storage-connector.fail is created. Before another expansion attempt, remove the .fail file using the following command:
    rm /var/backend-storage-connector.fail
  6. End the screen session by pressing CTRL+D.
  7. Repeat all the steps in Section 5 for the other two CIC nodes, and then continue with the procedures in Section 6.
    Note:  
    A log file is generated under the following path on each CIC:

    /var/log/backend-storage-connector.log
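A small guard can automate the "remove the .fail file first" rule. This is a sketch; `can_retry` is a hypothetical helper name, and the marker path is the one stated in this section:

```shell
# Sketch: refuse to start a new expansion attempt while the failure
# marker from a previous run is still present.
can_retry() {
  marker=$1   # path to the marker, e.g. /var/backend-storage-connector.fail
  if [ -e "$marker" ]; then
    echo "previous run failed: remove $marker before retrying"
    return 1
  fi
  echo "no failure marker, expansion may be started"
}
# On a CIC: can_retry /var/backend-storage-connector.fail
```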


6   Concluding Expansion

This section describes how to confirm that the expansion was successful.

  1. Ensure that you are logged onto one of the CIC nodes as ceeadm user.
  2. Check that Swift store on VNX expansion was successful, by listing the Cinder volumes with the following command:

    cinder list

Example printout:

Example 9   Cinder Volumes Listed

ceeadm@cic-1:~# cinder list
+----------+--------+---------------------------------+------+-[...]-+
|    ID    | Status |           Display Name          | Size | [...] |
+----------+--------+---------------------------------+------+-[...]-+
| 19[...]9 | in-use |   CEE+cic-1+/dev/image/glance   | 650  | [...] |
| 66[...]2 | in-use |   CEE+cic-3+/dev/image/glance   | 650  | [...] |
| 7c[...]2 | in-use |   CEE+cic-2+/dev/image/glance   | 650  | [...] |
| 95[...]7 | in-use |  CEE+cic-1+/dev/image/glance+1  |  50  | [...] |
| 85[...]2 | in-use |  CEE+cic-3+/dev/image/glance+1  |  50  | [...] |
| 45[...]3 | in-use |  CEE+cic-2+/dev/image/glance+1  |  50  | [...] |
+----------+--------+---------------------------------+------+-[...]-+

For each CIC, one new volume must exist with a "Display Name" starting with "CEE+", followed by the CIC name, the logical volume path, and an integer number. The size must match the value of <additional_size> provided in Section 5.

  3. Check the logical volume size for the Swift store with the following command:

    sudo lvdisplay image

    "LV Size" displays the size of the Swift store in GiB. The size must have increased by the value chosen for <additional_size>.

Example printout:

Example 10   Logical Volume Information

ceeadm@cic-1:~# sudo lvdisplay image
  --- Logical volume ---
  LV Path                /dev/image/glance
  LV Name                glance
  VG Name                image
  LV UUID                crQyfr-r8y9-99qE-Iydo-7LPw-rhso-LDTeEC
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              available
  # open                 1
  LV Size                699.88 GiB
  Current LE             22396
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:4
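The growth can also be checked numerically. The sizes below are those of Example 1 (before) and Example 10 (after) for a 50 GiB expansion; a small tolerance absorbs the rounding in the lvdisplay printout:

```shell
# Sketch: verify that "LV Size" grew by roughly <additional_size> GiB.
size_before=649.91     # "LV Size" before the expansion (Example 1)
size_after=699.88      # "LV Size" after the expansion (Example 10)
additional_size=50     # value passed to backend-storage-connector expand
growth_report=$(awk -v b="$size_before" -v a="$size_after" -v d="$additional_size" \
  'BEGIN { g = a - b; ok = (g > d - 1 && g < d + 1) ? "yes" : "no";
           printf "grew by %.2f GiB, ok=%s", g, ok }')
echo "$growth_report"
```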
  4. Perform Step 3 in Section 6 for the other two CIC nodes.
  5. From the CIC node, log onto the Fuel node using SSH as user ceeadm.
  6. On the Fuel node, edit the config.yaml file under /mnt/cee_config with root privileges (for example, with the command sudo vi /mnt/cee_config/config.yaml) as follows:
    Under "swift" > "swift_on_backend_storage" > "lun_size", set the value to the new total size of the Swift store on each CIC (the previous size expanded with <additional_size>).
    Note:  
    These modifications are necessary, so that in case of a rollback and CIC repair, the Swift store is set up again with the correct size.

Example 11 shows the relevant config.yaml hierarchy.

Example 11   Editing config.yaml

ericsson:
  swift:
    swift_on_backend_storage:
      type: centralized
      activation_mode: automatic
      lun_size: 700GiB
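The edited value can be read back to confirm the change. A sketch in which the here-document stands in for /mnt/cee_config/config.yaml on the Fuel node:

```shell
# Sketch: read back lun_size after editing config.yaml.
# The here-document stands in for /mnt/cee_config/config.yaml.
lun_size=$(awk '/lun_size:/ {print $2}' <<'EOF'
ericsson:
  swift:
    swift_on_backend_storage:
      type: centralized
      activation_mode: automatic
      lun_size: 700GiB
EOF
)
echo "configured lun_size: $lun_size"
```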

Appendix

7   Additional Information

This section contains additional information that supports the procedures in this OPI.

7.1   List CIC and Compute Nodes

To display the hostnames and IP addresses of the CIC nodes, issue the following command while being logged onto the Fuel node:

sudo fuel node

Note:  
From a remote location only the CIC servers can be reached. For more information, see the CEE Connectivity User Guide.

Example 12 is a partial printout that shows only the relevant CIC and compute nodes:

Example 12   CIC and Compute Node Printout

[ceeadm@fuel ~]# sudo fuel node
id | status | name         |...| ip           |...
---|--------|--------------|...|--------------|...
7  | ready  | compute-0-3  |...| 192.168.0.20 |...
8  | ready  | cic-1        |...| 192.168.0.21 |...
9  | ready  | cic-2        |...| 192.168.0.22 |...
12 | ready  | cic-3        |...| 192.168.0.25 |...
10 | ready  | compute-0-10 |...| 192.168.0.23 |...
11 | ready  | compute-0-2  |...| 192.168.0.24 |...
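When only the CIC rows are of interest, the printout can be filtered. A sketch in which the here-document mirrors the relevant rows of Example 12 (on the Fuel node, the input would come from `sudo fuel node` directly):

```shell
# Sketch: count the CIC rows in a `sudo fuel node` printout.
# The here-document mirrors the relevant rows of Example 12.
cic_count=$(grep -c '| cic-' <<'EOF'
7  | ready  | compute-0-3  |...| 192.168.0.20 |...
8  | ready  | cic-1        |...| 192.168.0.21 |...
9  | ready  | cic-2        |...| 192.168.0.22 |...
12 | ready  | cic-3        |...| 192.168.0.25 |...
EOF
)
echo "CIC nodes found: $cic_count"
```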

7.2   Swift Store on VNX Feature Check

To check whether the Swift store on VNX feature activation has taken place, list the Cinder volumes for the admin project with the following command:

cinder list

Example printout:

ceeadm@cic-1:~# cinder list
+--------------------------------------+--------+-----------------------------+------+-------------+----------+-------------+
|                  ID                  | Status |         Display Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+-----------------------------+------+-------------+----------+-------------+
| 19abf3d7-32b2-4e79-856c-f7b95ba39bf9 | in-use | CEE+cic-1+/dev/image/glance | 100  |     None    |  false   | ...         |
| 66a84a9b-abc5-45ae-ba6c-025e7088ef22 | in-use | CEE+cic-3+/dev/image/glance | 100  |     None    |  false   | ...         |
| 7c9caba1-db2d-4fd2-b468-e92f6f800892 | in-use | CEE+cic-2+/dev/image/glance | 100  |     None    |  false   | ...         |
+--------------------------------------+--------+-----------------------------+------+-------------+----------+-------------+

If the Swift store on VNX is already activated, then a Cinder volume must exist for each CIC, with the following naming convention:

CEE+<cic_name>+<lv_path>

where

• <cic_name> is the hostname of the CIC node,
• <lv_path> is the path of the logical volume for the Swift store (for example, /dev/image/glance).

If the Cinder volumes do not exist, then the Swift Store on VNX feature is not activated and the expansion cannot be performed. Activate the feature by performing the actions in the Swift Store on VNX Activation operating instructions.
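The per-CIC check can be sketched as a loop over the expected names. The sample list mimics the display names on an activated system; on a live CIC the names would be extracted from `cinder list`:

```shell
# Sketch: confirm one "CEE+<cic_name>+<lv_path>" volume per CIC.
volumes='CEE+cic-1+/dev/image/glance
CEE+cic-2+/dev/image/glance
CEE+cic-3+/dev/image/glance'
missing=0
for cic in cic-1 cic-2 cic-3; do
  printf '%s\n' "$volumes" | grep -q "^CEE+$cic+" || missing=$((missing + 1))
done
if [ "$missing" -eq 0 ]
then feature_state=activated
else feature_state="not activated"
fi
echo "Swift store on VNX feature: $feature_state"
```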

7.3   Check Write Cache Status

To check whether the write cache is enabled for the SPs on the VNX, and to enable it if it is not, follow these steps:

  1. Log onto a CIC node with personal-user using SSH and change to user ceeadm.
  2. To check if the write cache is enabled for the SPs, issue the following command:

    /opt/Navisphere/bin/naviseccli -h <sp_ip> getcache

    Note:  
    For the <sp_ip> variable, use the IP address of any of the two storage processors.

Example 13 shows a scenario where the write cache is disabled for the SPs.

Example 13   SPA Information Printout — SP Write Cache Disabled

ceeadm@cic-3:~# /opt/Navisphere/bin/naviseccli -h⇒
 192.168.2.12 getcache
SP Read Cache State:  Enabled
SP Write Cache State:  Disabled
Cache Page size (KB):  8
Write Cache Mirrored:  YES
Low Watermark:  60
High Watermark:  80
SPA Cache Pages:  250303
SPB Cache Pages:  250304
Unassigned Cache Pages:  0
Read Hit Ratio:  N/A
Write Hit Ratio:  N/A
Prct Dirty Cache Pages =  0
Prct Cache Pages Owned =  0
SPA Read Cache State:  Enabled
SPB Read Cache State:  Enabled
SPA Write Cache State:  Disabled
SPB Write Cache State:  Disabled
System Buffer (spA):  7550 MB
System Buffer (spB):  7550 MB
SPS Test Day:  Sunday
SPS Test Time:  03:00
SPA Physical Memory Size (MB) =  12288
SPB Physical Memory Size (MB) =  12288
  3. Check the given value for the "SP Write Cache State".
    1. If it is set to "Enabled", the write cache is enabled for both storage processors. In this case, the procedure is done.
    2. If it is set to "Disabled", the write cache is disabled for both SPs. In this case, continue with Step 4.
  4. To enable the write cache on both storage processors, issue the following command:

    /opt/Navisphere/bin/naviseccli -h <sp_ip> setcache -wc 1

    Note:  
    This command has no printout.

  5. To verify that the write cache has been enabled, issue the following command:

    /opt/Navisphere/bin/naviseccli -h <sp_ip> getcache

Example 14 shows a scenario where the write cache is enabled for the SPs.

Example 14   SPA Information Printout — SP Write Cache Enabled

ceeadm@cic-3:~# /opt/Navisphere/bin/naviseccli -h⇒
 192.168.2.12 getcache
SP Read Cache State:  Enabled
SP Write Cache State:  Enabled
Cache Page size (KB):  8
Write Cache Mirrored:  YES
Low Watermark:  60
High Watermark:  80
SPA Cache Pages:  250303
SPB Cache Pages:  250304
Unassigned Cache Pages:  0
Read Hit Ratio:  N/A
Write Hit Ratio:  N/A
Prct Dirty Cache Pages =  0
Prct Cache Pages Owned =  0
SPA Read Cache State:  Enabled
SPB Read Cache State:  Enabled
SPA Write Cache State:  Enabled
SPB Write Cache State:  Enabled
System Buffer (spA):  7550 MB
System Buffer (spB):  7550 MB
SPS Test Day:  Sunday
SPS Test Time:  03:00
SPA Physical Memory Size (MB) =  12288
SPB Physical Memory Size (MB) =  12288
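The verification can be reduced to the one decisive line of the printout. A sketch in which the here-document mirrors the relevant lines of Example 14 (on a CIC, the input would be the output of the getcache command above):

```shell
# Sketch: extract the "SP Write Cache State" line from a getcache printout.
cache_state=$(awk -F': *' '/^SP Write Cache State/ {print $2}' <<'EOF'
SP Read Cache State:  Enabled
SP Write Cache State:  Enabled
SPA Write Cache State:  Enabled
SPB Write Cache State:  Enabled
EOF
)
echo "SP write cache: $cache_state"
```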

7.4   Configurable Storage Connector Parameters

Some storage connector parameters can be configured according to local requirements in the file /etc/backend_storage_connector.conf. The configurable parameters and the default values of the parameters are listed in Table 1.

Table 1   Storage Connector Parameters

Parameter                          Default Value                            Description

<volume_wait_timeout>              1800                                     Maximum time to wait, in seconds, until a created Cinder volume has the status "available"

<volume_wait_time>                 10                                       Scan interval, in seconds, when checking the volume status

<volume_deletion_timeout>          600                                      Maximum time to wait, in seconds, until deleted Cinder volumes are no longer found using OpenStack commands (1)

<volume_deletion_time>             10                                       Scan interval, in seconds, when checking for Cinder volume deletion

<retry_vol_create_on_error>        True                                     If set to True, volumes in error status are deleted and re-created (1)(2)

<log_path>                         /var/log/backend-storage-connector.log   Path to the log file

<astute>                           /etc/swift_backend_astute.yaml           Path to the astute file

<use_multipath>                    True

<supported_storage_types>          centralized

<external_path_list>               /dev/mapper

<swift_logical_volume_path>        /dev/image/glance

<supported_logical_volume_path>    /dev/image/glance

(1)  Used only in repair mode.

(2)  In repair mode, volumes are deleted and re-created. Due to the latency of the backend storage system when deleting volumes, volume creation can fail if the backend storage system has limited capacity. In this case, the created volumes end up in error status. The number of retries and the time interval are automatically calculated depending on the size of the volume.



Reference List

[1] IP and VLAN plan, 2/102 62-CRA 119 1862/5 Uen


Copyright

© Ericsson AB 2016. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List
All trademarks mentioned herein are the property of their respective owners. These are shown in the document Trademark Information.
