Data Collection Guideline
Cloud Execution Environment

Contents

1   Introduction
1.1   Scope
1.2   Target Groups
1.3   Prerequisites
2   Workflow
3   Collect Data
4   Submit CSR
5   Additional Information
5.1   Split Files before Adding to CSR

1   Introduction

The purpose of this document is to describe how to collect troubleshooting data if a problem is experienced with the Cloud Execution Environment (CEE). This document also provides the procedure to enclose the collected data in a Customer Service Request (CSR).

1.1   Scope

This guideline is applicable for CEE releases, starting from CEE 6.6.

1.2   Target Groups

This document is intended for internal use and for external customers raising a CSR.

1.3   Prerequisites

This section describes the prerequisites that must be fulfilled before collecting data.

1.3.1   Conditions

Ensure that vFuel is installed and running.

1.3.2   Tools and Equipment

No tools are required.

1.3.3   Documents

Not applicable.

1.3.4   User Access

The operator must have access to the deployment-specific credentials.

For information on Identity and Access Management (IdAM), refer to the Security User Guide.

2   Workflow

The workflow for collecting troubleshooting data is as follows:

  1. Perform automatic data collection as described in Section 3.
  2. Submit a CSR to the next level of support as described in Section 4.
Note:  
All logs and configuration data are collected automatically by the ACDC.py script. The collected archive is intended as a starting point for further investigation; after the issue has been identified, more specific logs may be required.

3   Collect Data

Collect troubleshooting data by running the data collection script:

  1. Log in to the vFuel node.
  2. Run the ACDC.py script:

    /usr/bin/ACDC.py [--blades <faulty_nodes> ] [-h | --help]

    The script collects data from vFuel and the available vCICs. If the optional --blades option is used, it also collects data from the specified compute hosts and ScaleIO servers. Replace <faulty_nodes> with the node names of the servers from which data is to be collected; multiple nodes can be given as a comma-separated list. The node names can be retrieved using the fuel node command. List only the necessary nodes, as listing too many can greatly increase the size of the archive. For example: --blades compute-0-2,compute-0-3,scaleio-0-5

    Note:  
    Only add nodes that are reachable over SSH. Adding unreachable nodes causes data collection to fail.

    If technical problems are experienced during data collection, contact the next level of support.

    Result:
    The output file data_collection_<date>.tar.gz can be found in the /var directory.
Note:  
Help information, including the list of supported parameters of the ACDC.py script, can be obtained with the -h or --help command-line argument.

4   Submit CSR

Enclose the created archive in a CSR and submit the CSR to the next level of support:

  1. Transfer the resulting /var/data_collection_<date>.tar.gz file out of the system.
  2. Submit the file as part of the CSR.
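How the archive is transferred in step 1 depends on the deployment. As a minimal sketch, assuming SSH access to the vFuel node and a date-stamped archive name (the remote user, host placeholder, and date format are illustrative assumptions, not specified by this guide), the file could be copied off the system with scp:

```shell
# Illustrative archive name; the actual <date> format is produced by ACDC.py.
ARCHIVE="data_collection_$(date +%Y-%m-%d).tar.gz"
echo "Expecting /var/${ARCHIVE} on the vFuel node"
# From a workstation outside the system (replace <vfuel_ip> as appropriate):
# scp root@<vfuel_ip>:/var/${ARCHIVE} .
```

The scp line is left commented because the host address and credentials are deployment-specific; any supported file-transfer method can be used instead.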

5   Additional Information

5.1   Split Files before Adding to CSR

Before adding the archive file to the CSR as an enclosure, split it into pieces that fit within the applicable enclosure size limit:

split -d -b <piece>MB --verbose data_collection_<date>.tar.gz data_collection_<date>.tar.gz.part.

Replace <piece> with a size in MB that is smaller than the enclosure limit, for example, 500.

Example 1   Split Archive File

split -d -b 1MB --verbose virtualbox-5.2_5.2.6-120293_Ubuntu_xenial_amd64.deb \
virtualbox-5.2_5.2.6-120293_Ubuntu_xenial_amd64.deb.part.

The pieces can be reassembled with the cat command. Add this information to the CSR:

cat data_collection_<date>.tar.gz.part.* > data_collection_<date>.tar.gz
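The split-and-reassemble round trip can be sketched end to end with a dummy file (file names and the ~3 MB size are illustrative stand-ins for the real archive):

```shell
# Round-trip demo: split a dummy ~3 MB file into 1 MB pieces and reassemble it.
tmpdir=$(mktemp -d)
cd "$tmpdir"
head -c 3000000 /dev/urandom > data_collection_demo.tar.gz
# Split into numbered pieces, as in Example 1 (produces .part.00, .part.01, .part.02).
split -d -b 1MB data_collection_demo.tar.gz data_collection_demo.tar.gz.part.
# Reassemble the pieces; the shell glob expands them in the correct numeric order.
cat data_collection_demo.tar.gz.part.* > reassembled.tar.gz
# Verify the reassembled file is byte-identical to the original.
cmp data_collection_demo.tar.gz reassembled.tar.gz && echo "round trip OK"
```

Because split's -d option produces zero-padded numeric suffixes, lexical glob order matches piece order, so the cat concatenation is safe.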