-------------------------------------------------------------------------------
Software name      	Lenovo Storage V3700 V2 Controller Firmware Update Bundle

Supported Models   	Lenovo Storage V3700 V2
			Lenovo Storage V3700 V2 XP

Version           	7.8.0.2

Issue date        	January 27, 2017

Prerequisites:		None

-------------------------------------------------------------------------------
WHAT THIS PACKAGE DOES
-------------------------------------------------------------------------------
This firmware update enables you to update the controller firmware via:
1. Lenovo Storage V3700 V2 - Web User Interface. 
2. Lenovo Storage V3700 V2 - Command Line Interface (CLI).

-------------------------------------------------------------------------------
Version history
-------------------------------------------------------------------------------
<Complete version history of released code>

  The following versions of Controller Firmware have been released to date.


Summary of Changes

  Where: <   >        Package version
         [Important]  Important update
         (New)        New function or enhancement
         (Fix)        Correction to existing function



<v7.8.0.2>

- [Important] The following issues have been resolved with this release:

  Node asserts due to inconsistencies arising from the way cache interacts with 
  compression (HU01412)-Compression, Cache

  Upgrading to v7.7.1.5 or v7.8.0.1 with encryption enabled will result in multiple
  T2 recoveries and a loss of access (HU01442)-Encryption, System Update

  Due to an issue in the cache component, nodes within an IO group are not able to
  form a caching-pair and are serving IO through a single node (HU00762)-Reliability, 
  Availability and Serviceability

  Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP 
  address changing in the event of a fail over (HU01409)-Reliability, Availability and Serviceability

  Systems running v7.6.1 or earlier, with compressed volumes, that upgrade to v7.8.0 or 
  later will fail when the first node asserts and enters a service state (HU01426)-System Update


- (Fix) The following issues have been resolved with this release:
  
  Node assert due to an accounting issue within the cache component (HU01432)-Cache

  The event log indicates incorrect enclosure type (HU01459)-System Monitoring

  When rmmdisk is used with "force", the validation process is supplied with incorrect
  parameters, triggering a node assert (IT18752)-Command Line Interface



<v7.8.0.1>

- [Important] The following issues have been resolved with this release:

  Mishandling of extent migration following a rmarray command can lead to multiple simultaneous 
  node warmstarts with a loss of access (HU01382)-DRAID

  
<v7.8.0.0>

- [Important] The following issues have been resolved with this release:

  A fault in a backend controller can cause excessive path state changes leading to node 
  asserts and offline volumes (HU01021 & HU01157)-Backend Storage

  A drive failure whilst an array rebuild is in progress can lead to both nodes in an IO 
  group asserting (HU01193)-Distributed RAID

  An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an IO 
  group asserting (HU01267)-Global Mirror With Change Volumes

  A rare timing condition can cause hung IO leading to warmstarts on both nodes in an IO 
  group. Probability can be increased in the presence of failing drives (HU01320)-Hosts

  A port translation issue between v7.5 or earlier and v7.7.0 or later requires a T2 recovery 
  to complete an upgrade (HU01340)-System Update

  Under certain rare conditions FC mappings not in a consistency group can be added to a 
  special internal consistency group resulting in a T2 recovery (HU01392)-FlashCopy

  A small timing window issue exists where a node assert or power failure can lead to repeated 
  asserts of that node until a node rescue is performed (HU01177)-Reliability, Availability and Serviceability

  The handling of a rebooted node's return to the cluster can occasionally become delayed,
  resulting in a stoppage of inter-cluster relationships (HU01223)-Metro Mirror

  Upgrade to 7.7.x fails on Storwize systems in the replication layer where a T3 recovery was 
  performed in the past (HU01268)-System Update

  During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout 
  and node asserts (HU01347)-Thin Provisioning

  Resource leak in the handling of Read Intensive drives leads to offline volumes (HU01379)

  A rare timing issue in FlashCopy may lead to a node asserting repeatedly and then entering a 
  service state (HU01381)-FlashCopy

  Node asserts due to a timing window in the cache component (IT14917)-Cache

- (Fix) The following issues have been resolved with this release:

  Single node assert due to hung IO caused by cache deadlock (HU00831)-Cache

  Some older backend controller code levels do not support C2 commands, resulting in 1370 entries
  in the Event Log for every detectmdisk (HU01098)-Backend Storage

  The LDAP password is visible in the auditlog (HU01213)

  Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate 
  entries (HU01228)-Reliability, Availability and Serviceability

  The DMP for a 3105 event does not identify the correct problem canister (HU01229)

  A host aborting an outstanding logout command can lead to a single node assert (HU01230)

  When a FlashCopy consistency group is stopped more than once in rapid succession, a node assert
  may result (HU01247)-FlashCopy

  Node assert due to an issue in the compression optimisation process (HU01264)-Compression

  A rare timing conflict between two processes may lead to a node assert (HU01269)

  SSH authentication fails if multiple SSH keys are configured on the client (HU01304)

  Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may 
  experience a node assert (HU01323)-Volume Mirroring

  lsfabric command may not list all logins when it is used with parameters (HU01370)-Command Line Interface

  Where an issue with Global Mirror causes excessive IO delay, a timeout may not function,
  resulting in a node assert (HU01374)-Global Mirror

  For certain config nodes the CLI Help commands may not work (HU01399)-Command Line Interface

  Unexpected 45034 1042 entries in the Event Log (IT17302)-System Monitoring

  When a vdisk is moved between IO groups a node may warmstart (IT18086) 

  
-------------------------------------------------------------------------------
INSTALLATION INSTRUCTIONS
-------------------------------------------------------------------------------

Review the procedures on how to update the Lenovo Storage V3700 V2 / V5030 Series systems under the "Updating the system" section
of the online documentation. 

Obtaining the software packages
Each update requires that you run the update test utility and then download the correct software package. Specific steps for these 
two processes are described in the topics that follow.


Update Test Utility
The update test utility indicates whether your current system has issues that need to be resolved before you update to the next 
level. 

The test utility is run as part of the system update process, and can also be used to check drive firmware.
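
From the CLI, the utility is typically installed and then run against the target code level. The following sequence is a sketch; the file name is a placeholder, and the svcupgradetest command name is an assumption based on the equivalent Storwize CLI:

		applysoftware -file test_utility_file
		svcupgradetest -v 7.8.0.2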

The most current version of this tool or the system software packages can be downloaded from the following support website:  

http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
or
http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030

After you download the update test utility, you have the following options:

    1. If you are using the management GUI, select Settings > System > Update System and click Update to run the test utility.
       Complete directions are included in Updating the system automatically.

    2. If you are using the command-line interface (CLI), directions are included in 
       Updating the system automatically using the CLI.

    3. If you are using the manual update procedure, see Updating the system manually.

    4. To check drive firmware levels by using either the management GUI or the CLI, follow the directions in the drive firmware package documentation.



1. Updating the System Automatically
You can update the entire system in a coordinated process with no user intervention after the update is initiated.

Before you update your system, review all of the topics in "Updating" section of the documentation to understand how the process 
works. 

Allow adequate time, up to a week in some cases, to check for potential problems or known bugs. Additional information on
updating the system is available at the following website:

http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
or
http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030

When the system detects that the hardware is not running at the expected level, the system applies the correct firmware.

If you want to update without host I/O, shut down all hosts before you start the update.
Complete the following steps to update the system automatically:

    1. In the management GUI, select Settings > System > Update System.
    2. Click Update.
    3. Select the test utility and the code package that you downloaded from the support site. The test utility verifies that the
	   system is ready to be updated.
    4. Click Update. As the canisters on the system are updated, the management GUI displays the progress for each canister.

Monitor the update information in the management GUI to determine when the process is complete.

If the process stalls during the update, click Resume to continue with the update or click Cancel to abandon the update and 
restore the previous level of code. 

You can also use the CLI command applysoftware -resume to resume the update.
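
For example, after a stall you can check the update status and resume from a CLI session (a minimal sketch):

		lsupdate
		applysoftware -resume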


2. Updating the System Automatically using the CLI
You can use the command-line interface (CLI) to install updates.

Start here to update to a later version.

Important: Before you start an update, you must check for offline or degraded volumes. An offline volume can cause write data 
that was modified to be pinned in the system cache. This action prevents volume failover and causes a loss of input/output (I/O)
access during the update. If the fast_write_state is empty, a volume can be offline and not cause errors during the update.
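
For example, a quick CLI check for volumes that could block the update (a minimal sketch; filter values may vary by code level):

		lsvdisk -filtervalue status=offline
		lsvdisk -filtervalue status=degraded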

To update the system, follow these steps.

    1. Download, install, and run the latest version of the test utility to verify that there are no issues with the current system.
       You can download the most current version of this test utility tool and software package at the following website:
    
       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
       or
       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030

    2. Use PuTTY scp (pscp) to copy the update files to the node.
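       A hypothetical pscp invocation (the file name, user name, and target directory are illustrative assumptions; adjust for your environment):

		pscp software_update_file superuser@cluster_ip:/home/admin/upgrade
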
    3. Ensure that the update file was successfully copied.
       
	   Before you begin the update, you must be aware of the following situations:

        * The installation process fails under the following conditions:
            - If the code that is installed on the remote system is not compatible with the new code or if an intersystem 
	      communication error does not allow the system to check that the code is compatible.
            - If any node in the system has a hardware type that is not supported by the new code.
            - If the system determines that one or more volumes in the system would be taken offline by rebooting the nodes 
	      as part of the update process. You can find details about which volumes would be affected by using 
	      the lsdependentvdisks command (see the example after this list). If you are prepared to lose access to data during the update, you can use the force
	      flag to override this restriction.
        
	* The update is distributed to all the nodes in the system by using internal connections between the nodes.
        * Nodes are updated one at a time.
        * Nodes run the new code concurrently with normal system activity.
        * While the node is updated, it does not participate in I/O activity in the I/O group. As a result, all I/O activity for
	  the volumes in the I/O group is directed to the other node in the I/O group by the host multipathing software.
        * There is a thirty-minute delay between node updates. The delay allows time for the host multipathing software to 
	  rediscover paths to the nodes that are updated. There is no loss of access when another node in the I/O group is updated.
        * The update is not committed until all nodes in the system are successfully updated to the new code level. If all nodes 
	  successfully restart with the new code level, the new level is committed. When the new level is committed, the system 
	  vital product data (VPD) is updated to reflect the new code level.
        * Wait until all member nodes are updated and the update is committed before you invoke the new functions of the updated 
	  code.
        * Because the update process takes some time, the installation command completes as soon as the code level is verified by 
	  the system. To determine when the update is completed, you must either display the code level in the system VPD or look 
	  for the Software update complete event in the error/event log. If any node fails to restart with the new code level or 
	  fails at any other time during the process, the code level is backed off.
        * During an update, the version number of each node is updated when the code is installed and the node is restarted. The 
	  system code version number is updated when the new code level is committed.
        * When the update starts, an entry is made in the error or event log and another entry is made when the update completes
 	  or fails.
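
        As referenced above, a sketch of checking for dependent volumes before using the force flag (the node ID is a placeholder):

		lsdependentvdisks -node node_id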
    
   4. Issue this CLI command to start the update process:

		applysoftware -file software_update_file

      Where software_update_file is the name of the code update file in the directory you copied the file to in step 2.
	
      If the system identifies any volumes that would go offline as a result of rebooting the nodes as part of the system update, 
      the code update does not start. An optional force parameter can be used to indicate that the update continues regardless of 
      the problem identified. If you use the force parameter, you are prompted to confirm that you want to continue. The behavior
      of the force parameter changed, and it is no longer required when you apply an update to a system with errors in the event log.
    
   5. Issue the following CLI command to check the status of the code update process:

		lsupdate

      This command displays success when the update is complete.

      Note: If a status of stalled_non_redundant is displayed, proceeding with the remaining set of node updates might result in 
	    offline volumes. Contact a service representative to complete the update.
    
   6. To verify that the update successfully completed, issue the lsnodecanistervpd CLI command for each node that is in the 
      system. The code version field displays the new code level.
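
      For example, to list the node canisters and then display the VPD for one of them (the ID is illustrative):

		lsnodecanister
		lsnodecanistervpd 1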

   When a new code level is applied, it is automatically installed on all the nodes that are in the system.
   
   Note: An automatic system update can take up to 30 minutes per node.


3. Updating the System Manually
During an automatic update procedure, the system updates each of the canisters systematically. The automatic method is the 
preferred procedure for updating the code on the canisters; however, to provide more flexibility in the update process, 
you can also update each canister manually.

During this manual procedure, you prepare the update, remove a canister from the system, update the code on the canister, and 
return the canister to the system. You repeat this process for the remaining canisters until the last canister is removed from 
the system. Every canister must be updated to the same code level. You cannot interrupt the update and switch to installing a 
different level. After all the canisters are updated, you must confirm the update to complete the process. The confirmation 
restarts each canister in order and takes about 30 minutes to complete.


Prerequisites

Start here to update to a later version.

Before you begin to update nodes manually, ensure that the following requirements are met:

    - The latest update test utility was downloaded to your workstation.
    - The latest system update package was downloaded to your workstation.
    - All node canisters are online.
    - Errors in the system event log are addressed and marked as fixed.
    - There are no volumes, MDisks, or storage systems with Degraded or Offline status.
    - A service assistant IP address is configured for every node in the system.
    - The system superuser password is known.
    - The current system configuration was backed up and saved.
    - You have physical access to the hardware.

The following actions are not required; they are suggestions.

    - Stop all MetroMirror, Global Mirror, or HyperSwap operations during the update procedure.
    - Avoid running FlashCopy operations during this procedure.
    - Avoid migrating or formatting volumes during this procedure.
    - Stop collecting performance data for the system.
    - Stop any automated jobs that access the system before you update.
    - Ensure that no other processes are running on the system before you update.
    - If you want to update without host I/O, shut down all hosts before you start the update.


Preparing to update the system

The procedure to prepare for an update is run once for each system.

To prepare the system for an update, follow these steps:

    1. In the management GUI, select Settings > System > Update System. The system automatically checks for updates and lists the
       current level.
    2. Click Update.
    3. Select the test utility and update package that you downloaded. Enter the code level that you are updating to, such as 7.8.0.2.
    4. Click Update.  Wait for the files to upload.
    5. Select the type of update and click Finish.
       The test utility runs automatically and identifies any issues that it finds. Fix all problems before you proceed to step 6.
    6. When all issues are resolved, click Resume.
       The system is ready for a manual update when the status shows Prepared.


Preparing to update individual nodes

Before you update nodes individually, ensure that the system is ready for the update.

Before you begin:

Verify the prerequisites listed above.

After you verify that the prerequisites for a manual update are met, follow these steps:

    1. Use the management GUI to display the nodes in the system and record this information. For all the nodes in the system, 
       verify the following information:
        - Confirm that both canisters are online.
        - Identify which canister is acting as the configuration node.
        - Record the service IP address for each canister.
    2. If you are using the management GUI, view External Storage to ensure that everything is online and also verify that 
       internal storage is present.
    3. If you are using the command-line interface, submit this command for each storage system:

		lscontroller controller_name_or_controller_id

       where controller_name_or_controller_id is the name or ID of the storage system. Confirm that each storage system has 
       degraded=no status.
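
       For example, for the storage system with ID 0 (an illustrative ID):

		lscontroller 0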
    
    4. Verify that all hosts have all paths available to all the volumes that are presented to them by the system. Ensure that 
       the multipathing driver is fully redundant, with every path available and online.
    5. If you did not do so previously, download the installation package for the level that you want to install. You can 
       download the most current package from the following website:

       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
       or
       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030


	  
Updating all nodes except the configuration node

When you are updating nodes individually, before you update the configuration node, you must update all of the non-configuration 
nodes in the clustered system.

Before you update all the non-configuration nodes on the system, you must record each name for each node on the system. To view 
the node name for each node in the management GUI, select Settings > Network > iSCSI.

To update a non-configuration node canister, follow these steps:

    1. Ensure that all hosts have all paths available to volumes that are presented to them by the system.  If there are any 
       unavailable paths, wait up to 30 minutes, and check again. If any path is still not available, investigate and resolve 
       the connection problem before you start the code update. Ensure that the multi-pathing driver is fully redundant, with 
       every path available and online. During the update, you might see multi-pathing driver errors related to paths
       going away and an increased multi-pathing driver error count.
    
    2. In the management GUI, check that no incomplete volume synchronization tasks are in progress. In the status bars that are 
       at the bottom of the page, expand Running Tasks to display the progress of actions. Ensure that all synchronization tasks 
       are complete before you remove the node.
    
    3. In the management GUI, use the dynamic image of the system to view the canisters. Right-click the canister that you want 
       to remove and select Remove.
    
    4. Open a web browser and type https://service_ip in the address field, where service_ip is the service IP address for the
       node that you deleted. The service assistant login page displays.
    
    5. Verify that the node is no longer a member of the system by checking that the node status, as shown in the display, is 
       service.  The node has an error code of 690. The removed node is no longer visible to the system. If the node status is 
       active, you are probably connected to the wrong node.
    
    6. On the service assistant home page, select Update manually from the left menu.
    
    Attention: Each node must run the same code version; nodes with different versions are not compatible.
    
    7. Select the correct update package, and click Update. The node restarts and updates. Access to the service assistant is 
       lost while the node restarts, but you can still access the service assistant from a different node.
    
    8. Eventually the node canister that you removed and updated automatically rejoins the system.  When the canister is online, 
       go to step 10.
    
    9. After all the nodes except the configuration node are updated and added back to the system, rename each node canister to
       the name it had before it was removed and updated. In the management GUI, select Settings > Network > iSCSI.
    
    10. If you have any remaining nodes to update that are not configuration nodes, repeat this task for the next
        non-configuration node that is not yet updated, starting at step 1.


Updating the configuration node

After all the other nodes are updated on the system, you can update the configuration node.

To update the configuration node, follow these steps:

    1. Ensure that all hosts have all paths available to volumes that are mapped to those hosts. If not, wait up to 30 minutes 
       and repeat this check. If some paths are still unavailable, investigate and resolve these connection problems before you 
       continue the system code update.
    
    2. Before you update the configuration node on the system, you must record the name of the node. To view the node name in the
       management GUI, select Settings > Network > iSCSI.
    
    3. In the management GUI, check that there are no incomplete volume synchronization tasks in progress. Click Running Tasks.
    
    4. Remove the configuration node from the system.
    
    Note: When the configuration node is removed from the system, the SSH connection to the system closes.
    
    5. Open a web browser and type https://service_assistant_ip in the address field. The service assistant IP address is the
       IP address for the service assistant on the node that was deleted.
    
    6. On the service assistant home page, click Exit service state and press Go. The node is automatically added to the system. 
       Because the process of adding the node automatically updates the code on the node, it takes some time before the node is 
       fully online.

    This action automatically updates the code on this last node, which was the configuration node.
    
    7. After the configuration node is updated and added back to the system, rename the node canister to the name it had before 
       it was removed and updated. In the management GUI, select Settings > Network > iSCSI.

	   
Completing the update

After the configuration node is successfully rebooted and updated, verify the update and return the system to its original state 
by following these steps.

    1. Confirm the update:
        a. Enter the lsupdate command to determine if the update requires a further completion step.
        b. If the lsupdate command shows that the status is system_completion_required, enter svctask applysoftware -complete in 
           the command-line interface.
    
	Each canister is restarted in order. The update process takes approximately 30 minutes, with about 5 minutes per node. During the
	confirmation step, the system is operational, but no other updates can be started until the current update is confirmed.
    
	2. Verify that the system is running at the correct version and that no errors in the system remain to be resolved.

    3. Verify that all the nodes are online.
    4. Verify that all volumes are online. In the management GUI, select Volumes > Volumes.
    5. Verify that all managed disks (MDisks) are online. In the management GUI, select Pools > MDisks by Pools.
    6. Restart any services, advanced functions, or scripts that were stopped before the update.

You completed the manual update.

-------------------------------------------------------------------------------
Limitations and considerations
-------------------------------------------------------------------------------

Some configuration information will be displayed incorrectly in Spectrum Control.

This has no functional impact and will be resolved in a future release of Spectrum Control.

-------------------------------------------------------------------------------

No SAS Direct Attached Windows 2008 Hyper-V hosts will be able to connect to systems running v7.8.0.0.

-------------------------------------------------------------------------------

The drive limit remains 1056 drives per cluster.

-------------------------------------------------------------------------------

When a system first upgrades from pre-7.7.0 to 7.7.0 or later, the in-memory audit log will be cleared. This means the catauditlog 
CLI command, and the GUI, will not show any commands that were issued before the upgrade.

The upgrade test utility will write the existing auditlog to a file in /dumps/audit on the config node. If the auditlog is needed 
at a later point, this file can be copied from the config node.
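
For example, a sketch of copying the saved audit log off the config node with pscp (the file name is a placeholder):

		pscp superuser@config_node_ip:/dumps/audit/auditlog_file local_directory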

Subsequent upgrades will not affect the contents of the auditlog.

-------------------------------------------------------------------------------

Priority Flow Control for iSCSI is not currently supported.

This is a temporary restriction that will be lifted in a future V7.8 PTF.

-------------------------------------------------------------------------------

It is not possible to replace the mid-plane in a SAS expansion enclosure.
If a mid-plane must be replaced then a new enclosure will be provided.

This is a temporary restriction that will be lifted in a future V7.8 PTF.

-------------------------------------------------------------------------------

It is not possible to apply software updates to all drives in systems running V7.6 where a distributed array contains more than 
16 drives. 

As a consequence the following actions should NOT be undertaken:

  • Using the CLI command applydrivesoftware with the "-all" option;
  • Using the "Upgrade All" option in the GUI under "Pools > Internal Storage > Actions".
  
To work around this restriction, please perform the software update on one drive at a time. This can be achieved using the CLI
command applydrivesoftware with the "-drive" option for each drive. 
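
A sketch of a single-drive update (the firmware file name and drive ID are placeholders; the -type value is an assumption based on the equivalent Storwize CLI syntax):

		applydrivesoftware -file drive_firmware_file -type firmware -drive drive_id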

-------------------------------------------------------------------------------

The following new CLI commands are not supported in V7.6:

    addvolumecopy
    rmvolumecopy

Consequently, it is not possible to convert an existing volume to or from a HyperSwap volume by using these CLI commands or the 
management GUI. Administration of a HyperSwap volume can still be performed using the same CLI commands that were supported in 
version 7.5, configuring the component vdisks and active-active relationship that form the HyperSwap volume.

Additionally, it is not currently supported to use the '-consistgrp' parameter on the new mkvolume command when creating a 
HyperSwap volume. A HyperSwap volume can still be assigned to a consistency group after it has been created using the 
management GUI or CLI.
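
For illustration, an active-active relationship of the kind referred to above can be created with the v7.5-level CLI; the vdisk names, cluster name, and -activeactive parameter below are assumptions based on the equivalent Storwize syntax:

		mkrcrelationship -master master_vdisk -aux aux_vdisk -cluster cluster_name -activeactive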

Attempting to use a feature which is not supported will return the following error:

CMMVC7205E The command failed because it is not supported.

This is a temporary restriction that will be lifted in a future V7.6 PTF.

-------------------------------------------------------------------------------

Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing 
the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue.

-------------------------------------------------------------------------------

Global Mirror relationships must be stopped when performing an upgrade.

-------------------------------------------------------------------------------

An automatic update may stall if the Enhanced Stretched System function is configured on a system with exactly four nodes and 
non-mirrored VDisks. It is therefore recommended to update such systems using the manual update procedure documented in the 
Knowledge Center. This does not apply to conventional Stretched Systems or Enhanced Stretched Systems with two, six or eight 
nodes. This also does not apply to Enhanced Stretched Systems that solely contain mirrored VDisks with a copy in both 
site 1 and site 2. 

-------------------------------------------------------------------------------

Intra-system Global Mirror is not supported.

-------------------------------------------------------------------------------

Hosts may disconnect when using VMware vSphere 5.5.0 Update 2 or vSphere 6.0.

-------------------------------------------------------------------------------

If an update stalls or fails, contact Lenovo Support for further assistance.

-------------------------------------------------------------------------------



-----------------------------------------------------------------------------
TRADEMARKS
-----------------------------------------------------------------------------

* The following are registered trademarks of Lenovo.

  Lenovo
  The Lenovo Logo
  ThinkServer

* Intel is a registered trademark of Intel Corporation.

* Microsoft and Windows are registered trademarks of Microsoft Corporation.

* IBM is a registered trademark of International Business Machines Corporation.

Other company, product, and service names may be registered trademarks,
trademarks or service marks of others.

LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.

Some jurisdictions do not allow disclaimer of express or implied
warranties in certain transactions, therefore, this statement may not
apply to you. This information could include technical inaccuracies
or typographical errors.  Changes are periodically made to the
information herein; these changes will be incorporated in new editions
of the publication. Lenovo may make improvements and/or changes in
the product(s) and/or the program(s) described in this publication at
any time without notice.

BY FURNISHING THIS DOCUMENT, LENOVO GRANTS NO LICENSES TO ANY PATENTS
OR COPYRIGHTS.

(C) Copyright Lenovo 2001-2016. All rights reserved.