-------------------------------------------------------------------------------
Software name      	Lenovo Storage V5030 Controller Firmware Update Bundle

Supported Model		System name		
			Lenovo Storage V5030
			
Version           	7.6.1.7

Issue date        	February 17, 2017

Prerequisites:		None

-------------------------------------------------------------------------------
WHAT THIS PACKAGE DOES
-------------------------------------------------------------------------------
This firmware update package enables you to update the controller firmware via:
1. Lenovo Storage V5030 - Web User Interface. 
2. Lenovo Storage V5030 - Command Line Interface (CLI).

-------------------------------------------------------------------------------
Version history
-------------------------------------------------------------------------------
<Complete version history of released code>

  The following versions of controller firmware have been released to date.


Summary of Changes

  Where: <   >        Package version
         [Important]  Important update
         (New)        New function or enhancement
         (Fix)        Correction to existing function


<v7.6.1.7>

- [Important] The following items were fixed in this release:

  A drive failure whilst an array rebuild is in progress can lead to both nodes in an IO 
  group asserting (HU01193)-Distributed RAID

  The management of FlashCopy grains during a restore process can miss some IOs (HU01447)-FlashCopy

  Due to an issue in the cache component nodes within an IO group are not able to form a 
  caching-pair and are serving IO through a single node (HU00762)-Reliability, Availability and Serviceability

  Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address 
  changing in the event of a fail over (HU01409)-Reliability, Availability and Serviceability

- (Fix) The following items were fixed in this release: 

  When a FlashCopy consistency group is stopped more than once in rapid succession a node assert 
  may result (HU01247)-FlashCopy 

  For certain config nodes the CLI Help commands may not work (HU01399)-Command Line Interface

  Node assert due to an accounting issue within the cache component (HU01432)-Cache


<v7.6.1.6>

- [Important] The following items were fixed in this release:

  Multiple nodes can experience a lease expiry when a FC port is having 
  communications issues (HU01109)

  Node asserts due to an issue with the state machine transition in 16Gb HBA 
  firmware (HU01221)

  Changing max replication delay from the default to a small non-zero number can 
  cause hung IOs leading to multiple node asserts and a loss of access (HU01226)-Global Mirror

  Making any config change that may interact with the primary change volume of a GMCV 
  relationship, whilst data is being actively copied, can result in a node assert 
  (HU01245)-Global Mirror With Change Volumes


- (Fix) The following items were fixed in this release:

  DRAID rebuild incorrectly reports event code 988300 (HU01050)-Distributed RAID

  3PAR controllers do not support OTUR commands resulting in device port exclusions 
  (HU01063)-Backend Storage

  Circumstances can arise where more than one array rebuild operation can share the 
  same CPU core resulting in extended completion times (HU01187)

  After upgrade to 7.6 or later iSCSI hosts may incorrectly be shown as offline in 
  the CLI (HU01234)-iSCSI

  When following the DMP for a 1685 event, if the option for "drive reseat has already 
  been attempted" is selected, the process to replace a drive is not started (HU01251)-GUI Fix Procedure

  CLI allows the input of carriage return characters into certain fields, after cluster 
  creation, resulting in invalid cluster VPD and failed node adds (HU01353)
		 	 

<v7.6.1.5>

- [Important] The following items were fixed in this release:

  An extremely rare timing window condition in the way GM handles write sequencing 
  may cause multiple node asserts (HU00271)-Global Mirror

  A limitation in the RAID anti-deadlock page reservation process may lead to an 
  Mdisk group going offline (HU01082)-Hosts
		
  Easy Tier may unbalance the workloads on Mdisks using specific Nearline SAS drives 
  due to incorrect thresholds for their performance (HU01140)-EasyTier
		
  Node assert, possibly due to a network problem, when a CLI mkippartnership is issued. 
  This may lead to loss of the config node, requiring a T2 recovery (HU01141)-IP Replication

  Node assert due to 16Gb HBA firmware receiving an invalid SCSI TUR command (HU01182)
		
  When removing multiple Mdisks a T2 may occur (HU01184)

  The handling of a rebooted node's return to the cluster can occasionally become 
  delayed resulting in a stoppage of inter cluster relationships (HU01223)-Metro Mirror

  Hardware offloading in 16G FC adapters has introduced a deadlock condition that 
  causes many driver commands to time out leading to a node assert (IT16337)


- (Fix) The following items were fixed in this release:

  The result of CLI commands are sometimes not promptly presented in the GUI 
  (HU01017)-Graphical User Interface

  An unresponsive testemail command, possibly due to a congested network, may result 
  in a single node assert (HU01074)

  svcconfig backup fails when an IO group name contains a hyphen (HU01089)

  For a small number of node asserts the SAS register values are retaining incorrect 
  values rendering the debug information invalid (HU01097)-Support Data Collection

  SVC supports SSH connections using RC4 based ciphers (HU01110)

  A single node assert may occur if CLI commands are received from the VASA provider 
  in very rapid succession. This is caused by a deadlock condition which prevents the 
  subsequent CLI command from completing (HU01194)-VVols

  Running the Comprestimator svctask analyzevdiskbysystem command may cause the config 
  node to assert (HU01198)-Comprestimator

  GUI displays an incorrect timezone description for Moscow (HU01212)-Graphical User Interface

  GUI and snap missing EasyTier heatmap information (HU01214)-Support Data Collection


<v7.6.1.4>

- (Fix) The following items were fixed in this release:

  When creating a snapshot on an ESX host, using VVols, a T2 recovery may 
  occur (HU01180)-Hosts,VVols

  Node asserts due to a timing window in the cache component. For more details 
  refer to the following Flash (IT14917)-Cache
		
  In certain configurations throttling too much may result in dropped IOs, which 
  can lead to a single node assert (HU01072)
		
  When using GMCV relationships if a node in an IO group loses communication with 
  its partner it may assert (HU01104)-Global Mirror With Change Volumes

  Where nodes are missing config files some services will be prevented from 
  starting (HU01143)
		
  CLI command lsportsas may show unexpected port numbering (IT15366)	


		 
<v7.6.1.3>

- (New) Initial Release

-------------------------------------------------------------------------------
INSTALLATION INSTRUCTIONS
-------------------------------------------------------------------------------

Review the procedures on how to update the Lenovo Storage V3700 V2 / V5030 Series systems under the "Updating the system" section
of the online documentation. 

Obtaining the software packages
Each update requires that you run the update test utility and then download the correct software package. Specific steps for these 
two processes are described in the topics that follow.


Update Test Utility
The update test utility indicates whether your current system has issues that need to be resolved before you update to the next 
level. 

The test utility is run as part of the system update process, or separately for drive firmware updates.

The most current version of this tool or the system software packages can be downloaded from the following support website:  

http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
or
http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030

After you download the update test utility, you have the following options:

    1. If you are using the management GUI, select Settings > System > Update System and click Update to run the test utility. 
       Complete directions are included in Updating the system automatically.

    2. If you are using the command-line interface (CLI), directions are included in 
       Updating the system automatically using the CLI.

    3. If you are using the manual update procedure, see Updating the system manually.

    4. To check drive firmware levels either by using the management GUI or the CLI, follow the directions in the drive firmware package topic.



1. Updating the System Automatically
You can update the entire system in a coordinated process with no user intervention after the update is initiated.

Before you update your system, review all of the topics in "Updating" section of the documentation to understand how the process 
works. 

Allow adequate time, up to a week in some cases, to check for potential problems or known bugs. Additional information on 
updating the system is available at the following website:

http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
or
http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030

When the system detects that the hardware is not running at the expected level, the system applies the correct firmware.

If you want to update without host I/O, shut down all hosts before you start the update.
Complete the following steps to update the system automatically:

    1. In the management GUI, select Settings > System > Update System.
    2. Click Update.
    3. Select the test utility and the code package that you downloaded from the support site. The test utility verifies that the
	   system is ready to be updated.
    4. Click Update. As the canisters on the system are updated, the management GUI displays the progress for each canister.

Monitor the update information in the management GUI to determine when the process is complete.

If the process stalls during the update, click Resume to continue with the update or click Cancel to abandon the update and 
restore the previous level of code. 

You can also use the CLI command applysoftware -resume to resume the update.


2. Updating the System Automatically using the CLI
You can use the command-line interface (CLI) to install updates.

Start here to update to a later version.

Important: Before you start an update, you must check for offline or degraded volumes. An offline volume can cause write data 
that was modified to be pinned in the system cache. This action prevents volume failover and causes a loss of input/output (I/O)
access during the update. If the fast_write_state is empty, a volume can be offline and not cause errors during the update.
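
For example, a quick check for offline volumes can be run from the CLI before you start (the volume name shown is a placeholder):

		lsvdisk -filtervalue status=offline
		lsvdisk volume_name

If the first command returns no volumes, all volumes are online. For any volume that is returned, check the fast_write_state field in the detailed lsvdisk output for that volume; only a volume with an empty fast_write_state can be offline without causing errors during the update.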

To update the system, follow these steps.

    1. Download, install, and run the latest version of the test utility to verify that there are no issues with the current system.
       You can download the most current version of this test utility tool and software package at the following website:
    
       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
       or
       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030

    2. Use PuTTY scp (pscp) to copy the update files to the node.
    3. Ensure that the update file was successfully copied.
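
       For example, from a workstation with PuTTY installed, step 2 might look like this (the file name and cluster IP 
       address are placeholders; /home/admin/upgrade is typically the upload directory on the configuration node):

		pscp INSTALL_7.6.1.7 superuser@cluster_ip:/home/admin/upgrade/

       You can then list the directory contents over SSH to confirm that the file size matches the downloaded package.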
       
	   Before you begin the update, you must be aware of the following situations:

        * The installation process fails under the following conditions:
            - If the code that is installed on the remote system is not compatible with the new code or if an intersystem 
	      communication error does not allow the system to check that the code is compatible.
            - If any node in the system has a hardware type that is not supported by the new code.
            - If the system determines that one or more volumes in the system would be taken offline by rebooting the nodes 
	      as part of the update process. You can find details about which volumes would be affected by using 
	      the lsdependentvdisks command. If you are prepared to lose access to data during the update, you can use the force 
	      flag to override this restriction.
        
	* The update is distributed to all the nodes in the system by using internal connections between the nodes.
        * Nodes are updated one at a time.
        * Nodes run the new code concurrently with normal system activity.
        * While the node is updated, it does not participate in I/O activity in the I/O group. As a result, all I/O activity for
	  the volumes in the I/O group is directed to the other node in the I/O group by the host multipathing software.
        * There is a thirty-minute delay between node updates. The delay allows time for the host multipathing software to 
	  rediscover paths to the nodes that are updated. There is no loss of access when another node in the I/O group is updated.
        * The update is not committed until all nodes in the system are successfully updated to the new code level. If all nodes 
	  successfully restart with the new code level, the new level is committed. When the new level is committed, the system 
	  vital product data (VPD) is updated to reflect the new code level.
        * Wait until all member nodes are updated and the update is committed before you invoke the new functions of the updated 
	  code.
        * Because the update process takes some time, the installation command completes as soon as the code level is verified by 
	  the system. To determine when the update is completed, you must either display the code level in the system VPD or look 
	  for the Software update complete event in the error/event log. If any node fails to restart with the new code level or 
	  fails at any other time during the process, the code level is backed off.
        * During an update, the version number of each node is updated when the code is installed and the node is restarted. The 
	  system code version number is updated when the new code level is committed.
        * When the update starts, an entry is made in the error or event log and another entry is made when the update completes
 	  or fails.
    
   4. Issue this CLI command to start the update process:

		applysoftware -file software_update_file

      Where software_update_file is the name of the code update file in the directory you copied the file to in step 2.
	
      If the system identifies any volumes that would go offline as a result of rebooting the nodes as part of the system update, 
      the code update does not start. An optional force parameter can be used to indicate that the update continues regardless of 
      the problem identified. If you use the force parameter, you are prompted to confirm that you want to continue. The behavior
      of the force parameter changed, and it is no longer required when you apply an update to a system with errors in the event log.
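
      For example, if the file copied in step 2 was named INSTALL_7.6.1.7 (a placeholder name), the command would be:

		applysoftware -file INSTALL_7.6.1.7

      Add the force parameter only if you accept the loss of access to any volumes reported by lsdependentvdisks.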
    
   5. Issue the following CLI command to check the status of the code update process:

		lsupdate

      This command displays success when the update is complete.

      Note: If a status of stalled_non_redundant is displayed, proceeding with the remaining set of node updates might result in 
	    offline volumes. Contact a service representative to complete the update.
    
   6. To verify that the update successfully completed, issue the lsnodecanistervpd CLI command for each node that is in the 
      system. The code version field displays the new code level.
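
      For example, for a two-node system (the node IDs shown are placeholders for the IDs reported by lsnodecanister):

		lsnodecanistervpd 1
		lsnodecanistervpd 2

      In the output of each command, confirm that the code version field shows the new level, for example 7.6.1.7.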

   When a new code level is applied, it is automatically installed on all the nodes that are in the system.
   
   Note: An automatic system update can take up to 30 minutes per node.


3. Updating the System Manually
During an automatic update procedure, the system updates each of the canisters systematically. The automatic method is the 
preferred procedure for updating the code on the canisters; however, to provide more flexibility in the update process, 
you can also update each canister manually.

During this manual procedure, you prepare the update, remove a canister from the system, update the code on the canister, and 
return the canister to the system. You repeat this process for the remaining canisters until the last canister is removed from 
the system. Every canister must be updated to the same code level. You cannot interrupt the update and switch to installing a 
different level. After all the canisters are updated, you must confirm the update to complete the process. The confirmation 
restarts each canister in order and takes about 30 minutes to complete.


Prerequisites

Start here to update to a later version.

Before you begin to update nodes manually, ensure that the following requirements are met:

    - The latest update test utility was downloaded to your workstation.
    - The latest system update package was downloaded to your workstation.
    - All node canisters are online.
    - Errors in the system event log are addressed and marked as fixed.
    - There are no volumes, MDisks, or storage systems with Degraded or Offline status.
    - The service assistant IP is configured on every node in the system.
    - The system superuser password is known.
    - The current system configuration was backed up and saved.
    - You have physical access to the hardware.

The following actions are not required; they are suggestions.

    - Stop all MetroMirror, Global Mirror, or HyperSwap operations during the update procedure.
    - Avoid running FlashCopy operations during this procedure.
    - Avoid migrating or formatting volumes during this procedure.
    - Stop collecting performance data for the system.
    - Stop any automated jobs that access the system before you update.
    - Ensure that no other processes are running on the system before you update.
    - If you want to update without host I/O, shut down all hosts before you start the update.


Preparing to update the system

The procedure to prepare for an update is run once for each system.

To prepare the system for an update, follow these steps:

    1. In the management GUI, select Settings > System > Update System. The system automatically checks for updates and lists the 
       current level.
    2. Click Update.
    3. Select the test utility and update package that you downloaded. Enter the code level that you are updating to, such as 7.6.1.3.
    4. Click Update.  Wait for the files to upload.
    5. Select the type of update and click Finish.
       The test utility runs automatically and identifies any issues that it finds. Fix all problems before you proceed to step 6.
    6. When all issues are resolved, click Resume.
       The system is ready for a manual update when the status shows Prepared.


Preparing to update individual nodes

Before you update nodes individually, ensure that the system is ready for the update.

Before you begin:

Verify the prerequisites listed above.

After you verify that the prerequisites for a manual update are met, follow these steps:

    1. Use the management GUI to display the nodes in the system and record this information. For all the nodes in the system, 
       verify the following information:
        - Confirm that both canisters are online.
        - Identify which canister is acting as the configuration node.
        - Record the service IP address for each canister.
    2. If you are using the management GUI, view External Storage to ensure that everything is online and also verify that 
       internal storage is present.
    3. If you are using the command-line interface, submit this command for each storage system:

		lscontroller controller_name_or_controller_id

       where controller_name_or_controller_id is the name or ID of the storage system. Confirm that each storage system has 
       degraded=no status.
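
       For example, if a storage system is named controller0 (a placeholder name):

		lscontroller controller0

       In the detailed output, confirm that the degraded field shows no.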
    
    4. Verify that all hosts have all paths available to all the volumes that are presented to them by the system. Ensure that 
       the multipathing driver is fully redundant, with every path available and online.
    5. If you did not do so previously, download the installation package for the level that you want to install. You can 
       download the most current package from the following website:

       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v3700v2
       or
       http://support.lenovo.com/us/en/products/servers/lenovo-storage/v5030


	  
Updating all nodes except the configuration node

When you are updating nodes individually, before you update the configuration node, you must update all of the non-configuration 
nodes in the clustered system.

Before you update all the non-configuration nodes on the system, you must record the name of each node on the system. To view 
the node name for each node in the management GUI, select Settings > Network > iSCSI.

To update a non-configuration node canister, follow these steps:

    1. Ensure that all hosts have all paths available to volumes that are presented to them by the system.  If there are any 
       unavailable paths, wait up to 30 minutes, and check again. If any path is still not available, investigate and resolve 
       the connection problem before you start the code update. Ensure that the multi-pathing driver is fully redundant, with 
       every path available, and online. During the update, you might see multi-pathing driver errors that are related to paths 
       that are going away, and to the increased multi-pathing driver error count.
    
    2. In the management GUI, check that no incomplete volume synchronization tasks are in progress. In the status bars that are 
       at the bottom of the page, expand Running Tasks to display the progress of actions. Ensure that all synchronization tasks 
       are complete before you remove the node.
    
    3. In the management GUI, use the dynamic image of the system to view the canisters. Right-click the canister that you want 
       to remove and select Remove.
    
    4. Open a web browser and type https://service_ip in the address field, where service_ip is the service IP address for the 
       node that you removed. The service assistant login page displays.
    
    5. Verify that the node is no longer a member of the system by checking that the node status, as shown in the display, is 
       service.  The node has an error code of 690. The removed node is no longer visible to the system. If the node status is 
       active, you are probably connected to the wrong node.
    
    6. On the service assistant home page, select Update manually from the left menu.
    
    Attention: Each node must run the same code version; nodes with different versions are not compatible.
    
    7. Select the correct update package, and click Update. The node restarts and updates. Access to the service assistant is 
       lost while the node restarts, but you can still access the service assistant from a different node.
    
    8. Eventually the node canister that you removed and updated automatically rejoins the system.  When the canister is online, 
       go to step 9.
    
    9. If you have any remaining nodes to update that are not configuration nodes, repeat this task for the next 
       non-configuration node that is not yet updated, starting at step 1.
    
    10. After all the nodes except the configuration node are updated and added back to the system, rename each node canister to 
        the name it had before it was removed and updated. In the management GUI, select Settings > Network > iSCSI.


Updating the configuration node

After all the other nodes are updated on the system, you can update the configuration node.

To update the configuration node, follow these steps:

    1. Ensure that all hosts have all paths available to volumes that are mapped to those hosts. If not, wait up to 30 minutes 
       and repeat this check. If some paths are still unavailable, investigate and resolve these connection problems before you 
       continue the system code update.
    
    2. Before you update the configuration node on the system, you must record the name of the node. To view the node name in the 
       management GUI, select Settings > Network > iSCSI.
    
    3. In the management GUI, check that there are no incomplete volume synchronization tasks in progress. Click Running Tasks.
    
    4. Remove the configuration node from the system.
    
    Note: When the configuration node is removed from the system, the SSH connection to the system closes.
    
    5. Open a web browser and type https://service_assistant_ip in the address field, where service_assistant_ip is the 
       IP address for the service assistant on the node that was removed.
    
    6. On the service assistant home page, click Exit service state and press Go. The node is automatically added to the system. 
       Because the process of adding the node automatically updates the code on the node, it takes some time before the node is 
       fully online.

    This action automatically updates the code on this last node, which was the configuration node.
    
    7. After the configuration node is updated and added back to the system, rename the node canister to the name it had before 
       it was removed and updated. In the management GUI, select Settings > Network > iSCSI.

	   
Completing the update

After the configuration node is successfully rebooted and updated, verify the update and return the system to its original state 
by following these steps.

    1. Confirm the update:
        a. Enter the lsupdate command to determine if the update requires a further completion step.
        b. If the lsupdate command shows that the status is system_completion_required, enter svctask applysoftware -complete in 
           the command-line interface.
    
	Each canister is restarted in order. The update process takes approximately 30 minutes, at about 5 minutes per node. During the 
	confirmation step, the system is operational but no other updates can be started until the current update is confirmed.
    
	2. Verify that the system is running at the correct version and that there are no other errors in the system that must be resolved.

    3. Verify that all the nodes are online.
    4. Verify that all volumes are online. In the management GUI, select Volumes > Volumes.
    5. Verify that all managed disks (MDisks) are online. In the management GUI, select Pools > MDisks by Pools.
    6. Restart any services, advanced functions, or scripts that were stopped before the update.

You completed the manual update.

----------------------------------------------------------------------
Limitations and considerations
----------------------------------------------------------------------

It is not possible to apply software updates to all drives in systems running V7.6 where a distributed array contains more than 
16 drives. 

As a consequence the following actions should NOT be undertaken:

   Using CLI command applydrivesoftware with the "-all" option;
   Using the "Upgrade All" option in the GUI under "Pools > Internal Storage > Actions".
  
To work around this restriction, perform the software update on one drive at a time. This can be achieved using the CLI 
command applydrivesoftware with the "-drive" option for each drive. 
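
For example (the file name and drive IDs are placeholders; use lsdrive to list the drive IDs in your system):

		lsdrive
		applydrivesoftware -file drive_firmware_file -type firmware -drive 0
		applydrivesoftware -file drive_firmware_file -type firmware -drive 1

Repeat the applydrivesoftware command for each drive ID, allowing each update to complete before you start the next.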

-------------------------------------------------------------------------------

The following new CLI commands are not supported in V7.6:

    addvolumecopy
    rmvolumecopy

Consequently, it is not possible to convert an existing volume to or from a HyperSwap volume by using these CLI commands or the 
management GUI. Administration of a HyperSwap volume can still be performed using the same CLI commands that were supported in 
version 7.5, configuring the component vdisks and active-active relationship that form the HyperSwap volume.

Additionally, it is not currently supported to use the '-consistgrp' parameter on the new mkvolume command when creating a 
HyperSwap volume. A HyperSwap volume can still be assigned to a consistency group after it has been created using the 
management GUI or CLI.

Attempting to use a feature which is not supported will return the following error:

CMMVC7205E The command failed because it is not supported.

This is a temporary restriction that will be lifted in a future V7.6 PTF.

-------------------------------------------------------------------------------

Systems using Internet Explorer 11 may receive an erroneous "The software version is not supported" message when viewing 
the "Update System" panel in the GUI. Internet Explorer 10 and Firefox do not experience this issue.

-------------------------------------------------------------------------------

Global Mirror relationships must be stopped when performing an upgrade.

-------------------------------------------------------------------------------

An automatic update may stall if the Enhanced Stretched System function is configured on a system with exactly four nodes and 
non-mirrored VDisks. It is therefore recommended to update such systems using the manual update procedure documented in the 
Knowledge Center. This does not apply to conventional Stretched Systems or Enhanced Stretched Systems with two, six or eight 
nodes. This also does not apply to Enhanced Stretched Systems that solely contain mirrored VDisks with a copy in both 
site 1 and site 2. 

-------------------------------------------------------------------------------

Intra-System Global Mirror not supported 

-------------------------------------------------------------------------------

Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0 

-------------------------------------------------------------------------------

If an update stalls or fails, contact Lenovo Support for further assistance.

-------------------------------------------------------------------------------



-----------------------------------------------------------------------------
TRADEMARKS
-----------------------------------------------------------------------------

* The following are registered trademarks of Lenovo.

  Lenovo
  The Lenovo Logo
  ThinkServer

* Intel is a registered trademark of Intel Corporation.

* Microsoft and Windows are registered trademarks of Microsoft Corporation.

* IBM is a registered trademark of International Business Machines Corporation.

Other company, product, and service names may be registered trademarks,
trademarks or service marks of others.

LENOVO PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.

Some jurisdictions do not allow disclaimer of express or implied
warranties in certain transactions, therefore, this statement may not
apply to you. This information could include technical inaccuracies
or typographical errors.  Changes are periodically made to the
information herein; these changes will be incorporated in new editions
of the publication. Lenovo may make improvements and/or changes in
the product(s) and/or the program(s) described in this publication at
any time without notice.

BY FURNISHING THIS DOCUMENT, LENOVO GRANTS NO LICENSES TO ANY PATENTS
OR COPYRIGHTS.

(C) Copyright Lenovo 2001-2016. All rights reserved.