Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 7.7.1 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 7.7.1.0 and 7.7.1.9. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 5 June 2020.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links
Note: Detailed build version numbers are included in the Update Matrices, in the Useful Links section.

1. New Features

The following new features have been introduced in the 7.7.1 release:

2. Known Issues and Restrictions

Note: For clarity, the term "node" will be used to refer to a SAN Volume Controller node or Storwize system node canister.
Details  Introduced

In the GUI, when filtering volumes by host, if there are more than 50 host objects the host list will not include the host names.

This issue will be fixed in a future PTF.

7.7.1.7

Systems with encrypted managed disks cannot be upgraded to v7.7.1.5.

This is a temporary restriction that is policed by the software upgrade test utility. Please check this release note regularly for updates.

7.7.1.5

There is a small possibility that during an upgrade, a 2145-SV1 node may fail to reboot correctly.

If any 2145-SV1 node fails to come back online after 30 minutes, check whether the power LED on the front of the machine is lit. If the power LED is not lit, remove the input power from both PSUs, wait 10 seconds, then re-insert the power and turn the node on. The software upgrade will then continue as normal.

The new BIOS installed by this upgrade fixes the issue that causes the reboot to fail, so future upgrades are not affected.

7.7.1.1

When a system first upgrades from pre-7.7.0 to 7.7.0 or later, the in-memory audit log will be cleared. This means the catauditlog CLI command, and the GUI, will not show any commands that were issued before the upgrade.

The upgrade test utility will write the existing auditlog to a file in /dumps/audit on the config node. If the auditlog is needed at a later point, this file can be copied from the config node.

Subsequent upgrades will not affect the contents of the auditlog.

7.7.0.0
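As an illustrative sketch only: the saved audit log file can be retrieved from the config node with any ssh/scp-capable client. The address "cluster_ip" and the file name are placeholders, not values from this release note; list the directory first to find the actual file name.

```shell
# List the files saved under /dumps/audit on the config node
# ("cluster_ip" is a placeholder for the system's management IP).
ssh superuser@cluster_ip 'svcinfo lsdumps -prefix /dumps/audit'

# Copy the required audit log file to the local workstation
# (substitute the real file name reported by lsdumps).
scp superuser@cluster_ip:/dumps/audit/auditlog_file .
```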

Priority Flow Control for iSCSI is only supported on Brocade VDX 10GbE switches.

This is a temporary restriction that will be lifted in a future V7.7 PTF.

7.7.0.0

It is not possible to replace the mid-plane in an SVC 12F SAS expansion enclosure.

If an SVC 12F mid-plane must be replaced, a new enclosure will be provided.

This is a temporary restriction that will be lifted in a future V7.7 PTF.

7.7.0.0
V3500 and V3700 systems with 4GB of memory cannot be upgraded to V7.6 or later.

Note that V3500 systems with 4GB of memory will need to upgrade to V7.5.0.4, or a later V7.5.0 PTF, before attempting to increase memory to 8GB.

7.6.0.0
Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0

Refer to this flash for more information

n/a
If an update stalls or fails, contact IBM Support for further assistance.

n/a

3. Issues Resolved

This release contains all of the fixes included in the 7.7.0.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier  Link for additional Information  Resolved in
CVE-2016-10708 ibm10717661 7.7.1.9
CVE-2016-10142 ibm10717931 7.7.1.9
CVE-2017-11176 ibm10717931 7.7.1.9
CVE-2018-1433 ssg1S1012263 7.7.1.9
CVE-2018-1434 ssg1S1012263 7.7.1.9
CVE-2018-1438 ssg1S1012263 7.7.1.9
CVE-2018-1461 ssg1S1012263 7.7.1.9
CVE-2018-1462 ssg1S1012263 7.7.1.9
CVE-2018-1463 ssg1S1012263 7.7.1.9
CVE-2018-1464 ssg1S1012263 7.7.1.9
CVE-2018-1465 ssg1S1012263 7.7.1.9
CVE-2018-1466 ssg1S1012263 7.7.1.9
CVE-2016-6210 ssg1S1012276 7.7.1.9
CVE-2016-6515 ssg1S1012276 7.7.1.9
CVE-2013-4312 ssg1S1012277 7.7.1.9
CVE-2015-8374 ssg1S1012277 7.7.1.9
CVE-2015-8543 ssg1S1012277 7.7.1.9
CVE-2015-8746 ssg1S1012277 7.7.1.9
CVE-2015-8812 ssg1S1012277 7.7.1.9
CVE-2015-8844 ssg1S1012277 7.7.1.9
CVE-2015-8845 ssg1S1012277 7.7.1.9
CVE-2015-8956 ssg1S1012277 7.7.1.9
CVE-2016-2053 ssg1S1012277 7.7.1.9
CVE-2016-2069 ssg1S1012277 7.7.1.9
CVE-2016-2384 ssg1S1012277 7.7.1.9
CVE-2016-2847 ssg1S1012277 7.7.1.9
CVE-2016-3070 ssg1S1012277 7.7.1.9
CVE-2016-3156 ssg1S1012277 7.7.1.9
CVE-2016-3699 ssg1S1012277 7.7.1.9
CVE-2016-4569 ssg1S1012277 7.7.1.9
CVE-2016-4578 ssg1S1012277 7.7.1.9
CVE-2016-4581 ssg1S1012277 7.7.1.9
CVE-2016-4794 ssg1S1012277 7.7.1.9
CVE-2016-5412 ssg1S1012277 7.7.1.9
CVE-2016-5828 ssg1S1012277 7.7.1.9
CVE-2016-5829 ssg1S1012277 7.7.1.9
CVE-2016-6136 ssg1S1012277 7.7.1.9
CVE-2016-6198 ssg1S1012277 7.7.1.9
CVE-2016-6327 ssg1S1012277 7.7.1.9
CVE-2016-6480 ssg1S1012277 7.7.1.9
CVE-2016-6828 ssg1S1012277 7.7.1.9
CVE-2016-7117 ssg1S1012277 7.7.1.9
CVE-2016-10229 ssg1S1012277 7.7.1.9
CVE-2016-0634 ssg1S1012278 7.7.1.9
CVE-2017-5647 ssg1S1010892 7.7.1.7
CVE-2017-5638 ssg1S1010113 7.7.1.6
CVE-2016-6796 ssg1S1010114 7.7.1.6
CVE-2016-6816 ssg1S1010114 7.7.1.6
CVE-2016-6817 ssg1S1010114 7.7.1.6
CVE-2016-2177 ssg1S1010115 7.7.1.6
CVE-2016-2178 ssg1S1010115 7.7.1.6
CVE-2016-2183 ssg1S1010115 7.7.1.6
CVE-2016-6302 ssg1S1010115 7.7.1.6
CVE-2016-6304 ssg1S1010115 7.7.1.6
CVE-2016-6306 ssg1S1010115 7.7.1.6
CVE-2016-5696 ssg1S1010116 7.7.1.6
CVE-2016-2834 ssg1S1010117 7.7.1.6
CVE-2016-5285 ssg1S1010117 7.7.1.6
CVE-2016-8635 ssg1S1010117 7.7.1.6
CVE-2016-2183 ssg1S1010205 7.7.1.6
CVE-2016-5546 ssg1S1010205 7.7.1.6
CVE-2016-5547 ssg1S1010205 7.7.1.6
CVE-2016-5548 ssg1S1010205 7.7.1.6
CVE-2016-5549 ssg1S1010205 7.7.1.6
CVE-2016-5385 ssg1S1009581 7.7.1.3
CVE-2016-5386 ssg1S1009581 7.7.1.3
CVE-2016-5387 ssg1S1009581 7.7.1.3
CVE-2016-5388 ssg1S1009581 7.7.1.3
CVE-2016-3092 ssg1S1009284 7.7.1.2
CVE-2016-4430 ssg1S1009282 7.7.1.0
CVE-2016-4431 ssg1S1009282 7.7.1.0
CVE-2016-4433 ssg1S1009282 7.7.1.0
CVE-2016-4436 ssg1S1009282 7.7.1.0
CVE-2016-4461 ssg1S1010883 7.7.1.0

3.2 APARs Resolved

APAR  Affected Products  Severity  Description  Resolved in  Feature Tags
HU01866 SVC HIPER A faulty PSU sensor in a node can fill the SEL log, causing the service processor (BMC) to disable logging. If a snap is subsequently taken from the node, a timeout will occur and the node will be taken offline. It is possible for this to affect both nodes in an IO group 7.7.1.9 System Monitoring
HU01767 All Critical Reads of 4K/8K from an array can, under exceptional circumstances, return invalid data 7.7.1.9 RAID, Thin Provisioning
HU01771 SVC, V7000 High Importance An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline 7.7.1.9 System Monitoring
HU01445 SVC, V7000 Suggested Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart 7.7.1.9
HU01624 All Suggested GUI response can become very slow in systems with a large number of compressed and uncompressed volumes 7.7.1.9 Graphical User Interface
HU01628 All Suggested In the GUI, on the Volumes page, whilst using the filter function some volume entries may not be displayed until the page has completed loading 7.7.1.9 Graphical User Interface
HU01664 All Suggested A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade 7.7.1.9 System Update
HU01687 All Suggested For the 'volumes by host', 'ports by host' and 'volumes by pool' pages in the GUI, when the number of items is greater than 50 the item names will not be displayed 7.7.1.9 Graphical User Interface
HU01698 All Suggested A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted 7.7.1.9 Compression
HU01730 SVC Suggested When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter 7.7.1.9 GUI Fix Procedure
HU01763 SVC Suggested A single node warmstart may occur on a DH8 config node when an inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node 7.7.1.9 System Monitoring, Command Line Interface
HU01706 All HIPER Areas of volumes written with all-zero data may contain non-zero data. For more details refer to the following Flash 7.7.1.8
HU00744 (reverted) All Suggested This APAR has been reverted in light of issues with the fix. It will be re-applied in a future PTF 7.7.1.8
HU01239 & HU01255 & HU01586 All HIPER The presence of a faulty SAN component can delay lease messages between nodes, leading to a cluster-wide lease expiry and consequential loss of access 7.7.1.7 Reliability Availability Serviceability
HU01505 All HIPER A non-redundant drive experiencing many errors can be taken offline, obstructing rebuild activity 7.7.1.7 Backend Storage, RAID
HU01646 All HIPER A new failure mechanism in the 16Gb HBA driver can, under certain circumstances, lead to a lease expiry of the entire cluster 7.7.1.7 Reliability Availability Serviceability
HU01267 All Critical An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an IO group warmstarting 7.7.1.7 Global Mirror With Change Volumes
HU01490 All Critical When attempting to add/remove multiple IQNs to/from a host, the tables that record host-WWPN mappings can become inconsistent, resulting in repeated node warmstarts across IO groups 7.7.1.7 iSCSI
HU01519 V7000 Critical One PSU may silently fail, leading to the possibility of a dual node reboot 7.7.1.7 Reliability Availability Serviceability
HU01528 SVC Critical Both nodes may warmstart due to Sendmail throttling 7.7.1.7
HU01549 All Critical During a system upgrade, Hyper-V clustered hosts may experience a loss of access to any iSCSI-connected volumes 7.7.1.7 iSCSI, System Update
HU01572 All Critical SCSI-3 commands from unconfigured WWPNs may result in multiple warmstarts, leading to a loss of access 7.7.1.7 iSCSI
HU01635 All Critical A slow memory leak in the host layer can lead to an out-of-memory condition, resulting in offline volumes or performance degradation 7.7.1.7 Hosts, Performance
IT20627 All Critical When Samsung RI drives are used as quorum disks a drive outage can occur. Under some circumstances this can lead to a loss of access 7.7.1.7 Quorum
HU00762 All High Importance Due to an issue in the cache component, nodes within an IO group are not able to form a caching pair and serve IO through a single node 7.7.1.7 Reliability Availability Serviceability
HU01416 All High Importance ISL configuration activity may cause a cluster-wide lease expiry 7.7.1.7 Reliability Availability Serviceability
HU01428 V7000, V5000, V3700, V3500 High Importance A scheduling issue adversely affects performance, resulting in node warmstarts 7.7.1.7 Reliability Availability Serviceability
HU01477 V7000, V5000, V3700, V3500 High Importance Due to the way enclosure data is read, it is possible for a firmware mismatch between nodes to occur during an upgrade 7.7.1.7 System Update
HU01488 V7000, V5000 High Importance SAS transport errors on an enclosure slot can affect an adjacent slot, leading to double drive failures 7.7.1.7 Drives
HU01506 All High Importance Creating a vdisk copy with the -autodelete option can cause a timer scheduling issue, leading to node warmstarts 7.7.1.7 Volume Mirroring
HU01569 SVC High Importance When compression utilisation is high the config node may exhibit longer IO response times than non-config nodes 7.7.1.7 Compression
HU01579 All High Importance In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive 7.7.1.7 Quorum, Drives
HU01609 & IT15343 All High Importance When the system is busy the compression component may be paged out of memory, resulting in latency that can lead to warmstarts 7.7.1.7 Compression
HU01614 All High Importance After a node is upgraded, hosts defined as TPGS may have paths set to inactive 7.7.1.7 Hosts
HU01636 V5000, V3700, V3500 High Importance A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller 7.7.1.7 Hosts
HU01638 All High Importance When upgrading to v7.6 or later, if there is another cluster in the same zone at v5.1 or earlier, nodes will warmstart and the upgrade will fail 7.7.1.7 System Update
IT17564 All High Importance All nodes in an IO group may warmstart when a DRAID array experiences drive failures 7.7.1.7 Distributed RAID
IT19726 SVC High Importance Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for an FC command 7.7.1.7 Hosts
IT21383 SVC, V7000, V5000 High Importance Heavy IO may provoke inconsistencies in resource allocation, leading to node warmstarts 7.7.1.7 Reliability Availability Serviceability
IT22376 V5000 High Importance Upgrade of V5000 Gen 2 systems with 16GB node canisters can become stalled, with multiple warmstarts on the first node to be upgraded 7.7.1.7 System Update
HU00744 (reverted in v7.7.1.8) All Suggested Single node warmstart due to an accounting issue within the cache component 7.7.1.7
HU00763 & HU01237 V7000, V5000, V3700, V3500 Suggested A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed 7.7.1.7 Quorum
HU01098 All Suggested Some older backend controller code levels do not support C2 commands, resulting in 1370 entries in the Event Log for every detectmdisk 7.7.1.7 Backend Storage
HU01228 All Suggested Automatic T3 recovery may fail because the handling of quorum registration generates duplicate entries 7.7.1.7 Reliability Availability Serviceability
HU01229 V7000, V5000, V3700, V3500 Suggested The DMP for a 3105 event does not identify the correct problem canister 7.7.1.7
HU01332 All Suggested Performance monitor and Spectrum Control show zero CPU utilisation for compression 7.7.1.7 System Monitoring
HU01385 All Suggested A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while IO is being forwarded to the associated copy 7.7.1.7 HyperSwap
HU01391 & HU01581 V7000, V5000, V3700, V3500 Suggested Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware 7.7.1.7 Drives
HU01430 V5000, V3700, V3500 Suggested Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts 7.7.1.7
HU01457 V7000 Suggested In a hybrid V7000 cluster, where one IO group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI 7.7.1.7 Graphical User Interface
HU01466 SVC, V7000, V5000 Suggested Stretched cluster and HyperSwap IO routing does not work properly due to incorrect ALUA data 7.7.1.7 HyperSwap, Hosts
HU01467 All Suggested Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools 7.7.1.7 System Monitoring
HU01469 V3700, V3500 Suggested Resource exhaustion in the iSCSI component can result in a node warmstart 7.7.1.7 iSCSI
HU01484 All Suggested During a RAID array rebuild there may be node warmstarts 7.7.1.7 RAID
HU01566 SVC Suggested After upgrading, numerous 1370 errors are seen in the Event Log 7.7.1.7 System Update
HU01582 All Suggested A compression issue in IP replication can result in a node warmstart 7.7.1.7 IP Replication
HU01474 SVC, V7000 HIPER Host writes to a read-only secondary volume trigger IO timeout warmstarts 7.7.1.6 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU01479 All HIPER The handling of drive reseats can sometimes allow IO to occur before the drive has been correctly failed, resulting in offline mdisks 7.7.1.6 Distributed RAID
HU01483 All HIPER The mkdistributedarray command may get stuck in the prepare state. Any interaction with the volumes in that array will result in multiple warmstarts 7.7.1.6 Distributed RAID
HU01500 All HIPER Node warmstarts can occur when the iSCSI Ethernet MTU is changed 7.7.1.6 iSCSI
HU01225 & HU01330 & HU01412 All Critical Node warmstarts due to inconsistencies arising from the way cache interacts with compression 7.7.1.6 Compression, Cache
HU01371 SVC, V7000, V5000 High Importance A remote copy command related to HyperSwap may hang, resulting in a warmstart of the config node 7.7.1.6 HyperSwap
HU01480 All High Importance Under some circumstances the config node does not fail over properly when using IPv6, adversely affecting management access via GUI and CLI 7.7.1.6 Graphical User Interface, Command Line Interface
HU01473 All Suggested Easy Tier migrates an excessive number of cold extents to an overloaded nearline array 7.7.1.6 EasyTier
HU01487 All Suggested Small increase in read response time for source volumes with incremental FC maps 7.7.1.6 FlashCopy, Global Mirror With Change Volumes
HU01498 All Suggested GUI may be exposed to CVE-2017-5638 (see Section 3.1) 7.7.1.6
IT18752 All Suggested When the config node processes an lsdependentvdisks command, issued via the GUI, that has a large number of objects in its parameters, it may warmstart 7.7.1.6 Graphical User Interface
HU01193 All HIPER A drive failure whilst an array rebuild is in progress can lead to both nodes in an IO group warmstarting 7.7.1.5 Distributed RAID
HU01382 All HIPER Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access 7.7.1.5 Distributed RAID
HU01340 All Critical A port translation issue between v7.5 or earlier and v7.7.0 or later requires a T2 recovery to complete an upgrade 7.7.1.5 System Update
HU01392 All Critical Under certain rare conditions, FC mappings not in a consistency group can be added to a special internal consistency group, resulting in a T2 recovery 7.7.1.5 FlashCopy
HU01223 All High Importance The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships 7.7.1.5 Metro Mirror
HU01254 SVC High Importance A fluctuation of input AC power can cause a 584 error on a node 7.7.1.5 Reliability Availability Serviceability
HU01402 V7000 High Importance Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available 7.7.1.5 Reliability Availability Serviceability
HU01409 All High Importance Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a failover 7.7.1.5 Reliability Availability Serviceability
HU01410 SVC High Importance An issue in the handling of FlashCopy map preparation can cause both nodes in an IO group to be put into service state 7.7.1.5 FlashCopy
IT14917 All High Importance Node warmstarts due to a timing window in the cache component 7.7.1.5 Cache
HU00831 All Suggested Single node warmstart due to hung IO caused by a cache deadlock 7.7.1.5 Cache
HU01022 SVC, V7000 Suggested A Fibre Channel adapter encountered a bit parity error, resulting in a node warmstart 7.7.1.5 Hosts
HU01269 All Suggested A rare timing conflict between two processes may lead to a node warmstart 7.7.1.5
HU01399 All Suggested For certain config nodes the CLI Help commands may not work 7.7.1.5 Command Line Interface
HU01432 All Suggested Node warmstart due to an accounting issue within the cache component 7.7.1.5 Cache
IT17302 V5000, V3700, V3500 Suggested Unexpected 45034 1042 entries in the Event Log 7.7.1.5 System Monitoring
IT18086 All Suggested When a vdisk is moved between IO groups a node may warmstart 7.7.1.5
HU01379 All HIPER Resource leak in the handling of SSDs leads to offline volumes 7.7.1.4 Backend Storage
HU01783 All Critical Replacing a failed drive in a DRAID array with a smaller drive may result in multiple T2 recoveries, putting all nodes in service state with error 564 and/or 550 7.7.1.4 Distributed RAID
HU01347 All High Importance During an upgrade to v7.7.1 a deadlock in node communications can occur, leading to a timeout and node warmstarts 7.7.1.4 Thin Provisioning
HU01381 All High Importance A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state 7.7.1.4 FlashCopy
HU01247 All Suggested When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result 7.7.1.4 FlashCopy
HU01323 All Suggested Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart 7.7.1.4 Volume Mirroring
HU01374 All Suggested Where an issue with Global Mirror causes excessive IO delay, a timeout may not function, resulting in a node warmstart 7.7.1.4 Global Mirror
HU01226 All High Importance Changing max replication delay from the default to a small non-zero number can cause hung IOs, leading to multiple node warmstarts and a loss of access 7.7.1.3 Global Mirror
HU01257 All High Importance Large (>1MB) write IOs to volumes can lead to a hung IO condition, resulting in node warmstarts 7.7.1.3
HU01386 All High Importance Where latency between sites is greater than 1ms, host write latency can be adversely impacted. This can be more likely in the presence of large IO transfer sizes or high IOPS 7.7.1.3 HyperSwap
HU01017 All Suggested The results of CLI commands are sometimes not promptly presented in the GUI 7.7.1.3 Graphical User Interface
HU01227 All Suggested High volumes of events may cause email notifications to become stalled 7.7.1.3 System Monitoring
HU01234 All Suggested After upgrade to 7.6 or later, iSCSI hosts may incorrectly be shown as offline in the CLI 7.7.1.3 iSCSI
HU01251 V7000, V5000, V3700, V3500 Suggested When following the DMP for a 1685 event, if the option for "drive reseat has already been attempted" is selected, the process to replace a drive is not started 7.7.1.3 GUI Fix Procedure
HU01292 All Suggested Under some circumstances the re-calculation of grains to clean can take too long after a FlashCopy done event has been sent, resulting in a node warmstart 7.7.1.3 FlashCopy
IT17102 All Suggested Where the maximum number of IO requests for an FC port has been exceeded, if a SCSI command with an unsupported opcode is received from a host the node may warmstart 7.7.1.3
HU01272 All HIPER Replacing a drive in a system with a DRAID array can result in T2 recovery warmstarts. For more details refer to the following Flash 7.7.1.2 Distributed RAID
HU01208 V7000, V5000, V3700, V3500 HIPER After upgrading to v7.7 or later from v7.5 or earlier, and then creating a DRAID array with a node reset, the system may encounter repeated node warmstarts which will require a T3 recovery 7.7.1.1 Distributed RAID
HU01140 All High Importance Easy Tier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance 7.7.1.1 EasyTier
HU00271 All High Importance An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts 7.7.1.1 Global Mirror
HU00734 All High Importance Multiple node warmstarts due to a deadlock condition during a RAID group rebuild 7.7.1.1
HU01109 SVC, V7000, V5000 High Importance Multiple nodes can experience a lease expiry when an FC port is having communications issues 7.7.1.1
HU01118 V7000, V5000, V3700, V3500 High Importance Due to a firmware issue, both nodes in a V7000 Gen 2 may be powered off 7.7.1.1
HU01141 All High Importance Node warmstart (possibly due to a network problem) when a CLI mkippartnership is issued. This may lead to loss of the config node, requiring a T2 recovery 7.7.1.1 IP Replication
HU01180 All High Importance When creating a snapshot on an ESX host using VVols a T2 may occur 7.7.1.1 Hosts, VVols
HU01182 SVC, V7000, V5000 High Importance Node warmstarts due to 16Gb HBA firmware receiving an invalid SCSI TUR command 7.7.1.1
HU01184 All High Importance When removing multiple mdisks a T2 may occur 7.7.1.1
HU01185 All High Importance iSCSI target closes the connection when there is a mismatch in sequence number 7.7.1.1 iSCSI
HU01189 All High Importance Improvement to DRAID dependency calculation when handling multiple drive failures 7.7.1.1 Distributed RAID
HU01210 SVC High Importance A small number of systems have broken or disabled TPMs. For these systems the generation of a new master key may fail, preventing the system joining a cluster 7.7.1.1
HU01221 SVC, V7000, V5000 High Importance Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware 7.7.1.1
HU01250 All High Importance When using lsvdisklba to find a bad block on a compressed volume the vdisk can go offline 7.7.1.1 Compression
HU01516 All High Importance When node configuration data exceeds 8K in size, some user-defined settings may not be stored permanently, resulting in node warmstarts 7.7.1.1 Reliability Availability Serviceability
IT16148 All High Importance When accelerate mode is enabled, because promote/swap plans are prioritized over demote, Easy Tier only demotes 1 extent every 5 minutes 7.7.1.1 EasyTier
IT16337 SVC, V7000, V5000 High Importance Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out, leading to a node warmstart. For more details refer to the following Flash 7.7.1.1
HU01024 V7000, V5000, V3700, V3500 Suggested A single node warmstart may occur when the SAS firmware's ECC checking detects a single bit error. The warmstart clears the error condition in the SAS chip 7.7.1.1
HU01050 All Suggested DRAID rebuild incorrectly reports event code 988300 7.7.1.1 Distributed RAID
HU01063 SVC, V7000, V5000 Suggested 3PAR controllers do not support OTUR commands, resulting in device port exclusions 7.7.1.1 Backend Storage
HU01074 All Suggested An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart 7.7.1.1
HU01143 All Suggested Where nodes are missing config files some services will be prevented from starting 7.7.1.1
HU01155 All Suggested When a lsvdisklba or lsmdisklba command is invoked for an MDisk with a back-end issue, a node warmstart may occur 7.7.1.1 Compression
HU01187 All Suggested Circumstances can arise where more than one array rebuild operation shares the same CPU core, resulting in extended completion times 7.7.1.1
HU01192 V7000 Suggested Some V7000 Gen 1 systems have an unexpected WWNN value which can cause a single node warmstart when upgrading to v7.7.0.0 7.7.1.1
HU01194 All Suggested A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing 7.7.1.1 VVols
HU01198 All Suggested Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart 7.7.1.1 Comprestimator
HU01214 All Suggested GUI and snap missing EasyTier heatmap information 7.7.1.1 Support Data Collection
HU01219 SVC, V7000, V5000 Suggested Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware 7.7.1.1
HU01244 All Suggested When a node is transitioning from offline to online it is possible for excessive CPU time to be used on another node in the cluster, which may lead to a single node warmstart 7.7.1.1
HU01258 SVC Suggested A compressed vdisk copy will result in an unexpected 1862 message when a site/node fails over in a stretched cluster configuration 7.7.1.1 Compression

4. Useful Links

Description Link
Support Websites
Update Matrices, including detailed build version
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Supported Drive Types and Firmware Levels
SAN Volume Controller and Storwize Family Inter-cluster Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning