Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 7.8 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 7.8.0.0 and 7.8.1.10. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 22 October 2019.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new features have been introduced in the 7.8.0 release:

The following new features have been introduced in the 7.8.1 release:

The following feature has been introduced in the 7.8.1.9 release:

2. Known Issues and Restrictions

Note: For clarity, the term "node" will be used to refer to a SAN Volume Controller node or Storwize system node canister.
Details | Introduced
During upgrade, node failover does not display the normal alert message prompting a refresh of the GUI. Customers will need to refresh the GUI manually after upgrading to v7.8.1.6.

This is a temporary restriction that will be lifted in a future PTF.

7.8.1.6

In the GUI, when filtering volumes by host, if there are more than 50 host objects, then the host list will not include the host names.

This issue will be fixed in a future PTF.

7.8.1.3

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server endpoints on the system to occasionally report as degraded or offline. The issue occurs intermittently when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs, Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data.

7.8.0.0
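As an illustrative sketch only (not part of the original restriction text), the condition can typically be observed from the standard Spectrum Virtualize CLI; the exact output columns vary by code level:

  lskeyserver    # lists the configured key servers and the reported status of each endpoint
  lseventlog     # check recent entries for Error Code 1785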

There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.

The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress.

The system can only be recovered from this state using a T4 Recovery procedure.

It is also possible that SAS-attached storage arrays go offline.

7.8.0.0

Spectrum Virtualize as Software customers should not enable the Transparent Cloud Tiering function.

This restriction will be removed under APAR HU01495.

7.8.0.0

Due to memory constraints, Storwize V7000 Gen1 control enclosures with one or more compressed volumes cannot be upgraded to V7.8.0.0 or later.

7.8.0.0

Some configuration information will be incorrect in Spectrum Control.

This does not have any functional impact and will be resolved in a future release of Spectrum Control.

7.8.0.0

SAS direct-attached Windows 2008 Hyper-V hosts will not be able to connect to Storwize systems running v7.8.0.0.

7.8.0.0

When a system first upgrades from pre-7.7.0 to 7.7.0 or later, the in-memory audit log will be cleared. This means the catauditlog CLI command, and the GUI, will not show any commands that were issued before the upgrade.

The upgrade test utility will write the existing auditlog to a file in /dumps/audit on the config node. If the auditlog is needed at a later point, this file can be copied from the config node.

Subsequent upgrades will not affect the contents of the auditlog.

7.7.0.0
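As an illustrative sketch only (the exact file name written by the upgrade test utility will vary, and command options may differ by code level), the saved audit log can be listed and then copied off the configuration node from a workstation:

  lsdumps -prefix /dumps/audit    # list the saved audit log file(s) on the config node
  scp superuser@<cluster_ip>:/dumps/audit/<auditlog_filename> .    # copy the file locally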

Priority Flow Control for iSCSI is only supported on Brocade VDX 10GbE switches.

This is a temporary restriction that will be lifted in a future V7.8 PTF.

7.7.0.0
Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0

Refer to this flash for more information

n/a
If an update stalls or fails, contact IBM Support for further assistance.

n/a
The following restrictions were valid for previous PTFs, but have now been lifted:

Systems attached to 3PAR controllers cannot be upgraded to V7.8.1.1 or later.

This is a temporary restriction.

7.8.1.1

No upgrade or service activity should be attempted while a Transparent Cloud Tiering Snapshot task is in progress.

7.8.0.0

The drive limit remains 1056 drives per cluster.

7.8.0.0

3. Issues Resolved

This release contains all of the fixes included in the 7.7.1.3 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier | Link for additional Information | Resolved in
CVE-2019-2602 1073958 7.8.1.10
CVE-2018-3180 ibm10884526 7.8.1.10
CVE-2018-12547 ibm10884526 7.8.1.10
CVE-2008-5161 ibm10874368 7.8.1.9
CVE-2018-5391 ibm10872368 7.8.1.8
CVE-2018-5732 ibm10741135 7.8.1.8
CVE-2018-11776 ibm10741137 7.8.1.8
CVE-2017-17449 ibm10872364 7.8.1.8
CVE-2017-18017 ibm10872364 7.8.1.8
CVE-2018-1517 ibm10872456 7.8.1.8
CVE-2018-2783 ibm10872456 7.8.1.8
CVE-2018-12539 ibm10872456 7.8.1.8
CVE-2018-1775 ibm10872486 7.8.1.8
CVE-2017-17833 ibm10872546 7.8.1.8
CVE-2018-11784 ibm10872550 7.8.1.8
CVE-2016-10708 ibm10717661 7.8.1.6
CVE-2016-10142 ibm10717931 7.8.1.6
CVE-2017-11176 ibm10717931 7.8.1.6
CVE-2018-1433 ssg1S1012263 7.8.1.6
CVE-2018-1434 ssg1S1012263 7.8.1.6
CVE-2018-1438 ssg1S1012263 7.8.1.6
CVE-2018-1461 ssg1S1012263 7.8.1.6
CVE-2018-1462 ssg1S1012263 7.8.1.6
CVE-2018-1463 ssg1S1012263 7.8.1.6
CVE-2018-1464 ssg1S1012263 7.8.1.6
CVE-2018-1465 ssg1S1012263 7.8.1.6
CVE-2018-1466 ssg1S1012263 7.8.1.6
CVE-2016-6210 ssg1S1012276 7.8.1.6
CVE-2016-6515 ssg1S1012276 7.8.1.6
CVE-2013-4312 ssg1S1012277 7.8.1.6
CVE-2015-8374 ssg1S1012277 7.8.1.6
CVE-2015-8543 ssg1S1012277 7.8.1.6
CVE-2015-8746 ssg1S1012277 7.8.1.6
CVE-2015-8812 ssg1S1012277 7.8.1.6
CVE-2015-8844 ssg1S1012277 7.8.1.6
CVE-2015-8845 ssg1S1012277 7.8.1.6
CVE-2015-8956 ssg1S1012277 7.8.1.6
CVE-2016-2053 ssg1S1012277 7.8.1.6
CVE-2016-2069 ssg1S1012277 7.8.1.6
CVE-2016-2384 ssg1S1012277 7.8.1.6
CVE-2016-2847 ssg1S1012277 7.8.1.6
CVE-2016-3070 ssg1S1012277 7.8.1.6
CVE-2016-3156 ssg1S1012277 7.8.1.6
CVE-2016-3699 ssg1S1012277 7.8.1.6
CVE-2016-4569 ssg1S1012277 7.8.1.6
CVE-2016-4578 ssg1S1012277 7.8.1.6
CVE-2016-4581 ssg1S1012277 7.8.1.6
CVE-2016-4794 ssg1S1012277 7.8.1.6
CVE-2016-5412 ssg1S1012277 7.8.1.6
CVE-2016-5828 ssg1S1012277 7.8.1.6
CVE-2016-5829 ssg1S1012277 7.8.1.6
CVE-2016-6136 ssg1S1012277 7.8.1.6
CVE-2016-6198 ssg1S1012277 7.8.1.6
CVE-2016-6327 ssg1S1012277 7.8.1.6
CVE-2016-6480 ssg1S1012277 7.8.1.6
CVE-2016-6828 ssg1S1012277 7.8.1.6
CVE-2016-7117 ssg1S1012277 7.8.1.6
CVE-2016-10229 ssg1S1012277 7.8.1.6
CVE-2016-0634 ssg1S1012278 7.8.1.6
CVE-2017-5647 ssg1S1010892 7.8.1.3
CVE-2016-2183 ssg1S1010205 7.8.1.1
CVE-2016-5546 ssg1S1010205 7.8.1.1
CVE-2016-5547 ssg1S1010205 7.8.1.1
CVE-2016-5548 ssg1S1010205 7.8.1.1
CVE-2016-5549 ssg1S1010205 7.8.1.1
CVE-2017-5638 ssg1S1010113 7.8.1.0
CVE-2016-4461 ssg1S1010883 7.8.1.0
CVE-2016-5385 ssg1S1009581 7.8.0.2
CVE-2016-5386 ssg1S1009581 7.8.0.2
CVE-2016-5387 ssg1S1009581 7.8.0.2
CVE-2016-5388 ssg1S1009581 7.8.0.2
CVE-2016-6796 ssg1S1010114 7.8.0.2
CVE-2016-6816 ssg1S1010114 7.8.0.2
CVE-2016-6817 ssg1S1010114 7.8.0.2
CVE-2016-2177 ssg1S1010115 7.8.0.2
CVE-2016-2178 ssg1S1010115 7.8.0.2
CVE-2016-2183 ssg1S1010115 7.8.0.2
CVE-2016-6302 ssg1S1010115 7.8.0.2
CVE-2016-6304 ssg1S1010115 7.8.0.2
CVE-2016-6306 ssg1S1010115 7.8.0.2
CVE-2016-5696 ssg1S1010116 7.8.0.2
CVE-2016-2834 ssg1S1010117 7.8.0.2
CVE-2016-5285 ssg1S1010117 7.8.0.2
CVE-2016-8635 ssg1S1010117 7.8.0.2
CVE-2017-6056 ssg1S1010022 7.8.0.0

3.2 APARs Resolved

APAR | Affected Products | Severity | Description | Resolved in | Feature Tags
HU01781 All Critical An issue with workload balancing in the kernel scheduler can deprive some processes of the necessary resource to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes (show details) 7.8.1.10
HU01888 & HU01997 All Critical An issue with restore mappings, in the FlashCopy component, can cause an IO group to warmstart (show details) 7.8.1.10 FlashCopy
HU01972 All High Importance When an array is in a quiescing state, for example where a member has been deleted, IO may become pended leading to multiple warmstarts (show details) 7.8.1.10 RAID, Distributed RAID
HU00744 All Suggested Single node warmstart due to an accounting issue within the cache component (show details) 7.8.1.10 Cache
HU00921 All Suggested A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes (show details) 7.8.1.10
HU01737 All Suggested On the "Update System" screen, for "Test Only", if a valid code image is selected, in the "Run Update Test Utility" dialog, then clicking the "Test" button will initiate a system update (show details) 7.8.1.10 System Update
HU01915 & IT28654 All Suggested Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust (show details) 7.8.1.10 Encryption
HU01617 All HIPER Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery (show details) 7.8.1.9 FlashCopy
HU01913 All HIPER A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access (show details) 7.8.1.9 Distributed RAID
HU01708 All Critical A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks (show details) 7.8.1.9 RAID
HU01723 All Critical A timing window issue, around nodes leaving and re-joining clusters, can lead to hung I/O and node warmstarts (show details) 7.8.1.9 Reliability Availability Serviceability
HU01865 SVC, V7000, V5000 Critical When creating a Hyperswap relationship using addvolumecopy, or similar methods, the system should perform a synchronisation operation to copy the data of the original copy to the new copy. In some cases this synchronisation is skipped, leaving the new copy with bad data (all zeros) (show details) 7.8.1.9 HyperSwap
HU01876 All Critical Where systems are connected to controllers that have FC ports capable of acting as both initiators and targets, node warmstarts can occur when NPIV is enabled (show details) 7.8.1.9 Backend Storage
IT27460 All Critical Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits (show details) 7.8.1.9 Reliability Availability Serviceability
IT29040 All Critical Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access (show details) 7.8.1.9 RAID, Distributed RAID
HU01907 SVC High Importance An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error (show details) 7.8.1.9 Reliability Availability Serviceability
HU01485 SVC Suggested When a SV1 node is started, with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED.
Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed (show details)
7.8.1.9 System Monitoring
HU01659 SVC Suggested Node Fault LED can be seen to flash in the absence of an error condition.
Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed (show details)
7.8.1.9 System Monitoring
HU01849 All Suggested An excessive number of SSH sessions may lead to a node warmstart (show details) 7.8.1.9 System Monitoring
IT26049 All Suggested An issue with CPU scheduling may cause the GUI to respond slowly (show details) 7.8.1.9 Graphical User Interface
HU01492 SVC, V7000, V5000 HIPER All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter (show details) 7.8.1.8 Reliability Availability Serviceability
HU01726 All HIPER A slow RAID member drive in an mdisk may cause node warmstarts and the mdisk to go offline for a short time (show details) 7.8.1.8 Distributed RAID
HU01940 All HIPER Changing the use of a drive can cause a Tier 2 recovery (warmstarts on all nodes in the cluster). This occurs only if the drive change occurs within a small timing window, so the probability of the issue occurring is low (show details) 7.8.1.8 Drives
HU01572 All Critical SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access (show details) 7.8.1.8 iSCSI
HU01678 All Critical Entering an invalid parameter in the addvdiskaccess command may initiate a Tier 2 recovery (show details) 7.8.1.8 Command Line Interface
HU01735 All Critical Multiple power failures can cause a RAID array to get into a stuck state leading to offline volumes (show details) 7.8.1.8 RAID
HU01774 All Critical After a failed mkhost command for an iSCSI host, any IO from that host will cause multiple warmstarts (show details) 7.8.1.8 iSCSI
HU01799 All Critical Timing window issue can affect operation of the HyperSwap addvolumecopy command causing all nodes to warmstart (show details) 7.8.1.8 HyperSwap
HU01825 All Critical Invoking a chrcrelationship command when one of the relationships in a consistency group is running in the opposite direction to the others may cause a node warmstart followed by a Tier 2 recovery (show details) 7.8.1.8 FlashCopy
HU01847 All Critical FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts (show details) 7.8.1.8 FlashCopy
HU01899 All Critical In a HyperSwap cluster, when the primary IO group has a dead domain, nodes will repeatedly warmstart (show details) 7.8.1.8 HyperSwap
IT25850 All Critical IO performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access (show details) 7.8.1.8 Distributed RAID
HU01507 All High Importance High system latency may be experienced, until the initial synchronisation process completes, when a volume is created with two compressed copies or when a space-efficient copy is added to a volume with an existing compressed copy (show details) 7.8.1.8 Volume Mirroring
HU01579 All High Importance In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive (show details) 7.8.1.8 Quorum, Drives
HU01661 All High Importance A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synching (show details) 7.8.1.8 HyperSwap
HU01733 All High Importance Canister information, for the High Density Expansion Enclosure, may be incorrectly reported (show details) 7.8.1.8 Reliability Availability Serviceability
HU01797 All High Importance Hitachi G1500 backend controllers may exhibit higher than expected latency (show details) 7.8.1.8 Backend Storage
HU01813 All High Importance An issue with Global Mirror stream recovery handling at secondary sites can adversely impact replication performance (show details) 7.8.1.8 Global Mirror
HU01824 All High Importance Switching replication direction for HyperSwap relationships can lead to long IO timeouts (show details) 7.8.1.8 HyperSwap
HU01839 All High Importance Where a VMware host is served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, IO performance for the volumes from the other controller will be adversely affected (show details) 7.8.1.8 Hosts
HU01842 All High Importance Bursts of IO to Samsung Read-Intensive Drives can be interpreted as dropped frames against the resident slots leading to redundant drives being incorrectly failed (show details) 7.8.1.8 Drives
HU01846 SVC High Importance A silent battery discharge condition will unexpectedly take an SVC node offline, putting it into a 572 service state (show details) 7.8.1.8 Reliability Availability Serviceability
HU01902 V7000, V5000, V3700, V3500 High Importance During an upgrade, an issue with VPD migration can cause a timeout leading to a stalled upgrade (show details) 7.8.1.8 System Update
HU01276 SVC, V7000, V5000 Suggested An issue in the handling of debug data from the FC adapter can cause a node warmstart (show details) 7.8.1.8 Reliability Availability Serviceability
HU01467 All Suggested Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools (show details) 7.8.1.8 System Monitoring
HU01512 All Suggested During a DRAID Mdisk copy-back operation a miscalculation of the remaining work may cause a node warmstart (show details) 7.8.1.8 Distributed RAID
HU01523 SVC, V7000, V5000 Suggested An issue with FC adapter initialisation can lead to a node warmstart (show details) 7.8.1.8 Reliability Availability Serviceability
HU01556 All Suggested The handling of memory pool usage by Remote Copy may lead to a node warmstart (show details) 7.8.1.8 Global Mirror, Metro Mirror
HU01564 All Suggested The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to not stop (show details) 7.8.1.8 FlashCopy
HU01657 SVC, V7000, V5000 Suggested The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart (show details) 7.8.1.8 Reliability Availability Serviceability
HU01715 All Suggested Issuing a rmvolumecopy command followed by an expandvdisksize command may result in hung IO leading to a node warmstart (show details) 7.8.1.8 HyperSwap
HU01719 SVC, V7000, V5000 Suggested Node warmstart due to a parity error in the HBA driver firmware (show details) 7.8.1.8 Reliability Availability Serviceability
HU01751 All Suggested When RAID attempts to flag a strip as bad and that strip has already been flagged a node may warmstart (show details) 7.8.1.8 RAID
HU01760 All Suggested FlashCopy map progress appears to be stuck at zero percent (show details) 7.8.1.8 FlashCopy
HU01786 All Suggested An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log (show details) 7.8.1.8 Drives
HU01790 All Suggested On the "Create Volumes" page the "Accessible I/O Groups" selection may not update when the "Caching I/O group" selection is changed (show details) 7.8.1.8 Graphical User Interface
HU01793 All Suggested The Maximum final size value in the Expand Volume dialog can display an incorrect value preventing expansion (show details) 7.8.1.8 Graphical User Interface
HU02028 All Suggested An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart (show details) 7.8.1.8 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
IT19561 SVC, V7000, V5000 Suggested An issue with register clearance in the FC driver code may cause a node warmstart (show details) 7.8.1.8 Reliability Availability Serviceability
IT22591 SVC, V7000, V5000 Suggested An issue in the HBA adapter firmware may result in node warmstarts (show details) 7.8.1.8 Reliability Availability Serviceability
IT24900 V7000, V5000 Suggested Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD being assigned, delaying a return to service (show details) 7.8.1.8 Reliability Availability Serviceability
IT26836 V7000, V5000 Suggested Loading drive firmware may cause a node warmstart (show details) 7.8.1.8 Drives
HU01802 SVC, V7000, V5000 Critical USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable (show details) 7.8.1.7 Encryption
HU01785 All High Importance An issue with memory mapping may lead to multiple node warmstarts (show details) 7.8.1.7
HU01866 SVC HIPER A faulty PSU sensor in a node can fill the SEL log, causing the service processor (BMC) to disable logging. If a snap is subsequently taken from the node, a timeout will occur and it will be taken offline. It is possible for this to affect both nodes in an IO group (show details) 7.8.1.6 System Monitoring
HU01792 All HIPER When a DRAID array has multiple drive failures and the number of failed drives is greater than the number of rebuild areas in the array it is possible that the storage pool will be taken offline during the copyback phase of a rebuild. For more details refer to the following Flash (show details) 7.8.1.6 Distributed RAID
HU01524 All Critical When a system loses input power, nodes will shut down until power is restored. If a node was in the process of creating a bad block for an MDisk, at the moment it shuts down, then there is a chance that the system will hit repeated Tier 2 recoveries when it powers back up (show details) 7.8.1.6 RAID
HU01767 All Critical Reads of 4K/8K from an array can under exceptional circumstances return invalid data. For more details refer to the following Flash (show details) 7.8.1.6 RAID, Thin Provisioning
IT17919 All Critical A rare timing window issue in the handling of Remote Copy state can result in multi-node warmstarts (show details) 7.8.1.6 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01420 All High Importance An issue in DRAID can cause repeated node warmstarts in the circumstances of a degraded copyback operation to a drive (show details) 7.8.1.6 Distributed RAID
HU01476 All High Importance A remote copy relationship may suffer a loss of synchronisation when the relationship is renamed (show details) 7.8.1.6 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01623 All High Importance An issue in the handling of inter-node communications can lead to latency for Remote Copy relationships (show details) 7.8.1.6 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01630 All High Importance When a system with FlashCopy mappings is upgraded there may be multiple node warmstarts (show details) 7.8.1.6 FlashCopy
HU01697 All High Importance A timeout issue in RAID member management can lead to multiple node warmstarts (show details) 7.8.1.6 RAID
HU01771 SVC, V7000 High Importance An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline (show details) 7.8.1.6 System Monitoring
HU01446 All Suggested Where host workload overloads the back-end controller and VMware hosts are issuing ATS commands, a race condition may be triggered leading to a node warmstart (show details) 7.8.1.6 Hosts
HU01472 All Suggested A locking issue in Global Mirror can cause a warmstart on the secondary cluster (show details) 7.8.1.6 Global Mirror
HU01619 All Suggested A misreading of the PSU register can lead to failure events being logged incorrectly (show details) 7.8.1.6 System Monitoring
HU01628 All Suggested In the GUI, on the Volumes page, whilst using the filter function some volume entries may not be displayed until the page has completed loading (show details) 7.8.1.6 Graphical User Interface
HU01664 All Suggested A timing window issue during an upgrade can cause the restarting node to warmstart, stalling the upgrade (show details) 7.8.1.6 System Update
HU01698 All Suggested A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted (show details) 7.8.1.6 Compression
HU01740 All Suggested The timeout setting for key server commands may be too brief when the server is busy causing those commands to fail (show details) 7.8.1.6 Encryption
HU01747 All Suggested The incorrect detection of a cache issue can lead to a node warmstart (show details) 7.8.1.6 Cache
HU00247 All Critical A rare deadlock condition can lead to a RAID5 or RAID6 array rebuild stalling at 99% (show details) 7.8.1.5 RAID, Distributed RAID
HU01620 All Critical Configuration changes can slow critical processes and, if this coincides with cloud account statistical data being adjusted, a Tier 2 recovery may occur (show details) 7.8.1.5 Transparent Cloud Tiering
IC57642 SVC, V7000, V5000 Critical A complex combination of failure conditions in the fabric connecting nodes can result in lease expiries, possibly cluster-wide (show details) 7.8.1.5 Reliability Availability Serviceability
IT19192 All Critical An issue in the handling of GUI certificates may cause warmstarts leading to a Tier 2 recovery (show details) 7.8.1.5 Graphical User Interface, Reliability Availability Serviceability
IT23747 All High Importance For large drive sizes the DRAID rebuild process can consume significant CPU resource adversely impacting system performance (show details) 7.8.1.5 Distributed RAID
HU01655 All Suggested The algorithm used to calculate an SSD's replacement date can sometimes produce incorrect results leading to a premature End-of-Life error being reported (show details) 7.8.1.5 Drives
HU01679 All Suggested An issue in the RAID component can, very occasionally, cause a single node warmstart (show details) 7.8.1.5 RAID
HU01687 All Suggested For 'volumes by host', 'ports by host' and 'volumes by pool' pages, in the GUI, when the number of items is greater than 50 then the item name will not be displayed (show details) 7.8.1.5 Graphical User Interface
HU01704 SVC, V7000, V5000 Suggested In systems using HyperSwap a rare timing window issue can result in a node warmstart (show details) 7.8.1.5 HyperSwap
HU01724 All Suggested An IO lock handling issue between nodes can lead to a single node warmstart (show details) 7.8.1.5 RAID
HU01729 All Suggested Remote copy uses multiple streams to send data between clusters. During a stream disconnect a node unable to progress may warmstart (show details) 7.8.1.5 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01730 SVC Suggested When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter (show details) 7.8.1.5 GUI Fix Procedure
HU01731 SVC, V7000, V5000 Suggested When a node is placed into service mode it is possible for all compression cards within the node to be marked as failed (show details) 7.8.1.5 Compression
HU01763 SVC Suggested A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands, and high I/O workload on the config node (show details) 7.8.1.5 System Monitoring, Command Line Interface
IT23140 All Suggested When viewing the licensed functions GUI page, the individual calculations for SCUs, for each tier, may be wrong. However the total is correct (show details) 7.8.1.5 Graphical User Interface
HU01706 All HIPER Areas of volumes written with all-zero data may contain non-zero data. For more details refer to the following Flash (show details) 7.8.1.4
HU01490 SVC, V7000, V5000 Critical When attempting to add/remove multiple IQNs to/from a host the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across IO groups (show details) 7.8.1.3 iSCSI
HU01549 SVC, V7000, V5000 Critical During a system upgrade HyperV-clustered hosts may experience a loss of access to any iSCSI connected volumes (show details) 7.8.1.3 iSCSI, System Update
HU01625 All Critical In systems with a consistency group of HyperSwap or Metro Mirror relationships if an upgrade attempts to commit whilst a relationship is out of synch then there may be multiple warmstarts and a Tier 2 recovery (show details) 7.8.1.3 System Update, HyperSwap, Metro Mirror
HU01646 All Critical A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster (show details) 7.8.1.3 Reliability Availability Serviceability
IT23034 SVC, V7000, V5000 Critical With HyperSwap volumes and mirrored copies at a single site, using rmvolumecopy to remove a copy may result in a cluster-wide warmstart necessitating a Tier 2 recovery (show details) 7.8.1.3 HyperSwap
HU01321 SVC, V7000, V5000 High Importance Multi-node warmstarts may occur when changing the direction of a remote copy relationship whilst write IO to the (former) primary volume is still occurring (show details) 7.8.1.3 Metro Mirror, Global Mirror, Global Mirror With Change Volumes
HU01481 SVC, V7000, V5000 High Importance A failed IO can trigger HyperSwap to unexpectedly change the direction of the relationship leading to node warmstarts (show details) 7.8.1.3 HyperSwap
HU01525 All High Importance During an upgrade a resource locking issue in the compression component can cause a node to warmstart multiple times and become unavailable (show details) 7.8.1.3 Compression, System Update
HU01569 SVC High Importance When compression utilisation is high the config node may exhibit longer IO response times than non-config nodes (show details) 7.8.1.3 Compression
HU01584 SVC, V7000, V5000 High Importance An issue in array indexing can cause a RAID array to go offline repeatedly (show details) 7.8.1.3 RAID
HU01614 SVC, V7000, V5000 High Importance After a node is upgraded hosts defined as TPGS may have paths set to inactive (show details) 7.8.1.3 Hosts
HU01632 All High Importance A congested fabric causes the Fibre Channel adapter firmware to abort IO resulting in node warmstarts (show details) 7.8.1.3 Reliability Availability Serviceability
HU01636 V5000, V3700, V3500 High Importance A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller (show details) 7.8.1.3 Hosts
HU01638 All High Importance When upgrading to v7.6 or later if there is another cluster in the same zone which is at v5.1 or earlier then nodes will warmstart and the upgrade will fail (show details) 7.8.1.3 System Update
HU01645 SVC High Importance After upgrading to v7.8 a reboot of a node will initiate a continual boot cycle (show details) 7.8.1.3 System Update
HU01385 SVC, V7000, V5000 Suggested A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while IO is being forwarded to the associated copy (show details) 7.8.1.3 HyperSwap
HU01457 V7000 Suggested In a hybrid V7000 cluster, where one IO group supports 10k volumes and another does not, some operations on volumes may incorrectly be denied in the GUI (show details) 7.8.1.3 Graphical User Interface
HU01535 All Suggested An issue with Fibre Channel driver handling of command processing can result in a node warmstart (show details) 7.8.1.3
HU01563 SVC, V7000, V5000 Suggested Where an IBM SONAS host id is used it can under rare circumstances cause a warmstart (show details) 7.8.1.3
HU01582 All Suggested A compression issue in IP replication can result in a node warmstart (show details) 7.8.1.3 IP Replication
HU01624 All Suggested GUI response can become very slow in systems with a large number of compressed and uncompressed volumes (show details) 7.8.1.3 Graphical User Interface
HU01631 SVC, V7000, V5000 Suggested A memory leak in Easy Tier when pools are in Balanced mode can lead to node warmstarts (show details) 7.8.1.3 EasyTier
HU01654 SVC, V7000, V5000 Suggested There may be a node warmstart when a switch of direction in a HyperSwap relationship fails to complete properly (show details) 7.8.1.3 HyperSwap
HU01255 & HU01586 SVC, V7000, V5000 HIPER The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access (show details) 7.8.1.2 Reliability Availability Serviceability
HU01626 All High Importance Node downgrade from v7.8.x to v7.7.1 or earlier (e.g. during an aborted upgrade) may prevent the node from rejoining the cluster. Systems that have already completed upgrade to v7.8.x are not affected by this issue (show details) 7.8.1.2 System Update
HU01505 All HIPER A non-redundant drive experiencing many errors can be taken offline obstructing rebuild activity (show details) 7.8.1.1 Backend Storage, RAID
HU01570 V7000, V5000, V3700, V3500 HIPER Reseating a drive in an array may cause the mdisk to go offline (show details) 7.8.1.1 Drives, RAID
IT20627 All Critical When Samsung RI drives are used as quorum disks a drive outage can occur. Under some circumstances this can lead to a loss of access (show details) 7.8.1.1 Quorum
HU01477 V7000, V5000, V3700, V3500 High Importance Due to the way enclosure data is read it is possible for a firmware mismatch between nodes to occur during an upgrade (show details) 7.8.1.1 System Update
HU01503 All High Importance When the 3PAR host type is set to 'legacy' the round robin algorithm, used to select the mdisk port for IO submission to 3PAR controllers, does not work correctly and IO may be submitted to fewer controller ports, adversely affecting performance (show details) 7.8.1.1 Backend Storage
HU01609 & IT15343 All High Importance When the system is busy the compression component may be paged out of memory resulting in latency that can lead to warmstarts (show details) 7.8.1.1 Compression
IT19726 SVC High Importance Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for a FC command (show details) 7.8.1.1 Hosts
HU00763 V7000, V5000, V3700, V3500 Suggested A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed (show details) 7.8.1.1 Quorum
HU01332 All Suggested Performance monitor and Spectrum Control show zero CPU utilisation for compression (show details) 7.8.1.1 System Monitoring
HU01353 All Suggested CLI allows the input of carriage return characters into certain fields, after cluster creation, resulting in invalid cluster VPD and failed node adds (show details) 7.8.1.1 Command Line Interface
HU01391 & HU01581 V7000, V5000, V3700, V3500 Suggested Storwize systems may experience a warmstart due to an uncorrectable error in the SAS firmware (show details) 7.8.1.1 Drives
HU01430 V7000, V5000, V3700, V3500 Suggested Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts (show details) 7.8.1.1
HU01469 V3700, V3500 Suggested Resource exhaustion in the iSCSI component can result in a node warmstart (show details) 7.8.1.1 iSCSI
HU01471 V5000 Suggested Powering the system down using the GUI on a V5000 causes the fans to run at high speed while the system is offline but power is still applied to the enclosure (show details) 7.8.1.1
HU01484 All Suggested During a RAID array rebuild there may be node warmstarts (show details) 7.8.1.1 RAID
HU01496 SVC Suggested SVC node type SV1 reports wrong FRU part number for compression accelerator (show details) 7.8.1.1 Command Line Interface
HU01520 V3700, V3500 Suggested Where the system is being used as a secondary site for Remote Copy, the node may warmstart during an upgrade to v7.8.1 (show details) 7.8.1.1 System Update, Global Mirror, Metro Mirror
HU01531 All Suggested Spectrum Control is unable to receive notifications from SVC/Storwize. Spectrum Control may experience an out-of-memory condition (show details) 7.8.1.1 System Monitoring
HU01566 SVC Suggested After upgrading, numerous 1370 errors are seen in the Event Log (show details) 7.8.1.1 System Update
IT19973 All Suggested Call home emails may not be sent due to a failure to retry (show details) 7.8.1.1
HU01474 SVC, V7000 HIPER Host writes to a read-only secondary volume trigger IO timeout warmstarts (show details) 7.8.1.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU01479 All HIPER The handling of drive reseats can sometimes allow IO to occur before the drive has been correctly failed resulting in offline mdisks (show details) 7.8.1.0 Distributed RAID
HU01483 All HIPER The mkdistributedarray command may get stuck in the prepare state; any interaction with the volumes in that array will result in multiple warmstarts (show details) 7.8.1.0 Distributed RAID
HU01675 V7000 HIPER Memory allocation issues may cause GUI and I/O performance issues (show details) 7.8.1.0 Compression, Graphical User Interface
HU01220 All Critical Changing the type of a RC consistency group when a volume in a subordinate relationship is offline will cause a Tier 2 recovery (show details) 7.8.1.0 Global Mirror With Change Volumes, Global Mirror
HU01252 SVC Critical Where a SVC is presenting storage from an 8-node V7000, an upgrade to that V7000 can pause IO long enough for the SVC to take related Mdisks offline (show details) 7.8.1.0
HU01416 All Critical ISL configuration activity may cause a cluster-wide lease expiry (show details) 7.8.1.0 Reliability Availability Serviceability
HU00747 V7000, V5000, V3700, V3500 High Importance Node warmstarts can occur when drives become degraded (show details) 7.8.1.0 Backend Storage
HU01309 SVC High Importance For FC logins, on a node that is online for more than 200 days, if a fabric event makes a login inactive then the node may be unable to re-establish the login (show details) 7.8.1.0 Backend Storage
HU01371 SVC, V7000, V5000 High Importance A remote copy command related to HyperSwap may hang, resulting in a warmstart of the config node (show details) 7.8.1.0 HyperSwap
HU01388 SVC, V7000, V5000 High Importance Where a HyperSwap volume is the source of a FlashCopy mapping and the HyperSwap relationship is out of sync, a switch of direction will occur when the HyperSwap volume comes back online, and the FlashCopy operation may delay IO leading to node warmstarts (show details) 7.8.1.0 HyperSwap, FlashCopy
HU01394 All High Importance Node warmstarts may occur on systems which are performing Global Mirror replication, due to a low-probability timing window (show details) 7.8.1.0 Global Mirror
HU01395 All High Importance Malformed URLs sent by security scanners, whilst correctly discarded, can cause considerable exception logging on config nodes leading to performance degradation that can adversely affect remote copy (show details) 7.8.1.0 Global Mirror
HU01413 All High Importance Node warmstarts when establishing an FC partnership between a system on v7.7.1 or later with another system which in turn has a partnership to another system running v6.4.1 or earlier (show details) 7.8.1.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU01428 V7000, V5000, V3700, V3500 High Importance Scheduling issue adversely affects performance resulting in node warmstarts (show details) 7.8.1.0 Reliability Availability Serviceability
HU01480 All High Importance Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI (show details) 7.8.1.0 Graphical User Interface, Command Line Interface
IT19019 V5000 High Importance V5000 control enclosure midplane FRU replacement may fail leading to both nodes reporting a 506 error (show details) 7.8.1.0 Reliability Availability Serviceability
HU01057 SVC Suggested Slow GUI performance for some pages as the lsnodebootdrive command generates unexpected output (show details) 7.8.1.0 Graphical User Interface
HU01227 All Suggested High volumes of events may cause the email notifications to become stalled (show details) 7.8.1.0 System Monitoring
HU01404 All Suggested A node warmstart may occur when a new volume is created using fast format and foreground IO is submitted to the volume (show details) 7.8.1.0
HU01445 SVC, V7000 Suggested Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart (show details) 7.8.1.0
HU01463 All Suggested SSH Forwarding is enabled on the SSH server (show details) 7.8.1.0
HU01466 SVC, V7000, V5000 Suggested Stretched cluster and HyperSwap IO routing does not work properly due to incorrect ALUA data (show details) 7.8.1.0 HyperSwap, Hosts
HU01470 All Suggested T3 might fail during svcconfig recover -execute while running chemail if the email_machine_address contains a comma (show details) 7.8.1.0 Reliability Availability Serviceability
HU01473 All Suggested Easy Tier migrates an excessive number of cold extents to an overloaded nearline array (show details) 7.8.1.0 EasyTier
HU01487 All Suggested Small increase in read response time for source volumes with additional FlashCopy maps (show details) 7.8.1.0 FlashCopy, Global Mirror With Change Volumes
HU01497 All Suggested A drive can still be offline even though the error is showing as corrected in the Event Log (show details) 7.8.1.0 Distributed RAID
HU01498 All Suggested GUI may be exposed to CVE-2017-5638 (see Section 3.1) 7.8.1.0
IT19232 V7000, V5000, V3700, V3500 Suggested Storwize systems can report unexpected drive location errors as a result of a RAID issue (show details) 7.8.1.0
HU01410 SVC Critical An issue in the handling of FlashCopy map preparation can cause both nodes in an IO group to be put into service state (show details) 7.8.0.2 FlashCopy
HU01225 & HU01330 & HU01412 All Critical Node warmstarts due to inconsistencies arising from the way cache interacts with compression (show details) 7.8.0.2 Compression, Cache
HU01442 All Critical Upgrading to v7.7.1.5 or v7.8.0.1 with encryption enabled will result in multiple Tier 2 recoveries and a loss of access (show details) 7.8.0.2 Encryption, System Update
HU00762 All High Importance Due to an issue in the cache component nodes within an IO group are not able to form a caching-pair and are serving IO through a single node (show details) 7.8.0.2 Reliability Availability Serviceability
HU01409 All High Importance Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a fail over (show details) 7.8.0.2 Reliability Availability Serviceability
HU01426 All High Importance Systems running v7.6.1 or earlier, with compressed volumes, that upgrade to v7.8.0 or later will fail when the first node warmstarts and enters a service state (show details) 7.8.0.2 System Update
HU01432 All Suggested Node warmstart due to an accounting issue within the cache component (show details) 7.8.0.2 Cache
HU01459 V7000, V5000 Suggested The event log indicates incorrect enclosure type (show details) 7.8.0.2 System Monitoring
IT18752 All Suggested When the config node processes an lsdependentvdisks command, issued via the GUI, that has a large number of objects in its parameters, it may warmstart (show details) 7.8.0.2 Graphical User Interface
HU01382 All HIPER Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access (show details) 7.8.0.1 Distributed RAID
HU01415 V3700 Critical When a V3700 with 1GE adapters is upgraded to v7.8.0.0 iSCSI hosts will lose access to volumes (show details) 7.8.0.1 iSCSI, Hosts
HU01193 All HIPER A drive failure whilst an array rebuild is in progress can lead to both nodes in an IO group warmstarting (show details) 7.8.0.0 Distributed RAID
HU00906 All Critical When a compressed volume mirror copy is taken offline, write response times to the primary copy may reach prohibitively high levels leading to a loss of access to that volume (show details) 7.8.0.0 Compression, Volume Mirroring
HU01021 & HU01157 All Critical A fault in a backend controller can cause excessive path state changes leading to node warmstarts and offline volumes (show details) 7.8.0.0 Backend Storage
HU01267 All Critical An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an IO group warmstarting (show details) 7.8.0.0 Global Mirror With Change Volumes
HU01320 All Critical A rare timing condition can cause hung IO leading to warmstarts on both nodes in an IO group. Probability can be increased in the presence of failing drives. (show details) 7.8.0.0 Hosts
HU01340 All Critical A port translation issue between v7.5 or earlier and v7.7.0 or later requires a Tier 2 recovery to complete an upgrade (show details) 7.8.0.0 System Update
HU01392 All Critical Under certain rare conditions FC mappings not in a consistency group can be added to a special internal consistency group resulting in a Tier 2 recovery (show details) 7.8.0.0 FlashCopy
HU01455 All Critical VMware hosts with ATS enabled can see LUN disconnects to volumes when GMCV is used (show details) 7.8.0.0 Global Mirror With Change Volumes
HU01519 V7000 Critical One PSU may silently fail leading to the possibility of a dual node reboot (show details) 7.8.0.0 Reliability Availability Serviceability
HU01635 All Critical A slow memory leak in the host layer can lead to an out-of-memory condition resulting in offline volumes (show details) 7.8.0.0 Hosts
HU01783 All Critical Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple Tier 2 recoveries putting all nodes in service state with error 564 and/or 550 (show details) 7.8.0.0 Distributed RAID
HU01831 All Critical Cluster-wide warmstarts may occur when the SAN delivers a FDISC frame with an invalid WWPN (show details) 7.8.0.0 Reliability Availability Serviceability
HU01177 All High Importance A small timing window issue exists where a node warmstart or power failure can lead to repeated warmstarts of that node until a node rescue is performed (show details) 7.8.0.0 Reliability Availability Serviceability
HU01223 All High Importance The handling of a rebooted node's return to the cluster can occasionally become delayed, resulting in a stoppage of inter-cluster relationships (show details) 7.8.0.0 Metro Mirror
HU01254 SVC High Importance A fluctuation of input AC power can cause a 584 error on a node (show details) 7.8.0.0 Reliability Availability Serviceability
HU01268 V7000, V5000, V3700, V3500 High Importance Upgrade to 7.7.x fails on Storwize systems in the replication layer where a T3 recovery was performed in the past (show details) 7.8.0.0 System Update
HU01347 All High Importance During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout and node warmstarts (show details) 7.8.0.0 Thin Provisioning
HU01379 All High Importance Resource leak in the handling of Read Intensive drives leads to offline volumes (show details) 7.8.0.0
HU01381 All High Importance A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state (show details) 7.8.0.0 FlashCopy
HU01402 V7000 High Importance Nodes can power down unexpectedly as they are unable to determine from their partner whether power is available (show details) 7.8.0.0 Reliability Availability Serviceability
HU01488 V7000, V5000 High Importance SAS transport errors on an enclosure slot have the potential to affect an adjacent slot leading to double drive failures (show details) 7.8.0.0 Drives
HU01516 All High Importance When node configuration data exceeds 8K in size some user defined settings may not be stored permanently resulting in node warmstarts (show details) 7.8.0.0 Reliability Availability Serviceability
IT14917 All High Importance Node warmstarts due to a timing window in the cache component (show details) 7.8.0.0 Cache
IT16012 SVC High Importance Internal node boot drive RAID scrub process at 1am every Sunday can impact system performance (show details) 7.8.0.0
IT17564 All High Importance All nodes in an IO group may warmstart when a DRAID array experiences drive failures (show details) 7.8.0.0 Distributed RAID
HU00831 All Suggested Single node warmstart due to hung IO caused by cache deadlock (show details) 7.8.0.0 Cache
HU01098 All Suggested Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk (show details) 7.8.0.0 Backend Storage
HU01213 All Suggested The LDAP password is visible in the auditlog (show details) 7.8.0.0
HU01228 All Suggested Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries (show details) 7.8.0.0 Reliability Availability Serviceability
HU01229 V7000, V5000, V3700, V3500 Suggested The DMP for a 3105 event does not identify the correct problem canister (show details) 7.8.0.0
HU01230 All Suggested A host aborting an outstanding logout command can lead to a single node warmstart (show details) 7.8.0.0
HU01247 All Suggested When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result (show details) 7.8.0.0 FlashCopy
HU01264 All Suggested Node warmstart due to an issue in the compression optimisation process (show details) 7.8.0.0 Compression
HU01269 All Suggested A rare timing conflict between two processes may lead to a node warmstart (show details) 7.8.0.0
HU01304 All Suggested SSH authentication fails if multiple SSH keys are configured on the client (show details) 7.8.0.0
HU01323 All Suggested Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart (show details) 7.8.0.0 Volume Mirroring
HU01370 All Suggested lsfabric command may not list all logins when it is used with parameters (show details) 7.8.0.0 Command Line Interface
HU01374 All Suggested Where an issue with Global Mirror causes excessive IO delay, a timeout may not function, resulting in a node warmstart (show details) 7.8.0.0 Global Mirror
HU01399 All Suggested For certain config nodes the CLI Help commands may not work (show details) 7.8.0.0 Command Line Interface
IT17302 V5000, V3700, V3500 Suggested Unexpected 45034 1042 entries in the Event Log (show details) 7.8.0.0 System Monitoring
IT18086 All Suggested When a vdisk is moved between IO groups a node may warmstart (show details) 7.8.0.0

4. Useful Links

Description | Link
Support Websites
Update Matrices, including detailed build version
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Supported Drive Types and Firmware Levels
SAN Volume Controller and Storwize Family Inter-cluster Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning