Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 8.2.1 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.2.1.0 and 8.2.1.8. This document will be updated with additional information whenever a PTF is released.

Note: The 8.2.1.7 release is only provided, pre-installed, on new systems and will not be available on IBM Fix Central.

This document was last updated on 22 November 2019.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new features have been introduced in the 8.2.1 release:

The following new feature has been introduced in the 8.2.1.3 release:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.
Each entry below gives the details of the restriction, followed by the release in which it was introduced.

Customers with more than 5 x non-NVMe over FC hosts (i.e. FC SCSI or iSCSI) in an IO group must not attach any NVMe over FC hosts to that IO group.
Customers with more than 20 x non-NVMe over FC hosts (i.e. FC SCSI or iSCSI) in a cluster must not attach any NVMe over FC hosts to that cluster.

For new clusters without any hosts please refer to the appropriate V8.2.1 Configuration Limits and Restrictions pages for details of the maximum number of hosts that can be attached.

These limits will not be policed by the Spectrum Virtualize software. Any configuration that exceeds these limits will experience a significant adverse performance impact (see the sketch after this entry for one way to review current host counts).

These limits will be lifted in a future major release.

8.2.1.0
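
As an illustration only (these are standard Spectrum Virtualize CLI views; the output columns described here are assumptions of this note rather than statements from the restriction above), the existing host count can be reviewed before attaching any NVMe over FC hosts:

  lsiogrp     # the host_count column shows how many hosts are mapped to each IO group
  lshost      # lists the existing host objects; the detailed view (lshost <host_id>) shows each host's ports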

Customers using Transparent Cloud Tiering should not upgrade to v8.2.1.0.

This is a restriction that may be lifted in a future PTF.

8.2.1.0

Spectrum Virtualize for Public Cloud v8.2.1 is not available.

8.2.1.0

Customers using iSCSI to virtualize backend controllers should not upgrade to v8.2.0 or later.

This is a restriction that may be lifted in a future PTF.

8.2.0.0

Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw to enable access to the extra memory above 64GB.

Under some circumstances it may also be necessary to remove and re-add each node in turn (see the sketch after this entry).

8.1.0.0
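
As a minimal sketch, assuming the standard Spectrum Virtualize CLI (the exact sequence may vary by platform and should be confirmed with IBM Support if in doubt), the additional memory can be enabled on each node in turn after the upgrade:

  lsnodevpd <node_id>     # confirm the amount of installed memory reported for the node
  chnodehw <node_id>      # re-validate the node hardware so that the memory above 64GB becomes usable

If the additional memory is still not recognised, remove and re-add each node in turn (rmnode followed by addnode), as noted above, allowing the system to resynchronise before moving on to the next node.
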
If an update stalls or fails then contact IBM Support for further assistance.

n/a
The following restrictions were valid but have now been lifted:

Customers with direct attached external storage controllers cannot upgrade to v8.2.1.6.

This has been resolved, under APAR HU02077, in v8.2.1.8.

Please note that v8.2.1.5, or earlier, is not exposed to this restriction.

8.2.1.6

Systems containing FlashCore Modules (FCMs), running the v1.1.0 firmware level, are currently unable to perform software updates.

If the system is currently running 8.2.1.4, or later, then please upgrade the FCM firmware to v1.2.7 before upgrading the system firmware (see the sketch after this entry).

If the system is running 8.2.1.3, or earlier, the restriction is temporary and will be lifted shortly.

8.2.1.6
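
As an illustrative sketch only (the drive software update command and its parameters are assumptions based on the standard Spectrum Virtualize CLI and are not quoted from this note), the FCM firmware can be checked and updated before starting the system update:

  lsdrive <drive_id>      # the detailed view reports the drive firmware level; repeat for each FCM
  applydrivesoftware -file <fcm_firmware_package> -type firmware -drive <drive_id>     # apply the v1.2.7 FCM firmware to a drive

Once every FCM reports the v1.2.7 firmware level, the system firmware update can be started as normal.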

With Gemalto SafeNet KeySecure, the chkeyserverkeysecure -username <username> command is used to set the KeySecure username credential. If this is changed to a username that the key server does not recognise as the valid username associated with the Spectrum Virtualize encryption key, then a subsequent re-key operation can cause key servers to appear offline.

This is an issue that will be resolved in a future PTF (see the sketch after this entry).

8.2.1.0
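
A minimal sketch, assuming the standard Spectrum Virtualize CLI, of how to check the key server state and reset the credential before attempting a re-key (lskeyserver is an assumption of this note; only chkeyserverkeysecure is quoted from the text above):

  lskeyserver                                       # all configured key servers should report as online
  chkeyserverkeysecure -username <valid_username>   # set the credential back to the username the key server associates with the Spectrum Virtualize encryption key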

3. Issues Resolved

This release contains all of the fixes included in the 8.1.3.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier   Link for additional Information   Resolved in
CVE-2019-2602 1073958 8.2.1.6
CVE-2018-3180 ibm10884526 8.2.1.4
CVE-2018-12547 ibm10884526 8.2.1.4
CVE-2008-5161 ibm10874368 8.2.1.2
CVE-2018-5391 ibm10872368 8.2.1.2
CVE-2018-11776 ibm10741137 8.2.1.0
CVE-2017-17833 ibm10872546 8.2.1.0
CVE-2018-11784 ibm10872550 8.2.1.0
CVE-2018-5732 ibm10741135 8.2.1.0
CVE-2018-1517 ibm10872456 8.2.1.0
CVE-2018-2783 ibm10872456 8.2.1.0
CVE-2018-12539 ibm10872456 8.2.1.0
CVE-2018-1775 ibm10872486 8.2.1.0

3.2 APARs Resolved

APAR   Affected Products   Severity   Description   Resolved in   Feature Tags
HU02064 SVC, V7000 HIPER An issue in the firmware for compression accelerator cards can cause offline compressed volumes. For more details refer to the following Flash  (show details) 8.2.1.8 Compression
HU01967 All Critical When IO, in remote copy relationships, experiences delays (1720 and/or 1920 errors are logged) an IO group may warmstart (show details) 8.2.1.8 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU02036 All Critical It is possible for commands, that alter pool-level extent reservations (i.e. migratevdisk or rmmdisk), to conflict with an ongoing Easy Tier migration, resulting in a Tier 2 recovery (show details) 8.2.1.8 EasyTier
HU02044 All Critical Multiple DRAID arrays can, where one is performing a rebuild, be exposed to a RAID deadlock condition resulting in multiple node warmstarts and a loss of access to data (show details) 8.2.1.8 Distributed RAID
HU02050 FS9100, V5000, V7000 Critical Compression hardware can have an issue processing certain types of data resulting in node reboots and marking the compression hardware as faulty even though it is serviceable (show details) 8.2.1.8 Compression
HU02063 All Critical HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB (show details) 8.2.1.8 HyperSwap
HU02077 All Critical Upgrading to 8.2.1.6 or 8.3.0.0 can cause a loss of access to direct-attached Fibre Channel controllers (show details) 8.2.1.8 Backend Storage
HU02083 All Critical During DRAID rebuilds, an issue in the handling of memory buffers can lead to multiple node warmstarts and a loss of access to data (show details) 8.2.1.8 Distributed RAID
HU02089 All Critical Due to changes to quorum management, during an upgrade to v8.2.x, or later, there may be multiple warmstarts, with the possibility of a loss of access to data (show details) 8.2.1.8 System Update
IT26257 All Critical Starting a relationship, when the remote volume is offline, may result in a T2 recovery (show details) 8.2.1.8 HyperSwap
IT30595 All Critical A resource shortage in the RAID component can cause mdisks to be taken offline (show details) 8.2.1.8 RAID
HU01836 All High Importance When an auxiliary volume is moved an issue with pausing the master volume can lead to node warmstarts (show details) 8.2.1.8 HyperSwap
HU01942 FS9100, V7000, V5000 High Importance NVMe drive ports can go offline, for a very short time, when an upgrade of that drive's firmware commences (show details) 8.2.1.8 Drives
HU02049 All High Importance GUI session handling has an issue that can generate many exceptions, adversely impacting GUI performance (show details) 8.2.1.8 Graphical User Interface
HU02078 SVC High Importance Heavily unbalanced workloads, in stretched-cluster configurations, can bias inter-node traffic through one port, adversely affecting performance (show details) 8.2.1.8 Performance
HU01880 All Suggested When a write, to a secondary volume, becomes stalled, a node at the primary site may warmstart (show details) 8.2.1.8 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01936 All Suggested When shrinking a volume, that has host mappings, there may be recurring node warmstarts (show details) 8.2.1.8 Cache
HU02021 All Suggested Disabling garbage collection may cause a node warmstart (show details) 8.2.1.8 Data Reduction Pools
HU02085 All Suggested Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios (show details) 8.2.1.8 Global Mirror
HU02093 V5000 Suggested A locking issue in the inter-node communications, of V5030 systems, can lead to a deadlock condition, resulting in a node warmstart (show details) 8.2.1.8 Reliability Availability Serviceability
IT30448 All Suggested If an IP Quorum app is killed, during the commit phase of a code upgrade, then that offline IP Quorum device cannot be removed, post upgrade (show details) 8.2.1.8 IP Quorum
IT30449 V5000 Suggested Attempting to activate USB encryption on a new V5030E will fail with a CMMVCU6054E error (show details) 8.2.1.8 Encryption
HU02014 SVC HIPER After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an IO group to experience this issue (show details) 8.2.1.6 Reliability Availability Serviceability
HU01888 & HU01997 All Critical An issue with restore mappings, in the FlashCopy component, can cause an IO group to warmstart (show details) 8.2.1.6 FlashCopy
HU01933 All Critical Under rare circumstances the DRP deduplication rehoming process can become truncated. Subsequent detection of inconsistent metadata can lead to offline Data Reduction Pools (show details) 8.2.1.6 Data Reduction Pools, Deduplication
HU01985 All Critical As a consequence of a DRP recovery, bad metadata may be created. When the region of disk associated with the bad metadata is accessed there may be IO group warmstarts (show details) 8.2.1.6 Data Reduction Pools
HU01989 All Critical For large drives, bitmap scanning, during a rebuild, can timeout resulting in multiple node warmstarts, possibly leading to offline IO groups (show details) 8.2.1.6 Distributed RAID
HU01998 All Critical All SCSI command types can set volumes as busy resulting in IO timeouts, and multiple node warmstarts, with the possibility of an offline IO group (show details) 8.2.1.6 Hosts
HU02016 SVC Critical A memory leak in the component that handles thin-provisioned mdisks can lead to an adverse performance impact with the possibility of offline mdisks (show details) 8.2.1.6 Backend Storage
HU02027 All Critical Fabric congestion can cause internal resource constraints, in 16Gb HBAs, leading to lease expiries (show details) 8.2.1.6 Reliability Availability Serviceability
HU02043 All Critical Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565 (show details) 8.2.1.6 Support Data Collection
HU02045 All Critical When a node is removed from the cluster, using CLI, it may still be shown as online in the GUI. If an attempt is made to shut down this node, from the GUI, whilst it appears to be online, then the whole cluster will shut down (show details) 8.2.1.6 Graphical User Interface
HU01890 All High Importance FlashCopy mappings, from master volume to primary change volume, may become stalled when a T2 recovery occurs whilst the mappings are in a 'copying' state (show details) 8.2.1.6 Global Mirror with Change Volumes
HU02037 All High Importance A FlashCopy consistency group, with a mix of mappings in different states, cannot be stopped (show details) 8.2.1.6 FlashCopy
HU02055 All High Importance Creating a FlashCopy snapshot, in the GUI, does not set the same preferred node for both source and target volumes. This may adversely impact performance (show details) 8.2.1.6 FlashCopy
HU02072 All High Importance An issue in the handling of email transmission can write a large file to the node boot drive. If this causes the boot drive to become full, the node will go offline with error 565 (show details) 8.2.1.6 System Monitoring
HU01843 All Suggested A node hardware issue can cause a CLI command to timeout resulting in a node warmstart (show details) 8.2.1.6 Command Line Interface
HU01892 All Suggested LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported (show details) 8.2.1.6 Backend Storage
HU01974 All Suggested With all Remote Support Assistant connections closed, the GUI may show that a connection is still in progress (show details) 8.2.1.6 System Monitoring
HU01978 All Suggested Unable to create HyperSwap volumes. The mkvolume command fails with CMMVC7050E error (show details) 8.2.1.6 HyperSwap
HU01979 All Suggested The figure for used_virtualization, in the output of a lslicense command, may be unexpectedly large (show details) 8.2.1.6 Command Line Interface
HU01982 All Suggested In an environment, with multiple IP Quorum servers, if the quorum component encounters a duplicate UID then a node may warmstart (show details) 8.2.1.6 IP Quorum
HU01983 All Suggested Improve debug data capture to assist in determining the reason for a Data Reduction Pool to be taken offline (show details) 8.2.1.6 Data Reduction Pools
HU01986 All Suggested An accounting issue in the FlashCopy component may cause node warmstarts (show details) 8.2.1.6 FlashCopy
HU01991 All Suggested An issue in the handling of extent allocation, in the DRP component, can cause a node warmstart (show details) 8.2.1.6 Data Reduction Pools
HU02020 FS9100, V7000, V5000 Suggested An internal hardware bus, running at the incorrect speed, may give rise to spurious DIMM over-temperature errors (show details) 8.2.1.6 Reliability Availability Serviceability
HU02029 All Suggested An issue with the SSMTP process may result in failed callhome, inventory reporting and user notifications. A testemail command will fail with a CMMVC9051E error (show details) 8.2.1.6 System Monitoring
HU02039 All Suggested An issue in the management steps of DRP recovery may lead to a node warmstart (show details) 8.2.1.6 Data Reduction Pools
HU02067 All Suggested If multiple recipients are specified, for callhome emails, then no callhome emails will be sent (show details) 8.2.1.6 System Monitoring
HU02007 All HIPER During volume migration an issue, in the handling of old to new extents transfer, can lead to cluster-wide warmstarts (show details) 8.2.1.5 Storage Virtualisation
HU02009 All Critical Systems which are using DRP, with the maximum possible extent size of 8GB, and which experience a very specific IO workload, may experience an issue due to garbage collection. This can cause repeated node warmstarts and loss of access to data (show details) 8.2.1.5 Data Reduction Pools
HU02011 All High Importance When a node warmstart occurs on a system using Data Reduction Pools, there is a small possibility that the node will not automatically return online. If the partner node is also offline, this can cause temporary loss of access to data (show details) 8.2.1.5 Data Reduction Pools
HU02012 All High Importance Under certain IO workloads the garbage collection process can adversely impact volume write response times (show details) 8.2.1.5 Data Reduction Pools
HU01918 All HIPER Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data (show details) 8.2.1.4 Data Reduction Pools
HU02008 All HIPER When a DRAID rebuild occurs, occasionally a RAID deadlock condition can be triggered by a particular type of IO workload. This can lead to repeated node warmstarts and a loss of access to data (show details) 8.2.1.4 Distributed RAID
HU01865 All Critical When creating a HyperSwap relationship, using addvolumecopy (or similar methods), the system should perform a synchronisation operation to copy the data from the original copy to the new copy. In some rare cases this synchronisation is skipped, leaving the new copy with bad data (all zeros) (show details) 8.2.1.4 HyperSwap
HU01887 All Critical In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts (show details) 8.2.1.4 Command Line Interface, Host Cluster
HU01900 All Critical Executing a command, that can result in a shared mapping being created or destroyed, for an individual host, in a host cluster, without that command applying to all hosts in the host cluster, may lead to multiple node warmstarts with the possibility of a T2 recovery (show details) 8.2.1.4 Host Cluster
HU01910 All Critical When FlashCopy mappings are created, with a grain size of 64KB, it is possible for an overflow condition in the bitmap to occur. This can result in multiple node warmstarts with a possible loss of access to data (show details) 8.2.1.4 FlashCopy
HU01928 All Critical When two IOs attempt to access the same address, the state of the data may be incorrectly set to invalid causing offline volumes and, possibly, offline pools (show details) 8.2.1.4 Data Reduction Pools
HU01987 SVC Critical During SAN fabric power maintenance a cluster may breach resource limits, on the remaining node to node links, resulting in system-wide lease expiry (show details) 8.2.1.4 Reliability Availability Serviceability
HU02000 All Critical Data Reduction Pools may go offline due to a timing issue in metadata handling (show details) 8.2.1.4 Data Reduction Pools
HU02013 All Critical A race condition, between the extent invalidation and destruction, in the garbage collection process may cause a node warmstart with the possibility of offline volumes (show details) 8.2.1.4 Data Reduction Pools
HU02025 All Critical An issue with metadata handling, where a pool has been taken offline, may lead to an out of space condition in that pool preventing its return to operation (show details) 8.2.1.4 Data Reduction Pools
HU01886 All High Importance The Unmap function can leave volume extents, that have not been freed, preventing managed disk and pool removal (show details) 8.2.1.4 SCSI Unmap
HU01902 V7000, V5000 High Importance During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade (show details) 8.2.1.4 System Update
HU01925 FS9100 High Importance Systems will incorrectly report offline and unresponsive NVMe drives after an IO group outage. These errors will fail to auto-fix and must be manually marked as fixed (show details) 8.2.1.4 System Monitoring
HU01930 FS9100 High Importance Certain types of FlashCore Module (FCM) failure may not result in a call home, delaying the shipment of a replacement (show details) 8.2.1.4 Drives
HU01937 FS9100, V7000 High Importance DRAID copy-back operation can overload NVMe drives resulting in high IO latency (show details) 8.2.1.4 Distributed RAID, Drives
HU01939 FS9100, V7000 High Importance After replacing a canister, and attempting to bring the new canister into the cluster, it may remain offline (show details) 8.2.1.4 Reliability Availability Serviceability
HU01941 All High Importance After upgrading the system to v8.2, or later, when expanding a mirrored volume, the formatting of additional space may become stalled (show details) 8.2.1.4 Volume Mirroring
HU01944 All High Importance Proactive host failover not waiting for 25 seconds before allowing nodes to go offline during upgrades or maintenance (show details) 8.2.1.4 Reliability Availability Serviceability
HU01945 All High Importance Systems with Flash Core Modules are unable to upgrade the firmware for those drives (show details) 8.2.1.4 Drives
HU01971 FS9100, V7000 High Importance Spurious DIMM over-temperature errors may cause a node to go offline with node error 528 (show details) 8.2.1.4 Reliability Availability Serviceability
HU01972 All High Importance When an array is in a quiescing state, for example where a member has been deleted, IO may become pended leading to multiple warmstarts (show details) 8.2.1.4 RAID, Distributed RAID
HU00744 All Suggested Single node warmstart due to an accounting issue within the cache component (show details) 8.2.1.4 Cache
HU01485 SVC Suggested When a SV1 node is started, with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED.
Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed (show details)
8.2.1.4 System Monitoring
HU01659 SVC Suggested Node Fault LED can be seen to flash in the absence of an error condition.
Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed (show details)
8.2.1.4 System Monitoring
HU01857 All Suggested Improved validation of user input in GUI (show details) 8.2.1.4 Graphical User Interface
HU01860 All Suggested During garbage collection the flushing of extents may become stuck leading to a timeout and a single node warmstart (show details) 8.2.1.4 Data Reduction Pools
HU01869 All Suggested Volume copy deletion, in a DRP, triggered by rmvdiskcopy, rmvolumecopy or addvdiskcopy -autodelete (or similar) may become stalled with the copy being left in "deleting" status (show details) 8.2.1.4 Data Reduction Pools
HU01912 All Suggested Systems with iSCSI-attached controllers may see node warmstarts due to I/O request timeouts (show details) 8.2.1.4 Backend Storage
HU01915 & IT28654 All Suggested Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust (show details) 8.2.1.4 Encryption
HU01916 All Suggested The GUI Dashboard and the CLI lssystem command report physical capacity incorrectly (show details) 8.2.1.4 Graphical User Interface, Command Line Interface
HU01926 SVC, V7000 Suggested When a node, with 32GB of RAM, is upgraded to v8.2.1 it may experience a warmstart resulting in a failed upgrade (show details) 8.2.1.4 System Update
HU01929 FS9100, V7000 Suggested Drive fault type 3 (error code 1686) may be seen in the Event Log for empty slots (show details) 8.2.1.4 System Monitoring
HU01959 All Suggested A timing window issue in the Thin Provisioning component can cause a node warmstart (show details) 8.2.1.4 FlashCopy, Thin Provisioning
HU01961 V7000, V5000 Suggested A hardware issue can provoke the system to repeatedly try to collect a statesave, from the enclosure management firmware, causing 1048 errors in the Event Log (show details) 8.2.1.4 System Monitoring
HU01962 All Suggested When Call Home servers return an invalid message it can be incorrectly reported as an error 3201 in the Event Log (show details) 8.2.1.4 System Monitoring
HU01976 All Suggested A new mdisk array may not be encrypted even though encryption is enabled on the system (show details) 8.2.1.4 Encryption
HU02001 All Suggested During a system upgrade an issue in callhome may cause a node warmstart stalling the upgrade (show details) 8.2.1.4 System Monitoring
HU02002 All Suggested On busy systems, diagnostic data collection may not complete correctly producing livedumps with missing pages (show details) 8.2.1.4 Support Data Collection
HU02019 All Suggested When the master and auxiliary volumes, in a relationship, have the same name it is not possible, in the GUI, to determine which is master or auxiliary (show details) 8.2.1.4 Graphical User Interface
IT28433 All Suggested Timing window issue in the DRP rehoming component can cause a single node warmstart (show details) 8.2.1.4 Data Reduction Pools
IT28728 All Suggested Email alerts will not work where the mail server does not allow unqualified client host names (show details) 8.2.1.4 System Monitoring
HU01932 All Critical When a rmvdisk command initiates a DRP rehoming process any IO to the removed volume may cause multiple warmstarts leading to a loss of access (show details) 8.2.1.2 Deduplication
HU01920 All Critical An issue in the garbage collection process can cause node warmstarts and offline pools (show details) 8.2.1.1 Data Reduction Pools
HU01492 & HU02024 SVC, V7000, V5000 HIPER All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports used for inter-node communication are on the same FC adapter (show details) 8.2.1.0 Reliability Availability Serviceability
HU01617 All HIPER Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery (show details) 8.2.1.0 FlashCopy
HU01828 All HIPER Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue (show details) 8.2.1.0 Deduplication
HU01851 All HIPER When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools (show details) 8.2.1.0 Deduplication
HU01873 FS9100, V7000, V5000 HIPER Deleting a volume, in a Data Reduction Pool, while vdisk protection is enabled and when the vdisk was not explicitly unmapped, before deletion, may result in simultaneous node warmstarts. For more details refer to the following Flash  (show details) 8.2.1.0 Data Reduction Pools
HU01906 FS9100 HIPER Low-level hardware errors may not be recovered correctly, causing a canister to reboot. If multiple canisters reboot, this may result in loss of access to data (show details) 8.2.1.0 Reliability Availability Serviceability
HU01913 All HIPER A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access (show details) 8.2.1.0 Distributed RAID
HU01743 All Critical Where hosts are directly attached a mishandling of the login process, by the fabric controller, may result in dual node warmstarts (show details) 8.2.1.0 Hosts
HU01758 All Critical After an unexpected power loss, all nodes, in a cluster, may warmstart repeatedly, necessitating a Tier 3 recovery (show details) 8.2.1.0 Reliability Availability Serviceability
HU01799 All Critical Timing window issue can affect operation of the HyperSwap addvolumecopy command causing all nodes to warmstart (show details) 8.2.1.0 HyperSwap
HU01825 All Critical Invoking a chrcrelationship command when one of the relationships, in a consistency group, is running in the opposite direction to the others may cause a node warmstart followed by a T2 recovery (show details) 8.2.1.0 FlashCopy
HU01833 All Critical If both nodes, in an IO group, start up together a timing window issue may occur, that would prevent them running garbage collection, leading to a related DRP running out of space (show details) 8.2.1.0 Data Reduction Pools
HU01845 All Critical If the execution of a rmvdisk -force command, for the FlashCopy target volume in a GMCV relationship, coincides with the start of a GMCV cycle all nodes may warmstart (show details) 8.2.1.0 Global Mirror with Change Volumes
HU01847 All Critical FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts (show details) 8.2.1.0 FlashCopy
HU01850 All Critical When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted the pool may go offline temporarily (show details) 8.2.1.0 Data Reduction Pools, Deduplication
HU01855 All Critical Clusters using Data Reduction Pools can experience multiple warmstarts on all nodes putting them in a service state (show details) 8.2.1.0 Data Reduction Pools
HU01862 All Critical When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access (show details) 8.2.1.0 Data Reduction Pools
HU01876 All Critical Where systems are connected to controllers that have FC ports capable of acting as both initiators and targets, and NPIV is enabled, node warmstarts can occur (show details) 8.2.1.0 Backend Storage
HU01878 All Critical During an upgrade, from v7.8.1 or earlier to v8.1.3 or later, if a mdisk goes offline then, at completion, all volumes may go offline (show details) 8.2.1.0 System Update
HU01885 All Critical As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further IO leading to node warmstarts (show details) 8.2.1.0 Data Reduction Pools
HU01901 V7000 Critical Enclosure management firmware, in an expansion enclosure, will reset a canister after a certain number of discovery requests have been received, from the controller, for that canister. It is possible simultaneous resets may occur in adjacent canisters causing a temporary loss of access to data (show details) 8.2.1.0 Reliability Availability Serviceability
HU01957 All Critical Due to an issue in Data Reduction Pools, when the system attempts an upgrade, there may be node warmstarts (show details) 8.2.1.0 Data Reduction Pools, System Update
HU01965 All Critical A timing window issue in the deduplication component can lead to IO timeouts, and a node warmstart, with the possibility of an offline mdisk group (show details) 8.2.1.0 Deduplication
HU02042 All Critical An issue in the handling of metadata, after a DRP recovery operation, can lead to repeated node warmstarts, putting an IO group into a service state (show details) 8.2.1.0 Data Reduction Pools
IT25850 All Critical IO performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access (show details) 8.2.1.0 Distributed RAID
IT27460 All Critical Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits (show details) 8.2.1.0 Reliability Availability Serviceability
IT29040 All Critical Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access (show details) 8.2.1.0 RAID, Distributed RAID
IT29853 V5000 Critical After upgrading to v8.1.1, or later, V5000 Gen 2 systems, with Gen 1 expansion enclosures, may experience multiple node warmstarts leading to a loss of access (show details) 8.2.1.0 System Update
HU01507 All High Importance Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when space-efficient copy is added to a volume with an existing compressed copy (show details) 8.2.1.0 Volume Mirroring
HU01661 All High Importance A cache-protection mechanism flag setting can become stuck, leading to repeated stops of consistency group synching (show details) 8.2.1.0 HyperSwap
HU01733 All High Importance Canister information, for the High Density Expansion Enclosure, may be incorrectly reported (show details) 8.2.1.0 Reliability Availability Serviceability
HU01761 All High Importance Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts (show details) 8.2.1.0 Backend Storage
HU01797 All High Importance Hitachi G1500 backend controllers may exhibit higher than expected latency (show details) 8.2.1.0 Backend Storage
HU01810 All High Importance Deleting volumes, or using FlashCopy/Global Mirror with Change Volumes, in a Data Reduction Pool, may impact the performance of other volumes in the pool (show details) 8.2.1.0 Data Reduction Pools
HU01837 All High Importance In systems, where a VVols metadata volume has been created, an upgrade to v8.1.3 or later will cause a node warmstart, stalling the upgrade (show details) 8.2.1.0 VVols
HU01839 All High Importance Where a VMware host is served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, IO performance for the volumes from the other controller will be adversely affected (show details) 8.2.1.0 Hosts
HU01842 All High Importance Bursts of IO to Samsung Read-Intensive Drives can be interpreted as dropped frames, against the resident slots, leading to redundant drives being incorrectly failed (show details) 8.2.1.0 Drives
HU01846 SVC High Importance Silent battery discharge condition will, unexpectedly, take a SVC node offline, putting it into a 572 service state (show details) 8.2.1.0 Reliability Availability Serviceability
HU01852 All High Importance The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available (show details) 8.2.1.0 Data Reduction Pools
HU01858 All High Importance Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline (show details) 8.2.1.0 Data Reduction Pools
HU01881 FS9100 High Importance An issue within the compression card in FS9100 systems can result in the card being incorrectly flagged as failed leading to warmstarts (show details) 8.2.1.0 Compression
HU01883 All High Importance Config node processes may consume all available memory, leading to node warmstarts. This can be caused, for example, by large numbers of concurrent SSH connections being opened (show details) 8.2.1.0 Reliability Availability Serviceability
HU01907 SVC High Importance An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error (show details) 8.2.1.0 Reliability Availability Serviceability
HU01934 FS9100 High Importance An issue in the handling of faulty canister components can lead to multiple node warmstarts (show details) 8.2.1.0 Reliability Availability Serviceability
HU00921 All Suggested A node warmstart may occur when a MDisk state change gives rise to duplicate discovery processes (show details) 8.2.1.0
HU01276 All Suggested An issue in the handling of debug data from the FC adapter can cause a node warmstart (show details) 8.2.1.0 Reliability Availability Serviceability
HU01523 All Suggested An issue with FC adapter initialisation can lead to a node warmstart (show details) 8.2.1.0 Reliability Availability Serviceability
HU01564 All Suggested The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to fail to stop (show details) 8.2.1.0 FlashCopy
HU01571 All Suggested An upgrade can become stalled due to a node warmstart (show details) 8.2.1.0 System Update
HU01657 SVC, V7000, V5000 Suggested The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart (show details) 8.2.1.0 Reliability Availability Serviceability
HU01667 All Suggested A timing-window issue, in the remote copy component, may cause a node warmstart (show details) 8.2.1.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01719 All Suggested Node warmstart due to a parity error in the HBA driver firmware (show details) 8.2.1.0 Reliability Availability Serviceability
HU01737 All Suggested On the "Update System" screen, for "Test Only", if a valid code image is selected, in the "Run Update Test Utility" dialog, then clicking the "Test" button will initiate a system update (show details) 8.2.1.0 System Update
HU01751 All Suggested When RAID attempts to flag a strip as bad, and that strip has already been flagged, a node may warmstart (show details) 8.2.1.0 RAID
HU01760 All Suggested FlashCopy map progress appears to be stuck at zero percent (show details) 8.2.1.0 FlashCopy
HU01765 All Suggested Node warmstart may occur when there is a delay to IO at the secondary site (show details) 8.2.1.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01784 All Suggested If a cluster using IP quorum experiences a site outage, the IP quorum device may become invalid. Restarting the quorum application will resolve the issue (show details) 8.2.1.0 HyperSwap, Quorum
HU01786 All Suggested An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log (show details) 8.2.1.0 Drives
HU01791 All Suggested Using the chhost command will remove stored CHAP secrets (show details) 8.2.1.0 iSCSI
HU01807 All Suggested The lsfabric command may show incorrect local node id and local node name for some Fibre Channel logins (show details) 8.2.1.0 Command Line Interface
HU01811 All Suggested DRAID rebuilds, for large (>10TB) drives, may require lengthy metadata processing leading to a node warmstart (show details) 8.2.1.0 Distributed RAID
HU01815 All Suggested In Data Reduction Pools, volume size is limited to 96TB (show details) 8.2.1.0 Data Reduction Pools
HU01817 All Suggested Volumes used for VVols metadata or cloud backup, that are associated with a FlashCopy mapping, cannot be included in any further FlashCopy mappings (show details) 8.2.1.0 FlashCopy
HU01821 SVC Suggested An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies (show details) 8.2.1.0 System Update, Data Reduction Pools
HU01832 All Suggested Creation and distribution of the config file may cause an out-of-memory condition, leading to a node warmstart (show details) 8.2.1.0
HU01849 All Suggested An excessive number of SSH sessions may lead to a node warmstart (show details) 8.2.1.0 System Monitoring
HU01856 All Suggested A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart (show details) 8.2.1.0 Data Reduction Pools
HU01863 All Suggested In rare circumstances, a drive replacement may result in a "ghost drive" (i.e. a drive with the same ID as the replaced drive stuck in a permanently offline state) (show details) 8.2.1.0 Drives
HU01871 All Suggested An issue with bitmap synchronisation can lead to a node warmstart (show details) 8.2.1.0 Data Reduction Pools
HU01879 All Suggested Latency induced by DWDM inter-site links may result in a node warmstart (show details) 8.2.1.0
HU01893 SVC, V7000, FS9100 Suggested Excessive reporting frequency of NVMe drive diagnostics generates large numbers of callhome events (show details) 8.2.1.0 Drives
HU01895 All Suggested Where a banner has been created, without a new line at the end, any subsequent T4 recovery will fail (show details) 8.2.1.0 Distributed RAID
HU01981 All Suggested Although an issue, in the HBA firmware, is handled correctly it can still cause a node warmstart (show details) 8.2.1.0 Reliability Availability Serviceability
HU02028 All Suggested An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart (show details) 8.2.1.0 Metro Mirror, Global Mirror, Global Mirror with Change Volumes
IT19561 All Suggested An issue with register clearance in the FC driver code may cause a node warmstart (show details) 8.2.1.0 Reliability Availability Serviceability
IT25457 All Suggested Attempting to remove a copy of a volume, which has at least one image mode copy and at least one thin/compressed copy, in a Data Reduction Pool will always fail with a CMMVC8971E error (show details) 8.2.1.0 Data Reduction Pools
IT25970 All Suggested After a FlashCopy consistency group is started a node may warmstart (show details) 8.2.1.0 FlashCopy
IT26049 All Suggested An issue with CPU scheduling may cause the GUI to respond slowly (show details) 8.2.1.0 Graphical User Interface

4. Useful Links

Description Link
Support Websites
Update Matrices, including detailed build version
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Spectrum Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning