This document was last updated on 21 November 2019.
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

| Details | Introduced |
|---|---|
| SRA does not work after changing the SSH node method from password to key on AWS. This is a known issue that may be lifted in a future PTF. | 8.3.0.0 |
| Customers using iSER attached hosts, with Mellanox 25G adapters, should be aware that IPv6 sessions will not fail over, for example, during a cluster upgrade. This is a known issue that may be lifted in a future PTF. | 8.3.0.0 |
| Customers with 32GB DH8 systems cannot upgrade to v8.3 or later. This is a restriction that may be lifted in a future PTF. | 8.3.0.0 |
| Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw to enable access to the extra memory above 64GB. Under some circumstances it may also be necessary to remove and re-add each node in turn (a command-line sketch follows this table). | 8.1.0.0 |
| If an update stalls or fails, contact IBM Support for further assistance. | n/a |
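For reference, a minimal command-line sketch of the chnodehw step described in the table above is shown below. It is only a sketch, assuming an SVC-style cluster administered over the Spectrum Virtualize CLI, with example node IDs 1 and 2, I/O group 0 and a placeholder panel name; take the real values from lsnode output and work on one node at a time so the partner node continues serving IO.

```
# Check the memory reported for each node (example node IDs 1 and 2)
lsnodevpd 1
lsnodevpd 2

# Apply pending hardware changes so the node can use memory above 64GB;
# wait for node 1 to return online before repeating on its partner
chnodehw 1
chnodehw 2

# If the additional memory is still not recognised, remove and re-add each
# node in turn (placeholder panel name shown), again one node at a time
rmnode 2
addnode -panelname <panel_name> -iogrp 0
```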
A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.
| CVE Identifier | Link for additional Information | Resolved in |
|---|---|---|
| CVE-2019-2602 | 1073958 | 8.3.0.0 |
| APAR | Affected Products | Severity | Description | Resolved in | Feature Tags |
|---|---|---|---|---|---|
| HU02014 | SVC | HIPER | After a loss of power, where a node has a dead CMOS battery, it will fail to restart correctly. It is possible for both nodes in an IO group to experience this issue | 8.3.0.1 | Reliability Availability Serviceability |
| HU02064 | SVC, V7000 | HIPER | An issue in the firmware for compression accelerator cards can cause offline compressed volumes. For more details refer to the related Flash | 8.3.0.1 | Compression |
| HU01924 | All | Critical | Migrating extents to an mdisk that is not a member of an mdisk group may result in a Tier 2 recovery | 8.3.0.1 | Thin Provisioning |
| HU01998 | All | Critical | All SCSI command types can set volumes as busy, resulting in IO timeouts and multiple node warmstarts, with the possibility of an offline IO group | 8.3.0.1 | Hosts |
| HU02016 | SVC | Critical | A memory leak in the component that handles thin-provisioned mdisks can lead to an adverse performance impact with the possibility of offline mdisks | 8.3.0.1 | Backend Storage |
| HU02036 | All | Critical | It is possible for commands that alter pool-level extent reservations (i.e. migratevdisk or rmmdisk) to conflict with an ongoing Easy Tier migration, resulting in a Tier 2 recovery | 8.3.0.1 | EasyTier |
| HU02043 | All | Critical | Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565 | 8.3.0.1 | Support Data Collection |
| HU02044 | All | Critical | A deadlock condition, affecting DRP process interaction with DRAID, can cause multiple warmstarts with the possibility of a loss of access to data | 8.3.0.1 | Distributed RAID, Data Reduction Pools |
| HU02045 | All | Critical | When a node is removed from the cluster using the CLI, it may still be shown as online in the GUI. If an attempt is made to shut down this node from the GUI, whilst it appears to be online, then the whole cluster will shut down | 8.3.0.1 | Graphical User Interface |
| HU02050 | FS9100, V5000, V7000 | Critical | Compression hardware can have an issue processing certain types of data, resulting in node reboots and marking the compression hardware as faulty even though it is serviceable | 8.3.0.1 | Compression |
| HU02077 | All | Critical | A node upgrading to v8.3.0.0 will lose access to controllers directly attached to its FC ports and the upgrade will stall | 8.3.0.1 | Backend Storage |
| HU02083 | All | Critical | During DRAID rebuilds, an issue in the handling of memory buffers can lead to multiple node warmstarts and a loss of access to data | 8.3.0.1 | Distributed RAID |
| HU02086 | All | Critical | An issue in IP Quorum may cause a Tier 2 recovery during initial connection to a candidate device | 8.3.0.1 | IP Quorum |
| HU02089 | All | Critical | Due to changes to quorum management, during an upgrade to v8.2.x or later there may be multiple warmstarts, with the possibility of a loss of access to data | 8.3.0.1 | System Update |
| HU02097 | All | Critical | Workloads with data that is highly suited to deduplication can provoke high CPU utilisation, as multiple destinations try to dedupe to one source. This adversely impacts performance, with the possibility of offline mdisk groups | 8.3.0.1 | Data Reduction Pools |
| IT30595 | All | Critical | A resource shortage in the RAID component can cause mdisks to be taken offline | 8.3.0.1 | RAID |
| HU02006 | All | High Importance | Garbage collection behaviour can become overzealous, adversely affecting performance | 8.3.0.1 | Data Reduction Pools |
| HU02053 | FS9100, V7000G3, V5100 | High Importance | An issue with canister BIOS update can stall system upgrades | 8.3.0.1 | System Update |
| HU02055 | All | High Importance | Creating a FlashCopy snapshot in the GUI does not set the same preferred node for both source and target volumes. This may adversely impact performance | 8.3.0.1 | FlashCopy |
| HU02072 | All | High Importance | An issue in the handling of email transmission can write a large file to the node boot drive. If this causes the boot drive to become full, the node will go offline with error 565 | 8.3.0.1 | System Monitoring |
| HU02080 | All | High Importance | When a DRP is running low on free space, the credit allocation algorithm for garbage collection can be exposed to a race condition, adversely affecting performance | 8.3.0.1 | Data Reduction Pools |
| IT29975 | All | High Importance | During Ethernet port configuration, netmask validation will only accept a fourth octet of zero. Non-zero values will cause the interface to remain inactive | 8.3.0.1 | iSCSI |
| HU02067 | All | Suggested | If multiple recipients are specified for callhome emails, then no callhome emails will be sent | 8.3.0.1 | System Monitoring |
| HU02073 | All | Suggested | Detection of an invalid list entry in the parity handling process can lead to a node warmstart | 8.3.0.1 | RAID |
| HU02079 | All | Suggested | Starting a FlashCopy mapping within a Data Reduction Pool a large number of times may cause a node warmstart | 8.3.0.1 | Data Reduction Pools, FlashCopy |
| HU02087 | All | Suggested | LDAP users with SSH keys cannot create volumes after upgrading to 8.3.0.0 | 8.3.0.1 | LDAP |
| HU02084 | FS9100, V5000, V7000 | Suggested | If a node goes offline after the firmware of multiple NVMe drives has been upgraded, then incorrect 3090/90021 errors may be seen in the Event Log | 8.3.0.1 | Drives |
| IT30448 | All | Suggested | If an IP Quorum app is killed during the commit phase of a code upgrade, then that offline IP Quorum device cannot be removed post upgrade | 8.3.0.1 | IP Quorum |
| HU02007 | All | HIPER | During volume migration, an issue in the handling of old-to-new extent transfer can lead to cluster-wide warmstarts | 8.3.0.0 | Storage Virtualisation |
| HU01888 & HU01997 | All | Critical | An issue with restore mappings in the FlashCopy component can cause an IO group to warmstart | 8.3.0.0 | FlashCopy |
| HU01909 | All | Critical | Upgrading a system with Read-Intensive drives to 8.2 or later may result in node warmstarts | 8.3.0.0 | System Update, DRAID, Drives |
| HU01921 | All | Critical | Where FlashCopy mapping targets are also in remote copy relationships, there may be node warmstarts with a temporary loss of access to data | 8.3.0.0 | FlashCopy, Global Mirror, Metro Mirror |
| HU01933 | All | Critical | Under rare circumstances the DRP deduplication rehoming process can become truncated. Subsequent detection of inconsistent metadata can lead to offline Data Reduction Pools | 8.3.0.0 | Data Reduction Pools, Deduplication |
| HU01985 | All | Critical | As a consequence of a DRP recovery, bad metadata may be created. When the region of disk associated with the bad metadata is accessed, there may be IO group warmstarts | 8.3.0.0 | Data Reduction Pools |
| HU01989 | All | Critical | For large drives, bitmap scanning during an array rebuild can time out, resulting in multiple node warmstarts and possibly leading to offline IO groups | 8.3.0.0 | Distributed RAID |
| HU01990 | All | Critical | Bad return codes from the partnership compression component can cause multiple node warmstarts, taking nodes offline | 8.3.0.0 | Metro Mirror, Global Mirror, Global Mirror With Change Volumes |
| HU02005 | All | Critical | An issue in the background copy process prevents grains above a 128TB limit from being cleaned properly. As a consequence, there may be multiple node warmstarts with the potential for a loss of access to data | 8.3.0.0 | Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
| HU02009 | All | Critical | Systems which are using DRP, with the maximum possible extent size of 8GB, and which experience a very specific IO workload, may experience an issue due to garbage collection. This can cause repeated node warmstarts and loss of access to data | 8.3.0.0 | Data Reduction Pools |
| HU02027 | All | Critical | Fabric congestion can cause internal resource constraints in 16Gb HBAs, leading to lease expiries | 8.3.0.0 | Reliability Availability Serviceability |
| IT25367 | All | Critical | A T2 recovery may occur when an attempt is made to upgrade, or downgrade, the firmware for an unsupported drive type | 8.3.0.0 | Drives |
| IT26257 | All | Critical | Starting a relationship when the remote volume is offline may result in a T2 recovery | 8.3.0.0 | HyperSwap |
| HU01836 | All | High Importance | When an auxiliary volume is moved, an issue with pausing the master volume can lead to node warmstarts | 8.3.0.0 | HyperSwap |
| HU01904 | All | High Importance | A timing issue can cause a remote copy relationship to become stuck in a pausing state, resulting in a node warmstart | 8.3.0.0 | Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
| HU01919 | FS9100, V7000 | High Importance | During an upgrade some components may take too long to initialise, causing node warmstarts | 8.3.0.0 | System Update |
| HU01942 | FS9100, V7000, V5000 | High Importance | NVMe drive ports can go offline for a very short time when an upgrade of that drive's firmware commences | 8.3.0.0 | Drives |
| HU01969 | All | High Importance | It is possible, after an rmrcrelationship command is run, that the connection to the remote cluster may be lost | 8.3.0.0 | Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
| HU02011 | All | High Importance | When a node warmstart occurs on a system using Data Reduction Pools, there is a small possibility that the node will not automatically return online. If the partner node is also offline, this can cause temporary loss of access to data | 8.3.0.0 | Data Reduction Pools |
| HU02012 | All | High Importance | Under certain IO workloads the garbage collection process can adversely impact volume write response times | 8.3.0.0 | Data Reduction Pools |
| HU02051 | All | High Importance | If unexpected actions are taken during node replacement, node warmstarts and temporary loss of access to data may occur. This issue can only occur if a node is replaced, and then the old node is re-added to the cluster | 8.3.0.0 | Reliability Availability Serviceability |
| HU01777 | All | Suggested | Where not all IO groups have NPIV enabled, hosts may be shown as "Degraded" with an incorrect count of node logins | 8.3.0.0 | Command Line Interface |
| HU01843 | All | Suggested | A node hardware issue can cause a CLI command to time out, resulting in a node warmstart | 8.3.0.0 | Command Line Interface |
| HU01868 | All | Suggested | After deleting an encrypted external mdisk, it is possible for the 'encrypted' status of volumes to change to 'no', even though all remaining mdisks are encrypted | 8.3.0.0 | Encryption |
| HU01872 | All | Suggested | An issue with cache partition fairness can favour small IOs over large ones, leading to a node warmstart | 8.3.0.0 | Cache |
| HU01880 | All | Suggested | When a write to a secondary volume becomes stalled, a node at the primary site may warmstart | 8.3.0.0 | Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
| HU01892 | All | Suggested | LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported | 8.3.0.0 | Backend Storage |
| HU01936 | All | Suggested | When shrinking a volume that has host mappings, there may be recurring node warmstarts | 8.3.0.0 | Cache |
| HU01955 | All | Suggested | The presence of unsupported configurations in a Spectrum Virtualize environment can cause a mishandling of unsupported commands, leading to a node warmstart | 8.3.0.0 | Reliability Availability Serviceability |
| HU01956 | All | Suggested | The output from an lsdrive command shows the write endurance usage for SSDs as blank rather than 0 | 8.3.0.0 | Command Line Interface |
| HU01963 | All | Suggested | A deadlock condition in the deduplication component can lead to a node warmstart | 8.3.0.0 | Deduplication |
| HU01974 | All | Suggested | With all Remote Support Assistant connections closed, the GUI may show that a connection is still in progress | 8.3.0.0 | System Monitoring |
| HU01978 | All | Suggested | HyperSwap volumes cannot be created; the mkvolume command fails with a CMMVC7050E error | 8.3.0.0 | HyperSwap |
| HU01979 | All | Suggested | The figure for used_virtualization in the output of an lslicense command may be unexpectedly large | 8.3.0.0 | Command Line Interface |
| HU01982 | All | Suggested | In an environment with multiple IP Quorum servers, if the quorum component encounters a duplicate UID then a node may warmstart | 8.3.0.0 | IP Quorum |
| HU01983 | All | Suggested | Improved debug data capture to assist in determining the reason for a Data Reduction Pool being taken offline | 8.3.0.0 | Data Reduction Pools |
| HU01986 | All | Suggested | An accounting issue in the FlashCopy component may cause node warmstarts | 8.3.0.0 | FlashCopy |
| HU01991 | All | Suggested | An issue in the handling of extent allocation in the DRP component can cause a node warmstart | 8.3.0.0 | Data Reduction Pools |
| HU02020 | FS9100, V7000, V5000 | Suggested | An internal hardware bus running at the incorrect speed may give rise to spurious DIMM over-temperature errors | 8.3.0.0 | Reliability Availability Serviceability |
| HU02029 | All | Suggested | An issue with the SSMTP process may result in failed callhome, inventory reporting and user notifications. A testemail command will fail with a CMMVC9051E error | 8.3.0.0 | System Monitoring |
| HU02039 | All | Suggested | An issue in the management steps of DRP recovery may lead to a node warmstart | 8.3.0.0 | Data Reduction Pools |
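As a usage note for the tables above: whether a given fix is present on a system depends on reaching the "Resolved in" code level. A minimal sketch of checking this from the Spectrum Virtualize CLI is shown below; the code_level field name is as seen in typical lssystem output and should be verified on the actual system.

```
# Display the system properties; the code_level field shows the installed
# version (for example 8.3.0.1) followed by the detailed build string
lssystem

# On recent code levels, lsupdate also summarises the system update state
lsupdate
```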
| Description | Link |
|---|---|
| Support Websites | |
| Update Matrices, including detailed build version | |
| Support Information pages providing links to the following information: | |
| Spectrum Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference | |
| Software Upgrade Test Utility (a CLI usage sketch follows this table) | |
| Software Upgrade Planning | |
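The Software Upgrade Test Utility referenced above is normally run before applying either of the PTF levels in this document. The sequence below is only a hedged sketch: the package file names are placeholders (use the names of the files actually downloaded from IBM Fix Central), and on many systems the update itself is driven from the GUI update wizard rather than the CLI.

```
# Install and run the upgrade test utility against the target release
# (file name is a placeholder for the downloaded utility package)
applysoftware -file IBM2145_INSTALL_svcupgradetest_xx.xx
svcupgradetest -v 8.3.0.1

# Review any warnings reported by the utility, then start the update with
# the downloaded update package (file name again a placeholder)
applysoftware -file IBM2145_INSTALL_8.3.0.1
```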