This document was last updated on 1 October 2019.
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

| Details | Introduced |
|---|---|
| The v8.1.3 code level introduces strict enforcement of the IETF RFC 1035 specification. If unsupported characters are present in the URL used to launch the management GUI, either a blank page or an HTTP 400 error is displayed (depending on the browser that was used). Please see this TechNote for more information; a hostname-validation sketch follows this table. | 8.1.3.0 |
| Customers using Spectrum Virtualize as Software clusters must ensure that all spare hardware exactly matches active hardware before installing v8.1.2 or later. This issue will be resolved in a future release. | 8.1.2.0 |
| Customers wishing to run RACE compression (including data reduction compression) on Spectrum Virtualize as Software clusters will need to install a 2nd CPU for both nodes within the IO group. This is a restriction that may be lifted in a future PTF. | 8.1.2.0 |
| Customers with FlashSystem V840 systems with Flash code v1.1 on the backend enclosure should not upgrade to v8.1.1.1 or later. This is a temporary restriction that will be lifted in a future PTF. | 8.1.1.1 |
| Customers upgrading systems with more than 64GB of RAM to v8.1 or later will need to run chnodehw to enable access to the extra memory above 64GB. Under some circumstances it may also be necessary to remove and re-add each node in turn. | 8.1.0.0 |
| There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server end points, on the system, to occasionally report as degraded or offline. The issue intermittently occurs when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log. This issue will not cause any loss of access to encrypted data. | 7.8.0.0 |
| There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state. The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress. The system can only be recovered from this state using a T4 Recovery procedure. It is also possible that SAS-attached storage arrays go offline. | 7.8.0.0 |
| Spectrum Virtualize as Software customers should not enable the Transparent Cloud Tiering function. This restriction will be removed under APAR HU01495. | 7.8.0.0 |
| Some configuration information will be incorrect in Spectrum Control. This does not have any functional impact and will be resolved in a future release of Spectrum Control. | 7.8.0.0 |
| Host Disconnects Using VMware vSphere 5.5.0 Update 2 and vSphere 6.0 | n/a |
| If an update stalls or fails then contact IBM Support for further assistance | n/a |
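
The RFC 1035 enforcement described in the first row above applies to the hostname in the URL used to launch the management GUI: each dot-separated label may contain only letters, digits and hyphens, must start with a letter, must not end with a hyphen, and is limited to 63 characters. The following is a minimal sketch, not an IBM-supplied tool, for pre-checking an address before upgrading to v8.1.3 or later; the function name and example hostnames are illustrative assumptions.

```python
import re

# One RFC 1035 label: starts with a letter, ends with a letter or digit,
# may contain hyphens in between, and is at most 63 characters long.
_LABEL = re.compile(r"^[A-Za-z](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def is_rfc1035_hostname(hostname: str) -> bool:
    """Return True if every dot-separated label satisfies RFC 1035."""
    if not hostname or len(hostname) > 255:
        return False
    labels = hostname.rstrip(".").split(".")
    return all(_LABEL.match(label) for label in labels)

# An underscore is not permitted by RFC 1035, so the second address could
# produce a blank page or an HTTP 400 error from the v8.1.3 management GUI.
print(is_rfc1035_hostname("svc-cluster01.example.com"))  # True
print(is_rfc1035_hostname("svc_cluster01.example.com"))  # False
```
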
A release may contain fixes for security issues, fixes for APARs, or both. Consult both tables below to understand the complete set of fixes included in the release; a code-level comparison sketch follows the tables.
| CVE Identifier | Link for additional Information | Resolved in |
|---|---|---|
| CVE-2018-3180 | ibm10884526 | 8.1.3.6 |
| CVE-2018-12547 | ibm10884526 | 8.1.3.6 |
| CVE-2008-5161 | ibm10874368 | 8.1.3.5 |
| CVE-2018-5391 | ibm10872368 | 8.1.3.5 |
| CVE-2017-17833 | ibm10872546 | 8.1.3.4 |
| CVE-2018-11784 | ibm10872550 | 8.1.3.4 |
| CVE-2018-5732 | ibm10741135 | 8.1.3.3 |
| CVE-2018-11776 | ibm10741137 | 8.1.3.3 |
| CVE-2017-17449 | ibm10872364 | 8.1.3.3 |
| CVE-2017-18017 | ibm10872364 | 8.1.3.3 |
| CVE-2018-1517 | ibm10872456 | 8.1.3.3 |
| CVE-2018-2783 | ibm10872456 | 8.1.3.3 |
| CVE-2018-12539 | ibm10872456 | 8.1.3.3 |
| CVE-2018-1775 | ibm10872486 | 8.1.3.3 |
| CVE-2016-10708 | ibm10717661 | 8.1.3.0 |
| CVE-2016-10142 | ibm10717931 | 8.1.3.0 |
| CVE-2017-11176 | ibm10717931 | 8.1.3.0 |
| APAR | Affected Products | Severity | Description | Resolved in | Feature Tags |
|---|---|---|---|---|---|
| HU01617 | All | HIPER | Due to a timing window issue, stopping a FlashCopy mapping, with the -autodelete option, may result in a Tier 2 recovery (show details) | 8.1.3.6 | FlashCopy |
| HU01913 | All | HIPER | A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access (show details) | 8.1.3.6 | Distributed RAID |
| HU01865 | All | Critical | When creating a Hyperswap relationship using addvolumecopy, or similar methods, the system should perform a synchronisation operation to copy the data of the original copy to the new copy. In some cases this synchronisation is skipped, leaving the new copy with bad data (all zeros) (show details) | 8.1.3.6 | HyperSwap |
| HU01876 | All | Critical | Where systems are connected to controllers that have FC ports capable of acting as both initiators and targets, and NPIV is enabled, node warmstarts can occur (show details) | 8.1.3.6 | Backend Storage |
| HU01887 | All | Critical | In circumstances where host configuration data becomes inconsistent, across nodes, an issue in the CLI policing code may cause multiple warmstarts (show details) | 8.1.3.6 | Command Line Interface, Host Cluster |
| HU01888 & HU01997 | All | Critical | An issue with restore mappings, in the FlashCopy component, can cause an IO group to warmstart (show details) | 8.1.3.6 | FlashCopy |
| HU01910 | All | Critical | When FlashCopy mappings are created, with a grain size of 64KB, it is possible for an overflow condition in the bitmap to occur. This can result in multiple node warmstarts with a possible loss of access to data (show details) | 8.1.3.6 | FlashCopy |
| HU01928 | All | Critical | When two IOs attempt to access the same address, the state of the data may be incorrectly set to invalid causing offline volumes and, possibly, offline pools (show details) | 8.1.3.6 | Data Reduction Pools |
| HU01957 | All | Critical | Due to an issue in Data Reduction Pools, when the system attempts an upgrade, there may be node warmstarts (show details) | 8.1.3.6 | Data Reduction Pools, System Update |
| HU02013 | All | Critical | A race condition, between the extent invalidation and destruction, in the garbage collection process, may cause a node warmstart with the possibility of offline volumes (show details) | 8.1.3.6 | Data Reduction Pools |
| HU02025 | All | Critical | An issue with metadata handling, where a pool has been taken offline, may lead to an out of space condition in that pool preventing its return to operation (show details) | 8.1.3.6 | Data Reduction Pools |
| IT25850 | All | Critical | IO performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts leading to a loss of access (show details) | 8.1.3.6 | Distributed RAID |
| IT27460 | All | Critical | Lease expiry can occur between local nodes when remote connection is lost, due to the mishandling of messaging credits (show details) | 8.1.3.6 | Reliability Availability Serviceability |
| IT29040 | All | Critical | Occasionally a DRAID rebuild, with drives of 8TB or more, can encounter an issue which causes node warmstarts and potential loss of access (show details) | 8.1.3.6 | RAID, Distributed RAID |
| HU01507 | All | High Importance | Until the initial synchronisation process completes, high system latency may be experienced when a volume is created with two compressed copies or when space-efficient copy is added to a volume with an existing compressed copy (show details) | 8.1.3.6 | Volume Mirroring |
| HU01761 | All | High Importance | Entering multiple addmdisk commands, in rapid succession, to more than one storage pool, may cause node warmstarts (show details) | 8.1.3.6 | Backend Storage |
| HU01886 | All | High Importance | The Unmap function can leave volume extents, that have not been freed, preventing managed disk and pool removal (show details) | 8.1.3.6 | SCSI Unmap |
| HU01972 | All | High Importance | When an array is in a quiescing state, for example where a member has been deleted, IO may become pended leading to multiple warmstarts (show details) | 8.1.3.6 | RAID, Distributed RAID |
| HU00744 | All | Suggested | Single node warmstart due to an accounting issue within the cache component (show details) | 8.1.3.6 | Cache |
| HU01485 | SVC | Suggested | When an SV1 node is started, with only one PSU powered, powering up the other PSU will not extinguish the Power Fault LED. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed (show details) | 8.1.3.6 | System Monitoring |
| HU01659 | SVC | Suggested | Node Fault LED can be seen to flash in the absence of an error condition. Note: To apply this fix (in new BMC firmware) each node will need to be power cycled (i.e. remove AC power and battery), one at a time, after the upgrade has completed (show details) | 8.1.3.6 | System Monitoring |
| HU01737 | All | Suggested | On the "Update System" screen, for "Test Only", if a valid code image is selected, in the "Run Update Test Utility" dialog, then clicking the "Test" button will initiate a system update (show details) | 8.1.3.6 | System Update |
| HU01857 | All | Suggested | Improved validation of user input in GUI (show details) | 8.1.3.6 | Graphical User Interface |
| HU01860 | SVC | Suggested | During garbage collection the flushing of extents may become stuck leading to a timeout and a single node warmstart (show details) | 8.1.3.6 | Data Reduction Pools |
| HU01869 | All | Suggested | Volume copy deletion, in a DRP, triggered by rmvdiskcopy, rmvolumecopy or addvdiskcopy -autodelete (or similar) may become stalled with the copy being left in "deleting" status (show details) | 8.1.3.6 | Data Reduction Pools |
| HU01915 & IT28654 | All | Suggested | Systems, with encryption enabled, that are using key servers to manage encryption keys, may fail to connect to the key servers if the servers' SSL certificates are part of a chain of trust (show details) | 8.1.3.6 | Encryption |
| HU01916 | All | Suggested | The GUI Dashboard and the CLI lssystem command report physical capacity incorrectly (show details) | 8.1.3.6 | Graphical User Interface, Command Line Interface |
| IT28433 | All | Suggested | Timing window issue in the DRP rehoming component can cause a single node warmstart (show details) | 8.1.3.6 | Data Reduction Pools |
| HU01918 | All | HIPER | Where Data Reduction Pools have been created on earlier code levels, upgrading the system, to an affected release, can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data (show details) | 8.1.3.5 | Data Reduction Pools |
| HU01920 | All | Critical | An issue in the garbage collection process can cause node warmstarts and offline pools (show details) | 8.1.3.5 | Data Reduction Pools |
| HU01492 | All | HIPER | All ports of a 16Gb HBA can be affected when a single port is congested. This can lead to lease expiries if all ports, used for inter-node communication, are on the same FC adapter (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| HU01873 | V7000, V5000 | HIPER | Deleting a volume, in a Data Reduction Pool, while vdisk protection is enabled and when the vdisk was not explicitly unmapped, before deletion, may result in simultaneous node warmstarts. For more details refer to the following Flash (show details) | 8.1.3.4 | Data Reduction Pools |
| HU01825 | All | Critical | Invoking a chrcrelationship command, when one of the relationships in a consistency group is running in the opposite direction to the others, may cause a node warmstart followed by a T2 recovery (show details) | 8.1.3.4 | FlashCopy |
| HU01833 | All | Critical | If both nodes, in an IO group, start up together a timing window issue may occur, that would prevent them running garbage collection, leading to a related DRP running out of space (show details) | 8.1.3.4 | Data Reduction Pools |
| HU01855 | All | Critical | Clusters using Data Reduction Pools can experience multiple warmstarts, on all nodes, putting them in a service state (show details) | 8.1.3.4 | Data Reduction Pools |
| HU01862 | All | Critical | When a Data Reduction Pool is removed and the -force option is specified there may be a temporary loss of access (show details) | 8.1.3.4 | Data Reduction Pools |
| HU01878 | All | Critical | During an upgrade from v7.8.1 or earlier to v8.1.3 or later, if an mdisk goes offline then at completion all volumes may go offline (show details) | 8.1.3.4 | System Update |
| HU01885 | All | Critical | As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further IO leading to node warmstarts (show details) | 8.1.3.4 | Data Reduction Pools |
| HU02042 | All | Critical | An issue in the handling of metadata, after a DRP recovery operation, can lead to repeated node warmstarts, putting an IO group into a service state (show details) | 8.1.3.4 | Data Reduction Pools |
| IT29853 | V5000 | Critical | After upgrading to v8.1.1, or later, V5000 Gen 2 systems, with Gen 1 expansion enclosures, may experience multiple node warmstarts leading to a loss of access (show details) | 8.1.3.4 | System Update |
| HU01661 | All | High Importance | A cache-protection mechanism flag setting can become stuck leading to repeated stops of consistency group synchronisation (show details) | 8.1.3.4 | HyperSwap |
| HU01733 | All | High Importance | Canister information, for the High Density Expansion Enclosure, may be incorrectly reported (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| HU01797 | All | High Importance | Hitachi G1500 backend controllers may exhibit higher than expected latency (show details) | 8.1.3.4 | Backend Storage |
| HU01824 | All | High Importance | Switching replication direction, for HyperSwap relationships, can lead to long IO timeouts (show details) | 8.1.3.4 | HyperSwap |
| HU01839 | All | High Importance | Where a VMware host is served volumes from two different controllers, and an issue on one controller causes the related volumes to be taken offline, IO performance for the volumes from the other controller will be adversely affected (show details) | 8.1.3.4 | Hosts |
| HU01842 | All | High Importance | Bursts of IO to Samsung high capacity flash drives can be interpreted as dropped frames, against the resident slots, leading to redundant drives being incorrectly failed (show details) | 8.1.3.4 | Drives |
| HU01846 | SVC | High Importance | Silent battery discharge condition will unexpectedly take a node offline putting it into a 572 service state (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| HU01902 | V7000, V5000 | High Importance | During an upgrade, an issue with VPD migration, can cause a timeout leading to a stalled upgrade (show details) | 8.1.3.4 | System Update |
| HU01907 | SVC | High Importance | An issue in the handling of the power cable sense registers can cause a node to be put into service state with a 560 error (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| HU01657 | SVC, V7000, V5000 | Suggested | The 16Gb FC HBA firmware may experience an issue, with the detection of unresponsive links, leading to a single node warmstart (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| HU01719 | All | Suggested | Node warmstart due to a parity error in the HBA driver firmware (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| HU01760 | All | Suggested | FlashCopy map progress appears to be stuck at zero percent (show details) | 8.1.3.4 | FlashCopy |
| HU01778 | All | Suggested | An issue, in the HBA adapter, is exposed where a switch port keeps the link active but does not respond to link resets resulting in a node warmstart (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| HU01786 | All | Suggested | An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log (show details) | 8.1.3.4 | Drives |
| HU01791 | All | Suggested | Using the chhost command will remove stored CHAP secrets (show details) | 8.1.3.4 | iSCSI |
| HU01821 | SVC | Suggested | An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies (show details) | 8.1.3.4 | System Update, Data Reduction Pools |
| HU01849 | All | Suggested | An excessive number of SSH sessions may lead to a node warmstart (show details) | 8.1.3.4 | System Monitoring |
| HU02028 | All | Suggested | An issue, with timer cancellation, in the Remote Copy component may cause a node warmstart (show details) | 8.1.3.4 | Metro Mirror, Global Mirror, Global Mirror With Change Volumes |
| IT22591 | All | Suggested | An issue in the HBA adapter firmware may result in node warmstarts (show details) | 8.1.3.4 | Reliability Availability Serviceability |
| IT25457 | All | Suggested | Attempting to remove a copy of a volume which has at least one image mode copy and at least one thin/compressed copy in a Data Reduction Pool will always fail with a CMMVC8971E error (show details) | 8.1.3.4 | Data Reduction Pools |
| IT26049 | All | Suggested | An issue with CPU scheduling may cause the GUI to respond slowly (show details) | 8.1.3.4 | Graphical User Interface |
| HU01828 | All | HIPER | Node warmstarts may occur during deletion of deduplicated volumes, due to a timing-related issue (show details) | 8.1.3.3 | Deduplication |
| HU01847 | All | Critical | FlashCopy handling of medium errors, across a number of drives on backend controllers, may lead to multiple node warmstarts (show details) | 8.1.3.3 | FlashCopy |
| HU01850 | All | Critical | When the last deduplication-enabled volume copy, in a Data Reduction Pool, is deleted the pool may go offline temporarily (show details) | 8.1.3.3 | Data Reduction Pools, Deduplication |
| HU01852 | All | High Importance | The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available (show details) | 8.1.3.3 | Data Reduction Pools |
| HU01858 | All | High Importance | Total used capacity of a Data Reduction Pool, within a single I/O group, is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline (show details) | 8.1.3.3 | Data Reduction Pools |
| HU01870 | All | High Importance | LDAP server communication fails with SSL or TLS security configured (show details) | 8.1.3.3 | LDAP |
| HU01790 | All | Suggested | On the "Create Volumes" page the "Accessible I/O Groups" selection may not update when the "Caching I/O group" selection is changed (show details) | 8.1.3.3 | Graphical User Interface |
| HU01815 | All | Suggested | In Data Reduction Pools, volume size is limited to 96TB (show details) | 8.1.3.3 | Data Reduction Pools |
| HU01856 | All | Suggested | A garbage collection process can time out waiting for an event in the partner node resulting in a node warmstart (show details) | 8.1.3.3 | Data Reduction Pools |
| HU01851 | All | HIPER | When a deduplicated volume is deleted there may be multiple node warmstarts and offline pools (show details) | 8.1.3.2 | Data Reduction Pools, Deduplication |
| HU01837 | All | High Importance | In systems, where a VVols metadata volume has been created, an upgrade to v8.1.3 or later will cause a node warmstart, stalling the upgrade (show details) | 8.1.3.2 | VVols, System Update |
| HU01835 | All | HIPER | Multiple warmstarts may be experienced due to an issue with DRP garbage collection where data for a volume is detected after the volume itself has been removed (show details) | 8.1.3.1 | Data Reduction Pools |
| HU01840 | All | HIPER | When removing large numbers of volumes each with multiple copies it is possible to hit a timeout condition leading to warmstarts (show details) | 8.1.3.1 | SCSI Unmap |
| HU01829 | All | High Importance | An issue in statistical data collection can prevent Easy Tier from working with Data Reduction Pools (show details) | 8.1.3.1 | EasyTier, Data Reduction Pools |
| HU01867 | All | HIPER | Expansion of a volume may fail due to an issue with accounting of physical capacity. All nodes will warmstart in order to clear the problem. The expansion may be triggered by writing data to a thin-provisioned or compressed volume. (show details) | 8.1.3.0 | Thin Provisioning, Compression |
| HU01877 | All | HIPER | Where a volume is being expanded, and the additional capacity is to be formatted, the creation of a related volume copy may result in multiple warmstarts and a potential loss of access to data (show details) | 8.1.3.0 | Volume Mirroring, Cache |
| HU01708 | All | Critical | A node removal operation during an array rebuild can cause a loss of parity data leading to bad blocks (show details) | 8.1.3.0 | RAID |
| HU01774 | All | Critical | After a failed mkhost command for an iSCSI host any IO from that host will cause multiple warmstarts (show details) | 8.1.3.0 | iSCSI |
| HU01780 | All | Critical | Migrating a volume to an image-mode volume on controllers that support SCSI unmap will trigger repeated cluster recoveries (show details) | 8.1.3.0 | SCSI Unmap |
| HU01781 | All | Critical | An issue with workload balancing in the kernel scheduler can deprive some processes of the resources necessary to complete successfully, resulting in node warmstarts that may impact performance, with the possibility of a loss of access to volumes (show details) | 8.1.3.0 | |
| HU01798 | All | Critical | Manual (user-paced) upgrade to 8.1.2 may invalidate hardened data, putting all nodes in service state if they are shut down and then restarted. Automatic upgrade is not affected by this issue. For more details refer to the following Flash (show details) | 8.1.3.0 | System Update |
| HU01802 | All | Critical | USB encryption key can become inaccessible after upgrade. If the system is later rebooted then any encrypted volumes will be unavailable (show details) | 8.1.3.0 | Encryption |
| HU01804 | All | Critical | During a system upgrade the processing required to upgrade the internal mapping between volumes and volume copies can lead to high latency impacting host IO (show details) | 8.1.3.0 | System Update, Hosts |
| HU01809 | SVC, V7000 | Critical | An issue in the handling of extent allocation in Data Reduction Pools can result in volumes being taken offline (show details) | 8.1.3.0 | Data Reduction Pools |
| HU01853 | All | Critical | In a Data Reduction Pool, it is possible for metadata to be assigned incorrect values leading to offline managed disk groups (show details) | 8.1.3.0 | Data Reduction Pools |
| HU01752 | SVC, V7000 | High Importance | A problem with the way IBM FlashSystem FS900 handles SCSI WRITE SAME commands (without the Unmap bit set) can lead to port exclusions (show details) | 8.1.3.0 | Backend Storage |
| HU01803 | All | High Importance | The garbage collection process in DRP may become stalled resulting in no reclamation of free space from removed volumes (show details) | 8.1.3.0 | Data Reduction Pools |
| HU01818 | All | High Importance | Excessive debug logging in the Data Reduction Pools component can adversely impact system performance (show details) | 8.1.3.0 | Data Reduction Pools |
| HU01460 | All | Suggested | If another drive fails during an array rebuild, the high processing demand in RAID for handling many medium errors during the rebuild can lead to a node warmstart (show details) | 8.1.3.0 | RAID |
| HU01724 | All | Suggested | An IO lock handling issue between nodes can lead to a single node warmstart (show details) | 8.1.3.0 | RAID |
| HU01751 | All | Suggested | When RAID attempts to flag a strip as bad, and that strip has already been flagged, a node may warmstart (show details) | 8.1.3.0 | RAID |
| HU01795 | All | Suggested | A thread locking issue in the Remote Copy component may cause a node warmstart (show details) | 8.1.3.0 | |
| HU01800 | All | Suggested | Under some rare circumstances a node warmstart may occur whilst creating volumes in a Data Reduction Pool (show details) | 8.1.3.0 | Data Reduction Pools |
| HU01801 | All | Suggested | An issue in the handling of unmaps for mdisks can lead to a node warmstart (show details) | 8.1.3.0 | SCSI Unmap |
| HU01820 | All | Suggested | When an unusual IO request pattern is received it is possible for the handling of Data Reduction Pool metadata to become stuck, leading to a node warmstart (show details) | 8.1.3.0 | Data Reduction Pools |
| IT24900 | V7000, V5000 | Suggested | Whilst replacing a control enclosure midplane, an issue at boot can prevent VPD being assigned, delaying a return to service (show details) | 8.1.3.0 | Reliability Availability Serviceability |
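
Both tables record the PTF level in which each fix first ships, so checking whether an installed system already contains a particular fix reduces to comparing code levels within the 8.1.3 PTF stream. The sketch below is a minimal illustration of that comparison, not an IBM tool: the identifiers and levels in the dictionary are copied from the tables above, and the helper names are hypothetical.

```python
# Map a fix identifier to the "Resolved in" level from the tables above.
RESOLVED_IN = {
    "CVE-2018-3180": "8.1.3.6",
    "CVE-2018-5391": "8.1.3.5",
    "HU01913": "8.1.3.6",
    "HU01828": "8.1.3.3",
}

def level(code: str) -> tuple:
    """Turn a dotted code level such as '8.1.3.4' into a comparable tuple of ints."""
    return tuple(int(part) for part in code.split("."))

def includes_fix(installed: str, fix_id: str) -> bool:
    """True if the installed level is at or above the level that first resolves fix_id."""
    return level(installed) >= level(RESOLVED_IN[fix_id])

print(includes_fix("8.1.3.6", "CVE-2018-5391"))  # True: 8.1.3.6 >= 8.1.3.5
print(includes_fix("8.1.3.4", "HU01913"))        # False: HU01913 first ships in 8.1.3.6
```

Note that this simple tuple comparison is only meaningful within a single release stream; other streams (for example 8.2.x) track their own resolution levels, so consult the release notes for that stream instead.
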
| Description | Link |
|---|---|
| Support Websites | |
| Update Matrices, including detailed build version | |
| Support Information pages providing links to the following information: | |
| Supported Drive Types and Firmware Levels | |
| SAN Volume Controller and Storwize Family Inter-cluster Metro Mirror and Global Mirror Compatibility Cross Reference | |
| Software Upgrade Test Utility | |
| Software Upgrade Planning | |