You can use the command-line interface (CLI) to install
software upgrades.
Attention: Before you start a software upgrade,
you must check for offline or degraded VDisks. An offline VDisk can
cause modified write data to be pinned in the SAN Volume Controller cache,
which prevents VDisk failover and causes a loss of I/O access during
the software upgrade. If its fast_write_state is empty, a VDisk can
be offline without causing errors during the software upgrade.
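The pre-upgrade check above can be screened with a short script. A minimal sketch: the svcinfo lsvdisk command name is real, but the sample listing and the column positions below are illustrative only (real lsvdisk output has more columns).

```shell
# In the cluster SSH session you would list offline VDisks, for example:
#   svcinfo lsvdisk -filtervalue status=offline
# Below, the same screening logic over an illustrative captured listing.
sample_listing="id name status fast_write_state
0 vdisk0 online empty
1 vdisk1 offline not_empty
2 vdisk2 offline empty"

# Flag VDisks that are offline while modified write data is still cached
# (fast_write_state not empty) -- these prevent failover during the upgrade.
# vdisk2 is offline but has an empty fast_write_state, so it is not flagged.
echo "$sample_listing" | awk 'NR > 1 && $3 == "offline" && $4 != "empty" { print $2 }'
```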
Perform the following steps to upgrade the software:
- Download the SAN Volume Controller code
from the Support for SAN Volume Controller (2145)
Web site:
www.ibm.com/storage/support/2145
- If you want to write the SAN Volume Controller code
to a CD, you must download the CD image.
- If you do not want to write the SAN Volume Controller code
to a CD, you must download the install image.
- Use PuTTY scp (pscp) to copy the software upgrade files
to the node.
- Ensure that the software upgrade file has been successfully
copied.
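The copy step can be sketched as follows. The saved PuTTY session name, the upgrade file name, and the target directory are assumptions; substitute the values for your environment. The pscp command is echoed rather than executed here, because it needs a live cluster.

```shell
SESSION="svc-cluster"            # hypothetical saved PuTTY session for the cluster
UPGRADE_FILE="IBM2145_INSTALL"   # hypothetical upgrade file name

# Copy the upgrade file to the cluster (echoed, not executed):
echo "pscp -load $SESSION $UPGRADE_FILE admin@cluster_ip:/home/admin/upgrade/"
```

To confirm that the copy succeeded, compare the file size reported by ls -l in the cluster SSH session with the size of the local file.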
Before you begin the software upgrade, you must
be aware of the following:
- The install process fails under the following conditions:
- If the software that is installed on the remote cluster is not
compatible with the new software, or if an intercluster communication
error prevents the software from checking compatibility.
- If any node in the cluster has
a hardware type that is not supported by the new software.
- If the SAN Volume Controller software
determines that one or more virtual disks (VDisks) in the cluster
would be taken offline by rebooting the nodes as part of the upgrade
process. Details about which VDisks would be affected can be found
by using the svcinfo lsnodedependentvdisks command
or the View Dependent VDisks action from the
Viewing Nodes panel. You can use the force flag to override this restriction
if you are prepared to lose access to data during the upgrade.
- The software upgrade is distributed to all the nodes in the cluster
using the Fibre Channel connections between the nodes.
- Nodes are updated one at a time.
- Nodes run the new software concurrently with normal cluster
activity.
- While a node is being updated, it does not participate in I/O activity
in its I/O group. As a result, all I/O activity for the VDisks in
the I/O group is directed to the other node in the I/O group by the
host multipathing software.
- There is a 30-minute delay between node updates. This delay allows
time for the host multipathing software to rediscover paths to the
nodes that have been upgraded, so that there is no loss of access
when another node in the I/O group is updated.
- The software update is not committed until all nodes in the cluster
have been successfully updated to the new software level. If all nodes
successfully restart with the new software level, the new level is
committed. When the new level is committed, the cluster vital product
data (VPD) is updated to reflect the new software level.
- You cannot invoke the new functions of the upgraded software until
all member nodes are upgraded and the update has been committed.
- Because the software upgrade process takes some time, the install
command completes as soon as the software level is verified by the
cluster. To determine when the upgrade has completed, you must either
display the software level in the cluster VPD or look for the Software
upgrade complete event in the error/event log. If any node fails to
restart with the new software level or fails at any other time during
the process, the software level is backed off.
- During a software upgrade the version number of each node is updated
when the software has been installed and the node has been restarted.
The cluster software version number is updated when the new software
level is committed.
- When the software upgrade starts, an entry is made in the error
or event log, and another entry is made when the upgrade completes
or fails.
- Issue the following CLI command to start the software upgrade
process:
svctask applysoftware -file software_upgrade_file
where software_upgrade_file is
the name of the software upgrade file.
The software upgrade does
not start if the cluster identifies any VDisks that would go offline
as a result of rebooting the nodes as part of the cluster upgrade.
Use the svcinfo lsnodedependentvdisks command to identify
the cause of the failed upgrade. An optional force parameter indicates
that the upgrade should continue in spite of the identified problem;
if you use this parameter, you are prompted to confirm that you want
to continue. The behavior of the force parameter has changed: it is
no longer required when you apply an upgrade to a cluster that has
errors in the error log.
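The start of the upgrade can be sketched as follows. The svctask applysoftware command name and its -file and -force parameters are assumed here, and the file name is hypothetical; the commands are echoed rather than executed because they need a live cluster.

```shell
UPGRADE_FILE="IBM2145_INSTALL"   # hypothetical upgrade file name

# Normal start -- fails if any VDisk would go offline as a result of the upgrade:
echo "svctask applysoftware -file $UPGRADE_FILE"

# Override, only if you accept losing access to data during the upgrade:
echo "svctask applysoftware -file $UPGRADE_FILE -force"
```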
- Issue the following CLI command to check the status of
the software upgrade process:
svcinfo lssoftwareupgradestatus
Note: If a status of stalled_non_redundant is
displayed, proceeding with the remaining set of node upgrades might
result in offline VDisks. Contact an IBM® service
representative to complete the upgrade.
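The status check above can be scripted around the command's output. A minimal sketch: the stalled_non_redundant value comes from this document, but the inactive value and the helper function are assumptions, not verified command output.

```shell
# Interpret a status string reported by svcinfo lssoftwareupgradestatus.
check_status() {
  case "$1" in
    inactive)              echo "no upgrade in progress" ;;
    stalled_non_redundant) echo "stop: contact an IBM service representative" ;;
    *)                     echo "upgrade in progress; check again later" ;;
  esac
}

check_status stalled_non_redundant
```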
- Perform the following steps to verify that the software
upgrade successfully completed:
- Issue the svctask dumperrlog CLI
command to send the contents of the error log to a text file.
The following output is displayed in the text file if the
software is successfully upgraded:
Upgrade completed successfully
- Issue the svcinfo lsnodevpd CLI
command for each node that is in the cluster. The software version
field displays the new software level.
When a new software level is applied, it is automatically
installed on all the nodes that are in the cluster.
Note: The software
upgrade can take up to 30 minutes per node.
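The verification step above can be sketched as a simple text check. The success message is quoted from this document; the log text below is a stand-in for the file that svctask dumperrlog writes.

```shell
# Stand-in for the text file produced by: svctask dumperrlog
log_text="... Upgrade completed successfully ..."

# Look for the success message in the dumped error log.
if echo "$log_text" | grep -q "Upgrade completed successfully"; then
  echo "software upgrade verified"
else
  echo "success message not found; check lsnodevpd on each node"
fi
```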