You can replace SAN Volume Controller 2145-8F2, SAN Volume Controller 2145-8F4, SAN Volume Controller 2145-8A4, or SAN Volume Controller 2145-8G4 nodes with SAN Volume Controller 2145-8A4 or SAN Volume Controller 2145-CF8 nodes. The following procedure is disruptive because the new node does not use the same WWNN and WWPNs as the node that it replaces. You must rezone your SAN, and the host multipathing device drivers must discover new paths. Access to virtual disks (VDisks) is lost during this task.
This task assumes that the following conditions exist:
- The cluster software is at version 5.1.0 or later.
- All nodes that are configured in the cluster are present.
- All errors in the cluster error log are fixed.
- All managed disks (MDisks) are online.
- You have a 2145 UPS-1U unit for each new node.
Perform the following steps to replace nodes:
- (If the cluster software version is at 5.1.0 or later, complete this step.) Confirm that no hosts have dependencies on the node. When you shut down a node that is part of a cluster, or delete the node from a cluster, use the Show Dependent VDisks menu option on the Viewing Nodes panel in the SAN Volume Controller Console to display all the VDisks that are dependent on the node, or use the svcinfo lsnodedependentvdisk command to view dependent VDisks. If dependent VDisks exist, determine whether they are in use. If they are in use, either restore the redundant configuration or suspend the host application. If a dependent quorum disk is reported, repair the access to the quorum disk or modify the quorum disk configuration.
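As a sketch, the dependency check might look like this from the CLI; the node name node1 is a placeholder for the node that you are replacing:

```shell
# List the nodes in the cluster to find the name or ID of the node to replace.
svcinfo lsnode -delim :

# Display any VDisks that depend on that node (available at 5.1.0 and later).
# If no VDisks are listed, it is safe to proceed; otherwise restore the
# redundant configuration or suspend the host application first.
svcinfo lsnodedependentvdisk node1
```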
- Quiesce all I/O from the hosts that access the I/O group of the node that you are replacing.
- Delete the node that you want to replace from the cluster and I/O group.
Notes:
- The node is not deleted until the SAN Volume Controller cache is destaged to disk. During this time, the partner node in the I/O group transitions to write-through mode.
- You can use the command-line interface (CLI) or the SAN Volume Controller Console to verify that the deletion process has completed.
- Ensure that the node is no longer a member of the cluster.
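For example, assuming the node to remove is named node1 (a placeholder), the deletion and verification might look like this from the CLI:

```shell
# Remove the node from the cluster. The command can return before the
# cache destage completes, so the node might remain visible for a time.
svctask rmnode node1

# Verify that the node is no longer a member of the cluster:
# node1 must no longer appear in the output before you continue.
svcinfo lsnode -delim :
```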
- Power off the node and remove it from the rack.
- Install the replacement (new) node in the rack and connect the uninterruptible power supply cables and the fibre-channel cables.
- Power on the node.
- Rezone your switch zones to remove the ports of the node that you are replacing from the host and storage zones. Replace these ports with the ports of the replacement node.
- Add the replacement node to the cluster and I/O group.
Important: Both nodes in the I/O group cache data; however, the cache sizes are asymmetric if the remaining partner node in the I/O group is a SAN Volume Controller 2145-4F2 node. The replacement node is limited by the cache size of the partner node in the I/O group. Therefore, the replacement node does not use its full 8 GB cache size until you replace the other SAN Volume Controller 2145-4F2 node in the I/O group.
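As a sketch, the replacement node can be added from the CLI; the WWNN value, the I/O group name io_grp0, and the node name are placeholders for your configuration:

```shell
# List candidate nodes that are visible on the fabric but not yet in a
# cluster, to confirm that the replacement node has been detected.
svcinfo lsnodecandidate

# Add the replacement node to the target I/O group.
svctask addnode -wwnodename 5005076801000123 -iogrp io_grp0 -name node1_new
```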
- From each host, issue a rescan of the multipathing software to discover the new paths to VDisks.
Notes:
- If your system is inactive, you can perform this step after you have replaced all nodes in the cluster.
- The host multipathing device drivers take approximately 30 minutes to recover the paths.
- See the documentation that is provided with your multipathing device driver for information on how to query paths to ensure that all paths have been recovered before you proceed to the next step.
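For example, on an AIX host that uses the IBM Subsystem Device Driver (SDD), the rescan and path query might look like this; the exact commands depend on your operating system and multipathing driver:

```shell
# Rediscover devices and paths (AIX; other platforms use different commands).
cfgmgr

# Query the state of each path. All paths should report an open/normal
# state before you continue with the next step.
datapath query device
```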
- Repeat steps 2 to 11 for the partner node in the I/O group.
- Repeat steps 2 to 12 for each node in the cluster that you want to replace.
- Resume host I/O.