You might have to remove a node from a cluster if the node
has failed and is being replaced with a new node, or if a repair
has changed the node so that the cluster no longer recognizes
it.
The cache on the selected node is flushed before the node
is taken offline. In some circumstances, such as when the system is
already degraded (for example, when both nodes in the I/O group are
online but the virtual disks within the I/O group are degraded), the
system ensures that data loss does not occur as a result of deleting
the only node that holds the cache data. The cache is flushed before the
node is deleted so that data is not lost if a failure occurs on the other
node in the I/O group.
Before
deleting a node from the cluster, record the node serial number, worldwide
node name (WWNN), all worldwide port names (WWPNs), and the I/O group
that the node is currently part of. Having this node information
can help you avoid data corruption if the node is added back to the cluster at
a later time.
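If you manage the cluster over SSH, the same details can be captured with the command-line interface. The following is a minimal sketch, not the documented procedure: it assumes the `svcinfo lsnode` command is available at your code level, and `cluster_ip` and `node1` are placeholder values.

```shell
# Sketch: record node details before deletion (assumes SSH access to the
# cluster CLI; cluster_ip and node1 are hypothetical placeholders).
NODE=node1

# The detailed node view lists the serial number, WWNN, port WWPNs,
# and the I/O group membership; save a copy for later reference.
ssh admin@cluster_ip "svcinfo lsnode $NODE" | tee node_${NODE}_record.txt
```

Keep the saved record with your change documentation so the node can be returned to the same I/O group if it is re-added.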
Attention: - If you are removing a single node and the remaining node in the
I/O group is online, the cache on the remaining node goes into write-through
mode. The data is then exposed to a single point of failure if the
remaining node fails.
- If virtual disks (VDisks) are already degraded before
you delete a node, redundancy to the VDisks is further reduced. Removing
a node might result in loss of access to data, or in data loss.
- Removing the last node in the cluster destroys the cluster. Before
you delete the last node in the cluster, ensure that you want to destroy
the cluster.
- When you delete a node, you remove all redundancy from the I/O
group. As a result, new or existing failures can cause I/O errors
on the hosts. The following failures can occur:
- Host configuration errors
- Zoning errors
- Multipathing software configuration errors
- If you are deleting the last node in an I/O group and there are
VDisks assigned to the I/O group, you cannot delete the node from
the cluster if the node is online. You must back up or migrate all
data that you want to save before you delete the node. If the node
is offline, you can delete the node.
This task assumes that you have already launched the SAN Volume Controller Console.
Complete the following steps to delete a node from a cluster:
- Unless this is the last node in the cluster,
use the Shut Down a Node option on the SAN Volume Controller Console
to power off the node that you are removing.
This step ensures that the multipathing device driver does not rediscover
the paths that are manually removed before you issue the delete node
request.
Attention: - When you remove the configuration node, the configuration
function moves to a different node within the cluster. This process
can take a short time, typically less than a minute. The SAN Volume Controller Console reattaches
to the new configuration node transparently.
- If you turn the power on to the node that has been removed and
it is still connected to the same fabric or zone, it attempts to rejoin
the cluster. At this point, the cluster tells the node to remove itself
from the cluster and the node becomes a candidate for addition to
this cluster or another cluster.
- If you later add this node back into the cluster, ensure that you
add it to the same I/O group that it was previously a member of. Failure
to do so can result in data corruption.
- In the portfolio, click . The Viewing Nodes panel is displayed.
- Find the node that you
want to delete.
If the node that you want to delete
is shown as Offline, then the node is
not participating in the cluster.
If the node that you want
to delete is shown as Online, deleting
the node can cause its dependent VDisks to also go offline. Verify
whether the node has any dependent VDisks.
- To check for dependent
VDisks before attempting to delete the node, select the node and click Show
Dependent VDisks from the drop-down menu.
If
any VDisks are listed, determine why, and whether access to the
VDisks is required while the node is deleted from the cluster. If
the VDisks are assigned from MDisk groups that contain solid-state drives (SSDs) that
are located in the node, check why the VDisk mirror, if
it is configured, is not synchronized. Dependent VDisks can also exist
because the partner node in the I/O group is offline, or because fabric
issues prevent the VDisk from communicating with the storage
systems. Resolve these problems before continuing with
the node deletion.
- Select the node that you want to delete and select Delete
a Node from the task list. Click Go.
The Deleting Node from Cluster panel is displayed.
- Click OK to
delete the node. Before a node is deleted, the SAN Volume Controller checks whether any virtual disks (VDisks)
depend on that node. If the node that you selected has dependent VDisks
in either of the following situations, the VDisks go offline and become unavailable
when the node is deleted:
- The node contains solid-state drives (SSD) and also contains the
only synchronized copy of a mirrored VDisk
- The other node in the I/O group is offline
If the node that you select to delete has these dependencies,
another panel is displayed to confirm the deletion. To delete the node
in this case, click Force Delete on the message panel
that is displayed.
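The dependency check and deletion steps above can also be performed from the command line. The following is a hedged sketch under stated assumptions: `svcinfo lsnodedependentvdisks` and `svctask rmnode` are the CLI commands at typical SVC code levels, but exact names and flags vary by release, and `cluster_ip` and `node1` are placeholders.

```shell
# Sketch of a CLI-equivalent sequence (assumes SSH access to the cluster;
# command names and the node name node1 are assumptions, not verified here).
NODE=node1

# List any VDisks that would go offline if this node were deleted.
ssh admin@cluster_ip "svcinfo lsnodedependentvdisks $NODE"

# If no dependent VDisks are listed, delete the node.
ssh admin@cluster_ip "svctask rmnode $NODE"

# If dependencies exist and you accept the resulting loss of access,
# the deletion can be forced (equivalent to Force Delete in the Console):
# ssh admin@cluster_ip "svctask rmnode -force $NODE"
```

As in the Console procedure, prefer resolving the dependencies first; the forced form should be a last resort because the dependent VDisks go offline.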