You can use the command-line interface (CLI) to remove
a node from a cluster.
After the node is deleted, the other node in the I/O group
enters write-through mode until another node is added back into the
I/O group.
By default, the rmnode command
flushes the cache on the specified node before taking the node offline.
Even when it is operating in a degraded state, the SAN Volume Controller ensures
that data loss does not occur as a result of deleting the only node
that holds the cached data.
Attention: - If you are removing a single node and the remaining node in the
I/O group is online, the data is exposed to a single point of
failure if the remaining node fails.
- If both nodes in the I/O group are online and the VDisks are already
degraded before you delete the node, redundancy to the VDisks is already
degraded, and using the -force option might cause loss of access to data
or loss of data.
- Removing the last node in the cluster destroys the cluster. Before
you delete the last node in the cluster, ensure that you want to destroy
the cluster.
- To take the specified node offline immediately, without
flushing the cache or ensuring that data loss does not occur, run the rmnode command
with the -force parameter. The -force parameter forces the command
to continue even though node-dependent VDisks
will be taken offline. Use the -force parameter
with caution; access to data on node-dependent VDisks will be lost.
Perform the following steps to delete a node:
- If you are deleting the last node in an I/O
group, determine the VDisks that are still assigned to this I/O group:
- Issue the following CLI command to request a filtered
view of the VDisks:
svcinfo lsvdisk -filtervalue IO_group_name=name
Where name is
the name of the I/O group.
- Issue the following CLI command to list the hosts that
this VDisk is mapped to:
svcinfo lsvdiskhostmap vdiskname/id
Where vdiskname/id is
the name or ID of the VDisk.
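The two substeps above can be sketched together as follows. The I/O group name io_grp0 and the VDisk names are hypothetical, and the here-document stands in for real output of svcinfo lsvdisk -delim : -filtervalue IO_group_name=io_grp0 (the -delim : option and the column order are assumptions; check your release's command reference):

```shell
# Hypothetical filtered lsvdisk output for I/O group "io_grp0".
# Real output would come from an SSH session to the cluster.
filtered=$(cat <<'EOF'
0:vdisk0:0:io_grp0:online
1:vdisk7:0:io_grp0:online
EOF
)

# Column 2 is the VDisk name; print the host-mapping query for each one.
names=$(echo "$filtered" | cut -d: -f2)
for vdisk in $names; do
  echo "svcinfo lsvdiskhostmap $vdisk"
done
```

Each printed line is the lsvdiskhostmap command that you would then run against the cluster for that VDisk.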
- If VDisks are assigned to this I/O group that contain data
that you want to continue to access, back up the data or migrate the
VDisks to a different (online) I/O group.
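As a sketch of the migration step, the following builds the command for moving a hypothetical VDisk vdisk7 to an online I/O group io_grp1. The chvdisk -iogrp form is an assumption about the CLI for this release; verify the exact command and its restrictions (such as quiescing host I/O) in your release's command reference before using it:

```shell
# Hypothetical names; the "chvdisk -iogrp" syntax is an assumption.
VDISK="vdisk7"
TARGET_IOGRP="io_grp1"
cmd="svctask chvdisk -iogrp ${TARGET_IOGRP} ${VDISK}"
echo "$cmd"
```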
- If this is not the last node in the cluster, turn
the power off to the node that you intend to remove. This step ensures
that the multipathing device driver, such as the subsystem
device driver (SDD),
does not rediscover the paths that are manually removed before you
issue the delete node request.
Attention: - If you are removing the configuration node, the rmnode command
causes the configuration node to move to a different node within the
cluster. This process may take a short time, typically less than a
minute. The cluster IP address remains unchanged, but any SSH client
attached to the configuration node might need to reestablish a connection.
The SAN Volume Controller Console reattaches
to the new configuration node transparently.
- If you turn on the power to the node that has been removed and
it is still connected to the same fabric or zone, it attempts to rejoin
the cluster. At this point, the cluster causes the node to remove
itself from the cluster and the node becomes a candidate for addition
to this cluster or another cluster.
- If you are adding this node into the cluster, ensure that you
add it to the same I/O group that it was previously a member of. Failure
to do so can result in data corruption.
- In a service
situation, a node should normally be added back into a cluster by using
the original node name. As long as the partner node in the I/O group
has not also been deleted, this is the default name that is used if the -name
parameter is not specified.
- Before you delete the node, update the multipathing device
driver configuration on the host to remove all device identifiers
that are presented by the VDisk that you intend to remove. If you
are using the subsystem device driver (SDD), the device identifiers
are referred to as virtual paths (vpaths).
Attention: Failure to perform this step can result in data
corruption.
See the IBM® System Storage® Multipath
Subsystem Device Driver User's
Guide for
details about how to dynamically reconfigure SDD for
the given host operating system.
- Issue the following CLI command to delete a node from the
cluster:
Attention: Before you delete the node: The
rmnode command
checks for node-dependent VDisks that are not mirrored at the time
that the command is run. If any node-dependent VDisks are found, the
command stops and returns a message. To continue removing the node
despite the potential loss of data, run the rmnode command with the
-force parameter. Alternatively, follow these steps before you remove
the node to ensure that all VDisks are mirrored:
- Run the lsnodedependentvdisks command.
- For each node-dependent VDisk that is returned, run the lsvdisk command.
- Ensure that each VDisk reports an in-sync status.
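The mirror check in the steps above can be sketched as follows. The here-document stands in for real svcinfo lsvdisk output for one node-dependent VDisk; the copy_id and sync field names are assumptions about the output format for this release:

```shell
# Hypothetical per-copy detail for one VDisk; real output would come
# from "svcinfo lsvdisk <vdisk_name_or_id>" over SSH.
copy_status=$(cat <<'EOF'
copy_id 0
sync yes
copy_id 1
sync yes
EOF
)

# If any copy reports "sync no", the VDisk is not yet in sync and the
# node must not be removed without -force.
if echo "$copy_status" | grep -q '^sync no'; then
  result="not in sync: do not remove the node yet"
else
  result="all copies in sync"
fi
echo "$result"
```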
svctask rmnode node_name_or_id
Where node_name_or_id is
the name or ID of the node.
Note: Before
removing a node, the command checks for any node-dependent VDisks
that would go offline. If the node that you selected to delete contains
a solid-state drive (SSD) that
has dependent VDisks, VDisks that use the SSDs go
offline and become unavailable if the node is deleted. To maintain
access to VDisk data, mirror these VDisks before removing the node.
To continue removing the node without mirroring the VDisks, specify
the -force parameter.
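Putting the final step together: the following sketch runs rmnode only when the dependency check comes back clean. The node name node2 is hypothetical, and the empty here-document stands in for svcinfo lsnodedependentvdisks (the command from the steps above) returning no VDisks:

```shell
# Empty here-document simulates "svcinfo lsnodedependentvdisks node2"
# reporting no node-dependent VDisks; real output would come over SSH.
dependents=$(cat <<'EOF'
EOF
)

# Only remove the node when nothing depends on it; otherwise mirror the
# VDisks first (or consciously accept data loss with -force).
if [ -z "$dependents" ]; then
  cmd="svctask rmnode node2"
else
  cmd="node-dependent VDisks remain: mirror them before removing the node"
fi
echo "$cmd"
```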