The rmnode command deletes a node from
the cluster. You can enter this command any time after a cluster has
been created.
Syntax

>>- svctask -- rmnode --+----------+---------------------------->
                        '- -force -'

>--+- node_name -+---------------------------------------------><
   '- node_id ---'
Parameters
- -force
- (Optional) Overrides the following two checks that this command
normally runs:
  - If the command results in VDisks going offline, the command fails
    unless the -force parameter is used.
  - If the command results in a loss of data because there is unwritten
    data in the write cache that is contained only within the node to
    be removed, the command fails unless the -force parameter is used.
  If you use the -force parameter in response to an error about VDisks
  going offline, you force the node removal and risk losing data from
  the write cache. Always use the -force parameter with caution.
- node_name | node_id
- Specifies the node to be deleted. The value for this parameter can
be one of the following:
- The node name that you assigned when you added the node
to the cluster
- The node ID that is assigned to the node (not the worldwide
node name).
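To illustrate the two forms of the positional parameter, the following sketch assumes a cluster that contains a node named node5 with node ID 5 (both identifiers are hypothetical). Run one command or the other, not both; each deletes the same node:

```shell
# Delete the node by the name assigned when it was added to the cluster
svctask rmnode node5

# Equivalent: delete the node by its node ID (not its worldwide node name)
svctask rmnode 5
```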
Description
This command removes a node from the cluster, which makes the node a
candidate to be added back into this cluster or into another cluster.
After the node is deleted, the other node in the I/O group enters
write-through mode until another node is added back into the I/O group.
By default, the rmnode command flushes the cache on the specified node
before the node is taken offline. In some circumstances, such as when
the system is already degraded (for example, when both nodes in the
I/O group are online and the virtual disks within the I/O group are
degraded), the system ensures that data loss does not occur as a result
of deleting the only node with the cache data. The cache is flushed
before the node is deleted to prevent data loss if a failure occurs on
the other node in the I/O group.
To take the specified node offline immediately, without flushing the
cache or ensuring that data loss does not occur, run the rmnode
command with the -force parameter.
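The default and forced behaviors can be sketched as follows; node1 is a hypothetical node name:

```shell
# Default: flush the write cache on node1, then take the node offline
svctask rmnode node1

# Immediate removal: skip the cache flush (risk of data loss)
svctask rmnode -force node1
```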
Prerequisites:
Before you issue the rmnode command,
perform the following tasks and read the following Attention notices
to avoid losing access to data:
1. Determine which virtual disks (VDisks) are still assigned to this
   I/O group by issuing the following command. The command requests a
   filtered view of the VDisks, where the filter attribute is the I/O
   group.
   svcinfo lsvdisk -filtervalue IO_group_name=name
   where name is the name of the I/O group. Note: Any VDisks that are
   assigned to the I/O group that this node belongs to are assigned to
   the other node in the I/O group; the preferred node is changed.
   You cannot change this setting back.
2. Determine the hosts that the VDisks are mapped to by issuing the
   svcinfo lsvdiskhostmap command.
3. Determine if any of the VDisks that are assigned to this I/O group
   contain data that you need to access:
   - If you do not want to maintain access to these VDisks,
     go to step 5.
   - If you do want to maintain access to some or all of the VDisks,
     back up the data or migrate the data to a different (online)
     I/O group.
4. Determine if you need to turn the power off to the node:
   - If this is the last node in the cluster, you do not need to turn
     the power off to the node. Go to step 5.
   - If this is not the last node in the cluster, turn the power off
     to the node that you intend to remove. This step ensures that the
     Subsystem Device Driver (SDD) does not rediscover the paths that
     are manually removed before you issue the delete node request.
5. Update the SDD configuration for each virtual path (vpath) that is
   presented by the VDisks that you intend to remove. Updating the SDD
   configuration removes the vpaths from the VDisks. Failure to update
   the configuration can result in data corruption. See the Multipath
   Subsystem Device Driver: User's Guide for details about how to
   dynamically reconfigure SDD for the given host operating system.
6. Quiesce all I/O operations that are destined for the node that you
   are deleting. Failure to quiesce the operations can result in failed
   I/O operations being reported to your host operating systems.
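The I/O-group filter in the first prerequisite above can be tried offline. The sketch below applies the same kind of filter to sample output, so it can be followed without a live cluster; the VDisk names, I/O group names, and the colon-delimited layout (the shape produced by svcinfo commands when a -delim : parameter is used) are illustrative assumptions:

```shell
#!/bin/sh
# Hypothetical sample lines in the shape of 'svcinfo lsvdisk -delim :'
# output; fields: id:name:IO_group_id:IO_group_name:status
cat > /tmp/lsvdisk_sample.txt <<'EOF'
0:vdisk0:0:io_grp0:online
1:vdisk1:1:io_grp1:online
2:vdisk2:0:io_grp0:degraded
EOF

# Offline equivalent of: svcinfo lsvdisk -filtervalue IO_group_name=io_grp0
# Prints the name of every VDisk still assigned to io_grp0.
awk -F: '$4 == "io_grp0" { print $2 }' /tmp/lsvdisk_sample.txt
```

On a real cluster you would issue svcinfo lsvdisk -filtervalue IO_group_name=io_grp0 directly, and then svcinfo lsvdiskhostmap for each VDisk that the filter reports, to find the affected hosts.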
Attention:
- Removing the last node in the cluster destroys the cluster. Before
you delete the last node in the cluster, ensure that you want to
destroy the cluster.
- If you are removing a single node and the remaining node in the
I/O group is online, the data can be exposed to a single point of
failure if the remaining node fails.
- This command might take some time to complete because the cache in
the I/O group for that node is flushed before the node is removed.
If the -force parameter is used, the cache is not flushed and the
command completes more quickly. However, if the deleted node is the
last node in the I/O group, using the -force option results in the
write cache for that node being discarded rather than flushed, and
data loss can occur. Use the -force option with caution.
- If both nodes in the I/O group are online and the VDisks are already
degraded before you delete the node, redundancy to the VDisks is
already degraded, and loss of access to data and loss of data might
occur if the -force option is used.
Notes:
- If you are removing the configuration node, the rmnode command
causes the configuration node to move to a different node within the
cluster. This process might take a short time, typically less than a
minute. The cluster IP address remains unchanged, but any SSH client
attached to the configuration node might need to reestablish a
connection. The SAN Volume Controller Console reattaches to the new
configuration node transparently.
- If this is the last node in the cluster, or if it is currently
assigned as the configuration node, all connections to the cluster
are lost. The user interface and any open CLI sessions are lost if
the last node in the cluster is deleted. A time-out might occur if a
command cannot be completed before the node is deleted.
An invocation example
svctask rmnode 1
The resulting output
No feedback