Restoring the cluster configuration using the CLI

You can restore your cluster configuration data using the command-line interface (CLI).

Attention: This procedure is designed to restore information about your cluster configuration, such as virtual disks (VDisks), local Metro Mirror information, local Global Mirror information, managed disk (MDisk) groups, and nodes. All other data that you have written to the VDisks is not restored. To restore the data on the VDisks, you must separately restore the application data for any application that uses the VDisks on the cluster as storage. Therefore, it is essential that you have a backup of this data before you follow the cluster configuration recovery process.

You must regularly back up your cluster configuration data and your application data to avoid data loss. If a cluster is lost after a severe failure occurs, both the cluster configuration and the application data are lost. You must reinstate the cluster to the exact state that it was in before the failure and then recover the application data.

Following this procedure without a backup copy of application data will result in data loss. If you do not have a backup copy of application data, contact the IBM® Support Center. The IBM Support Center has an alternative procedure that can restore the cluster while preserving most of the application data.

Important: There are two phases during the restore process: prepare and execute. You must not make any changes to the fabric or cluster between these two phases.

Complete the following steps to restore your cluster configuration data:

  1. Select delete cluster from the front panel on each node in the cluster that does not display Cluster : on the front panel. If the front panel of the node displays Cluster :, the node is already a candidate node.
  2. Create a new cluster from the front panel of any node in the cluster. If possible, use the node that was originally the configuration node for the cluster.
  3. Generate an SSH key pair for all of the hosts to use to access the CLI.
  4. Start the SAN Volume Controller Console.
  5. On the Viewing Clusters panel, select the cluster that you are recovering from the list. Select Remove the Cluster from the task list and click Go. The Remove Cluster panel displays. Click Yes to confirm the removal of the cluster. The Viewing Clusters panel displays.
  6. On the Viewing Clusters panel, select Add a Cluster from the task list and click Go. The Adding a Cluster panel displays. From this panel, initialize the new cluster by completing these steps:
    1. Enter the IP address for the cluster that you are recovering. Select Create (Initialize) Cluster. Click OK.
    2. The Sign on to Cluster panel appears. On this panel, enter the user name superuser and the password that was set when the cluster was created in step 2.
    3. Configure the new cluster with required settings. For details, see the information about creating a new cluster.
  7. To work with the command-line interface to finish restoring the cluster, assign an SSH key to the user that has Security Administrator role on the cluster by completing these steps:
    1. On the Viewing Cluster panel, select the new cluster and select Launch SAN Volume Controller Console from the task list and click Go.
    2. Click Manage Authentication > Users in the portfolio. The Modifying User superuser panel is displayed.
    3. To optionally modify the password, enter a new password in the New Password field. In the Re-Enter New Password field, retype the new password.
    4. To assign the SSH key that you generated in step 3 to the user, enter the name of the SSH key file in the SSH Key Public File field or click Browse to select the file.
    5. Click OK.
  8. Using the command-line interface, issue the following command to log on to the cluster:
    ssh -l admin your_cluster_name -p 22

    Where your_cluster_name is the name of the cluster for which you want to restore the cluster configuration.

    Note: Because the RSA host key has changed, a warning message displays when connecting to the cluster using SSH.
  9. Issue the following CLI command to ensure that only the configuration node is online:
    svcinfo lsnode

    The following is an example of the output that is displayed:

    id  name   status  IO_group_id  IO_group_name  config_node
    1   node1  online  0            io_grp0        yes
  10. Verify that the most recent version of your /tmp/svc.config.backup.xml configuration file has been copied to your IBM System Storage Productivity Center (SSPC). The most recent file is located on your configuration node in the /tmp or /dumps directory. In addition, a /dumps/svc.config.cron.xml_node_name configuration file is created daily on the configuration node. In certain cases, you might prefer to copy an earlier configuration file. If necessary, back up your configuration file as described in the information about backing up the cluster configuration using the CLI.
  11. Issue the following CLI command to remove all of the existing backup and restore cluster configuration files that are located on your configuration node in the /tmp directory:
    svcconfig clear -all
  12. Copy the svc.config.backup.xml file from the IBM System Storage® Productivity Center or master console to the /tmp directory of the cluster using the PuTTY pscp program. Perform the following steps to use the PuTTY pscp program to copy the file:
    1. Open a command prompt from the IBM System Storage Productivity Center or master console.
    2. Set the path in the command line to use pscp with the following format:
      set PATH=C:\path\to\putty\directory;%PATH%
    3. Issue the following command to copy the file, using the -i option to specify the location of your private SSH key for authentication:
      pscp -i private_key_location source [user@]host:target

      where source is the location of the svc.config.backup.xml file and [user@]host:target specifies the /tmp directory on the cluster.
  13. If the cluster contains any SAN Volume Controller 2145-CF8 nodes with internal solid-state drives (SSDs), these nodes must be added to the cluster now. To add these nodes, determine the panel name, node name, and I/O groups of any such nodes from the configuration backup file. To add the nodes to the cluster, issue this command:
    svctask addnode -panelname panel_name 
    -iogrp iogrp_name_or_id -name node_name
    where panel_name is the name that is displayed on the panel, iogrp_name_or_id is the name or ID of the I/O group to which you want to add this node, and node_name is the name of the node.
  14. Issue the following CLI command to compare the current cluster configuration with the backup configuration data file:
    svcconfig restore -prepare

    This CLI command creates a log file in the /tmp directory of the configuration node. The name of the log file is svc.config.restore.prepare.log.

    Note: It can take up to a minute for each 256-MDisk batch to be discovered. If you receive error message CMMVC6119E for an MDisk after you enter this command, all the managed disks (MDisks) might not have been discovered yet. Allow a suitable time to elapse and try the svcconfig restore -prepare command again.
  15. Issue the following command to copy the log file to another server that is accessible to the cluster:
    pscp -i private_key_location [user@]host:source target
  16. Open the log file from the server where the copy is now stored.
  17. Check the log file for errors.
    • If there are errors, correct the condition that caused the errors and reissue the command. You must correct all errors before you can proceed to step 18.
    • If you need assistance, contact the IBM Support Center.
  18. Issue the following CLI command to restore the cluster configuration:
    svcconfig restore -execute
    Note: Issuing this CLI command on a single node cluster adds the other nodes and hosts to the cluster.

    This CLI command creates a log file in the /tmp directory of the configuration node. The name of the log file is svc.config.restore.execute.log.

  19. Issue the following command to copy the log file to another server that is accessible to the cluster:
    pscp -i private_key_location [user@]host:source target
  20. Open the log file from the server where the copy is now stored.
  21. Check the log file to ensure that no errors or warnings have occurred.
    Note: You might receive a warning that states that a licensed feature is not enabled. This means that after the recovery process, the current license settings do not match the previous license settings. The recovery process continues normally and you can enter the correct license settings in the SAN Volume Controller Console at a later time.

    The following output is displayed after a successful cluster configuration restore operation:

    IBM_2145:your_cluster_name:admin>
  22. After the cluster configuration is restored, verify that the quorum disks are restored to the MDisks that you want by using the svcinfo lsquorum command. To restore the quorum disks to the correct MDisks, issue the appropriate svctask setquorum CLI commands.
You can remove any unwanted configuration backup and restore files from the /tmp directory on your configuration node by issuing the svcconfig clear -all CLI command.
Note: The recovery process does not re-create the superuser password and SSH keys. Ensure that the superuser password and SSH keys are created again before managing the recovered cluster.
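The log checks in steps 16, 17, 20, and 21 reduce to scanning the copied log files for CMMVC message codes. The following shell sketch shows one way to automate that check on the workstation where a log was copied. The sample log line is illustrative only; the check relies on the SAN Volume Controller convention that message codes ending in E are errors and codes ending in W are warnings.

```shell
# Illustrative sketch only: create a sample prepare log. A real log is the
# svc.config.restore.prepare.log file that is copied from the cluster in step 15.
cat > svc.config.restore.prepare.log <<'EOF'
CMMVC6112W io_grp io_grp1 has a default name
EOF

# CMMVC codes that end in E are errors; codes that end in W are warnings.
# All errors must be corrected before svcconfig restore -execute is issued.
if grep -E 'CMMVC[0-9]+E' svc.config.restore.prepare.log; then
    echo "Errors found: correct them and reissue svcconfig restore -prepare"
else
    echo "No errors found in the prepare log"
fi
```

Warnings, such as the licensed-feature warning described in step 21, do not necessarily block the restore; only lines with error codes must be resolved before proceeding.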
© Copyright IBM Corporation 2003, 2009. All Rights Reserved.