To use the Global Mirror feature, all components in the
SAN must be capable of sustaining the workload that is generated by
application hosts and the Global Mirror background copy process. If
all of the components in the SAN cannot sustain the workload, the
Global Mirror relationships are automatically stopped to protect your
application hosts from increased response times.
When you use the Global Mirror feature, follow these best practices:
- Use IBM® Tivoli® Storage Productivity Center or an equivalent SAN
performance analysis tool to monitor your SAN environment. IBM Tivoli
Storage Productivity Center provides an easy way to analyze the SAN
Volume Controller performance statistics.
- Analyze the SAN Volume Controller performance
statistics to determine the peak application write workload that the
link must support. Gather statistics over a typical application I/O
workload cycle.
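As an illustration of this analysis, the peak write workload can be derived from throughput samples gathered over one workload cycle. The sample values and variable names below are hypothetical, not actual Productivity Center output; the link must be sized for the peak, not the average, because Global Mirror mirrors every host write:

```python
# Hypothetical write-throughput samples (MB/s) collected over one
# typical application I/O workload cycle.
write_mbps_samples = [42.0, 55.5, 61.2, 150.8, 148.3, 73.0, 40.1]

# The intercluster link must sustain the peak write workload, because
# every host write to a Global Mirror VDisk crosses the link.
peak_write_mbps = max(write_mbps_samples)
avg_write_mbps = sum(write_mbps_samples) / len(write_mbps_samples)

print(f"peak write workload: {peak_write_mbps:.1f} MB/s")
print(f"average write workload: {avg_write_mbps:.1f} MB/s")
```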
- Set the background copy rate to a value that the intercluster link
and the back-end storage controllers at the remote cluster can
support.
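A minimal sketch of the sizing check this implies, treating host writes and background copy as simple additive bandwidth on the link. All names and numbers here are hypothetical, not part of the product:

```python
def link_can_sustain(link_mbps, peak_write_mbps, background_copy_mbps):
    """Return True if the intercluster link can carry the peak host
    write workload plus the Global Mirror background copy traffic."""
    return peak_write_mbps + background_copy_mbps <= link_mbps

# Hypothetical figures: a 200 MB/s link and a 150 MB/s peak write load.
print(link_can_sustain(200, 150, 25))   # background copy fits
print(link_can_sustain(200, 150, 75))   # background copy oversubscribes
```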
- Do not use cache-disabled VDisks in Global Mirror relationships.
- Set the gmlinktolerance parameter to an appropriate value. This
parameter controls how long the link is permitted to be overloaded
before the Global Mirror relationships are stopped. The default value
is 300 seconds (5 minutes).
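The stop-on-overload behavior described above can be sketched as follows. This is an illustrative model of the assumed policy, not the actual SAN Volume Controller implementation; a tolerance of 0 is assumed here to mean the feature is disabled:

```python
def should_stop_relationships(overload_seconds, gmlinktolerance=300):
    """Sketch of the assumed gmlinktolerance policy: if the link has
    been unable to sustain the workload for at least the tolerance
    window, relationships are stopped to protect host response times.
    A tolerance of 0 is treated as 'feature disabled' (assumption)."""
    if gmlinktolerance == 0:
        return False
    return overload_seconds >= gmlinktolerance

print(should_stop_relationships(120))     # within the 300 s tolerance
print(should_stop_relationships(360))     # tolerance exceeded
print(should_stop_relationships(360, 0))  # feature disabled
```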
- When you perform SAN maintenance tasks, take one of the following
actions:
  - Reduce the application I/O workload for the duration of the
  maintenance task.
  - Disable the gmlinktolerance feature or increase the gmlinktolerance
  value. Note: If you increase the gmlinktolerance value during the
  maintenance task, do not restore it to its normal value until the
  task is complete. If you disable the gmlinktolerance feature for the
  duration of the task, enable it again after the task is complete.
  - Stop the Global Mirror relationships.
- Evenly distribute the preferred nodes for the Global Mirror VDisks
between the nodes in the clusters. Each VDisk in an I/O group has a
preferred node property that can be used to balance the I/O load
between the nodes in the I/O group. The Global Mirror feature also
uses this property to route I/O operations between clusters. The node
that receives a write for a VDisk is normally the preferred node for
that VDisk; if the VDisk is in a Global Mirror relationship, that node
is responsible for sending the write to the preferred node of the
secondary VDisk. By default, the preferred node of a new VDisk is the
node that owns the fewest VDisks of the two nodes in the I/O group.
Because each node in the remote cluster has a fixed pool of Global
Mirror system resources for each node in the local cluster, you can
maximize Global Mirror performance by setting the preferred nodes for
the VDisks so that every combination of primary node and secondary
node is used.
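The distribution above can be sketched as a simple round-robin over (primary, secondary) node pairs. The node and VDisk names are hypothetical, and real assignments are made with the product's own tools; this only illustrates covering every combination:

```python
from itertools import cycle, product

# Hypothetical two-node I/O groups on the local and remote clusters.
local_nodes = ["local_node1", "local_node2"]
remote_nodes = ["remote_node1", "remote_node2"]

# Cycle through every (primary, secondary) preferred-node combination
# so each per-node pool of Global Mirror resources is exercised.
combos = cycle(product(local_nodes, remote_nodes))

vdisks = ["vdisk0", "vdisk1", "vdisk2", "vdisk3"]
assignment = {vdisk: next(combos) for vdisk in vdisks}
for vdisk, (primary, secondary) in assignment.items():
    print(f"{vdisk}: primary={primary}, secondary={secondary}")
```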