For intracluster partnerships, all clusters are candidates for Metro
Mirror or Global Mirror operations.
For intercluster partnerships, the two clusters must be connected by
one or more moderately high-bandwidth links.
Figure 1 shows
an example of a configuration that uses dual redundant fabrics. Part
of each fabric is located at the local cluster and part at the remote cluster.
There is no direct connection between the two fabrics.
Figure 1. Redundant fabrics
You can use fibre-channel extenders or SAN routers to
increase the distance between two clusters. Fibre-channel extenders
transmit fibre-channel packets across long links without changing
the contents of the packets. SAN routers provide virtual nPorts on
two or more SANs to extend the scope of the SAN. The SAN router distributes
the traffic from one virtual nPort to the other virtual nPort. The
two fibre-channel fabrics are independent of each other. Therefore,
nPorts on each of the fabrics cannot directly log in to each other.
See the following Web site for specific firmware levels and the latest
supported hardware:
www.ibm.com/storage/support/2145
If
you use fibre-channel extenders or SAN routers, you must meet the
following requirements:
- For SAN Volume Controller software
level 4.1.0, the round-trip latency between sites cannot exceed 68
ms for fibre-channel extenders or 20 ms for SAN routers.
- For SAN Volume Controller software
level 4.1.1 or higher, the round-trip latency between sites cannot
exceed 80 ms for either fibre-channel extenders or SAN routers.
- The configuration must be tested with the expected peak workloads.
- Metro
Mirror and Global Mirror require
a specific amount of bandwidth for intercluster heartbeat traffic.
The amount of traffic depends on the number of nodes that are in both
the local cluster and the remote cluster. Table 1 lists
the intercluster heartbeat traffic for the primary cluster and the
secondary cluster. These numbers represent the total traffic between
two clusters when there are no I/O operations running on the copied
VDisks. Half of the data is sent by the primary cluster and half by
the secondary cluster. The traffic is divided evenly between all of
the available intercluster links; if you have two redundant links,
half of the traffic is sent over each link.
Table 1. Intercluster heartbeat traffic in Mbps

| Cluster 1 \ Cluster 2 | 2 nodes | 4 nodes | 6 nodes | 8 nodes |
| 2 nodes | 2.6 | 4.0 | 5.4 | 6.7 |
| 4 nodes | 4.0 | 5.5 | 7.1 | 8.6 |
| 6 nodes | 5.4 | 7.1 | 8.8 | 10.5 |
| 8 nodes | 6.7 | 8.6 | 10.5 | 12.4 |
- The bandwidth between the two sites must meet the peak workload
requirements while keeping the round-trip latency between the sites
within the stated maximum. When you evaluate the workload requirement,
you must consider the average write workload over a period of one
minute or less and the required synchronization copy bandwidth. If
there are no active synchronization copies and no write I/O operations
for VDisks that are in a Metro Mirror or Global Mirror relationship,
the SAN Volume Controller protocols operate with the bandwidth that
is indicated in Table 1. However, to determine the actual bandwidth
that the link requires, add the peak write bandwidth to VDisks that
participate in Metro Mirror or Global Mirror relationships to the
peak synchronization bandwidth.
- If the link between the two sites is configured with redundancy so
that it can tolerate single failures, the link must be sized so that
the bandwidth and latency requirements are still met during
single-failure conditions.
- Long-distance links must not be used between nodes within a single
cluster. Configurations that use long-distance links in a single cluster
are not supported and can cause I/O errors and loss of access.
- The configuration must be tested to confirm that any failover mechanisms
in the intercluster links interoperate satisfactorily with the SAN Volume Controller.
- All other SAN Volume Controller configuration
requirements must be met.
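The sizing rules above can be illustrated with a short Python sketch. This is a hypothetical helper, not an IBM tool: the function names are invented for illustration, the heartbeat figures are the Table 1 totals (in Mbps, with no I/O running on the copied VDisks), and the latency limits are those stated for software levels 4.1.0 and 4.1.1.

```python
# Hypothetical sizing helpers; names and structure are illustrative only.
# Heartbeat figures are the Table 1 totals (Mbps, no I/O on the copied
# VDisks); latency limits are those stated for levels 4.1.0 and 4.1.1.
HEARTBEAT_MBPS = {
    (2, 2): 2.6, (2, 4): 4.0, (2, 6): 5.4, (2, 8): 6.7,
    (4, 4): 5.5, (4, 6): 7.1, (4, 8): 8.6,
    (6, 6): 8.8, (6, 8): 10.5,
    (8, 8): 12.4,
}

def heartbeat_mbps(local_nodes: int, remote_nodes: int) -> float:
    """Total intercluster heartbeat traffic (the table is symmetric)."""
    a, b = sorted((local_nodes, remote_nodes))
    return HEARTBEAT_MBPS[(a, b)]

def required_link_mbps(local_nodes: int, remote_nodes: int,
                       peak_write_mbps: float,
                       peak_sync_mbps: float) -> float:
    """Peak write bandwidth plus peak synchronization bandwidth,
    with the idle heartbeat traffic as the floor."""
    floor = heartbeat_mbps(local_nodes, remote_nodes)
    return max(floor, peak_write_mbps + peak_sync_mbps)

def max_round_trip_ms(software_level: str, san_router: bool) -> int:
    """Round-trip latency limit between the sites."""
    if software_level == "4.1.0":
        return 20 if san_router else 68
    return 80  # 4.1.1 or higher, extenders or routers alike
```

For example, a pair of 4-node clusters with a 200 Mbps peak write workload and 50 Mbps of synchronization copy traffic would need at least 250 Mbps between the sites. With two redundant links, each link normally carries half of that, but each link must still be sized for the full load to meet the single-failure requirement.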
Limitations on host-to-cluster distances
There is no limit on the fibre-channel optical distance
between SAN Volume Controller nodes
and host servers. You can attach a server to an edge switch in a core-edge
configuration with the SAN Volume Controller cluster
at the core. SAN Volume Controller clusters
support up to three ISL hops in the fabric. Therefore, the host server
and the SAN Volume Controller cluster
can be separated by up to five fibre-channel links. If you use longwave
SFPs, four of the fibre-channel links can be up to 10
km long.
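The hop-count arithmetic above can be sketched as follows. This is a hypothetical check with invented helper names, assuming the path counts one host-to-switch link, the ISL hops, and one switch-to-node link.

```python
# Hypothetical check of the host-to-cluster path limits described above:
# up to three ISL hops, and therefore up to five fibre-channel links
# (host-to-switch link + ISL hops + switch-to-node link).
MAX_ISL_HOPS = 3

def links_in_path(isl_hops: int) -> int:
    """One host-to-switch link, the ISL hops, one switch-to-node link."""
    return isl_hops + 2

def path_is_supported(isl_hops: int) -> bool:
    """True if the hop count is within the supported three-hop limit."""
    return 0 <= isl_hops <= MAX_ISL_HOPS
```

With the maximum three ISL hops, the path contains five links, which matches the five-link separation stated above.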