The background copy bandwidth determines the rate at which the background copy for Metro Mirror or Global Mirror Copy Services is attempted.
The background copy bandwidth can affect foreground I/O latency in one of three ways (a short sketch after the following list illustrates these cases):
- If the background copy bandwidth is set too high for the intercluster link capacity, the following results can occur:
  - The background copy I/Os can back up on the intercluster link.
  - For Metro Mirror, there is a delay in the synchronous secondary writes of foreground I/Os.
  - For Global Mirror, the work is backlogged, which delays the processing of writes and causes the relationship to stop.
  - The foreground I/O latency increases as detected by applications.
- If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.
- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary site overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os. For Global Mirror, the work is backlogged and again the relationship is stopped.
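A minimal sketch of these three cases, written in Python with illustrative capacity figures (the function name and all numbers are assumptions, not values reported by the SAN Volume Controller), is:

    def overloaded_resources(background_copy_mbps, link_mbps,
                             primary_read_mbps, secondary_write_mbps):
        """Report which resources a proposed background copy bandwidth would
        overload, mirroring the three cases listed above."""
        overloaded = []
        if background_copy_mbps > link_mbps:
            overloaded.append("intercluster link")    # copy I/Os back up on the link
        if background_copy_mbps > primary_read_mbps:
            overloaded.append("primary storage")      # copy reads overload the primary
        if background_copy_mbps > secondary_write_mbps:
            overloaded.append("secondary storage")    # copy writes overload the secondary
        return overloaded

    # Hypothetical capacities: 200 MBps link, 300 MBps primary reads,
    # 150 MBps secondary writes, and a proposed 250 MBps background copy rate.
    print(overloaded_resources(250, 200, 300, 150))
    # -> ['intercluster link', 'secondary storage']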
To set the background copy bandwidth optimally, you must consider all three resources (the primary storage, the intercluster link bandwidth, and the secondary storage). The most restrictive of these three resources must be provisioned to carry both the background copy bandwidth and the peak foreground I/O workload. You must also consider concurrent host I/O: if other write operations arrive at the primary cluster for copy to the remote site, a high level of background copy can delay these write operations, and the hosts at the primary site receive poor write-operation response times. This provisioning can be done by the calculation above or by determining how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to allow for peaks in workload and some safety margin.
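This back-off calculation can be sketched as follows, assuming hypothetical throughput figures and a 20 percent safety margin (none of these values come from the SAN Volume Controller; they only illustrate the sizing logic):

    def safe_background_copy_mbps(link_mbps, primary_read_mbps,
                                  secondary_write_mbps, peak_foreground_mbps,
                                  safety_margin=0.2):
        """Size the background copy bandwidth against the most restrictive
        resource, leaving room for the peak foreground workload plus a margin."""
        most_restrictive = min(link_mbps, primary_read_mbps, secondary_write_mbps)
        headroom = most_restrictive - peak_foreground_mbps
        return max(0, headroom * (1 - safety_margin))

    # Hypothetical figures: 400 MBps link, 600 MBps primary reads,
    # 350 MBps secondary writes, 150 MBps peak foreground writes.
    print(safe_background_copy_mbps(400, 600, 350, 150))  # -> 160.0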
Example
If the bandwidth setting at the primary site for the secondary cluster is set to 200 MBps (megabytes per second) and the relationships are not synchronized, the SAN Volume Controller attempts to resynchronize the relationships at a maximum rate of 200 MBps, with a 25 MBps restriction for each individual relationship (the sketch after the following list illustrates this arithmetic). The SAN Volume Controller cannot resynchronize the relationships at this rate if throughput is restricted by any of the following factors:
- The read response time of the back-end storage at the primary cluster
- The write response time of the back-end storage at the secondary site
- Intercluster link latency
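The interaction between the 200 MBps partnership setting and the 25 MBps per-relationship restriction can be illustrated as follows (the relationship counts are assumptions, and the rate actually achieved can be further reduced by the restricting factors listed above):

    def aggregate_resync_rate_mbps(active_relationships,
                                   partnership_limit_mbps=200,
                                   per_relationship_limit_mbps=25):
        """Maximum aggregate resynchronization rate: each relationship is capped
        individually, and the total is capped by the partnership setting."""
        return min(partnership_limit_mbps,
                   active_relationships * per_relationship_limit_mbps)

    # With 4 relationships the per-relationship cap dominates: 4 * 25 = 100 MBps.
    print(aggregate_resync_rate_mbps(4))   # -> 100
    # With 10 relationships the partnership setting dominates: min(200, 250) = 200 MBps.
    print(aggregate_resync_rate_mbps(10))  # -> 200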