This topic provides information about requirements and guidelines
for using the point-in-time copy and remote mirror and copy features of Copy
Services.
It is assumed that you have obtained the information that you need
to activate the Copy Services licenses from the IBM Disk Storage Feature
Activation (DSFA) Web site at http://www.ibm.com/storage/dsfa/ and that,
after obtaining the activation keys, you have entered them in the
DS Storage Manager Web interface.
You can use the
DS CLI or
DS Storage Manager (GUI)
to perform Copy Services tasks.
Note: For a listing of Copy Services commands, see the Command-line interface
section of the DS6000 Information
Center. For a listing of Copy Services tasks that you can perform from the DS Storage Manager,
see the Managing section.
The following
rules apply when using Copy Services functions:
- One or more storage units must be assigned. Ensure that
one or more storage units are configured, assigned, and operating in a normal
state. See “Storage
Units — Main Page” for more information. The number of
required storage units depends on the function. For example, FlashCopy® operations
require one storage unit, but Metro Mirror and Global Mirror require two.
Note: If
you plan to use Remote FlashCopy (known as Inband FlashCopy commands on the ESS 2105),
two storage units are required for this configuration.
- A physical connection must be established between the two storage
units. If you plan to use remote mirror and copy functions, such as Metro
Mirror, Global Copy, or Global Mirror, ensure that a physical connection is
established between the two storage units. Two (or more) storage units can be
connected through a direct Fibre Channel connection or through a switch.
To connect the storage units, it is recommended that you have one cable from
c0 to c0 and one from c1 to c1, for example, and that you have the proper
port topology configuration for those connections. To configure
I/O ports, select the appropriate entry in the navigation.
- Logical configuration must be created. Consider the following
requirements:
- Volume capacity: Ensure that the capacity of your target
volumes is equal to or greater than that of your source volumes. When you select target
volumes from the DS Storage Manager, it verifies that the capacities of the
target volumes are at least as large as those of the source volumes, and it
does not allow you to select smaller target volumes.
Note: Be aware that
for failover and failback operations to complete successfully, the volumes
must be the same size and type.
- Volume quantity: Ensure that you have at least one target
volume of equal or greater capacity for each source volume. You can create
up to 256 volumes per LSS.
- Volume sizes: Volume capacities are configured
using the following conventions:
- Decimal
- 1 GB (10^9) = 1,000,000,000 bytes (ESS 2105 volumes are configured
in decimal format.)
- Binary
- 1 GB (2^30) = 1,073,741,824 bytes (DS volumes are configured
in binary format.)
This method provides volumes that fully use the capacity
in every extent.
- Block
- 1 GB (2^30) = 1,073,741,824 bytes (iSeries™ volumes are configured in this
format.)
This method specifies volume capacity in 512-byte logical
blocks. Supported storage sizes range from 1 to 4G blocks (the capacity
in bytes is the number of blocks times 512).
Note: You must take these gigabyte definitions into account. In many applications,
the source and target of a remote mirror and copy relationship must be exactly
the same size. For example, if you plan to use DS6000 and ESS 2105 volumes for remote mirror and copy
functions, the volumes on the DS6000 must
be created in decimal format to be compatible with ESS volumes.
- Logical subsystem: You can configure
up to 32 LSSs. Each LSS is made up of either CKD or FB volumes; CKD and FB
addresses cannot be mixed in a single LSS. You can, however, have both CKD
and FB LSSs on the same storage unit.
Note: CKD
LSSs are referred to as LCUs in the DS Storage Manager.
- Paths must be created: You must define paths for Metro Mirror,
Global Copy, and Global Mirror functions. Fibre Channel is used as the communications
link between source and target volumes. To create paths, select the appropriate
entry in the navigation; then, from the Select
Action drop-down list, select Create... and
then Go. See Creating
remote mirror and copy paths for more information.
- Relationships must be created: Determine which source and
target volumes you want to pair for Copy Services relationships. To
create relationships, select the appropriate entry in the navigation.
From the Select Action drop-down list, select Create... and then Go. See Creating FlashCopy volume
pairs or Creating
Metro Mirror volume pairs, for example.
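Putting the path and relationship steps together, an equivalent DS CLI sequence might look like the following sketch. All device IDs, the WWNN, LSS numbers, port IDs, and volume IDs are hypothetical placeholders, not values from this document:

```shell
# Run from an interactive dscli session; all IDs below are hypothetical.
# 1. Create remote mirror and copy paths between source and target LSS 01:
mkpprcpath -dev IBM.1750-1300247 -remotedev IBM.1750-1300810 \
    -remotewwnn 5005076300C09517 -srclss 01 -tgtlss 01 I0000:I0000

# 2. Create a Metro Mirror (synchronous) volume pair over those paths:
mkpprc -dev IBM.1750-1300247 -remotedev IBM.1750-1300810 -type mmir 0100:0100

# 3. Or create a local FlashCopy volume pair:
mkflash -dev IBM.1750-1300247 0100:0200
```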
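The three capacity conventions above differ only in the byte count behind "1 GB." As a quick check of the arithmetic (plain shell, not specific to any product):

```shell
# Byte counts behind the three "1 GB" conventions listed above.
decimal_gb=$((10**9))               # ESS 2105: 1,000,000,000 bytes
binary_gb=$((2**30))                # DS volumes: 1,073,741,824 bytes
blocks_per_gb=$((binary_gb / 512))  # iSeries: capacity counted in 512-byte blocks

echo "decimal: $decimal_gb bytes"
echo "binary:  $binary_gb bytes"
echo "blocks:  $blocks_per_gb blocks of 512 bytes per binary GB"
```

The mismatch between the decimal and binary definitions is why DS6000 volumes that are intended for remote mirror and copy with ESS 2105 volumes must be created in decimal format.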
z/OS® Global
Mirror limitation: If you plan to use z/OS Global Mirror (previously known as
Extended Remote Copy or XRC), be aware that a z/OS Global Mirror environment that includes
a DS8000 as a primary storage unit and a DS6000 as a secondary storage unit
is not recommended for failover and failback operations because of the following
limitations:
- Performance mismatch (mirroring)
- If the secondary storage unit (the DS6000) and its connectivity to the
System Data Mover (SDM) that performs the z/OS Global Mirror copying are significantly less
capable (lower performing) than the primary storage unit and its connectivity
to the application systems, overall z/OS Global Mirror performance may be
degraded. That is, if applications can write to the primary
storage units faster than the SDM can write to the secondary storage units,
implementation problems will result. (The SDM is the function that copies
data from the primary storage unit to the secondary storage unit in a z/OS Global
Mirror environment.)
- Performance mismatch (running applications)
- Suppose a disaster or failure occurs and applications fail over to the
secondary (or recovery) site and run using the secondary storage units.
If the secondary storage unit (the DS6000) is less capable (performance-wise)
than the primary storage unit, it is likely that you will not be able
to complete primary business applications in the required or expected time
frame.
- z/OS Global
Mirror-capable local (or primary) storage units
- Suppose a disaster or failure occurs in a z/OS Global Mirror environment and applications
fail over to the secondary site and run there on the
secondary storage units. Later, after the primary site has been repaired and
is ready to resume its role, the secondary storage unit can
use z/OS Global
Mirror to fail back to the primary site. However, for the failover and failback
operations to work successfully, the secondary storage unit must be capable
of acting as a z/OS Global
Mirror primary storage unit. The DS6000 does not have the appropriate microcode
functionality to act as a z/OS Global Mirror primary storage unit, and
therefore cannot be used to fail back to the primary site.
General considerations include: - If you plan to issue DS6000 commands,
you must be at the DS CLI prompt and connected to a storage unit that is used for open
systems or zSeries® host
system storage. The DS CLI enables open systems hosts to invoke and manage FlashCopy and
remote mirror and copy operations through batch processes and scripts. For
more information, see the IBM® System Storage™ DS6000 Command-Line Interface
Guide.
Note: For more complex Copy Services environments, you might find
it easier to invoke and manage Copy Services functions with the DS CLI. With
the DS CLI, you can save commands as scripts, which significantly reduces
the time to create, edit, and verify them.
- When you issue a FlashCopy command with the Initiate background
copy option enabled, the FlashCopy relationship is established, but it is placed
in a queue for background copying. The time at which background copying starts
for the specific relationship depends on the number of FlashCopy volumes that
have begun background copying or are waiting to begin. When the copy starts,
the status displays as "background copy running" for that FlashCopy volume
pair.
How long the actual physical copy takes can depend on the amount of
data being copied and other activity that is occurring on the storage unit.
For information on monitoring when the copy completes, see Viewing information about FlashCopy relationships.
- You should be aware of some FlashCopy data consistency considerations.
For example, there are environments where data is stored in server memory
cache and written to disk at some later time. Buffers for a database management
subsystem (DBMS) or metadata for a journaled file system are two examples
of these environments. If a FlashCopy operation copies a source volume to a
target volume, but buffers from the DBMS or metadata from the
journaled file system are not flushed first, you might have to perform an
incremental update. For a DBMS, you might have to back out of current transactions.
For a journaled file system, you might have to run the fsck utility
on the target volume.
To avoid these types of restart actions, ensure that
all data that is related to the FlashCopy source volume has been written
to disk before you perform the FlashCopy operation. For a DBMS, you
can quiesce the subsystem or use a DBMS command such as DB2’s LOG SUSPEND.
For a journaled file system, you can unmount the source volume before you
perform a FlashCopy operation.
- For FlashCopy operations: If
you are going to automate your FlashCopy procedures, consider verifying the
data consistency on your target volumes frequently. On some systems, such
as AIX®, Windows®,
and Linux®,
before performing FlashCopy operations, you must quiesce your applications
that access FlashCopy source volumes. The source volumes must then be unmounted
during the FlashCopy establishment. This ensures that no unwritten data
remains in buffers that could later be flushed to the source volumes, which
would leave the point-in-time image on the target volumes inconsistent.
- You can use Global Mirror to
create consistent copies of your data at a secondary site, with minimal impact
to the local (or primary) site. Global Mirror uses
the concept of sessions to internally manage data consistency across
storage units. You can also use Metro Mirror, Global Copy,
and FlashCopy (without Global Mirror)
to create data consistency. However, this requires that you use either external
automated software or manually suspend your applications at the local site
to create consistency at your recovery (or secondary) site.
- The DS Storage Manager can
be used for almost all Copy Services functions. However, you cannot issue
the following functions from the DS Storage Manager.
They are available only through the DS CLI:
- FlashCopy consistency
groups
- Consistency group commands allow the storage unit to freeze I/O activity
to a LUN or volume until you issue a command that thaws the consistency group.
Consistency groups help create a consistent point-in-time copy across multiple
LUNs or volumes, and even across multiple storage units.
- Remote FlashCopy (known
as Inband FlashCopy commands
on the ESS 2105)
- Remote FlashCopy commands
are issued to a source volume of a remote mirror and copy volume pair on a
local storage unit and sent across paths (acting as a conduit) to a remote
storage unit to enable a FlashCopy pair to be established at the remote
site. This eliminates the need for a network connection to the remote site
solely for the management of FlashCopy.
- If you perform scenarios that call for freeze and run operations for remote
mirror and copy, you must issue these requests from the command-line
interface, together with external automation software. These requests
are not supported by the DS Storage Manager.
(Automation software is not provided with the storage unit; it must be supplied
by the user. However, IBM has offerings to assist with this automation. For
more information, contact your IBM storage representative.)
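To monitor when a background copy completes from the DS CLI rather than the GUI, the lsflash command reports the status of FlashCopy relationships. The device and volume IDs below are hypothetical:

```shell
# Hypothetical IDs; -l requests the long (detailed) listing for the
# FlashCopy relationship, including its out-of-sync track count.
lsflash -dev IBM.1750-1300247 -l 0100:0200
```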
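The quiesce-before-FlashCopy guidance above can be sketched as the following sequence. The file system path and device ID are hypothetical, and the right quiesce step depends on your application (a DBMS command such as DB2's SET LOG SUSPEND serves the same purpose for a database):

```shell
# Hypothetical names throughout.
# 1. Quiesce writers so no unwritten buffers remain for the source volume
#    (for a journaled file system, unmounting flushes and stops writes):
umount /database/source_fs

# 2. Establish the FlashCopy relationship while the source is quiesced:
mkflash -dev IBM.1750-1300247 0100:0200

# 3. Resume normal operation:
mount /database/source_fs
```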
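As a sketch of the CLI-only functions described above, a FlashCopy consistency group and a remote mirror freeze/run sequence might look like the following. All device, LSS, and volume IDs are hypothetical placeholders:

```shell
# Hypothetical IDs throughout; run from a dscli session.
# FlashCopy consistency group: -freeze holds write I/O to the source LSS
# until the group is thawed with unfreezeflash.
mkflash -dev IBM.1750-1300247 -freeze 0100:0200 0101:0201
unfreezeflash -dev IBM.1750-1300247 01

# Freeze and run for remote mirror and copy (source:target LSS pair):
freezepprc -dev IBM.1750-1300247 -remotedev IBM.1750-1300810 01:01
unfreezepprc -dev IBM.1750-1300247 -remotedev IBM.1750-1300810 01:01
```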