With the 2.1 release of the HP 3PAR StoreServ Management Console (SSMC), HP has enabled the creation and control of Peer Persistence configurations within the management tool. Peer Persistence is the HP branding for transparent LUN failover between storage arrays with no downtime – a concept VMware administrators will recognize as Metro Storage Clustering in the VMware vernacular. HP 3PAR Peer Persistence relies on operating systems that can use the ALUA command set of the SCSI bus to recognize open and closed paths to a single volume. The source array presents active paths while the replication target shows its paths in standby. At a high level, when a switchover command is issued, the path states flip from active to standby and vice versa, and ALUA allows the OS to recognize the change and redirect IO. Add a quorum witness server to the switchover magic and you have automatic, transparent failover between sites.
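From an ESXi host, you can observe these ALUA path states directly with esxcli. A quick sketch – the naa device ID below is a placeholder:

```shell
# List all paths for a 3PAR volume; the naa ID is a placeholder.
# Paths to the owning (source) array report a Group State of "active",
# while paths to the Peer Persistence target report "standby".
esxcli storage nmp path list -d naa.60002ac0000000000000000000012345

# List all devices and their path selection policies for a broader view.
esxcli storage nmp device list
```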
This addition to SSMC brings a simple three-step process for provisioning new Peer Persistence volumes in the management GUI: create a new volume, add it to Remote Copy, and then export the volume from both arrays. When an administrator adds the volume to a remote copy group, the group auto-provisions the secondary (target) volume and assigns it the same World Wide Name (WWN) as the source, which is required for Peer Persistence. Previously, administrators performed this using the 3PAR Command Line Interface, and early adopters had to manually look up and assign the WWN on the secondary array. The auto-provisioning simplifies the process and reduces the chance of human error, and it can be enabled on any Remote Copy group through the SSMC interface. The administrator pre-defines the CPG for the remote array and enables the auto_failover and path_management policies to turn a normal Remote Copy group into a Peer Persistence Remote Copy group.
Initiating a switchover of a Peer Persistence Remote Copy Group is now an option within the SSMC interface. The switchover command is found under the Actions menu for a Remote Copy Group, alongside all the other group operations. When a switchover is initiated, the interface shows the status of the command: the original Remote Copy Group enters a stopped state, a similarly named Remote Copy Group appears on the target side, and the interface reports that the switchover has completed.
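For reference, the same switchover can be driven from the 3PAR CLI. A sketch with a placeholder group name; the command is run on the array that should become the new primary:

```shell
# Switch the roles of a Peer Persistence Remote Copy group
# (group name is a placeholder).
setrcopygroup switchover RCG_PeerPersist

# Confirm the new roles and replication state.
showrcopy groups RCG_PeerPersist
```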
In order to make use of Peer Persistence, there is a fairly strict list of requirements you must meet. The most restrictive is latency: Peer Persistence is built on synchronous replication, which the 3PAR platforms limit to 2.6 milliseconds of round-trip latency or less. That limit still allows for fairly liberal definitions of metro distances. So, first a quick review of all of the requirements for Peer Persistence, and then a quick tour of the simple SSMC provisioning process.
Requirements for Peer Persistence
Below is a consolidated list of all the requirements for running HP 3PAR StoreServ arrays with Peer Persistence.
- Round trip latency of 2.6 milliseconds or less.
- 2 arrays with Remote Copy 1-to-1 configuration in synchronous mode.
- 3PAR StoreServ Firmware 3.1.2 MU2 or newer for VMware and Firmware 3.2.1 or newer for Windows.
- VMware ESXi 5.x or newer, Windows Server 2008 R2, or Windows Server 2012 R2 as the host OS.
- All associated hosts are connected to both arrays.
- Hosts created with 3PAR host persona 11 for VMware hosts or persona 15 for Windows hosts, both of which support Asymmetric Logical Unit Access (ALUA).
- Quorum Witness virtual machine at a third site, reachable by TCP/IP from the management port of the two 3PAR arrays.
- All 3PAR Virtual Volumes (VVs) exported from both arrays must have the same volume WWN; the source and target WWNs must match.
- Fibre Channel (FC), iSCSI, and Fibre Channel over Ethernet (FCoE) are supported. iSCSI and FCoE require later versions of the 3PAR StoreServ firmware.
- Both the auto_failover and path_management policies in the 3PAR Remote Copy Group configuration are required to enable automatic transparent failover.
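The last requirement can also be satisfied from the 3PAR CLI on an existing Remote Copy group. A sketch with a placeholder group name:

```shell
# Turn an ordinary synchronous Remote Copy group into a
# Peer Persistence group (group name is a placeholder).
setrcopygroup pol auto_failover RCG_PeerPersist
setrcopygroup pol path_management RCG_PeerPersist

# Verify the policies, replication mode, and group status.
showrcopy groups RCG_PeerPersist
```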
Provisioning Peer Persistent Volumes with SSMC
Step 1: Create a new virtual volume by going to the Main Menu and choosing Virtual Volumes under Block Persona.
(File Persona is hidden in the screenshot below because the arrays connected to SSMC do not support File Persona, otherwise it is displayed next to Block Persona).
Complete the General section, but do not export the LUN yet.
Step 2: Create the Remote Copy Group located under the Main Menu in the Replication section. If you have an existing Remote Copy Group, click on the name of the group to edit it.
For Peer Persistence, the configuration is set here; the sections outlined in red are the ones required. For the simplest setup, select the Create automatically option under Source. You must set Synchronous replication under Backup, then tick the Advanced options checkbox and check the Auto failover checkbox under Backup. Checking Auto failover automatically enables Path management as well.
Also within the Remote Copy Group, add your virtual volume from the source array under the Volume Pairs section. Once you select it in the dialog box, you will notice that it is added to the group and the target volume shows an “auto-create” message. Click Create or OK and the changes will be applied.
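The CLI equivalent of this step, for comparison, is admitrcopyvv. Names below are placeholders; on the CLI, the secondary volume traditionally had to exist already – exactly the manual step that SSMC's auto-create removes:

```shell
# Add a volume pair to the Remote Copy group
# (volume, group, and target array names are placeholders).
admitrcopyvv vv_datastore01 RCG_PeerPersist Array02:vv_datastore01

# Start replication for the group.
startrcopygroup RCG_PeerPersist
```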
Step 3: Export the volumes from both arrays to the hosts. To do this, go back to either Virtual Volumes or Virtual Volume Sets in the Main Menu and choose the Export command to export the volumes to the desired hosts.
This assumes that you’ve already created host records; if not, create them using host persona 11 for VMware or host persona 15 for Windows. In SSMC, you cannot select the host persona directly. Instead, you choose an operating system from a drop-down menu: ESXi 4.x/5.x, Windows 2008/2008 R2, or Windows 2012/2012 R2. The host persona is displayed after you choose the OS, so you can confirm the setting.
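If you prefer the CLI for host records, the persona can be set explicitly. The host names and WWNs below are hypothetical:

```shell
# Persona 11 = VMware (ALUA), persona 15 = Windows (ALUA).
createhost -persona 11 esx-host01 10000000C9AAAAAA 10000000C9BBBBBB
createhost -persona 15 win-host01 10000000C9CCCCCC 10000000C9DDDDDD

# Confirm the persona assigned to each host.
showhost -persona
```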
For full configuration details about implementing HP 3PAR Peer Persistence, refer to these two guides that outline the specifics for VMware hosts and for Windows hosts: