Peer Persistence brings Metro Clustering to 3PAR Storage

This is part 3 of a multipart series focused on 3PAR storage from an EVA administrator’s perspective. Check out parts one and two – Understanding the 3PAR difference in array architecture and Learning the 3PAR lingo for the EVA administrator.

3PAR for EVA Administrator

HP 3PAR Peer Persistence is, for me, the most exciting feature in the new 3PAR StoreServ 7000 series.  I’ve referred to it as a ‘holy grail’ feature to my sales team and to our internal team at work.  While it’s not the first metro storage clustering solution to market, it brings the feature set to the 3PAR lineup and offers non-disruptive failover between the arrays.  VMware officially began supporting metro storage clusters in 2011 with a limited number of storage vendors and solutions; as of this posting, 3PAR does not have official support stated from VMware, but it should soon.

Peer Persistence rides on top of the Remote Copy replication feature set and uses synchronous replication.  The two key pieces of the Peer Persistence solution are assigning the secondary copy the same volume World Wide Name (WWN) as the primary and the use of Asymmetric Logical Unit Access (ALUA) on the VMware side to recognize the active paths when a failover is invoked.  Thanks to the shared volume WWN, a VMware host sees active and standby paths to the same VLUN.
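You can see this active/standby behavior from the ESXi side with esxcli.  As a hedged sketch – the naa device identifier below is a made-up placeholder, so substitute your own volume WWN prefixed with “naa.”:

```shell
# List the multipathing details for one device.
# The naa identifier is illustrative only - use the WWN of your VLUN.
esxcli storage nmp path list -d naa.60002ac0000000000000000000001234

# Paths through the active 3PAR report "Group State: active" while
# paths through the secondary array report "Group State: standby".
```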

EVA Administrators are experienced with ALUA for finding and using the optimized active paths to a LUN.  On the EVA, only one controller has true write access to the LUN and the paths to that controller are marked as “Active (I/O)” and the paths to the secondary controller are marked as “Active”.  The secondary paths would not be used for IO.

Within 3PAR, Peer Persistence uses the same ALUA technology to discover the active paths to the active array and to set the paths to the target/secondary 3PAR array to standby. ALUA is also critical during a switchover: when the primary paths become blocked, the host issues a discovery command and the secondary array’s paths are found as active.  With the 3PAR, all paths to the active array are labeled as “Active (I/O).”

Is Peer Persistence right for your environment?  Let’s start with the list of requirements:

  • Close proximity of your two datacenters with large pipes and low latency.
  • The ability to perform synchronous Remote Copy between your 3PAR arrays in 1-to-1 configuration.
    •  Requirements of Synchronous Replication
  • The volumes exported from primary and secondary must share the same volume WWN.
  • ESXi hosts must be version 5.0 or higher, and volumes must be exported to them using host persona 11 (VMware)

Clearly, this isn’t for everyone.  The fact that you must be within synchronous distances for replication is a deal breaker for many customers, but not all.

Setup Peer Persistence

Peer Persistence sets up just like any other Remote Copy scenario.  I won’t cover that in this post, but I suggest you check out the HP 3PAR Remote Copy Software User’s Guide for all the details on setting up Remote Copy in various scenarios and for more details on Peer Persistence.

After you have created your Remote Copy group with virtual volumes inside and before you export the volumes to the hosts, changes for Peer Persistence are necessary.  On the Primary 3PAR array, you will need to enumerate a list of the virtual volumes including their WWN.  You can display this in the 3PAR CLI using the showvv command.

showvv -d <vv_name>

The output should be a single line including the WWN of the virtual volume.  I suggest that you highlight and copy the WWN for use in the next step.

Next, you will assign the secondary virtual volume the same WWN.  Open a 3PAR CLI connection to the secondary 3PAR array and issue a setvv command.

setvv -wwn <WWN> <vv_name>

On the secondary array, issue another showvv command and confirm that the WWN on the secondary now matches the WWN on the primary.

Once you have completed the WWN assignment on the secondary volume, you may start replication and export the Virtual Volumes to the hosts.  When exporting, ensure host persona 11 (VMware) is used for all hosts.
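The persona is set on the host definition on each array.  As a hedged sketch – the host name and HBA WWNs below are hypothetical placeholders – the 3PAR CLI commands look like this:

```shell
# Define the ESXi host with the VMware persona (11) on each array.
# "esx-host1" and the HBA WWNs are made-up placeholders.
createhost -persona 11 esx-host1 10000000C9AABB01 10000000C9AABB02

# Or, if the host definition already exists, change its persona:
sethost -persona 11 esx-host1
```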

Invoking a switchover

A switchover is easy.  From the 3PAR CLI, a single command will allow you to switch over an entire Remote Copy group of Virtual Volumes, or all Virtual Volumes with Remote Copy targeted to a specific 3PAR array.  However, there are several pieces of information you will want to check before issuing the switchover command.

To find the names of the Remote Copy groups, or to see which Remote Copy groups are targeted at a particular 3PAR array, issue the showrcopy command in the CLI.   In the output, you will find the name of each Remote Copy group and the array it is targeted to replicate to.   A sample of showrcopy output is below:
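The original screenshot of the sample output is not reproduced here.  As a stand-in, this is a hedged reconstruction of roughly what showrcopy reports for this scenario – the group, target, and volume names come from the discussion that follows, but the exact columns and formatting vary by 3PAR OS version:

```
Remote Copy System Information
Status: Started, Normal

Target Information
Name     ID  Type  Status  Policy
System2  2   FC    ready   mirror_config

Group Information
Name          Target   Status   Role       Mode
sync_group_1  System2  Started  Primary    Sync
  LocalVV    ID  RemoteVV    ID  SyncStatus  LastSyncTime
  localvv.0  0   remotevv.0  0   Syncing     NA
  localvv.1  1   remotevv.1  1   Syncing     NA
sync_group_2  System2  Started  Secondary  Sync
  ...
```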


With this information, you can initiate a switchover from primary to secondary with no interruption in service.  From the showrcopy output, the Target column identifies which 3PAR array is the synchronous replication partner.  That target name can be used to fail over all Virtual Volumes hosted by this 3PAR array to the target/secondary array for those VV’s.

The switchover is initiated from the primary array for a Remote Copy group, so you will want to look at the Role column of the Group Information in the output.  In this example, only sync_group_1 may be issued the switchover command on this 3PAR array.  To fail over sync_group_2, you must connect to the CLI on the System2 3PAR array.  To fail over a Remote Copy group of Virtual Volumes, you issue the command using the Remote Copy group name. You should also ensure that Remote Copy is running and in Synced status by checking the SyncStatus column.  In our example, you will need to wait for syncing to complete for localvv.0 and localvv.1 before issuing a switchover command.

Failover all volumes replicating to a Target 3PAR 

Taking the target 3PAR array name from the Target column of the showrcopy output, issue the following command:

setrcopygroup switchover -t <target_name>

Using the output from our example, we would issue the following:

setrcopygroup switchover -t System2

Failover a particular Remote Copy Group

To switch over a particular Remote Copy group, take the Remote Copy group name – for instance, sync_group_1 from the example above – and issue this command in the 3PAR CLI:

setrcopygroup switchover <group_name>

For our example with sync_group_1, you would run:

setrcopygroup switchover sync_group_1

Within a matter of seconds, the 3PAR arrays will fail over from primary to secondary and your active and standby paths will switch in VMware.
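To verify the result, you can re-run showrcopy and check the role of the group – to my understanding, after a switchover the group that changed roles is reported with a “-Rev” suffix (for example Primary-Rev on the array that took over), though the exact label may vary by 3PAR OS version:

```shell
# Check the group's role after the switchover (group name from our example):
showrcopy groups sync_group_1

# On the ESXi side, the active and standby path states should have flipped:
esxcli storage nmp path list
```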

Behind the Scenes

Behind the scenes, the primary 3PAR array is in control of the synchronous Remote Copy. The primary array stops accepting IO on its interfaces, commits all writes, then swaps control to the secondary array, which opens its paths and begins accepting IO. At the same time, the VMware hosts are notified that the path is blocked and initiate an ALUA discovery to find the new active paths.

In part 4, we will focus on best practices for vSphere 5.1 with the 3PAR StoreServ 7200/7400 arrays.  




Philip is an IT solutions engineer working for AmWINS Group, Inc., an insurance brokerage firm in Charlotte, NC. With a focus on data center technologies, he has built a career helping his customers and his employers deploy better IT solutions to solve their problems. Philip holds certifications in VMware and Microsoft technologies, and he is a technical jack of all trades who is passionate about IT infrastructure and all things Apple. He's a part-time blogger and author here at

14 Responses to “Peer Persistence brings Metro Clustering to 3PAR Storage”

  1. Yves Pelster #

    Thanks, great information !
    Is the system able to do a transparent failover even in case it’s an unplanned problem in one of the sites ?
    Is there some kind of tie-breaker facility, or how can 3PAR distinguish between a site disaster and a split brain ?
    Thanks for your insight !

    July 1, 2013 at 11:31 am Reply
    • Philip #

      Yves – yes, the latest version of 3PAR OS added the ability to have a quorum witness node located in a third site that can serve as tie breaker and keep the systems from experiencing split brain. Before this, the administrator had to manually fail over the operations. I have a post coming shortly with details on the quorum witness addition.

      July 3, 2013 at 10:50 am Reply
  2. Håvard #


    Is quorum Witness supported for ESX 5.5 ?

    January 14, 2014 at 7:31 am Reply
  3. Priyank #

    I am trying to configure peer persistence in our system with two v400,

    I have a query: if a VV on the source system is increased in size, does it replicate the same on the target as well ?

    June 15, 2015 at 10:51 am Reply
    • Philip Sellers #

      From a process standpoint, you have to stop the RC group before you can change the size of the source remote copy disk. Once you stop RC, you grow the disk to the new size and then just start the RC group and the destination will also be resized. Not quite as easy as just growing it online, but still not bad.

      June 16, 2015 at 10:49 am Reply
  4. 3parlrn #

    Is Peer Persistence compatible with the vCenter Server Appliance ? We are going to deploy vCSA and wondering if it will still support 3PAR Peer Persistence.

    July 9, 2015 at 2:18 pm Reply
    • Philip Sellers #

      Peer Persistence is completely handled by the ESXi hypervisor and the 3PAR storage arrays and does not require anything at the vCenter Server layer.

      July 19, 2015 at 6:54 pm Reply
  5. KBPS #

    When exporting the source VV and target VV, are both the source and target virtual volumes exported as a single VV with 4 paths (2 active and 2 passive), or as two separate virtual volumes?

    If they are exported separately, can you please explain how that concept works? Please explain the virtual volume export and replication paths.

    July 17, 2015 at 9:57 pm Reply
    • Philip Sellers #

      The source and target VV’s are exported as the same VV. The source and target share the same WWN, and the ESXi host or the Windows host will see them as the same volume. The source is active and the target is in standby. This is how the seamless failover occurs – because it is the same volume on both arrays. The paths change state when a failover occurs and they flip from active to standby and vice versa.

      July 19, 2015 at 6:56 pm Reply
  6. Ashokkumar Swaminathan #

    Thanks for the great post. We have configured as per HP White paper, synchronous replication with VMware metro cluster. Switchover works fine but when we simulate failover (remove all fc and management Ethernet cables from source 3par) secondary 3par is not picking up. showvlun is still showing as standby and datastores disconnected from esxi hosts.

    Is there any simulating process for failover?

    February 16, 2016 at 1:51 pm Reply
    • Philip Sellers #

      You can always force a planned switchover using the Switchover option in SSMC. This white paper may help but it may also be the one you’re referencing:

      If you want to simulate an unplanned switchover, which is what it sounds like, and it doesn’t fail over when you remove the FC and Ethernet connections on the primary, my first guess is there is a problem with the quorum node or quorum configuration. The quorum node is required to direct this operation on the 3PAR. From my experience, I’d almost positively guess it’s a quorum issue. If switchover is working – that would make sense, because the two 3PARs can talk and coordinate a switchover. If you have a failed quorum configuration and one 3PAR goes down, there is no majority to assume control.

      Now, all that said, there is one additional possibility. I have run into issues with earlier 3PAR firmware and VMware metro clustering where the standby LUNs did not change state on the host. I’d make sure I am up to date on the 3PAR firmware to avoid this nasty issue. We ran into this during year 1 of the 7000 series, but it has been solved in firmware. When the LUNs were presented to the host, they did not group into the same target port group (TPG). We had issues where some LUNs would switch over but others would not. You can use esxcli to enumerate the LUNs including their TPG, and if you have LUNs sourced from the same array grouped into different TPGs, it’s a sign of this problem. A reboot generally clears up the misassignment and groups the LUNs correctly. But we have not seen this issue in a couple of years, since the firmware upgrades addressed it.

      Sorry for the slow reply – just getting to these comments!

      March 6, 2016 at 6:25 pm Reply


  1. HP enhances 3PAR StoreServ Peer Persistence with automatic failover | Tech Talk - July 3, 2013

    […] it detects the status and begins failover on the remaining 3PAR StoreServ array. Because the 3PAR StoreServ uses ALUA to block and open the active IO paths, the failover is seamless to the VMware workloads.  All of this is accomplished by using the same […]

  2. Peer Persistence and Adaptive Optimization interoperation on 3PAR | Techazine - July 7, 2014

    […] Peer Persistence is 3PAR’s flavor of Metro Storage Clustering for VMware.  Peer Persistence allows synchronous replication between the two arrays and a seamless failover from one set of LUNs to another between arrays without interruption to the VMware workloads.  The primary 3PAR presents its LUNs under a WWN as active storage paths and the secondary/replication array presents its LUNs in a secondary/passive state until the failover occurs.  When failover occurs, the replica becomes the active paths and the other array becomes secondary/inactive.  The failover and monitoring can be automated with a Quorum Witness to handle the failover in the event of an array failure. […]

