Month of silence, because of a blade enclosure

I can’t believe it, but it’s been almost a month since my last post.  And what a month it’s been at work.  This has been one of the busiest and most difficult months I can remember with the company.  I have my hands in several different technologies; VMware and our blades are just two of my primary responsibilities.  Over the past month, though, we experienced a catastrophic failure of one of our blade enclosures.  The failure has only occurred once, but the fall-out has taken almost a month to work out.  And honestly, we’re still not through working out the kinks.

Of course, my story has to begin on Friday the 13th…  Sometime around 9:00am, we started getting calls about both our SQL 2005 database cluster and our Exchange cluster.  After investigating, we found that the active nodes were both in the same enclosure, and a third ESX host in that same enclosure was experiencing problems, too.  The problems were affecting both network and disk I/O on the blades.  All of our blades boot from SAN, so the I/O trouble had to be a Fibre Channel issue.

Several hours later, we were finally able to get enough response out of the nodes to force a failover of services for Exchange, followed shortly by SQL 2005.  As I worked with HP support, nothing improved on the affected servers.  The problem was finally diagnosed as a faulty mid-plane in the enclosure.

While waiting for the mid-plane to be dispatched to the field service folks, I requested that we go ahead and do a complete power-down of the enclosure and bring it up clean.  This required physically removing power from the enclosure after powering down everything I could from the Onboard Administrator.

After the reboot, everything looked much healthier.  The blades came back to life and everything began operating as expected.  After intense discussions on the HP side, we reseated our OAs and the sleeve they plug into on the back side of the enclosure.  The net outcome was the same – everything still operating well.  Neither the OAs nor the sleeve was loose, so we doubted that was the cause.

One nugget I learned from HP support (please vet this information on your own) is that the Virtual Connect interconnect modules require communication with the Onboard Administrators (OAs).  I’m still not sure I fully understand it, but HP support did tell us that if Virtual Connect lost communication with the OA, it’s possible that this caused our problems.  If so, that smells like very, very bad engineering and design…

Continued investigation on HP’s part has pointed us back to the original diagnosis – a faulty mid-plane.  We returned to that conclusion only by default, however: it is the only piece of hardware common to all of the problems.  Our only other explanation was that this was a very bad “hiccup” – which obviously buys us no real peace of mind…

So, sometime soon, we will be replacing the mid-plane of our enclosure.  I have, of course, lost some faith in the HP blade ecosystem.  We had plans to migrate our corporate VMware cluster onto blades, as well as some Citrix and other servers.  Losing an enclosure like this has unnerved those plans.  We were fortunate to have dragged our feet: only three blades were populated and serving anything at the time this happened.  I will post updates as we move forward…



Author Information

Philip is an IT solutions engineer working for AmWINS Group, Inc., an insurance brokerage firm in Charlotte, NC. With a focus on data center technologies, he has built a career helping his customers and his employers deploy better IT solutions to solve their problems. Philip holds certifications in VMware and Microsoft technologies, and he is a technical jack of all trades who is passionate about IT infrastructure and all things Apple. He's a part-time blogger and author here at

2 Responses to “Month of silence, because of a blade enclosure”

  1. That’s really too bad! You really hate to have problems like this. It would be one thing if you had an obviously faulty piece of hardware, but if the problem is systemic, you’ve got a whole other can of worms.

    I hope it works out alright for you. Whenever I discover a problem like this, I’m prone to mentally panicking, and I have to remind myself that despite the problems I’m having right now, lots of large-scale infrastructures rely on the same technology, and if it were as deeply flawed as it seems at the moment, then all of those organizations would have problems too. Breathe deeply and work with the provider on the fix. It’ll get better.

    March 5, 2009 at 9:55 am Reply
  2. Philip #

    Thanks Matt. I’m hoping this is a simple case of faulty hardware…

    March 5, 2009 at 10:26 am Reply
