HPE Synergy Automation - An Introduction

I am starting this post with a disclaimer: all of the information here is my own and is not endorsed by my employer or sponsored by HPE. Just being safe and protecting myself here.

For the past 10.5 years, I have worked in healthcare IT, starting out in long-term post-acute care and, within the past year or so, transitioning to acute care. One of the biggest differences I found in moving between the two employers (and between long-term post-acute and acute healthcare in general) is the software stack used by the two areas.

Earlier this summer I was brought into a project with a short timeline that needed to be hit in order to allow for proper software updates/migration to occur.  I don't have DSC set up at this new place of employment and don't plan to any time soon given the current state of DSC.  I love the concept of DSC, but with a third rewrite coming up soon it just does not seem prudent to invest in that technology right now.  With that in mind, I was not going to set up a DSC environment to complete the project, and since I was still evaluating Puppet and Chef, those tools could not be used either.  Another wrinkle was that the project required physical servers instead of virtual ones.

Now thankfully my workplace has a fairly modern infrastructure in place in the form of HPE Synergy with OneView sitting on top of it.  We do still have HPE c-Class servers, and this project needed to utilize both sets of servers across multiple data centers, hardware generations, and operating systems.  Thankfully the c-Class servers were managed by OneView as well, which allowed most of the same functionality to be used on the non-Synergy hardware (at least as far as setting up servers is concerned).

For those unfamiliar with Synergy, this is a product that HPE calls composable infrastructure.  At a basic level, Synergy is a more advanced version of software-defined infrastructure.  Since everything lives right in the Synergy frame (storage, compute, networking), it's relatively simple to write code/scripts against the infrastructure and configure it rapidly without involving the ops/hardware team once the frame is fully configured.  At a high level, Synergy utilizes templates which store hardware configurations such as which VLANs to connect to, how the disks are configured (RAID, etc.), BIOS settings, boot order, and more.  When a new server is created in Synergy, it's possible to create a new profile based on one of the templates.  Beyond setting up the server, OneView also monitors the server and lets you know when the profile is not in compliance with the template it was created from.  In other words, OneView + Synergy is like the DSC of physical server configuration, and that is wonderful.  Synergy and OneView can do a lot more, and if you are an HPE shop you may want to check out Synergy.
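
As a rough sketch, creating a profile from a template with the HPOneView module looks something like this. The appliance hostname, template name, and profile name below are placeholders for your environment, and parameter names can vary between module versions, so verify with `Get-Help` before using:

```powershell
# Load the module version matching the appliance (see the version notes later in this post)
Import-Module HPOneView.400

# Connect to the OneView appliance (hostname is a placeholder)
Connect-HPOVMgmt -Hostname 'oneview.example.com' -Credential (Get-Credential)

# Look up an existing server profile template and an unassigned server
$template = Get-HPOVServerProfileTemplate -Name 'Gen10-Template'
$server   = Get-HPOVServer -NoProfile | Select-Object -First 1

# Create a new profile from the template and assign it to the server
New-HPOVServerProfile -Name 'new-server-01' -ServerProfileTemplate $template -Server $server -Async
```

Once the profile is applied, OneView will flag it if it ever drifts out of compliance with the template it came from.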

The advantages of Synergy's composable infrastructure are not just marketing material; they are easily measured.  For example, the manual process to set up one Gen 10 server was measured at 1.87 hours.  With 12 servers per enclosure, assuming one server is done at a time, it would take 22.44 hours to fully configure one enclosure of servers.  Scripted, the same process took only 2.78 hours per enclosure of 12 servers.  This is a significant time saver, and since the process is scripted, it ensures that each server is set up the same way and can easily be replaced without fear of configuration drift.
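
To put those numbers in perspective, the back-of-the-envelope math works out like this:

```powershell
# Measured figures from the project above
$manualPerServer  = 1.87                                   # hours for one Gen 10 server, by hand
$serversPerFrame  = 12
$manualPerFrame   = $manualPerServer * $serversPerFrame    # 22.44 hours per enclosure
$scriptedPerFrame = 2.78                                   # hours per enclosure, scripted

$saved   = $manualPerFrame - $scriptedPerFrame             # 19.66 hours saved per enclosure
$percent = [math]::Round($saved / $manualPerFrame * 100, 1)

"{0} hours manual vs {1} hours scripted per enclosure ({2}% saved)" -f $manualPerFrame, $scriptedPerFrame, $percent
```

Nearly a 90% reduction per enclosure, and it compounds across every enclosure and data center involved.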

I am still working on some wrapper code around the multiple HPE modules used during the project to help streamline things as much as possible.  While this project was an excuse to learn the HPE/OneView commands while creating the necessary servers, the tight timeline prevented me from writing the code exactly as I wanted at first.  This project forced me to pull out my inner Battle Faction to meet the deadline, and just like Battle the code was a little unoptimized and uglier than I would like.  Now that the project has come to a close, I need to document what was done, not only for my future self but for others as well, in hopes that this experience can be of some assistance.

In order to automate the HPE physical server creation, multiple modules from HPE were required.  All of them can be installed from the PowerShell Gallery.  The list of required modules is:

  1. HPRestCmdlets
  2. HPERedfishCmdlets
  3. HPOneView.310*
  4. HPOneView.400*

*When installing the HPOneView modules, you only need to install the versions found in your environment.
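
Pulling these down from the PowerShell Gallery is a one-time step (skip any HPOneView version you don't actually run):

```powershell
# Install from the PowerShell Gallery; -Scope CurrentUser avoids needing admin rights
Install-Module -Name HPRestCmdlets     -Scope CurrentUser
Install-Module -Name HPERedfishCmdlets -Scope CurrentUser

# Only install the HPOneView versions matching your OneView appliances
Install-Module -Name HPOneView.310 -Scope CurrentUser
Install-Module -Name HPOneView.400 -Scope CurrentUser
```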

In addition to the above modules, for this to work properly you must be running HPE OneView version 3.10 or 4.00.  This should work with OneView 4.10 as well, but that has not been tested, as none of the environments I work with are at that level yet.  Preferably HPE Synergy is being used, but c-Class will work too as long as it is managed by OneView.  The physical server hardware must also be Gen 9 or Gen 10 running the latest iLO firmware.

And here is where one of the first difficulties in this process comes into view.  The version of OneView running in your environment dictates the version of the HPOneView module that you must use to connect to the OneView appliance.  Since the HPOneView module uses DLL files, even unloading the module with Remove-Module does not unload the DLLs, which prevents the other version of the HPOneView module from being loaded in the same PowerShell session.  This is not a flaw in the module, but a limitation of Windows PowerShell itself.  If only one version of HPOneView is installed this typically is not an issue, but when multiple versions are installed side by side you may encounter errors if you're not careful.  By default, Windows PowerShell loads the lowest version of the HPOneView module, and since you can't load another version without starting a new PowerShell session, this can be an inconvenience.  For example, if both HPOneView.310 and HPOneView.400 are installed and Connect-HPOVMgmt is run, PowerShell will load HPOneView.310.  If you try to connect to an environment running OneView 4.00 this way without explicitly loading the HPOneView.400 module, the connection attempt will error out.  The same error happens when attempting to connect to a OneView 3.10 environment with the HPOneView.400 module loaded.
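
The workaround is simply to import the matching module explicitly before connecting, and to verify what is actually loaded. A minimal sketch (the hostname is a placeholder):

```powershell
# Explicitly load the module matching the target appliance *before* connecting;
# otherwise PowerShell auto-loads the lowest installed version on first use.
Import-Module HPOneView.400 -ErrorAction Stop

# Confirm which version is actually loaded in this session
Get-Module HPOneView* | Select-Object Name, Version

Connect-HPOVMgmt -Hostname 'oneview40.example.com' -Credential (Get-Credential)

# To talk to a 3.10 appliance afterwards, start a fresh PowerShell session;
# the DLLs loaded by HPOneView.400 cannot be unloaded with Remove-Module.
```

If you script against both OneView versions, launching a child `powershell.exe` process per appliance is one way to keep the two module versions from colliding.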

The second difficulty is a change that HPE made between the Gen 9 and Gen 10 platforms.  Gen 9 hardware uses the HPRestCmdlets by default, while Gen 10 servers utilize the HPERedfishCmdlets.  What makes this even more fun is that Gen 9 servers can be adjusted to use the HPERedfishCmdlets as well.  This produces multiple possible combinations of commands to use, as the version of OneView is not restricted by the server generation.
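
In practice this means branching on the server generation to pick the right cmdlet set. The sketch below is hypothetical: `Connect-IloSession` is a helper name I made up, the generation test on the model string and the `mpHostInfo` property path are assumptions about the objects `Get-HPOVServer` returns in your environment, and parameter names should be checked against `Get-Help` for your installed cmdlet versions:

```powershell
function Connect-IloSession {
    param(
        [Parameter(Mandatory)] $Server,                        # object returned by Get-HPOVServer
        [Parameter(Mandatory)] [pscredential] $Credential
    )

    # iLO management address as reported by OneView (property path is an assumption)
    $iloAddress = $Server.mpHostInfo.mpIpAddresses[0].address

    if ($Server.model -match 'Gen10') {
        # Gen 10 iLO speaks Redfish, so use the HPERedfishCmdlets entry point
        Connect-HPERedfish -Address $iloAddress -Credential $Credential
    }
    else {
        # Gen 9 iLO uses the older REST cmdlets by default
        Connect-HPREST -Address $iloAddress -Credential $Credential
    }
}
```

Wrapping the branch in one function like this keeps the Gen 9/Gen 10 split out of the rest of the automation code.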

Phew, that was a lot and was only the introduction to this series.

In the next segment, I will go over how to quickly deploy server profiles in OneView from templates.  While there are simple ways to create new server profiles from templates, most of the examples found online only deal with one profile at a time.  I will cover how to automate the creation so that you can quickly and easily create multiple profiles in sequence.  Some side subjects may spring up along the way, but anything related will be tagged under the category HPE Automation.

Until next time.