How to Create a DellEMC XC CORE Solution With the Dell Solutions Configurator

Nutanix and DellEMC announced a new way to purchase a Nutanix solution on Dell hardware this week in the form of Dell XC CORE. Customers get hardware delivered for a turnkey Nutanix experience while purchasing the software separately, which prevents a license from being tied to a specific appliance.

For Nutanix partner resellers, there has been a bit of confusion about how to configure and quote XC CORE nodes for customers seeking these options. After a bit of searching, I have completed a couple of these configurations for my customers.

It looks like some partners have different versions of the Dell configurator, with some versions allowing options to be selected at the time of solution creation. The version I have access to does not, so I had to dig a bit to find where I could configure XC CORE nodes.

New Solution

After creating a new solution, I navigated down to the available Dell XC nodes that were previously available to me.

DeviceSelection

By selecting a Dell XC node that was listed in the XC CORE datasheet and expanding the first item (“XC640ENT” in my case), I found a new option: Dell EMC XC640ENT XC CORE.

ConfiguratorXC640ENT

The rest of the configuration was as familiar as any Nutanix configuration and I was able to complete the solution according to the sizing data I had already configured.

Proud to be Selected as a Veeam Vanguard 2018

I woke up on the morning of March 6, 2018 to a very pleasant surprise: the email pictured below from Rick Vanover, Director of Product Strategy at Veeam Software, inviting me to be a member of the Veeam Vanguard.


Needless to say I immediately thanked Rick for selecting me and accepted his invitation. I had the pleasure of meeting several members of the Veeam Vanguard when attending VeeamON last year. A few of them encouraged me to apply and I want to specifically thank Matt Crape, Rhys Hammond, and Jim Jones for their support.


I gained my VMCE certification while in New Orleans and have since led the majority of Veeam presales discussions on behalf of my employer. I also had the opportunity to deliver a vBrownBag tech talk on the VMTN stage at VMworld last year about my experience automating repetitive tasks in Veeam Backup and Replication using the Veeam PowerShell snap-in. I look forward to continued involvement in the Veeam community and to being an active participant in the Veeam Vanguard Program.


Building an On-Demand Cloud Lab

This time last week, I was in Palo Alto, California for the VMUG leader summit, which was great.  Prior to the summit I submitted an entry into a new VMUG program called SPARK.

To participate in SPARK, each leader was asked to submit two PowerPoint slides outlining an idea for a way to provide value back to the VMUG community. Additionally, we were asked to submit a video that would be played at the summit before all leaders present voted for their favorite submission. The Indianapolis VMUG submission was an On-Demand Cloud Lab built on VMware Cloud on AWS.

It was a tight race, but I’m proud to say that Indianapolis VMUG won the first VMUG SPARK contest and will receive $5,000 in funding to help make this idea a reality.

Here’s the point of the post, though: I can’t do this alone, and in fact I don’t want to. I want to involve the community to make it as great as it can be. I want to hear others’ ideas on how we can create an infrastructure that can be spun up quickly and easily and help other VMUG members learn new skills.

We will be holding our initial planning calls after the new year at whatever date/time works best for the most participants. If you would like to participate, please reach out to me via Twitter. My DMs are wide open and all are welcome. We can make a great platform for everyone to learn on if we work together!


Backup pfSense from Ubuntu 14.04

Back in mid-2016, GitHub user James Lavoy released a Python script to back up pfSense. I was excited because pfSense didn’t (and still doesn’t) have a built-in backup scheduler. Sure, you can back up manually from the GUI, but I don’t trust myself to remember to do that every time I make a change to my config.

I downloaded the script and changed the necessary options to point it to my pfSense box and supplied the credentials necessary for backup. I unfortunately received the following error:

AttributeError: 'module' object has no attribute '_create_unverified_context'

I quickly found out that this was due to my OS running Python 2.7.6, while SSLContext was introduced in Python 2.7.9. I reported this issue on James’ GitHub and he suggested installing Python 2.7.9 or later from another PPA, as coding around the issue would require a complete rewrite of the script.
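For reference, the attribute the script relies on only exists in Python 2.7.9 and later (it arrived alongside PEP 476). Here is a quick sketch, my own check rather than anything from James’ script, to confirm whether a given interpreter has it:

```python
import ssl
import sys

# ssl._create_unverified_context was added in Python 2.7.9 (PEP 476).
# On 2.7.6 this check is False, and the backup script's HTTPS request
# fails with the AttributeError shown above.
if hasattr(ssl, "_create_unverified_context"):
    print("OK: Python %s can create an unverified SSL context" % sys.version.split()[0])
else:
    print("Too old: install Python 2.7.9+ before running the backup script")
```

Save it under any name you like and run it with the same interpreter you plan to point the backup job at, so you know that interpreter (not just the default `python`) is new enough.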

James has updated the README.md to indicate this requirement, but his instructions are a bit out of date: the PPA specified is no longer maintained, and the fkrull/deadsnakes PPA should be used instead. To install Python 2.7.12 and mechanize and meet all the requirements of this script, run the following commands:

sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install python2.7 python-pip
pip install mechanize

Before running the script, make sure you edit it to point to your pfSense box’s IP address (and HTTPS port if necessary) and supply the correct credentials. Whether you run it manually or via automation, you will need to specify the path to Python 2.7.9+, as it is not necessarily what the “python” command invokes by default. For example, I have to run:

/usr/local/lib/python2.7.12/bin/python /media/backups/pfsense-backup-master/pfsense_backup.py

In order to automate this, you’ll want to add a cron job. Do so by editing your crontab:

crontab -e

My crontab entry looks like this:

40 14 * * * /usr/local/lib/python2.7.12/bin/python /media/backups/pfsense-backup-master/pfsense_backup.py

This will run the script every day at 2:40 pm. Why 2:40? Why not?

Some may be wondering why I didn’t just upgrade my VM to Ubuntu 16.04. I tried and many services failed to load or lost their config after the upgrade, so I rolled back to a snapshot. 14.04 will continue to receive updates until 2019 as it is an “LTS” release and I anticipate migrating off the server by then anyway.

If you are like me and don’t need to be on the latest and greatest OS, but want to be able to use scripts like this, hopefully this will help.

OVA Template Deployment Stuck “Validating”? Try PowerCLI!

I recently made the switch from working as a customer to working as a Solutions Architect at a VAR. I had bought a number of Intel servers from various OEMs during my career, but never Cisco UCS. These days, however, I have plenty of customers who either run UCS or are interested in adding it to their infrastructure.

For this reason I decided to download the Cisco UCS Platform Emulator, a free tool that allows risk-free experimentation in a UCS Manager environment. It can be downloaded as a .zip containing all virtual disks and metadata, or simply as a single .ova file for easy deployment. Naturally I opted for the .ova file, as I have a full vSphere environment running in my homelab thanks to VMUG Advantage.

Once I had the bits in hand, I fired up the new HTML5 vSphere Client and started the “Deploy OVF Template” wizard. Even though the HTML5 client is new to me, the wizard was intuitive and similar to what I was used to from the C# client and the Flash-based vSphere Web Client. I hit a roadblock at one point, though, when the wizard displayed a message that it was “Validating” and appeared to make no progress.

Validating

Ooookay, well I guess I’ll fire up the Flash client, wait for it to load and deploy from there.

Unsupported

Well, it looks like I can no longer deploy templates from the vSphere Web Client in vSphere 6.5. Apparently my choices are troubleshooting the HTML5 client or nothing…or are they?

Enter PowerCLI

I’ve spent the last 12-16 months familiarizing myself with PowerCLI, so this was the perfect opportunity to see if there was a way to deploy my template without the GUI. I quickly found the Import-VApp cmdlet, which is thoroughly documented here.

Running through the options available I constructed the test below:

Import-VApp -Source \\192.168.2.6\Data\HomeLab\CiscoUCS\UCSPE_3.1.2e.ova -Name UCSPE -VMHost (Get-VMHost -Name esx06.kennalbone.com) -Datastore (Get-Datastore -Name NFS-FS2-ProductionFast) -DiskStorageFormat Thin -Location (Get-ResourcePool -Name Normal) -WhatIf
What if: Performing the operation "Importing '\\192.168.2.6\Data\HomeLab\CiscoUCS\UCSPE_3.1.2e.ova'" on target "Host 'esx06.kennalbone.com'".

Adding -WhatIf allows testing before making any actual changes to objects in PowerCLI/PowerShell. With this test behind me, I dropped the -WhatIf parameter and deployed my OVA file for real.

Import-VApp -Source \\192.168.2.6\Data\HomeLab\CiscoUCS\UCSPE_3.1.2e.ova -Name UCSPE -VMHost (Get-VMHost -Name esx06.kennalbone.com) -Datastore (Get-Datastore -Name NFS-FS2-ProductionFast) -DiskStorageFormat Thin -Location (Get-ResourcePool -Name Normal)

The deployment went by so quickly I wasn’t sure if everything completed properly. A quick check with “Get-VM” showed that the new VM did exist.

Get-VM -Name UCSPE

Name                 PowerState Num CPUs MemoryGB
----                 ---------- -------- --------
UCSPE                PoweredOff 1        1.000

A quick power on and check of the VM console showed that the VM was in fact deployed and booting properly.

VMConsoleWindow

You can try this for yourself. Just replace the text in the brackets with the necessary information in your own environment.

Import-VApp -Source <FullPathtoTemplate.ova> -Name <VMName> -VMHost (Get-VMHost -Name <ESXiHostName>) -Datastore (Get-Datastore -Name <DatastoreName>) -DiskStorageFormat Thin -Location (Get-ResourcePool -Name <ResourcePoolName>)

Mixing CPU/Server Generations in a vSAN Cluster – What’s Supported?

When designing vSphere clusters, vSAN or not, it’s pretty common knowledge that the hardware should match as much as possible. This includes the server make and model, CPU/RAM configuration, HBA, disks, NICs, etc. There are times when this is not possible or doesn’t make sense, for example when the servers in your cluster are EOL (or soon will be) and you would rather purchase the newer server generation than buy hardware that will soon be unsupported by the manufacturer.

I recently asked my local vSAN SE if VMware had an official stance on mixing server or CPU generations in a vSAN cluster. His response was that he certainly wouldn’t recommend it, but he stopped short of saying mixed configurations would not be supported. We didn’t have enough time to discuss it further, so I basically walked away with the attitude that whenever I was designing a vSAN cluster for a customer, I would use hardware they could easily duplicate for the next few years should they need to expand their cluster.

Fast forward to this week, when this question was brought up by another individual on the vExpert Slack channel and caused a bit of a debate. Several of us discussed it and basically came to the conclusion that we could not find an official VMware stance. Some of the documentation on storagehub.vmware.com gave guidance, but nothing definitive.

So, assuming you can keep everything the same aside from the CPU/motherboard, does vSAN care? The conclusion we came to during this discussion is that as long as you are using EVC, vSAN shouldn’t care. It is certainly considered best practice to keep hardware identical whenever possible, and it is important to be mindful that when mixing CPUs, vSAN will essentially be bound by your weakest CPU when serving IOs.

During the conversation, a couple of VMware employees chimed in and confirmed that as long as you keep CPU/RAM/storage balanced across the hosts in the cluster, it shouldn’t matter. There was even an implication that this is officially documented somewhere, but I have not been able to find a source for this as of yet.

Bottom line: when expanding a vSAN cluster, keep the hardware identical if possible. When it’s not possible, pay close attention to the hardware in the new hosts and make it as balanced as possible.

My VMworld 2017 Experience

August 2017 marked the first VMworld that I was able to attend in person. After years of hoping to attend, I was fortunate to obtain a free pass due to my community involvement. Additionally, I started a new job in April 2017, and as a condition of accepting my new employer’s offer, I requested that they put in writing that they would pay my travel and expenses to attend VMworld 2017 in Las Vegas.

VCDX Workshop


With that all behind me, I was off to Vegas! I flew out earlier than most so that I could take advantage of the VCDX Workshop on the morning of 8/26/2017, which was being run by Joe Silvagi (@vmprime).

This was a great workshop that illuminated the process involved in becoming a VCDX, but more importantly it helped me learn how to get into the mindset of an IT architect. Best of all, it’s free!

After spending over a decade working primarily in roles of IT administration and engineering I am now trying to live up to the title of Architect. I am primarily responsible for infrastructure design on a daily basis as opposed to the operational responsibilities that I have previously held. The VCDX is the premier IT Architect certification in the world and this workshop provides tremendous value to anyone seeking to improve their skills as an architect, regardless of whether they plan on pursuing a VCDX certification. I would encourage checking the VCDX program calendar if you are interested in attending one of these workshops in person or online.

I am currently so far from obtaining a VCDX that it is not even on my long term goals list at this time. I currently hold a VCP5-DCV so I plan on first passing the VCP6 delta exam then pursuing both the VCAP6-DCV Deploy and Design certifications. After that I will evaluate whether I have the desire, time, and funds to create a design and submit it for VCDX defense.

Community Events

Before VMworld officially kicked off, there was one more important activity I was involved in: dodgeball! More specifically, #v0dgeball, according to the community. This was a very fun event and, more importantly, it raised money for a good cause, the Wounded Warrior Project. I was fortunate enough to participate on the VMUG team along with several of my fellow VMUG leaders. We came in 3rd out of 9 teams and had a blast.


After #v0dgeball I spent a bit of time in the Solutions Exchange on Sunday night before heading over to the VMUG party. This party was a blast and more importantly everyone now knows what a goofball I am as I took home the VMUG lipsync battle crown.


The Conference


The keynotes from Day 1 and Day 2 were very enjoyable to me, mostly because I viewed them from the hang space as opposed to making the trek to the events center.

The rest of the conference was kind of a blur. I did attend several sessions, but I spent most of my time in the VMvillage and Solutions Exchange. The VMvillage was a great place to get to know many of the folks in the community whom I had previously only known via Twitter.

The VMTN community stage was located here, and I was proud to participate in the vBrownBag tech talks, both with my own session about automating repetitive tasks in Veeam Backup and Replication and on Wednesday morning’s vExpert daily panel. I also viewed many sessions while I was there. If you missed them, they are thankfully all on vBrownBag’s YouTube page.

The Aftermath

I have personally gotten incredible value out of attending VMworld 2017 and hope to attend next year as well. In the days and weeks following the conference, when I’m not busy catching up on work, I have been spending a lot of time watching all the sessions I missed on the VMworld YouTube page. Additionally, the vBrownBag tech talks mentioned previously are a great way to catch up on all the community goodness that happened during the conference.

If you are in the Indianapolis area, be sure to keep a look out for an announcement from Indy VMUG. We are in the midst of planning our Q4 2017 meeting and will be including plenty of information from VMworld 2017 for those who were not fortunate enough to attend.