Looking Forward to Cloud Field Day 10

Back in late 2019 I made the decision to transition away from being a Tech Field Day event lead and analyst for Gestalt IT and get back to my roots as an engineer. Thankfully Stephen Foskett, my employer and the creator of Tech Field Day, was understanding and we parted on good terms. We even discussed my eventual return to Field Day as a delegate after I had settled in at the new gig. Well, I'm thrilled that the time has finally come, as Stephen has invited me to Cloud Field Day 10, which is taking place March 10-12, 2021.

The event looks to have an excellent lineup of sponsors, which can be found on the event page.

  • Veeam
  • Dell Technologies
  • StorageOS
  • Komprise
  • VMware
  • Oracle
  • Intel
  • NetApp
  • Scality

As far as I can tell, all of the companies presenting are Field Day veterans, but I haven’t personally heard from all of them. Sometimes the Field Day crew are able to get information about the presentations ahead of time from the sponsors to help delegates prep.

Some presenters have shared info thus far, while others have not. Where the presentation topic is unknown, I am just going to review the lineup and take a wild guess at what kind of presentation to expect.

Veeam

Veeam has presented at 13 previous Field Day events, and I would expect the Product Strategy Team, led by former Field Day delegate Rick Vanover, to nail this one. I imagine we will be hearing a lot about the new features in their newly released version 11. As a former Veeam Vanguard and long-time fan of the company, I have high hopes for this presentation.

Komprise

Komprise has presented at Storage Field Day a couple of times previously, but not at Cloud Field Day. They bill themselves as a data management company. There is no doubt that the modern enterprise is drowning in data, whether their applications live in the cloud or not. I'm interested to hear what they have to say about their capabilities specific to the enterprise cloud.

Intel

Intel is no stranger to Tech Field Day, or the tech industry in general. They are a tech titan, in fact. Surprisingly, with all of Intel's experience with Field Day events, this will be their first appearance at a Cloud Field Day event. That said, being involved in the cloud discussion is not new to Intel, as they took part in a Cloud Influencer Roundtable at VMworld 2019.

Dell Technologies

The Dell Technologies cloud strategy is one that we have seen evolving over the years. I would expect to see a more robust offering around creating private and hybrid clouds centered on their VxRail platform on-premises and offerings like VMware Cloud on AWS. The last time Dell presented at a Cloud Field Day event, they essentially co-presented with VMware. This time, though both companies will be present at the event, they will be presenting on different days. Does this mean we will hear about more than their optimized solutions around VMware technologies? I look forward to finding out!

VMware

No stranger to Cloud Field Day, or Tech Field Day in general, VMware appears to be providing an update on VMware Cloud on AWS at Cloud Field Day 10. Multiple VMware on "X" cloud solutions exist now, but VMC on AWS was the first. VMware's main differentiation up to this point has been that the offering was both developed by VMware engineers and is supported by VMware. VMware recently published a blog post in preparation for CFD10, which can be found here. It looks like the topic of discussion will be VMC as a platform for modern applications (read: Kubernetes, based on Tanzu). I'm looking forward to this one and the interactions to come.

NetApp

NetApp is another prolific Tech Field Day sponsor and well known in the industry. Known as a storage company, NetApp has surprised a few folks with past presentations that have more to do with modern applications and cloud than you would expect. That looks to be the case this time around as well. I have seen a preview of NetApp’s agenda and it looks to be focused on Kubernetes and NetApp’s recent acquisition of Astra, a data management platform for Kubernetes workloads.

Scality

Another Field Day vet, Scality is known for its scale-out storage platform that provides both object storage and file services. Having seen a preview of Scality's agenda, I can say that this is another session I am really looking forward to. The topics look to be centered around data sovereignty and regulation.

Why would I be looking forward to this conversation, you may ask? As technologists, we often get excited and geek out about tech and specs while ignoring or disregarding the non-technical realities that affect our industry. While I'm sure Scality will find opportunities to tie these issues back to their product and how it helps enterprise IT shops address them, it is this kind of mindfulness that is missing from many IT vendors and practitioners alike. Being able to map actual business concerns back to a solution is how IT can really show its value.

StorageOS

I'll admit that I'm not very familiar with StorageOS, but I am looking forward to hearing from them. A quick look at their website gave me a pretty good idea of what their pitch is. Primarily a persistent storage platform for stateful applications, StorageOS is leaning into Kubernetes as much as anyone else in the industry. It appears that this will be the company's first appearance at Cloud Field Day, and their first event since their debut at Tech Field Day 12 over four years ago! I'm sure a lot has changed since then, so I'm going to try to go back and watch their previous appearance ahead of the CFD10 presentation for comparison's sake.

Oracle

Oracle has presented at previous Field Day events, but this is their first time presenting since they released their VMware Cloud solution. I don't know for a fact that this is what they will be presenting on, but I'm willing to bet they will. I've noticed a recent marketing and promotion push for this solution throughout the tech community and expect the same at Field Day. As similar solutions exist on pretty much every hyperscaler at this point, I'm looking forward to hearing how Oracle's offering is uniquely differentiated.

Wow, that's a lot of presentations! I'm really looking forward to this event and hope you can join in! Be sure to tune into the presentations next week, March 10-12; all presentation times can be found on the Cloud Field Day 10 event page. You can interact with the delegates and presenters on Twitter using the hashtag #CFD10. See you then!

Live Outside Your Comfort Zone. It Will Be Worth It.

On October 22, 2018, I will be joining Gestalt IT as an Event Lead for Tech Field Day. This is an exciting moment for me and the culmination of nearly three years of consciously attempting to grow my career and change my attitude about work in general. In essence, I decided to live outside my comfort zone. If you are interested in learning how I reached this point, read on.

Some Background

Folks who know me through my involvement in the tech community may think I materialized out of nowhere sometime in 2016. From the time I started my career in the early 2000s until just a few short years ago, I was just another anonymous systems administrator, and I was fine with that.

I stayed in one role in particular for a very long time. I joined a financial services company just a couple of years into my career, and during my time there the organization experienced tremendous growth, as did my experience and responsibilities.

The Turning Point

Somewhere around year 10, though, things started to stagnate. I was working on fewer new and exciting projects and spending more time maintaining the existing infrastructure. I also started to wonder what would happen if I kept doing the same thing for another 10 years. Would I wake up one day as the tech dinosaur with out-of-date skills, clinging to my legacy infrastructure to maintain relevance?

I didn’t want that to happen and decided in late 2015 to start looking for something new. In early 2016 I started updating my resume and looking at what was out there. I also got more involved by visiting user groups and networking with people. I would introduce myself, ask people what they do, listen and find out who was hiring.

A New Attitude

All that hard work paid off when I started a new job as an Infrastructure Engineer with a service provider in April 2016. During my time there I was fearless about asking people questions about how we did things and why. When a new project came up, I would gladly volunteer but admit that it was something new to me and I may need help along the way. If I made a mistake, I would own up to it, learn from it, and not make the same mistake again.

All of these activities were counter to my first instincts. As a long time systems administrator with sole possession of most projects at my previous employer, I was THE guy. I knew everything about the infrastructure; and when we had a new initiative, I would take it upon myself to learn the new technology in question and become the master. Sometimes this came at the expense of the project timeline or a less than ideal architecture for the solution.

Being willing to admit I was over my head from time to time and ask for help meant that a project would be completed on time. Having an expanded professional network meant I now had connections that could help answer questions when I was trying to familiarize myself with new technologies.

The Effects

I worked on a great team and learned a lot in a short time span. During this time I also took on the leadership role at my local VMUG and started engaging more with people on social media, particularly Twitter. This led to a greatly expanded professional network. After only one year in my new role I had the opportunity to change organizations once again thanks to the connections I had made. In May of 2017 I joined a SIS as a pre-sales architect.

I was glad I made the change almost immediately, but also a bit nervous. This was a brand new role for me. I had been an IT practitioner for my entire career up until now. Now most of my time would be spent designing solutions, presenting to customers, and learning the new technologies that may be of interest to my customers when I wasn’t busy with everything else.

The payoffs from this gamble were almost immediate. I was glad to leave behind the on-call life of an engineer and have a more predictable work schedule, not to mention the nice pay increase that comes with a sales aligned position. My public speaking skills were also improving as a function of my job.

An Ever Expanding Network

The need to keep up to date with new technologies meant I was traveling to multiple conferences per year. As a result I met many of the people I knew via social media, blogs and podcasts. I decided to march straight up to anyone I recognized that I had not spoken with before and introduce myself.

I consider myself an introvert like most IT geeks, but it didn’t take much to have a short conversation. I would tell them that I liked their podcast or a story about how their blog helped me out, something like that. But as a result of all this socializing I was forcing myself to do, I was growing my professional network, making some new friends, and actually having some fun!

Enter Tech Field Day

For those unfamiliar with Tech Field Day, it is the original "IT influencer event" and is the brainchild of Stephen Foskett. I first attended Cloud Field Day in August of 2018 and also participated in Tech Field Day Extra at VMworld 2018 a couple of weeks later. I had wanted to attend for a while and had previously applied to be a delegate, but it wasn't until my network of peers and friends recommended me personally to Stephen that I received an invitation. Chalk it up to spending more time making an effort to meet people and make friends.

Less than a week after VMworld I received a couple of texts from Stephen.

“Hey it’s Stephen Foskett I’m wondering if we could talk sometime about your job and your future.”

“You seem to have a talent for organizing nerds and getting stuff done, and that’s a rare gift.”

I had a pretty good idea of what Stephen wanted to talk about based on these two texts and the story Tom Hollingsworth told me about how he came to work for Stephen. Over the next few weeks I had several conversations with Stephen and other members of his team.

Just three weeks later I was offered the opportunity to join Gestalt IT and become an Event Lead for Tech Field Day. I will be responsible for scouting new sponsors and delegates for Field Day events and for leading and ensuring the success of the events themselves.

There is no denying that this new role will be an enormous change for me and brings things like risk and uncertainty. This is the definition of living outside of my comfort zone. Every job I’ve had until now has been much more technical.

Although I’ve spent time organizing events, securing sponsors, etc. as a VMUG leader, this is on a much larger scale and the paycheck that my family depends on now relies on me excelling at a role that I am completely new to. And I couldn’t be more excited.

I directly attribute this incredible opportunity and massive career change to my change in attitude less than three years ago. I will try to sum up below some principles that I live by that I believe have helped me several times.

Avoid Smart Kid Syndrome

What is Smart Kid Syndrome you may ask? I had not heard the term until a recent episode of the Nerd Journey podcast, but it sums up the point I want to make very well. Imagine a kid who is good at everything. You’ve probably met more than one in your life.

Math and reading come to them easily and they excel at everything in school. Every new thing they try, they are instantly good at, until they aren't. At some point, every smart kid finds out that they aren't necessarily the smartest kid in the room, or that their peers are better at something.

At this point the child can either grow as a person and accept that they need to try hard and always look for ways to improve, or they can retreat into their comfortable, familiar skills and be unwilling to try anything new.

As I mentioned in the introduction of this post, I spent a long time being an anonymous sysadmin and being content with it. I was good at my job and didn’t see a need to try anything new. When I made the decision to challenge myself on a regular basis, the benefits were readily apparent. I was constantly learning new skills, meeting new people, and having fun.

Ask for Help

Once you get used to the idea of not being the smartest person in the room, you're also going to have to get used to the idea that you can get things done quicker and more effectively if you ask someone with more skill or experience for help. In my experience, most people who are experts at something are very willing to help or even teach a fellow nerd in their realm of expertise.

Let's face it, we IT geeks like to show how smart we are. If you give one of your peers a chance to show off their skills, they tend to relish it. When you try something new, hit a wall, and subsequently break through it (even with some outside help), you'll be prepared for the next time that task comes up. You'll also have grown your skill set and become more well-rounded as a person.

Get Comfortable Being Uncomfortable

Make no mistake, living outside your comfort zone is a lifestyle change, not just a temporary project. If you are going to succeed in this mindset, you need to commit to it and grow to accept it.

I think of it as similar to dieting. If you are not a healthy eater and only occasionally diet to lose weight, your weight will yo-yo up and down, but you will not be living a healthy lifestyle. Once you commit to exercising regularly and eating healthy foods (with the occasional cheat meal), you'll find lasting change and a different attitude toward life in general.

The same is true of living outside your comfort zone. If you make a commitment to always be open to new challenges and constantly be on the lookout for new opportunities, I guarantee you will be glad you did.

Cohesity: Much More than Hyperconverged Secondary Storage

Recently I had the opportunity to attend Cloud Field Day 4 in Silicon Valley. While in attendance, one of the briefings I attended was provided by Cohesity. For those who are not already familiar with Cohesity, it was founded in 2013 by Mohit Aron. Aron co-founded Nutanix and was previously a lead on the Google File System, so it's safe to say that he created Cohesity with a solid foundation in storage. The platform was created as a scale-out secondary storage platform, but as I discovered during my time there, the use cases for Cohesity's platform have grown well beyond a secondary place to store data.

Cohesity spent very little time getting the delegates up to speed on their platform and the SpanFS distributed filesystem that powers it. That information has been covered in past Field Day events and can be found in the archived videos. We spent the majority of our time with Cohesity covering higher-level features and functionality, which I will review in this blog post.

Cloud Adoption Trends

The first “session” of the briefing was delivered by Sai Mukundan and past Field Day delegate Jon Hildebrand.

Sai covered some of the trends that Cohesity sees in customers adopting cloud and the use cases specific to the Cohesity Data Platform, the first being long-term retention and VM migration.

Because Cohesity supports AWS, Azure, GCS, and any S3-compatible object storage, a customer can choose the cloud storage provider that suits them best as a target for long-term storage of their data. Indexes of customer data enable search across local and cloud instances of data stored by Cohesity. This is especially valuable in helping customers avoid unnecessary egress charges when a file already exists on-premises.

My favorite part of most briefings is, of course, the demos, and Cohesity did not disappoint. During his time in the limelight, Jon showed how he could create a policy that would archive data to multiple public clouds at once; in this case, a single policy that archived data to AWS, Azure, and GCP at the same time. I actually managed to get a question in during this demo, and in case you are curious, not only can you set a bandwidth limit for each cloud target but also a global limit to ensure that the aggregate of your cloud archive jobs will not consume an unwanted amount of bandwidth. Jon and Sai also showed that once the data exists in multiple locations, all of them are shown when a restore is initiated.

Migration of VMs is handled in Cohesity by a feature named "CloudSpin."

CloudSpin

This feature was also showcased in demo form. I won’t describe the demo in detail because you can just watch it at your leisure. I will however mention one thing that struck me during Cohesity’s briefing. The UI is not only slick and responsive, but also well thought out. While watching demos I was impressed by how intuitive everything seemed and how easy I felt navigating the platform would be for someone who was unfamiliar with the interface.

Application Test/Dev

Within the context of VM Migration that was previously mentioned, another potential use case of the Cohesity platform is application mobility for the purposes of testing and development. Again, this functionality was demonstrated rather than just explained.

Again, I won't spend a lot of time rehashing what took place during the demo. But as the demonstration of the power available to developers unfolded, the panel of mostly infrastructure professionals started discussing the implications of these capabilities and brought up concerns about access and cost control. The Cohesity team did a very good job of addressing roles with built-in RBAC capabilities, but it is clear that there is no built-in cost control capability at this point in time. It was pointed out that the extensibility of the platform through use of APIs means that customers could implement cost control using a third-party management plane of their choice. This is an indirect answer to the question though, and I would like to see Cohesity implement these features natively. For now, a customer can decide to implement controls around the Cohesity platform, leverage a third-party management plane, or simply let the developers run wild (a bad idea).

Cloud Native Backup

Within the Cohesity model, cloud-native backups are a three-step process. The image below depicts the scenario specific to AWS, but the process for Azure or GCP workloads is largely the same. First, a snapshot of an EBS volume is taken and placed in an S3 bucket. Second, the snapshot is transformed into an EBS volume. To complete the process, the volume is copied to Cohesity Cloud Edition.

[Image: Cohesity cloud-native backup workflow on AWS]
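
Cohesity drives this workflow with its own software, but the underlying AWS primitives for the first two steps look roughly like the boto3 calls below. This is only a minimal sketch under my own assumptions (the region, volume ID, and availability zone are placeholders); the final copy into Cohesity Cloud Edition happens through Cohesity's tooling and is not shown.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Step 1: snapshot the source EBS volume (AWS stores EBS snapshots in S3).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="cloud-native backup example",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Step 2: materialize the snapshot as a new EBS volume.
volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1a",  # placeholder availability zone
)
print("New volume created from snapshot:", volume["VolumeId"])

# Step 3 (not shown): the new volume is ingested by Cohesity Cloud Edition.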

Multi-Cloud Mobility

A common first use case for many customers when they initially put data into the cloud is long-term retention. With this in mind, Cohesity seeks to enable customers to store and move data to the cloud provider of their choice. The three big clouds (AWS, Azure, and GCP) are all supported, but a customer could choose to leverage an entirely different service provider as long as they offer NFS or S3-compatible storage.

I expected Cohesity to show off some kind of data movement feature during a demo of this use case, but I was wrong. What I got instead was a demonstration of how Cohesity maintains data consistency even when data that had been archived to one cloud vendor is migrated to another by a third-party tool. This ensures that the cluster will maintain access to the data and be able to continue performing tasks such as incremental backups. This is accomplished by changing the metadata within a Cohesity cluster. There are multiple ways to execute this task, be it the GUI, the API, or, in the case of the demo, a CLI tool called icebox_tool.

Summary

While Cohesity may have started life as a “Hyper-Converged Secondary Storage” platform, the use cases have increased greatly as the platform has matured. While this makes for a very powerful platform that can fit a multitude of customer types, it has led to confusing messaging.

Is Cohesity a data archival platform, a backup platform, or a data mobility platform? The answer is "all of the above," which is fine, but it doesn't really help deliver a clear message that can be brought to market and keep the product front of mind for customers who are seeking a product to address their needs.

I'm not a marketing genius, so I have no idea what this message would look like. However, Cohesity has been bringing in a lot of top talent lately, and I think they should have no problem clearing up the confusion that exists around their platform. It is clearly a powerful and capable platform, and once customers know of all the use cases and how the product is relevant to them, it is sure to gain popularity.

Disclaimer: Although this is not a sponsored blog post, much of the information used for it was gathered at Cloud Field Day 4. All of my expenses were paid by Gestalt IT, which coordinates all Tech Field Day events.

Move Your Data into the Cloud NOW with SoftNAS

Recently, I had the opportunity to attend Cloud Field Day 4 in Silicon Valley. While there, one of the companies that presented to the delegates was SoftNAS. For those unfamiliar with SoftNAS, their solution allows organizations to present cloud storage from platforms such as AWS and Azure using familiar protocols such as iSCSI and NFS.

This approach to cloud storage has both benefits and drawbacks. On one hand, SoftNAS allows companies to overcome data inertia easily without refactoring their applications. On the other hand, an application can only be considered cloud native when it is designed to take advantage of the elasticity of services and resources made available by public cloud platforms like AWS S3 and Azure Blob Storage. SoftNAS helps customers bridge this gap with a number of features.

SmartTiers

SmartTiers is meant to leverage multiple cloud storage services and aggregate them into a single target for applications that are not able to utilize cloud storage services natively. With SmartTiers, data can be automatically aged from the quickest and most expensive tier of storage made available to the application down to lower-cost, longer-term storage. Think of it as putting hot data in flash storage, cool data in traditional block storage, and cold data in object storage such as S3.
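
SoftNAS did not share the internals of the tiering engine, so the snippet below is just my own illustration of what age-based tier selection boils down to; the tier names and thresholds are invented for the example and are not SoftNAS settings.

from datetime import datetime, timedelta
from typing import Optional

# Invented tiers, ordered hot to cold, with illustrative age thresholds.
TIERS = [
    ("flash", timedelta(days=7)),            # hot: recently accessed data
    ("block-storage", timedelta(days=90)),   # cool: occasionally accessed data
    ("object-archive", None),                # cold: everything older
]

def pick_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
    """Return the storage tier a piece of data should live on, based on age."""
    now = now or datetime.utcnow()
    age = now - last_access
    for tier, max_age in TIERS:
        if max_age is None or age <= max_age:
            return tier
    return TIERS[-1][0]

print(pick_tier(datetime.utcnow() - timedelta(days=30)))  # -> "block-storage"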

UltraFast

UltraFast is the SoftNAS answer to the problem of network links with unpredictably high latency and packet loss. Using bandwidth consumption scheduling, SoftNAS claims UltraFast will achieve the best possible performance on networks with the aforementioned reliability problems. Performance of UltraFast is monitored through a dashboard and can also be measured on demand with integrated speed tests.

Lift and Shift

Lift and shift is a common term used to describe the process of moving data off a legacy platform and into the cloud without refactoring. It is often seen as an intermediate step toward eventually adopting a cloud-native architecture. SoftNAS helps customers achieve this by moving data to an on-premises appliance that continually syncs with another appliance in the cloud service of their choice. Synchronization of data can be accelerated by UltraFast. When the customer is ready to complete the migration of their application, only a final delta sync is needed and the most recent version of the data will be present in the cloud platform of their choice.

FlexFiles

FlexFiles is a SoftNAS feature based on Apache NiFi. It solves the problem of collecting and analyzing data in remote locations. IoT devices can generate extremely high amounts of data that cannot possibly be transferred back to a data center or public cloud over the types of WAN/Internet links available at most remote locations. By placing a SoftNAS appliance in the location where data is collected, FlexFiles allows customers to filter data locally. Once data is captured and filtered, only the data deemed necessary is transferred securely to the public or private cloud, where it can be acted upon (transformed, in SoftNAS terms) and processed by an application.
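
FlexFiles itself is built on NiFi data flows rather than hand-written scripts, so the snippet below is only my own sketch of the underlying idea: filter locally and forward only what matters. The sensor fields and threshold are made up for the example.

# Illustrative only: drop uninteresting readings before they cross the WAN link.
readings = [
    {"sensor": "pump-01", "temp_c": 41.2},
    {"sensor": "pump-02", "temp_c": 88.7},
    {"sensor": "pump-03", "temp_c": 39.9},
]

ALERT_THRESHOLD_C = 80.0  # invented threshold for the example

# Only out-of-range readings are worth shipping to the cloud for processing.
to_transfer = [r for r in readings if r["temp_c"] >= ALERT_THRESHOLD_C]
print(to_transfer)  # -> [{'sensor': 'pump-02', 'temp_c': 88.7}]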

Summary

The first reaction that some may have to SoftNAS is that the product is not cloud native and therefore not worth their time. I would caution against this line of thinking and encourage taking time to consider the use cases of SoftNAS’s solutions. Much of what SoftNAS does enables customers to move large amounts of data without refactoring their applications for the public cloud. This can be extremely valuable for organizations that do not have the time or skills in house to completely rearchitect the applications that their business relies on for critical operations.

Yes, if you are moving your data to the cloud you would be better off adopting a cloud-native architecture long term. But if you have an immediate need to move your data off site, a solution like SoftNAS will remove the barrier to entry that can exist in many organizations and serve as an on-ramp to the cloud.

One More Thing

While presenting at Cloud Field Day 4, SoftNAS mentioned that they also have a partnership with Veeam that allows the platform to be used as a target for synthetic full backups. There was not enough time for a deep dive on this functionality during the session. I have reached out to SoftNAS and hope to get more information soon and follow up with some specifics on the offering.

Disclaimer: Although this is not a sponsored blog post, much of the information used for it was gathered at Cloud Field Day 4. All of my expenses were paid by Gestalt IT, which coordinates all Tech Field Day events.

From Automation Noob to…Automation Noob with a Plan

Note: This post is being published at the same time as a lightning talk of the same title being delivered at Nutanix .NEXT 2018. It contains links to the various resources mentioned during the talk.

I’ve been working in IT for 15 years and I think my story is very similar to that of many of my peers. I had some programming courses in college, but decided that I was more interested in infrastructure and chose the route of a systems administrator over that of an application developer.

Early in my career most of my tasks were GUI or CLI driven. Although I would occasionally script repetitive tasks, that would usually consist of googling until I found someone else's script that I could easily change for my purposes. Most of the coding knowledge I had gained in college was either forgotten or quickly became outdated.

Fast forward to the current IT infrastructure landscape, and automation and infrastructure as code are taking over the data center. Not wanting to let my skills languish, I have embarked on improving my skills and embracing the role of an infrastructure developer.

The purpose of this blog post is not to teach the reader how to do anything specific, but to share the methods and tools I've found useful as I attempt to grow my skill set. My hope is that someone undergoing the same career change will find this useful. I've heard many tips and tricks over the years, some of which I have taken to heart as I work toward my goal. I will devote a bit of time to each of these:

  • "Learn a language."
  • “Pick a configuration management tool.”
  • “Learn to consume and leverage APIs.”
  • “Have a problem to solve.”

I’m going to take these one by one and break down my approach so far.

Learn a Language

When I first heard this I was unsure which language I should learn and where I should start. I actually cheated a bit on this one. I chose two languages: Python and PowerShell. I chose these based on the fact that they are both powerful, widespread, and well suited to most tasks I would want to automate.

PowerShell

To get away from googling other people's PowerShell scripts and actually create something myself, I wanted to understand the language and how it handles objects.

I'd heard many mentions of the YouTube series "Learn PowerShell in a Month of Lunches" by Don Jones. I made my way through this series and have found it very valuable in understanding PowerShell. There is a companion book by Don and Jeff Hicks available as well. I have not purchased the book myself, but I have heard good things.

Another great way to learn PowerShell or any other number of technologies is Pluralsight. Jeff Hicks, one of the authors of the previously mentioned book, also authored a course named “Automation with PowerShell Scripts.” I am still making my way through this course but like pretty much everything on Pluralsight it is high quality. If you have access to Pluralsight as a resource I highly recommend taking advantage of it.

Python

I was even less familiar with Python than I was with PowerShell before making the decision to enhance my automation skills. Although I understand general programming principles, I needed to learn the syntax from the beginning. Codecademy is a great, free resource for learning the basics of a language, and I made my way through their Learn Python course just to get the basics under my belt.

Codecademy was a great way to get started understanding Python syntax, but it left me with a lot of questions about actual use cases and how to start scripting. Enter another "freeish" resource, Packt. I say freeish because Packt gives away a free ebook every day. I check this site daily and have noticed that some books pop up multiple times. Many of these are Python books, and one that I have been spending my time on in particular is Learning Python by Fabrizio Romano. My favorite method of learning is to have a Python interpreter and the book open in side-by-side windows on my laptop. Not pictured is the code editor I keep open in the background or on another monitor.

[Screenshot: the Learning Python ebook and a Python interpreter side by side]

Another resource worth mentioning is Google’s free Python course. I’ve only looked at this briefly and have not spent much time on it yet. It appears to be high quality and easy to follow, and at $0 the price is right!

Pick a Configuration Management Tool

There are many of these out there and choosing the right one can be difficult. What makes one better than another? You've got Puppet, Chef, and Ansible for starters. Fortunately for me, the decision was easy, as my employer at the time was already using Puppet, so I just decided to dive in.

The first thing I did was download the Puppet Learning VM. This is a free VM from Puppet that walks you through learning the functionality of Puppet, not just by reading the included lessons (pictured below), but by accessing the VM itself over SSH and actually interacting with Puppet.

[Screenshot: a lesson in the Puppet Learning VM]

You can run this VM in a homelab or even on an old PC or laptop, provided you install something like VMware Workstation or VirtualBox. Learning Puppet takes place in the form of quests. You are given an objective at the end of a module that you must achieve within the Puppet VM, and a progress indicator on the screen informs you when you have completed various tasks. This is one of the coolest "getting started" guides I have ever come across, and I cannot recommend it highly enough.

As great as the Puppet Learning VM was, I wanted to strike out on my own and really get to know the product. To do this, I used old hardware at work to set up a lab and mirror our production environment as best I could. When things didn't work, my troubleshooting efforts usually led me to the Puppet User Group on Google.

This is a great resource for anyone who has questions while trying to learn Puppet. I’ll be honest and admit that I’m not very active in this group. I mostly just subscribe to the daily digest, but reading other people’s questions and seeing how they resolved their issues has been very helpful for me.

Where things got really interesting with Puppet was when I broke my vCloud Director lab entirely. By recycling many of the Puppet modules and manifests from my production environment, I managed to overwrite the response file of my vCD lab. These response files are specific to each installation of vCD and are not portable. Although this was a lab and I could have just started over, I was determined to fix it in order to get a better understanding of vCD. This taught me an important lesson about paying attention to the entirety of the configuration you will be enforcing, regardless of the tool you are using.

Learn to consume and leverage APIs

This one is the newest to me and I am only getting started. Recently vBrownBag hosted a series called "API Zero to Hero" and I watched every episode. I have also downloaded Postman and use it to play with an API every time I want to learn a little more.

When it comes to actually doing something with an API, I am interested in leveraging the API in NetBox, the excellent DCIM tool by Jeremy Stretch of DigitalOcean, to provision and assign things like IP addresses and VLANs and save myself some manual work.
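
I have not built this integration yet, so treat the snippet below as a rough sketch of the kind of call I have in mind rather than finished tooling. The NetBox URL and token are made up, and exact endpoint paths can vary between NetBox versions; it simply lists active IP addresses using NetBox's token-based REST API.

import requests

NETBOX_URL = "https://netbox.example.com/api"  # hypothetical NetBox instance
HEADERS = {
    "Authorization": "Token 0123456789abcdef0123456789abcdef01234567",  # placeholder token
    "Accept": "application/json",
}

# List active IP addresses so a provisioning script can see what is already taken.
resp = requests.get(
    NETBOX_URL + "/ipam/ip-addresses/",
    headers=HEADERS,
    params={"status": "active"},
)
resp.raise_for_status()

for ip in resp.json()["results"]:
    print(ip["address"])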

Have a problem to solve

All the learning in the world won't do you much good if you don't apply it. Shortly after acquiring a little bit of PowerShell knowledge, I felt comfortable enough to write some basic scripts that helped me save some time in the Veeam console. I delivered a vBrownBag tech talk on this at VMworld 2017, and you can view it in the video below if you are interested.

Long story short: I wanted to quickly change a single setting on dozens of Veeam backup jobs. Rather than click through all of them, I dug through the Veeam PowerShell documentation and tested things out until I came up with seven lines of code that would toggle storage integration on or off for all backup jobs.

Bonus Tips

Now that I’ve walked through my approach to the various pieces of advice I received over the years, I’m going to leave you with a couple more.

Pick a code editor and use it

We are long past the days of quickly editing .conf files in vi or parsing data in Notepad++. There are numerous code editors available, and I've tried a few, like Sublime Text and Atom. The one I've settled on is Microsoft Visual Studio Code. It's robust, lightweight, and extensible. What more could you want?

Use version control

Project.ps1, ProjectV2.ps1, and ProjectV2final.ps1 are not acceptable change tracking in an era of infrastructure as code. It's important to learn and use version control as you create and update your automation projects. The most popular and widely used tool for this is Git, and more specifically GitHub if you want to host your code publicly, or GitLab if you want to host the repositories yourself.

To get started learning Git, once again I would recommend starting with Codecademy if you are completely new. They have a Learn Git course that is a suitable introduction. If you already have the basics handled, then you may want to learn some of the more advanced concepts outlined in the vBrownBag #Commitmas series.

Making sense of it all

At this point you can tell that I have my work cut out for me. I have listed several different skills that I am trying to learn or plan on learning to stitch together into a useful skill set. If you find yourself in the same situation, the best encouragement I can give you is to keep at it.

You may not be able to sit and learn everything you need without interruption. I certainly haven’t. But whenever I have some spare time available I go back to the latest course or YouTube series I’m working on and try to make progress. If you are reading this post and find yourself in a similar position, I encourage you to do the same and keep at it!

Slides from the lightning talk related to this post are available here. Links to all the resources from this blog post are embedded in the slides as well.

News from Nutanix .NEXT 2018

Nutanix is holding their annual .NEXT conference in New Orleans this year and has made several new announcements and enhancements to their platform. I will be highlighting some of my favorites below.

Flow

Flow is the name of the Nutanix software-defined networking (SDN) offering. As of version 5.6, Nutanix had the capability to perform microsegmentation of workloads. With the release of Flow, Nutanix will be extending their SDN capabilities to include network visibility, network automation, and service chaining.

Flow is available for customers using the Nutanix Acropolis Hypervisor (AHV). There will be no additional software to install. Many of the SDN enhancements coming to AHV are a result of the recent Nutanix acquisition of Netsil.

If you are in New Orleans for .NEXT this week there are a few sessions/labs that will be a great opportunity to learn more about SDN in Acropolis.

ERA

Era is Nutanix's first foray into PaaS and is a database management solution. Using Era, Nutanix customers will have the ability to manage their databases natively on their Nutanix clusters.

Era's Time Machine feature will allow customers to stream their database to their Nutanix cluster and clone or restore to any point in time, up to the most recent transaction.

The first release of Era will support Oracle and PostgreSQL databases.

Long term, the Era vision aligns well with the overall Nutanix philosophy of giving customers choice. Having the ability to manage a database of the customer's choice in the cloud of their choice is the reality Nutanix is aiming for with Era.

Beam

Beam is the result of the Nutanix acquisition of Botmetric. Beam is targeted at giving customers visibility and control of cost across multiple clouds. In the first release, Beam will support Nutanix private clouds as well as AWS and Azure, with GCP support coming in a future release.

It's clear from the announcements at .NEXT 2018 that hybrid and multi-cloud strategies are the goal for Nutanix and their platform: giving customers freedom and choice, and the ability to put the right services in place to enable their business to succeed in the 21st century.

How to Create a DellEMC XC CORE Solution With the Dell Solutions Configurator

Nutanix and DellEMC announced a new way to purchase a Nutanix solution on Dell hardware this week in the form of Dell XC CORE. Customers can purchase hardware delivered to provide a turnkey Nutanix experience while purchasing the software separately, preventing a license from being tied to a specific appliance.

For Nutanix partner resellers, there has been a bit of confusion regarding how we can configure and quote XC CORE nodes for customers seeking this option. After a little bit of searching, I have completed a couple of these configurations for my customers.

It looks like some partners have different versions of the Dell configurator, with some allowing them to select these options at the time of solution creation. The version I have access to does not, so I had to dig a bit to find where I could configure XC CORE nodes.

New Solution

After creating a new solution, I navigated down to the Dell XC nodes that were previously available to me.

[Screenshot: XC node selection in the Dell Solutions Configurator]

By selecting a Dell XC node that was listed in the XC CORE datasheet and expanding the first item ("XC640ENT" in my case), I found a new option: Dell EMC XC640ENT XC CORE.

[Screenshot: the Dell EMC XC640ENT XC CORE option in the configurator]

The rest of the configuration was as familiar as any Nutanix configuration and I was able to complete the solution according to the sizing data I had already configured.

Proud to be Selected as a Veeam Vanguard 2018

I woke up on the morning of March 6, 2018 to a very pleasant surprise: the email pictured below from Rick Vanover, Director of Product Strategy at Veeam Software, inviting me to be a member of the Veeam Vanguard program.

[Screenshot: the Veeam Vanguard invitation email]

Needless to say I immediately thanked Rick for selecting me and accepted his invitation. I had the pleasure of meeting several members of the Veeam Vanguard when attending VeeamON last year. A few of them encouraged me to apply and I want to specifically thank Matt Crape, Rhys Hammond, and Jim Jones for their support.

 

I gained my VMCE certification while in New Orleans and have since led the majority of Veeam pre-sales discussions on behalf of my employer. I also had the opportunity to deliver a vBrownBag tech talk on the VMTN stage at VMworld last year about my experience automating repetitive tasks in Veeam Backup & Replication using the Veeam PowerShell snap-in. I look forward to continued involvement in the Veeam community and to being an active participant in the Veeam Vanguard program.

 

Building an On-Demand Cloud Lab

This time last week, I was in Palo Alto, California for the VMUG Leader Summit, which was great. Prior to the summit, I submitted an entry into a new VMUG program called SPARK.

To participate in SPARK, a leader was asked to submit two PowerPoint slides outlining an idea for a way to provide value back to the VMUG community. Additionally, we were asked to submit a video that would be played at the summit before all leaders present voted for their favorite submission. The Indianapolis VMUG submission was an on-demand cloud lab built on VMware Cloud on AWS.

It was a tight race, but I'm proud to say that Indianapolis VMUG won the first VMUG SPARK contest and will receive $5,000 in funding to help make this idea a reality.

Here's the point of the post though: I can't do this alone, and in fact I don't want to. I want to involve the community to make it as great as it can be. I want to hear others' ideas about how we can create an infrastructure that can be spun up quickly and easily and help other VMUG members learn new skills.

We will be holding our initial planning calls after the new year at whatever date and time works best for the most participants. If you would like to participate, please reach out to me via Twitter. My DMs are wide open and all are welcome. We can make a great platform for everyone to learn on if we work together!

 

Backup pfSense from Ubuntu 14.04

Back in mid-2016, GitHub user James Lavoy released a Python script to back up pfSense. I was excited because pfSense didn't (and still doesn't) have a built-in backup scheduler. Sure, you can back up manually from the GUI, but I don't trust myself to remember to do that every time I make a change to my config.

I downloaded the script and changed the necessary options to point it to my pfSense box and supplied the credentials necessary for backup. I unfortunately received the following error:

AttributeError: 'module' object has no attribute '_create_unverified_context'

I quickly found out that this was due to my OS running Python 2.7.6, while SSLContext was introduced in Python 2.7.9. I reported this issue on James' GitHub and he suggested installing Python 2.7.9 or later from another PPA, as coding around the issue would require a complete rewrite of the script.
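
For background, the missing attribute is what lets a script skip certificate verification when talking to a pfSense box with a self-signed certificate, and it only exists in Python 2.7.9 and later. A minimal sketch of the idea (not the actual code from James' script, and the address is a placeholder):

import ssl
import urllib2  # Python 2 standard library

# ssl._create_unverified_context() was added in Python 2.7.9; calling it on
# 2.7.6 raises the AttributeError shown above.
context = ssl._create_unverified_context()
response = urllib2.urlopen("https://192.168.1.1", context=context)
print(response.getcode())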

James has updated the README.md to indicate this requirement, but his instructions are a bit out of date. The PPA specified is no longer being maintained and the fkrull/deadsnakes PPA should be used instead. To install Python 2.7.12 and mechanize and meet all the requirements of this script, run the following commands:

sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sudo apt-get install python2.7 python-pip
pip install mechanize

Before running the script, make sure you edit it to point to your pfSense box's IP address (and HTTPS port if necessary) and supply the correct credentials. Whether you are running the script manually or on a schedule, you will need to specify the path to Python 2.7.9+, as it is not necessarily what the "python" command invokes by default. For example, I have to run:

/usr/local/lib/python2.7.12/bin/python /media/backups/pfsense-backup-master/pfsense_backup.py

In order to automate this, you’ll want to add a cron job. Do so by editing your crontab:

crontab -e

My crontab entry looks like this:

40 14 * * * /usr/local/lib/python2.7.12/bin/python /media/backups/pfsense-backup-master/pfsense_backup.py

This will run the script every day at 2:40 pm. Why 2:40? Why not?

Some may be wondering why I didn't just upgrade my VM to Ubuntu 16.04. I tried, but many services failed to load or lost their config after the upgrade, so I rolled back to a snapshot. 14.04 will continue to receive updates until 2019 as it is an LTS release, and I anticipate migrating off the server by then anyway.

If you are like me and don’t need to be on the latest and greatest OS, but want to be able to use scripts like this, hopefully this will help.