Although Kubernetes has been around for quite some time, the open-source container orchestration platform has recently gained traction with public announcements of support from big vendors like Google Cloud and VMware. The rise of continuous integration/continuous delivery (CI/CD) software development continues to drive the greater awareness.

As enterprises consider moving more workloads to the cloud, it’s a great time to consider approaches that include containerization orchestrated by Kubernetes instead of just lifting and shifting traditional VMs that have been architected to run on premises in a data center. Kubernetes provides more agility and control of modular computing containers running microservices that can scale with demand. 

Here’s a quick summary of how Kubernetes is deployed and managed.

Deploying a Kubernetes Cluster

The first step in getting started with Kubernetes is to deploy at least one controller node and typically two or more worker nodes as virtual machines running Kubernetes. This set of nodes, the cluster, is where all configuration and containerized workloads are deployed and run; together, the nodes make up the compute capacity available for containers to run.

Major public cloud providers offer managed Kubernetes clusters as a service. There's Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS). In addition, other vendors provide native Kubernetes cluster configurations and services. Once the clusters are set up and running, you can manage the orchestration and configuration of containers, with Docker typically serving as the container service.

Kubernetes Pods and Nodes

The nodes in a Kubernetes cluster act as workers for the most part, with a controller (master node) providing the instructions for each node. Each node runs pods that can change dynamically under the controller's management: they can scale up or down, and they can be started and stopped. As pods are scheduled to start and stop on the cluster's worker nodes, they run the containers specified for the current desired state of the cluster. Each pod runs one or more containers, and containers are where applications and services run at scale in the Kubernetes environment.
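A minimal pod manifest shows the relationship between a pod and its containers; the names and image below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative pod name
  labels:
    app: hello
spec:
  containers:            # a pod wraps one or more containers
    - name: web
      image: nginx:1.25  # any container image would do here
      ports:
        - containerPort: 80
```

The scheduler places a pod like this on one of the worker nodes, where its containers run until the pod is stopped or rescheduled.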

Managing a Kubernetes Cluster

Kubernetes clusters are managed using YAML ("YAML Ain't Markup Language") files. These are simple text files that declare the desired state of a Kubernetes cluster and can be managed in a central repository outside of the Kubernetes environment. For example, DevOps engineers can keep a repository of YAML files in a GitHub account, with several team members each working on some aspect of the environment to maintain and run containers in the target cluster. The files can be retrieved and applied from a command line or from any Kubernetes management platform.
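As a sketch of what such a file might declare, here is a hypothetical Deployment describing a desired state of three identical pods (the names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # desired state: three pods at all times
  selector:
    matchLabels:
      app: web
  template:                # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Applying it with `kubectl apply -f web.yaml` asks the cluster's controllers to reconcile the running state toward what the file declares.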

CloudBolt and Kubernetes

CloudBolt supports Kubernetes orchestration for enterprises in the following ways:

See also: Orchestrating Docker Containerization with Kubernetes


Agile software development practices have emerged as the preferred way to deliver digital solutions to market. Instead of moving through defined stages of software development, sometimes referred to as the "waterfall" approach, software changes ship continuously. We now consider almost any software delivery process agile as long as it combines development and operations (DevOps) and releases are frequent.

The DevOps term is often used even when the process looks more like mini-stages compared to the past, or simply means the developers who write the code are also the ones who deploy it to production. A DevOps engineering team can include software coders, security experts, and IT system admins.

DevOps teams have the objective of a continuous integration, continuous delivery (CI/CD) pipeline of code from "story" to production; the more of the process that is automated, the faster the delivery. Stories are created from customer or user-driven needs, or can be part of a product vision for new capabilities. The story describes the intended behavior and user experience of a software solution once the code goes live in production. Because delivery is continuous, the stories can change over time, and the code is modified and delivered accordingly.

Besides coding expertise, DevOps engineers use many IT tools that help on the infrastructure provisioning side as well as on the software coding side. In this post, we'll look at some of the key automation and provisioning tools.


As a configuration management tool built around a cooking metaphor, Chef provides a way to deploy "recipes" for application configuration and synchronization. Recipes can be combined and organized into "cookbooks" that modularize the management of configuration automation in Chef.

Chef deployments consist of three components: servers, workstations, and clients. The servers manage the environment, workstations are used by developers to create and deploy cookbooks, and the clients are the managed nodes that are the targets for configuration.
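Recipes themselves are written in Chef's Ruby-based resource DSL. A minimal sketch, with an illustrative package and file path, that a chef-client run on a managed node would converge:

```ruby
# Hypothetical recipe: install nginx and keep its config and service in line.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'            # template shipped in the same cookbook
  owner  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]' # reload only when the file changes
end

service 'nginx' do
  action [:enable, :start]
end
```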


Released in 2005, Puppet has been around the longest as a configuration automation tool. Puppet uses a declarative approach: you define a desired state, and Puppet executes the changes to reach it. There are controller and agent components. The agent on a managed client polls a Puppet controller on another node (the master) to see whether it needs to update anything based on what is declared in Puppet modules.

Puppet uses its own configuration language, originally based on the Nagios file format. The state of the configuration is defined with manifests and modules that are typically shared in a repository such as GitHub. The language also accepts Ruby functions as well as conditional statements and variables.
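A manifest sketch, with an illustrative package and service, shows the declarative style; the agent converges the node to this state rather than running steps in order:

```puppet
# Hypothetical manifest: keep the NTP service installed, configured, running.
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  source  => 'puppet:///modules/ntp/ntp.conf', # file shipped in the module
  require => Package['ntp'],
}

service { 'ntp':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ntp.conf'],          # restart on config change
}
```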


As the most popular configuration management tool among DevOps engineers, Ansible doesn't require agents to run on client machines. Instead, it connects to the managed nodes over secure shell (SSH) and issues commands directly on the virtual machine. The Ansible management software can be installed on any machine with Python, and it's popularly noted that DevOps engineers run Ansible updates right from their Mac laptops.

For an update to occur, the change must be "pushed" to the managed node, and the approach is procedural as opposed to the declarative approach of Puppet.
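A short playbook sketch shows that procedural style; the host group and package are illustrative, and each task runs in order over SSH on the managed nodes:

```yaml
# Hypothetical playbook, run with: ansible-playbook -i inventory site.yml
- hosts: webservers            # group defined in the inventory file
  become: true                 # escalate privileges on the managed node
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```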


For infrastructure provisioning and configuration automation, Terraform builds infrastructure in the most popular public and private cloud environments and can be managed with versioning. As an infrastructure-as-code (IaC) DevOps tool, it lets environments be built from any device running Terraform; connectivity to each target environment is specified as a resource in the configuration.

Terraform plans are declarative and describe the infrastructure components necessary to run specific apps, or even a whole virtual data center of networking components, and they can be integrated with other configuration and management tools. Terraform determines what has changed and then creates incremental execution plans that are applied as needed to achieve the desired state of infrastructure and applications.
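A configuration sketch, with a placeholder region and AMI ID, shows the declarative style; `terraform plan` computes the incremental changes and `terraform apply` executes them:

```hcl
# Hypothetical configuration: a single EC2 instance in AWS.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-01"
  }
}
```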


As an IaC DevOps tool for Amazon Web Services (AWS), CloudFormation is a service that helps configure EC2 instances and Amazon RDS DB instances from a template that describes everything that needs to be provisioned. CloudFormation is specific to AWS and helps AWS users who don't want to configure the backend complexity that CloudFormation handles automatically as a service. CloudFormation itself is free for anyone subscribed to AWS; you pay only for the resources it provisions.
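A minimal template sketch (the image ID is a placeholder); handing it to CloudFormation provisions everything it describes as a single stack:

```yaml
# Hypothetical CloudFormation template for one EC2 instance.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0 # placeholder AMI ID
```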

CloudBolt and DevOps Tools for Success

DevOps tools used to configure infrastructure and the applications and services running on it vary by enterprise, and often across different teams in the same enterprise. Having visibility and control of the DevOps tools used to configure resources, and of the resources themselves, gives IT admins using CloudBolt a faster way to find out who's using what, and where, in a potentially complex and siloed hodge-podge of technology.

To see for yourself how our platform helps you reach DevOps Success, download CloudBolt for FREE today!

As enterprises and agencies continue to expand on cloud initiatives, there's an increasing number of choices from public and private cloud offerings that integrate with existing on-premises data center virtualization technologies and configuration management tools. Choices range from lifting and shifting basic virtual machines (VMs) from one workload environment to another, to re-architecting legacy solutions altogether with distributed microservices running in containers, alongside new innovation strategies already underway. A hybrid cloud approach continues to be extremely relevant.

Though the move might seem daunting, strategic workload placement in the cloud, if done carefully, can dramatically reduce costs, speed up development, and accelerate the return on investment (ROI) for digital initiatives. Gartner identifies now as the crucial time to consider moving to the cloud in its Top 10 Trends Impacting Infrastructure and Operations for 2019.

The main driver for transitioning to the cloud (for those who haven't already) is the ability to consume data-driven insights from just about anywhere, from internet of things (IoT) devices intertwined with real-time, responsive applications that help enterprises and organizations leap ahead of what they've done in the past. Distributed cloud-native architectures and edge computing make this all possible.

We now respond faster to consumer data, go to market with right-fit goods and services, and in some cases save lives with time-sensitive insights and action. For example, consider interactive wearable fitness apps like Fitbit and health apps like Noom that are enabled by digital innovation using cloud services. They both scale directly with usage and across geographical regions because of the cloud.

Cloud Architecture Approaches

As successful approaches to cloud adoption emerge from just about every sector almost everywhere, enterprises and institutions must keep up and hopefully win against their competition. There’s little room for failure when the competition is ready and willing to snatch up unhappy customers or when the failure puts lives at risk for healthcare and some agency initiatives. The US Department of Defense (DoD) has initiated a “Cloud Smart” plan to make sure cloud adoption works without catastrophic consequences. 

Every technology and associated vendor will have a spin, from open source solutions to enterprise-class incumbents like IBM, Dell, HPE, and BMC. There are also Splunk, ServiceNow, and New Relic as other emergent and dominant technology offerings to consider, each in its own best-of-breed swim lane, so to speak. Data centers have large footprints of VMware, with Nutanix and OpenStack emerging as private clouds on-premises.

While getting ramped up and running in the cloud, today's IT leaders can't overlook key decisions. As industries shift in this cloud and digital era, no one wants to make mistakes. Between trusting one source over another and trying to understand what's best for the organization, there's unfortunately no "one-size-fits-all" approach.

The approaches pitched all have merit and common elements. And a good approach is better than no approach, even if it is self-serving to some extent. 

Amazon Web Services (AWS) is betting on a "Well-Architected Framework" as its signature methodology for approaching cloud services. As an Advanced Technology Partner in the AWS Partner Network (APN), CloudBolt is ready to integrate the design principles as needed for any organization on the journey.

Getting Cloud Savvy

AWS cloud architects, developers, and other IT pros have spent a great deal of time and resources encouraging others to follow a framework for success. With input from hundreds of seasoned AWS Solution Architects and their customers, AWS pitches a Well-Architected Framework.

This framework has been evolving for a decade but more formally “as a thing” since 2017. 

Here's a quick introduction to the five pillars of the AWS Well-Architected Framework: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

AWS provides an incredible amount of detail for each of these five pillars and design principles for each one on their website. This introductory white paper, published in July 2019, is a good start.

As a quick summary of the pillars in action, check out how CloudBolt can help with specific aspects for each of them.  

Operational Excellence—The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. 

CloudBolt provides inventory classification and tagging for each synchronized resource in AWS as shown in this example:

Security—The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. 

CloudBolt provides role-based access controls (RBAC) and permissions to implement a least-privilege strategy for end user access. In this example, a Group Admin controls all aspects of the group permissions. 

Reliability—The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. 

In this example, CloudBolt provides a customizable framework of actions that can execute any business logic to meet reliability demands, such as notifying group admins when quota usage is over a threshold.

Performance Efficiency—The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. 

CloudBolt blueprints can provide standard access to efficient builds of automated resources without requiring any specific domain knowledge from the end users. 

Cost Optimization—The ability to run systems to deliver business value at the lowest price point.

CloudBolt provides many ways to build in cost optimization with workflows that automate best venue execution for cost as well as power scheduling and the enforcement of expiration dates when necessary. In this example, for AWS, CloudBolt can recommend reserved instances after analyzing spending patterns. 

Although the reserved instance example shows only minimal savings in a lab environment, consider the dollar amount as a percentage. The recommended reserved instances would yield $39.37 per year in savings, roughly a 36% cost reduction for the year (39.37/109.47).

For more detailed information about each of the pillars, check them out here.

CloudBolt and AWS

CloudBolt provides visibility and governance of the workloads you can deploy and manage in AWS. CloudBolt's powerful blueprint orchestration and provisioning capabilities provide a way for experts on the AWS platform to curate and deliver right-fit resources to end users who just need one-click access to infrastructure, without needing to know or understand the backend technology.

For more information on how CloudBolt works alongside AWS, request a demo today!

ServiceNow, as an enterprise Software-as-a-Service (SaaS) cloud offering, delivers and manages just about everything. They lead the pack for Information Technology Service Management (ITSM) tools, with a long history among other IT vendor tools for service desk and incident management. Their "Now" platform runs IT, employee, and customer digital programs. For IT cloud provisioning, ServiceNow includes a Cloud Management application in its IT Operations Management (ITOM) suite.

It makes sense to integrate with ServiceNow whenever and wherever you can as an independent software vendor (ISV). The question is where and who’s leading the dance?

In fact, dancing by yourself, although not necessarily a bad thing, might just send you off the floor. IT shops invested heavily in ServiceNow want to achieve the promised business value so all enterprise consumers of IT can access and manage whatever they need from any connected system. 

Their core value comes from two things. First, they maintain an industry standard “system of record” for everything that is created, deleted, changed, and archived in what’s called a Configuration Management Database (CMDB). Second, they have an incredibly easy-to-use interface that is powered by a SaaS application that accounts for and manages usage for the cost of the digital services across all lines of business for the enterprise.

ServiceNow Attracts the CMDB-Friendly, and Like-Minded

Back to dancing. What makes a ServiceNow integration possible? If the CMDB gets updated, chances are you’ll at least make it to the dance floor. For example, in this workflow for a CloudBolt provisioning process, an Action in the order form will update the CMDB in ServiceNow: 

Actions in CloudBolt provide orchestration steps in the provisioning process and can be customized to the exact specifications required by the enterprise. The key value is that a system of record is maintained in one place, so troubleshooting scenarios are streamlined and it is easy to identify at what stage, and where, a configuration item (CI) is impacted.
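One common shape for such an Action is a step that records the new configuration item through ServiceNow's Table API. The sketch below only builds the request rather than sending it; the instance name, table, and field values are hypothetical, and a real integration would add authentication and error handling:

```python
import json

# Hypothetical sketch: build (but do not send) a ServiceNow Table API request
# that records a newly provisioned VM as a CMDB configuration item.
# Instance name, table, and field values are illustrative only.
def build_cmdb_insert(instance, table, record):
    url = "https://{}.service-now.com/api/now/table/{}".format(instance, table)
    headers = {"Content-Type": "application/json", "Accept": "application/json"}
    payload = json.dumps(record)  # body for an authenticated HTTP POST
    return url, headers, payload

url, headers, payload = build_cmdb_insert(
    "acme",            # hypothetical ServiceNow instance
    "cmdb_ci_server",  # CMDB table for server CIs
    {"name": "web-01", "ip_address": "10.0.0.12"},
)
print(url)
```

POSTing that payload to the URL (with proper credentials) is what keeps the CMDB in sync as each resource is provisioned.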

ServiceNow Cloud Catalog

ServiceNow provides a way to include a catalog of cloud resources from popular vendor sources such as AWS, Azure, and VMware, so that developers and anyone needing virtual infrastructure can request resources and have them provisioned if approved. ServiceNow uses the term "Launch a Stack" on the ServiceNow Cloud User Portal page, as shown in the following example.

What goes on next is up to the IT admins and system administrators in the enterprise. They can choose to develop a workflow to provision all the resources using ServiceNow or they can integrate with other solutions to achieve the same goal. They must decide if another solution that integrates with ServiceNow might be a better alternative than just sticking with the platform. 

ServiceNow Integration Decisions

Integration with ServiceNow might include either a “front-end” or “back-end” capability. 

Front End

When a ServiceNow cloud consumer "Launches a Stack," the user interface (UI) could switch to a third-party portal with a service catalog where ordering and managing resources takes place. This works for organizations that already have a robust self-service provisioning tool they want to keep rather than configuring everything all over again in ServiceNow. A key part of this integration should include inventory management, accomplished by updating the CMDB that ServiceNow manages as the industry-standard system of record for IT processes. Any platform like ServiceNow will typically have a CMDB, and the integration tool should update and "sync" with the enterprise's main system of record.

Back End

Another scenario for the integration is having the Cloud Management application in ServiceNow call another solution to do the orchestration, automation, and provisioning of cloud infrastructure resources. The ServiceNow enterprise consumer would not even notice that the "back end" is being run by another technology provider. For this decision to be wise, the integrated provisioning technology should either be superior to ServiceNow's, already in place, or both.

Extending Beyond Cloud Provisioning

Other third-party systems integrate with any of the wide range of workflows in ServiceNow as a starting point. For example, a monitoring solution that has identified a network, application, or infrastructure condition that the ServiceNow service desk needs to know about could be automated to generate a service ticket. Manual steps and even approval processes can be configured for all or some of the steps of any integration with ServiceNow. A service desk request could then kick off a cloud provisioning workflow that is completely automated.

CloudBolt and ServiceNow

CloudBolt can be part of any of the integration workflows in ServiceNow and has proven particularly useful as the provisioning engine for complex orchestration, automation, and provisioning workflows for the enterprise.

For more information on how CloudBolt works alongside ServiceNow, request a demo today!

Faster time to value for developers means that Jamf customers have new features in their hands before having to request them

Jamf has made all things Apple easier for individuals and organizations since 2002. Zach Halmstad, Jamf co-founder, originally worked full-time in the IT department at the University of Wisconsin-Eau Claire (UWEC), deploying, updating, and tracking over 400 student and faculty Macs. He soon partnered with Chip Pearson, who led a premiere Minneapolis IT consultancy, and they co-founded Jamf.

The core mission at Jamf is to empower all its customers to succeed with Apple and deliver the gold standard in Apple device management.

Jamf Business Challenge

Just over three years ago, Jamf, like most digital business enterprises, needed to go to market faster, delivering new features and solutions continuously. The problem was getting virtual machines (VMs) and test environments to developers fast enough as they streamlined their processes.

“We didn’t want to become the ‘no’ team,” as Jason Gamroth, Enterprise Services Manager at Jamf describes. “We are always striving to provide the best solutions for real problems that our customers have, and do it in a way that empowers them and gets IT out of the way.” 

Jason had come from a more controlled IT environment and getting secure, sanctioned infrastructure resources wasn’t trivial. Not everything was approved for one reason or another—cost, security, and performance all played a role in the decision.

Jamf developers needed more continuous integration and continuous delivery (CI/CD) solutions to match the pace of their more agile development schedules. They had a solid base of existing VMs, which were leveraged for much of their existing workloads, but keeping them up to date and issuing new ones for new development initiatives did not scale well for Jamf. They were in a fast-paced startup mode. Non-technical people at Jamf also had a tough time getting resources and did not have the expertise to provision for themselves.

Jason and his team at Jamf wanted a solution that enabled self-service provisioning while being efficient and user-friendly. From a technical perspective, he also wanted to orchestrate sophisticated, complex provisioning scenarios behind the scenes for the developers.

Getting to the Solution

Let's face it: Jason did not want his department to be the bottleneck in a thriving startup, where developers had to wait for VMs and then do a lot of post-provisioning work after their resources were running. As long as the builds were what they asked for, the developers did not care where they came from or how the backend complexity was configured. Using CloudBolt, the team went from IT champion to hero. They leveraged CloudBolt's ability to orchestrate, automate, and provision resources into standardized "application development stacks" that can be requested with just one click.

Since that time, the number of VMs they manage has more than tripled and they have successfully implemented many new initiatives using CloudBolt. In addition, not only the developers on the front lines, but also the security, support, and QA teams are now using CloudBolt to provision ready-made sets of VMs that can be configured quickly and maintained consistently. 

According to Jason, “CloudBolt was immediately straight-forward and elegant. We had tried other similar products but got further in 30 minutes with CloudBolt than we did in 3 months with another solution.”

Highlights of CloudBolt at Jamf

Jamf leveraged CloudBolt to do the following:

Looking Ahead

CloudBolt makes it easier for Jamf to roll out access to resources in AWS and Google Cloud. Super users now have access to these public resources as sandboxes. The plan is to configure blueprints in CloudBolt that will be accessed through the same easy-to-use self-service portal they use today for their on-premises VMs.

To see CloudBolt's self-service enablement for yourself, request a demo today!


The race to build digital solutions that entice us with new experiences continues its momentum across ride sharing, home sharing, food delivery, and financial planning. Energized by the ability to build apps that scale quickly with demand, digital disruptors have made it possible to rent cars by the hour from private owners, with drivers picking up a vehicle on demand thanks to an Internet of Things (IoT) sensor and a mobile app. Financial planning disruptors now make it possible to buy and sell partial shares of stocks, based on our moods or core beliefs, at any time of day.

Recently, this digital momentum surprised me when I found out that you can now order fast food and convenience groceries for delivery from 7-Eleven in minutes, wherever you are—in a park, on the beach—as long as you're within the required radius of at least one 7-Eleven store. With a mobile app and a network of delivery pros from DoorDash, you can have a Slurpee and a pack of Twizzlers in less than 30 minutes. The DoorDash drivers work whatever hours they want and are incentivized by real-time market analysis and demand.

A complex digital ecosystem supports all of this interconnectedness, with apps running on individual mobile devices for both the consumers and the delivery people. There are also the backend systems of complex application architectures for the retail chains and, of course, the financial transaction processing, with connectivity to bank accounts for payment through a network of providers that deposit funds in the banks of the retailer and the delivery person. Add to all of that the supply chain apps used to produce and merchandise end products like Slurpees, groceries, and take-out meals, and you have a digital universe humming along like no one could have imagined even a few years ago. Technology is the business.

The API Economy Enables Extensibility 

What enables this digital disruption? It’s the application programming interfaces (APIs) that provide a way for applications and digital endpoints to transfer data. That’s it. As simple as that sounds, it’s a lot more complex than meets the eye.

For example, how does one device or application respond to a request from a mobile device to add more french fries to your takeout order? There are wireless networks and protocols to communicate through, and Internet gateways that require security "handshakes" when entering and exiting one domain into another. How does it all work? You may have already guessed it, but if not…

It's all the coding that software engineers have been doing, with a maniacal focus on connecting, building, and even monetizing APIs, that supports this interconnectivity.

In order for software to thrive in this new digital universe, it must have, at the very least, an "open API" that allows other digital assets to interact with it. The authors of the software must expose something public and referenceable for other systems to connect to, while locking down the proprietary portion that is licensed for sale. In other cases, "open source" software is used, where all the code is exposed for developers to use and modify as they see fit.
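To make the request/response pattern behind every such API concrete, here is a toy sketch using only the Python standard library; the endpoint path and JSON payload are invented for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny pretend "takeout order" API serving JSON over HTTP.
class OrderAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/order/42":  # hypothetical endpoint
            body = json.dumps(
                {"order": 42, "items": ["fries"], "status": "preparing"}
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence per-request logging
        pass

# Start the server on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), OrderAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The "client app" side: one HTTP GET, one JSON response.
with urlopen("http://127.0.0.1:{}/api/order/42".format(port)) as resp:
    data = json.load(resp)

server.shutdown()
print(data["status"])
```

The same pattern, scaled up with authentication, gateways, and real payloads, is what connects a mobile ordering app to a retailer's backend.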

This interconnectivity is characterized as "extensibility": the ability of an application to extend its reach, value, and data to another system and, likewise, to consume or integrate the value, data, or functionality of another system. The execs at 7-Eleven flipped this digital extensibility switch on years ago with the adoption of handheld merchandising apps that track usage patterns and make sure each store has the most relevant supply of whatever is cherished most in whatever community it serves. They've taken it one step further with the new delivery service, 7Now, that is in place for many locations.

Impact on Infrastructure in the Era of Cloud

Considering today’s rapidly changing digital environments and all the apps that support the disruption occurring almost everywhere, static infrastructure is no longer a viable solution. Enterprises now realize that planning, sizing, installing, and maintaining all the compute power to support the software code that runs the business is a losing proposition. By the time everything is approved with the proper specs, the market has changed. The big conundrum, though, is balancing the existing investment in all of the infrastructure still on premises for many seasoned enterprises and the need for agility in the cloud. Public cloud offerings, like AWS, Azure, and Google Cloud have made it easy to have compute power on demand so these disruptors can both build and pay as they go. The digital value that they create has a direct relation to its cost in real time. 

With a multitude of offerings for private and public cloud resources, disruptors can build just about any application and service delivery platform from just about anywhere to serve whatever need is out there. The big problem enterprises have today is that this disruption can go awry if not handled properly. 

Here’s a list of what happens when it goes wrong: 

To find out more about how to fix this, see our previous blog on Advancing a Cloud Control Strategy

A solid cloud management platform designed for complete cloud control will have extensibility to future and legacy technology, so that the resulting provisioned infrastructure can be traced back to central control on one platform. A key element of today's extensible platform is the ability to connect to APIs from any environment and include the technology in any upstream or downstream process. For example, CloudBolt's enterprise customers often use ServiceNow as the system of record for all infrastructure, maintained in a configuration management database (CMDB). The central platform must be able to update that record during the IT provisioning process.

CloudBolt Provides a Solid Foundation

CloudBolt provides these three elements for a solid IT provisioning process and more. 

For more information, read our CloudBolt Product Overview.

To try CloudBolt on your own, click here.

The hidden cost of cloud sprawl is similar to the unaccounted costs of tasks in our everyday lives, like shopping for food and household items. In many cases, our shopping habits are inefficient, but we don't take the time to analyze them thoroughly enough to take action.

We typically grocery shop at a variety of places. We have a convenience store around the corner usually at a higher price (managed services). We have choices from competing tiers of grocery store chains (public clouds) and an outlet-type store that requires bagging our own groceries (open source). The choice and variety are great for us as individuals and family shoppers. We can usually find what we need without relying on any one particular store (lock-in). Making a special trip to get something is not a hassle.

Total Cost of Ownership and Economies of Scale

Consider, however, that when buying groceries for significantly more people per household—or for more consumers of IT—a little more planning goes a long way in terms of savings. If you tally the per-person grocery bill when shopping for yourself or as a couple, it will most likely be larger than when shopping for a family of five or six. Include the cost of the time spent planning, coming and going, shopping in the store, and making decisions about what to buy. This is the overhead expense of shopping. No one typically thinks of themselves as an hourly employee when doing these domestic tasks, but the time adds up.

When we factor in the price of transportation, along with the time spent on all of the associated tasks, we have a pretty compelling case for why it costs a lot more per person when there are fewer people to shop for in a household. It's the same for IT. If you have people in your organization, individually or even in small teams, all going out to shop for IT resources from public and private cloud offerings, the overhead expense is factored in, and you end up with cloud sprawl and hidden costs.

Essentially, when you add more people to the list you're shopping for, trips can be planned in advance and made less frequently, and the added expense per person goes down. There's a business term for this phenomenon: "Economies of Scale." We can also apply the term "Total Cost of Ownership" (TCO) to this shopping scenario from end to end. It includes the planning and the time spent going to and from the store.

Squelch Hidden Costs for Cloud Sprawl

What can you do to squelch the hidden cost of cloud sprawl? Adopt a centralized platform as a gateway to all your cloud resources to help mitigate the individual, duplicated effort. No IT consumer will have to "shop around," and you can set up bulk ordering for any sets of resources common across several users and teams.

Taking it one step further, when enterprises grow and onboard franchises, satellite offices, or, in the case of banks, branch offices, they need a set of IT resources that are typically standardized. These are roughly the same for each store in a chain, or for whatever organizational entity exists at the front lines. You can see this in action when you visit two restaurants in the same chain: the menus are almost identical no matter where you go. The digital services that run behind the scenes are likely identical too.

Modern enterprises plan for the economies of scale associated with building out the IT infrastructure required to run these businesses. Instead of having each branch or store set up its own IT resources by business unit, or creating each one as a custom order, they configure them as "blueprints," where the resources are the same and just need to be spun up with locally relevant configuration settings. The blueprint grabs the resources from the various "stores" (the public and private cloud resources available), and the "shopper" just needs to click Submit. The resources are then provisioned accordingly. A lot of time is saved by not having each new branch or store procure these resources independently each time one is added to the portfolio.
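Conceptually, a blueprint is a fixed resource definition plus a handful of local parameters. Here is a hypothetical sketch of that idea (the resource shapes and parameter names are made up for illustration, not CloudBolt's blueprint format):

```python
from copy import deepcopy

# A standard "branch office" blueprint: identical resources everywhere,
# with a few locally relevant settings filled in per deployment.
BRANCH_BLUEPRINT = {
    "resources": [
        {"type": "vm", "role": "pos-server", "cpu": 2, "mem_gb": 8},
        {"type": "vm", "role": "file-server", "cpu": 2, "mem_gb": 16},
    ],
    "parameters": ["site_code", "subnet", "cost_center"],
}

def instantiate(blueprint, **params):
    """Return a concrete provisioning order: the blueprint's standard
    resources stamped with this site's local settings."""
    missing = [p for p in blueprint["parameters"] if p not in params]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    resources = deepcopy(blueprint["resources"])
    for resource in resources:
        resource.update(params)  # apply local settings to each resource
    return {"resources": resources}

# A new branch just supplies its local settings and clicks Submit.
order = instantiate(BRANCH_BLUEPRINT, site_code="NYC-042",
                    subnet="10.42.0.0/24", cost_center="CC-1187")
```

Every branch gets the same two servers; only the site-specific values differ, which is exactly what makes the rollout repeatable.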

Finally, once the blueprinted IT resources are in place, they can be updated regularly all at the same time if necessary. If a critical security issue is uncovered in a particular IT resource running in that environment, a patch could proactively be administered to the whole set to avoid any further problems.

CloudBolt provides exactly the type of platform that can help you achieve these economies of scale. You have one place to manage all the resources that IT consumers need as “shoppers.” You can do all of the upfront planning and significantly lower the TCO for each consumer-owner of IT resources.

For more information about advancing a cloud control strategy that not only helps mitigate hidden costs but also tackles security and user access, see Advancing a Cloud Control Strategy.

To see how CloudBolt can help you manage the hidden cost of sprawl, request a demo today!

Car Buying—The Romance

Getting a new car from a dealer is often one of the most exciting times for any family or individual. Test driving, picking out colors and options, and then eventually driving off from the dealership with a brand new vehicle makes for an exhilarating event. You might want to show off your car, keep it clean, and start noticing others who have made similar choices. Some dealers even put a big wrapping present bow on your vehicle and do a photo shoot before you leave.

Car Ownership—The Reality

When reality sets in, though, it's essential to follow the manufacturer's recommended service intervals and keep records to maintain your warranty, since improper care voids most warranties. A light comes on to remind you of scheduled service, you typically get a barrage of messages from the dealership, and sometimes manufacturers issue recalls to mitigate issues when defective parts are discovered.

Automating Infrastructure Provisioning and Maintenance

Likewise, for infrastructure provisioning in any private or public cloud environment, virtual machines (VMs) are issued brand new, and, just like a car, things need to be updated over time to run smoothly.

For a car, you have no choice but to manually schedule the vehicle for maintenance and take the car "offline," so to speak, to get the work done. That's not the case with infrastructure: you can automate the whole process, and if you don't, you're stuck with a dreaded manual process that is inefficient and time-consuming.

Just recently, while getting down to work on something important on my laptop, I was prompted to install a software update. Thinking the update might be important too, I lost thirty minutes waiting for it to complete. Not a big deal, because there's plenty for me to do, but can you imagine a similar scenario playing out in a production environment with five hundred to tens of thousands of VMs?

Improving the efficiency of deploying and managing your VMs can have a huge impact when you're scaling to large environments. Saving thirty minutes on any task would be a huge savings for an enterprise. Most would argue that even saving five or ten minutes on a process that affects many end users is certainly worth it.

Having complete cloud control means that you've implemented a solution, like CloudBolt, to automate the process of keeping your infrastructure up to date. Instead of addressing a backlog of IT-requested updates from end users, you can have a structure in place to automatically—or through an on-demand process—check for updates and then roll them out to any cloud-managed instance of those resources.

Two Stages

There are two stages to consider in keeping infrastructure resources up to date. The first stage is the initial provisioning, which might need domain-specific configuration and the proper networking information. You'll also need to consider tagging for billing codes and for identifying business units or teams, so there's proper accounting of who's spending what and where for chargeback/showback purposes. Everything that is done with the VM is easily tracked, much like how your vehicle's VIN is a tag that can be used to look up a lot of information about your car.

If the operating system you are deploying to the VM needs an update between major releases, you can build that into the orchestration process before it goes live to the end user. You can add security applications and specific agents for logging or for your enterprise-specific monitoring tools. You'd also likely use a hostname template that automatically and correctly names the host, which can then help any integrated solution in your enterprise with inventory and control of resources. The VIN comparison applies here as well, perhaps even more tightly than tagging in general.
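A hostname template can be as simple as a format string filled in at provisioning time. This is a hypothetical example (the template fields are illustrative, not a CloudBolt-specific syntax):

```python
def render_hostname(template, **values):
    """Fill a hostname template like '{env}-{role}-{seq:03d}' and
    normalize to lowercase for DNS friendliness."""
    return template.format(**values).lower()

# Environment, role, and a zero-padded sequence number, so inventory
# tools can parse ownership straight from the name.
name = render_hostname("{env}-{role}-{seq:03d}", env="prod", role="web", seq=7)
# name == "prod-web-007"
```

Because every host is named by the same rule, downstream systems can rely on the name itself to identify environment and role, much like decoding a VIN.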

The second stage occurs after the infrastructure is running and before decommissioning. There will be times you'd like to patch software that you installed during the initial build of resources. You might need to push settings to a group of VMs and then restart services for the settings to take effect, and you can do all of that remotely instead of through a tedious manual process. As you manage the resources, you'll have windows of time when updates should occur, and you can implement power schedules so VMs run only when they're needed, along with expiration dates for the decommissioning process.

CloudBolt Lifecycle Management

This complete lifecycle control of your VMs, from initial provisioning through ongoing maintenance to retiring the resources, can be easily implemented with CloudBolt. The exhilaration of car buying is certainly not quite the same. But the dreaded manual process of car maintenance is a reminder that doing the same for VMs does not scale well in the enterprise.

To see CloudBolt's complete cloud control capabilities for yourself, request a demo today!

With each release of CloudBolt, we introduce new features, enhance existing ones, and fix issues brought to our attention by our customers. We also want to thank a CloudBolt Champion and recognize their efforts in making us the market-leading hybrid cloud management solution for enterprises and organizations.

This release is named after Ricardo Lupi Nogueira from NOS, a leading media company in Portugal. As a Solutions Architect for NOS, he's explained to us that CloudBolt's environment concept and plugin-based orchestration are what really empower him to do his work; they're flexible enough to let him customize VM provisioning for each client.

In this new release of CloudBolt, we’ve focused on richer connections to cloud environments as many of our customers continue to expand their environments in new and existing private and public clouds. We’ve added new and enhanced support for our most widely used Resource Handlers—AWS, MS Azure, Google Cloud, OpenStack, and VMware.

Here are some of the main highlights:


AWS

Reserved Instances—When EC2 instances running in AWS have a predictable pattern of resource usage over time, they are often good candidates for cost-saving AWS reserved instances. Instead of AWS billing at a pay-as-you-go on-demand rate, reserved instances have a set price that is lower if you pay for them ahead of time, partially or in full. CloudBolt now provides support for recommending reserved instances based on the number of past days of usage you specify, evaluated over your subscription duration. CloudBolt will list the EC2 instance candidates and then provide recommendations based on your current spending and what you can save.
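The underlying arithmetic of such a recommendation is straightforward. This is a simplified sketch with illustrative (not real AWS) rates, and it ignores upfront-payment options and discounting:

```python
def reserved_savings(on_demand_hourly, reserved_hourly, observed_util, term_hours):
    """Estimate savings from reserving an instance with a steady usage
    pattern, versus paying on-demand for the same use.

    observed_util: fraction of the term the instance is expected to run,
    projected from past usage (a reservation bills for the full term).
    """
    on_demand_cost = on_demand_hourly * observed_util * term_hours
    reserved_cost = reserved_hourly * term_hours
    return on_demand_cost - reserved_cost

# Illustrative rates: an instance running ~90% of the time over a
# one-year term (8,760 hours).
savings = reserved_savings(on_demand_hourly=0.10, reserved_hourly=0.062,
                           observed_util=0.90, term_hours=8760)
# on-demand: 0.10 * 0.90 * 8760 = 788.40; reserved: 0.062 * 8760 = 543.12
# savings ~= 245.28
```

Note that if `observed_util` drops low enough, the result goes negative, which is exactly why recommendations are driven by past usage: reservations only pay off for steadily running instances.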

Multiple Region Support

AWS GovCloud and AWS China—Along with our ongoing updates to our main AWS resource handler, we support two special AWS resource handlers and corresponding environments. In prior releases, we supported only one region per resource handler. In this release, these resource handlers can support multiple regions. AWS GovCloud has two regions, and AWS China has several.


Azure

Resizing VMs—This release supports a new server action that resizes Azure virtual machines. For example, your VM size might allow only 3 or 4 data disks when you need more. You can go to the server page for an Azure VM, click Resize VM, and then select a new size, rather than having to build an entirely new VM just to attach more disks.

Scale Sets—Virtual machines that are members of Azure Scale Sets will now be synchronized to CloudBolt. Scale sets in Azure are sets of identical VMs backing services whose capacity needs to scale with demand to run efficiently. They scale up with increased demand and back down as necessary, so they are not running continuously in a ready state.

Google Cloud

Multiple GCP Projects—The release supports synchronizing multiple Google Cloud Platform (GCP) projects using the Google Compute Engine resource handler. You can choose which GCP projects you’d like CloudBolt to manage by clicking the Fetch Projects button from the Projects tab of the resource handler.


OpenStack

Snapshots—You can now take a snapshot of an OpenStack instance in CloudBolt and revert to it at any time. The open-source nature of OpenStack and the customization it often involves can make iterating very tedious without the ability to preserve stable instances to fall back on, rather than rebuilding an instance from scratch. To take a snapshot of an OpenStack server in CloudBolt, click the "Create Snapshot" server action on the server details page. To revert the server to a snapshot, click the "Revert to Snapshot" server action on the same page.


VMware

IPv6—When synchronizing virtual machines from VMware vSphere to CloudBolt servers, both IPv4 and IPv6 addresses are now considered. There are also new options on the Miscellaneous tab that let you select IPv4 or IPv6 addresses for remote scripts, or let the resource handler select the preferred one from vCenter.

Other Cool Stuff

Spelling—In larger deployments, tagging with correctly spelled tag values is critical but often plagued by spelling errors, which make it difficult to manage sets of VMs based on specific tags for business units, billing inquiries, and any number of other factors. From the Tag tab in this new release, you can configure common spelling errors to be mapped to the correct spelling.

When the tags are synchronized to the target environment, the tag values are corrected. This new feature has been added to the AWS, Azure, and VMware resource handlers.
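Conceptually, this kind of correction is a lookup from known misspellings to canonical values, applied before synchronization. Here is a hypothetical sketch (the misspellings and tag keys are made up; this is not CloudBolt's implementation):

```python
# Hypothetical correction map: common misspellings of tag values seen
# in an environment, mapped to the canonical spelling.
CORRECTIONS = {
    "finanace": "finance",
    "finence": "finance",
    "marketting": "marketing",
}

def normalize_tags(tags):
    """Return tags with known misspelled values replaced with their
    canonical spelling before they are synchronized to the cloud."""
    return {key: CORRECTIONS.get(value.strip().lower(), value.strip().lower())
            for key, value in tags.items()}

fixed = normalize_tags({"business_unit": "Finanace", "team": "ops"})
# fixed == {"business_unit": "finance", "team": "ops"}
```

With corrections applied centrally, queries and billing reports grouped by tag value no longer silently miss the misspelled VMs.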

Plugin Code—We've added the ability to revert plugin code to the out-of-the-box version, which saves you from having to copy and paste backups into an external editor. To use this feature, go to a provided plugin or remote script that has edits made to its code and click "Revert Changes."

CloudBolt Release Notes

For more detailed information about all the new features and enhancements, see the Release Notes.

To see CloudBolt 8.7—Nogueira for yourself, request a demo here!