Just as movie rental stores gave way to streaming services that deliver instant Friday night entertainment in a click, the “anything-as-a-service” (XaaS) model has fundamentally changed how enterprises access and provision technology. Across the IT infrastructure, what were once physical pieces of hardware have slowly but surely become virtualized services, services the enterprise subscribes to and then delivers to users on-demand.

Taking a XaaS approach solves two problems at once. From an enterprise IT perspective, it can simplify deployments and remove layers of on-premises infrastructure that need to be constantly managed and patched. From the user side, especially for DevOps teams that have ongoing and time-constrained needs, XaaS means they can get the resources they need when they need them. 

By solving both these problems in a cost-efficient, controllable manner, XaaS is the future of IT infrastructure management. Automation and self-service are the key to making that future a reality.

The struggle for IT teams is real

IT teams face some very real challenges. Not only are they charged with deploying and maintaining applications across the enterprise, but they are also responsible for maintaining and securing both these applications and the networks they run on. At the same time, IT plays an essential role as a partner to DevOps teams, teams that are themselves under immense pressure to develop, test and get product out the door fast.

Delivering the VMs and other cloud-based resources DevOps teams need has its own inherent challenges and risks. Outdated processes can mean that provisioning takes an agonizingly long time, and this time lag can drive DevOps to develop their own shadow IT workarounds. What’s worse, when these processes are manual, non-standardized or dependent on tribal knowledge, they can be error-prone and generate avoidable security vulnerabilities.

Enabling resource delivery through automation

An XaaS approach addresses these challenges for both IT and the teams they support. From an IT perspective, this begins with the standardization and automation of provisioning processes. The key here is blueprints. Blueprints are templates that enable the standardized, compliant, and repeatable delivery of resources to business users. Through blueprints, IT can execute actions – or executable scripts – to safely automate and orchestrate complex processes and streamline resource lifecycle management. And since blueprints contain set rules and builds sanctioned by IT, they’ll always be able to accurately and securely deliver the right resources.

Additionally, administrators can use blueprints to define conditional logic that periodically executes and applies remediation steps. In this way, automated rules can be implemented to take corrective actions if there are issues in a particular environment, for example, shutting down unused resources that drain costs. 
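To make this concrete, a remediation rule of this kind can be sketched in a few lines of Python. The resource records and field names here are hypothetical, not a real blueprint API:

```python
from datetime import datetime, timedelta

def find_idle_resources(resources, now, idle_days=7):
    """Return running resources with no activity in the last `idle_days`."""
    cutoff = now - timedelta(days=idle_days)
    return [r for r in resources
            if r["state"] == "running" and r["last_active"] < cutoff]

def remediate(resources, now):
    """Corrective action: shut down idle resources that drain costs."""
    for r in find_idle_resources(resources, now):
        r["state"] = "stopped"  # a real platform would run an orchestration action here
    return resources
```

Run on a schedule, a check like this is all a "shut down unused resources" rule amounts to; the point of blueprints is that the rule ships alongside the provisioned resource.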

In other words, not only do blueprints automate resource provisioning, they also remove the manual pain of managing and patching layer upon layer of on-premises infrastructure.

Enabling access through a service catalog

Standardizing and automating resource delivery is just one side of the XaaS equation. The other side focuses on providing DevOps teams easy access to provisioned resources. In the XaaS approach, this happens through a self-service catalog.

The trick here is to make accessing resources as easy as ordering a book off Amazon. Users shouldn’t need to know anything about the idiosyncrasies of configuring this or that cloud. They should simply be able to access the catalog and get what they want. Frankly, providing this sort of easy access to resources is the surest way to discourage users from going rogue.

Self-service doesn’t mean that IT relinquishes control over resource allocation. In fact, it’s just the opposite. By using blueprints to provision the resources available through the self-service catalog, IT ensures that the provisioned resources come with the necessary guardrails built in. That is, the combination of automation and self-service allows IT to accurately and securely deliver the right resources while maintaining proper governance and cost controls.
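As a minimal illustration in Python, a self-service request can be validated against the guardrails its blueprint defines. The catalog item, limits, and function names below are hypothetical, not a product API:

```python
# Hypothetical guardrails a blueprint might attach to a catalog item.
BLUEPRINT_LIMITS = {
    "dev-vm": {"max_cpus": 4, "max_memory_gb": 16},
}

def request_resource(item, cpus, memory_gb):
    """Fulfill a self-service request only if it fits the blueprint's limits."""
    limits = BLUEPRINT_LIMITS.get(item)
    if limits is None:
        raise ValueError(f"{item!r} is not in the catalog")
    if cpus > limits["max_cpus"] or memory_gb > limits["max_memory_gb"]:
        raise ValueError("request exceeds blueprint guardrails")
    return {"item": item, "cpus": cpus, "memory_gb": memory_gb,
            "status": "provisioned"}
```

The user simply asks for what they want; the guardrails travel with the blueprint, so IT never has to vet each request by hand.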

The Anything-as-a-Service Imperative

The idea behind XaaS is that anything in the IT stack can be turned into and delivered as a service. For IT, this is a no-brainer, because the approach makes it possible to deliver technology more rapidly while decreasing overhead and mitigating risk. For DevOps, XaaS gives them exactly what they want: instantaneous access to the resources agile teams need to get products out the door. 

Automation and self-service are the two key components of standing up an XaaS model. Automation reduces the inefficiency of manual processes while providing total control over resource allocation. Self-service streamlines access to resources, keeping users happy and supporting the demands of continuous innovation. 

So, what’s stopping your team from making XaaS a reality in your organization?

Learn how CloudBolt can help transform your organization’s self-service IT journey.

The movement to the cloud has been the most significant megatrend in the IT world for years now. Its allure comes from promises of simplified management and infrastructure and, hopefully, reduced costs.

Despite this promise, cloud spending has only increased. Research firm IDC expects spending on public cloud services to more than double worldwide from 2019 to 2023. Without strong cloud cost management in place, those hopes of lower costs can drift away.

What organizations don’t often account for is the sprawl that can occur when resources aren’t properly purchased or provisioned. Without proper processes, consumers of IT resources can run up a big bill if spending goes unchecked.

A cloud strategy without cloud cost management best practices is doomed to fail and won’t provide the business value expected from a migration.

How Cloud Computing Costs Can Spiral

Moving compute and storage resources to the cloud can seem great at first. Teams can more readily access the resources they need without having to manage on-premises infrastructure. 

But things start to get tricky pretty quickly. Cloud providers will often try to sell organizations add-ons to their deployments, upping the overall cost at the end of the day. And many organizations are utilizing multi-cloud environments where resources are coming from several public cloud providers. This doesn’t even account for private cloud/on-premises resources. 

There’s also the matter of shifting priorities or needs for an organization. When business priorities change, cloud priorities can change as well. This can sometimes lead to resources that keep running while not being utilized. Without a cloud cost management framework in place, your organization will be on the hook for paying for them. 

Organizations may also be unaware of discounts or offers from cloud providers that, within a cloud cost management framework, could significantly cut overall resource costs.

Getting the Most out of Cloud Cost Management

The best way to avoid these high costs is to use a centralized platform for implementing all of your cloud management strategies in one place. That includes setting quotas, ensuring workloads run at the lowest-cost sites, and decommissioning workloads during off-peak times. This way, efforts aren’t duplicated and stakeholders have one place to go for those resources. Bulk ordering can be set up so resources for common groups of users and teams are easy to acquire while costs stay in check.
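As a toy example of one such policy, the Python sketch below picks the lowest-cost site that still fits within a remaining spending quota. The site names and hourly rates are invented for illustration:

```python
# Hypothetical per-hour compute rates for each site (USD).
SITE_COSTS = {"aws-us-east": 0.096, "azure-east": 0.090, "on-prem": 0.120}

def cheapest_site(required_hours, budget_remaining):
    """Return the lowest-cost site whose projected spend fits the quota."""
    for site, rate in sorted(SITE_COSTS.items(), key=lambda kv: kv[1]):
        if rate * required_hours <= budget_remaining:
            return site
    return None  # nothing fits the quota; the request would need approval
```

A centralized platform can apply a rule like this to every request, rather than relying on each team to check prices on its own.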

Having one central location for these resources also significantly aids cloud cost management tooling: IT can centrally manage that location to ensure users get the resources they need when they need them, and turn off or reallocate those resources when they don’t.

It’s also critical to account for economies of scale when it comes to cloud cost management. Things can change quickly for organizations of all sizes: they may be acquired, acquire other entities, or grow exponentially with new offices and branches spread across the globe.

With this in mind, using a platform that lets organizations configure IT resources through “blueprints” provides an easily repeatable, cost-effective format in which configurations can simply be adjusted as requirements change.

Taking this approach to cloud cost management should result in your organization realizing those promised benefits of the cloud. Learn how CloudBolt can help.

Organizations looking to use the power of hybrid cloud environments are faced with a significant task: picking the right cloud management system. Without one in place, IT and DevOps teams will face the consequences related to unorganized and ad-hoc deployment of cloud resources.

The foundation of any worthwhile cloud management system is the right set of capabilities to accomplish business and technological objectives. In this post, we’ll explore the top cloud management tools necessary for any system and why they’re critical to your success.

Provisioning and Orchestration

The right provisioning and orchestration capabilities allow teams to create new resources as requested by those who need them, and to modify or delete those resources as priorities change.

In addition, a cloud management system should have tools enabling orchestration for provisioning workflows and management operations for hybrid cloud resources.

These cloud management tools are critical because, without them, organizations may face significant backlogs of resource requests and have no centralized way to fulfill them. With robust provisioning and orchestration tools, the headaches of getting resources where they need to be, when they are needed, become a thing of the past.

Cost Management

There’s no question that utilizing cloud resources from both public and private sources can get costly. These cost issues can get even worse without the right capabilities to manage them.

As such, a cloud management system needs features that allow organizations to keep track of what’s being spent on cloud resources and ensure that spending is being utilized effectively. These tools allow unused or little-used resources to be reapportioned as needs of the business evolve.

Any cloud management tool on the market should also provide the ability to keep resource capacity in line with the demand actually being placed on the specific workloads being tracked.

Inventory Management and Monitoring

Cloud management systems need inventory capabilities that allow for the discovery and maintenance of all the cloud resources that exist within a specific enterprise. Without that holistic view, managing those resources can get hectic fast.

Inventory management capabilities should also allow for monitoring changes to those resources and managing new configurations.

In addition to keeping track of inventory, monitoring that inventory is one of the most important cloud management capabilities.

These should provide metrics on availability and performance of resources as well as intelligence for incident prevention and resolution.

This intelligence gives stakeholders the ability to report the performance of the cloud asset management system back to the business and prove its effectiveness.

Self-Service IT

Lastly, any strong cloud management system has to let users request resources through an easy-to-use, easy-to-understand portal managed by a central IT function. That’s where self-service IT comes into the picture.

Self-service IT cuts down on unnecessary and wasteful back-and-forths between IT and users regarding provisioning of resources. That makes both parties more productive at the end of the day and helps drive greater business value.

Your organization may have different needs when it comes to provisioning and innovating on hybrid cloud resources. But for an effective hybrid cloud management system, tools around provisioning, orchestration, cost, inventory, monitoring and self-service IT cannot be discounted.

See the cloud management system you’ve been waiting for in action. Sign up for a demo of CloudBolt.

Although Kubernetes has been around for quite some time, the open-source container orchestration platform has recently gained traction with public announcements of support from big vendors like Google Cloud and VMware. The rise of continuous integration, continuous delivery (CI/CD) software development is driving much of this growing awareness.

As enterprises consider moving more workloads to the cloud, it’s a great time to consider approaches that include containerization orchestrated by Kubernetes instead of just lifting and shifting traditional VMs that have been architected to run on premises in a data center. Kubernetes provides more agility and control of modular computing containers running microservices that can scale with demand. 

Here’s a quick summary of how Kubernetes is deployed and managed.

Deploying a Kubernetes Cluster

The first step in getting started with Kubernetes is to deploy at least one controller node and, typically, two or more worker nodes as virtual machines running Kubernetes. This set of nodes forms a cluster, which is where all configuration and containerized workloads will be deployed and run. The cluster provides the compute capacity on which containers run.

Major public cloud providers offer managed Kubernetes clusters as a service: Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS). In addition, other vendors provide native Kubernetes cluster configurations and services. Once a cluster is set up and running, you can manage the orchestration and configuration of containers, typically with Docker as the container runtime.

Kubernetes Pods and Nodes

The nodes in a Kubernetes cluster act mostly as workers, with a controller (master node) providing instructions to each node. Each node hosts pods that can change dynamically under the controller's management: they can scale up or down, and be started and stopped. As pods are scheduled to start and stop on the cluster's worker nodes, they run the containers specified for the cluster's current desired state. One or more containers can run in a pod, and containers are where applications and services run at scale in the Kubernetes environment.

Managing a Kubernetes Cluster

Kubernetes clusters are managed using YAML (“YAML Ain’t Markup Language”) files. These are simple text files that declare the desired state of a Kubernetes cluster and can be managed in a central repository outside the Kubernetes environment. For example, DevOps engineers can keep a repository of YAML files in a GitHub account, with several team members working on different aspects of the environment to maintain and run containers in the target cluster. The files can be retrieved and applied from a command line or from any Kubernetes management platform.
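For example, a minimal Deployment manifest of the kind such a repository might hold could look like this (the names, image, and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # illustrative name
spec:
  replicas: 3                 # desired pod count; the controller maintains this state
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.17   # container image each pod runs
```

Applying it, for instance with `kubectl apply -f deployment.yaml`, asks the cluster to converge on three running pods of this container.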

CloudBolt and Kubernetes

CloudBolt supports Kubernetes orchestration for enterprises in the following ways:

See also: Orchestrating Docker Containerization with Kubernetes

See how CloudBolt can help you. Request a demo today!

CloudBolt sponsored and attended VMworld 2019 in San Francisco (with 12 CloudBolters in attendance!) and it was an energy-packed event. I’ll summarize some of the news and talk from the conference here.

VMware’s main announcements

Last week, VMware announced the release of:

Analysis of VMware’s Direction 

Shift of focus from IT to developers

VMware has traditionally focused on selling products and services to IT departments, but their messaging and product direction are steering toward selling to developers. This is likely in response to VMware’s observation that the locus of decision-making and the budget for technology are shifting toward development teams over time. 

I even got to play some Robotron on the floor of VMworld 2019.

Embracing of containers

With both Project Pacific and Tanzu, it’s clear that VMware is now betting on containers and does not want to miss that train. These two projects will embed a container runtime in vSphere and provide a Kubernetes cluster management tool (playing in the same space as Google’s Anthos).

Emphasis on VMC on AWS

VMC on AWS is a key part of the hybrid cloud story VMware is delivering. The idea is to keep running workloads on VMware ESXi, and to keep managing them with vCenter, but with the servers running in data centers owned by AWS rather than by the customer. This is appealing because it allows large organizations to swap capex spend for opex, and to do so without major application changes to run on modern public cloud services and/or containers. The possibility remains that organizations could later move applications off VMC on AWS to plain AWS or a different cloud, so it will be interesting to see how VMware handles that long-term.

vRA 8 Announced

VMware officially announced vRealize Automation 8, the rewrite of their Cloud Management Platform. We talked with a lot of vRA 7.x customers who are wondering what the path forward looks like for them. vRA 8.0 will have some good new features (like more agnostic public cloud support, more flexible blueprints than vRA 7, and potentially enhanced extensibility), but it will have only a subset of the features of vRA 7. It remains to be seen when upgrades from vRA 7 to 8 will be supported, or how difficult they will be when that day comes. It’s also unclear whether old-style extensions will be supported.

What this Means for CloudBolt

Since 2011, CloudBolt has been focusing on meeting the needs of both:

  1. Empowering developers with a simple self-service way to obtain the resources they need to do their job AND
  2. Turning the central IT team into superheroes, giving them unmatched visibility and the ability to orchestrate and automate everything

At CloudBolt, we are passionate about the themes VMware brought up. Here’s how CloudBolt stacks up in these themes:

Theme                              | CloudBolt’s Support
-----------------------------------|--------------------
Empowering developers & IT admins  | ✅ Since 2011
Easy upgrades of CloudBolt         | ✅ Since 2012
Agnostic hybrid cloud support      | ✅ Since 2013
Infinite and easy extensibility    | ✅ Since 2013
Solid support for GCP and Azure    | ✅ Since 2014 (plus 6 other public clouds in the ensuing years)
Flexible Blueprints                | ✅ Since 2014
Kubernetes support                 | ✅ Since 2015, and getting deeper in every CloudBolt version
VMC on AWS support                 | Coming in CloudBolt 9.1 in December

In summary, CloudBolt has been focused on the themes that matter most to IT and developers, and the product has matured over many years of releases and of managing production environments for Global 2000 companies.

What’s Next

We look forward to heading back to VMworld in 2020, and in the meantime you can find us at upcoming VMware User Group (VMUG) gatherings in Boston (9/25), Atlanta (10/2) and Phoenix (10/30) this fall. Stop by to chat with us!

Want to see how CloudBolt stacks up with vRA? Download our datasheet today.

Welcome to this week’s edition of CloudBolt’s Weekly CloudNews!

Last week, our CEO Brian Kelly posted this column in PaymentsSource on how operational holes can cause breaches more than security glitches, and in particular highlighted the recent breach at Capital One.

Reminder: Carahsoft will be hosting a webinar featuring CloudBolt and AWS on Thursday, Sept. 5 at 2 p.m. EST. The topic will be on orchestrating AWS with CloudBolt. If you’re among the first 50 to sign up and attend, you’ll receive 100 free AWS credits. Sign up here.

With that, onto this week’s news:

Many throats to choke: For better or worse, multiple clouds are here to stay

Paul Gillin, SiliconAngle, Aug. 25, 2019

“In information technology circles, it’s called ‘one throat to choke.’

“It’s a metaphor for chief information officers’ preference for concentrating most of their business with a single strategic supplier in each category of application and infrastructure. The approach has a lot of appeal to risk-averse IT organizations, including fewer points of failure, better customer service, bigger discounts and clearer strategic direction.”

What Multicloud Really Costs

David Linthicum, InfoWorld, Aug. 30, 2019

“Multicloud is becoming the de facto standard. Indeed, a solid 84 percent of the respondents in the RightScale report use more than four cloud providers, including both the public and private clouds. (Note, RightScale is now part of Flexera.) However, not only are companies shifting to multicloud, but to more than one public cloud as well. That means using Google, Microsoft, and AWS—two or three providers, typically, and sometimes more.”

Why Red Hat sees Knative as the answer to Kubernetes orchestration

James Sanders, TechRepublic, Sept. 5, 2019

“Containers are where the momentum is, in enterprise computing. Even VMware, the last stalwart of traditional virtual machines, is embracing containerization with their absorption of Pivotal. Revenue for the container software market is anticipated to grow 30% annually from 2018 to 2023—surpassing $1.6 billion—according to a recently published IHS Markit report.

“From a deployment standpoint, containers are still just different enough of a paradigm that adoption can become complicated at scale. While Docker itself is straightforward enough, automating update lifecycles across dozens or hundreds of deployed containers requires some level of automation in order to increase efficiency.”

Beyond data recovery: A CIO’s perspective on digital preservation

Joseph Kraus, CIO, Aug. 30, 2019

“While most IT organizations have taken the time to establish data backup and recovery procedures as part of their overall operations, few consider long term digital preservation as part of data protection planning. Establishing a formal plan to ensure access to critical data over time is becoming increasingly important as the amount of digital information continues to expand and serves as the only record of an organization’s asset.

“Digital preservation is a formal endeavor to ensure the digital information of continuing value remains accessible and usable. It involves planning, resource allocation, and application of preservation methods and technologies.  This is done to ensure continued access to reformatted and born-digital content, regardless of the challenges of media failure and technological change.”

See how CloudBolt can help you. Request a demo today!

Agile software development practices have emerged as the preferred way to deliver digital solutions to market. Instead of defined stages of software development, sometimes referred to as “waterfall” approaches, software changes are continuous. We now consider almost any software delivery process as agile as long as it combines development and operations (DevOps) and the releases are frequent. 

The DevOps term is often applied even when the process looks more like mini-stages compared with the past, or simply means that the developers who write the code are also the ones who deploy it to production. A DevOps engineering team can include software coders, security experts, and IT system admins.

DevOps teams aim for a continuous integration, continuous delivery (CI/CD) pipeline of code from “story” to production; the more of the process that is automated, the faster the delivery. Stories are created from customer- or user-driven needs, or as part of a product vision for new capabilities. A story describes the intended behavior and user experience of a software solution once the code goes live in production. Because delivery is continuous, stories can change over time, and the code is modified and delivered accordingly.

Besides coding expertise, DevOps engineers use many IT tools that help on the infrastructure-provisioning side as well as the software-coding side. In this post, we’ll look at some of the key automation and provisioning tools.

Chef

Chef borrows its vocabulary from cooking: as a configuration management tool, it deploys “recipes” for application configuration and synchronization. Recipes can be combined and modularized into “cookbooks” to help organize the management of configuration automation using Chef.

Chef deployments consist of three components: servers, workstations, and clients.

The servers manage the environment, workstations are used by developers to create and deploy cookbooks, and the clients are the managed nodes, the targets of configuration.
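A minimal Chef recipe might look like the following sketch (the package, service, and template names are illustrative, not from a real cookbook):

```ruby
# Converge the node to this declared state: nginx installed, configured, and running.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'             # template file shipped in the cookbook
  notifies :reload, 'service[nginx]'  # reload nginx when the config changes
end

service 'nginx' do
  action [:enable, :start]
end
```

A workstation would bundle this recipe into a cookbook and upload it to the server, which then distributes it to the managed clients.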

Puppet

Released in 2005, Puppet has been around the longest of these configuration automation tools. Puppet uses a declarative approach: you define a desired state, and Puppet executes the changes needed to reach it. There are controller and agent components; the agent on a managed client polls a Puppet controller on another node (the master) to see whether it needs to update anything based on what is declared in Puppet modules.

Puppet uses its own configuration language, originally based on the Nagios file format. The state of the configuration is defined with manifests and modules, which are typically shared in a repository such as GitHub. The format also accepts Ruby functions as well as conditional statements and variables.
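A small manifest illustrates the declarative style (the package and service names are illustrative):

```puppet
# Declare the desired state; the agent enforces it on every run.
package { 'ntp':
  ensure => installed,
}

service { 'ntpd':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],   # install the package before managing the service
}
```

Nothing here says *how* to install or start anything; Puppet works out the steps needed to make the node match the declaration.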

Ansible  

Ansible, the most popular configuration management tool among DevOps engineers, doesn’t require agents to run on client machines. Instead, it connects over secure shell (SSH) directly to the managed nodes and issues commands on the virtual machine itself. The Ansible management software can be installed on any machine that supports Python 2 or later, and it’s a popular notion that DevOps engineers run Ansible updates from their Mac laptops.

For an update to occur, the change must be “pushed” to the managed node, and the approach is procedural, as opposed to the declarative approach of Puppet.
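A minimal playbook of the kind pushed to managed nodes might look like this (the host group and package names are illustrative):

```yaml
# Pushed over SSH to every host in the "web" inventory group; no agent needed.
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled
      service:
        name: nginx
        state: started
        enabled: true
```

Each task runs in order on every target host, which is what makes the model procedural rather than declarative.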

Terraform

For infrastructure provisioning and configuration automation, Terraform builds infrastructure in the most popular public and private cloud environments and can be managed with versioning. As an infrastructure-as-code (IaC) DevOps tool, Terraform can build environments from any device running it; connectivity to an environment is specified as a resource.

Terraform plans are declarative; they describe the infrastructure components needed to run specific apps, or a whole virtual data center of networking components, and can be integrated with other configuration and management tools. Terraform determines what has changed and then creates incremental execution plans that are applied as necessary to achieve the desired state of infrastructure and applications.
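A small Terraform configuration illustrates the declarative plan style (the region, AMI ID, and names are placeholders, not working values):

```hcl
# Declares one EC2 instance; `terraform plan` shows the incremental changes
# needed to reach this state, and `terraform apply` executes them.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
```

Re-running the same plan is safe: if the instance already exists in this state, Terraform makes no changes.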

CloudFormation

As an IaC DevOps tool for Amazon Web Services (AWS), CloudFormation is a service that helps configure resources such as EC2 instances and Amazon RDS DB instances from a template describing everything that needs to be provisioned. CloudFormation is specific to AWS and helps users who don’t want to hand-configure the backend complexity that the service handles automatically. CloudFormation itself is free to use for anyone subscribed to AWS; you pay only for the resources it provisions.
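A minimal CloudFormation template illustrates the same idea (the AMI ID is a placeholder, not a working value):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative template that provisions a single EC2 instance.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
```

Handing this template to the CloudFormation service creates the whole stack as one unit, which can later be updated or deleted together.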

CloudBolt and DevOps Tools for Success

DevOps tools used to configure infrastructure and the applications and services running on them vary by enterprise and often in different teams in the same enterprise. Having the visibility and control of the DevOps tools used to configure resources and the resources themselves gives IT admins using CloudBolt a faster way to find out who’s using what and where in a potentially complex and siloed hodge-podge of technology.

To see for yourself how our platform helps you reach DevOps Success, request a demo today!

Deploying applications at the speed users expect can, paradoxically, be something of a slog. IT, DevOps, and SecOps organizations may spend hours, days, or even months trying to simplify the delivery of applications while providing the safety and security today’s users require.
