Although Kubernetes has been around for quite some time, the open-source container orchestration platform has recently gained traction with public announcements of support from big vendors like Google Cloud and VMware. The rise of continuous integration and continuous delivery (CI/CD) in software development continues to drive this growing awareness.
As enterprises consider moving more workloads to the cloud, it’s a great time to consider approaches that include containerization orchestrated by Kubernetes instead of just lifting and shifting traditional VMs that have been architected to run on premises in a data center. Kubernetes provides more agility and control of modular computing containers running microservices that can scale with demand.
Here’s a quick summary of how Kubernetes is deployed and managed.
Deploying a Kubernetes Cluster
The first step in getting started with Kubernetes is to deploy at least one controller node and typically two or more worker nodes, usually as virtual machines running Kubernetes. This set of nodes, the cluster, is where all configuration is applied and where containers are deployed and run; the cluster provides the compute capacity on which containers execute.
Major public cloud providers offer managed Kubernetes clusters as a service: Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS). Other vendors provide native Kubernetes cluster configurations and services as well. Once a cluster is set up and running, you can manage the orchestration and configuration of its containers, with Docker typically serving as the container runtime.
Kubernetes Pods and Nodes
The nodes in a Kubernetes cluster act mostly as workers, with a controller (master node) providing the instructions for each node. Each node hosts pods that can change dynamically under the controller's management: they can scale up or down and be started or stopped. As pods are scheduled to start and stop on the worker nodes, they run the containers specified for the current desired state of the cluster. One or more containers can run in a pod, and containers are where applications and services run at scale in a Kubernetes environment.
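As a minimal sketch (the names and image below are placeholders, not from any particular deployment), a pod manifest defines the one or more containers that run together inside that pod:

```yaml
# Minimal pod manifest: one pod running a single example container
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # illustrative name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.21  # placeholder image; any container image works here
      ports:
        - containerPort: 80
```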
Managing a Kubernetes Cluster
Kubernetes clusters are managed using YAML (YAML Ain't Markup Language) files. These are simple text files that declare the desired state of a Kubernetes cluster and can be managed in a central repository outside of the Kubernetes environment. For example, DevOps engineers can keep a repository of YAML files in a GitHub account, and several team members can work on different aspects of the environment to maintain and run containers in the target cluster. The files can be retrieved and applied from the command line or from any Kubernetes management platform.
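For example, a minimal Deployment manifest like the sketch below (the names, image, and replica count are illustrative) declares the desired state for a set of pods; it could be applied from the command line with `kubectl apply -f web-deployment.yaml`.

```yaml
# Declares a desired state of three replicas of the example web container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment    # illustrative name
spec:
  replicas: 3             # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21   # placeholder image; replace with your application
```

Kubernetes then continuously reconciles the cluster toward whatever state the applied manifests declare.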
CloudBolt and Kubernetes
CloudBolt supports Kubernetes orchestration for enterprises in the following ways:
- Create a Kubernetes cluster on Google Cloud Platform
- Deploy a Kubernetes cluster to your virtualization environment
- Connect to any Kubernetes cluster and orchestrate containerization
See how CloudBolt can help you. Request a demo today!
Welcome to this week’s edition of CloudBolt’s Weekly CloudNews!
Last week, our CEO Brian Kelly posted this column in PaymentsSource on how operational holes can cause breaches more than security glitches, and in particular highlighted the recent breach at CapitalOne.
Reminder: Carahsoft will be hosting a webinar featuring CloudBolt and AWS on Thursday, Sept. 5 at 2 p.m. EST. The topic will be on orchestrating AWS with CloudBolt. If you’re among the first 50 to sign up and attend, you’ll receive 100 free AWS credits. Sign up here.
With that, onto this week’s news:
Many throats to choke: For better or worse, multiple clouds are here to stay
Paul Gillin, SiliconAngle, Aug. 25, 2019
“In information technology circles, it’s called “one throat to choke.”
“It’s a metaphor for chief information officers’ preference for concentrating most of their business with a single strategic supplier in each category of application and infrastructure. The approach has a lot of appeal to risk-averse IT organizations, including fewer points of failure, better customer service, bigger discounts and clearer strategic direction.”
David Linthicum, InfoWorld, Aug. 30, 2019
“Multicloud is becoming the de facto standard. Indeed, a solid 84 percent of the respondents in the RightScale report use more than four cloud providers, including both the public and private clouds. (Note, RightScale is now part of Flexera.) However, not only are companies shifting to multicloud, but to more than one public cloud as well. That means using Google, Microsoft, and AWS—two or three providers, typically, and sometimes more.”
Why Red Hat sees Knative as the answer to Kubernetes orchestration
James Sanders, TechRepublic, Sept. 5, 2019
“Containers are where the momentum is, in enterprise computing. Even VMware, the last stalwart of traditional virtual machines, is embracing containerization with their absorption of Pivotal. Revenue for the container software market is anticipated to grow 30% annually from 2018 to 2023—surpassing $1.6 billion—according to a recently published IHS Markit report.
“From a deployment standpoint, containers are still just different enough of a paradigm that adoption can become complicated at scale. While Docker itself is straightforward enough, automating update lifecycles across dozens or hundreds of deployed containers requires some level of automation in order to increase efficiency.”
Beyond data recovery: A CIO’s perspective on digital preservation
Joseph Kraus, CIO, Aug. 30, 2019
“While most IT organizations have taken the time to establish data backup and recovery procedures as part of their overall operations, few consider long term digital preservation as part of data protection planning. Establishing a formal plan to ensure access to critical data over time is becoming increasingly important as the amount of digital information continues to expand and serves as the only record of an organization’s asset.
“Digital preservation is a formal endeavor to ensure the digital information of continuing value remains accessible and usable. It involves planning, resource allocation, and application of preservation methods and technologies. This is done to ensure continued access to reformatted and born-digital content, regardless of the challenges of media failure and technological change.”
See how CloudBolt can help you. Request a demo today!
Agile software development practices have emerged as the preferred way to deliver digital solutions to market. Instead of defined stages of software development, sometimes referred to as “waterfall” approaches, software changes are continuous. We now consider almost any software delivery process as agile as long as it combines development and operations (DevOps) and the releases are frequent.
The DevOps term is often used even when the process looks more like mini-stages compared to the past, or simply means that the developers who write the code are also the ones who deploy it to production. A DevOps engineering team can include software coders, security experts, and IT system admins.
DevOps teams work toward a continuous integration, continuous delivery (CI/CD) pipeline that moves code from “story” to production; the more of the process that is automated, the faster the delivery. Stories are created from customer or user-driven needs, or can be part of a product vision for new capabilities. A story describes the intended behavior and user experience of a software solution once the code goes live in production. Because delivery is continuous, stories can change over time and the code is modified and delivered accordingly.
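As a rough sketch of that automation (using GitHub Actions syntax as one example; the workflow name and test command are assumptions), a pipeline definition kept alongside the code can build and test every change automatically:

```yaml
# Example CI workflow: run the test suite on every push to the repository
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4     # fetch the repository contents
      - name: Run tests
        run: make test                # placeholder; use your project's test command
```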
Besides coding expertise, DevOps engineers use many IT tools that help on both the infrastructure provisioning side and the software coding side. In this post, we’ll look at some of the key automation and provisioning tools.
Chef
Chef is a configuration management tool built around a cooking metaphor: you deploy “recipes” for application configuration and synchronization, and recipes can be combined and modularized into “cookbooks” that help organize configuration automation with Chef.
Chef deployments consist of three components:
- Clients
- Servers
- Workstations
The servers manage the environment and workstations are used by developers to create and deploy cookbooks. The clients are the managed nodes or targets for configuration.
Puppet
Released in 2005, Puppet has been around the longest of these configuration automation tools. Puppet takes a declarative approach: you define a desired state, and Puppet executes the changes needed to reach it. There are controller and agent components; the agent on a managed client polls a Puppet controller on another node (the master) to see whether it needs to update anything based on what is declared in Puppet modules.
Puppet uses its own configuration language, based on the Nagios file format. The desired state is defined with manifests and modules that are typically shared in a repository such as GitHub. The format also accepts Ruby functions as well as conditional statements and variables.
Ansible
As the most popular configuration management tool among DevOps engineers, Ansible doesn’t require agents to run on client machines. Instead, it connects to the managed nodes over secure shell (SSH) and issues commands directly on the virtual machine. The Ansible management software can be installed on any machine with a supported version of Python, and it’s popularly said that DevOps engineers run Ansible updates straight from their Mac laptops.
For an update to occur, the change must be “pushed” to the managed node, and the approach is procedural, as opposed to Puppet’s declarative approach.
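A minimal playbook sketch (the host group and package name are assumptions) shows this push model: run from the control machine, it connects to the managed nodes over SSH and applies each task in order.

```yaml
# Example playbook: install and start nginx on hosts in the "webservers" group
- name: Configure web servers
  hosts: webservers        # illustrative inventory group
  become: true             # escalate privileges for package/service changes
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
```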
Terraform
For infrastructure provisioning and configuration automation, Terraform builds infrastructure in the most popular public and private cloud environments, and its configurations can be managed with versioning. As an infrastructure-as-code (IaC) DevOps tool, Terraform can build environments from any device it runs on, once connectivity to the target environment is specified.
Terraform plans are declarative: they describe the infrastructure components necessary to run specific apps, or even a whole virtual data center of networking components, and they can be integrated with other configuration and management tools. Terraform determines what has changed and then creates incremental execution plans that are applied as needed to achieve the desired state of infrastructure and applications.
CloudFormation
An IaC DevOps tool for Amazon Web Services (AWS), CloudFormation is a service that helps configure resources such as EC2 instances and Amazon RDS DB instances from a template that describes everything that needs to be provisioned. CloudFormation is specific to AWS and helps users who don’t want to hand-configure the backend complexity that the service handles automatically. CloudFormation itself is free to use for anyone subscribed to AWS; you pay only for the resources it provisions.
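As an illustration (the AMI ID and instance type are placeholders), a minimal CloudFormation template in YAML describes a single EC2 instance to be provisioned:

```yaml
# Minimal CloudFormation template: provisions one EC2 instance
AWSTemplateFormatVersion: "2010-09-09"
Description: Example template that provisions a single EC2 instance
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro             # placeholder instance type
      ImageId: ami-0123456789abcdef0     # placeholder; use a valid AMI for your region
```

A template like this can be launched as a stack from the AWS console or CLI, and CloudFormation creates, updates, or deletes the described resources as the template changes.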
CloudBolt and DevOps Tools for Success
DevOps tools used to configure infrastructure and the applications and services running on them vary by enterprise and often in different teams in the same enterprise. Having the visibility and control of the DevOps tools used to configure resources and the resources themselves gives IT admins using CloudBolt a faster way to find out who’s using what and where in a potentially complex and siloed hodge-podge of technology.
To see for yourself how our platform helps you reach DevOps Success, request a demo today!
Deploying applications at the speed users expect can paradoxically be something of a slog. IT, DevOps, and SecOps organizations may spend hours, days, or months trying to figure out ways to simplify the delivery of applications while providing the safety and security today’s users require.
Using the SovLabs template language that is built into the SovLabs vRA Extensibility Plugin, you can achieve flexible, yet powerful IT processes in vRA without annoying Blueprint sprawl.
As enterprises and agencies continue to expand their cloud initiatives, there’s an increasing number of choices among public and private cloud offerings that integrate with existing on-premises data center virtualization technologies and configuration management tools. The choices range from lifting and shifting basic virtual machines (VMs) from one workload environment to another, to re-architecting legacy solutions altogether with distributed microservices running in containers, alongside new innovation strategies already underway. A hybrid cloud approach continues to be extremely relevant.
Though the move might seem daunting, strategic workload placement in the cloud, done carefully, can dramatically reduce costs, speed up development, and accelerate the return on investment (ROI) for digital initiatives. Gartner identifies now as the crucial time to consider moving to the cloud in its Top 10 Trends Impacting Infrastructure and Operations for 2019.
The main driver for transitioning to the cloud (if one hasn’t already) is the ability to consume data-driven insights from just about anywhere: internet of things (IoT) devices intertwined with real-time, responsive applications that help enterprises and organizations leap ahead of what they’ve done in the past. Distributed cloud-native architectures and edge computing make this all possible.
We now respond faster to consumer data, go to market with right-fit goods and services, and in some cases save lives with time-sensitive insights and action. For example, consider the interactive wearable fitness app Fitbit and health apps like Noom, both enabled by digital innovation using cloud services. They scale directly with usage and across geographic regions because of the cloud.
Cloud Architecture Approaches
As successful approaches to cloud adoption emerge from just about every sector almost everywhere, enterprises and institutions must keep up and hopefully win against their competition. There’s little room for failure when the competition is ready and willing to snatch up unhappy customers or when the failure puts lives at risk for healthcare and some agency initiatives. The US Department of Defense (DoD) has initiated a “Cloud Smart” plan to make sure cloud adoption works without catastrophic consequences.
Every technology and associated vendor has its own spin, from open-source solutions to enterprise-class incumbents like IBM, Dell, HPE, and BMC. Splunk, ServiceNow, and New Relic are other emergent and dominant technology offerings to consider, each in its own best-of-breed swim lane, so to speak. Data centers have large footprints of VMware, with Nutanix and OpenStack emerging as on-premises private clouds.
While getting ramped up and running in the cloud, today’s IT leaders can’t overlook key decisions. As industries shift in this cloud and digital era, no one wants to make mistakes. When weighing one source against another and trying to understand what’s best for the organization, there’s unfortunately no “one-size-fits-all” approach.
The approaches pitched all have merit and common elements. And a good approach is better than no approach, even if it is self-serving to some extent.
Amazon Web Services (AWS) is betting on a “Well-Architected Framework” as its signature methodology for approaching cloud services. As an Advanced Technology Partner in the AWS Partner Network (APN), CloudBolt is ready to integrate the framework’s design principles as needed for any organization on the journey.
Getting Cloud Savvy
AWS cloud architects, developers, and other IT pros have spent a great deal of time and resources to encourage others to follow a framework for success. With input from hundreds of seasoned AWS Solution Architects and their customers, AWS pitches a Well-Architected Framework.
This framework has been evolving for a decade but more formally “as a thing” since 2017.
Here’s a quick introduction to the five pillars of the AWS Well-Architected Framework:
- Operational Excellence
- Security
- Reliability
- Performance Efficiency
- Cost Optimization
AWS provides an incredible amount of detail on each of these five pillars and their design principles on its website. This introductory white paper, published in July 2019, is a good start.
As a quick summary of the pillars in action, check out how CloudBolt can help with specific aspects for each of them.
Operational Excellence—The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
CloudBolt provides inventory classification and tagging for each synchronized resource in AWS as shown in this example:
Security—The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
CloudBolt provides role-based access controls (RBAC) and permissions to implement a least-privilege strategy for end user access. In this example, a Group Admin controls all aspects of the group permissions.
Reliability—The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
In this example, CloudBolt provides a customizable framework of actions that can execute any business logic to meet reliability demands, such as notifying group admins when quota usage exceeds a threshold.
Performance Efficiency—The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
CloudBolt blueprints can provide standard access to efficient builds of automated resources without requiring any specific domain knowledge from the end users.
Cost Optimization—The ability to run systems to deliver business value at the lowest price point.
CloudBolt provides many ways to build in cost optimization with workflows that automate best venue execution for cost as well as power scheduling and the enforcement of expiration dates when necessary. In this example, for AWS, CloudBolt can recommend reserved instances after analyzing spending patterns.
Although the reserved instance example shows only minimal savings in a lab environment, consider the dollar amount as a percentage. The recommended reserved instances would yield $39.37 per year in savings, which works out to roughly a 36% cost reduction for the year (39.37/109.47).
For more detailed information about each of the pillars, check them out here.
CloudBolt and AWS
CloudBolt provides visibility and governance for the workloads you deploy and manage in AWS. CloudBolt’s powerful blueprint orchestration and provisioning capabilities give experts on the AWS platform a way to curate and provide right-fit resources to end users who just need one-click access to infrastructure, without needing to know or understand the backend technology.
For more information on how CloudBolt works alongside AWS, request a demo today!
Problem Description:
The Custom Naming machineRequested workflow runs for a long time and eventually fails with this error: