Agile software development practices have emerged as the preferred way to deliver digital solutions to market. Instead of moving through fixed stages of software development, sometimes referred to as the “waterfall” approach, software changes continuously. We now consider almost any software delivery process agile as long as it combines development and operations (DevOps) and releases frequently.

The DevOps term is often used loosely, whether a team has merely shrunk the old stages into mini-stages or has simply made the developers who write the code the ones who also deploy it in production. A DevOps engineering team can include software coders, security experts, and IT system admins.

DevOps teams have the objective of a continuous integration, continuous delivery (CI/CD) pipeline that carries code from “story” to production; the more of the process that is automated, the faster the delivery. Stories are created from customer- or user-driven needs, or can be part of a product vision for new capabilities. A story describes the intended behavior and user experience of a software solution once the code goes live in production. Because delivery is continuous, stories can change over time, and the code is modified and delivered accordingly.
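
For a concrete sense of what that automation looks like, here is a minimal pipeline sketch in GitHub Actions syntax, one CI/CD tool among many; the make targets are hypothetical placeholders for a project’s own build, test, and deploy steps.

```yaml
# .github/workflows/ci.yml -- a minimal CI/CD pipeline sketch
# (GitHub Actions syntax; "make test" and "make deploy" are
# hypothetical placeholders for your own steps)
name: ci
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # fetch the code for this commit
      - run: make test             # continuous integration: every push is built and tested
      - run: make deploy           # continuous delivery: passing builds ship automatically
```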

Besides coding expertise, DevOps engineers use many IT tools that help on the infrastructure provisioning side as well as on the software coding side. In this post, we’ll look at some of the key automation and provisioning tools.

Chef

A configuration management tool built around a cooking metaphor, Chef provides a way to deploy “recipes” for application configuration and synchronization. Recipes can be combined and modularized into “cookbooks” that help organize configuration automation in Chef.

Chef deployments consist of three components:

- Servers, which manage the environment
- Workstations, which developers use to create and deploy cookbooks
- Clients, the managed nodes that are the targets for configuration
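
To make the metaphor concrete, here is a minimal sketch of a recipe from a hypothetical cookbook that installs and runs NGINX; the file path and template name are illustrative.

```ruby
# recipes/default.rb -- a minimal Chef recipe sketch (hypothetical cookbook)

# Install the nginx package from the node's package manager
package 'nginx'

# Render the config from a template shipped in the cookbook and
# reload the service whenever the rendered file changes
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]'
end

# Make sure nginx starts at boot and is running now
service 'nginx' do
  action [:enable, :start]
end
```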

Puppet

Released in 2005, Puppet has been around the longest of these configuration automation tools. Puppet uses a declarative approach: you define a desired state, and Puppet executes the changes needed to reach it. There are controller and agent components. The agent on a managed client polls a Puppet controller on another node (the master) to see whether it needs to update anything based on what is declared in Puppet modules.

Puppet uses its own configuration language, based on the Nagios file format. The state of the configuration can be defined in manifests and modules that are typically shared in a repository such as GitHub. The format also accepts Ruby functions along with conditional statements and variables.
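
As a minimal sketch, a manifest declaring that NGINX should be installed and running might look like this; the resource names are illustrative.

```puppet
# site.pp -- a minimal sketch of a declarative Puppet manifest
# (you declare the desired state; the agent converges the node to it)

package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # install the package before managing the service
}
```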

Ansible  

The most popular configuration management tool among DevOps engineers, Ansible doesn’t require agents to run on client machines. Instead, it connects over secure shell (SSH) directly to the managed nodes and issues commands on the virtual machine. The Ansible management software can be installed on any machine that supports Python 2 or later, and it’s a popular notion that DevOps engineers run Ansible updates from their Mac laptops.

For an update to occur, the change must be “pushed” to the managed node, and the approach is procedural, as opposed to the declarative approach of Puppet.
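
A minimal playbook sketch shows the push model; the “web” inventory group is hypothetical, and the apt module assumes Debian or Ubuntu targets.

```yaml
# playbook.yml -- a minimal Ansible sketch, pushed over SSH to the
# hosts in a hypothetical "web" inventory group (no agent required)
- hosts: web
  become: true                 # escalate privileges on the managed node
  tasks:
    - name: Install nginx
      apt:                     # assumes Debian/Ubuntu targets
        name: nginx
        state: present

    - name: Ensure nginx is enabled and running
      service:
        name: nginx
        state: started
        enabled: true
```

Running `ansible-playbook -i inventory playbook.yml` pushes the tasks, in order, to every host in the group.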

Terraform

For infrastructure provisioning and configuration automation, Terraform builds infrastructure in the most popular public and private cloud environments and manages it with versioning. As an infrastructure-as-code (IaC) DevOps tool, Terraform can build environments from any device it runs on; connectivity to each target environment is specified through a provider.

Terraform plans are declarative, describing the infrastructure components necessary to run specific apps, or even a whole virtual data center of networking components, and they integrate with other configuration and management tools. Terraform determines what has changed and creates incremental execution plans that are applied as necessary to achieve the desired state of infrastructure and applications.
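
As a minimal sketch, a plan that declares a single virtual machine might look like this; AWS is assumed, and the AMI ID is a hypothetical placeholder.

```hcl
# main.tf -- a minimal sketch of a declarative Terraform plan
# (AWS assumed; the AMI ID is a hypothetical placeholder)

provider "aws" {
  region = "us-east-1"        # connectivity to the target environment
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Running `terraform plan` previews the incremental changes, and `terraform apply` executes them to reach the declared state.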

CloudFormation

An IaC DevOps tool for Amazon Web Services (AWS), CloudFormation is a service that provisions resources such as EC2 instances and Amazon RDS DB instances from a template that describes everything that needs to be provisioned. CloudFormation is specific to AWS and helps AWS users who don’t want to hand-configure the backend complexity that CloudFormation handles automatically as a service. CloudFormation itself is free for anyone subscribed to AWS; you pay only for the resources it provisions.
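
As a minimal sketch, a template that provisions a single EC2 instance might look like this; the AMI ID is a hypothetical placeholder.

```yaml
# template.yml -- a minimal sketch of a CloudFormation template
# (the AMI ID is a hypothetical placeholder)
AWSTemplateFormatVersion: '2010-09-09'
Description: Provision a single EC2 instance
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0
      InstanceType: t3.micro
```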

CloudBolt and DevOps Tools for Success

DevOps tools used to configure infrastructure, and the applications and services running on it, vary by enterprise and often across teams within the same enterprise. Visibility into and control over the DevOps tools used to configure resources, and over the resources themselves, give IT admins using CloudBolt a faster way to find out who’s using what, and where, in a potentially complex and siloed hodgepodge of technology.

To see for yourself how our platform helps you reach DevOps Success, request a demo today!

Deploying applications at the speed users expect can paradoxically be something of a slog. IT, DevOps, and SecOps organizations may spend hours, days, or months trying to figure out ways to simplify the delivery of applications while providing the safety and security that today’s users require.


VMworld 2019 is upon us, with thousands of IT professionals descending on the Moscone Center in San Francisco from Aug. 25 to 29. The sheer volume of breakout sessions, among other things of interest at the show, can be overwhelming, especially if you’re there to learn about one or two specific subjects.

If you’re looking to get your hybrid cloud questions answered, we’ve picked out five sessions for the week that you might want to consider checking out. We’d advise you to check out VMworld’s program for up-to-date information on where these sessions are taking place as well as potential changes to times and speakers.

Without further ado, here are our picks for the week (all times are US Pacific Time):

Automating Builds and Deployments, aka CI/CD for Dummies

Tim Davis, Cloud Advocate, VMware

Monday, Aug. 26, 5:30 to 6:30 p.m.

Is doing all the building, testing and deployment for your projects manually your best bet, or is automation the way to go? And how do you get there? This session will take a look at continuous integration/continuous delivery (CI/CD) and explore the basic questions around it, including defining what it is, why it exists, how to do it and how to get there with a real-world example.

Best Practices for Cloud Migration and Acceleration

Shane Gibson, TE Manager, Business Applications – Americas, Hitachi Vantara; Joe Horstmann, Solutions Architect (TE), Cloud Solutions – Americas, Hitachi Vantara

Tuesday, Aug. 27, 2 to 3 p.m.

Organizations face a great many challenges in their cloud journeys. In this session, two representatives from Hitachi Vantara will explore best practices for building a comprehensive multi-cloud strategy, including how to keep costs in check, maintain security compliance, and keep business operations flowing during migrations.

Dirty Deeds Done Dirt Cheap – A Guide to BC/DR

Theresa Miller, Principal Technologist, Cohesity

Tuesday, Aug. 27, 4:45 to 5 p.m.

Does your organization have a comprehensive plan for disaster recovery? What about business continuity? In this session, you’ll learn how to avoid the pitfalls associated with costly or underperforming BC/DR solutions. Also, hopefully some AC/DC will be played before the session starts.

Getting Started with Terraform and Ansible

Shri Upadhye, Native Cloud Advocate, VMware

Monday, Aug. 26, 3:30 to 4 p.m.

When it comes to infrastructure as code, everyone interested can use some information or a solid refresher, whether you’ve used it, want to use it or don’t know anything about it. This session will include a demonstration on how Terraform and Ansible work and the impact they could have on your hybrid cloud plans.

Tackling Common Cloud Security Mistakes

Hadar Freehling, Cloud Security Solution Architect, VMware

Monday, Aug. 26, 5:15 to 6 p.m.

Tuesday, Aug. 27, 2:15 to 3 p.m.

If your cloud deployments aren’t configured correctly, you could be putting your systems at security risk. In this session, you’ll hear about common cloud misconfigurations and how you can prevent them from hurting your organization. The session runs twice in case you can’t make one time or the other.

For further information on how best to tackle your hybrid cloud concerns, you can visit CloudBolt at VMworld at Booth 134.



Using the SovLabs template language built into the SovLabs vRA Extensibility Plugin, you can achieve flexible yet powerful IT processes in vRA without annoying Blueprint sprawl.


As enterprises and agencies continue to expand their cloud initiatives, there’s an increasing number of choices among public and private cloud offerings that integrate with existing on-premises data center virtualization technologies and configuration management tools. Choices range from lifting and shifting basic virtual machines (VMs) from one workload environment to another to re-architecting legacy solutions altogether with distributed microservices running in containers, alongside new innovation strategies already underway. A hybrid cloud approach continues to be extremely relevant.

Daunting as it might seem, strategic workload placement in the cloud, done carefully, can dramatically reduce costs, speed up development, and accelerate the return on investment (ROI) for digital initiatives. In its Top 10 Trends Impacting Infrastructure and Operations for 2019, Gartner identifies now as the crucial time to consider moving to the cloud.

The main driver for transitioning to the cloud (for those who haven’t already) is the ability to consume data-driven insights from just about anywhere: internet of things (IoT) devices intertwined with real-time, responsive applications help enterprises and organizations take a leap ahead of what they’ve done in the past. Distributed cloud-native architectures and edge computing make this all possible.

We now respond faster to consumer data, go to market with right-fit goods and services, and in some cases save lives with time-sensitive insights and action. Consider, for example, the interactive wearable fitness app Fitbit and health apps like Noom, both enabled by digital innovation using cloud services. They scale with usage and across geographic regions because of the cloud.

Cloud Architecture Approaches

As successful approaches to cloud adoption emerge from just about every sector, enterprises and institutions must keep up and, ideally, win against their competition. There’s little room for failure when the competition is ready and willing to snatch up unhappy customers, or when failure puts lives at risk, as in healthcare and some agency initiatives. The US Department of Defense (DoD) has initiated a “Cloud Smart” plan to make sure cloud adoption works without catastrophic consequences.

Every technology and associated vendor has its own spin, from open-source solutions to enterprise-class incumbents like IBM, Dell, HPE, and BMC. Splunk, ServiceNow, and New Relic are other emergent and dominant technology offerings to consider, each in its own best-of-breed swim lane, so to speak. Data centers have large footprints of VMware, with Nutanix and OpenStack emerging as on-premises private clouds.

While getting ramped up and running in the cloud, today’s IT leaders can’t overlook key decisions. As industries shift in this cloud and digital era, no one wants to make mistakes. When it comes to trusting one source over another and understanding what’s best for the organization, there’s unfortunately no “one-size-fits-all” approach.

The approaches pitched all have merit and common elements. And a good approach is better than no approach, even if it is self-serving to some extent. 

Amazon Web Services (AWS) is betting on its “Well-Architected Framework” as its signature methodology for approaching cloud services. As an Advanced Technology Partner in the AWS Partner Network (APN), CloudBolt is ready to integrate the framework’s design principles as needed for any organization on the journey.

Getting Cloud Savvy

AWS cloud architects, developers, and other IT pros have spent a great deal of time and resources encouraging others to follow a framework for success. With input from hundreds of seasoned AWS Solutions Architects and their customers, AWS pitches the Well-Architected Framework.

This framework has been evolving for a decade but more formally “as a thing” since 2017. 

Here’s a quick introduction to the five pillars of the AWS Well-Architected Framework: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

AWS provides an incredible amount of detail for each of these five pillars, along with design principles for each one, on its website. This introductory white paper, published in July 2019, is a good start.

As a quick summary of the pillars in action, check out how CloudBolt can help with specific aspects of each of them.

Operational Excellence—The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. 

CloudBolt provides inventory classification and tagging for each synchronized resource in AWS.

Security—The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. 

CloudBolt provides role-based access control (RBAC) and permissions to implement a least-privilege strategy for end-user access. A Group Admin, for example, controls all aspects of the group’s permissions.

Reliability—The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. 

CloudBolt provides a customizable framework of actions that can execute any business logic to meet reliability demands, such as notifying group admins when quota usage crosses a threshold.

Performance Efficiency—The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. 

CloudBolt blueprints can provide standard access to efficient builds of automated resources without requiring any specific domain knowledge from the end users. 

Cost Optimization—The ability to run systems to deliver business value at the lowest price point.

CloudBolt provides many ways to build in cost optimization, with workflows that automate best-venue execution for cost as well as power scheduling and the enforcement of expiration dates when necessary. For AWS, CloudBolt can recommend reserved instances after analyzing spending patterns.

Although the reserved instance example shows only minimal savings in a lab environment, consider the dollar amount as a percentage: the recommended reserved instances would yield $39.37 per year in savings, a cost reduction of roughly 36% for the year (39.37/109.47).

For more detailed information about each of the pillars, check them out here.

CloudBolt and AWS

CloudBolt provides visibility and governance for the workloads you deploy and manage in AWS. CloudBolt’s powerful blueprint orchestration and provisioning capabilities give experts on the AWS platform a way to curate and provide right-fit resources to end users who just need one-click access to infrastructure, without needing to know or understand the backend technology.

For more information on how CloudBolt works alongside AWS, request a demo today!


Problem Description:

The Custom Naming machineRequested workflow runs for a long time and eventually fails with this error:


