Near the end of every year, everyone likes to look ahead and imagine how life might change. It’s fun and it allows us to write about what we want!
So Forrester gives us their predictive takes (Predictions 2023) on what’s coming in 2023 and I thought, why not weigh in on those that affect our world of multi-cloud infrastructure?
CloudBolt could NOT agree more! Being pragmatic is a requirement for transitioning from the ‘great resignation’ into a possible recession and a period of ‘do more with less’. Automation ensures greater coverage, efficiency, and productivity. One of CloudBolt’s strengths is its automation and orchestration capabilities around the delivery, usage, and reporting of resources and applications. Every organization has multiple automation scripts and snippets, but they’re often trapped within organizational silos. CloudBolt can tap into that well and bring order to automation chaos. Speed to market and new innovation with fewer people is the goal! Another angle on automation is security and governance: through automation, you can dictate safe, standardized, and predictable behavior!
Kubernetes use and investment will continue to rise and SHOULD, but VM-based workloads are not going to be totally replaced in 2023. A transition period has begun around DevOps, CloudOps, and ITOps, and it centers on delivering resources faster, more consistently, and in a governed and well-tracked manner. Call it K8s, an Internal Developer Portal (IDP), or ‘Platform Engineering’ – they’re all poking at the same issue: how to more efficiently deliver innovative solutions to problems (customer or internal). With an API-based open architecture, CloudBolt supports containerization today and can easily adopt the next tool to aid DevOps, FinOps, or SecOps that is right around the corner!
Forrester claims “an average of 200,000 open tech jobs that cannot be filled due to lack of suitable candidates.” I think automation and orchestration play another HUGE part here too! If I can automate the tasks of 3 people into 1, then I don’t need as many people. “But what about the skills?” Build some of that into the process. If this happens, then do this… the essence of automation! Part of the skills gap problem is that we are constantly changing, seeking new and innovative ways to deliver. But by doing so, we CREATE a skills gap. CloudBolt addresses this by building expertise into the workload/container. We build Terraform and Kubernetes workloads, for example, that don’t require users to KNOW the tool – they just click the button and it works!
The 2021 Verizon Data Breach Report found that miscellaneous errors were responsible for almost 20% of all data breaches. Much more eye-opening is the finding from a Gartner survey that misconfigurations cause 80% of all data security breaches! IT’S A PROBLEM, and scandals caused by cyber-attacks erode customer trust.
WHAT CAN YOU DO ABOUT IT?
Of course, you can invest in tools to monitor for bad behavior, alert on its presence, and protect credentials, but reducing application and resource misconfigurations and human errors addresses these hidden vulnerabilities. CloudBolt enables you to offer workloads that orchestrate approved and tested processes and resources. Those procedures are governed by policy to ensure they execute safely and consistently every time, without thought from users.
We regularly hear our customers say they were rapidly able to reallocate 20-30% of their staff to other high-value projects or that we gave them back the equivalent of one full workday per week!
EVERY new year brings uncertainty. You know the problems and issues that plague your organization, and some, like those above, are likely to be written about. But regardless of today’s and tomorrow’s problems, the tools of automation, easy adoption (APIs), and tracking become essential. CloudBolt is focused on addressing the operational complexities introduced in a multi- and hybrid-cloud environment. How can we help you?
Cloud Management Platforms (CMPs) are largely sold to IT and IT Operations teams to help deliver infrastructure to the business more efficiently. That includes self-service automation for developer users across a mix of on-prem, private, and public clouds (hybrid multi-cloud). A new term with striking similarity is emerging: Internal Developer Portal/Platform (I’ve seen it both ways). The “IDP” acronym also collides with a more popular term of the same three letters – Identity Provider.
Internal Developer Platforms claim to automate the app delivery process from a serve-yourself portal. Security is built-in so developers don’t have to remember. Guardrails keep bad behavior out and APIs enable choice and flexibility among favorite tools (Terraform, OpenShift, Ansible, Kubernetes, etc.). Ummm… this sounds a lot like CMPs and their self-service portals with automation, governance, and out-of-the-box integrations.
ALWAYS consider these KEY audiences when managing hybrid multi-cloud infrastructure
Cloud Management Platforms were touted to bring order to cloud infrastructure chaos, and they did, to a certain extent. We got self-service portals and infrastructure was served up much faster, but you had to bring some expertise with you to get the speed: enter certain values, point to particular clusters, remember to add the security parameters, ensure everything was tagged, and so on. It was faster – better than waiting 3 weeks! – but not quite ideal.
There are four key audiences involved in infrastructure use, each with a different goal:
- ITOps – Maintain and serve; the goal is to deliver infrastructure to users quickly and efficiently
- FinOps – Ensure infrastructure spending is optimized
- SecOps – Safeguard infrastructure and users from cyber attack
- DevOps – Innovate with speed
All four audiences should be involved in a self-service solution or you run the risk of silos, poor solution choices, and getting too many resources tied up in going different directions. If you attack the problems as a cohesive whole, then you have much greater chances of success. Next, let’s look into the capabilities that matter!
Call it “whatever” – critical to managing hybrid multi-cloud infrastructure
- Automate infrastructure delivery into self-service (Self-order and receive) – The faster the better and the more “built-in” the better. This can be a way to introduce the organization to the benefits of a tool like Terraform without having to train everyone on how to use it. One-click order and deployment!
- Process orchestration – Continually blending existing and new scripts, tools, and processes helps ensure end-to-end process flow. Gain the ability to take “goodness” from one area of the organization and “share” it for the benefit of everyone else. Processes become modular: change a sub-step and the rest of the process works as previously intended.
- Govern behavior – Said another way, put guardrails in place so users cannot get into trouble. From role-based access to building governance into blueprints and/or workloads, security is best when it is invisible to users. As you uncover anomalies, you can write policies to ensure they never happen again… a loop on continuous improvement.
- Auto-tag resources – Relying on humans to tag resources is fraught with risk. Like above, automate it (users have to do nothing). Automatically tag any resource used/ordered from the self-service portal. Doing so ensures you can track, report, and optimize future spending (public cloud, private cloud, on-prem…mix, match, and stack). No tag = No track = No optimize.
- Accelerate Development/Engineering – The real value to the organization comes from meeting customer needs faster and better. That could be adopting the next Terraform and deploying without a huge training requirement. It could be testing infrastructure BEFORE running a CI/CD pipeline and learning of failure mid-way. A customer recently said their devs were spending roughly 20% of their week messing with and managing the infrastructure so they could build upon it (prior to CloudBolt)! Imagine getting even 10% of that time back…
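To make the auto-tagging idea above concrete, here is a minimal sketch of a post-provision hook that merges portal order metadata into a resource’s tags so users never tag anything by hand. The hook signature, field names, and required-tag set are all illustrative assumptions, not a real CloudBolt or cloud-provider API.

```python
# Hypothetical post-provision hook: auto-tag every resource ordered
# through the self-service portal so cost tooling can track it later.
# Field names and the required-tag set are illustrative only.

REQUIRED_TAGS = ("owner", "cost_center", "environment", "expiry")

def auto_tag(resource: dict, order: dict) -> dict:
    """Merge self-service order metadata into the resource's tags."""
    tags = dict(resource.get("tags", {}))
    tags.setdefault("owner", order["requested_by"])
    tags.setdefault("cost_center", order["group"])
    tags.setdefault("environment", order.get("env", "dev"))
    tags.setdefault("expiry", order.get("expiry", "none"))
    # Fail the provision rather than create an untrackable resource.
    missing = [t for t in REQUIRED_TAGS if t not in tags]
    if missing:
        raise ValueError(f"untaggable order, missing: {missing}")
    resource["tags"] = tags
    return resource
```

The key design point is that tagging happens in the delivery path itself, so "No tag = No track = No optimize" can never occur for portal-ordered resources.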
At the end of the day, vendors and analysts can play around with names, meanings, and endless acronyms but if you involve key stakeholders and solve your collective problems, it doesn’t matter what you call it… it will be successful!
And when you’re talking about customers, security, speed, and automation, doing it correctly can be the difference between thriving and just surviving in today’s uncertain economic times. When seeking solutions, look for flexibility and extensibility; the next open-source super-solution is right around the corner, and you WILL WANT to take advantage. Make sure your framework choice doesn’t force you down a particular path. Another piece of advice: get a cost management/cost visibility solution in place WHEN you deploy self-service automation. If you don’t, you’re inviting trouble. Self-service ease with no EYES on costs is a disaster waiting to happen… I think the finance and accounting people reading this just gasped… a little.
Businesses are adopting software at a faster rate than ever before. Many companies are adopting cloud/SaaS apps for things like Sales Force Automation or Cloud Cost Management at unprecedented rates. But just because you’ve implemented a new piece of software doesn’t mean everyone will use it. Even if that application can save time, improve processes, or help users be more productive, people are resistant to change. Having a plan early to ensure adoption, even while you’re still evaluating new software, pays dividends later.
“78% of mobile apps are abandoned after only a single use, and web applications and software don’t fare much better”
Doesn’t matter how fast you implement software if no one uses it
This may sound obvious, but it’s an important point. So much time and effort are focused on going faster, sometimes we miss the point. Speed is great but racing to a finish line where no one uses it is simply a waste! Glad we got that out of the way. Now let’s look at some of the factors that make software adoption more important than deployment.
Software adoption is a process of change — organizational & personal
In order to achieve full adoption, it’s important to consider the entire organization that will be using the software. For example, if you buy a self-service automation solution to serve infrastructure to engineering teams and they don’t use it, it doesn’t matter that it’s easier, faster, and more secure.
When implementing any new system, there are many stakeholders who need to be involved in the decision-making process (and ideally will also feel ownership over its success). The trick is getting these various groups on board with the idea of adopting your new solution and making sure they know what they’ll need from it once it’s up and running. In the case of cloud operations management, you have an IT audience, an Engineering audience, a Finance audience, and often a Security audience. A successful project starts with pulling together key stakeholders and understanding their needs.
Adoption starts when evaluating new software
The best time to start talking about adoption is when you’re evaluating new software systems.
The process of adopting a new product or system can be broken down into three phases:
- Implementation – getting the technology up and running in your organization (hardware, software, integration)
- Roll-out – training and communicating with users about how to use it once it’s ready
- Adoption – making sure that users incorporate the technology into their daily workflow
Think early about use in a daily workflow. Do you use incentives? Do you get leaders to adopt 1st and “show” others the benefits? Do you use negative consequences? Understand your audience’s daily workflow and show how/where new software improves it with minimal effort.
THE KEY – A comprehensive plan
The plan needs to include key applicable parties. It’s not enough for your technology team to create or buy new software; everyone involved in the project also needs to understand how it will be used and how they will benefit. You can’t just develop or buy an amazing capability and then expect people within your organization to adopt it on their own; you need everyone at every level of your company on board with your vision and mission before major changes are made. In my example above, this would include ITOps, DevOps, FinOps, and SecOps.
The best way I’ve found to achieve this is by creating an end-to-end strategy: one that includes all aspects of the project’s lifecycle – from initial requirements gathering through post-production maintenance – and making sure each applicable team understands how they fit into the process. That way, if someone has questions about something they’re working on, they know who should help them with particular issues, because we’ve already discussed what needs to be done beforehand. It also ensures we don’t miss anything important when building out new processes or approaches, because we covered everything during the planning stage. I cannot stress enough how much feeling like part of the answer early helps adoption later.
Don’t get too caught up in demoing features or checking off list items
Instead, focus on the user(s). Help them think about how this software is going to improve their lives and compare it to what they do today. You’ll find that users will be more engaged with your ‘product’ when they recognize its value early on. Show them why they need it. Show them what’s needed to get up and running. Speak in the language of your users. If they call something “Blue” when everyone else calls it “Red”, use their terms. And that’s when your team will have an easier time delivering something impactful!
Using my example above, this could be showing the development team how much time you can save by automating delivery. Or could be showing the finance team you can tag, track, and accurately report on resource use to properly chargeback and plan. Or could be showing the security team how you can prevent bad behavior from ever happening in the first place.
Think about users’ needs and goals first, then work backward
We often talk about the importance of understanding user needs, goals, and business requirements before delivering software. But we don’t often talk about how important it is to understand your software architecture for the same reason: because it’s hard to iterate on a system without knowing what that system is going to do first.
To arrive at an optimal design, there are several steps to follow:
- Start by understanding the problem space.
  - How big are your problems? (e.g. provisioning resources takes too long and usage is difficult to track)
  - Who are they affecting? (e.g. engineers, finance, IT)
  - What is your goal with this project (expected outcome)? (e.g. reduce provision times by 50%+ and be able to showback 80%+)
- Look at potential solutions to those problems and assess their pros and cons (the list could be long). Focus on what matters most to you, your users, and your situation (e.g. resources available, both human and technology).
- Identify key stakeholders within each department so everyone understands how they will benefit from using the new product/service/tool (e.g. ITOps, DevOps, FinOps, SecOps).
If you’re planning a software rollout, chances are you’re thinking about adoption. That’s good! But remember that adoption is just one part of the process. To make sure everyone on your team is on board with your plans and prepared for change, you need to start by understanding their goals—and making sure they understand yours as well. Do this early, it will pay dividends later.
Developers typically are technology junkies. By nature, they love to explore the latest, greatest tools and try out new capabilities. As a result, organizations end up having hundreds, if not thousands, of developers out doing their own thing – downloading open-source and freeware tools and intermingling them into processes along with technology provided by the organization – to innovate and drive new ways to solve customer problems.
This can cause issues including:
- Benefits of a particular tool being limited to only the few who know how to use it
- Developers losing valuable ‘office’ time learning how to use the new tools
- Lack of standardization in security, compliance, and governance standards and procedures
- Limited sharing of best practices across teams – that would accelerate innovation systemically
While devs explore new tools in pursuit of finding ways to be faster, better, and more innovative, the reality is that the opposite often occurs instead.
Challenges of rogue tool adoption
One of our prospects described their developer community as “6,000 different snowflakes”, each with perceived unique needs and siloed from one another. This is not collaborative. This is not efficient. If 6,000 different people are using Terraform, chances are they could be using it 6,000 different ways – and some of those WILL be better than others, but they’ll never know because they aren’t sharing what works best.
Furthermore, no one learns technology through osmosis; there is a learning curve during which they are not providing value because they’re learning. Each person is constrained by their individual level of proficiency and their ability to use the tool. How much productive time is lost in learning new tools every week, month, or year?
How do you enable devs to use whatever tools they want while still complying with governance? How do you get them to build security into their applications and processes? How do you get them to follow procedures outlined for everyone’s best interests (efficiency and risk)? How do you ensure a majority use IT-sanctioned and approved resources?
THE ANSWER: AUTOMATION
Ways to improve the situation
Reduce the learning curve by automating steps. For instance, build a Terraform plan with the required infrastructure calls already built in. Do this for Ansible plans, Chef recipes, any tool! Doing so reduces the skill and learning curve required. Devs don’t have to know the underlying platform; they simply execute on the infrastructure options provided. Allowing devs to choose the right tool for the job is good, but it’s better to ensure everyone can use those tools in a standardized, secure, and optimized way.
Latest estimates show that devs are spending anywhere from 19-26% of their time building and maintaining their own environments so they can do their jobs! How much faster could you propel your business forward if developers got back 8 hours or more per week to innovate versus having to learn tools or provision resources? Building automation, security and compliance into a workload is ideal and ensures devs use approved and protected workloads consistently across varying platforms and clouds. Automation allows you to build it all in for them to make it a “one-click” order. By doing so, you can abstract away complexities like networking, storage, environmental constraints, security footprint, power management, and more.
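One way to picture the “one-click” abstraction described above is a thin wrapper that assembles a fully parameterized Terraform command from a single user choice, with security settings baked in rather than left to the dev. This is a sketch under assumptions: the variable names (`instance_type`, `owner_tag`, `encrypt_disks`) and the size mapping are invented for illustration and would correspond to whatever your pre-approved module actually exposes.

```python
# Sketch of a "one-click" Terraform order: the platform team pre-bakes
# the module, variables, and security posture; the dev picks only a
# size. Variable names and the size mapping are hypothetical.
import shlex

APPROVED_SIZES = {"small": "t3.small", "large": "t3.large"}  # example mapping

def one_click_terraform(size: str, owner: str) -> str:
    """Return the fully parameterized command a portal would execute."""
    if size not in APPROVED_SIZES:
        raise ValueError(f"size must be one of {sorted(APPROVED_SIZES)}")
    args = [
        "terraform", "apply", "-auto-approve",
        f"-var=instance_type={APPROVED_SIZES[size]}",
        f"-var=owner_tag={owner}",
        "-var=encrypt_disks=true",  # security baked in, not user-chosen
    ]
    return shlex.join(args)
```

The dev never sees Terraform syntax; a portal would run the returned command against the approved module, so every deployment is tagged, encrypted, and right-sized by construction.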
The challenge is real and complex, but the answer is straightforward. Organizations must accept that Devs will continue to explore and use the latest tools (especially open source). But to be productive, responsible, and efficient, companies need to ensure that automated guardrails exist to remove complexity, ensure optimization, and eliminate risks. If you’re looking for the easiest and most comprehensive way to accomplish that in your organization, CloudBolt is here to help!
You can only track what you know and tag – Cost Management Only As Good As Your Cloud Management
No one is able to gain visibility into “shadow IT” – but when you provide infrastructure in minutes vs days or weeks, devs are more likely to use IT-provided resources vs going rogue and provisioning public cloud resources on their own. Cost management solutions are only as good as YOUR tagging strategy. By that I mean: if you do not tag regularly and consistently, your cost solution can only “see” what it is able to look for; it uses tags to track and categorize. Cost solutions don’t provision and tag resources. Trying to tag manually is inconsistent at best – and inconsistent tagging is as good as no tagging at all. Without a diligent tagging strategy and execution, it doesn’t matter WHAT cost solution you choose; it can only do its job if it’s aware of the resources.
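A quick way to gauge the problem described above is to measure what fraction of your inventory a tag-driven cost tool could actually “see”. The sketch below treats resource records as plain dicts; in practice they would come from a cloud inventory API, and the required-tag set is an assumption you would replace with your own policy.

```python
# Illustrative tag-coverage audit: what share of resources carry every
# tag your cost tool needs? Records here are plain dicts; a real audit
# would pull them from a cloud inventory API.

REQUIRED = {"owner", "cost_center"}  # example policy, not prescriptive

def tag_coverage(resources: list) -> float:
    """Fraction of resources carrying every required tag."""
    if not resources:
        return 0.0
    ok = sum(1 for r in resources if REQUIRED <= set(r.get("tags", {})))
    return ok / len(resources)
```

If this number comes back at 55%, that is the ceiling on what any cost management subscription can track and optimize, which is exactly the “is it worth the fee?” question posed below.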
Peer Survey – Only 9% of respondents said they “always employ tagging” – 73% said they “sometimes used tagging”
“Can’t optimize one without the other…” – Cloud & Cost Management TOGETHER
Why have so few vendors offered these two pieces together? They are integral and vital parts of increasingly complicated multi-cloud operations. Some hypothesize that NetApp can get there with its CloudCheckr acquisition, but they lack true cloud management & automation. VMware has the pieces but their approaches are VMware-centric, complicated, and rigid (plus, being bought by Broadcom has cast a shadow over near-term and long-term innovation). That leaves CloudBolt, which has embraced this cost & automation tandem for years.
We believe in the importance of having something in place to regularly and automatically discover workloads (ones that may have slipped through the cracks). That, combined with a solid tagging strategy, is a prerequisite. A cost management solution without automated enforcement – ensuring anomalous spend is not only remediated but doesn’t happen again – solves the problem incompletely, which is like not solving it at all. (To ponder: if a cost management solution only shows, tracks, and optimizes 55% of your total cloud spend, is it worth the annual subscription fee? Could you be gaining so much more if discovery and tagging were already observed best practices?)
Multi-cloud visibility is fuzzy – Get the picture you want
All the cost management solutions have similar AWS capabilities, but where they break down is multi-cloud. They were developed in a prior era, one where there was only one public cloud that mattered. But now Azure is nearly as popular as Amazon, even GCP is rising in popularity. With 92 percent of organizations having a multi-cloud strategy in place or underway, being good at just AWS isn’t good enough anymore.
Seek out solutions with good Azure capabilities (or GCP if that is your primary or secondary option). Ensure your cost solution gives you an overall view across clouds; most today require different screens and show different information, making it infinitely more difficult to compare, contrast, and optimize. Ensure flexibility in reporting. Inevitably, key stakeholders within the organization are going to want to see data and reports in new and particular ways. Reporting flexibility goes a long way after initial implementation.
Automate & Orchestrate for higher levels of security and efficiency
To reiterate, cost management solutions are not tagging solutions. They are not infrastructure provisioning, automation, and management tools either. But when used properly in combination, they become a powerful weapon. The power comes from continually identifying anomalies and bad behavior with a cost solution and then turning around and automating a process that ensures that particular anomaly or behavior doesn’t happen again with an infrastructure management solution. Common examples of this can include:
- Workloads left idle when done
- Ordering resources that are over-powered for the desired task
- Forgetting to power down compute over weekends/during off hours
- Choosing an over-priced option when cheaper & better alternatives are available
- Over-provisioning reserve instances for “savings”
Cost solutions typically only make you aware of the issues! While the first step is always identification of anomalies, the ability to ensure it doesn’t happen again is a comparative superpower. Governance to help humans be less error-prone, less forgetful, and less wasteful is essential to the next phase of Cloud.
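Closing the loop on one common anomaly from the list above, idle compute left running over the weekend, can be as simple as a scheduled policy check. The sketch below is illustrative: the tag names (`environment`, `always_on`) and the weekend-only rule are assumptions standing in for whatever your governance policy actually says, and the power-down action itself is left to your provider’s API.

```python
# Sketch of an off-hours shutdown policy: a cost tool flags weekend
# waste once; a scheduled check like this keeps it from recurring.
# Tag names and the weekend rule are hypothetical policy choices.
from datetime import datetime

def should_power_down(now: datetime, tags: dict) -> bool:
    """Power down tagged non-production workloads on weekends."""
    if tags.get("environment") == "prod":
        return False  # never touch production
    if tags.get("always_on") == "true":
        return False  # explicit, auditable opt-out
    return now.weekday() >= 5  # Saturday = 5, Sunday = 6
```

Run on a schedule against tagged inventory, a rule like this turns a one-time cost finding into permanent governance, the “comparative superpower” described above.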
Bottom line, EVERYONE is suffering from a labor shortage and a skills gap… you simply cannot keep throwing people at the problems. It’s error-prone (which causes even further problems), expensive, time-consuming, and doesn’t scale.
Pairing a multi-cloud cost management solution with a multi-cloud cloud management solution has become imperative.
Tips & Tricks: Your solution(s) search
Good Capabilities Across Major Clouds and vCenter
If you’re not multi-cloud today, you will be soon. Not all cost management tools have solid features across the major clouds and on-premise vCenter. Today, having advanced capabilities on AWS is table stakes. Azure is fast becoming a real competitor to AWS yet most cost management vendors have only rudimentary capabilities. GCP is even worse.
Show Me Multi-Cloud Views
Seek vendors with multi-cloud views. Many cost management vendors require you to hop between screens to “see” your multi-cloud spend. It’s annoying and eats up time. Why not show it all in a single dashboard? Similarly, seek vendors with flexible reporting. Requirements WILL change and people WILL request variations… be ready!
Pair Cloud Management with Cost Management
Ensure you have a good provisioning and automation solution that spans multiple clouds and on-prem. (If you have no on-prem, no problem, but nearly every enterprise has some on-premise infrastructure.) This ensures you can automate bad behavior away before it happens, which enables infrastructure teams to deliver continuous improvement. Ensure your provisioning/automation tool can tag resources and users. No tags = no tracking = no reports.
It was easier in the past…
Infrastructure delivery 10 years ago looked a lot different than today (heck, it likely looks a lot different today than just a couple of years ago). On-prem was easy to track and budget, and as we introduced elastic cloud computing, we could still handle tracking infrastructure usage, reporting, showback/chargeback, and improving processes.
Even as we adopted more advanced technology, integrated more systems, and automated more processes while using more and more single cloud services, things for the most part stayed under control. We hired people. We used native tools. We bought business reporting & intelligence tools to give us new insights.
Adding more & more over time weighs the organization down…
Over time we kept adding more cloud services, more new technologies, more new tools, and added a 2nd or even 3rd public cloud to the mix, all thinking we were moving faster, figuring out a better way. But doing this eventually creates exponentially higher degrees of complexity that break 1st Generation tools. (See RESEARCH SHOWS – 79% Are Hitting a Wall Using Existing Tools & Platforms). 1st Gen Cloud Management Platforms (CMPs) and Cost Management & Optimization tools simply fall short in this more complex multi-cloud, multi-tool world we now live in.
We try to mask the issues by hiring more people (analysts and admins) to cover the ever-greater complexity caused by more usage, more clouds, more tools… just more of everything! Some call this technical debt. Call it what you want, but it’s the boat most are in today.
Not enough people & time…
Many of you have gotten to this point:
- Not enough people available – throwing bodies at the problem doesn’t scale
- Native tools aren’t multi-cloud – Great when we were single cloud
- 1st-gen solutions don’t go far enough – Often good at AWS, but not so great at multi-cloud
- 1st-gen tools have inaccuracies, and the time to get data is unacceptable
- Reporting functions are limited – Often not flexible, leaving stakeholders wanting more
- You use a host of disparate tools or manual processes for provisioning, integration, cost tracking and reporting, cloud security, compliance, infrastructure testing, etc.
The lift is too great to ask people to keep writing endless scripts, aggregating data in spreadsheets, and manually generating usage reports & areas to save.
New approach needed – We live in a new world with new problems and challenges…
- Multi-cloud cost views are difficult and require work… aggregate, report, display, etc.
- Automating processes among varying and disparate systems is not easy….there are more tools and clouds than ever before (Terraform, Ansible, Kubernetes…added to Puppet, Chef, Jenkins, Docker, etc. and I’m just listing the Dev tools)
- Securing the organization from easy cloud access, multi-tool complexity, and manual error-prone processes
- Ensuring governance is applied every time – Guardrails to keep users from getting into trouble
It’s time to redefine what MORE means…
The new cloud world requires new tools. Stop trying to fit a square peg in a round hole, seek solutions that bring more integration capabilities (so they can work with existing tools & automate larger activities), more flexibility to approach issues in different ways (more easily adapt to YOUR process than vendor’s), and more current and differentiated capabilities. For example:
- Build governance into an integration, workload, or process (Remain in control)
- Share and re-use goodness with Content Library (Higher efficiency and productivity)
- Test infrastructure before using it (Save time)
- Identify anomalies and suggest optimal improvements via machine learning (Continual improvement)
A decade in the grand scheme of things is not a long period of time. But in cloud computing, it’s an eternity. Companies need to resist the pull of inertia that has many accepting the limitations of technologies (and resulting processes) that were never designed to solve the multi-layered cloud complexities of today. The future belongs to the flexible – flexibility to integrate, automate and orchestrate while optimizing costs and ensuring governance. In the near future, there will be a stark difference between the enterprises and service providers who make the pivot rapidly and those who don’t.
The only question left is which do you want to be?
VMware customers—and more specifically customers of VMware’s vRealize Automation (vRA) and vRealize Orchestration (vRO)—are currently living through a trifecta of disconcerting realities that have many of them rightfully worried. There’s a perfect storm brewing, and in the cloud that’s never good.
Understanding The Trifecta
So, what exactly are the three main issues causing growing consternation and gnashing of teeth? Well, the one gaining the most media attention right now is Broadcom’s announcement of its desire to purchase VMware for $61 billion. Seasoned IT veterans will remember from the CA (Computer Associates) and Symantec acquisitions that Broadcom has a track record of buying a customer base and then milking the cash cow for as long as possible. Note the first sentence in this article from The Register: “VMware customers have seen companies acquired by Broadcom Software emerge with lower profiles, slower innovation, and higher prices – a combination that makes them nervous about the virtualization giant’s future.” And that’s what happens if you’re one of the largest (read: most important) customers. If you’re not a top-500 customer, then your prospects could be even bleaker, as history suggests forced attrition may be in your future.
While I don’t advocate chasing ambulances or reveling in the misfortunes of others, I’d be remiss if I didn’t at least acknowledge the second issue at hand—VMware’s security is in question as a result of the Cybersecurity and Infrastructure Security Agency (CISA) finding four vulnerabilities among various VMware commercial products. Two of the four vulnerabilities were actively being exploited by cyber-attackers, affecting the following VMware products:
- vRealize Automation (vRA)
- vRealize Suite Lifecycle Manager (vRSLCM)
- Cloud Foundation
- Workspace ONE Access
- Identity Manager (vIDM)
The third and final issue in this perfect storm is actually specific to vRA/vRO customers and is a self-inflicted wound: VMware is ending support for vRealize Automation 7.6 and earlier (vRA) and vRealize Orchestrator 7.6 and earlier (vRO), forcing customers into a painful choice—stay with vRealize and painstakingly re-write all of their integrations, automations, and processes in the newer version, or seek alternatives (Terraform, native tools, VMware competitors) to replace vRA/vRO altogether.
What steps can be taken now?
Who knows what the future brings, but hedging your bets and protecting yourself a bit can pay dividends later. Seems obvious, but you want to minimize your reliance on any one solution, particularly the ones identified above. Broadcom is obviously not going to tank the entire VMware business and honestly, they’re incentivized to make it thrive, but history tells us to proceed with caution. Just revisit recent Broadcom acquisitions—CA Technologies (acquired in 2018) and Symantec (in 2019)—and examine who exactly benefited.
Put New VMware Purchases On-Hold
If you were planning any new VMware purchases, taking a more cautious approach may be prudent. It never hurts to wait and see how things “shake out” post-acquisition. If you must have something now, don’t bite on a typical 3-year commitment. Keep an escape plan with as little financial and technical risk as possible. Similarly, I would suspend any VMware professional service engagements that are not critical. Maintain your optionality.
Redundancy & Overlapping Capabilities
How many different versions of Terraform, Ansible, or other projects are running independently in your environment today? Most organizations have some sort of cloud management tooling, be it native or commercial, and most have some sort of cost tracking approach, either manual or commercial. How many tools are needed and can you consolidate to cover if VMware tools are no longer in the mix? What is the lift (cost & effort) to make that happen? Start planning now. Have a “Plan B” just in case.
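As a starting point for that inventory, a short script can surface how many independently pinned Terraform configurations exist across your repositories. This is a rough sketch under the assumption that each team pins its version with a `required_version` constraint in its `.tf` files; the directory layout and any paths you pass in are hypothetical.

```python
import os
import re

def find_tool_versions(root):
    """Scan a directory tree for Terraform version pins as a rough
    inventory of how many independent Terraform setups exist.

    Returns a dict mapping each version constraint string to the list
    of .tf files that declare it."""
    versions = {}
    # Matches e.g.: required_version = ">= 1.3.0"
    pattern = re.compile(r'required_version\s*=\s*"([^"]+)"')
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".tf"):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for match in pattern.finditer(f.read()):
                        versions.setdefault(match.group(1), []).append(path)
    return versions
```

Run it against the root of your source tree: a surprising number of distinct constraints is usually the first sign that consolidation work lies ahead.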
Search for Alternatives
Know what new technologies are available. How is the market talking about solving your specific challenges now versus two years ago? What vendors and solutions are popular now for your use case(s)? Why are they popular? Do you prefer a managed-service or DIY approach? If you had to switch tomorrow… You catch my drift. Be prepared!
You’ve heard the phrase, “all good things must come to an end.” I have no idea if VMware’s time has come or if it’s only beginning to take off to new heights. But the past tells a cautionary tale and recent findings don’t help the situation. The advice I give is simply this—have a backup plan. As with anything else in life or business, plan for contingencies or plan to fail.
I have spent a 20+ year career with vendors making arguments for commercial software and against building capabilities internally. While I believe the benefits of commercial software generally outweigh those of internally developed tools, both have huge benefits! Why does it have to be one or the other? Why can’t one enable and facilitate the other?
The drive to the cloud and multi-cloud environments is pushing the boundaries of build vs. buy. Native cloud management tools tend to focus on their own resources, but don’t help much with a multi-cloud approach, yet 94% of IT leaders agree a hybrid approach is critical. Most cloud & cost management tools are great with AWS or VMware or Azure, but not good at managing, tracking, governing, and optimizing them ALL.
Isn’t this what 94% of IT leaders said they wanted?
Compounding the problem is every organization’s need to innovate and seek new ways to solve customers’ problems. When IT cannot deliver consistent and timely access to resources for engineers and developers to invent, build, and solve customer problems, they go around IT and access resources on their own. If they are not provided adequate (or even good) cloud automation or cloud management tools to perform their jobs, they buy their own or use open-source versions. If you have hundreds, thousands, or tens of thousands of developers and engineers running their own unique tools (it gets worse the more you have), you miss out on the speed of consistency, re-use, and the sharing of what works best. Cloud observability becomes infinitely more difficult in these multi-cloud, multi-tool worlds.
How do you get a true “near-real-time” view of consumption across all public and on-prem infrastructure?
Who is going to build that view? Are the people who want the views the ones who should build them? Why buy a solution just to ‘view’… why not buy one that governs, automates, and optimizes too?
“Views” only go so far… a window that lets you see a disaster coming is good, but less effective than one that provides tools to combat the issues it shows you. Are the views complete? How many rogue multi-cloud tools are there, and how much Shadow IT infrastructure usage is going on? Without being able to know and see it all, how can you ever get the chaos under control?
What if you could have the best of both worlds? “Bring Your Own Tool (BYOT)” AND get the visibility, governance, automation, and optimization the business needs with CloudBolt.
CloudBolt has been making acquisitions, expanding flexibility, and adding unique capabilities for years. We’re excited to offer an evolved approach to hybrid cloud management. We call it the CloudBolt Framework and it helps companies automate easily, optimize continuously, and govern at scale by unifying disparate capabilities for DevOps, ITOps, FinOps, and SecOps.
By pulling together islands of automation, our framework helps bring order to chaos both today and tomorrow.
Watch this video (<2 minutes). Learn how CloudBolt can help you move from the “either/or” of Buy vs. Build to the “both/and” of a unifying framework that accelerates all your multi-cloud/multi-tool efforts!
Terraform is great but it’s not perfect. In fact, there are some critical “buts” that need to be addressed to elevate and accelerate its performance and impact.
Terraform is great, but…
…it requires a significant cognitive load
Time spent learning how to use Terraform is time spent not developing, not building the next customer answer, not innovating. Additionally, Terraform requires inputs—how are those captured, shared, and managed? It’s hard enough to learn the tool itself, let alone where to find the inputs (e.g. cloud environments, instance types, networks, etc.).
If you have lots of dev teams then you also have lots of people at varying stages of Terraform learning and expertise. Abstracting away cloud infrastructure complexities through automation should be applied to Terraform, just like AWS, vCenter, or Ansible before it.
The reason to adopt and use new technology is to improve the process in some way, and the faster you can use it with the least amount of knowledge required, the better.
So what’s the answer? Build automation, orchestration, and security directly into a Terraform plan to decrease errors, accelerate timelines, and vastly reduce learning curves.
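One way to sketch that idea: a thin wrapper that maps a single self-service choice onto a fully specified `terraform plan` invocation, so users never touch the variables themselves. The approved-environment catalog below is an illustrative assumption, not a CloudBolt or Terraform API.

```python
import subprocess

# Hypothetical catalog of pre-approved environments. In practice this
# would come from a governed source of truth, not a hard-coded dict.
APPROVED_ENVS = {
    "dev":  {"region": "us-east-1", "instance_type": "t3.micro"},
    "prod": {"region": "us-east-1", "instance_type": "m5.large"},
}

def build_plan_command(env_name, workdir="."):
    """Translate a one-click choice ("dev"/"prod") into a complete
    `terraform plan` command with pre-approved variable values baked in."""
    env = APPROVED_ENVS[env_name]  # raises KeyError for unapproved envs
    cmd = ["terraform", f"-chdir={workdir}", "plan"]
    for key, value in env.items():
        cmd += ["-var", f"{key}={value}"]
    return cmd

def run_plan(env_name, workdir="."):
    # Requires the terraform CLI on PATH; shown for completeness.
    return subprocess.run(build_plan_command(env_name, workdir), check=True)
```

The user picks “dev” or “prod”; everything else—inputs, regions, sizing—is already decided and standardized, which is the whole point of building the expertise into the workload.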
…it’s not always easy to share what works across the organization so others can benefit.
Terraform plans and exciting use cases rarely get shared across the organization. Siloed dev teams work in isolation, minimizing the impact of better ways, new processes, and optimized configurations. Keeping a lid on the cool, new, and exciting ways your people are using Terraform only minimizes their potential impact.
Stop inventing and re-inventing the wheel. Harness innovation, breakthroughs, and efficient orchestration by sharing with others across the organization so everyone benefits. How much faster can the team move when everyone is able to rapidly leverage what works?
…it’s woefully lacking in governance, visibility, and reporting.
Your finance/FinOps team is continuously seeking an accurate view of infrastructure consumption, be that on-prem, private, or public clouds. Since Terraform is an open-source tool, it’s easy for an individual or a specific team to use as they see fit. As a result, those same groups often provision and use infrastructure outside of normal channels (e.g. purchased directly via credit card). And when that happens, there is no hope for visibility, compliance, cost control, security, or governance. It’s shadow IT, even when done with the best of intentions. Any developer or group not using org-approved resources is creating blind spots. Secure? Not sure. Visible? Not likely. Automated? Maybe.
A management and cloud operations framework allows you to standardize Terraform plans, automate and orchestrate processes, build security into plans, and gain visibility, tracking, and reporting on Terraform use of infrastructure across the organization.
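As one concrete illustration of that kind of governance check, the sketch below inspects a Terraform plan rendered as JSON (via `terraform show -json tfplan`) and flags planned resources missing required tags. The tag policy itself is a made-up example; the `resource_changes` structure follows Terraform’s documented JSON plan format.

```python
# Illustrative tag policy, not a standard; adjust to your organization.
REQUIRED_TAGS = {"owner", "cost_center"}

def untagged_resources(plan_json):
    """Return addresses of planned resources whose 'tags' attribute is
    missing any required tag.

    plan_json is the dict parsed from `terraform show -json tfplan`."""
    flagged = []
    for change in plan_json.get("resource_changes", []):
        # 'after' holds the planned post-apply attribute values.
        after = (change.get("change") or {}).get("after") or {}
        tags = after.get("tags") or {}
        if not REQUIRED_TAGS.issubset(tags):
            flagged.append(change.get("address"))
    return flagged
```

Wired into a pipeline, a check like this turns “we hope everyone tags their infrastructure” into an enforced, reportable policy.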
Watch a quick customer video describing how CloudBolt simplified the use of Terraform and other cloud operations tools.
Terraform truly is a great technology, but there are some costly gaps that must be filled, especially in hybrid/multi-cloud environments. While using Terraform in disparate groups throughout the organization has benefits, it may also be creating costly issues that diminish value, increase cost, and slow down innovation.
Make your Terraform experience better, more efficient, secure, and re-usable so that your whole organization gains the greatest benefits and advantages. No buts about it.