Calculating the return on investment (ROI) for any digital enterprise initiative is tricky. There are so many factors to consider, most of which involve a decision to invest in new digital resources vs sticking with current approaches. The analysis is typically informed by asking, “What business value will be advanced by a new approach?”
Using an either/or approach, you might compare the cost of deploying your infrastructure in a public cloud vs the equivalent resources on premises. Since hybrid cloud means choosing the right-fit environment for each workload, this is a good start.
When making the ROI calculation, consider the cost of the resources as well as the time to deploy them in the cloud or on premises. Your goal is to identify quantifiable savings. However, don’t dismiss the possibility that some choices take less time and cost less while failing to deliver the innovation you’re really after. In addition to time and cost factors, weigh elements like the experience of the users in your environment and the future benefits of adopting a system: an option that costs more in the beginning may deliver a better long-term benefit.
Getting a lower cost and shorter time-to-value is important, but it’s far from the whole picture. Let’s look at the following summary of key analysis factors as you consider possible outcomes for hybrid cloud ROI.
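As a back-of-the-envelope illustration of the trade-off between deployment time and recurring cost, here is a minimal sketch in Python. The dollar figures, the three-year horizon, and the `simple_roi` helper are all illustrative assumptions, not real pricing.

```python
# Minimal sketch of a cloud-vs-on-prem ROI comparison.
# All figures are illustrative assumptions, not real pricing.

def simple_roi(annual_value, annual_cost, deploy_months):
    """Return net value over a 3-year horizon, discounting
    the months lost to deployment in year one."""
    horizon_years = 3
    first_year_value = annual_value * (12 - deploy_months) / 12
    total_value = first_year_value + annual_value * (horizon_years - 1)
    total_cost = annual_cost * horizon_years
    return total_value - total_cost

# Hypothetical workload: public cloud deploys faster but costs more per year.
public_cloud = simple_roi(annual_value=500_000, annual_cost=180_000, deploy_months=1)
on_premises  = simple_roi(annual_value=500_000, annual_cost=140_000, deploy_months=6)

print(f"Public cloud 3-year net value: ${public_cloud:,.0f}")
print(f"On-premises 3-year net value:  ${on_premises:,.0f}")
```

In this made-up case the faster time-to-value outweighs the higher run rate, but shifting any of the assumed inputs can flip the result, which is exactly why the analysis is worth doing per workload.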
Public Cloud vs Public Cloud
Shopping for a public cloud provider for your hybrid cloud strategy is not going to be easy. Recently, all three major public cloud providers have declared themselves “all-in” on hybrid cloud. Microsoft Azure touts its Azure Stack for data center deployment in enterprises. Google recently released Google Kubernetes Engine (GKE) On-Prem to help enterprises manage Kubernetes clusters anywhere. Amazon Web Services (AWS) announced AWS Outposts at its recent re:Invent 2018 conference, bringing the AWS cloud experience on-prem.
Consider that choosing one over the other might also be influenced by the strategic position of your organization or of the public cloud providers. If you’re a big retail customer, for example, you might not want to invest in AWS because of its obvious connection to Amazon’s online shopping business. In another case, your staff might be partial to a Microsoft Windows environment, and that expertise would be underutilized in another public cloud environment.
Public Cloud vs Data Center
For a new business, IT at your fingertips in the cloud that scales on demand will probably be your best bet. In this case, you would then do the analysis in the previous section and pit the public clouds against each other. On the other hand, most large enterprises that have been in business for a decade or so will have a more complicated analysis. Most of their IT departments would concede that they have over 50% of their infrastructure on premises in one or more data centers across the globe.
Adopting an incremental or trial-and-error approach seems to be what most organizations with such a heavy on-prem footprint are doing. Several of our customers have reported that they’ve moved infrastructure to the cloud only to move it back on-prem due to unexpected costs. Others encounter security or control issues, or even a competitive factor like the one previously mentioned about a retailer not going with AWS. The ROI calculation would start by comparing the cost to run services in a public cloud versus those same services on premises.
Hybrid Cloud Delivery
Here’s where it gets more complicated—the way that enterprise users consume resources to achieve digital business objectives. Enterprises with a deep bench in the IT department might have systems in place in which delivering cloud resources is tied to IT service requests. This is typically done through ticketing systems, while provisioning is done by IT operations.
In a DevOps environment, the public cloud resources might be available as self-service within each cloud environment—responsibility and control are more distributed. A cloud delivery platform that connects to hybrid cloud resources for the entire enterprise can enable Self-Service IT and User Empowerment that can have an incredibly positive effect on ROI.
Other factors to consider in the time-to-value side of ROI include complexity versus ease of use for any system in place, as well as extensibility to future and legacy technology.
CloudBolt provides a single platform for hybrid cloud delivery that enables significant ROI for faster time-to-value and the ability to empower end users to innovate without being tied up in the complexity of configuration in multiple clouds.
If you’ve ever tried to get a job in software, you’ve likely been given the advice to go check out meetups—networking events where like-minded people go to meet others with similar interests. Some folks might be turned off by the term “networking event”, associating them with rooms full of strangers and uncomfortable schmoozing, but meetups are so much more than that. Let’s talk about the benefits of meetups, why we at CloudBolt love them, and what we’re doing to support the awesome people who make them happen.
Why go to meetups?
If you’re a developer looking to learn a new skill, a meetup can be one of your best resources. There are meetups for all sorts of tech topics, from machine learning to MySQL. Here in Portland, searching the “tech” category on meetup.com returns over 300 groups, so if there’s a specific subject you want to learn there’s probably a meetup for it. Many of these meetups involve presentations where you can learn from people’s real-world experience, where you not only get the chance to hear their expertise but also to ask them your own questions and enhance your understanding.
Meetups also offer a great chance to practice your skills. Instead of having presentations, many meetups are gatherings of people who want to work on their projects, but could use a place to do so and a community to help discuss ideas. Often you will find mentors who are experienced in the meetup’s subject and are willing to lend a helping hand with the work you’re trying to do.
Of course, we don’t want to imply that networking isn’t a reason to go to meetups. Meetups are full of organizers and members who are passionate about their subject and are welcoming towards those who want to learn more. Meetups can also be a way to connect for groups that are traditionally underrepresented in tech. Talking to people at meetups is a great way to learn about interesting work going on in your area and to learn about the other communities and organizations that people are involved with.
And yes, meetups can help you get a job. My first meetup was for a web framework I was just starting to learn in a town I had recently moved to, and I went in feeling extremely nervous. To add to my nerves, half of the members turned out to be people that had interviewed me for a job earlier that day. But meeting them outside the interview room, I saw how friendly they were and how much they enjoyed their work. I left feeling better about the job, and they must have felt better about me because I got an offer shortly after. My story involved coincidence and luck, but meetups allow for this kind of chance connection to lead to great new opportunities.
CloudBolt and Meetups
At CloudBolt, we want to support our community meetups, and the main way we do that is through sponsorship. CloudBolt has been a sponsor of the Portland Python User Group, providing food and drink for attendees so they don’t have to spend the whole time hungry. We want to help foster communities where people can learn, grow, and connect, and this is how we contribute.
We’re also looking for more ways to contribute going forward. In early 2019, our Portland team is moving to a larger office with space to start hosting meetup events. We’re also looking to contribute by giving more talks, and we’re thinking about how we can share what we’ve learned at CloudBolt with the local developer community.
We also sponsor meetups because we want people to know that we’re hiring! We have lots of exciting projects that we want to work on, but we need more engineers to help us do so. We’re looking for people with lots of different skills, and our list of positions is always updating.
Hope to see you at a meetup! Want to get in touch before then? Contact us or apply for one of our open positions.
The likelihood of dealing with enterprise IT gremlins[1] is heightened during certain times of the year for any DevOps team. My brother, who works in IT Disaster Recovery for a healthcare agency, reminded me of this during our most recent Thanksgiving gathering. He had to address four hours of downtime right before the holiday, after something DevOps-related pushed a change to the production system instead of to a test environment. Sound familiar?
Whether it’s a holiday, the close of the quarter, or “go live” day, any number of factors can put a little extra stress on IT staff and give network gremlins more of a chance to plague the enterprise. Although not as mischievous as the mythical gremlins, sloppiness causes trouble and unexpected failures, threatening security as well as contributing to downtime and poor performance.
Self-Service Resources and IT Automation
Keeping gremlins at bay can be achieved with a solid plan for self-service options and IT automation. End users need to have access to hardened resources and processes when others who have the keys to these resources are on PTO or swamped by other high priority projects.
Leaving users in the dust while they wait for resources or an update can drive them to workarounds or shortcuts. The point is that you don’t want anyone in your organization going rogue during stressful times. The more that enterprise IT and DevOps teams enable self-service IT, the less likely folks are to fend for themselves.
Making any DevOps practice or IT process bulletproof for occasional mishaps is nearly impossible, but reducing the likelihood is worth the effort needed by using the following approaches:
- Eliminate bottlenecks
Consider a typical workflow from start to finish and make sure that if there are dependencies requiring manual input, you have taken them into consideration and have an alternative method for achieving the end result. One way to do this is to make sure administrator access is enabled for trusted individuals who can step in if the primary admin is not available. In some cases, this person could be above or just below the primary admin on the org chart. Get your boss’s boss to intervene when necessary and you’ll move the bottleneck along a little faster.
- Automate approvals
Always consider routine approval processes and automate them whenever you can. That does not mean approving every request automatically, but rather setting up automated checklists so that if a request meets the requirements, there’s no need for manual approval. You could also set up specific sets of resources that meet the requirements without an approval. This is particularly useful when you want self-service resources but not an open faucet. IT automation eliminates unnecessary manual errors.
- Consolidate resources
Another way to reduce mishaps, or the “who’s on first?” effect, is to make sure that resource management is centralized to specific teams with defined roles and plans for backup coverage. When resources and roles are scattered throughout the whole organization and someone with a key role is out on PTO, you’ll be scrambling to figure out where to get the IT resources you need—just like the old Abbott & Costello skit.
- Embed security
Security must be part of the whole process from start to finish. When provisioning IT resources on premises and in both private and public cloud environments, there are special considerations for containerization and other virtualized environments in the cloud. Here’s a quick reference for security concerns and DevOps resources: Enterprise Hybrid Cloud Containerization and Rugged DevOps and DevSecOps for Security. These posts drill down into two manifestos that are also helpful in hardening security.
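As a sketch of the automated-checklist idea above, the snippet below auto-approves only requests that pass every pre-agreed check and routes everything else to a human. The field names, limits, and the `auto_approve` function are illustrative assumptions, not a real CloudBolt API.

```python
# Hedged sketch of an automated approval checklist: a request is
# auto-approved only if it meets every pre-agreed requirement;
# anything else falls back to a human approver.

APPROVED_ENVIRONMENTS = {"dev", "test"}
MAX_CPUS = 8
MAX_MEMORY_GB = 32

def auto_approve(request):
    """Return True if the request can skip manual approval."""
    checks = [
        request["environment"] in APPROVED_ENVIRONMENTS,
        request["cpus"] <= MAX_CPUS,
        request["memory_gb"] <= MAX_MEMORY_GB,
        not request.get("public_ip", False),  # public exposure needs a human
    ]
    return all(checks)

routine = {"environment": "dev", "cpus": 4, "memory_gb": 16}
unusual = {"environment": "prod", "cpus": 32, "memory_gb": 128, "public_ip": True}

print(auto_approve(routine))   # True: meets the checklist, no ticket needed
print(auto_approve(unusual))   # False: routed to a manual approver
```

The point of the design is that the checklist is explicit and auditable: anything the rules don’t cover still goes through the normal approval path.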
A centrally managed platform like CloudBolt can get any IT organization on the right path to avoiding the “gremlin” effect, especially as we approach another holiday season and schedules and priorities will undoubtedly be different for many enterprises.
[1] Gremlins are unexplained problems or faults.
Over time, computing has gone from mainframes to bare metal servers to on-premises virtualization to cloud server instances and containerization, and now to serverless computing. What’s next, codeless computing? Probably not, and luckily serverless computing isn’t anything that bizarre. The server element for executing code is essentially abstracted away from developers, and the technology is new enough that we’re still in the Wild West.
Serverless Computing Explained
Serverless computing is a fancy way of saying that you don’t have to worry about servers when you want to execute code, an approach often referred to as Function-as-a-Service (FaaS). Major cloud providers already have compute capacity ready for anyone to reserve for running virtual machines (VMs) and containerized microservices.
For public cloud providers, why not take it one step further and isolate running code on demand as a way to make more money? This is great for developers who need to continuously add services and features to their application stack but don’t want to fuss with managing the infrastructure.
Major cloud providers offer these serverless computing options with an emphasis on the payment model:
- Amazon Web Services (AWS)—AWS Lambda
“Run code without thinking about servers. Pay only for the compute time you consume.”
- Microsoft Azure—Azure Functions
“Accelerate your development with an event-driven, serverless compute experience. Scale on demand and pay only for the resources you consume.”
- Google Cloud Platform (GCP)—Google Cloud Functions
“Event-driven serverless compute platform.”
- IBM Cloud—IBM Cloud Functions (aka OpenWhisk)
“…executes functions in response to incoming events and costs nothing when not in use.”
As great as these services are, though, we still have to contend with the good, the bad, and the ugly.
The Good
The good is the on-demand nature of this computing strategy at low cost. Suppose an application developer wants to give their aging application architecture a quick lift with a small feature that checks an Internet of Things (IoT) sensor in a smart home, like air quality to automatically suggest or order a new air filter. Instead of adding the compute power of infrastructure needed for many thousands of subscribers to the application, they can develop this on-demand function that only needs to run occasionally.
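To make that concrete, here is a minimal FaaS-style handler for the air-quality example. The event shape, the AQI threshold, and the `order_filter` helper are hypothetical; each provider passes its own event and context objects.

```python
# Illustrative FaaS-style handler for the air-quality example.
# Event fields, threshold, and order_filter are assumptions, not a real API.

AQI_THRESHOLD = 150  # hypothetical cutoff for "replace the filter"

def order_filter(device_id):
    # Placeholder for a call to an ordering service.
    return {"device_id": device_id, "status": "filter ordered"}

def handler(event, context=None):
    """Runs only when a sensor reading arrives; no always-on server."""
    reading = event["air_quality_index"]
    if reading > AQI_THRESHOLD:
        return order_filter(event["device_id"])
    return {"device_id": event["device_id"], "status": "air quality ok"}

print(handler({"device_id": "sensor-42", "air_quality_index": 180}))
```

Because the function only executes when a reading arrives, the developer pays for occasional invocations rather than provisioning infrastructure for thousands of subscribers.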
The Bad
The bad is that these functions can get complicated and hard to manage, especially if they must run for more than five minutes at a time in an application process. They must also be accessed through an API gateway, and dependencies from common libraries must be packaged into them. This can be terribly inefficient compared to containerization. The more complicated the coding required, the less likely a serverless function is to suit the application architecture well. For more information, see What is Serverless Architecture? What are its Pros and Cons?
The Ugly
The ugly is that there is currently no standardization of serverless computing across the different public providers. Vendor lock-in becomes a risk as these enticing, low-priced functions become addictive for some developers and enterprises. Unlike containers, they cannot be easily ported from one platform to another.
As Rick Kilcoyne, VP of Solutions Architecture at CloudBolt, stated in a recent article:
“…tantalizing as serverless computing is, one must be fully aware that moving code between serverless platforms is extremely difficult and only made more so by cloud vendor specific libraries, paradigms, and IAM. Serverless computing is the technological equivalent of a snare trap as there’s virtually no way to easily migrate from one platform to another once committed.”
Roundup
Serverless computing should definitely be a part of any enterprise hybrid cloud strategy. Just as a hybrid cloud application has a mix of public and private clouds, it can also have a mix of infrastructure technologies such as virtualization, containerization, and serverless computing with functions. Our CloudBolt hybrid cloud management platform helps you manage it all from one place.
To see how CloudBolt makes serverless computing easier, check out a demo.
At CloudBolt, we believe that software solutions should be easy to maintain, manage, and understand. We also believe they should be self-regulating and self-healing when possible. You will see a focus on this starting in 8.4—Tallman and continuing through our 9.x releases, which will give you better visibility into CloudBolt’s internal status, provide management capabilities directly from the web UI, and reduce the number of times you need to SSH into the CB VM to check things or perform actions.
CloudBolt 8.4—Tallman introduces a new Admin page called “System Status” which provides several tools for checking on the health of CloudBolt itself.
The System Status Page in 8.4—Tallman
To see the System Status page in your newly installed or upgraded CloudBolt 8.4—Tallman, navigate to Admin > Support Tools > System Status. You will see a page that looks a bit like this:
There are three main parts of this page.
1. CloudBolt Mode
This section provides a way to put CloudBolt into admin-only maintenance mode, which prevents any user who is not a Super Admin or CloudBolt admin from logging in or navigating this CloudBolt instance. This is useful when you need to perform maintenance on CloudBolt (e.g., upgrading it or making changes to the database), want to prevent users from accessing it while it is in an intermediate state, but still need to perform some preparation and verification within the CB UI before and after the maintenance.
2. Job Engine
This section shows the status of each job engine worker, each running on a different CloudBolt VM now that active-active Job Engines are supported. It also shows a chart of all jobs run in the last hour and day per job engine. When things are healthy, and the job engines are not near their max concurrency limit, there should be a fairly even split of how many jobs are being run by each worker.
3. Health Checks
This section has several kinds of checks:
- Indications of the health of a specific service, as would be seen from the Linux command line when running `service <name> status`
- Tests of OS-level health, such as a check of available disk space on the root partition
- Functional tests, which perform some basic action to make sure systems are working properly. Functional tests in 8.4—Tallman include writing a file to disk and deleting it, creating an entry in the database and deleting it, and adding an entry to memcache and deleting it.
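As a rough sketch of what these categories can look like in code (not CloudBolt’s actual implementation), here is an OS-level disk-space check and a functional write-then-delete test in Python; the 10% free-space threshold is an arbitrary assumption.

```python
# Sketch of two health-check styles described above: an OS-level
# disk-space check and a functional write-then-delete test.
# Thresholds and paths are illustrative assumptions.

import os
import shutil
import tempfile

def disk_space_ok(path="/", min_free_fraction=0.10):
    """OS-level check: is at least 10% of the partition free?"""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_fraction

def write_test():
    """Functional check: write a file to disk, read it back, delete it."""
    fd, name = tempfile.mkstemp()
    try:
        os.write(fd, b"health check")
        os.close(fd)
        with open(name, "rb") as f:
            return f.read() == b"health check"
    finally:
        os.remove(name)

print("disk:", disk_space_ok())
print("write:", write_test())
```

The functional style matters because it exercises the whole path (permissions, filesystem, cleanup) rather than just asking a service whether it thinks it is running.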
Ensuring the health of the systems that underlie CloudBolt can help you quickly home in on the root cause of an issue, and we hope the System Status page will shorten the time it takes to troubleshoot and resolve issues with CloudBolt.
What’s Next for the System Status Page
We have some ideas for what we might add next:
- Uptime metrics for each job engine worker
- The average time for jobs to complete for each worker
- Disk space checks for all partitions on the CB VM
- CPU, memory, I/O, and network utilization for the CB VM
- Uptime for the CB VM as a whole
- Network health checks, including:
- testing DNS lookups
- testing pinging the gateway
- testing connections to any configured proxies
If there are any of these that seem like they would be especially useful to you, we’d love to hear that to help us prioritize. We’d also love to hear any additional ideas you have for this new page!
Most IT leaders can agree that agility and speed are the main focus for any enterprise DevOps team, meaning that responding quickly to digital business needs is essential. There’s an implied bargain in this—we can get these results faster if we have more control of our working environments. Alongside that, there’s the risk that the DevOps teams will spend a lot more time and money than necessary without a good strategy in place.
DevOps promises to shorten the time-to-value (TTV) by merging aspects of application development and IT operations. DevOps teams need to be empowered to configure, code, and run mission-critical digital services for the enterprise without too many handoffs between different departments.
DevOps Challenges
DevOps teams either play a role in or are responsible for IT resource provisioning, automation, and orchestration followed by development, testing, and delivering applications, services, and workloads. Most of them follow a framework of continuous integration and delivery (CI/CD). But you guessed it—that’s a tall order for most enterprises—particularly because they have become entrenched in so much complexity and have a strong footprint in datacenter legacy technologies. There is a lot of digital value running in these legacy enterprise workloads.
A startup company can just run with all the new and shiny stuff, architecting their solutions from scratch. Obviously, this is not the case for most large enterprises.
There are many tradeoffs to consider in an enterprise-level DevOps process. For example, provisioning new IT resources could be unbridled and very generous. You could give the teams whatever they want by allowing them to self-provision in whatever environment they need with the hopes that they’ll be prudent in making decisions. But how will you know that they do not end up spending more than the value that is achieved by using those resources? Keep in mind that the advantage of being more generous is the ability to get moving on any initiative without being too concerned with the cost of compute resources.
On the other hand, IT resources might not be as easily attainable. Public cloud providers would love for DevOps teams to just order as many resources as they wanted from an open account. But there might be more hurdles to address with regards to governance and approval processes that can slow everything down, leading to TTV taking a hit. By balancing all of these competing factors, DevOps teams need to take into account utilizing legacy value, the speed needed to get started, and their spending budgets.
Achieving Cost Management Nirvana
When DevOps teams have the ability to do self-provisioning unhampered by inefficient hurdles and have the ability to right-size and manage costs throughout the process, they get closer to an ideal state-of-being for delivering value.
Here’s just a short summary of ways to improve cost management for DevOps teams:
- Cost preview and comparisons—Before provisioning resources, provide a preview of what the resources are projected to cost, as well as a comparison of the same resources in other environments. This facilitates a level of control and fosters responsible spending.
- Scheduling and expiration—Resources running in cloud environments rack up expenses when they are powered on, even if they are not being used directly. Providing ways to schedule running resources only as they are needed as well as expiring them when they are not will have a huge impact on IT budgets. Turn off the lights over the weekend, especially in public cloud environments.
- Quota and budget alerting—Controlling the amount of compute power in terms of CPUs, memory, and disk for a specific group of users or tasks can help you avoid a surprise bill when no one is watching. These controls can be set and managed by resource type, by user, or by specific cloud environment, all with the intent of fine-tuning an ideal environment where DevOps teams can work productively without second-guessing whether they are running up expenses unnecessarily.
- Billing summaries—Seeing the actual spending on a monthly basis by group or by service can help provide cost transparency. Knowing what groups are using what resources helps to provide input for a cost-benefit analysis of a digital service. These summaries can then be used to inform what quotas and notifications should be set in subsequent months.
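A minimal sketch of the scheduling-and-expiration idea above: resources carry an expiration date, and a periodic job powers off anything past due. The `Resource` class and its fields are illustrative assumptions, not a real cloud API.

```python
# Sketch of resource expiration: a nightly job powers off any
# resource whose expiration date has passed. All names are illustrative.

from datetime import datetime, timedelta

class Resource:
    def __init__(self, name, expires_at):
        self.name = name
        self.expires_at = expires_at
        self.powered_on = True

    def power_off(self):
        self.powered_on = False

def expire_resources(resources, now=None):
    """Power off every running resource whose expiration date has passed."""
    now = now or datetime.utcnow()
    expired = [r for r in resources if r.powered_on and r.expires_at <= now]
    for r in expired:
        r.power_off()
    return expired

now = datetime(2019, 1, 7)  # hypothetical run of the nightly job
vms = [
    Resource("demo-vm", expires_at=now - timedelta(days=2)),   # past due
    Resource("build-vm", expires_at=now + timedelta(days=30)), # still needed
]
expired = expire_resources(vms, now=now)
print([r.name for r in expired])  # ['demo-vm']
```

In practice the same loop would call a cloud provider’s power-off API, which is what turns “turn off the lights over the weekend” from a policy into an enforced default.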
Nirvana with CloudBolt
CloudBolt’s enterprise hybrid cloud management platform provides a perfect fit to empower DevOps teams with resources from on-premises legacy infrastructure to every major private and public cloud provider. The platform also provides robust cost management to satisfy the dilemma of balancing speed and agility with spending.
For more information about avoiding runaway scenarios like these in your enterprise IT environment, check out our solutions overview.
Hybrid Cloud and Hypervisor Management
It’s no secret that most enterprises have a mix of cloud technologies to meet their IT needs. They set up IT environments in both private and public clouds that are used to develop mission-critical applications and run additional services as well as subscribe to software-as-a-service (SaaS) applications to support other business needs.
New cloud initiatives can help enterprise IT achieve:
- Cost savings from capital expenditures (CapEx) to operating expenditures (OpEx)
- Flexibility for application development and deployment
- Ability to scale more efficiently based on demand
- Simplified access to business applications
When these initiatives get out of control, cloud usage can end up giving you unexpected results and catch you off guard—in many ways, it’s like being caught in a rainstorm without an umbrella. Having the visibility of what is running in complex environments is key, just as watching clouds overhead can help you plan for the upcoming downpour. You want to keep an eye out for any storms that can come your way.
Visibility and Control
As your IT department manages the complexity of a hybrid cloud environment, you must consider the following:
User Management
Enterprise user management requires the administration of connections to so many different environments such as IT systems, networks, and SaaS applications. Most enterprises implement role-based access control (RBAC) so that each resource can be accessed based on a level of security and control.
The management of users and passwords can easily get out of hand, especially in large enterprises. This, obviously, hinders overall productivity. One way to mitigate the maintenance of the credentials used across an enterprise is to configure Single Sign-On (SSO) so that everyone in the organization can use just one username and password to access the many IT resources they need. As more complexity enters IT control, maintaining SSO access can become more difficult without a plan in place.
Subscriptions and Metered Usage
SaaS applications, as well as platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), each have specific accounts to set up and manage. Some of these accounts might have been initially acquired by other departments, but they are now part of central IT.
SaaS applications typically have billing tied to the number of users or to user access levels, which makes setting and maintaining a budget easier, although some use consumption-based pricing, which can be more difficult to anticipate. Other “as-a-service” resources can be billed based solely on usage. Their costs can surge unexpectedly during peak times, or the resources can keep running and racking up expenses with no oversight.
Cloud Resource and On-Premises Inventory
As private, public, and on-premises resources run side by side, IT departments benefit from the ability to discover and maintain an inventory of every environment. In some cases, they can do this with the native console for a particular environment. For example, they can log in and manage their on-premises inventory of virtual machines (VMs) using tools like VMware vCenter. For public providers, such as Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP), they can get a similar view of resources and usage within the clouds themselves.
As most enterprises are adopting a range of cloud resources to meet the growing needs of their users, the ability to maintain a view of the inventory and usage can get overwhelming quickly. For that reason, they seek ways to consolidate views and make it easier to manage whenever they can. Having a robust hybrid cloud management platform that provides this visibility to you in one place will greatly reduce this struggle.
Provisioning and Orchestration
Having the ability to provision and orchestrate which resources are provided for which users has a significant impact on the overall productivity of an enterprise. IT needs to oversee how resources are created, modified and deleted across their on-premises, private, and public cloud environments. Most enterprises now have resources from at least two of the three major cloud providers, AWS, Azure, or GCP.
For large enterprises, this process can be taken care of by IT service management solutions that are designed to help users and IT work together to initiate, track, and respond to any need. A lot of the backend configuration and resource presentation comes from central IT.
What can go wrong?
With all of this complexity, there is also a lot that can go wrong. A lack of visibility into one or more of the environments that provide compute power to the organization leads to a poor understanding of what is being used and how much of it is providing value. Couple that with tracking multiple users and what they are doing in each environment, and IT departments can end up with a storm that catches them by surprise and never ends.
CloudBolt Helps to Weather the Storm
With the right central platform, your IT department can connect to any 3rd-party resource, gather the inventory, and provision new resources to be used by anyone inside the organization. With this capability in place, you can:
- See IT resource utilization patterns and anticipate potential problems, prioritizing the most important issues first.
- Serve as first responders when IT resources become unavailable with the ability to create replacement resources, helping to triage a situation before the issue becomes a costly problem.
- Identify when resource usage is not cost effective and deploy alternatives, either from one of the many other cloud resources available or by bringing back a more stable workload to the data center.
No matter how you approach this challenge, having a holistic view of your hybrid cloud environment, coupled with the ability to know costs and which functions within your organization are consuming resources, is going to keep you ahead of the game.
Just as a runaway train can cause havoc, so can uncontrolled IT spending. Over the last few years, IT departments have been sidetracked by DevOps teams and other IT initiatives, while many convinced themselves and the leaders of their organizations to innovate on the edges of traditional IT.
If you wanted to experiment or get things done faster, you swiped a credit card and rapidly took advantage of IT services that were not offered by the in-house department. In many cases, you were lured in by a free trial that gave powerful resources to almost anyone. That was a real eye-opener: why did it take so long for traditional IT to give you the same resources?
Over time, some business units essentially ran their own IT services from cloud providers and didn’t even bother with central IT. This happened to many enterprises—and will continue to be the case—until the bills start getting noticed by leaders in C-level suites of the enterprise.
When the Chief Financial Officer (CFO) of an organization drills down on how the expenses are aligned to revenue, it’s no longer business as usual.
Runaway IT
A lot of resources are being provisioned in clouds or in various places within the enterprise without being accounted for, making it difficult to rein in unnecessary spending. It's very easy to end up with resources that are dormant or running at a much higher capacity than necessary. At this fast pace, some teams provision things that no one else in the enterprise knows about, or even knows how to use. As people move on from the organization, these resources go unattended.
Most enterprise IT leaders know how difficult it is to understand their spending without the proper enterprise-wide visibility and control of resources. There are so many departments and so many different ways to get things done digitally. The derogatory term “shadow IT” is now obsolete. In fact, one of our industry analysts refers to this distributed IT as just “shadowy” and not with the same disdain that used to be the norm.
I once worked at a company maintaining two collaboration tools for file sharing. One was a trickle-up enterprise account that began when we started using Dropbox for business instead of just for personal use. A department lead had purchased the enterprise version of Dropbox without central IT approval. At the same time, our central IT was rolling out a massive SharePoint initiative to help with collaboration. Both existed for a while. Since it was a large organization, I'm sure there are terabytes of files that no one knows about, still being counted in storage fees.
Last week, I heard a good tip that translates well to getting control of a runaway train of IT resources. Put a Post-it note on the employee fridge stating, "Please claim your stuff and date it, or it will be thrown out by Friday." It sounds simple, but it works for the lunch room. Something similar can definitely work for IT too.
Visibility and Control for the Hybrid Cloud
With so many cloud providers and so many resources sprawled out in so many places and user accounts, it’s a huge advantage to have central visibility of what is being consumed as well as by whom. Resource accounts might still be active when users associated with these accounts are no longer part of the company.
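As a simple illustration of that last point, an inventory audit could cross-reference resource owners against the current employee directory to surface orphaned resources. The data shapes here are assumptions for the sketch:

```python
# Hypothetical audit: find resources whose owner account no longer
# belongs to anyone at the company. Inventory and user data would come
# from your cloud providers and identity system; the shapes are assumed.

def orphaned_resources(inventory, active_users):
    """inventory: list of dicts with 'id' and 'owner'; active_users: set of usernames."""
    return [r["id"] for r in inventory if r["owner"] not in active_users]

inventory = [
    {"id": "vm-1001", "owner": "alice"},
    {"id": "vm-1002", "owner": "bob"},    # bob has left the company
    {"id": "db-2001", "owner": "carol"},
]
print(orphaned_resources(inventory, {"alice", "carol"}))  # ['vm-1002']
```

Running a report like this on a schedule turns stale accounts from an invisible cost into a reviewable list.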
To get a deeper look into how CloudBolt can help you reduce cost and manage VM sprawl, check out our Product Overview.
Slow cars can annoy many of us. You see one chugging along in the fast lane and you have to slow down or pass it at your own inconvenience. Alternatively, you might be the slow one angering the drivers behind you. Whatever the reason, it's inconvenient.
The same is true in IT – sometimes you have to go slower than expected because of a lack of resources, and other times it takes longer than expected to get the resources you need. Whether it's you or others around you slowing down the work, it can be just as frustrating as being stuck behind a slow car. Productivity is stifled while you and others are essentially driving slow cars in what should be a fast race.
Problematic IT Provisioning
There are many factors that can slow down any IT environment, but here’s a short list of some potentially problematic conditions:
- Requesting and provisioning resources requires multiple steps – the process can include a back and forth between a ticketing system and IT operations staff and/or other departments.
- There’s no consistency in provisioned IT resources – each time IT resources are requested, they are provided and provisioned as a custom order rather than from a set of standard offerings that are easier to deliver consistently.
- Getting resources requires special training or technical advice – as new technologies emerge, understanding how to use them requires training or the expertise of others within the organization.
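To make the second point concrete, here is a minimal sketch of what a standard-offering catalog could look like, where requests are validated against a small set of predefined sizes instead of being built as custom orders. The names and sizes are hypothetical:

```python
# Hypothetical self-service catalog: a few standard VM sizes that can be
# delivered consistently, instead of one-off custom builds.

CATALOG = {
    "small":  {"cpus": 2, "memory_gb": 4},
    "medium": {"cpus": 4, "memory_gb": 16},
    "large":  {"cpus": 8, "memory_gb": 32},
}

def request_vm(size):
    """Validate a request against the catalog and return the build spec."""
    if size not in CATALOG:
        raise ValueError(f"unknown size {size!r}; choose from {sorted(CATALOG)}")
    return {"size": size, **CATALOG[size]}

print(request_vm("medium"))  # {'size': 'medium', 'cpus': 4, 'memory_gb': 16}
```

The point of the catalog is not the specific sizes but the constraint: every request resolves to a known, repeatable configuration, so provisioning stops being a custom negotiation.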
The desire to innovate suffers because getting resources can be such an ordeal, leading to delays in whatever work those IT resources are meant to facilitate.
When some users don’t get the resources they want in time, they take risks to get the IT resources they need on-demand from public cloud providers – just one credit card swipe away.
If a well-meaning initiative provides digital innovation without the help of central IT, the hope is that leadership won’t mind.
What’s at risk?
Having all of these disjointed and slow resources for end users within an enterprise will eventually catch up with central IT. Both budget and efficiency will be challenged all the way up to the top. In the meantime, when users do get resources, they might not have the compute power they wanted, and the turnaround time to fix the issue can make everything even worse.
What happens if the person in charge of the technical aspects of garnering resources moves on to another company? Nothing good.
Self-Service, Centralized by IT
In a previous blog, Balancing Risk and Reward, we discussed a continuum of self-service IT that most enterprises have, ranging from service tickets to a fully managed hybrid cloud platform like CloudBolt.
Many IT leaders are now looking to centralized platforms of self-service provisioning that empower users with easy-to-get IT resources without a lot of back and forth among departments. This makes the central IT department staff the heroes they have always wanted to be.
A centralized platform can help alleviate some of the slow-car scenarios with the following outcomes:
- Improved end-user productivity by providing a self-service IT portal instead of requesting and waiting for resources
- Controlled user access to IT resources that are managed on the backend by IT administrators
- Eliminated need for specialized training and/or technology expertise for end users
- Facilitated innovation with readily available resources for development and testing
An enterprise hybrid cloud platform with pre-built connections to the most common on-premises, private, and public cloud providers, as well as extensibility to any resources, can help.