We get the question all the time… “Can CloudBolt move my VMs from my private cloud to Amazon… or from Amazon to Azure?”

The answer is the same. “Sure, but how much time do you have?”

Cloud-based infrastructures are revolutionizing how enterprises design and deploy workloads, enabling customers to better manage costs across a variety of needs. Often-requested capabilities like VM migration (or, as VMware calls it, vMotion) are taken for granted, and customers are increasingly interested in extending these once on-prem-only features to move workloads from one cloud to another.


At face value, this seems like a great idea. Why wouldn’t I want to be able to migrate my existing VMs from my on-prem virtualization environment directly to a public cloud provider?

For starters, it’ll take a really long time.

VM Migration to the Cloud

Migration is really the relocation of a VM and its data from one environment to another (relocation is probably the better term). Migrating an existing VM to the cloud requires:

  1. Copying every block of storage associated with the VM.
  2. Updating the VM’s network configuration to work in the new environment.
  3. Lots and lots of time and bandwidth (see #1).

Let’s assume for a minute that you’re only interested in migrating a single application from your local VMware infrastructure to Amazon. That application is made up of 5 VMs, each with a 50 GiB virtual hard disk. That’s 250 GiB of data that needs to move over the wire. (Even if you assume some compression, you’ll see below that we’re still dealing with large numbers.)

At this point, there is only one question that matters: how fast is your network connection?

Transfer size (GB)   Upload speed (Mb/s)   Upload speed (MB/s)   Transfer time (s)   Transfer time (hours)   Transfer time (days)
250                  1.5                   0.188                 1,333,333           370                     15.4
250                  10                    1.25                  200,000             55.6                    2.31
250                  100                   12.5                  20,000              5.56                    0.231
250                  250                   31.25                 8,000               2.22                    0.0926
250                  500                   62.5                  4,000               1.11                    0.0463
250                  1,000                 125                   2,000               0.556                   0.0231
250                  10,000                1,250                 200                 0.0556                  0.00231

The takeaway from this chart is clear: the upload speed of your Internet connection is the only thing that matters. And don’t forget that cloud providers frequently charge for that bandwidth, so the cost of the transfer grows with every gigabyte you upload.
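The chart’s figures are easy to verify. As a minimal sketch (assuming a fully saturated link, 1 GB = 8,000 megabits, and no protocol overhead):

```python
def transfer_time_hours(size_gb: float, upload_mbps: float) -> float:
    """Hours needed to push size_gb gigabytes over an upload_mbps link.

    Assumes the link is fully saturated and ignores protocol overhead,
    so real-world transfers will take even longer.
    """
    megabits = size_gb * 8000          # 1 GB = 8,000 megabits
    seconds = megabits / upload_mbps
    return seconds / 3600

# 250 GB over a handful of common uplink speeds
for mbps in (1.5, 10, 100, 1000):
    print(f"{mbps:>6} Mb/s -> {transfer_time_hours(250, mbps):8.2f} hours")
```

At 1.5 Mb/s that works out to roughly 370 hours, or more than two weeks, matching the first row of the chart.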

Have more data to migrate? Then you need more bandwidth, more time, or both.

If you want to do this for your entire environment, note that you’re effectively performing SAN mirroring. The same rules of physics apply, and while you can load a mirrored rack of storage on a truck and ship it to your DR site, most public cloud providers won’t line up to accept your gear.

The Atomic Unit of IT Is the Workload, Not the VM

When customers ask me about migrating VMs, they typically want to run the same workload in a different environment, whether for redundancy, best fit, or some other reason. If it’s the workload that matters, why migrate the entire VM?

Componentizing the workload can take work, but automating the application deployment with tools such as Puppet, Chef, or Ansible will make it much easier to deploy that workload into a supported environment.

Redeployment, Not Relocation

If migrating whole stacks of VMs to the cloud isn’t practical, how does an IT organization more effectively redeploy workloads to alternate environments?

Workload redeployment requires a few things:

  1. Mutually required data (databases, etc.) must be available;
  2. A configuration management framework must be available in each desired location; or
  3. Pre-built templates with all required components pre-installed must exist.

I won’t spend the time here talking through all of these points in detail, but I will say that any of these options requires effort. Whether you’re working to componentize and automate application deployment and management in a CM/automation tool, or re-creating your base OS image and requirements in various cloud providers, you’re going to spend some time getting the pieces in place.

A possible alternative to VM migration is to deploy new workloads in two places simultaneously, and then ensure that needed data and resources are mirrored between the two environments. In other words: double your costs, and incur the same data-syncing challenges. This approach likely makes sense only for the most critical production workloads, not for standard development work.

Ultimately, Know Thy Requirements

It seems as though the concept of cloud has caused some people to forget physics. Although migrating/relocating existing VMs to a public cloud provider is an interesting concept, the bandwidth required to effectively accomplish this is either very expensive, or simply not available. Furthermore, VM migration to a public cloud assumes that the performance and availability characteristics of the public cloud provider are the same or better than your on-prem environment… which is a pretty big assumption.

While there are some interesting technologies that are helping with this overall migration event, customers still need to do the legwork to properly configure target environments and networks, not to mention determine which workloads can be effectively moved in the first place. Technology alone cannot replace sound judgment and decision making, and the cloud alone will not solve all of your enterprise IT problems.

And don’t forget that IT governance in the public cloud is much more important than it is in your on-prem environment, because your end users are unlikely to generate large cost overruns when deploying locally. If you don’t control their access to the public cloud, you will eventually get a very rude awakening when you get that next bill.

Want Some Help?

So how does CloudBolt actually satisfy this need? We focus on redeployment and governance. An application, as provided by a CM tool, can be deployed to any capable target environment, and CloudBolt lets you define multi-tiered application stacks that deploy the same way. Your users and groups are granted the ability to provision specific workloads and applications into the appropriate target environments and networks, and strong lifecycle management and governance ensures that your next public cloud provider bill won’t break the bank.

Want to try it now? Let us set you up a no-strings-attached demo environment today.

Schedule a demo or try it yourself

CloudBolt’s flagship cloud management platform helps seamlessly manage private cloud environments offered by Blackboard’s Cloud Services Team.

Reston, Virginia – September 23, 2014. CloudBolt has been handpicked by Blackboard’s Cloud Services team to help manage the company’s large enterprise private cloud environment.

“We’re excited to be working with CloudBolt,” says Blackboard’s Director of Cloud Services Infrastructure, Al DeGregorio. “CloudBolt’s solution is the best-of-breed of the various alternatives we investigated.  Given our large and complex environment, we determined that CloudBolt provided us with the most powerful and compelling solution that also represented the lowest TCO of any of the products we evaluated,” said DeGregorio. He continued: “Once we determined CloudBolt was the solution for us, we had thousands of servers under management in just a few days.”

Blackboard’s Cloud Services Infrastructure team provides Infrastructure-as-a-Service offerings to the company’s product development teams and corporate IT, as well as hosting services to customers that prefer hosted access to the company’s market-leading education technology solutions.

“Blackboard’s Cloud Services environment is large and complex,” says CloudBolt’s EVP of Marketing, Justin Nemmers. “Despite the complexities, our initial proof-of-concept was complete in an astounding two and a half days.” Nemmers continued: “It’s my understanding that in addition to the evaluated competition’s higher total cost of ownership, CloudBolt was the only tool that effectively integrated out-of-the-box with Blackboard’s existing technology environment, presenting their existing resources as private cloud to their broad and demanding customer base. CloudBolt also provides them the flexibility to make future architecture decisions without disrupting their end users,” concluded Nemmers.

CloudBolt has been rapidly expanding its enterprise footprint with customers across a variety of markets, including the US Government, managed service providers, public universities, and global retail giants.  Adding to their string of successes, CloudBolt was also selected as a Gartner Cool Vendor in Cloud Management earlier this year.

About CloudBolt Software

CloudBolt Software transforms how IT interacts with the business. CloudBolt is an on-premises unified IT manager and self-service IT portal that leverages existing IT resources and technologies to create a private and hybrid/heterogeneous cloud environment in minutes. With CloudBolt, your IT organization can be more agile, automating the request, provisioning, and ongoing management of systems and applications from an intuitive user interface or through a common API, and significantly improving service to lines of business. Please visit www.cloudboltsoftware.com for more information and resources about CloudBolt.

Disclaimer: Gartner does not endorse any vendor; product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner, Cool Vendors in Cloud Management, 2014, Donna Scott, Milind Govekar, Gregor Petri, Bob Gill, April 17, 2014

###


Introducing the latest release of CloudBolt C2: v4.5

Connector Updates

With C2 v4.5, we’ve added two new connectors that further expand the breadth of technologies IT organizations can manage from a single pane of glass.

Google Compute Engine support gives administrators the ability to seamlessly offer end users controlled access to yet another public cloud provider. This includes the ability to install and manage applications from a supported configuration manager, as well as the ability to include GCE instances in C2 Service Catalog service templates.

C2 v4.5 includes support for Google Compute Engine in the Google Cloud Platform.

We’ve also totally re-written and re-based our OpenStack connector. In this update, we’ve focused on compatibility, and we’re now able to support Icehouse, Havana, and Grizzly from the major OpenStack providers such as Mirantis. Of course, C2 can include OpenStack-backed resources when provisioning applications, running external flows, and accounting for licenses, just to name a few. C2 is already the best dashboard for OpenStack, and it’s getting even better with each release. No Horizon development needed!


We’ve also made some additional updates to our vCenter connector, including improved error handling when VMware Tools are out of date, and support for longer Windows hostnames. The Windows disk-extension messages are also clearer and more straightforward.

Amazon Web Services has also received some developer love. C2 now synchronizes both the public and private IP addresses for each AWS EC2 instance.
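The connector’s internals aren’t shown here, but as a rough sketch of what that sync involves, here’s how one might pull both addresses out of an EC2 `DescribeInstances`-style response (the `PublicIpAddress` and `PrivateIpAddress` field names come from the AWS EC2 API; the helper itself is hypothetical):

```python
def instance_addresses(reservations):
    """Map instance id -> (public_ip, private_ip) from the
    'Reservations' list of a DescribeInstances response.
    Either address may be absent (e.g. no public IP assigned)."""
    addrs = {}
    for reservation in reservations:
        for inst in reservation.get("Instances", []):
            addrs[inst["InstanceId"]] = (
                inst.get("PublicIpAddress"),   # None if not assigned
                inst.get("PrivateIpAddress"),
            )
    return addrs

# Trimmed-down sample response body
sample = [{"Instances": [
    {"InstanceId": "i-0abc", "PublicIpAddress": "54.1.2.3",
     "PrivateIpAddress": "10.0.0.5"},
    {"InstanceId": "i-0def", "PrivateIpAddress": "10.0.0.6"},
]}]
print(instance_addresses(sample))
```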

Configuration Management

We worked closely with the engineering team at Puppet, and now have a unique capability: C2 can now discover and import classes from a Puppet server.
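How C2 performs that discovery isn’t described here. Purely as an illustration of the idea, a naive approach might scan manifest source for class headers (a real integration would query the Puppet master itself):

```python
import re

# Matches simple `class name { ... }` / `class name inherits ... {` headers
CLASS_RE = re.compile(r"^\s*class\s+([a-z0-9_:]+)", re.MULTILINE)

def discover_classes(manifest_text):
    """Naively list Puppet class names declared in manifest source.
    A toy sketch: it misses parameterized edge cases and can
    false-match the keyword inside strings or comments."""
    return CLASS_RE.findall(manifest_text)

manifest = """
class apache {
  package { 'httpd': ensure => installed }
}
class apache::ssl inherits apache {
}
"""
print(discover_classes(manifest))  # ['apache', 'apache::ssl']
```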

Chef integration is even better: C2 now enables Chef bootstrapping on Windows and Ubuntu Linux systems.

User Interface Updates

Updates to the C2 UI are perhaps more subtle, but focused on helping users and administrators more effectively manage large numbers of applications and servers. We’ve integrated simple indicators describing the total number of selected items in each data table, making it much easier to manage large environments.

Did you know that you can use C2 to open network-less console connections on C2-managed servers? We’ve made this feature faster and more reliable in C2 v4.5.

Upgrading

Upgrading is just like any other C2 feature: fast, easy, and predictable, and upgrading to C2 v4.5 is even faster and easier than before.

Sounds Great, I Want It!

CloudBolt C2 has been recognized by Gartner for our industry-leading time to value. We effectively eliminate the barrier to entry for enterprise Cloud Management. C2 v4.5 is available today. Request a download, and you’ll be up and running in your own environment in no time.

(Originally posted in the In-Q-Tel Quarterly)

The cloud-enabled enterprise fundamentally changes how personnel interact with IT. Users are more effective and efficient when they are granted on-demand access to resources, but these changes also alter the technical skill sets that IT organizations need to effectively support, maintain, and advance their offerings to end users. These changes are not always immediately obvious. Automation may be the linchpin of cloud computing, but the IT staff’s ability to effectively implement and manage a cloud-enabled enterprise is critical to the IT organization’s success and relevance. Compounding the difficulty, existing legacy IT systems rarely just “go away” overnight, and many workloads, such as large databases, either don’t cleanly map to cloud-provided infrastructure or would be cost-prohibitive when deployed there. The co-existence of legacy infrastructure, traditional IT operations, and cloud-enabled ecosystems creates a complicated dance that seasoned IT leadership and technical implementers alike must learn to navigate effectively.


Over the past five or so years, as enterprise IT organizations have considered adopting cloud technologies, I’ve seen dozens of them fall into the trap of believing that increased automation will let them reduce staff. In my experience, however, staff reductions rarely happen. IT organizations that approach cloud-enabled IT as a mechanism to reduce staffing are often surprised to find that these changes do not actually reduce complexity in the environment, but instead merely shift complexity from the operations team to the applications team. For instance, deploying an existing application to Amazon Web Services (AWS) will not make it highly available. Instead of using on-premises software tools with reliable access and high-speed, low-latency network and storage interconnects, administrators must now master concepts such as regions, availability zones, and the use of elastic load balancers. Applications often need to be modified or completely redesigned to increase fault tolerance. The result is that deployments are still relatively complex, but they often require different skill sets than a traditional IT administrator is likely to have.

A dramatic shift in complexity is one of the reasons why retraining is important for existing IT organizations. Governance is another common area that sees significant capability gains from cloud-enabled infrastructure: automation ensures that every provisioned resource completes each lifecycle management step, every time. That level of assurance will be new to both IT operations and end users. I’ve also frequently seen components of the IT governance mechanism break down entirely due to end-user revolt, largely because particularly onerous processes that administrators used to skip during manual provisioning are suddenly enforced.

Cloud-based compute resources have dramatically changed the computing landscape in nearly every organization I’ve dealt with. For example, one IT director worked to automate his entire provisioning and lifecycle management process, which freed up close to three FTEs (full-time equivalents) worth of team time. Automating their processes and offering end users on-demand access to resources helped their internal customers, but it also generated substantial time savings for the team. The IT director also recognized what many miss: cloud offerings may shift complexity in the stack, but ultimately all of those fancy cloud instances are really just Windows and Linux systems, and they still require traditional care and feeding from IT. Tasks such as Active Directory administration, patch management, vulnerability assessment, and configuration management don’t go away.
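That FTE figure is easy to sanity-check with back-of-the-envelope numbers (all inputs below are illustrative assumptions, not data from the example):

```python
# Rough estimate of staff time reclaimed by automating provisioning.
manual_hours_per_vm = 4        # intake, build, configure, hand off
vms_per_year = 1500            # provisioning requests per year
fte_hours_per_year = 2000      # ~50 weeks x 40 hours

hours_saved = manual_hours_per_vm * vms_per_year
ftes_freed = hours_saved / fte_hours_per_year
print(f"{ftes_freed:.1f} FTEs freed")  # 3.0 FTEs freed
```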

Another common lesson learned: with shifting complexity comes dependence on new skills in the availability and monitoring realms. Lacking access to physical hardware, storage, and network infrastructure does not remove them as potential problem areas. As a result, I have seen organizations realize too slowly that applications need to be more tolerant of failures than they were under previous operating models. Making applications more resilient requires different skills that traditional IT teams need to learn and ingrain in order to grow into a cloud-enabled world. Additionally, when developers and quality assurance teams have real-time access to needed resources, they tend to speed up their releases, placing increased demand on the parts of the workforce responsible for tasks such as release engineering, release planning, and possibly even marketing.

I’ve encountered few customers that have environments well suited for a complete migration to the public cloud. While a modern-day IT organization needs to prepare for the inevitability of running workloads in the public or community clouds, they must also prepare for the continued offering of private cloud services and legacy infrastructures. Analyst firms such as Gartner suggest that the appropriate path forward for IT orgs is to become a broker/provider of services. The subtext of that statement is that IT teams must remain in full control over who can deploy what, and where. IT organizations must control which apps can be deployed to a cloud, and which clouds are acceptable based on security, cost, capability, etc. Future IT teams should be presenting users with a choice of applications or services based on that user’s role, and the IT team gets to worry about the most appropriate deployment environment. When this future materializes, these are all new skills IT departments will need to master. Today, analyzing cloud deployment choices and recommending the approaches that should be made available are areas that typically fall outside the skillsets of many IT administrators. Unfortunately, these are precisely the skills that are needed, but I’ve witnessed many IT organizations overlook them.

The Way Ahead

While IT staff can save significant time when the entirety of provisioning and lifecycle management is automated, there are still many needs elsewhere in the IT organization. The successful approaches I’ve seen all involve refocusing staff on value-added tasks. When IT administrators can spend time on interesting problems rather than near-constant, routine provisioning and maintenance, they are more engaged and fulfilled, and they frequently produce innovative solutions that save organizations money. Changing skillsets and requirements will also likely have an effect on existing contracts for organizations with heavily outsourced staffing.

Governance is another important area where changes in the status quo can lead to additional benefits. For example, manually provisioned and managed environments that also have manual centralized governance processes and procedures typically have significant variance in what is actually deployed vs. what the process says should have been deployed: i.e. processes are rarely followed as closely as necessary. No matter how good the management systems, without automation and assignment, problems like Virtual Machine “sprawl” quickly become rampant. I’ve also seen scenarios where end users revolt because they were finally subjected to policies that had been in place for a while, but were routinely skipped by administrators manually provisioning systems. Implementing automation means being prepared to retool some of the more onerous policies as needed, but even with retooled processes, automated provisioning and management provides for a higher assurance level than is possible with manual processes.

Automation in IT environments is nothing new. However, today’s IT organizations can no longer rely solely on the traditional operational way of doing things. Effective leadership of IT staff is critical to the organization’s ability to successfully transition from a traditional provider of in-house IT to an agile broker/provider of resources and services. Understanding that the cloud impacts much more than just technology is a great place to start. This doesn’t mean that organizations currently implementing cloud-enabling solutions need to jam on the brakes; they just need to realize that the cloud is not a magic cure-all for staffing issues. Organizations need to evaluate the potential impact of shifting complexity to other teams, and generally plan for disruption. Just as you would with any large-scale enterprise technology implementation, ensuring that IT staff has the skills necessary to implement and maintain the desired end state will go a long way toward ensuring your success.

Ready to learn more? See how CloudBolt can help you.

Why CloudBolt is Important When Adopting OpenStack

“If I’m moving to OpenStack, why do I need a Cloud Manager like CloudBolt C2?”

As organizations look to extend their footprints beyond the traditional virtualization infrastructure providers (read: VMware), we hear questions like this both more frequently and with more fervor. It’s a good question. At face value, many people see projects and products like OpenStack and assume that they compete directly with CloudBolt C2. In fact, when used together, the two products provide distinct benefits that are absolutely game changing.


Despite the influx of added code and interest in Horizon, OpenStack’s native dashboard still represents a significant and complex barrier to full OpenStack adoption in the enterprise. In my conversations with many large organizations that are implementing OpenStack, it’s become apparent that nearly every single one is either writing its own non-Horizon front-end interface on top of OpenStack, or purchasing a commercially available front-end (i.e., CloudBolt C2). Organizations that develop their own UIs are effectively signing up to maintain that code in-house for the life of their OpenStack environment.

Why C2?

We can look deeper into one example: updating a UI option on an instance order form. In Horizon, that requires advanced knowledge of Django and Python, and it creates upgrade problems down the road. In C2, updating the order process takes a non-developer just a few clicks. Add C2’s built-in rates, quotas, ongoing server and application management, and software license management, and the value-add in the build-vs.-buy decision becomes quite real.

Beyond the configurability of the interface itself, there is the question of choice and existing complexity. Chances are your IT environment contains a significant number of technologies, some of which will integrate well with OpenStack and others that will not. It also apparently matters which vendor’s OpenStack you decide to purchase, given Red Hat’s ominous announcement at the OpenStack Summit about its impending support policy changes.

Despite this concerning policy shift, OpenStack vendors will continue expanding support for proprietary tools and platforms, but they are unlikely to solve the equation for every technology present in a typical IT organization’s legacy environment. In the end, OpenStack, from any vendor, will force a choice: roll your own capability, or replace what you’ve got with something more OpenStack-friendly. Using C2 can ease this transition by managing everything in the environment: OpenStack, legacy systems, public cloud providers, configuration management systems, and more. End users will not know where their servers and applications are actually being deployed, and IT again owns the decision of the best underlying environment for the workload.

Given these points, the difficulty of implementing and supporting your existing infrastructure and environments means that the only realistic scenario when adopting OpenStack is to run two environments in parallel: your existing environment, which continues to use existing integrations and technologies, and the new OpenStack-based one, which will largely be a re-implementation and re-basing of both technology and process. The IT organization can then begin the task of migrating workloads from the legacy environment to OpenStack.

When run alongside existing IT, new environments absolutely benefit from unified visualization, reporting, quotas, access, and management. This is another reason why C2 is still important in enterprises moving to OpenStack. Few organizations that invest in OpenStack immediately replace their existing technology. Their environments are a mix of legacy and modern, and they need ways to effectively manage both stacks. Rapidly growing businesses also frequently need to ingest infrastructure and technology from acquired companies.

OpenStack is gaining significant momentum in IT, and for good reason. IT organizations looking for ways to further commoditize their technology stacks see OpenStack as a great way to build and maintain a standards-based private cloud environment, and they’re largely right. C2 is a critical component in easing the adoption of not just OpenStack, but other disruptive technologies as well.

Ready to get started? 
Schedule a demo