History of Technology: From Mainframe to Hybrid Cloud

For Tech’s Future, the Only Constant is Change

Computing infrastructure has come a long way in the last 50 years, and the rate of change continues to rise. To inform our view of where infrastructure and computing platforms are headed, we need to look back at the past and how far we’ve come…

1960s – Mainframes & Timesharing

During the mainframe era, companies had only a few large computers, each occupying an entire room. Using them usually required physical access (though logging in over phone lines with primitive remote terminals began to emerge in the ‘60s). Maintaining these machines required several on-site operators, and automating that maintenance was not widely considered.

1970s – Advent of Personal Computers

During the ‘70s, computers started showing up on people’s desks, albeit in a limited manner, instead of filling entire rooms. Administration of these ‘desktops’ was still done locally.

1980s – Networks Arise

The ’80s saw growing, dare I say widespread, connectivity of computers: modems appeared everywhere, DNS was created in ’83, Usenet spread, and the World Wide Web (WWW) was conceived in ’89. Still, the servers that large organizations owned and operated remained relatively few in number, and administration of those servers was done on a one-off basis.

1990s – The Web and the Proliferation of Physical Servers

The ’90s brought drastic change to computing infrastructure. The first popular web browser (Mosaic) was released in ’93, and suddenly far more servers were needed than ever before. Companies quickly went from having tens or hundreds of servers to having tens of thousands.

This shift required a new approach to server management. No longer could you hire one system administrator to manage each server (or a handful of them); instead, each admin needed to manage hundreds, an insurmountable task without automation. CFEngine, the first configuration management tool, was released in 1993, but the demand for automation greatly outpaced the available solutions.

2000s – Virtualization

In the early ‘00s, the maturation of Linux catalyzed a shift from traditional, proprietary Unix systems (such as Solaris, HP-UX, and IBM AIX, which ran on proprietary hardware) to Red Hat and other Linux variants running on commodity hardware. This was fueled by, and in turn further fueled, the expansion in the number of servers that needed to be managed.

The early ‘00s also saw the emergence of a new class of Data Center Automation (DCA) products, including Opsware and BladeLogic; Sun and HP had competing offerings as well.

Virtualization gained acceptance in the mid-’00s, first in dev/test labs and then increasingly in production environments. It let companies slow the growth of their physical hardware while still adding virtual servers, each with its own running OS. The result was greater operational efficiency, but management nightmares abounded as it became harder to track, patch, upgrade, and secure all of the operating systems running in a datacenter.

More mature configuration management tools rose from this chaos, notably Puppet and Chef (founded in 2005 and 2008, respectively), and the term DevOps was coined in 2007.

2010s – Public Cloud

While AWS was released (in non-beta form) in 2008, it wasn’t widely adopted as a platform for enterprise computing until the 2010s. Suddenly, all of the frustrations of dealing with one’s own data centers could be sidestepped with a credit card. No more waiting for the IT team to create your VM, no more dealing with sometimes-obstructionist networking, database, or security teams…

Throughout the 2010s, companies shifted back and forth between bearishness and bullishness on the public cloud (often depending on how recent and shocking their last bill was). However, serious enterprise IT shops are realizing that hybrid cloud is truly the best solution and the end goal. Hybrid cloud enables them to use a mix of public clouds and their own datacenters, choosing the best environment for each workload. It also allows IT admins to use the public cloud when demand is above average and scale back down when demand subsides, so they do not wind up with a gigantic bill.

The Advent of a Hybrid Cloud Management Platform

The first pre-release of what is now CloudBolt was created in 2010. The gap that my co-founder Auggy and I saw was that, in larger companies, the interface to the IT organization was broken. Developers (and other folks who needed IT resources) would submit a ticket to IT and then wait weeks to get what they needed (often a VM, but sometimes a physical server, a network change, storage allocation, etc.). This problem was exacerbated in the ‘90s and ‘00s by the rising demand for access to servers and VMs, and it was made highly contentious by the advent of the public cloud (“If I can get a VM in minutes from AWS, why do I have to wait weeks for my IT people to get me one?!”).

CloudBolt brings the experience of using a company’s private datacenter up to par with the public cloud experience, and it spans the eras from the physical-server age, through VMs and public clouds, to the container-based and serverless age of computing. Most large companies today have a bit of each of these eras represented, and CloudBolt now gives them a unified, easy-to-use way to provision, manage, and control the artifacts of all of these eras consistently, from a common web interface and API.

2020s – Hybrid, Containers, and Serverless

So that brings us to the next decade. The one thing we know for sure in today’s world is that the only constant is change.

Want to learn more? Check out our Solution Overview or watch our videos for more info!