
The data center has gone through a tremendous transformation over the past twenty years. Significant improvements have been made related to core infrastructure, including energy efficiency and cooling improvements, and higher resilience standards have been introduced. 

However, while this was going on, an even more significant change was happening on the IT infrastructure side. Standalone rack servers became more powerful, and emerging virtualization technology allowed for greater workload density to be achieved in one bare metal machine. This also led to the virtualization of traditional storage arrays and networking functions, which allowed them to more easily scale using commodity hardware.

Public cloud hyperscalers, like AWS, Microsoft, and Google, deal with millions of servers and are perfect examples of how the principles of software-defined data centers (SDDC) can help effectively manage and maintain global infrastructures used by thousands of customers. The same principles can also be successfully applied to much smaller data centers to enable efficient management and scaling of your physical IT infrastructure.

Summary of software-defined data center best practices

The table below summarizes the best practices outlined in detail later in this article.

Best practice | Description
Evaluate the current state | The existing state can influence which technical solutions could be most beneficial for the desired outcome.
Use the public cloud as an SDDC accelerator | The benefits of SDDC can be realized by adopting a public cloud provider using a public or hybrid model.
Scale maximization with hyperconverged solutions | Plan and rightsize your HCI infrastructure scale units and adjust according to the workloads they are running.
Automate workload provisioning | Minimize manual effort in provisioning compute, storage, and networking resources while improving consistency.
Invest in operational automation | Leverage the additional insights that a single management interface can provide and reduce operational overhead.

The main pillars of SDDC

The term “software-defined data center” typically incorporates software-defined compute nodes (workload virtualization), software-defined networking (SDN), and software-defined storage (SDS). As the term implies, those are software solutions that can be deployed on more or less generic server hardware; unlike a classical data center, these are not fixed components. We look at each of these components in more detail below.

The components of an SDDC

Server virtualization

Server hardware virtualization, the ability to run more than one operating system simultaneously on a single compute node, is the oldest of the concepts discussed in this chapter and has seen fairly wide enterprise-level adoption for about 15 years.

Back then, the hypervisor’s ability to better utilize hardware and run distinct isolated systems in one server represented a significant business cost reduction. Engineers and sysadmins quickly embraced the idea of treating an entire server like a software workload. They enjoyed the perks of simplified backup and restore practices and much easier maintenance and uptime improvements with hardware clustering and live workload migration.

This is why the hypervisor remains at the heart of an SDDC. Players like VMware (with vSphere), Microsoft (with Hyper-V), and KVM on the open-source front are found in many private data centers as the virtualization foundation, and, most importantly, these products can be installed and run on any commodity x86 server hardware.
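
As a small illustration of treating hypervisor hosts programmatically, here is a minimal sketch using the libvirt Python bindings to inventory the virtual machines on a KVM host. The connection URI and the presence of the libvirt daemon are assumptions about the environment, not something prescribed by any particular SDDC product.

```python
import libvirt  # pip install libvirt-python; requires libvirtd on the host

# Connect to the local KVM/QEMU hypervisor (URI is an assumption; adjust for remote hosts).
conn = libvirt.open("qemu:///system")

# Enumerate all defined domains (VMs) and report their state, vCPUs, and memory.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    info = dom.info()  # (state, maxMem KiB, mem KiB, vCPUs, cpuTime)
    print(f"{dom.name():20s} {state:8s} vCPUs={info[3]} maxMem={info[1] // 1024} MiB")

conn.close()
```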

Over the past five years, the virtualization platform has been extended with containerization, which takes care of packaging and running application-level workloads rather than full operating system images. While container workloads are also a very old concept, they have been reinvented with the emergence of Docker and made enterprise-ready by Kubernetes.
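
To make the contrast with full operating system images concrete, the sketch below uses the Docker SDK for Python to run an application-level workload as a container. The image name, port mapping, and container name are arbitrary choices made for this example.

```python
import docker  # pip install docker; assumes a local Docker-compatible daemon

client = docker.from_env()

# Run an application-level workload as a container rather than a full OS image.
container = client.containers.run(
    "nginx:alpine",              # example image
    detach=True,
    ports={"80/tcp": 8080},      # host port 8080 -> container port 80
    name="sddc-demo-web",
)
print(container.id)

# Clean up the example workload.
container.stop()
container.remove()
```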

Software-defined networking 

Network switches and routers have typically been treated as independent devices and a completely different layer of infrastructure than servers or storage. Management-wise, each device is treated individually, which represents a significant scaling bottleneck.

Layers of a typical SDN stack

In contrast, the software-defined networking (SDN) philosophy relies on separating the network into control and data planes, creating the opportunity for centralized management of the entire network stack, with physical or virtual switching components taking care of just the data forwarding. The configuration changes of these physical devices can then be made instantly within a large infrastructure. APIs are used for communications, and virtualized components like network firewalls and load balancers can be configured and provisioned on demand in a way that is similar to virtual machines.

This software-defined paradigm for networking allows operating on two distinct levels: the underlay network, which is the lowest level of networking that can be considered the core physical network in a traditional data center, and an overlay network, which is built on top and allows flexible routing and network segmentation. The overlay network is what end application owners operate on without introducing wide-impact core network changes.
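
Since every SDN controller (NSX, ACI, and others) exposes its own API, the sketch below targets a hypothetical REST endpoint purely to illustrate the workflow: an overlay segment is requested from the central controller rather than configured switch by switch. The URL, payload fields, and token are assumptions.

```python
import requests

# Hypothetical controller endpoint and token; real controllers have their own paths and schemas.
CONTROLLER = "https://sdn-controller.example.local/api/v1"
TOKEN = "REPLACE_ME"

def create_overlay_segment(name: str, vni: int, gateway_cidr: str) -> dict:
    """Ask the controller to create an overlay segment; the data plane only forwards traffic."""
    resp = requests.post(
        f"{CONTROLLER}/overlay/segments",
        json={"name": name, "vni": vni, "gateway": gateway_cidr},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    segment = create_overlay_segment("app-tier", vni=5001, gateway_cidr="10.20.30.1/24")
    print("Created overlay segment:", segment)
```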

“Suddenly, I can offer an engineer productivity! Where it used to take them roughly 40 hours to build up a system to overlay their tools, I deliver all of that in minutes with CloudBolt.”

Sr. Director, IT Operations, Global Industrial Manufacturer

Software-defined storage

Data is one of the most valuable assets of any organization. For this reason, readily available and fast storage is typically a fairly high priority in the new equipment purchase list.

The simplest form of server storage is direct-attached storage: internal hard or solid-state drives combined into a RAID array to increase availability and/or performance. For more highly available configurations, where the availability of the data should extend beyond the availability of a single server, dedicated highly available storage systems called storage area network (SAN) arrays are typically used. These generally provide storage either over Ethernet via the iSCSI protocol or over dedicated Fibre Channel fabrics, and they are capable of serving highly available, fast pooled storage to multiple servers.

This approach allows meeting typical enterprise requirements for performance and availability. However, SANs can be expensive, purpose-built systems with limited expandability, representing a significant expense line and a potential bottleneck for growing IT infrastructures.

Software-defined storage (SDS) builds upon server virtualization technology and addresses some of the shortcomings of traditional SANs. To achieve high availability, multiple physical storage nodes (servers) pool their storage devices together, creating a virtual resource that spans multiple nodes and can then be presented to servers as backend storage. Unlike RAID arrays in single servers, this storage pool becomes a part of RAIN: a redundant array of independent nodes.

Components of a software-defined storage system

SDS brings significantly more flexibility than SAN arrays. Not only can the storage devices on each server be customized, but the number of servers can also be expanded to increase performance or redundancy. This means that the infrastructure can tolerate the failure of more individual servers without a significant impact on operations.
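
The sketch below is a deliberately simplified model of the RAIN idea: each data block is replicated to several independent nodes so that losing one node does not lose data. The node names, replica count, and placement scheme are assumptions for illustration; real SDS platforms use far more sophisticated placement and consistency logic.

```python
import hashlib

NODES = ["node01", "node02", "node03", "node04"]  # example storage nodes
REPLICAS = 3                                      # copies of each block (assumption)

def place_block(block_id: str, nodes: list[str], replicas: int) -> list[str]:
    """Deterministically spread replicas of a block across distinct nodes."""
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

placement = {f"block-{i}": place_block(f"block-{i}", NODES, REPLICAS) for i in range(6)}

# Simulate losing one node: every block still has surviving replicas elsewhere.
failed = "node02"
for block, owners in placement.items():
    survivors = [n for n in owners if n != failed]
    assert survivors, f"{block} lost all replicas"
    print(f"{block}: stored on {owners}, {len(survivors)} replica(s) survive {failed} failing")
```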

How everything fits together

While the concepts of software-defined servers, networks, and storage make a lot of sense individually, all of them can also be combined. Infrastructure built using commodity servers with virtualized computing, networking, and storage is called hyperconverged infrastructure (HCI). In this infrastructure, each physical server contributes to all three aspects:

  • The server provides compute resources (CPU and RAM) to host the virtualized workloads and form a highly available virtualization cluster.
  • The same server contributes its internal drives to the highly available storage pool, which is then presented back to the same group of nodes.
  • It also contains the virtualized network controller instance that handles communication between the underlay and overlay networks and ensures that the workloads have the intended network configuration.

In such scenarios, a single server becomes a scaling unit, making expansion of infrastructure capacity or performance easy in single-node increments. A good example of such solutions is a VMware-based infrastructure consisting of certified third-party hardware nodes with the three software components deployed: vSphere for compute virtualization, vSAN for storage virtualization, and NSX for software-defined networking.

Completely integrated solutions from a single vendor also exist on both the hardware and software sides. Cisco’s HyperFlex encompasses Cisco’s UCS physical servers together with the HX Data Platform SDS components and the ACI SDN platform, which further expands into the physical Nexus switching infrastructure.

“Developers are overwhelmed by the amount of security configurations that are needed to secure the cloud…they no longer have to be security experts or worry about creating vulnerabilities for the organization.”

SVP Infrastructure & CISO, National Financial Company

Going above and below the stack

Like many other things in IT, HCI infrastructure can be viewed as a set of infrastructure layers in a stack: it builds on top of physical servers and hands a pool of virtualized resources over to workload provisioning. To extend the functionality and provide more automation and alignment, the layers below and above it, physical server provisioning and workload provisioning, can be addressed as well.

While adding two or three new servers to your infrastructure in a quarter does not seem like a demanding task, it can quickly get challenging as the number of servers multiplies. Unless you're using a turnkey solution from your HCI provider with preconfigured nodes, bare metal provisioning platforms like Razor or Cobbler are highly recommended to extend the software-defined model and enable zero-touch provisioning of servers over the network.
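
As an example of what zero-touch provisioning can look like, here is a hedged sketch that registers new nodes via Cobbler's XML-RPC API so they PXE-boot into a predefined profile. The server address, credentials, profile name, and MAC addresses are placeholders, and method names can vary between Cobbler versions.

```python
import xmlrpc.client  # Cobbler exposes an XML-RPC API, typically at /cobbler_api

# Placeholder server and credentials.
server = xmlrpc.client.Server("http://cobbler.example.local/cobbler_api")
token = server.login("cobbler", "REPLACE_ME")

def register_node(name: str, mac: str, profile: str = "hci-node-profile") -> None:
    """Register a bare metal node so it PXE-boots into the given profile."""
    system_id = server.new_system(token)
    server.modify_system(system_id, "name", name, token)
    server.modify_system(system_id, "profile", profile, token)
    server.modify_system(system_id, "modify_interface",
                         {"macaddress-eth0": mac}, token)
    server.save_system(system_id, token)

# Register a batch of new nodes in one pass instead of configuring each by hand.
for idx, mac in enumerate(["52:54:00:aa:bb:01", "52:54:00:aa:bb:02"], start=1):
    register_node(f"hci-node-{idx:02d}", mac)
```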

Above the HCI stack, the organization typically proceeds to automate the provisioning and management of its workloads to deliver a true private cloud experience. While there are multiple ways to do that, a cloud management platform like CloudBolt can significantly accelerate this process.

The main reasons to go SDDC

SDDC brings many benefits:

  • Increased scaling potential and efficiency: Having a single scaling unit for expanding the infrastructure is an effective mechanism for maintaining available capacity and planning for future growth. The ability to scale in increments of a single server prevents organizations from overspending on capacity; this is especially true compared with traditional storage arrays.
  • Vendor lock-in control: SDDC is what it sounds like, putting more emphasis on the software and running the infrastructure on commodity hardware. If one vendor hikes up prices significantly, it’s far easier to start consuming hardware from another.
  • Simplified management: Going beyond individual device control facilitates making changes significantly faster and operating much bigger infrastructures. In addition to that, a higher level of integration among compute, storage, and networking enables better insights into the operational aspects of the infrastructure.
  • Ability to choose from a range of solutions, from turnkey to fully “build-it-yourself”: There are multiple ways to build an HCI infrastructure. Complete solutions can be obtained from a single vendor; good examples of such options include Azure Stack HCI, Nutanix, and Dell VxRail. Even though infrastructure from a single provider has its benefits in terms of supportability, different options from multiple vendors could be used successfully as well. A full HCI infrastructure can also be built using open-source components like Ceph, Open vSwitch, and KVM, albeit with significantly more development and operational effort.

Best practices for designing your software-defined data center

To achieve maximum success in creating your SDDC, follow these industry best practices.

Evaluate the current state

There is a very good chance that the hardware and software vendors already present in any active data center have offerings that could complement the existing infrastructure and bring more SDDC benefits. SDDC is not an “all or nothing” solution and can be adopted in alignment with the existing infrastructure lifecycle.

Maybe there is a management burden that a centralized management solution could address? Perhaps there is a plan to build a new application that would benefit from having a separate scalable infrastructure rather than from expanding an existing solution? Answers to such questions will help you understand which requirements align with SDDC’s strong suits.

Use the public cloud as an SDDC accelerator

Public cloud providers have built their business model on the SDDC paradigm. They operate vast fleets of very similar servers and are able to provide hundreds of distinct services for thousands of customers. Just as described in the SDN chapter, these cloud providers rely on overlay networks where customers are free to build their network topologies without having any impact on the cloud provider’s global connectivity.

While it might not seem obvious, the adoption of a public cloud provider can also be a step toward SDDC. The public cloud brings much of the automation and convenience of SDDC while eliminating a lot of the underlying infrastructure maintenance that companies would typically have to take care of.
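
As a small illustration of how cloud APIs deliver that software-defined experience, the sketch below uses boto3 to provision a virtual machine attached to a software-defined network and backed by software-defined block storage. The AMI ID, subnet ID, key pair, and region are placeholders.

```python
import boto3  # pip install boto3; assumes AWS credentials are already configured

ec2 = boto3.resource("ec2", region_name="eu-west-1")

# One API call provisions compute, attaches it to a VPC subnet (SDN),
# and allocates an EBS root volume (SDS). IDs below are placeholders.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    KeyName="my-keypair",
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "sddc-demo"}],
    }],
)
print("Launched:", instances[0].id)
```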

Major cloud providers also offer part of the public cloud experience in their customers’ data centers. Solutions like Azure Stack Hub or AWS Outposts encompass not only the HCI infrastructure but also the management overlay and public cloud integration options.

“We were surprised at how few vendors offer both comprehensive infrastructure cost management together with automation and even governance capabilities. I wanted a single solution. One vendor to work with.”

Phil Redmond, General Manager of Services, Data#3

Scale with hyperconverged solutions

As mentioned earlier, HCI nodes provide the ability to scale the infrastructure by the smallest increment of just one server. They can also bring dozens of servers online at the same time, if significant future capacity demand is projected.

Apart from adding raw CPU, memory, and storage capacity, scaling out an HCI cluster also has the benefit of increasing the storage performance of already running workloads. If no storage capacity or performance increases are needed, in some cases, nodes without internal storage devices can be added and still participate in the same HCI clusters as compute nodes.
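
When rightsizing scale units, it helps to model how raw node capacity translates into usable capacity once replication and failure tolerance are accounted for. The sketch below is vendor-neutral and makes simplifying assumptions: identical nodes, replica-based protection, one node’s worth of capacity reserved for rebuilds, and a fixed slice of operational headroom.

```python
def usable_capacity_tb(nodes: int,
                       raw_tb_per_node: float,
                       replication_factor: int = 2,
                       failures_to_tolerate: int = 1,
                       headroom: float = 0.25) -> float:
    """Rough usable-capacity estimate for a uniform HCI cluster (simplified model)."""
    surviving_nodes = nodes - failures_to_tolerate    # capacity reserved for rebuilds
    raw = surviving_nodes * raw_tb_per_node
    return raw / replication_factor * (1 - headroom)  # replicas plus free-space headroom

# Example: compare 4-node and 6-node clusters built from 20 TB nodes.
for n in (4, 6):
    print(f"{n} nodes -> ~{usable_capacity_tb(n, 20):.1f} TB usable")
```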

Instead of looking at the infrastructure as a single platform, we can also run multiple smaller distinct clusters with different roles, e.g., hosting completely separate critical software systems. This enables isolated yet highly available and self-sustained infrastructure groups.

Automate workload provisioning

Adding new physical resources to your SDDC is typically very painless. Apart from racking, stacking, and the initial preparations, the new hardware can usually be adopted by the compute, storage, and networking controllers easily, and the new capacity will be ready to be used.

The automation of core infrastructure is typically viewed as the hardest part of an end-to-end software delivery process. With this part taken care of, it only makes sense to automate workload provisioning as well.

While some HCI solutions extend into self-service workload provisioning automation, they typically stop at one or a few basic IaaS components, like virtual machines. Combining these into repeatable infrastructures for your applications requires additional effort, often implemented with custom scripts and declarative code using tools like Ansible or Progress Chef. If your organization is looking to bring the benefits of automation to technical and non-technical users, a cloud management platform offers the right balance. For example, CloudBolt abstracts away the code and executes it with clicks from a user interface, while senior engineers can continue coding and exposing these solutions as self-service items.
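
For teams automating with declarative code, here is a hedged sketch that wraps an Ansible playbook run in a Python function using ansible-runner, so that the same logic could later be exposed as a self-service catalog item. The project directory, playbook name, and variables are illustrative assumptions.

```python
import ansible_runner  # pip install ansible-runner; wraps ansible-playbook execution

def provision_app_stack(env: str, vm_count: int) -> bool:
    """Run a (hypothetical) provisioning playbook with request-specific variables."""
    result = ansible_runner.run(
        private_data_dir="./provisioning",    # project dir with playbooks and inventory
        playbook="provision_app_stack.yml",   # illustrative playbook name
        extravars={"target_env": env, "vm_count": vm_count},
    )
    print(f"status={result.status} rc={result.rc}")
    return result.status == "successful"

if __name__ == "__main__":
    # The same function can sit behind a self-service portal or an API endpoint.
    provision_app_stack(env="dev", vm_count=3)
```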

Invest in operational automation

It is important not to get lost in the capabilities of an SDDC and to remember that the entire infrastructure is still your responsibility. That said, a modern SDDC allows all aspects to be managed more conveniently through well-documented APIs that can be used to observe and extract data and then react either automatically or manually.

These operational responsibilities typically include areas such as capacity management, virtualization, and networking support.

Having a common automated platform that allows you to discover, track, and automate these operational aspects is essential to ensuring that your data center is able to deliver value both now and in the future. The right management platform for your SDDC will be able to provision and configure new workloads using current industry best practices and the tooling you might already be using. It will also bring the overall management experience together, making it unnecessary to jump across multiple management tools for each of these areas.
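
To make the “observe via API, then react” loop concrete, here is a hedged sketch that polls a hypothetical management endpoint and flags clusters approaching their storage capacity. The endpoint, response fields, and threshold are assumptions, since every SDDC stack or management platform exposes its own API.

```python
import requests

# Hypothetical management API; adjust for the platform actually in use.
API = "https://sddc-manager.example.local/api/v1"
TOKEN = "REPLACE_ME"
CAPACITY_THRESHOLD = 0.80  # flag clusters above 80% storage utilization (assumption)

def clusters_needing_attention() -> list[str]:
    """Return names of clusters whose storage utilization crosses the threshold."""
    resp = requests.get(f"{API}/clusters",
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=30)
    resp.raise_for_status()
    return [c["name"] for c in resp.json()
            if c["storage_used_tb"] / c["storage_total_tb"] >= CAPACITY_THRESHOLD]

if __name__ == "__main__":
    for name in clusters_needing_attention():
        # React automatically (trigger a scale-out workflow) or notify an operator.
        print(f"Cluster {name} is above {CAPACITY_THRESHOLD:.0%} storage utilization")
```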

Level Up Your Data Center With CloudBolt

  • Integrate with best of breed tools like Terraform and Ansible
  • Create a self-service UI-based catalog of controlled automation for standard IT requests
  • Improve security posture and regulatory compliance with preset policies

Conclusion

SDDC is a great concept that brings much-needed convenience to the typically slow-moving market of core IT infrastructure. However, while it has the potential to simplify scaling and operational tasks, it is important to understand its underlying principles to make the most of it. Remember that even though software-defined data center infrastructure brings some of the features seen at public cloud providers, it is still the end customer’s full responsibility to operate it end to end.

