GCP Cost Optimization
Cost optimization for a public cloud can be summarized as maximizing cost reduction while causing little to no performance impact on the cloud deployment.
Cost considerations are at the center of any public cloud deployment; they are some of the biggest factors influencing any decision to move services to the cloud. This article aims to help with understanding the cost of deploying infrastructure and services to the cloud and, more importantly, optimizing cost by choosing the best options for your requirements.
Public clouds are engineered to meet the infrastructure requirements of almost all organizations, each of which has its own unique challenges. This flexibility introduces a degree of complexity. Unlike hardware purchased by an organization itself, which can be tailored to meet exact specifications, public clouds have to be flexible enough for almost every use case. This flexibility means there are a plethora of options to choose from when deploying infrastructure.
Google Cloud Platform (GCP) is no different. There are many different options to choose from when deploying resources like virtual machines, storage units, and network interfaces, each with its own cost, which means it’s easy to get confused.
In this article, we look at some of the best practices for optimizing GCP cost. This is the lead article in a series addressing various areas of GCP cost optimization, so please refer to the chapters at the end to get a deeper understanding of related concepts.
Summary of key concepts
The following table summarizes the key concepts covered in this article.
| Concept | Description |
|---|---|
| Billing tools | Use the tools provided in GCP to control costs, set budgets, and use informative billing dashboards. |
| Discounts | Take advantage of discounts by committing to usage levels, powering off VMs, and using preemptible VMs. |
| Instance sizing | Size your instances to get the best cost-to-performance ratio. |
| Autoscaling | Enable an automatic process to scale the machine group up or down to meet utilization demand and ensure that you’re never paying for more than you need. |
| Backup policies | Configure retention policies and apply optimization techniques to reduce backup costs. |
| Storage classes and locations | Choose the correct storage and location type to meet performance and redundancy requirements while reducing cost wherever possible. |
GCP lets you create multiple projects, each with its own infrastructure and configurations. Every project must be linked to a billing account that holds the payment details.
GCP allows multiple projects to be linked to a single billing account to avoid the repetitive entry of billing details. For billing to be considered “enabled” for a given project, the attached billing account must be active. A project that is not yet linked, or is linked to an inactive billing account, can only use free-tier services.
GCP has a detailed, native Billing Dashboard that lets administrators keep track of costs and stay within their operating budgets.
The Billing Dashboard gives administrators access to powerful reports they can run to track their costs and understand their billing. One of the most important report types is the cost table report, which provides an in-depth breakdown of cost for a selected invoice. Administrators can see which service types, SKUs, and projects have incurred the most cost. To find out more about the cost table report, visit this link.
Budgets can also be set from the dashboard, allowing administrators to create budgets for specific time frames such as monthly, quarterly, or yearly. Administrators can define the scope of the budget: whether it applies to all projects, a particular project, or services within a project. A budget can be set at a specific amount or at an amount based on spending in previous calendar periods.
Another important feature of budgets is the ability to set alerts. These alerts can be triggered based on policies such as a certain percentage of the allocated budget being used or spending going over the budget threshold.
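The threshold logic behind budget alerts is simple to reason about. The sketch below is an illustrative model, not the GCP Budgets API; the function name and the default thresholds (50%, 90%, 100%) are assumptions chosen for the example.

```python
# Hypothetical model of budget alerting: an alert fires for every configured
# percentage threshold that current spend has crossed.

def triggered_alerts(budget: float, spend: float, thresholds=(0.5, 0.9, 1.0)):
    """Return the threshold fractions that current spend has reached."""
    return [t for t in thresholds if spend >= budget * t]

# Example: a $1,000 monthly budget with $950 already spent
print(triggered_alerts(1000, 950))  # the 50% and 90% alerts have fired
```

In the real dashboard you attach notification channels (email, Pub/Sub) to these thresholds; the point here is that alerts are cumulative, so a single overspend can trip several at once.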
Administrators can also set up their own customized dashboards using Google Data Studio. This requires billing data to be exported using BigQuery and then using Google Data Studio to visualize costs.
This approach gives you complete control over your cost queries. For example, a dashboard can show cost comparisons between different services from month to month, which is very useful at a glance. The tool can be customized in any way the administrator requires, and it also supports more complex cases, like having several billing accounts or using currencies other than the US dollar.
Labels allow administrators to create a logical grouping of GCP resources. It is a powerful tool that gives administrators insight into resource usage and spending. By grouping resources based on any criteria, administrators can find out the exact cost of all the resources in the group. Please visit this link to learn more about labels, how they can assist in cost optimization, and the many different ways they add value to a GCP environment.
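Once billing data is exported (for example, to BigQuery), aggregating cost by label is a straightforward group-by. The record shape and label keys (`team`, `env`) below are invented for illustration; they are not the billing export schema.

```python
# Illustrative aggregation of per-resource costs by label key.
from collections import defaultdict

billing_rows = [
    {"cost": 12.0, "labels": {"team": "web", "env": "prod"}},
    {"cost": 3.0,  "labels": {"team": "web", "env": "dev"}},
    {"cost": 20.0, "labels": {"team": "data", "env": "prod"}},
]

def cost_by_label(rows, key):
    """Sum cost per value of the given label key; unlabeled rows are bucketed."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["labels"].get(key, "unlabeled")] += row["cost"]
    return dict(totals)

print(cost_by_label(billing_rows, "team"))  # {'web': 15.0, 'data': 20.0}
```

The same query is one `GROUP BY` in BigQuery; the Python version just makes the grouping idea concrete.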
GCP offers three different kinds of discounts when running a VM: sustained use discounts, committed use discounts, and discounts for using preemptible VMs.
Sustained Use Discounts
This discount is applied for sustained usage of a VM over a period of time. It is applied automatically by GCP, with no additional configuration required. You can save up to 30% on a VM that runs for a full month through a sustained use discount. You can learn more about sustained use discounts here.
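The “up to 30%” figure comes from tiered pricing: each additional quarter of the month a VM runs is billed at a lower rate. The tier multipliers below follow the pattern Google has published for some machine families, but treat them as illustrative; actual rates vary by machine type.

```python
# Hedged sketch of tiered sustained-use pricing: quarters of the month are
# billed at decreasing multipliers (values are examples, not a price list).

TIERS = [(0.25, 1.0), (0.25, 0.8), (0.25, 0.6), (0.25, 0.4)]

def sustained_use_cost(base_monthly: float, fraction_of_month: float) -> float:
    """Cost for running a VM for the given fraction of the month."""
    cost, remaining = 0.0, fraction_of_month
    for width, multiplier in TIERS:
        used = min(width, remaining)
        cost += base_monthly * used * multiplier
        remaining -= used
        if remaining <= 0:
            break
    return cost

full_month = sustained_use_cost(100.0, 1.0)
print(full_month)  # ~70: an effective 30% discount for running all month
```

Note the discount is incremental: a VM that runs half the month pays full price for the first quarter and 80% for the second, so partial-month savings are smaller than 30%.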
Committed Use Discounts
This discount, as the name suggests, is offered based on a minimum usage commitment. You can enter into an agreement with GCP for applicable instance types for available committed discounts; the longer the committed term, the greater the discount.
The available committed terms are one year to three years; commitments are available for resources like vCPU, RAM, GPU, local SSD, and sole tenancy. A sole tenant is a physical server that is dedicated to a single client’s VMs; the diagram below shows the difference between sole-tenant and multi-tenant nodes.
Committed discounts are a great way to reduce infrastructure costs once you have a stable usage pattern. You can learn more about committed use discounts here.
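The trade-off is easy to quantify. In this sketch, the discount rates (~37% for one year, ~55% for three years) are assumptions in the ballpark of published general-purpose rates; real rates vary by machine family and region, so check the current price list before committing.

```python
# Rough sketch of the committed-use trade-off with placeholder discount rates.

def committed_cost(on_demand_monthly: float, months: int, discount: float) -> float:
    """Total spend over the term at a flat committed-use discount."""
    return on_demand_monthly * (1 - discount) * months

on_demand = 100.0  # hypothetical monthly on-demand cost for the workload
one_year   = committed_cost(on_demand, 12, 0.37)  # assumed ~37% 1-year rate
three_year = committed_cost(on_demand, 36, 0.55)  # assumed ~55% 3-year rate
print(one_year, three_year)
```

Remember that a commitment is billed whether or not the resources are used, so this math only favors you once the workload’s baseline usage is genuinely stable.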
Preemptible VMs
Preemptible VMs are temporary VMs available for a maximum of 24 hours, after which they are deleted. The system can preempt them at any time by stopping or deleting the VM; this flexibility for the provider is why a discount is offered to customers who can tolerate the interruption. A stopped preemptible VM enters a terminated state but still appears in the GCP interface, so an administrator can access its attached disk to retrieve any required data.
Utilizing preemptible VMs is a great way of saving money on stateless workloads that are time-sensitive, like media transcoding. Preemptible VMs are 60-91% cheaper than standard VMs but are only available if there is capacity in the required zone.
Spot VMs are the newest generation of preemptible VMs. You can still create preemptible VMs using the same pricing model as Spot VMs, but Spot VMs offer additional features that preemptible VMs do not. You can learn more about Spot VMs here.
Powering down VMs
Although not a discount per se, powering down VMs that are not currently being utilized is a great way to save money. Organizations with “9-5” requirements that don’t need a VM available outside working hours can set a schedule for the VMs to power up and down accordingly. Powering down noncritical infrastructure during off hours or over the weekend can drastically reduce costs.
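The savings from a power-down schedule are simple arithmetic. This back-of-the-envelope sketch assumes a 12-hour weekday window and no weekend use; the hourly rate is hypothetical.

```python
# Compare a VM left running 24x7 against one powered on only during
# business hours (12 hours/day, 5 days/week). Rates are illustrative.

HOURS_PER_WEEK = 24 * 7  # 168 hours if left running continuously

def weekly_cost(hourly_rate: float, hours_on: float) -> float:
    return hourly_rate * hours_on

always_on      = weekly_cost(0.10, HOURS_PER_WEEK)  # about $16.80/week
business_hours = weekly_cost(0.10, 12 * 5)          # 60 hours, about $6/week
savings_pct = 100 * (1 - business_hours / always_on)
print(round(savings_pct, 1))  # roughly 64% saved
```

Instance schedules in Compute Engine (or a Cloud Scheduler job) can automate the start/stop so nobody has to remember to do it.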
There are many different strategies you can use to reduce your cloud cost. We’ll discuss some of the most common ones that apply to nearly all deployments.
Proper instance sizing
One of the most important areas to consider when it comes to cost is getting the sizing right. As discussed earlier, GCP has many different size offerings for almost all its services. Choosing the right instance size to balance performance with cost is critical to cost optimization. Pursuing cost saving aggressively by going with smaller instances than required will result in a performance impact that may lead to a loss for the business. At the same time, going with instance sizes much larger than necessary will increase cost with no real benefit in performance.
A good rule of thumb is to aim for a size that offers 5-10% more resources than the use case requires at peak utilization. However, this only works where normal and peak utilization aren’t far apart; otherwise, you will end up with a very expensive instance that is fully used only during a short peak period and underutilized the rest of the time.
GCP offers instances with a feature called CPU bursting that allows an instance to use more CPU than allocated for a short period of time. This feature is especially handy when resource consumption is even throughout the day with an occasional spike in utilization. The spike can be absorbed by a CPU burst, which is available if the instance has built up “tokens” (credits) by running below its baseline CPU utilization. You can find more information on instance sizes in the GCP Instance Types article in this series.
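The credit mechanic can be sketched as a simple accumulator: running below the baseline earns credits, bursting above it spends them. The baseline, cap, and usage numbers below are invented for illustration and do not correspond to any specific GCP machine type.

```python
# Toy model of CPU-burst credits: earn below baseline, spend above it,
# clamped between zero and a cap. All parameters are illustrative.

def simulate_credits(usage, baseline=0.25, start=0.0, cap=30.0):
    """Track credits step by step; usage values are CPU fractions (0.0-1.0)."""
    credits = start
    for u in usage:
        credits += (baseline - u)           # earn below baseline, spend above
        credits = max(0.0, min(credits, cap))
    return credits

# Mostly idle, then a short spike that the accrued credits can absorb
print(simulate_credits([0.05] * 10 + [1.0] * 2))
```

The takeaway for sizing: bursting covers brief spikes, but a workload that is frequently above baseline drains credits and should be on a bigger instance instead.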
GCP offers instance groups, which are collections of instances, in two types: managed and unmanaged.
A managed instance group (MIG) consists of identical VMs on which your applications run. MIG workloads are scalable and highly available through features like autoscaling, autohealing, multi-zone deployment, and auto-updating. In contrast, unmanaged instance groups must be configured and managed manually.
Autoscaling is a great way to reduce instance costs: VMs are automatically added to or removed from your MIG. Policies set by the administrator continuously monitor resource utilization and add VMs (“scaling out”) when the target threshold dictated by the policies is reached. Likewise, GCP “scales in” the MIG when the extra resources are no longer required, either automatically through the policies or manually when triggered by the administrator.
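The arithmetic behind target-based autoscaling is worth seeing once. The sketch below follows the general idea of sizing a group so average utilization moves toward a target; it is a simplification, not the autoscaler’s actual algorithm, and the min/max bounds are example values.

```python
import math

# Sketch of target-utilization scaling: pick the smallest group size that
# would bring average utilization at or below the target, within bounds.

def recommended_size(current_vms: int, avg_utilization_pct: float,
                     target_pct: float, min_vms: int = 1, max_vms: int = 10) -> int:
    size = math.ceil(current_vms * avg_utilization_pct / target_pct)
    return max(min_vms, min(size, max_vms))

print(recommended_size(4, 90, 60))  # overloaded group grows to 6 VMs
print(recommended_size(4, 15, 60))  # quiet group shrinks to the minimum, 1 VM
```

The cost angle is the second case: without a scale-in policy, that quiet group keeps paying for four VMs doing the work of one.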
You can learn more about instance groups here.
GCP offers a comprehensive business continuity and disaster recovery (BCDR) solution. However, it can become very expensive: Backups are one of the biggest cost drivers in any public cloud.
Backup costs have two components. The first is a fixed cost for the backup service protecting resources such as VMs and databases; this tends to remain relatively constant. The second is the cost of storing the backup data itself. This can be misleading because it starts out quite low but grows as storage use increases, getting more expensive with every new snapshot, incremental backup, and full backup.
This increase in backup size and cost should be considered when creating a backup policy. It is very easy to get carried away when setting a backup policy, such as retaining daily backups for several weeks and keeping full weekly, monthly, and yearly backups for much longer than is required. Without a well thought out backup policy appropriate for the Service Level Agreements (SLAs), backup costs can easily run over budget.
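A quick model makes the retention trade-off concrete. All sizes and counts below are invented; the point is that every extra restore point multiplies the gigabytes you pay to keep.

```python
# Rough model of backup storage footprint under a retention policy.
# Daily backups are treated as incrementals; weekly and monthly as fulls.

def backup_storage_gb(full_gb: float, incremental_gb: float,
                      dailies: int, weeklies: int, monthlies: int) -> float:
    return dailies * incremental_gb + (weeklies + monthlies) * full_gb

# Hypothetical 500 GB dataset with ~20 GB daily change rate
aggressive = backup_storage_gb(500, 20, dailies=30, weeklies=12, monthlies=12)
lean       = backup_storage_gb(500, 20, dailies=7,  weeklies=4,  monthlies=3)
print(aggressive, lean)  # 12600.0 GB vs 3640.0 GB retained
```

Here the aggressive policy stores roughly 3.5x the data of the lean one for the same dataset, which is why retention should be derived from the SLA rather than set “just in case”.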
Storage is a prominent factor in the cost of any public cloud implementation, which makes it extremely important to get the storage right for cost optimization. As with computing, storage can have a significant performance impact, which is why it’s essential to balance out the cost-to-performance ratio.
GCP offers four storage classes:
- Standard storage for data that is accessed frequently.
- Nearline storage for infrequently accessed data that can be stored for at least 30 days, like backups or multimedia content.
- Coldline storage for infrequently accessed data that can be stored for at least 90 days, like disaster recovery points or old incremental data.
- Archive storage for infrequently accessed data that can be stored for at least 365 days, like archived legal documents, yearly backups, etc.
The cost of each storage class varies: Standard storage has the highest at-rest price, while the colder classes cost less to store but add retrieval fees and minimum storage durations. It is important to understand each storage class and choose the one that gives the best cost-to-performance ratio.
GCP also offers three location options for stored data that provide different levels of redundancy and performance. The standard regional option provides redundancy within a single region by saving multiple copies of the data across zones. However, if users are scattered around the world, serving data from a single region might not be ideal in terms of either redundancy or performance; this is where the dual-region and multi-region location options come in. Different location options are also available for backups. Dual- and multi-region options shouldn’t be used without a good reason, as they cost substantially more than regional storage. The regional option is the right choice whenever there is no particular need for enhanced redundancy or performance.
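Choosing a class comes down to comparing at-rest price against retrieval price for your access pattern. The prices below are placeholders in the right relative order, not GCP’s current price list; check the official pricing page before using real numbers.

```python
# Sketch comparing storage classes: cheap-to-store classes charge to retrieve.
# All prices are illustrative (USD per GB-month stored, USD per GB retrieved).

CLASSES = {
    #            storage $/GB-mo  retrieval $/GB
    "standard": (0.020,           0.00),
    "nearline": (0.010,           0.01),
    "coldline": (0.004,           0.02),
    "archive":  (0.0012,          0.05),
}

def monthly_cost(cls: str, stored_gb: float, retrieved_gb: float) -> float:
    storage, retrieval = CLASSES[cls]
    return stored_gb * storage + retrieved_gb * retrieval

# 1 TB of backups read about once a year: colder classes win despite fees
for name in CLASSES:
    print(name, round(monthly_cost(name, 1000, 1000 / 12), 2))
```

Flip the access pattern (small dataset, heavy reads) and the ranking reverses, which is why class choice has to start from how often the data is actually accessed.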
Taking a proactive approach
Cost optimization isn’t a one-time job. It is like exercise: something you must do routinely and methodically for it to be effective.
Infrastructure must be checked regularly to flag services that are not being used to their full capacity. For example, once some time passes after initial setup, it is very common to have several orphaned disks, public IPs, network resources, and even entire virtual machines. These orphaned resources increase costs that you can eliminate by proactively reviewing your GCP setup on a regular basis.
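An orphaned-resource sweep is essentially a filter over an inventory. The sketch below uses a made-up record shape, not the Compute Engine API schema: the idea is simply that a disk attached to no instance is a deletion candidate.

```python
# Illustrative sweep for orphaned disks: flag any disk with no attached users.
# The record fields ("name", "size_gb", "users") are invented for the example.

disks = [
    {"name": "web-boot",     "size_gb": 50,  "users": ["instances/web-1"]},
    {"name": "old-data",     "size_gb": 200, "users": []},
    {"name": "temp-scratch", "size_gb": 100, "users": []},
]

def orphaned(disks):
    """Return names of disks attached to no VM."""
    return [d["name"] for d in disks if not d["users"]]

print(orphaned(disks))  # ['old-data', 'temp-scratch'] are deletion candidates
```

The same check can be run against real inventory with `gcloud compute disks list` filtered on empty `users`; either way, make it a scheduled review rather than a one-off.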
CloudBolt’s cost optimization solution enables proactive review of your GCP setup with predefined automated checks and remedial actions. It takes the burden off administrators so they can focus on more strategic initiatives. The complexity of multi-cloud deployments (GCP plus one or more other clouds) typically drives the need for a third-party solution.
GCP cost optimization is almost as complex as GCP itself, and there is no one correct way to optimize cost. This is why it is very important to have a deep understanding of any services you decide to run on GCP. Read more in our series to improve your understanding of these services and their pricing models and further optimize cost.