How it works

StormForge by CloudBolt

From real-time recommendations to continuous optimization and drift prevention, StormForge seamlessly observes, applies, and maintains Kubernetes efficiency—without disrupting your existing workflows or tooling.

HPA Workload Optimization

Your autoscaling settings stay where you put them.

StormForge optimizes HPA-managed workloads through bi-dimensional autoscaling to ensure optimal vertical and horizontal scaling behavior. When the HPA is scaling on a resource metric, other rightsizing solutions alter the vertical resource requests and change the workload's scaling profile, causing it to thrash and leading to downtime and unexpected scaling behavior. StormForge uses machine learning to detect scaling behavior and updates requests alongside HPA target utilization to preserve intended scaling.

But that optimization erodes silently. A CI/CD deploy resets your HPA targets to what’s in the manifest. An Argo CD sync overwrites a tuned value. A teammate changes a setting without realizing it was being managed. Scaling behavior degrades — and nobody notices until costs spike or performance drops.

Here’s why drift happens: when StormForge right-sizes a workload, it adjusts resource requests — which shifts the utilization ratio HPA depends on. Bi-dimensional autoscaling recalculates the HPA target alongside requests to keep your scaling behavior intact. But now the target utilization is being managed continuously. The next trigger — human or automation — resets it. The drift cycle begins.



Bi-dimensional autoscaling

When requests change, the utilization ratio HPA uses to trigger scaling shifts. If you only adjust requests without updating the HPA target, you change when and how aggressively the workload scales. Bi-dimensional autoscaling solves this by treating requests and HPA targets as a coupled pair, preserving your intended scaling behavior while reducing resource waste.
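As a simplified illustration (not StormForge's actual ML logic), treating requests and targets as a coupled pair can be thought of as holding constant the absolute usage level at which HPA scales out, and recomputing the target percentage whenever the request changes:

```python
def adjusted_hpa_target(old_request_m, new_request_m, old_target_pct):
    """Keep the absolute usage level at which HPA scales out constant.

    HPA scales when usage / request exceeds the target, so the absolute
    trigger point is request * target. When rightsizing changes the
    request, recomputing the target preserves that trigger point.
    """
    absolute_trigger_m = old_request_m * old_target_pct / 100
    return 100 * absolute_trigger_m / new_request_m

# A workload requests 1000m CPU with a 60% HPA target: it scales out once
# average usage crosses 600m. Rightsizing lowers the request to 800m;
# raising the target to 75% keeps the 600m trigger intact.
print(adjusted_hpa_target(1000, 800, 60))  # -> 75.0
```

Adjusting only the request (the left side) while leaving the target alone would move that 600m trigger down to 480m — the earlier, more aggressive scaling that causes thrash.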


Continuous HPA reconciliation

The StormForge Applier watches HPA target utilization settings and detects drift. When something changes outside of StormForge, it reconciles — automatically restoring optimized values.
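The core idea of reconciliation can be sketched in a few lines — this is an illustrative sketch, not StormForge's implementation; the function and parameter names are hypothetical:

```python
def reconcile_hpa_target(observed_pct, desired_pct, apply_fn):
    """If anything outside the optimizer changed the HPA target,
    restore the optimized value; otherwise do nothing."""
    if observed_pct != desired_pct:
        apply_fn(desired_pct)  # e.g. patch the HPA object in-cluster
        return True            # drift detected and corrected
    return False               # already at the optimized value

# Something reset the target to 80%, but the optimized value is 75%:
applied = []
reconcile_hpa_target(observed_pct=80, desired_pct=75, apply_fn=applied.append)
print(applied)  # -> [75]
```

A real controller runs this comparison continuously against watch events, so a manual edit or automation-driven reset is corrected shortly after it lands.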


CI/CD-aware workload reconciliation

Recommended request settings are maintained across deployments. When Argo CD, Flux, or another CD tool deploys, StormForge manages the source of truth for optimized settings — ensuring the correct requests are deployed as the workload gets updated.


No silent regressions

Optimization doesn’t erode over time. Settings don’t drift back to defaults after a deploy. What StormForge optimized stays optimized — until a new recommendation says otherwise.

The Agent + Applier

Start with visibility. Add automation when you’re ready.

Not every team is ready to automate optimization on day one. Some need to see the recommendations first, build trust with the data, and prove value before granting write access. StormForge is built for that path.

StormForge runs as two lightweight, self-optimizing components. Each has a distinct job. Install them together or separately depending on where your team is.


The Agent: observe and recommend

A Kubernetes controller paired with a metrics forwarder. The controller watches your workloads and configuration. The forwarder streams per-container CPU and memory usage metrics to the StormForge SaaS backend over HTTPS. Just resource telemetry.

The SaaS-hosted ML engine analyzes usage patterns and generates right-sizing recommendations on the schedule you define. First recommendation takes just a few minutes.

The Applier: execute with precision

An optional, separately installable component that applies recommendations to your workloads. Three apply methods give you control over how changes land:

Server-side patches — Direct resource updates to workload specs.

Mutating admission webhook — Enables in-place pod resizing and advanced rollout strategies (Immediate and Hybrid).

GitOps export — Download recommendations as patches for CI/CD-driven apply. No Applier required.
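For the GitOps path, an exported recommendation is just a small resource patch committed to your repo. The exact export format and the names below (`checkout`, `app`) are illustrative assumptions, but the shape is a standard Kubernetes strategic-merge patch:

```yaml
# Hypothetical exported rightsizing patch — resource names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: 800m       # recommended value
              memory: 512Mi   # recommended value
```

Committed to Git, this flows through your existing CD pipeline like any other change — no in-cluster write access for StormForge required.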

After applying, the Applier validates rollout health and monitors the workload. If something goes wrong, it catches it. If HPA targets drift, it reconciles. If a CI/CD deploy overwrites optimized request settings, it reapplies.

Agent without Applier

Visibility first. Install the Agent alone to see recommendations without acting on them. No RBAC write permissions needed.

Agent + Applier

Full automation. Install both and define your policies. StormForge observes, recommends, applies, validates, and reconciles — continuously.

SaaS Delivery Model

All the intelligence in the cloud. Minimal footprint in your cluster.

Many Kubernetes optimization tools are self-hosted — meaning the recommendation engine, analytics, and data storage all run inside your cluster. Full control, but at the cost of operational overhead that scales with every cluster you onboard.

What gets installed in-cluster

The StormForge Agent and optional Applier — both lightweight and self-optimizing. They forward metrics and apply recommendations. That's it.

What runs in SaaS

The ML recommendation engine. The web UI. All historical data and analytics. No self-hosted Prometheus dependency at scale. No cluster-side data warehousing.

No bundled infrastructure to manage

Competitors that run self-hosted require you to operate their Prometheus instance, manage their storage, and scale their infrastructure alongside yours. StormForge streams a targeted set of resource metrics to the SaaS backend via HTTPS — your cluster stays lean.

TCO advantage that compounds over time

Self-hosted tools add operational overhead to every cluster you onboard. SaaS overhead stays flat. At 10 clusters, the difference is noticeable. At 50, it’s a line item.

Calculate the total cost of ownership difference:

TCO Calculator →

Trusted by the world’s largest financial institutions

Metrics are scoped to resource usage and performance data — not application payloads. Data is transmitted over HTTPS. SOC 2 compliant. No sensitive workload data leaves your cluster.

Integrations

Your stack stays yours. StormForge fits inside it.

Rip-and-replace doesn’t work for platform teams managing production Kubernetes. You’ve already invested in your GitOps pipeline, your observability stack, and your deployment model. StormForge works alongside all of it — not instead of it.

GitOps: Argo CD + Flux

Native integration with both. Configure StormForge as a recognized field manager in Argo CD. Mutate pods directly alongside Flux to prevent reconciliation conflicts. Or skip the Applier entirely and export recommendations as patches for your CI/CD pipeline.

Scaling: KEDA + HPA

Full support for workloads scaled by KEDA ScaledObjects. HPA target utilization is reconciled automatically — not just respected, actively maintained.

Platforms: EKS Add-On + OpenShift

Available as an AWS EKS add-on for streamlined procurement and installation. Dedicated install path for Red Hat OpenShift Container Platform.

Three ways to apply recommendations

Server-side patches (default), mutating admission webhook (for in-place resizing), or GitOps export (for CI/CD-driven apply). Pick the method that fits your deployment model — configurable per namespace or per workload.

Why platform engineers choose CloudBolt

90% faster time-to-savings

From months to hours, for immediate cost reductions.

Reduced manual work

Eliminate repetitive, unsustainable resource tuning.

Up to 85% in Kubernetes savings

Right-size workloads and reduce your node footprint.

99% allocation accuracy

Maximize node efficiency and avoid wasted spend.

NEXT STEPS

Ready to put Kubernetes rightsizing on autopilot?

Start reducing your Kubernetes costs without sacrificing performance. See it work with your actual workloads—risk-free.

*Free trial includes full optimization on 1 cluster for 30 days.

Start free trial


Related resources

 
Videos

How Acquia cut web node infrastructure by 65% with continuous Kubernetes rightsizing

Acquia modernized a platform that previously ran on roughly 26,000 EC2 nodes by moving to Kubernetes. The goal wasn’t just containerization—it was elastic scaling for traffic spikes without relying on fixed “small/medium/large” sizing. Results at a glance: 65% reduction in web node footprint, 99.99% availability delivered consistently, 26,000 EC2 nodes as the legacy baseline modernized […]

 
Blog

Bill-Accurate Kubernetes Cost Allocation, Now Built Into CloudBolt

CloudBolt is introducing granular Kubernetes cost allocation directly within the platform, now available in private preview. This new capability delivers bill-level accuracy down to the container, intelligently allocates shared costs, and integrates natively with enterprise chargeback. If you’d rather see it than read about it, start with a quick walkthrough of the experience: Here’s what […]

 
Videos

StormForge Optimize Live: 5-minute demo

StormForge Optimize Live gives teams a clear, intelligent, and automated way to right-size every Kubernetes workload with confidence. In this short walkthrough, Product Manager Nick Walker shows how Optimize Live provides instant top-down visibility across clusters, surfaces both waste and underprovisioning risks, and generates precise, usage-based recommendations that improve performance while reducing cost. Viewers will […]