Despite all the abstractions, servers are complex. Add the vast number of knobs that can be manually tuned and adapted for even a single application, and that complexity grows by orders of magnitude.
For some, manual tuning is both an art and a science, but even for the rare few who take great pleasure in endless knob-tuning, it is ultimately a massive drain on time and energy. Even with expert tuning that accounts for all the vagaries of CPU, operating system, framework, and application parameters, we are only human, and there are limits to what we can see.
That is true even for small node-count deployments; now imagine tuning across much larger fleets of machines, each acquired at a different time and each requiring its own set of knob adjustments for every aspect of performance.
The tuning burden quickly spins out of control, leaving progressively larger gaps in performance and efficiency, and eating into costs. Human insight has its limits, but as customers like Packet (a bare metal cloud based in NYC) are quickly realizing, there is huge value in enabling end users to maximize performance without tedious manual tuning.
Extracting More Value from Hardware
As a provider of high performance cloud resources, Packet is constantly looking for new ways to deliver more value from the physical servers provisioned on its platform. No matter what applications or technologies users put on top of their Packet servers, extracting extra performance and efficiency from the hardware has been an important and missing part of the story.
Packet understands that to stand out as a unique provider in a crowded – and mainly virtualized – cloud industry, it needs to invest in tools like automatic tuning and in any “easy” win that helps customers extract more value from their infrastructure. This constant drive for optimization is what brought the company to DatArcs.
Cloud Portability Makes Manual Tuning Impractical
While Packet’s Dell, SuperMicro, Quanta, and Foxconn servers are designed to perform well across a broad spectrum of applications, at the end of the day Packet’s nearly 8,000 users take that commodity hardware and do all kinds of things with it.
From databases and serverless functions to big data and storage, the use cases are endless. Additionally, due to Packet’s focus on automation and “cloud native” workloads, the environments are constantly being refreshed as users swap out servers, add new hardware, or auto-scale to handle peaks and valleys.
This makes the long, slow process of manual hardware tuning impractical. It requires moving beyond human-level adjustments toward software-driven optimization that can observe the application, recognize its needs relative to the hardware, quickly learn its ins and outs, and automatically implement the ideal setup for that environment and workload.
How the Magic Happens – And Why It Needs to Feel Like Magic!
These are high-level features of DatArcs; under the hood, a set of learning algorithms assembles a complete profile of the application and sets it on its best path to energy-efficient performance.
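To make the idea of automatic tuning concrete, here is a deliberately simplified sketch of a feedback loop: measure a workload metric, perturb one knob, and keep the change only if the metric improves. This is an illustration of the general technique (greedy hill climbing), not DatArcs' actual algorithm, and every knob name and the `measure` function below are hypothetical stand-ins.

```python
import random

# Hypothetical knob space: each knob has a list of candidate settings.
# These names are illustrative, not real sysctl or BIOS parameters.
KNOBS = {
    "cpu_governor": ["powersave", "balanced", "performance"],
    "io_queue_depth": [32, 64, 128, 256],
    "hugepages": [0, 512, 1024],
}

def measure(config):
    """Stand-in for benchmarking the live workload under `config`.

    Returns a throughput-like score. This toy version simply peaks
    at one specific configuration so the loop has something to find.
    """
    target = {"cpu_governor": "performance",
              "io_queue_depth": 128,
              "hugepages": 512}
    return sum(1.0 for k in KNOBS if config[k] == target[k])

def auto_tune(rounds=300, seed=0):
    """Greedy hill climbing over the knob space.

    Each round tries a random single-knob change and keeps it
    only if the measured score strictly improves.
    """
    rng = random.Random(seed)
    config = {k: values[0] for k, values in KNOBS.items()}
    best = measure(config)
    for _ in range(rounds):
        knob = rng.choice(list(KNOBS))
        candidate = dict(config, **{knob: rng.choice(KNOBS[knob])})
        score = measure(candidate)
        if score > best:
            config, best = candidate, score
    return config, best
```

In a real tuner, `measure` would run against the live application (which is why the software must "see" the workload), and a production system would use far smarter search than random perturbation, but the measure-adjust-keep loop is the essential shape.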
With no manual tuning required of Packet or its customers, and pricing set at provisioning time for pennies on the hour, the choice was clear: dynamic, adaptive, automatic tuning without the headache or the budget hit.
DatArcs understands that Packet is just one of many companies, research groups, and enterprise data teams that need results as fast as possible and within a power-aware envelope. It also knows that time-constrained professionals have no time to waste endlessly turning knobs, only to find that one subtle change cascades into a whole new round of re-tuning.
What smart developers, engineers, scientists, and data-driven analysts need is a tool that outsmarts even the smartest system administrator: one with a deep view into the system, the application, and the parameters available for adaptive tuning.