Why Leading Bare Metal Cloud Provider Packet Partnered with DatArcs to Help Users Maximize Hardware Infrastructure

Despite all the abstractions, servers are complex. Add to that raw hardware complexity the multitude of ways a machine can be manually tuned and adapted to even a single application, and the complexity grows by orders of magnitude.

For some, manual tuning is both an art and a science, but even for the rare few who take great pleasure in endless knob-turning, it is ultimately a massive drain on time and energy. And even with expert tuning that accounts for all the vagaries of the CPU, operating system, framework, application, and other parameters, we are only human; there are limits to what we can see.

This is already the case for small node-count deployments, but imagine tuning across much larger fleets of machines, each acquired at a different time and each requiring its own knob settings for every aspect of performance.

The tuning effort quickly spins out of control, leaving progressively larger gaps in performance and efficiency and eating into costs. Human insight is at its limits, but as customers like Packet (a bare metal cloud based in NYC) are quickly realizing, there is huge value in enabling end users to maximize performance without tedious manual tuning.

Extracting More Value from Hardware
As a provider of high-performance cloud resources, Packet is constantly looking for new ways to deliver more value from the physical servers provisioned on its platform. No matter what applications or technologies users end up putting on top of their Packet servers, extracting extra performance and efficiency from the hardware was an important but missing part of the company’s story.

They understand that to stand out as a unique provider in a crowded – and mainly virtualized – cloud industry, Packet needs to invest in tools like automatic tuning and any “easy” resource that helps customers extract more value from their infrastructure. This constant drive for value and optimization is what brought them to DatArcs.

Cloud Portability Makes Manual Tuning Impractical
While Packet’s Dell, SuperMicro, Quanta and Foxconn servers are designed to perform well across a broad spectrum of applications, at the end of the day Packet’s nearly 8,000 users take that commodity hardware and do all kinds of things with it.

From databases and serverless functions to big data and storage, the use cases are endless. Additionally, due to Packet’s focus on automation and “cloud native” workloads, the environments are constantly being refreshed as users swap out servers, add new hardware, or auto-scale to handle peaks and valleys.

This makes the long, slow process of manual hardware tuning impractical, requiring a move beyond human-level adjustments toward software-driven optimization that can see the application, recognize its needs relative to the hardware, quickly learn its ins and outs, and automatically implement the ideal setup for that environment and workload.

How the Magic Happens – And Why it Needs to Feel Like Magic!
These are all high-level features of DatArcs, but under the hood, a complex set of learning algorithms snaps together a complete profile of the application and sets it on its best path to energy-efficient performance.
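DatArcs hasn’t published the internals of those algorithms, but the general shape of an automatic tuner can be sketched: treat each combination of knob settings as a candidate, measure the workload under the current candidate, and gradually shift from exploring new settings to exploiting the best one found so far. The sketch below is a minimal, hypothetical illustration in Python; the knob names and the simulated measurement are assumptions for illustration, not DatArcs code.

import random

# Hypothetical tunable knobs; a real tuner works on things like CPU frequency
# governors, prefetcher settings, transparent huge pages, and kernel parameters.
KNOBS = {
    "cpu_governor": ["performance", "powersave", "ondemand"],
    "prefetcher": ["on", "off"],
    "thp": ["always", "madvise", "never"],
}

def random_config():
    return {knob: random.choice(values) for knob, values in KNOBS.items()}

def measure(config):
    # In a real tuner this would apply `config` to the machine, run the
    # workload for a short window, and report requests/sec (or perf per Joule).
    # Here we fake a noisy score so the sketch runs end to end.
    score = 100.0
    score += 15.0 if config["cpu_governor"] == "performance" else 0.0
    score += 5.0 if config["prefetcher"] == "on" else 0.0
    return score + random.gauss(0, 3)

def tune(iterations=100, epsilon_start=0.5):
    best_config, best_score = random_config(), float("-inf")
    for i in range(iterations):
        # Decay exploration over time: early iterations try random settings
        # (the jittery "learning" behaviour), later ones mostly reuse the best.
        epsilon = epsilon_start * (1 - i / iterations)
        config = random_config() if random.random() < epsilon else best_config
        score = measure(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config  # apply once and stop tuning, as in a "static" mode

print(tune(iterations=50))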

With no manual tuning required of either Packet or its customers, and with a cost of just pennies per hour added for users at provisioning time, the choice was clear: dynamic, adaptive, automatic tuning without the headache or the budget hit.

DatArcs understands that Packet is just one of many companies, researchers, and enterprise data analysts that need results as fast as possible and within a power-aware envelope. They also know that time-constrained professionals have no time to waste endlessly changing knobs, only to find that one subtle change can cascade into a whole new round of re-tuning.

What smart developers, engineers, scientists, and data-driven analysts need is a tool with a deep inside view of the system, the application, and the tuning parameters, one that adapts them better than even the smartest system administrator could.

DatArcs Awarded PowerBridgeNY Ignition Grant


DatArcs Optimizer is a great tool for improving the performance of servers with minimum engineering effort. In addition to boosting performance, DatArcs Optimizer can also improve the energy efficiency of servers by optimizing for performance per Joule. We believe data centers could be much more efficient than they are today with the help of automatic and dynamic tuning. We’re happy that others share our vision – we’ve just become awardees of the PowerBridgeNY Ignition Grant!
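To make “performance per Joule” concrete: it is simply useful work completed divided by the energy consumed while doing it. Below is a minimal sketch, assuming a Linux machine that exposes the Intel RAPL package-energy counter through the powercap interface; the file path and the workload callable are assumptions for illustration, not part of DatArcs Optimizer.

from pathlib import Path

# RAPL package-0 energy counter in microjoules, exposed by the Linux powercap
# driver on most recent Intel servers (assumed present for this sketch).
RAPL_ENERGY = Path("/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj")

def read_energy_joules():
    return int(RAPL_ENERGY.read_text()) / 1e6

def perf_per_joule(run_workload):
    """Run a workload callable that returns units of work completed
    (e.g. requests served) and report work done per Joule of package energy."""
    start = read_energy_joules()
    work = run_workload()
    end = read_energy_joules()
    # Note: the counter wraps at max_energy_range_uj; a production tool
    # would detect the wrap and correct for it.
    return work / (end - start)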

PowerBridgeNY is a state/university initiative that leverages clean energy innovations emerging from institutional research labs to create more and stronger energy businesses in New York State. With the help of PowerBridgeNY, our vision is now one step closer.

We’re lucky to have met our mentor David Levine through the program, and are thankful for his guidance and support. Thanks David!

Beta Version 0.5 and our presentation at HPCAC’17 @ Stanford

We’re pleased to release beta version 0.5 of DatArcs Optimizer, which now supports a static tuning mode (more information about the changes is available in the Change Log). Earlier this week we shared some results using the new version at the HPC Advisory Council Stanford Conference:
The original slides are available for download from the Conference website. The full video of the presentation is available below:

In the presentation we detailed our experience with the Phoronix Apache web server test running on a Packet type-2 server. The performance in the different phases is shown in the graph below, where the horizontal axis is the iteration number and the vertical axis is the normalized performance (the number of requests per second served by the web server, normalized to the baseline):

We ran the benchmark 200 times as follows:

  1. 20 iterations in baseline – We simply ran the benchmark without Optimizer in the background. The results of this phase were used to normalize the graph.
  2. 140 iterations in learning phase – We enabled Optimizer at the beginning of this phase with a clean database. The performance in the first 20 iterations (~50 minutes) of this phase was jittery, because Optimizer was exploring various knob options. After around 20 iterations, the performance stabilized at around a 23% improvement over the baseline.
  3. 20 iterations in best phase – We switched Optimizer to “best” mode at the beginning of this phase, which suppresses further exploration and reduces some of the overhead required for continuous tuning. The improvement in this phase averaged 24%.
  4. 20 iterations in static phase – We switched Optimizer to “static-best” mode at the beginning of this phase, which applied the best settings found during the learning phase. After that, Optimizer exited and no longer consumed any CPU cycles or memory. Since dynamic tuning was turned off, the performance improvement dropped from 24% to 8.8%.

The full output of the run is available below:

[root@pkt-type2 ~]# datarcs-benchmark 20/140/20/20 pts/apache
Set up benchmark suite ...
Start benchmark ...
Baseline Phase : [########################################] 100%
Setting mode to tune
Learning Phase : [########################################] 100%
Setting mode to best
Optimization Phase : [########################################] 100%
Setting mode to static-best
Static Phase : [########################################] 100%
Summary for benchmark pts/apache:
phase, runs, performance, improvement, relative_stdev
Baseline, 20, 27317, 0%, 2.141%
Learning, 140, 32752.9, 19.899%, 5.092%
Optimized, 20, 33940.3, 24.246%, 0.888%
Static, 20, 29730.5, 8.835%, 2.412%
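
For reference, the improvement column in the summary is simply each phase’s mean performance relative to the baseline mean, which is the same normalization used for the graph above. A few lines of Python reproduce it from the numbers in the summary:

baseline = 27317.0  # mean requests/sec in the Baseline phase

# Mean requests/sec per phase, copied from the summary above.
phases = {"Learning": 32752.9, "Optimized": 33940.3, "Static": 29730.5}

for name, perf in phases.items():
    normalized = perf / baseline                # what the graph plots
    improvement = (normalized - 1.0) * 100.0    # 19.899%, 24.246%, 8.835%
    print(f"{name}: {normalized:.3f}x baseline, {improvement:.3f}% improvement")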