How to Keep a Lid on Your Container Costs


These days, pretty much every organization needs to develop software, whether it’s in-house tools or customer-facing apps and websites. Your developers need to rapidly create, test, and deploy great software – and one of the best ways to enable that is access to agile servers running virtual machines (VMs) and containers.

When it comes to hyperconverged infrastructure (HCI), VMs and containers are fundamental building blocks. Both have been around for decades, but in the last 5-10 years, containers have become the de facto tool for testing and running applications in isolation.

Why are containers so popular? 

Containers work much like virtual machines, with their own resources isolated from other VMs and containers. But because they virtualise at the OS level rather than the hardware level – sharing the host’s kernel instead of booting a full guest operating system – they are faster to spin up than VMs. Containers can be created in seconds, whereas VMs can take minutes. When you are running hundreds or thousands of containers, this agility and speed advantage can have a big impact on costs and efficiency.
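As a simple illustration (a sketch, assuming the Docker CLI is installed and the public `alpine` image is available), starting an isolated container is a single command, with no guest OS to boot:

```shell
# Start an isolated container, run one command, and remove it on exit.
# Because the container shares the host kernel, there is no OS boot step.
docker run --rm alpine echo "container up"

# Time a cold start to see the seconds-vs-minutes difference against a VM.
time docker run --rm alpine true
```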

Docker and Kubernetes

Containerisation firm Docker® has led the way in making containers faster and easier to deploy, driving their widespread adoption and making them the increasingly essential development tool they are today. The Docker platform scales up and down with exceptional ease, just as HCI does.
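A hedged sketch of what that ease of deployment looks like in practice: a minimal Dockerfile (the base image, file names, and start command below are illustrative assumptions, not from the article) packages an application and its dependencies into a portable image that runs identically on any Docker host.

```dockerfile
# Illustrative Dockerfile for a small Python web app (names are hypothetical)
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building once with `docker build` and running the resulting image anywhere is what makes containers so quick to deploy and scale.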

Likewise, Kubernetes®, the open-source system for automating deployment, scaling, and management of containerised applications that originated at Google, has set the standard in container orchestration. Docker and Kubernetes are often found working together in containerised application deployments: Docker builds and runs the containers, while Kubernetes orchestrates them across a cluster.
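To make the orchestration side concrete, here is a minimal Kubernetes Deployment manifest (a sketch; the app name, image, replica count, and port are illustrative assumptions). It asks Kubernetes to keep three identical copies of a containerised app running, replacing any that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # hypothetical application name
spec:
  replicas: 3                        # Kubernetes maintains three running copies
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.0   # illustrative container image
        ports:
        - containerPort: 8080
```

Scaling up or down is then a one-line change to `replicas` – the automated deployment, scaling, and management the article describes.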

Such is the popularity of containers that the giant of the virtual machine world, VMware®, has just launched its Tanzu offering. VMware Tanzu is an outstanding example of a solution that integrates HCI and containers, bringing native Kubernetes orchestration to the VMware vSphere® management platform.

How to run fast and efficient containers

So, we know that containers are often the right choice for your developers. But how do you maximise performance?

The hardware you choose for your HCI deployment makes a big difference, and the most important component of all is the processor. More capable processors typically mean better performance and more containers per server, which can result in lower costs.

AMD EPYC™ processors combine up to 64 cores per CPU with high memory bandwidth and excellent I/O bandwidth. This means these high-performance processors from AMD can serve more containers per CPU compared to competitor chips, while AMD’s single-socket processors let you avoid the potential latency drawbacks of dual-socket configurations. This performance capacity means you can run fewer physical servers, reducing your CapEx and OpEx and letting you spend crucial dollars and cents elsewhere.

For example, the AMD EPYC 7742 processor is exceptionally good at running container workloads, outperforming Intel’s Xeon Platinum 8280 processor by up to 45% at 256 concurrent containers.

[Chart: AMD EPYC 7742 vs Intel Xeon Platinum 8280 container workload performance]

AMD has both ends of the spectrum covered: if you are targeting low TCO and high VM or container density, an AMD 64-core CPU could be the right choice. If you want per-core or per-thread performance, or you use software that is licensed by the core, AMD offers higher-frequency chips with fewer cores: 8, 16 or 24.

Find out more

Discover the full potential of AMD chips for HCI servers here.

Copyright © 2021 IDG Communications, Inc.
