Over the past decade, virtualization has changed the face of IT. Thanks in large part to virtualization, workloads are deployed more efficiently, large data centers run hypervisors that enable public and private cloud applications, and compute resources can be spun up and down faster than ever before. Purpose-built servers, while they generate less buzz, are another innovation changing the way modern IT gets done. Understanding the benefits of each technology will help you make better business decisions and get the most ROI and the best performance out of your servers. In this post, we’ll discuss the benefits of purpose-built servers and virtualization and explain how you can use them together to optimize your IT infrastructure.
What are purpose-built servers?
As the name implies, purpose-built servers are designed with a particular use case or application in mind. Because they are optimized for a specific application, they can outperform “commodity” servers and meet requirements other servers cannot.
One of the most common types of purpose-built servers IT professionals will be familiar with is the storage server. A good way to begin conceptualizing the benefits of purpose-built servers is to consider how a server used for NAS (Network Attached Storage) requires significantly different specifications than one used for CPU-heavy big data analytics applications. From there, you can drill down into specialization within different categories. For example, Premio manufactures storage servers that are purpose-built to meet the demands of a wide variety of specific use cases, ranging from all-flash storage arrays for CDNs (Content Delivery Networks) to servers optimized for virtual desktop infrastructure / hyper-converged infrastructure (see Different Types of Storage Devices & How to Use Them for more information).
Simply put, the benefit of purpose-built servers at a high level is the same benefit specialization offers in other fields: when you focus on doing one thing, you do it very well.
What is server virtualization?
As anyone who has taken an entry-level computer science course can attest, IT is all about layers of abstraction, and virtualization is another example. It is the process by which compute resources are abstracted by a hypervisor. Operating systems are then installed on top of the hypervisor, with the hypervisor supplying virtualized CPU, RAM, storage, and other resources to the guest operating systems. Oftentimes the guest operating systems installed on top of hypervisors are referred to as virtual machines. Tech note: while the terms are used interchangeably in casual conversation, there are some technical differences between virtual machines and guests. Check out this VMware knowledge base article if you’re interested in the nuts and bolts.
At a high level, there are two types of hypervisors:
- Type 1 hypervisors, or “bare metal” hypervisors: These hypervisors are installed directly onto computer hardware in the same way a traditional operating system would be. Examples include VMware’s ESXi, Xen Project, and Microsoft’s Hyper-V.
- Type 2 hypervisors, or “hosted” hypervisors: These are hypervisors that are installed within an operating system (like Windows 10 or OS X 10.11). Examples include VMware Workstation and Oracle VirtualBox.
We’ll focus on Type 1 bare-metal hypervisors as those are most commonly found in data centers and used by organizations to virtualize their servers.
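To make the hypervisor-to-guest relationship concrete, here is a minimal sketch using the libvirt Python bindings (a common management API for Type 1 hypervisors such as KVM). It connects to a local hypervisor and lists each guest along with the virtualized CPU and memory it has been allocated. The `qemu:///system` URI assumes a local KVM/QEMU host with the libvirt-python package installed; adjust it for your environment.

```python
import libvirt  # pip install libvirt-python; requires a running libvirt daemon

# Connect to the local KVM/QEMU hypervisor. The URI is an assumption;
# other drivers (Xen, ESX) use different connection URIs.
conn = libvirt.open("qemu:///system")

for dom in conn.listAllDomains():
    # info() returns [state, maxMem (KiB), memory (KiB), nrVirtCpu, cpuTime]
    state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), {max_mem_kib // 1024} MiB RAM")

conn.close()
```

This is the same kind of query that graphical management tools run under the hood: every guest’s “hardware” is just an allocation handed out by the hypervisor.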
Businesses previously had to deploy multiple physical servers, each likely running at low resource utilization, for network services like Active Directory, mail, print, file sharing, and more. Through virtualization, they can instead deploy multiple discrete servers on the same hardware. This fundamental shift away from a one-to-one “role-to-physical-server” model allows for a number of benefits, including:
- Greater operational flexibility: Relative to an environment full of standalone physical servers each running just one operating system, virtualized environments offer significantly greater operational flexibility. Development, QA, and test environments can be spun up almost on demand, and under-utilized virtual servers can be decommissioned or scaled down far more easily than physical servers. Additionally, provisioning can be centralized through management tools, and taking “snapshots” of virtual machines can help streamline the backup and recovery process (see the snapshot sketch after this list).
- Better resource utilization: As mentioned, those physical servers were likely running at very low resource utilization levels. Consolidating them onto one physical server through virtualization allows for better utilization of available computing and storage resources.
- A “greener” solution: Using resources more efficiently leads to less overall consumption of rack space, power, and cooling. In addition to benefiting the environment, this also benefits the bottom line.
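As an example of how snapshots can streamline backup workflows, the sketch below creates a snapshot of a guest via the same libvirt Python bindings. The guest name `file-server-01` is hypothetical, and real snapshot policies (disk-only vs. memory snapshots, quiescing) will vary by environment.

```python
import libvirt

# Minimal snapshot definition; production policies often add disk
# and memory options (external snapshots, quiescing, etc.).
SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-maintenance</name>
  <description>Taken before applying updates</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("file-server-01")  # hypothetical guest name

snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
print(f"Created snapshot '{snap.getName()}' for guest '{dom.name()}'")

conn.close()
```

A scheduler could run a script like this before nightly backups or maintenance windows, and snapshots can be reverted or deleted through the same API.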
Why are purpose-built servers critical for virtualization?
While there are numerous benefits to virtualization, getting the most out of it still depends on selecting the right hardware to run your hypervisor on. In addition to meeting the minimum requirements for your hypervisor of choice, you’ll want to architect a server solution that delivers the best possible performance for your mission-critical applications and services.
In short, since a hypervisor is virtualizing all the hardware resources for a number of guest operating systems, special consideration must be given when selecting the underlying hardware to best accommodate the unique demands placed on servers running virtualized workloads. Servers purpose-built for virtualization are uniquely equipped to meet those demands. Below are a few key areas to consider when selecting a server:
- CPUs: Some hypervisors, like VMware’s ESXi 6.5, require CPUs with multiple cores. While clock speed is important, in virtualized environments the number of cores generally matters even more, because the driving factor in CPU performance is the number of threads available to schedule guest workloads. That’s what makes core count and support for hyper-threading so important. Processors should also support hardware virtualization for 64-bit guest operating systems (see the host check sketch after this list).
- Memory: RAM drives the performance of many virtualized environments. Given the criticality of server workloads, the demands server operating systems place on RAM, and the benefits of consolidating hardware, high-density RAM is a clear plus for virtualization, as is support for multiple DDR4 memory banks. When in doubt, err on the side of more RAM (the host check sketch after this list includes a simple capacity calculation).
- Network connectivity: Consolidating all the servers on a network onto hypervisors means you will also be centralizing the network traffic. Throughput and bandwidth will be vital to maintaining acceptable performance levels in enterprise environments. Servers purpose-built for modern production environments should have at least two Gigabit Ethernet adapters (for example, the Intel Ethernet Server Adapter I350).
- Storage: Fast, high-throughput, reliable storage is vital to any virtualized production environment. This whitepaper from VMware calls out a number of best practices and recommendations related to hardware in general and storage in particular. Key takeaways include: subpar storage performance is often caused by configuration issues with the underlying storage devices rather than by the hypervisor, and the number of input/output operations per second (IOPS) a storage system can sustain significantly impacts overall performance (see the IOPS sketch after this list). The benefits of flash storage, in general, become even more pronounced in virtual environments.
- Power, redundancy, and high availability: By consolidating workloads, virtualization also consolidates the criticality of a single server. This means that redundant power supplies and hot-swappable components are of the utmost importance in servers purpose-built for virtualization. If one power supply fails, the redundant supply keeps the system running; if a hot-swappable component fails, IT staff can replace it without taking the system down. All of these features help IT teams maintain as much uptime as possible for the businesses they support.
- Server management: The ability to control a server “out of band” can oftentimes be the difference between a minor blip in operations and a major outage. Features like IPMI (Intelligent Platform Management Interface) give administrators a means of controlling their servers even if the hypervisor management utilities lock up or are otherwise inaccessible, making this feature a big value-add for servers purpose-built for virtualization (see the IPMI sketch after this list).
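To tie the CPU and memory points together, here is a Linux-only sketch (Python standard library, no dependencies) that checks whether the host CPU advertises hardware virtualization (Intel VT-x appears as the `vmx` flag in /proc/cpuinfo, AMD-V as `svm`), counts logical CPUs including hyperthreads, reports installed RAM, and runs a simple capacity calculation. The guest sizes and overhead figure are illustrative assumptions, not hypervisor requirements.

```python
import os

def cpu_flags() -> set:
    """Return the CPU feature flags from /proc/cpuinfo (Linux only)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
hw_virt = "vmx" in flags or "svm" in flags  # Intel VT-x / AMD-V

logical_cpus = os.cpu_count()  # cores x threads (hyperthreads included)
mem_gib = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30

print(f"Hardware virtualization support: {'yes' if hw_virt else 'no'}")
print(f"Logical CPUs (threads): {logical_cpus}")
print(f"Installed RAM: {mem_gib:.1f} GiB")

# Illustrative capacity check: planned guest RAM plus an assumed
# hypervisor overhead should fit comfortably in physical memory.
planned_guest_gib = [8, 8, 16, 32]  # hypothetical guest allocations
overhead_gib = 4                    # assumed hypervisor overhead
needed_gib = sum(planned_guest_gib) + overhead_gib
print(f"Planned allocation: {needed_gib} GiB of {mem_gib:.1f} GiB available")
```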
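On the storage side, a rough capacity estimate can keep IOPS from becoming an afterthought. The sketch below sums hypothetical steady-state IOPS for the guests you plan to consolidate and applies an assumed burst factor; the figures are placeholders, not measurements, and real sizing should be based on monitoring data.

```python
# Hypothetical steady-state IOPS per guest you plan to consolidate.
guest_iops = {"mail": 400, "file": 250, "database": 1200}

burst_factor = 2.0  # assumed headroom for peak load
required_iops = sum(guest_iops.values()) * burst_factor

print(f"Plan storage for roughly {required_iops:,.0f} IOPS at peak")
```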
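Finally, to show what “out of band” control looks like in practice, this sketch queries a server’s chassis power state through its BMC using the widely used ipmitool CLI. The BMC address and credentials are placeholders; because the BMC runs independently of the host, this works even when the hypervisor itself is unresponsive.

```python
import subprocess

BMC_HOST = "10.0.0.50"   # placeholder BMC/IPMI address
BMC_USER = "admin"       # placeholder credentials
BMC_PASS = "changeme"

# ipmitool talks directly to the BMC over the network ("lanplus"),
# independent of the state of the host OS or hypervisor.
result = subprocess.run(
    ["ipmitool", "-I", "lanplus",
     "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS,
     "chassis", "power", "status"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "Chassis Power is on"
```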
While every use case will vary, experts familiar with the demands of virtualized workloads can provide experience-based guidance and recommendations on what solution is the right fit for you.
Servers customized for your use case
Premio specializes in designing server and storage solutions that are purpose-built for a wide variety of applications, including virtualization. For a real-world example, check out this case study detailing how Premio helped a leading DDI solutions provider architect a solution that improved performance by 300% while increasing MTBF and preorders. Contact us today to get started architecting a solution designed to meet the unique demands of your business.