As data centers become increasingly saturated with data, driven by the growing number of IoT devices coming online daily and by 5G networks connecting more devices than ever before, operators are looking for ways to extract as much performance as possible from their servers. One way to do so is by adding SmartNICs, which can offload some of the networking, storage, and security functions from a server's CPU thanks to their powerful processors and performance accelerators. Now that we know why SmartNICs are worth discussing, let's examine the difference between a regular NIC and a SmartNIC. But before we dive into the differences, what exactly is a regular NIC, and what is a SmartNIC?
What is a Regular NIC?
A network interface controller, commonly known as a NIC, is a hardware component that connects a computer to a network. A regular NIC enables communication between computers on the same local area network (LAN) and across larger networks via the Internet Protocol (IP). Our focus here is on standard wired NICs, which are widely used in data centers because of their reliability and robustness.
What is a SmartNIC?
A SmartNIC is a dedicated piece of hardware that accelerates networking, storage, and security functions; it can also perform virtualization, load balancing, and data path optimization. A SmartNIC typically combines a network interface controller with a multi-core CPU, with the option to add an FPGA and/or a GPU. This onboard computing power lets a SmartNIC offload networking, security, and storage functions from the host server, freeing up valuable processing power so the server can focus on running revenue-generating enterprise applications and the OS more efficiently. Network virtualization protocols such as VXLAN, NVGRE, and Geneve can also be offloaded to the SmartNIC. In addition, SmartNICs can perform packet inspection, flow table processing, encryption, VXLAN overlays, and NVMe-oF.
SmartNIC vs. Regular NIC
Regular NICs operate only as a middleman between data center servers or computers on a network. SmartNICs, on the other hand, can be programmed to perform functions that regular NICs cannot, such as packet filtering, timestamping, deduplication, flow shunting, and flow classification. Basic NICs simply facilitate communication between computers on a network; they are not intelligent enough to perform other functions or to offload work from the host system's CPU. SmartNICs can because they have a compute layer that regular NICs lack, which lets them run custom software on the NIC itself. Let's explore some of the functions SmartNICs can perform that regular NICs cannot.
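To make flow classification concrete, here is a minimal Python sketch of grouping packets into flows by their 5-tuple, the way a SmartNIC's flow table pipeline does at line rate. The packet dictionaries and field names are illustrative, not a real SmartNIC API:

```python
from collections import defaultdict

def five_tuple(pkt):
    """Key a packet by its 5-tuple: the classic flow identifier."""
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

def classify(packets):
    """Group packets into flows, as a hardware flow table would."""
    flows = defaultdict(list)
    for pkt in packets:
        flows[five_tuple(pkt)].append(pkt)
    return flows

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 40000, "dst_port": 443, "proto": "tcp"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 40000, "dst_port": 443, "proto": "tcp"},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2",
     "src_port": 51000, "dst_port": 53, "proto": "udp"},
]
flows = classify(packets)
print(len(flows))  # 2 distinct flows
```

On a SmartNIC this lookup happens in dedicated hardware per packet; once a flow is matched, actions such as filtering, deduplication, or shunting can be applied without involving the host CPU.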
1. Networking Functions
Networking functions that a SmartNIC can run include routing, firewalling, telemetry, load balancing, NAT, and overlay networks. All of this can be processed by the SmartNIC's CPU, reducing the load on the host server's CPU and freeing up cycles to run other revenue-generating enterprise applications.
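As one example of these offloads, load balancing often comes down to hashing a connection's identity so each client consistently lands on the same backend. The sketch below shows that idea in Python; the backend pool and addresses are hypothetical, and a real SmartNIC would perform this per packet in hardware:

```python
import hashlib

# Hypothetical pool of backend servers behind the load balancer.
BACKENDS = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

def pick_backend(src_ip, src_port):
    """Hash the client's identity so a given connection always maps
    to the same backend, as a SmartNIC load-balancing offload would."""
    digest = hashlib.sha256(f"{src_ip}:{src_port}".encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

# The same client always gets the same backend: no per-flow state needed.
backend = pick_backend("198.51.100.7", 52044)
print(backend)
```

Because the mapping is stateless and deterministic, the NIC can steer packets without consulting the host, which is what saves the CPU cycles described above.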
2. Storage Functions
SmartNICs can also function as storage controllers, managing the hard disk drives (HDDs) or solid-state drives (SSDs) in data center servers. Because SmartNICs typically sit on the same bus as the server's storage, the SmartNIC can talk directly to the storage devices: data only needs to flow from the storage device to the SmartNIC rather than through the server's CPU. Moreover, SmartNICs can run VMware NSX on the NIC itself, improving network bandwidth, reducing latency, and freeing up CPU cycles for better application performance.
3. DDoS Defense
SmartNICs can protect data center servers from DDoS attacks, which occur when a person or organization floods a target network or server with an overwhelming amount of traffic, denying service to legitimate traffic. Offloading DDoS detection and prevention from the host server to the SmartNIC keeps the main system CPU from being overwhelmed by the attack and improves mitigation capacity, because SmartNICs can be programmed to drop attack packets dynamically. F5 claims that a data center server equipped with a SmartNIC can withstand a DDoS attack 300x greater in magnitude than a server without one. Moreover, SmartNICs can filter all inbound and outbound packets, similar to how iptables works, providing a robust architecture for filtering network traffic.
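One common way to "drop attack packets dynamically" is per-source rate limiting. The token-bucket sketch below is a simplified software stand-in for that kind of rule (the rates and class design are illustrative, not any vendor's API): traffic within the allowed rate passes, and the flood beyond it is dropped before it ever reaches the host CPU.

```python
import time

class TokenBucket:
    """Per-source rate limiter: a toy model of the dynamic
    packet-drop rules a SmartNIC applies during a DDoS attack."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst   # tokens/sec, max burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the rate: drop the packet

bucket = TokenBucket(rate=10, burst=5)        # 10 pkts/s, burst of 5
results = [bucket.allow() for _ in range(20)] # a sudden flood of 20 packets
print(results.count(True))  # only the burst gets through instantly
```

Run in hardware with one bucket per source address, this lets legitimate flows continue at their normal rate while the flood is shed at the NIC.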
4. Data Encryption
DPUs (data processing units, a class of SmartNIC) can also accelerate data center servers by offloading data encryption from the server's CPU to the DPU. Data processing units have built-in hardware-based encryption and key-infrastructure engines, including a true random number generator, a built-in PKI engine, and a secure key store that keeps session keys encrypted in memory. Additionally, SmartNICs provide significant security by isolating the SmartNIC's operating system from the host system's operating system, which protects the SmartNIC's OS from attacks on the host.
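To illustrate what those hardware engines provide, here is a conceptual Python sketch using only the standard library as a software stand-in: `secrets` plays the role of the true random number generator for session-key generation, and HMAC-SHA256 plays the role of a keyed integrity engine. A real DPU would do this in dedicated silicon with the key held in its secure key store, not in host memory:

```python
import hashlib
import hmac
import secrets

# Stand-in for the hardware RNG: generate a 256-bit session key
# from the OS cryptographically secure random source.
session_key = secrets.token_bytes(32)

def protect(message: bytes) -> bytes:
    """Attach an authentication tag, as an inline crypto engine would."""
    tag = hmac.new(session_key, message, hashlib.sha256).digest()
    return tag + message

def verify(blob: bytes) -> bytes:
    """Check the tag before accepting the payload; reject tampering."""
    tag, message = blob[:32], blob[32:]
    expected = hmac.new(session_key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: data was tampered with")
    return message

blob = protect(b"payload")
assert verify(blob) == b"payload"
```

The point of offload is that every packet's cryptographic work, generating keys, tagging, and verifying, happens on the DPU, so none of these cycles are spent on the host CPU.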
5. Offloading Tasks from CPU to SmartNIC
The ability of SmartNICs to offload computationally intensive tasks from a server's CPU is of significant importance for data center operators, who typically want to extract as much performance as possible from existing hardware. SmartNICs make this possible by moving tasks from the host server's CPU to the SmartNIC's multi-core processor, offering more performance without having to replace all of the existing hardware. SmartNICs can easily be integrated into legacy data center servers because they plug into a PCIe slot.
What Are the Main Components of a Regular NIC vs SmartNIC?
A regular NIC consists of Ethernet ports, a small amount of memory to buffer the data being communicated, and a low-powered processor for converting data so it can be transmitted. SmartNICs, on the other hand, are built from powerful multi-core processors, a high-performance network interface controller with 10/25/50/100/200 Gigabit Ethernet ports, and a rich set of flexible, programmable acceleration engines that improve the performance of specific applications. Additionally, some SmartNICs come equipped with a GPU to accelerate AI workloads such as machine learning and deep learning. As you can probably tell by now, SmartNICs are significantly beefier and equipped with technology that makes them far more intelligent than regular NICs. That said, regular NICs are still often used on servers, even ones equipped with SmartNICs, for moving data between data center servers and computers.
What Other Performance Accelerators Are Data Centers Using?
Let’s look at some of the other performance accelerators that are being used in data centers to cope with the increase in the volume and velocity of data being stored and accessed. Performance accelerators include GPUs (graphics processing units), computational storage devices (CSDs), and FPGAs.
1. GPU (Graphics Processing Unit)
(Image credit: Nvidia)
GPUs are often added to servers to accelerate workloads that involve heavy mathematical calculation. GPUs are great for such workloads because they often have thousands of small cores, enabling them to perform many tasks in parallel. For this reason, GPUs are frequently used for demanding workloads such as machine learning and deep learning, where many computations can run simultaneously. GPUs also handle huge amounts of data much better than CPUs can, so the larger the dataset being processed, the more likely it is that adding a GPU will improve a server's performance. This ability to process data in parallel makes GPUs great for servers tasked with artificial intelligence, high-resolution video editing, medical imaging, and many other demanding workloads.
2. CSD (Computational Storage Device)
(Image credit: Samsung)
Computational storage is another performance accelerator being used to speed up data center servers. Computational storage devices differ from regular storage devices in that they are equipped with a multi-core processor that processes data at the storage device level. This places less stress on the main CPU, since data does not need to be sent to the CPU, and it allows organizations to extract valuable insights in real time. It also reduces latency, because the data is processed on the storage device itself rather than traveling through the system to the CPU; and since the data does not travel, it is less vulnerable to being misappropriated. Furthermore, our systems support peer-to-peer (P2P) DMA, which allows the system to send data directly from one device to another without going through the CPU or RAM. This eliminates CPU overhead and the need for synchronization when sending data between devices over the PCIe bus, providing lower latency and less data movement while freeing up precious PCIe bandwidth.
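The data-movement benefit can be shown with a toy model: filtering records on the device means only the matches cross the bus to the host, whereas the conventional path ships everything. The record format and predicate below are purely illustrative, not a real CSD API:

```python
# A toy model of why computational storage cuts data movement.
# Each record is a small dict; the predicate stands in for any
# query pushed down to the drive's onboard processor.
records = [{"id": i, "temp": 20 + (i % 50)} for i in range(100_000)]

def host_side(records):
    """Conventional path: ship every record to the CPU, then filter."""
    moved = len(records)               # everything crosses the bus
    hits = [r for r in records if r["temp"] > 65]
    return hits, moved

def device_side(records):
    """CSD path: the drive's processor filters first; only hits move."""
    hits = [r for r in records if r["temp"] > 65]
    return hits, len(hits)             # only matches cross the bus

hits_a, moved_a = host_side(records)
hits_b, moved_b = device_side(records)
assert hits_a == hits_b   # same answer either way
print(moved_a, moved_b)   # far fewer records cross the bus with a CSD
```

The result is identical either way; what changes is how many records traverse the PCIe bus, which is exactly the traffic that P2P DMA and on-device processing are designed to avoid.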
3. FPGA (Field Programmable Gate Array)
(Image credit: Mouser Electronics)
An FPGA is an integrated circuit made from logic blocks, I/O cells, and other elements that can be reprogrammed and reconfigured according to a user's specific requirements. FPGAs have gained popularity because of their ability to accelerate artificial intelligence workloads, such as machine learning and deep learning. FPGAs are also used in the making of SmartNICs because of their reprogrammable nature: FPGA-based SmartNICs can offload networking, storage, and security functions from the host server's CPU to the SmartNIC, freeing up precious CPU cycles for running enterprise applications and the operating system. The customizability and adaptability of FPGAs to the task at hand cannot be found elsewhere. That said, professional training is required for individuals tasked with programming them.
The Bottom Line
The bottom line is that SmartNICs are significantly more intelligent than regular NICs: they are equipped with multi-core processors and performance accelerators that can offload networking, security, and storage functions from data center server CPUs, freeing up precious CPU cycles to improve the performance of revenue-generating enterprise applications. Freeing up those cycles is becoming increasingly important as the volume and velocity of data being stored and accessed continue to grow. With Moore's Law and CPU performance improvements slowing down, data center operators are looking to performance accelerators such as SmartNIC-enabled servers to improve their data centers' performance. Additionally, SmartNICs are intelligent because they are programmable, providing OEMs and the organizations deploying them with a solid platform they can program according to their specific requirements. Premio offers a variety of SmartNIC servers for you to choose from. If you need assistance choosing a server equipped with SmartNICs, contact us and one of our SmartNIC server professionals will help you find a solution that meets your specific requirements.