LLM-1U-RPL Series

1U Edge AI Server for On-Prem LLM Workloads

Run generative AI locally with the LLM-1U-RPL Series, a 1U edge AI server designed for real-time large language model (LLM) inference. Powered by workstation-class GPUs, it supports models up to 40 billion parameters for low-latency, secure, on-premises AI processing without cloud dependency. Deploy it in industrial sites, field locations, or enterprise data centers to bring AI closer to your data.

  • Short-Depth 1U Form Factor
  • 13th Gen Intel® Core™ Processor (up to 65W TDP)
  • Supports NVIDIA® RTX™ 5000 Ada GPU for AI acceleration
  • Comprehensive I/O for edge connectivity and expansion
  • Cybersecurity-Ready with hardware-level protection
  • World-Class Certifications: UL, CE, FCC
 

Gen AI Inferencing

Active Cooling

PCIe 4.0 Expansion

GPU Acceleration

Hardware-Level Data Security

Designed for the On-Prem Data Center Edge

The LLM-1U-RPL is designed for deployment at the on-prem edge, serving as the final compute layer before sensor data is sent to the cloud. Positioned at this critical junction, it delivers local AI inferencing, data sovereignty, privacy, and reliability advantages.

On-Premises LLM Inferencing

Run LLM workloads locally for low-latency insights and decision-making.


Data Sovereignty and Privacy

Minimize cloud exposure of sensitive data and ensure data privacy compliance.


Industrial-Grade Reliability

Ensure continuous uptime and performance in mission-critical edge environments.
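As an illustration of the on-premises inferencing described above, the sketch below builds a request to an LLM runtime served on the box itself. The local Ollama endpoint and model name are assumptions for the example, not software bundled with the LLM-1U-RPL.

```python
import json
import urllib.request

# Assumed setup: an Ollama server running locally on the edge box (hypothetical).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a locally hosted model."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

# Usage (requires a running local server; model name is illustrative):
# req = build_request("llama3", "Summarize the last hour of sensor alerts.")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because the model runs on the same machine as the data source, no prompt or sensor data ever leaves the premises.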

Compact, Server Rack-Ready Design

Engineered with a short-depth 1U form factor, the LLM-1U Series fits seamlessly into space-constrained deployments such as micro data centers, control rooms, and mobile enclosures.

Real-Time Performance At The Edge

The LLM-1U-RPL leverages a performance hybrid architecture from Intel to streamline intensive industrial workloads while minimizing latency and maximizing efficiency.

13th Gen Intel® Core™ Processors

Up to 24 Cores & 32 Threads

Performance Hybrid Architecture with up to 8 P-Cores & 16 E-Cores

Dual-channel DDR4, up to 64GB

GPU Support for Gen AI Acceleration

The LLM-1U-RPL Series supports workstation-class GPUs that enable high-performance inferencing for on-prem LLM and generative AI workloads while maintaining thermal and form-factor efficiency.

  • 1,000+ AI TOPS with NVIDIA RTX™ 5000 Ada
  • Up to 250W GPU power support
  • Workstation-Class GPUs
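To see why GPU memory bounds the supported model size, here is a back-of-envelope footprint calculation. It assumes weights dominate memory and ignores KV-cache and activation overhead; with 4-bit quantization, a 40B-parameter model's weights come to roughly 20 GB, within the 32 GB VRAM of an RTX 5000 Ada.

```python
# Rough VRAM footprint of LLM weights at different precisions.
# Assumes weights dominate memory; KV cache and activations add overhead on top.

def weight_footprint_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB (1 GB = 10**9 bytes)."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    gb = weight_footprint_gb(40, bits)
    print(f"40B params @ {bits}-bit: ~{gb:.0f} GB")
# 16-bit: ~80 GB, 8-bit: ~40 GB, 4-bit: ~20 GB
```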

High-Speed PCIe 4.0 Expansion for AI Workloads

Beyond GPU support, PCIe 4.0 expansion provides the flexibility for demanding deployments that require high-throughput networking and other industrial add-in cards.

  • 1x PCIe 4.0 x16 for full-bandwidth accelerators
  • 2x PCIe 4.0 x8 for multiple add-on expansion cards
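For context on slot throughput, the arithmetic below estimates theoretical per-direction PCIe 4.0 bandwidth from the spec's 16 GT/s per-lane rate and 128b/130b encoding; real-world throughput is somewhat lower due to protocol overhead.

```python
# Theoretical per-direction PCIe bandwidth:
# rate (GT/s) * encoding efficiency / 8 bits-per-byte * lane count.

def pcie_gbps(rate_gt: float, lanes: int, enc_num: int = 128, enc_den: int = 130) -> float:
    """Usable GB/s per direction for a PCIe link (128b/130b encoding, Gen 3+)."""
    return rate_gt * (enc_num / enc_den) / 8 * lanes

print(f"PCIe 4.0 x16: ~{pcie_gbps(16, 16):.1f} GB/s per direction")
print(f"PCIe 4.0 x8 : ~{pcie_gbps(16, 8):.1f} GB/s per direction")
```

The x16 slot thus offers roughly 31.5 GB/s each way, enough to keep a full-bandwidth accelerator fed.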

Comprehensive IIoT Connectivity

To consolidate diverse, high-volume sensor data, the LLM-1U Series provides essential connectivity for seamless integration with IIoT cameras, sensors, and devices.

  • 6x USB 3.2 Gen2 (10Gbps)
  • 3x RJ45 (2.5GbE)
  • 2x COM

Scalable Storage & High-Speed NVMe

For performance-critical workloads, the LLM-1U-RPL supports high-speed NVMe alongside swappable SATA bays for easy maintenance and data access.

  • 1x M.2 M-Key slot for NVMe SSD
  • 2x hot-swappable 2.5" SATA drive bays

24/7 Uptime with Redundant Power and Cooling

The LLM-1U Series features redundant power supplies and smart cooling fans to ensure uninterrupted operation and simplified maintenance in critical edge AI deployments.

Hot-swappable 600W PSU (1+1)

Hot-swappable Fans

Hardware-Level Cybersecurity for Edge AI

The LLM-1U Series features multiple layers of hardware-level protection for physical security, anti-tampering, and data integrity in edge deployments.

Security Bezel

Chassis Intrusion Detection Switch

TPM 2.0

World-Class Safety Certifications

The LLM-1U Series has undergone rigorous testing and validation to achieve UL safety standards and IEC 62443-4-1 cybersecurity certification, delivering peace of mind and confidence for deployment in mission-critical environments.

  • IEC 62443-4-1
  • UL
  • CE, FCC