An Introduction to NVIDIA Jetson


Computers have evolved a long way into the machines we use today, and most modern systems follow a similar design structure, containing, at a minimum, a CPU, GPU, memory, and storage. To this day, much of what we know about computer design is built upon these crucial components, each of which plays a unique role in how much performance a system can deliver. The GPU in particular has evolved into a highly specialized and unique piece of that puzzle.
 

GPUs have long transcended their initial purpose of rendering graphics to become an indispensable tool for parallel processing, especially in today's AI and machine learning applications. With the rapid growth of these applications, GPUs have quickly risen in popularity as a key component for handling the heavy workloads now found in Industry 4.0 automation and robotics. A system that incorporates a GPU gains processing capabilities well beyond what a traditional CPU can deliver on its own: GPUs excel at executing large numbers of calculations simultaneously, making them ideal for the matrix and tensor operations that are fundamental to neural network training and edge AI inference.
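To make that parallelism concrete, here is a minimal sketch, assuming a CUDA-enabled PyTorch build such as the ones NVIDIA publishes for Jetson, that offloads a large matrix multiplication to the GPU and falls back to the CPU when no CUDA device is present. The matrix size and timing approach are illustrative only.

```python
import time
import torch

# Use the GPU if a CUDA device is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A single large matrix multiplication decomposes into millions of independent
# multiply-accumulate operations that a GPU can execute in parallel.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
c = a @ b
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{device.type}: 4096x4096 matmul in {elapsed_ms:.1f} ms")
```

On a GPU-equipped system the same line of code runs across thousands of CUDA cores; on a CPU it runs across a handful of cores, which is exactly the gap that makes GPUs attractive for neural network workloads.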

Because GPU technology is the foundation of parallel processing, NVIDIA has remained one of the primary designers of the semiconductor technology behind powerful graphics cards. The company's CUDA platform lets users all over the world program its graphics processing units to run everything from video games to heavy AI workloads. Industry 4.0 has benefited from the impact a GPU makes, especially amid today's boom in AI technology. As more AI applications reach the market, more embedded applications are looking to take advantage of heterogeneous, domain-specific compute architectures such as a GPU or a similar performance accelerator.

However, one troubling issue that many industrial applications will always face is the physics of power draw and heat dissipation, especially with GPUs. Discrete GPUs are large, generate a lot of heat, and draw a lot of power. Many consumer GPUs on the market today have power requirements that are simply not suited for industrial computers without resorting to some method of active cooling.

As a hardware manufacturer for 35 years, Premio has tackled this challenge when it comes to balancing power efficiency and performance in rugged industrial computing. Jetson Orin paves the way for manufacturers like us to provide cutting-edge solutions for today's new wave of edge AI computing. With Jetson Orin, Premio can meet the challenges of real-time AI processing and low-latency data telemetry at the rugged edge, setting new standards for power efficiency and performance. 

What is Jetson? 

NVIDIA recognized the challenges that many industrial settings face when implementing AI and edge computing, especially where GPUs are involved. As AI applications expanded beyond traditional data centers and cloud environments, there was a growing need for a dedicated platform that could seamlessly integrate AI into edge devices without the power and thermal drawbacks of a discrete graphics card. NVIDIA sought to democratize AI and create a more accessible solution for industrial applications: empowering developers, researchers, and businesses of all sizes to harness the power of AI without the complexities typically associated with large-scale computing infrastructure.

The result was NVIDIA's Jetson platform, a family of embedded AI computing devices designed to bring the transformative capabilities of deep learning and computer vision to edge devices and harsher, more remote environments. At its core, the Jetson series combines a high-performance GPU, dedicated AI hardware, and a comprehensive software stack to enable the deployment of intelligent applications in real-world scenarios. The pain points of traditional x86 systems paved the way for NVIDIA to create a platform that alleviates the challenges commonly faced by deployments around the world.

Understanding Jetson Modules 

Key Design Differentiators: 

Each NVIDIA Jetson is a complete system on module (SoM) that includes a GPU, CPU, memory, power management, high-speed interfaces, and more. These components are optimized to deliver high-performance AI inference while maintaining power efficiency, enabling edge devices to run complex AI algorithms efficiently. By combining all of the critical pieces into a single module, NVIDIA Jetson can be integrated into smaller, space-constrained solutions without sacrificing the performance or power efficiency that edge devices need.
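As a rough illustration of what the module integrates, the following sketch uses standard Linux interfaces exposed by Jetson's Ubuntu-based L4T software to report the CPU core count, total memory, and on-module thermal sensors. The sysfs and procfs paths are standard Linux conventions rather than a Jetson-specific API, and the sensors actually present vary by module.

```python
import glob
import os

def read_file(path: str) -> str:
    """Return the stripped contents of a procfs/sysfs file, or '' on failure."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return ""

# CPU: core count of the module's integrated Arm CPU complex.
print(f"CPU cores: {os.cpu_count()}")

# Memory: on Jetson the listed RAM is shared between the CPU and the GPU.
for line in read_file("/proc/meminfo").splitlines():
    if line.startswith("MemTotal"):
        print(line)

# Thermals: each zone corresponds to an on-module sensor (CPU, GPU, SoC, ...).
for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*")):
    name = read_file(os.path.join(zone, "type"))
    temp = read_file(os.path.join(zone, "temp"))  # reported in millidegrees C
    if name and temp:
        print(f"{name}: {int(temp) / 1000:.1f} C")
```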

What further sets the Jetson platform apart is NVIDIA's design philosophy. Jetson is not just a piece of specialized hardware; NVIDIA has also built an ecosystem of tools tailored for AI development on Jetson devices. This includes support for popular AI frameworks like TensorFlow and NVIDIA's own TensorRT, as well as libraries and APIs for computer vision and deep learning. Additionally, NVIDIA's commitment to open-source software and comprehensive documentation fosters a collaborative ecosystem, where the community can contribute to and benefit from the platform's continuous improvement. NVIDIA's JetPack SDK is the company's most comprehensive software package for Jetson products, giving customers a full solution for building end-to-end AI applications.
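As a concrete example of that software stack, the sketch below uses the TensorRT Python bindings included with JetPack to build an FP16 inference engine from an ONNX model. This is a minimal sketch assuming a TensorRT 8.x release; the model path is hypothetical, and production code would add workspace tuning, calibration, and fuller error handling.

```python
import tensorrt as trt  # installed on Jetson as part of the JetPack SDK

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str, engine_path: str) -> None:
    """Parse an ONNX model and serialize an FP16 TensorRT engine to disk."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 kernels on the Tensor Cores

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

# Hypothetical file names, for illustration only.
build_engine("model.onnx", "model_fp16.engine")
```

The serialized engine can then be loaded by the TensorRT runtime on the target Jetson for low-latency inference.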

Unique Design of Jetson

What sets the Jetson platform apart is not only its raw computing power but also its unique design philosophy. NVIDIA has crafted Jetson devices with a holistic approach, considering both hardware and software aspects to create a seamless development experience.  

Jetson Orin Module Lineup 

The latest generation of NVIDIA Jetson devices is the Jetson Orin lineup (launched in March 2023), offering three distinct series to tackle a variety of edge AI workloads that require real-time processing, data telemetry, and rich I/O flexibility. The modules in each series are summarized below.

 

Jetson AGX Orin Series

Jetson AGX Orin 64GB
• AI Performance: 275 TOPS
• GPU: 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores (max frequency 1.3 GHz)
• CPU: 12-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 3MB L2 + 6MB L3 (max frequency 2.2 GHz)

Jetson AGX Orin 32GB
• AI Performance: 200 TOPS
• GPU: 1792-core NVIDIA Ampere architecture GPU with 56 Tensor Cores (max frequency 930 MHz)
• CPU: 8-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3 (max frequency 2.2 GHz)

Jetson Orin NX Series

Jetson Orin NX 16GB
• AI Performance: 100 TOPS
• GPU: 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (max frequency 918 MHz)
• CPU: 8-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 2MB L2 + 4MB L3 (max frequency 2 GHz)

Jetson Orin NX 8GB
• AI Performance: 70 TOPS
• GPU: 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (max frequency 765 MHz)
• CPU: 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 1.5MB L2 + 4MB L3 (max frequency 2 GHz)

Jetson Orin Nano Series

Jetson Orin Nano 8GB
• AI Performance: 40 TOPS
• GPU: 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores (max frequency 625 MHz)
• CPU: 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 1.5MB L2 + 4MB L3 (max frequency 1.5 GHz)

Jetson Orin Nano 4GB
• AI Performance: 20 TOPS
• GPU: 512-core NVIDIA Ampere architecture GPU with 16 Tensor Cores (max frequency 625 MHz)
• CPU: 6-core Arm® Cortex®-A78AE v8.2 64-bit CPU, 1.5MB L2 + 4MB L3 (max frequency 1.5 GHz)
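Because every module in the lineup runs the same L4T/JetPack software stack, an application can check at startup which Orin module and release it has been deployed on. The sketch below reads two files that are typically present on Jetson's L4T-based OS; the exact strings vary by module and JetPack version, so the paths and sample values are assumptions rather than guarantees.

```python
def read_first_line(path: str) -> str:
    """Return the first line of a file, or 'unknown' if it cannot be read."""
    try:
        with open(path) as f:
            # Device-tree strings are NUL-terminated, so strip the trailing NUL.
            return f.readline().strip().rstrip("\x00")
    except OSError:
        return "unknown"

# Human-readable module/carrier name, e.g. an "NVIDIA Orin ..." string.
print("Model:", read_first_line("/proc/device-tree/model"))

# L4T (Linux for Tegra) release string installed by JetPack, e.g. "# R35 ...".
print("L4T:", read_first_line("/etc/nv_tegra_release"))
```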


Jetson AGX Orin Module

Jetson AGX Orin modules deliver up to 275 TOPS of AI performance with power configurable between 15W and 60W. This is the flagship model within the Jetson Orin lineup, offering the highest performance and capabilities. It is designed for demanding AI applications in areas such as autonomous vehicles, robotics, industrial automation, and smart cities. The AGX Orin module provides powerful processing capabilities suitable for applications that require real-time AI inference, sensor fusion, and high-performance computing.

Jetson Orin NX Module

Jetson Orin NX modules deliver up to 100 TOPS of AI performance in the smallest Jetson form factor, with power configurable between 10W and 25W. Positioned as a mid-range option, the Orin NX model offers a balance between performance, power efficiency, and cost-effectiveness. It is suitable for a wide range of AI edge computing applications, including smart cameras, drones, intelligent appliances, and embedded AI systems. The Orin NX module provides significant processing power while being more compact and cost-effective compared to the AGX module.

Jetson Orin Nano Module

Jetson Orin Nano series modules deliver up to 40 TOPS of AI performance in the smallest Jetson form factor, with power options between 7W and 15W. The Orin Nano is the entry-level model within the Jetson Orin lineup, targeting applications where lower power consumption and cost are prioritized while still requiring AI processing capabilities. It is suitable for edge AI devices, IoT (Internet of Things) devices, and other embedded systems where space, power, and cost limitations are critical factors.
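The configurable power budgets mentioned above are managed on the module through NVIDIA's nvpmodel and jetson_clocks utilities, which ship with L4T/JetPack. The sketch below is a minimal wrapper around those command-line tools; the mode ID used is illustrative, since the predefined modes differ per module and are listed in /etc/nvpmodel.conf.

```python
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its stdout, raising if it exits non-zero."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Query the currently active power mode (e.g. MAXN, 15W, 30W, ...).
print(run(["nvpmodel", "-q"]))

# Switch to another predefined mode; mode 0 is only an illustrative value.
# Changing modes requires root privileges, hence sudo.
print(run(["sudo", "nvpmodel", "-m", "0"]))

# Optionally pin clocks to their maximum for the selected power mode.
print(run(["sudo", "jetson_clocks"]))
```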

Applications

The versatility of the Jetson platform opens the door to a myriad of applications across various industries. Notable implementations include:

• Autonomous vehicles and robotics
• Industrial automation and smart cities
• Smart cameras, drones, and intelligent appliances
• IoT devices and other embedded AI systems

Choosing the Right Tool for the Job

Choosing NVIDIA Jetson does not necessarily discount discrete GPUs as a source of processing power. Rather, it gives system integrators and OEMs more options when deciding how to implement their solution for AI computing tasks. There are always factors to consider for an industrial application, and there are many cases where a discrete GPU is the more beneficial choice than Jetson, and vice versa.

• GPU: Opt for discrete GPUs when training large-scale neural networks or executing complex AI algorithms that demand immense computational power. GPUs excel in scenarios where massive parallelism and raw compute capability are paramount.
• Jetson: Choose Jetson devices for deploying AI applications at the edge, where factors like low latency, power efficiency, and a compact form factor are critical. Jetson excels in scenarios requiring real-time inference and integration into small, resource-constrained devices.
• Other AI accelerators (TPUs): Another option is a different route altogether, where a tensor processing unit may be more favorable than either a GPU or Jetson. In cases where a transition from x86 to Arm would prove more detrimental than beneficial, these accelerators offer a low-power, fast-performing alternative to both discrete GPUs and NVIDIA Jetson.

To learn more about the differences and benefits of each type of performance accelerator, check out our complete overview that dives into the key differentiators of GPUs, TPUs, and more.

Conclusion

In the dynamic landscape of AI computing, both GPUs and the NVIDIA Jetson platform play indispensable roles, each catering to distinct requirements and use cases. By understanding the strengths and applications of these technologies, developers and engineers can leverage them effectively to drive innovation and unlock the full potential of artificial intelligence across diverse domains. Whether harnessing the parallel processing prowess of GPUs or deploying AI directly at the edge with Jetson, the future of AI computing is ripe with possibilities, waiting to be explored and harnessed for the betterment of society.

               

Premio's latest lineup of Jetson Orin AI edge computers taps into the design of Jetson Orin to offer a new family of rugged, fanless edge AI industrial computers. The JCO Series is Premio's first line built on Arm-based architecture, offering three scalable models: entry-level (JCO-1000-ORN), mid-level (JCO-3000-ORN), and high-performance (JCO-6000-ORN), for real-time AI computing power at the rugged edge.