| Category | Specification |
|---|---|
| Product Name | NVIDIA Jetson AGX Orin 32GB Developer Kit (Y-C8-DEV-ORIN32) |
| Core Module | Jetson AGX Orin 32GB AI compute module (900-13701-0040-000) |
| AI Performance | Up to ~200 TOPS AI inference performance |
| GPU | NVIDIA Ampere architecture with 1792 CUDA cores & 56 Tensor Cores |
| CPU | 8-core Arm® Cortex-A78AE v8.2 64-bit CPU (up to ~2.2 GHz) |
| Memory | 32 GB 256-bit LPDDR5 (~204.8 GB/s) |
| Storage | 64 GB eMMC 5.1 onboard |
| USB Ports | 3× USB 3.2 Gen2, 4× USB 2.0 |
| Networking | 1× Gigabit Ethernet (RJ45) |
| Camera Interfaces | Up to 6 cameras via 16-lane MIPI CSI-2 |
| Display | 1× 8K60 multi-mode DisplayPort / HDMI output |
| I/O Interfaces | GPIO, SPI, I2C, RS-232, UART, CAN |
| Carrier Board | Reference carrier with power, cooling fan, and expansion headers |
| Power Supply | DC +12 V |
| Operating Temp. | ~-20 °C to +65 °C |
| Physical Size | ~188 × 170 × 43 mm |
| Typical Uses | AI prototyping, robotics, autonomy, industrial AI |
Product Description:
The NVIDIA Jetson AGX Orin 32GB Developer Kit (Y-C8-DEV-ORIN32) is an AI-centric development platform for prototyping and deploying edge AI, robotics, autonomous machines, and industrial intelligent systems. Built around the Jetson AGX Orin 32GB module, the kit delivers up to around 200 TOPS of AI inference performance from an NVIDIA Ampere-architecture GPU with 1792 CUDA cores and 56 Tensor Cores, paired with an 8-core Arm® Cortex-A78AE CPU and 32 GB of high-bandwidth LPDDR5 memory. This combination allows it to run complex neural network models and multi-sensor workloads in real time.
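The quoted memory bandwidth (~204.8 GB/s) follows directly from the 256-bit bus width once a transfer rate is assumed. A minimal sketch of that arithmetic, where the 6400 MT/s effective LPDDR5 data rate is an assumption not stated in the table (it is the rate consistent with the quoted figure):

```python
# Sanity-check the quoted LPDDR5 bandwidth from the spec table.
BUS_WIDTH_BITS = 256   # from the spec table
DATA_RATE_MTS = 6400   # assumed effective LPDDR5 transfer rate, MT/s

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_mts: int) -> float:
    """Peak theoretical bandwidth in GB/s: bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8           # 256 bits -> 32 bytes
    return bytes_per_transfer * data_rate_mts / 1000  # MT/s -> GB/s

print(peak_bandwidth_gbs(BUS_WIDTH_BITS, DATA_RATE_MTS))  # → 204.8
```

Real-world throughput will be lower than this theoretical peak, but the calculation shows the table's figures are internally consistent.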
The developer kit features a reference carrier board with robust I/O and expansion options, including multiple USB 3.2 and USB 2.0 ports, Gigabit Ethernet networking, camera interfaces over MIPI CSI-2 lanes, and industrial-style interfaces such as GPIO, SPI, I2C, RS-232/UART, and CAN. A 64 GB eMMC 5.1 device provides onboard storage, and the carrier includes a cooling fan and power circuitry for stable operation in testing environments.
With support for NVIDIA’s JetPack software stack — including CUDA, TensorRT, DeepStream, and other libraries — the Y-C8-DEV-ORIN32 kit is ideal for developers building high-performance AI applications such as autonomous robots, advanced vision systems, predictive maintenance platforms, and edge analytics solutions. Its rugged design and wide operating temperature range help ensure reliable performance during extended development cycles and field trials.
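After flashing JetPack, a quick sanity check is to verify that the CUDA runtime shared library is visible on the system. A minimal sketch using only the Python standard library (the library name `cudart` is the standard CUDA runtime soname; the rest is illustrative):

```python
import ctypes.util

def cuda_runtime_present() -> bool:
    """Return True if a CUDA runtime shared library is discoverable.

    On a Jetson flashed with JetPack this should locate libcudart from the
    CUDA toolkit; on machines without CUDA it simply returns False.
    """
    return ctypes.util.find_library("cudart") is not None

if __name__ == "__main__":
    if cuda_runtime_present():
        print("CUDA runtime found - JetPack CUDA components appear installed.")
    else:
        print("CUDA runtime not found - check the JetPack installation.")
```

This only confirms the library is on the loader path; exercising the GPU itself would require running a CUDA sample or a framework such as TensorRT on the device.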