Hardware Kit
Jetson / CUDA
ARM CPU + NVIDIA GPU for edge AI and accelerated autonomy. Best for heavy perception or large map processing with CUDA-enabled modules.
Tags: CUDA, Edge GPU, Linux
Compute: ARM CPU + NVIDIA GPU
Memory: 4-64 GB unified memory
OS/RTOS: JetPack (Ubuntu-based)
Toolchain: JetPack, nvcc, CUDA Toolkit
Power: 5-30 W power modes
Typical Boards
- Jetson Orin Nano / Orin NX
- Jetson Xavier NX
- Jetson AGX Orin for max throughput
Toolchain + Build Profile
- JetPack-matched CUDA Toolkit + nvcc
- Match compute capability to the deployed Jetson SKU (Orin/Xavier); a runtime check is sketched after the build commands
- CMake with -DENABLE_CUDA=ON and explicit architecture flags when needed
- Nsight Systems / Nsight Compute for profiling
cmake .. -DTARGET=cuda -DENABLE_CUDA=ON
cmake --build .
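As a sanity check that a build matches the deployed SKU, the CUDA runtime can report the device's compute capability (Orin reports 8.7, Xavier 7.2). A minimal sketch, compiled with the JetPack-matched nvcc; the file name and layout are arbitrary:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Device 0 is the integrated GPU on a Jetson module.
    cudaDeviceProp prop{};
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // Compare major.minor against the architectures the binary was built for
    // (e.g. sm_87 for Orin, sm_72 for Xavier NX).
    printf("%s: compute capability %d.%d, %zu MiB memory\n",
           prop.name, prop.major, prop.minor,
           prop.totalGlobalMem / (1024 * 1024));
    return 0;
}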
Pin-Level + Electrical
- 3.3V logic on GPIO header
- CSI camera interfaces (MIPI CSI-2)
- I2C/SPI/UART for sensors and peripherals
- Use level shifting for 5V devices
Sensors + Peripherals
- Multi-camera arrays via CSI
- LiDAR via Ethernet/USB
- IMU via I2C/SPI
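Before bringing up an IMU driver, it can help to confirm the device answers on the I2C bus from userspace. A minimal sketch; the bus (/dev/i2c-1), address (0x68), and identification register (0x75, the WHO_AM_I register on many InvenSense MPU-series parts) are assumptions to adjust for the carrier board and sensor:

#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main() {
    // Assumed bus and address; check the carrier board pinout and sensor datasheet.
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x68) < 0) { perror("ioctl I2C_SLAVE"); return 1; }

    // Write the register address, then read back one identification byte.
    unsigned char reg = 0x75, val = 0;
    if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
        perror("i2c transfer");
        return 1;
    }
    printf("WHO_AM_I: 0x%02x\n", val);
    close(fd);
    return 0;
}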
Comms + Networking
- Gigabit Ethernet, Wi-Fi (module dependent)
- CAN via add-on or USB adapter
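Once the add-on or USB adapter exposes a SocketCAN interface, frames can be read with the standard raw CAN socket API. A minimal sketch; the interface name can0 is an assumption, and link bring-up (bitrate, setting the interface up) is adapter-specific:

#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    // Resolve the interface index for the assumed device name "can0".
    struct ifreq ifr;
    strncpy(ifr.ifr_name, "can0", IFNAMSIZ);
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("SIOCGIFINDEX"); return 1; }

    struct sockaddr_can addr = {0};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    // Block until one frame arrives, then print its ID and payload length.
    struct can_frame frame;
    if (read(s, &frame, sizeof(frame)) < 0) { perror("read"); return 1; }
    printf("CAN ID 0x%03X, %d bytes\n", frame.can_id & CAN_EFF_MASK, frame.can_dlc);

    close(s);
    return 0;
}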
Real-Time + Determinism
- GPU workloads are async; manage CPU/GPU synchronization
- Use pinned memory for stable transfer rates
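A minimal sketch of the pattern behind both bullets: allocate pinned (page-locked) host buffers, queue transfers and kernels asynchronously on a stream, and synchronize at one deliberate point instead of scattering implicit syncs. The buffer size and the trivial kernel are placeholders; error checks are omitted for brevity:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1 << 20;                              // placeholder buffer size
    const size_t bytes = n * sizeof(float);

    float *h_buf = nullptr, *d_buf = nullptr;
    cudaHostAlloc(&h_buf, bytes, cudaHostAllocDefault); // pinned host memory: DMA-friendly, stable transfer rates
    cudaMalloc(&d_buf, bytes);
    for (int i = 0; i < n; ++i) h_buf[i] = 1.0f;

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Copy-in, kernel, and copy-out are queued on one stream; the CPU stays free until the explicit sync.
    cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n, 2.0f);
    cudaMemcpyAsync(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);                      // single, deliberate synchronization point

    printf("h_buf[0] = %.1f\n", h_buf[0]);

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}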
Memory + Thermal Constraints
- Unified memory is shared by the CPU and GPU, so the map volume must be budgeted conservatively (a query-and-budget sketch follows this list).
- Prolonged perception workloads can trigger thermal throttling without active cooling.
- Use nvpmodel plus sustained-clock testing before field deployment.
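Because the CPU and GPU draw from the same pool, one way to budget conservatively is to query free memory at startup and cap the map/workspace allocation at a fraction of it rather than using a fixed size. A sketch; the 50% budget is an arbitrary policy:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    // On Jetson this reflects the shared CPU/GPU (unified) memory pool.
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed\n");
        return 1;
    }

    // Cap the map volume at half of what is currently free (arbitrary policy).
    size_t map_budget = free_bytes / 2;
    printf("free %zu MiB / total %zu MiB, map budget %zu MiB\n",
           free_bytes >> 20, total_bytes >> 20, map_budget >> 20);

    void *map_mem = nullptr;
    if (cudaMalloc(&map_mem, map_budget) != cudaSuccess) {
        fprintf(stderr, "allocation failed; reduce map resolution or extent\n");
        return 1;
    }
    cudaFree(map_mem);
    return 0;
}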
Recommended Vault Modules
Pitfalls + Mitigations
- Thermal throttling under sustained GPU load; mitigate with active cooling and sustained-clock testing
- CPU/GPU sync stalls when transfers are not batched; batch and overlap transfers on streams
- Power mode misconfiguration reduces throughput; set and verify the nvpmodel mode for the workload
Field Checklist
- Set Jetson power mode (nvpmodel) for workload
- Profile GPU kernels and avoid small launches (a quick timing sketch follows this list)
- Verify cooling and sustained clocks
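For the kernel-profiling item, Nsight gives the full picture, but a quick event-timing harness around a suspect kernel already shows whether launch overhead (on the order of microseconds per launch) dominates, which is the signature of launches that are too small. dummy_kernel and its launch shape are placeholders:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummy_kernel(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i * 0.5f;
}

int main() {
    const int n = 1 << 16;                 // placeholder problem size
    float *d_out = nullptr;
    cudaMalloc(&d_out, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time one launch; if elapsed time barely changes as n shrinks, launch overhead dominates
    // and work should be batched into fewer, larger launches.
    cudaEventRecord(start);
    dummy_kernel<<<(n + 255) / 256, 256>>>(d_out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return 0;
}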