Objective
The primary objective of this lab is to introduce students to the NVIDIA Jetson Orin Nano development platform and demonstrate how to build a real-time image classification system. Students will configure their devices in both graphical and headless modes, set up the development environment using Docker, and deploy a deep learning model for classifying hand gestures (thumbs up and thumbs down) using a live camera feed.
Learning Outcomes
Upon completion of this lab, students will be able to:
- Configure the Jetson Orin Nano board and connect via SSH in headless mode.
- Set up and launch JupyterLab using NVIDIA DLI Docker containers.
- Use IPython widgets (ipywidgets) within Jupyter notebooks to collect training data from a webcam.
- Fine-tune a pre-trained ResNet-18 model for binary image classification tasks.
- Evaluate and test model predictions in real time using a live webcam feed.
- Iterate and improve the model by collecting more diverse training data.
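The prediction probabilities evaluated in real time are a softmax over the model's two output logits, which is what the notebook's slider widgets display. A minimal pure-Python sketch of that conversion (the function name and example logits are illustrative, not taken from the DLI notebook):

```python
import math

def softmax(logits):
    """Convert raw model logits into class probabilities (numerically stable)."""
    m = max(logits)                            # subtract the max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example: hypothetical logits for [thumbs_up, thumbs_down] from the final layer.
probs = softmax([2.0, -1.0])
```

The two resulting probabilities always sum to 1, so each slider position directly reflects the model's confidence in one class.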
Lab Tasks
- Hardware and OS Setup: Connect the Jetson Orin Nano to a monitor, mouse, and keyboard, or establish headless access over a USB Type-C data cable. Log into the system and connect to Wi-Fi.
- Docker Environment Configuration: Create a persistent data directory and execute the docker_dli_run.sh script to pull and run the DLI AI container.
- Launching Jupyter Notebook: Access the JupyterLab interface in a browser at 192.168.55.1:8888 and open the notebook classification_interactive.ipynb.
- Data Collection: Use the notebook’s interactive widget to capture images for two classes — thumbs up and thumbs down — through a connected webcam.
- Model Training: Train a modified ResNet-18 model with a final layer adjusted for binary classification. Monitor the training progress through live loss and accuracy metrics.
- Live Testing: Evaluate the trained model by showing real-time hand gestures to the camera. Interpret prediction probabilities via slider widgets.
- Model Improvement: Collect additional images with varying backgrounds, angles, and lighting conditions, then retrain the model to improve accuracy and robustness.
Technologies Used
- Jetson Orin Nano (Ubuntu-based platform)
- Docker with NVIDIA runtime (nvcr.io/nvidia/dli/dli-nano-ai)
- JupyterLab with Python notebooks
- ResNet-18 (PyTorch-based)
- Webcam input and live video stream analysis
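During data collection, captured webcam frames are typically saved into one folder per class. A hypothetical sketch of that layout (the directory names and helper function below are assumptions for illustration, not the notebook's actual API):

```python
from pathlib import Path

# Assumed dataset layout: data/thumbs/<class_name>/<index>.jpg
DATASET_DIR = Path("data/thumbs")
CLASSES = ["thumbs_up", "thumbs_down"]

def next_frame_path(class_name: str) -> Path:
    """Return the path where the next captured frame for a class should be saved."""
    class_dir = DATASET_DIR / class_name
    class_dir.mkdir(parents=True, exist_ok=True)
    count = len(list(class_dir.glob("*.jpg")))  # frames already captured
    return class_dir / f"{count:04d}.jpg"
```

Keeping one folder per class makes the per-class image counts (shown in the notebook's widget) trivial to compute and lets standard dataset loaders infer labels from the directory structure.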
Expected Deliverables
- Screenshots of the Jupyter interface with trained model outputs.
- Brief written report including training accuracy and observations.
- Notes on data augmentation strategies used for improving model performance.