The following guidelines apply to all laboratory sessions in ECE-381: Applied Machine Learning. They ensure consistency, reproducibility, and effective use of the Jetson Orin Nano platform and its associated toolkits.
- Jetson Orin Nano Setup
  - Ensure your Jetson Orin Nano board is properly connected to a keyboard, mouse, and monitor for standard mode, or via USB-C to your laptop for headless mode.
  - For headless access, connect to the board with `ssh <username>@192.168.55.1` from your terminal or PowerShell.
  - Use the default login credentials provided by the instructor:
    - Username: `ECE381-<Jetson #>`
    - Password: `machinelearning<Jetson #>`
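As a quick reference, a typical headless login looks like the following. The IP address and credential pattern come from the setup notes above; the board number `07` is a placeholder for your assigned Jetson number.

```shell
# Connect to the Jetson over the USB-C device-mode network interface.
# Replace "07" with your assigned Jetson number.
ssh ECE381-07@192.168.55.1
# When prompted, enter the matching password (machinelearning07 in this example).
```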
- Docker Environment
  - All labs rely on Docker containers to isolate the machine learning environments. Ensure that Docker with the NVIDIA runtime is installed and functional.
  - Use the provided scripts (e.g., docker_dli_run.sh or a container-specific run.sh) to initialize lab environments.
  - For persistent storage of notebooks, data, and outputs, always mount local directories using Docker's -v flag.
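To illustrate the `-v` mounting described above, a typical container launch might look like this. The image name and host path are placeholders; your lab's run script sets the real values.

```shell
# Launch a lab container with the NVIDIA GPU runtime and a persistent workspace.
# "my-lab-image" and the host path are hypothetical; substitute your lab's values.
sudo docker run --runtime nvidia -it --rm \
    --network host \
    -v /home/$USER/lab_data:/workspace/lab_data \
    my-lab-image:latest
```

Files written under `/workspace/lab_data` inside the container survive restarts because they live on the host filesystem, not in the container's ephemeral layer.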
- JupyterLab and WebUI Access
  - For Labs 1 and 2, access JupyterLab through the browser at http://192.168.55.1:8888. The default password is dlinano.
  - For Labs 3 and 4, the respective WebUIs are hosted at addresses such as http://0.0.0.0:7860 (Stable Diffusion) or at local URLs printed in the terminal. If a printed URL shows 0.0.0.0, replace it with the board's IP (e.g., 192.168.55.1) when browsing from your laptop. Always open the links in Chrome or Firefox.
- Camera and Device Usage
  - Ensure the USB webcam is connected before launching Docker containers. If the container cannot detect /dev/video0, restart Docker and reconnect the device.
  - Only one process can access the camera at a time. If errors arise, shut down all active kernels or containers using the camera.
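One way to verify the camera before launching a container is to check for its device node on the host. The `v4l2-ctl` step is optional and assumes the v4l-utils package is installed.

```shell
# List video device nodes; the USB webcam normally appears as /dev/video0.
ls -l /dev/video*

# Optionally confirm the device responds (requires the v4l-utils package).
v4l2-ctl --device=/dev/video0 --all

# When launching a container manually, pass the device through with:
#   docker run ... --device /dev/video0 ...
```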
- SSD and Storage Management
  - For Labs 3b and 4, external SSDs must be mounted at /mnt/nvme/my_storage.
  - Format the SSD using ext4 and configure /etc/fstab to enable auto-mount at boot.
  - Always store large models and generated outputs on the SSD to avoid filling up the Jetson's internal storage.
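The formatting and fstab steps above can be sketched as follows. The device name `/dev/nvme0n1` is typical for an NVMe SSD on the Jetson but is an assumption here; always confirm it with `lsblk` first, since formatting the wrong device destroys its data.

```shell
# Identify the SSD (typically /dev/nvme0n1 on the Jetson; confirm with lsblk).
lsblk

# Format the SSD with ext4. WARNING: this erases all data on the device.
sudo mkfs.ext4 /dev/nvme0n1

# Create the mount point used by Labs 3b and 4, then mount the drive.
sudo mkdir -p /mnt/nvme
sudo mount /dev/nvme0n1 /mnt/nvme
mkdir -p /mnt/nvme/my_storage

# Add an /etc/fstab entry so the SSD auto-mounts at boot.
echo '/dev/nvme0n1 /mnt/nvme ext4 defaults 0 2' | sudo tee -a /etc/fstab
```

Using the filesystem UUID from `blkid` in the fstab entry, rather than the raw device name, is more robust if multiple drives are attached.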
- Model Execution Guidelines
  - Restart Docker containers if you encounter CUDA or memory issues.
  - Always run conversion scripts (e.g., ModelConversion.py) before deploying YOLOv11n models for fast inference.
  - Use GPU-optimized models (TensorRT where applicable) for efficient frame processing and real-time output.
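For context on what the conversion step does: the lab's authoritative procedure is the provided ModelConversion.py, but if the Ultralytics package is installed (an assumption, not confirmed by the lab materials), the same idea can be expressed with its command-line exporter:

```shell
# Conceptual sketch only -- follow ModelConversion.py as instructed in the lab.
# Export a PyTorch YOLO model to a TensorRT engine for faster GPU inference:
yolo export model=yolo11n.pt format=engine
# The resulting .engine file is then loaded for real-time inference.
```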
- Submission and Documentation
  - Each lab must be submitted as a report (one per group unless specified otherwise). Include:
    - Task descriptions
    - Screenshots of results
    - Observations and conclusions
    - Challenges and proposed improvements
  - Ensure your group partner's name is clearly stated in the report.
  - Naming convention: LabX_GroupY_Report.pdf
- Troubleshooting Tips
  - If your camera freezes in Jupyter or a WebUI, shut down the kernel or container, restart it, and rerun all code cells.
  - If Docker throws numpy auto-update errors, rerun the same command as instructed.
  - If you encounter GPU-related failures, check available memory using tegrastats or reboot the Jetson board.
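To act on the tegrastats suggestion above, run it on the Jetson host (outside the container) and watch the memory and GPU fields. The interval flag is in milliseconds.

```shell
# Print a resource snapshot once per second; press Ctrl+C to stop.
# The RAM field shows used/total memory; GR3D_FREQ reflects GPU utilization.
sudo tegrastats --interval 1000
```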