Object Detection
Object Detection is a technique that helps computers find and label multiple objects in an image, such as people, cars, or animals in a photo. Whereas Image Classification associates a single label with a whole image, Object Detection draws bounding boxes around individual objects in an image. This makes it possible to track the position and behavior of objects in a scene.
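To make the contrast with classification concrete, a detection result can be thought of as a list of (label, confidence, box) records rather than a single label. The sketch below is illustrative only; the `Detection` class and its field layout are assumptions, not part of any Astra API.

```python
from dataclasses import dataclass

# Hypothetical structure: each detection pairs a class label with a
# confidence score and a bounding box, unlike classification's single label.
@dataclass
class Detection:
    label: str                      # class name, e.g. "dog"
    confidence: float               # score in [0, 1]
    box: tuple[int, int, int, int]  # pixel coordinates

detections = [
    Detection("dog", 0.92, (133, 219, 177, 315)),
    Detection("car", 0.57, (468, 79, 254, 86)),
]

# Keep only confident detections, e.g. for downstream tracking
confident = [d for d in detections if d.confidence > 0.6]
print([d.label for d in confident])  # ['dog']
```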
YOLO (You Only Look Once) models
YOLO, which stands for "You Only Look Once," is a family of computer vision models used for real-time object detection. A YOLO model identifies and locates objects within an image by processing it in a single pass, meaning it only needs to "look" at the image once, which makes it very fast compared to methods that require multiple passes.
YOLO was originally created by Joseph Redmon in 2016 and later transitioned to a community model with significant contributions from Ultralytics, whose latest version is YOLO11.
Source: Ultralytics
Run Object Detection on Astra
This assumes you are familiar with setting up your Astra board. If not, please refer to the setup tutorial.
This quick guide is compatible with all Machina SL2600-Series kits running the OOBE image, which has pip and Python pre-installed, with optimization tailored to:
NPU for SL2610-Series
To run the example on Astra, first work through the Prerequisites, which download the GitHub examples repository onto your board.
Set up your environment
If you haven't done so already, set up your environment.
Clone our examples GitHub repository and navigate to the repository directory:
git clone https://github.com/synaptics-astra-demos/sl2610-examples
cd sl2610-examples
Set up your Python environment, ensuring all required dependencies are installed within a virtual environment:
python3 -m venv .venv --system-site-packages
source .venv/bin/activate
pip install -r requirements.txt
Set up the display environment (required for visual output).
export XDG_RUNTIME_DIR=/var/run/user/0
export WAYLAND_DISPLAY=wayland-1
Object Detection on the edge
cd Object_detection/standalone/
python3 object_detection.py \
--model yolov8_od.vmfb \
--image dog_bike_car.jpg \
--labels labels.json \
--device torq
You should see a result in the form of:
Detections:
dog Conf: 0.9186 Box: [133 219 177 315]
car Conf: 0.5663 Box: [468 79 254 86]
bicycle Conf: 0.5663 Box: [151 137 412 280]
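If you need the detections in a structured form, one way is to parse the printed lines with a regular expression. This is a minimal sketch assuming the exact line format shown above (`<label> Conf: <score> Box: [x y w h]`); it is not part of the example script.

```python
import re

# Sample output in the format printed by the example above
sample = """Detections:
dog Conf: 0.9186 Box: [133 219 177 315]
car Conf: 0.5663 Box: [468 79 254 86]
bicycle Conf: 0.5663 Box: [151 137 412 280]"""

# Matches "<label> Conf: <score> Box: [a b c d]"
pattern = re.compile(r"(\w+) Conf: ([\d.]+) Box: \[(\d+) (\d+) (\d+) (\d+)\]")

detections = []
for line in sample.splitlines():
    m = pattern.match(line)
    if m:  # the "Detections:" header line simply doesn't match
        label, conf = m.group(1), float(m.group(2))
        box = tuple(int(v) for v in m.group(3, 4, 5, 6))
        detections.append((label, conf, box))

print(detections[0])  # ('dog', 0.9186, (133, 219, 177, 315))
```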

Python walkthrough
The Python example above uses the Torq platform to perform NPU-accelerated inference directly from Python. Review the code in ./Object_detection/standalone/object_detection.py to see how it works.
Learn more about Torq in the Torq GitHub repository.
This application calls the IREE runtime, passing in the device type (torq in our case), the model path, the input, and the output.
executable = "iree-run-module"
cmd = [
    executable,
    f"--device={device}",
    f"--module={model_path}",
    "--function=main",
    f"--input=@{input_npy_path}",
    f"--output=@{output_bin_path}"
]
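The command list above is then handed to the operating system. A minimal self-contained sketch of that step is shown below, using Python's standard subprocess module; the variable values are assumptions mirroring the snippet, and the actual run is commented out since iree-run-module must be installed on the board.

```python
import subprocess

# Hypothetical values mirroring the snippet above
device = "torq"
model_path = "yolov8_od.vmfb"
input_npy_path = "input.npy"
output_bin_path = "output.bin"

cmd = [
    "iree-run-module",
    f"--device={device}",
    f"--module={model_path}",
    "--function=main",
    f"--input=@{input_npy_path}",
    f"--output=@{output_bin_path}",
]

# On the board, subprocess.run executes the tool and raises on a
# non-zero exit code:
# subprocess.run(cmd, check=True)
print(" ".join(cmd))
```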
You can find a tutorial on compiling YOLO for the Synaptics Astra NPU.