Edge AI made efficient
Synaptics Astra™ equips developers with best-in-class edge AI hardware and open-source tools — for product innovation with a proven path to scale.
CES 2025
On-device AI Assistant Demo
An efficient, on-device contextual AI voice assistant for quick, private, multi-modal interactions without cloud dependency.
Learn more →
Get started in minutes
Get started
Get started today with embedded AI models optimized for Synaptics Astra GPU and NPU.
Learn more →
Build apps with Ultralytics YOLO
Compile YOLO11 and YOLOv8 models for edge computer vision applications.
Learn more →
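The export step of this flow can be sketched in a few lines with the Ultralytics Python API; the checkpoint name and image size below are illustrative defaults, and the resulting .tflite file is what you feed to the SyNAP converter shown under "Bring your own model" further down this page.

from ultralytics import YOLO

# Load a pretrained YOLO11 nano checkpoint (downloaded on first use).
model = YOLO("yolo11n.pt")

# Export to TensorFlow Lite; the generated .tflite file can then be
# compiled for the Astra NPU with `synap convert` (see below).
model.export(format="tflite", imgsz=640)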
Optimize models for NPU
Optimize models for cost- and power-efficient, NPU-accelerated MPUs and MCUs.
Learn more →
Models ready to go
Get your project started in minutes with the optimized models preinstalled on Synaptics Astra; a minimal usage sketch follows the list.
- Convert NV12@1920x1080 to RGB@1920x1080
- image processing
A preprocessing model to convert NV12 formatted images from 1920x1080 resolution to RGB format at the same resolution.
- Convert NV12@1920x1080 to RGB@224x224
- image processing
A preprocessing model to convert NV12 formatted images from 1920x1080 resolution to RGB format at 224x224 resolution.
- Convert NV12@1920x1080 to RGB@640x360
- image processing
A preprocessing model to convert NV12 formatted images from 1920x1080 resolution to RGB format at 640x360 resolution.
- Inception V4 299 Quant
- image classification
A quantized Inception V4 model optimized for image classification on ImageNet at 299x299 resolution.
- MobileNet V1 0.25 224 Float
- image classification
A MobileNet V1 model with a 0.25 width multiplier optimized for image classification on ImageNet at 224x224 resolution.
- MobileNet V1 0.25 224 Quant
- image classification
A quantized MobileNet V1 model with a 0.25 width multiplier optimized for image classification on ImageNet at 224x224 resolution.
- MobileNet V2 1.0 224 Float
- image classification
A MobileNet V2 model with a 1.0 width multiplier optimized for image classification on ImageNet at 224x224 resolution.
- MobileNet V2 1.0 224 Quant
- image classification
A quantized MobileNet V2 model with a 1.0 width multiplier optimized for image classification on ImageNet at 224x224 resolution.
- MobileNet224 Full1
- object detection
A lightweight MobileNet model optimized for people detection on high-resolution images.
- MobileNet224 Full80
- object detection
A MobileNet model optimized for detecting the 80 COCO object classes at 224x224 input resolution.
- PoseNet MobileNet 0.75 Float
- object detection
- pose estimation
A PoseNet model using the MobileNet architecture with a 0.75 width multiplier for efficient body pose estimation.
- PoseNet MobileNet 0.75 Quant
- object detection
- pose estimation
A quantized PoseNet model using the MobileNet architecture with a 0.75 width multiplier for efficient body pose estimation.
- SR Fast Y UV 1280x720 to 3840x2160
- image processing
A fast super-resolution model converting YUV images from 1280x720 to 3840x2160 resolution.
- SR Fast Y UV 1920x1080 to 3840x2160
- image processing
A fast super-resolution model converting YUV images from 1920x1080 to 3840x2160 resolution.
- SR QDEO Y UV 1280x720 to 3840x2160
- image processing
A QDEO-based super-resolution model converting YUV images from 1280x720 to 3840x2160 resolution.
- SR QDEO Y UV 1920x1080 to 3840x2160
- image processing
A QDEO-based super-resolution model converting YUV images from 1920x1080 to 3840x2160 resolution.
- YOLOv5m 640x480
- object detection
A YOLOv5m model for object detection optimized for 640x480 resolution, offering a balance between speed and accuracy.
- YOLOv5s 640x480
- object detection
A YOLOv5s model for object detection optimized for 640x480 resolution, suitable for real-time applications.
- YOLOv5s Face 640x480 ONNX MQ
- face detection
- object detection
A YOLOv5s model specialized for face detection, optimized for 640x480 resolution using ONNX with mixed quantization.
- YOLOv8s Pose
- object detection
- pose estimation
A YOLOv8s model specialized for body pose estimation, optimized for real-time applications.
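To exercise one of these preinstalled models from the board, something along the following lines should work in Python. Note that the synap module name, the Network methods, and the on-device model path are assumptions drawn from typical SyNAP installs, not verified API; check the SyNAP AI Toolkit docs referenced below for the exact calls.

import numpy as np
# Assumption: the SyNAP Python bindings are importable as `synap` on the board.
from synap import Network

# Hypothetical path; preinstalled models typically live under /usr/share/synap/models.
net = Network("/usr/share/synap/models/image_classification/imagenet/model/mobilenet_v2_1.0_224_quant/model.synap")

# Feed a dummy 224x224 RGB frame and read back the class scores.
frame = np.zeros((224, 224, 3), dtype=np.uint8)
net.inputs[0].assign(frame.tobytes())  # assumed input-assignment method
net.predict()
scores = np.array(net.outputs[0].to_numpy())  # assumed output accessor
print("top class:", scores.argmax())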
Edge AI efficiency
The hardware-aware SyNAP compiler targets the exact NPU or GPU resources available on-chip, which can significantly improve inference speed. There are also advanced optimization options, such as mixed-width and per-channel quantization.
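SyNAP applies these quantization options at model-conversion time. As a framework-side illustration of what per-channel post-training quantization involves, here is a standard TensorFlow Lite sketch; the saved-model directory and the random calibration data are placeholders, and real input frames should be used for calibration in practice.

import numpy as np
import tensorflow as tf

# Standard TFLite post-training int8 quantization; conv weights are
# quantized per-channel by default under this scheme.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data():
    # Placeholder calibration set; use ~100 real input frames in practice.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())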
Bring your own model
Have a different model you'd like to bring? Target it to Astra's on-chip NPU or GPU with one command:
- ONNX
- PyTorch
- TensorFlow Lite
$ synap convert --target ${CHIP_MODEL} --model example.onnx
$ synap convert --target ${CHIP_MODEL} --model example.torchscript
$ synap convert --target ${CHIP_MODEL} --model example.tflite
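Each command compiles the model into a .synap file for deployment on the board; set CHIP_MODEL to your target part, for example SL1680.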
Technical blogs
On-device speech-to-text
Build a voice UI using on-device speech-to-text models like OpenAI's Whisper.
Learn more →
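As a minimal model-side sketch (separate from the Astra deployment steps covered in the blog), the open-source openai-whisper package transcribes audio in a few lines; the checkpoint size and file name are placeholders.

import whisper

# Load the smallest Whisper checkpoint; larger ones trade speed for accuracy.
model = whisper.load_model("tiny")

# Transcribe a local audio file (placeholder name).
result = model.transcribe("command.wav")
print(result["text"])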
LLMs and SLMs on Astra
LLMs, SLMs, and embedded AI are the future of AI development. Learn how to get started with Synaptics Astra AI Developer.
Learn more →
YOLOv8 Instance Segmentation
Learn how real-time instance segmentation works using the YOLOv8 model on the Astra Machina SL1680 board.
Learn more →
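For a quick feel of the model itself (the blog covers the Astra-specific deployment), the Ultralytics API runs YOLOv8 instance segmentation like this; the checkpoint and image path are placeholders.

from ultralytics import YOLO

# YOLOv8 small segmentation checkpoint (downloaded on first use).
model = YOLO("yolov8s-seg.pt")

# Run segmentation on an image; each result carries per-instance
# masks alongside the usual boxes and class labels.
results = model("factory_floor.jpg")
for r in results:
    n = 0 if r.masks is None else len(r.masks.data)
    print(f"{n} instance masks detected")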
Trusted at scale
Brought to you by Synaptics — trusted by leading electronics brands worldwide to deliver billions of semiconductors.
Reference Docs
🤖 SyNAP AI Toolkit
Deep dive into the SyNAP toolkit for building NPU-accelerated apps.
Read more →
⚙️ Advanced Optimization
Learn how to convert your existing AI models to run on Synaptics Astra.
Read more →
💻 Astra SDK
Build C++ applications with accelerated AI inference using the on-device NPU.
Read more →