
AI Developer

Synaptics Astra™ equips developers with best-in-class edge AI hardware and open-source tools for product innovation, with a proven path to scale.

Find Out More →

Tutorials


Get started

Get started today with embedded AI models optimized for Synaptics Astra GPU and NPU.
Learn more →

Build apps with Ultralytics YOLO

Compile YOLO11 and YOLOv8 models for edge computer vision applications.
Learn more →

Optimize models for NPU

Optimize models for cost- and power-efficient NPU-accelerated MPUs and MCUs.
Learn more →

Technical Blogs


LLMs and SLMs on Astra

Large and small language models are moving to embedded devices. Learn how to get started with LLMs and SLMs on Synaptics Astra.
Learn more →

Whisper on Astra

OpenAI's Whisper models set the state of the art in real-time speech recognition, giving developers an efficient way to transcribe audio.
Learn more →

YOLOv8 Instance Segmentation

Learn how real-time instance segmentation works using the YOLOv8 model on the Astra Machina SL1680 board.
Learn more →

Bring your own model


Have a different model you'd like to bring? Target it to Astra's on-chip NPU or GPU with one command:

$ synap convert --target ${CHIP_MODEL} --model example.torchscript
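For example, a TorchScript export targeted at the SL1680 could be converted and then exercised on the board roughly like this. The --out-dir option, the model.synap output file, and the on-device synap_cli tool are assumptions based on a typical SyNAP workflow; check the SyNAP AI Toolkit reference for the exact options in your release.

$ synap convert --target SL1680 --model example.torchscript --out-dir compiled
$ scp compiled/model.synap root@<board-ip>:/home/root/
$ synap_cli -m /home/root/model.synap random   # on the board: run one inference with random input data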

Edge AI Efficiency


The hardware-aware SyNAP compiler targets the exact NPU or GPU resources available on-chip, which can significantly improve inference speed. There are also advanced optimization options, such as mixed-width and per-channel quantization.
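As a sketch of how those options could be supplied, quantization settings can be described in a conversion metafile passed alongside the model. The --meta flag and the YAML keys below are illustrative assumptions, not the definitive schema; see the SyNAP AI Toolkit reference for the supported fields.

$ cat > quant_meta.yaml <<'EOF'
# Illustrative conversion metafile: key names are assumptions, see the SyNAP docs for the real schema
quantization:
  data_type: uint8              # 8-bit weights and activations; mixed widths are also possible
  scheme: asymmetric_affine     # per-channel quantization schemes are selected here
EOF
$ synap convert --target ${CHIP_MODEL} --model example.torchscript --meta quant_meta.yaml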

Reference Docs


🤖 SyNAP AI Toolkit

Deep dive into the SyNAP toolkit for building NPU-accelerated apps.

Read more →

⚙️ Advanced Optimization

Learn how to convert your existing AI models to run on Synaptics Astra.

Read more →

💻 Astra SDK

Build C++ applications with accelerated AI inference using the on-device NPU.

Read more →