Edge AI Made Efficient

Synaptics Astra™ equips developers with best-in-class Edge AI hardware and open-source tools, enabling product innovation with a proven path to scale.

Find Out More →

Get started in minutes


Intro to Edge AI

Learn about running AI models directly on embedded devices in real time.
Learn more →

Unlock Edge AI with Astra

Astra makes it seamless to integrate Edge AI into your existing embedded development pipeline.
Learn more →

Getting Started

Get started with a series of quick tutorials and embark on your Edge AI journey.
Learn more →

Models ready to go


Get your project started in minutes with the optimized models preinstalled on Synaptics Astra.

Edge AI efficiency


The hardware-aware SyNAP compiler targets the exact NPU or GPU resources available on-chip, which can significantly improve inference speed. There are also advanced optimization options, such as mixed-width and per-channel quantization.
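To illustrate why per-channel quantization helps, here is a minimal pure-Python sketch (not SyNAP code; the data and helper names are invented for illustration). It compares int8 quantization error when one scale covers a whole tensor versus one scale per channel, for channels with very different value ranges:

```python
# Illustrative sketch only, not the SyNAP implementation.
# Per-channel quantization picks a separate int8 scale for each channel,
# instead of one scale for the whole tensor, which reduces error when
# channel magnitudes differ widely.

def quantize_dequantize(values, scale):
    """Round floats to int8 steps of `scale`, then map back to floats."""
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return [x * scale for x in q]

def max_abs_error(original, restored):
    return max(abs(a - b) for a, b in zip(original, restored))

# Two hypothetical weight channels with very different ranges.
channels = [
    [0.01, -0.02, 0.015],   # small-magnitude channel
    [5.0, -4.0, 3.5],       # large-magnitude channel
]

# Per-tensor: a single scale must cover both channels.
tensor_scale = max(abs(v) for ch in channels for v in ch) / 127
per_tensor_err = max(
    max_abs_error(ch, quantize_dequantize(ch, tensor_scale))
    for ch in channels
)

# Per-channel: each channel gets its own scale.
per_channel_err = max(
    max_abs_error(ch, quantize_dequantize(ch, max(abs(v) for v in ch) / 127))
    for ch in channels
)

print(per_channel_err < per_tensor_err)  # per-channel is more accurate
```

With a shared scale, the small-magnitude channel collapses toward zero; giving each channel its own scale preserves its precision. Mixed-width quantization applies the same idea along a different axis, keeping sensitive layers at higher bit widths.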

Bring your own model


Have a different model you'd like to bring? Target it to Astra's on-chip NPU or GPU with one command:

$ synap convert --target ${CHIP_MODEL} --model example.torchscript

Trusted at scale


Brought to you by Synaptics — trusted by leading electronics brands worldwide to deliver billions of semiconductors.

Reference Docs


🤖 SyNAP AI Toolkit

Deep dive into the SyNAP toolkit for building NPU-accelerated apps.

Read more →

⚙️ Advanced Optimization

Learn how to convert your existing AI models to run on Synaptics Astra.

Read more →

💻 Astra SDK

Build C++ applications with accelerated AI inference using the on-device NPU.

Read more →