8 docs tagged with "SL1680"

Cross Compile LLM

This tutorial will guide you through cross-compiling llama.cpp binaries (building them on a host machine, which allows the build to be customized) for Synaptics Astra™ Machina™.
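A cross-compile of this kind can be sketched as follows, assuming an `aarch64-linux-gnu` GCC toolchain is installed on the host; the toolchain name and CMake flags here are illustrative, not the tutorial's exact commands:

```shell
# Fetch the llama.cpp sources
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure for an aarch64 Linux target; GGML_NATIVE=OFF keeps the
# compiler from emitting host-specific CPU instructions
cmake -B build \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
  -DCMAKE_C_COMPILER=aarch64-linux-gnu-gcc \
  -DCMAKE_CXX_COMPILER=aarch64-linux-gnu-g++ \
  -DGGML_NATIVE=OFF

# Build the binaries, then copy them to the board (e.g. with scp)
cmake --build build --config Release -j
```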

GStreamer Inference

This tutorial covers how to run real-time inference on a video stream. It assumes you are familiar with setting up your Astra board. If not, please refer to the setup tutorial.

Llamafile on Astra

LLMs are powerful tools with many uses, but their output may contain inaccuracies, bias, or safety issues.

LLM on Astra

This tutorial will guide you through the process of running the TinyLlama model using llama.cpp natively on a Synaptics Astra™ Machina™ using the SL1680 processor.
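On the board, such a run typically looks like the following, assuming a llama.cpp `llama-cli` binary and a quantized TinyLlama GGUF file are already present (the model filename is illustrative):

```shell
# Run TinyLlama on-device with llama.cpp's CLI;
# -m selects the model file, -n caps the number of generated tokens
./llama-cli \
  -m tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf \
  -p "What is the capital of France?" \
  -n 64
```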

VLM on Astra

This tutorial will guide you through running Vision Language Models (VLMs) using llama.cpp natively on Synaptics Astra™ Machina™ boards. VLMs are multimodal AI models that can understand and generate information using both images and text.
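As a rough sketch, a llama.cpp multimodal run pairs the language model with a vision projector file; the binary and model names below are illustrative, and the multimodal CLI name varies across llama.cpp versions:

```shell
# Describe an image with a LLaVA-style VLM: the --mmproj file maps
# image features into the language model's embedding space
./llama-llava-cli \
  -m llava-v1.5-7b.Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```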

Whisper Speech-to-Text

This tutorial will guide you through the steps required to run whisper.cpp on a Synaptics Astra™ Machina™ SL1680 to transcribe a WAV file.
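A minimal transcription run can be sketched as follows, assuming whisper.cpp has already been built on or for the board (the sample file is the one bundled with the whisper.cpp repository; the main binary has been renamed `whisper-cli` in newer releases):

```shell
# Download a small English model, then transcribe a 16 kHz mono WAV file
./models/download-ggml-model.sh base.en
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```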

Whisper Streaming

This tutorial will guide you through the steps required to run whisper.cpp on a Synaptics Astra™ Machina™ dev kit (SL1680) to test real-time speech recognition with a microphone.
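Real-time recognition uses whisper.cpp's `stream` example, which captures audio continuously; a sketch, assuming the example was built and a capture device is available:

```shell
# Continuous transcription from the default microphone;
# --step is the chunk size in ms, --length the sliding audio window,
# -t the number of CPU threads to use
./stream -m models/ggml-base.en.bin -t 4 --step 500 --length 5000
```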