Bring Your Own Model
This tutorial will guide you through cross-compiling llama.cpp binaries (building binaries on a host machine for a different target) for Synaptics Astra™ Machina™.
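The cross-compilation workflow can be sketched as follows. This is a minimal example assuming a generic `aarch64-linux-gnu` GCC toolchain on the host; the Astra SDK ships its own toolchain, so substitute the compiler paths from your SDK.

```shell
# Fetch llama.cpp sources on the host machine
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure a cross-build for an aarch64 Linux target such as SL1680.
# Compiler names are assumptions; point them at your Astra SDK toolchain.
cmake -B build \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
  -DCMAKE_C_COMPILER=aarch64-linux-gnu-gcc \
  -DCMAKE_CXX_COMPILER=aarch64-linux-gnu-g++

# Build the binaries, then copy them to the board (e.g. via scp)
cmake --build build --config Release -j
```

The resulting binaries under `build/bin/` target the board's architecture and will not run on the host.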
This tutorial covers how to run real-time inference on a video stream. It assumes you are familiar with setting up your Astra board. If not, please refer to the setup tutorial.
LLMs are powerful tools which have many uses, but the output may contain inaccuracies, bias, or safety issues.
This tutorial will guide you through the process of running the TinyLlama model using llama.cpp natively on a Synaptics Astra™ Machina™ using the SL1680 processor.
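A typical invocation on the board looks like the sketch below. The GGUF filename and prompt are assumptions; use whichever quantized TinyLlama model you downloaded, and note that older llama.cpp builds name the CLI binary `main` rather than `llama-cli`.

```shell
# Run TinyLlama interactively with llama.cpp's CLI on the SL1680.
# Model filename is an assumption; adjust to your downloaded GGUF.
./llama-cli -m tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf \
  -p "What is an embedded SoC?" \
  -n 128
```

A 4-bit quantization such as Q4_K_M is a common choice on embedded boards, trading a little accuracy for a much smaller memory footprint.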
This tutorial will guide you through running Vision Language Models (VLMs) using llama.cpp natively on Synaptics Astra™ Machina™ boards. VLMs are multimodal AI models that can understand and generate information using both images and text.
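As an illustration, a LLaVA-style VLM can be queried about an image roughly as follows. The binary and model names here are assumptions and vary between llama.cpp versions (multimodal support has moved between `llava-cli` and newer frontends), so check the tutorial's exact commands.

```shell
# Describe an image with a LLaVA-style model via llama.cpp.
# Binary name, model files, and image path are assumptions.
./llama-llava-cli \
  -m llava-v1.6-mistral-7b.Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

VLMs pair a language model with a vision projector (the `--mmproj` file), which maps image features into the model's token space.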
This tutorial will guide you through the steps required to run whisper.cpp on a Synaptics Astra™ Machina™ SL1680 to transcribe a WAV file.
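File transcription boils down to one command on the board, sketched here. Model and sample paths follow whisper.cpp's repository layout; depending on the version you build, the binary may be named `main` or `whisper-cli`. Input audio should be 16 kHz mono WAV.

```shell
# Transcribe a 16 kHz WAV file with whisper.cpp on the SL1680.
# Model and audio paths are assumptions based on the repo's defaults.
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```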
This tutorial will guide you through the steps required to run whisper.cpp on a Synaptics Astra™ Machina™ dev kit using the SL1680 to test real-time speech recognition with a microphone.
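Real-time transcription uses whisper.cpp's `stream` example, which captures microphone audio continuously; a sketch is below. Building `stream` requires SDL2 support, and the step/length values here are assumptions you can tune for latency versus accuracy.

```shell
# Stream microphone audio through whisper.cpp in real time.
# --step is the inference interval (ms); --length is the audio window (ms).
# Thread count and timing values are assumptions; tune for your board.
./stream -m models/ggml-base.en.bin -t 4 --step 500 --length 5000
```

Shorter `--step` values lower latency but increase CPU load, which matters on an embedded processor like the SL1680.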