
3 docs tagged with "llama.cpp"


Cross Compile LLM

This tutorial guides you through cross-compiling llama.cpp binaries on a host machine (useful when you want to customize the build) for the Synaptics Astra™ Machina™.
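The cross-compile flow can be sketched roughly as below. This is a minimal, hypothetical example assuming a generic `aarch64-linux-gnu` GCC toolchain on the host; the actual compiler names and any toolchain file come from your Astra SDK, so substitute accordingly.

```shell
# Sketch: cross-compile llama.cpp on an x86 host for an aarch64 target.
# Compiler names below are assumptions; use the toolchain from your Astra SDK.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
  -DCMAKE_C_COMPILER=aarch64-linux-gnu-gcc \
  -DCMAKE_CXX_COMPILER=aarch64-linux-gnu-g++ \
  -DGGML_NATIVE=OFF        # don't tune for the host CPU when cross-compiling
cmake --build build --config Release -j
# Copy the resulting binaries (e.g. build/bin/llama-cli) to the board.
```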

LLM on Astra

This tutorial will guide you through running the TinyLlama model using llama.cpp natively on a Synaptics Astra™ Machina™ with the SL1680 processor.
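Once llama.cpp is built on (or copied to) the board, running a model comes down to pointing `llama-cli` at a GGUF file. A minimal sketch, where the model filename is an assumption (use the GGUF quantization the tutorial provides):

```shell
# Sketch: run a TinyLlama GGUF model with llama-cli on the SL1680 board.
# The model filename is an assumption; substitute your downloaded GGUF file.
./llama-cli \
  -m tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf \
  -p "Explain what a system-on-chip is in one sentence." \
  -n 128   # limit the number of generated tokens
```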

VLM on Astra

This tutorial will guide you through running Vision Language Models (VLMs) using llama.cpp natively on Synaptics Astra™ Machina™ boards. VLMs are multimodal AI models that can understand and generate information using both images and text.
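A VLM invocation adds two things on top of the text-only case: a multimodal projector file and an input image. A minimal sketch, assuming a recent llama.cpp build; the binary name for multimodal inference varies between llama.cpp versions, and the model/projector/image filenames here are placeholders:

```shell
# Sketch: describe an image with a vision-language model on the board.
# Binary name and all filenames are assumptions; check your llama.cpp version
# and the tutorial for the exact multimodal tool and GGUF files to use.
./llama-mtmd-cli \
  -m vlm-model.gguf \
  --mmproj mmproj-model.gguf \  # projector that maps image features into the LLM
  --image photo.jpg \
  -p "Describe this image."
```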