
Running Whisper.cpp on Astra

This tutorial will guide you through the steps required to run whisper.cpp on a Synaptics Astra™ Machina™ SL1680 board and transcribe a test WAV file.

note

This tutorial is compatible with all SL16xx boards. While inference speed may vary, the steps remain the same across all processors.

Whisper.cpp is a C++ implementation of the Whisper speech-to-text model. View more details about Whisper.cpp at the Original GitHub Repo.

tip

If you want to run streaming/real-time Whisper, follow the Whisper Streaming tutorial.

Prerequisites

You can natively compile the Whisper binary on the Machina board, since the required packages and compilers ship with our OOBE (Out of Box Experience) image v1.2.0 and above. In that case, you can skip the prerequisites and jump straight to Step 1, running the commands in the Machina terminal instead of the Ubuntu terminal.

The steps below guide you through cross-compiling the Whisper.cpp binaries on an Ubuntu host development machine. For this, you need a Yocto toolchain built specifically for the Astra Machina board; get the pre-built toolchain from here.

Download the standalone toolchain for your chipset. In this tutorial, you will use sl1680.

Once downloaded, open a terminal in Ubuntu and run the command:

bash poky-glibc-x86_64-astra-media-cortexa73-sl1680-toolchain-4.0.17.sh

Now run this command to activate the environment:

. /opt/poky/4.0.17/environment-setup-cortexa73-poky-linux
tip

To check if the environment is active, use the command in your Ubuntu terminal:

echo $CC

Step 1: Generate Binary for Whisper.cpp

Open a terminal in Ubuntu and clone the whisper.cpp repository from GitHub:

git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp

Create a build directory and navigate to it, then run the CMake command to build the project:

mkdir build && cd build
cmake ..
cmake --build . --config Release

The main binary will be created in ~/whisper.cpp/build/bin/ and the shared library libwhisper.so.1 in ~/whisper.cpp/build/src/. These two files are needed to run the Whisper models, so you will copy them to your Machina board.
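Before copying them over, it is worth confirming that the build actually used the cross toolchain; if the environment script was not sourced, CMake silently falls back to the host compiler. A quick check with `file`, assuming the build directory from above:

```shell
# Run from inside whisper.cpp/build. A correctly cross-compiled binary
# is reported as an ARM aarch64 ELF rather than x86-64.
file bin/main
```

If `file` reports x86-64, re-source the environment script, delete the build directory, and run CMake again (CMake caches the compiler choice).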

info

For running real-time Whisper.cpp, follow the Whisper streaming guide, which shows how to build the stream binary for Astra.

You also need to build ggml shared libraries that match the latest whisper.cpp. Follow the instructions from the official repo, or use the steps below:

git clone https://github.com/ggerganov/ggml
cd ggml

# install python dependencies in a virtual environment
python3.10 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# build the examples
mkdir build && cd build
cmake ..
cmake --build . --config Release -j 8

After these steps, three ggml shared libraries will be created in ggml/build/src/: libggml.so, libggml-base.so, and libggml-cpu.so. Copy these to your Machina work directory.

Step 2: Setting up Astra Machina Board

Use ADB to access the Astra Machina Board from a host machine such as Ubuntu.

Follow these steps from the Access Machina Board tutorial to set up ADB.

Once in the ADB shell, create a new working directory on the Machina board:

mkdir /home/whisper

Now, open a new Ubuntu terminal on your development system and use adb push to copy the binary files you generated in Step 1 to the Machina board:

adb push ~/whisper.cpp/build/bin/main /home/whisper
adb push ~/whisper.cpp/build/src/libwhisper.so.1 /home/whisper
adb push ~/ggml/build/src/libggml.so /home/whisper
adb push ~/ggml/build/src/libggml-base.so /home/whisper
adb push ~/ggml/build/src/libggml-cpu.so /home/whisper
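After the pushes complete, you can list the files on the board and make the main binary executable; adb push does not always preserve the execute bit:

```shell
# Verify all five files arrived, then mark the binary executable
adb shell ls -l /home/whisper
adb shell chmod 755 /home/whisper/main
```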

Once you have copied the files to the Machina board, move libwhisper.so.1 to /usr/lib and link it back into the whisper directory. In the ADB shell, run these commands from /home/whisper:

mv libwhisper.so.1 /usr/lib/
ln -s /usr/lib/libwhisper.so.1 /home/whisper/libwhisper.so.1
ls -l
note

After the ls -l command, you should see the symlink libwhisper.so.1 -> /usr/lib/libwhisper.so.1, for example:

lrwxr-xr-x    1 root     root            24 Dec  5 22:03 libwhisper.so.1 -> /usr/lib/libwhisper.so.1

If running the binary gives an error:

./main: error while loading shared libraries: libwhisper.so.1: cannot open shared object file: No such file or directory

Remove libwhisper.so.1 using the command:

rm libwhisper.so.1

Then push libwhisper.so.1 from the host development machine to the Machina board's whisper directory again and set its permissions.
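As a concrete sketch of that recovery, assuming the library was built under ~/whisper.cpp/build/src/ on the host:

```shell
# On the board (ADB shell): remove the stale link or copy
rm /home/whisper/libwhisper.so.1

# On the host: push the library again and restore its permissions
adb push ~/whisper.cpp/build/src/libwhisper.so.1 /home/whisper
adb shell chmod 755 /home/whisper/libwhisper.so.1
```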

Step 3: Download Test WAV and Whisper Models

For the test WAV, you can use jfk.wav from whisper.cpp/samples/ in the repository you cloned in Step 1.

You need to download a Whisper model on your host development machine (Ubuntu in this case) from HuggingFace.

In this tutorial, you will use the tiny quantized model ggml-tiny.en-q8_0.bin. Link: https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.en-q8_0.bin
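One way to fetch it on the Ubuntu host is with wget, saving to ~/Downloads to match the copy commands below:

```shell
# Download the quantized tiny English model from Hugging Face
wget -P ~/Downloads https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.en-q8_0.bin
```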

Once downloaded, copy the model and the test file jfk.wav to your Astra Machina board (replace 10.3.10.132 with your board's IP address):

scp ~/Downloads/ggml-tiny.en-q8_0.bin root@10.3.10.132:/home/whisper
scp ~/whisper.cpp/samples/jfk.wav root@10.3.10.132:/home/whisper

Step 4: Running Whisper Using WAV File on Machina Board

With the model downloaded and the binary built, you can now run Whisper inside the whisper folder:

./main -m ggml-tiny.en-q8_0.bin -f jfk.wav

The output from your Synaptics Astra board should look like:

[00:00:00.000 --> 00:00:07.960]   And so my fellow Americans ask not what your country can do for you
[00:00:07.960 --> 00:00:10.760]   ask what you can do for your country.

whisper_print_timings:     load time =   110.04 ms
whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:      mel time =    25.06 ms
whisper_print_timings:   sample time =   175.44 ms /   138 runs (    1.27 ms per run)
whisper_print_timings:   encode time =  3040.76 ms /     1 runs ( 3040.76 ms per run)
whisper_print_timings:   decode time =     6.37 ms /     1 runs (    6.37 ms per run)
whisper_print_timings:   batchd time =   562.93 ms /   133 runs (    4.23 ms per run)
whisper_print_timings:   prompt time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:    total time =  3949.22 ms
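The main binary also accepts further options; for example, -t sets the number of CPU threads used for inference and -otxt writes the transcript to a text file next to the input (run ./main -h for the full list):

```shell
# Transcribe with 4 threads and also save the transcript as jfk.wav.txt
./main -m ggml-tiny.en-q8_0.bin -f jfk.wav -t 4 -otxt
```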

Congratulations!

You have successfully installed and run Whisper.cpp on your Astra Machina Board. If you want to run streaming or real-time Whisper, follow this Whisper streaming guide.

You can also explore further by trying different models or integrating Whisper into your projects. For more advanced usage and options, refer to the Whisper.cpp GitHub repository.