Evaluate Model Compatibility
This guide explains how to take a trained model in one of the supported formats and evaluate its compatibility with the Astra™ SR100 MCU. You can evaluate open-source AI models on the SR100 NPU without access to actual SR silicon by following three steps:
- Bring any model
- Convert to TFLite INT8 Quantized model
- Compile the model using SR100 Model Compiler Space
This workflow enables rapid prototyping and validation before deploying to your Astra Machina Micro board, saving development time and allowing for iterative model improvements.
Bring any model
You can use a trained model in any of these formats and convert it to a TFLite model. Refer to the respective guide for conversion:
- Keras to TFLite Conversion
- TensorFlow SavedModel to TFLite Conversion
- ONNX to TFLite Conversion
- PyTorch to TFLite Conversion
These guides provide the most reliable and up-to-date methods for converting and quantizing your models for deployment on SR100.
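As a quick orientation before following the guides above, the sketch below shows the simplest conversion path, from an in-memory Keras model to a TFLite flatbuffer. The tiny `Sequential` model is a hypothetical stand-in for your own trained network:

```python
import tensorflow as tf

# Hypothetical stand-in model; substitute your own trained network here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert directly from the in-memory Keras model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# tf.lite.TFLiteConverter also provides from_saved_model("dir")
# if your model is stored as a TensorFlow SavedModel on disk.
with open("model_fp32.tflite", "wb") as f:
    f.write(tflite_model)
```

This produces a float32 TFLite model; the next section covers converting it to the INT8 quantized form required for SR100.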
Convert to TFLite INT8 Quantized Model
Quantization is a process that makes your model smaller and faster by converting its weights and activations from floating-point numbers to integers (like INT8). To perform quantization, you need a representative dataset. This dataset should be similar to the data your model will see in real-world use. The quantization process uses this data to calibrate the model and maintain accuracy.
Once you have a TFLite model and a representative dataset, you can follow the TFLite Quantization Guide to convert your model to an INT8 quantized version.
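The steps described above can be sketched as follows. The model and the random representative dataset are illustrative placeholders; in practice, feed the generator with real samples from your training or validation data so calibration reflects actual input statistics:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model; substitute your own trained network here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),
])

# Representative dataset used to calibrate quantization ranges.
# Random data is used here only for illustration; use ~100+ real
# samples shaped exactly like your model's inputs.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full INT8 quantization of weights and activations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Setting `inference_input_type` and `inference_output_type` to `tf.int8` ensures the model's interfaces are integer as well, so no float conversion is needed on the MCU.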
Once you have a TFLite INT8 quantized model, compile it using the SR100 Model Compiler Space hosted on Hugging Face:
SR100 Model Compiler Space 🤗
You can directly use our Hugging Face SR100 Model Compiler Space:
This Space uses a simulation toolchain to estimate model performance without an actual SR silicon, providing results that closely reflect real hardware behavior.
If you see the error “404 Sorry, we can’t find the page you are looking for”, restart the Space on Hugging Face: SR100 Model Compiler Space →
Deploy on Real Hardware
Now that you have evaluated and compiled your model for Astra SR-Series, you can take the next step and deploy it on real hardware.
Request a Machina™ Micro Dev Kit:
Get hands-on experience with the Astra SR100 MCU by requesting a development kit. With the dev kit, you can validate your model in real-world scenarios, optimize it further, and accelerate your edge AI deployment. ➡ Request Machina™ Micro Dev Kit
Also, check out the performance of pre-optimized vision models developed by Synaptics for SR100 Series MCUs using our 🤗 Space: