AIfES 2  2.0.0
Main page

Welcome to the official AIfES 2 documentation! This guide gives you an overview of the functions and the application of AIfES. The recommendations are based on best practices from various sources and on our own experience with neural networks.

Vocabulary and Abbreviations

  • AIfES: Artificial Intelligence for Embedded Systems
  • ANN: Artificial Neural Network
  • FNN: Feedforward Neural Network (used in this documentation for simple multi-layer perceptrons)
  • RNN: Recurrent Neural Network
  • CNN: Convolutional Neural Network
  • Inference: The calculation of an ANN (forward pass / prediction)
  • Backpropagation: A training algorithm for ANNs that is based on gradient descent
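
To make the inference term concrete: for a single dense layer, the forward pass is just a weighted sum plus bias, followed by an activation function. A minimal, self-contained sketch of that math (illustrative C only, not the AIfES API):

```c
/* Forward pass of one dense (fully connected) layer with ReLU activation:
 *   out[j] = max(0, b[j] + sum_i in[i] * w[i*n_out + j])
 * w is stored row-major with shape n_in x n_out. */
void dense_relu_forward(const float *in, int n_in,
                        const float *w, const float *b,
                        float *out, int n_out)
{
    for (int j = 0; j < n_out; j++) {
        float acc = b[j];
        for (int i = 0; i < n_in; i++)
            acc += in[i] * w[i * n_out + j];
        out[j] = acc > 0.0f ? acc : 0.0f;  /* ReLU */
    }
}
```

Backpropagation then adjusts w and b by propagating the loss gradient backwards through exactly this computation.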

Overview (Documentation)

  • Tutorial inference F32: Guides you through the necessary steps to perform an inference with AIfES, based on an example. (Float 32 model)
  • Tutorial training F32: Guides you through the training process with AIfES, based on an example. (Float 32 model)
  • Tutorial inference Q7: Guides you through the necessary steps to perform an inference with AIfES, based on an example. (Int 8 model)

Overview (Features)

AIfES 2 is a modular toolbox designed to let developers execute and train an ANN on resource-constrained edge devices as efficiently as possible, with as little programming effort as possible. Its structure is closely based on Python libraries such as Keras and PyTorch to make it easier to get started with the library.

The AIfES basic module currently contains the following features:

Data types

  • F32 (32 bit floating point values)
  • Q31 (32 bit quantized fixed point values)
  • Q7 (8 bit quantized fixed point values)

General

For Inference

Layer:

  • Dense
      – f32: ailayer_dense_f32_default(), ailayer_dense_f32_cmsis(), ailayer_dense_f32_avr_pgm()
      – q31: ailayer_dense_q31_default()
      – q7: ailayer_dense_q7_default(), ailayer_dense_wt_q7_default(), ailayer_dense_wt_q7_cmsis(), ailayer_dense_q7_avr_pgm(), ailayer_dense_wt_q7_avr_pgm()
  • Input
      – f32: ailayer_input_f32_default()
      – q31: ailayer_input_q31_default()
      – q7: ailayer_input_q7_default()
  • ReLU
      – f32: ailayer_relu_f32_default()
      – q31: ailayer_relu_q31_default()
      – q7: ailayer_relu_q7_default(), ailayer_relu_q7_avr_pgm()
  • Sigmoid
      – f32: ailayer_sigmoid_f32_default()
      – q31: ailayer_sigmoid_q31_default()
      – q7: ailayer_sigmoid_q7_default(), ailayer_sigmoid_q7_avr_pgm()
  • Softmax
      – f32: ailayer_softmax_f32_default()
      – q31: ailayer_softmax_q31_default()
      – q7: ailayer_softmax_q7_default(), ailayer_softmax_q7_avr_pgm()
  • Leaky ReLU
      – f32: ailayer_leaky_relu_f32_default()
      – q31: ailayer_leaky_relu_q31_default()
      – q7: ailayer_leaky_relu_q7_default(), ailayer_leaky_relu_q7_avr_pgm()
  • ELU
      – f32: ailayer_elu_f32_default()
      – q31: ailayer_elu_q31_default()
      – q7: ailayer_elu_q7_default(), ailayer_elu_q7_avr_pgm()
  • Tanh
      – f32: ailayer_tanh_f32_default()
      – q31: ailayer_tanh_q31_default()
      – q7: ailayer_tanh_q7_default(), ailayer_tanh_q7_avr_pgm()
  • Softsign
      – f32: ailayer_softsign_f32_default()
      – q31: ailayer_softsign_q31_default()
      – q7: ailayer_softsign_q7_default(), ailayer_softsign_q7_avr_pgm()
  • Conv2D
      – f32: ailayer_conv2d_f32_default()
  • Batch Normalization
      – f32: ailayer_batch_norm_f32_default()
  • MaxPool2D
      – f32: ailayer_maxpool2d_f32_default()
  • Reshape
      – f32: ailayer_reshape_f32_default()
  • Flatten
      – f32: ailayer_flatten_f32_default()

Algorithmic:

For Training

Layer:

  • Dense
      – f32: ailayer_dense_f32_default(), ailayer_dense_f32_cmsis()
      – q31: ailayer_dense_q31_default()
  • Input
      – f32: ailayer_input_f32_default()
      – q31: ailayer_input_q31_default()
      – q7: ailayer_input_q7_default()
  • ReLU
      – f32: ailayer_relu_f32_default()
      – q31: ailayer_relu_q31_default()
      – q7: ailayer_relu_q7_default()
  • Sigmoid
      – f32: ailayer_sigmoid_f32_default()
      – q31: ailayer_sigmoid_q31_default()
  • Softmax
      – f32: ailayer_softmax_f32_default()
      – q31: ailayer_softmax_q31_default()
  • Leaky ReLU
      – f32: ailayer_leaky_relu_f32_default()
      – q31: ailayer_leaky_relu_q31_default()
  • ELU
      – f32: ailayer_elu_f32_default()
      – q31: ailayer_elu_q31_default()
  • Tanh
      – f32: ailayer_tanh_f32_default()
      – q31: ailayer_tanh_q31_default()
  • Softsign
      – f32: ailayer_softsign_f32_default()
      – q31: ailayer_softsign_q31_default()
  • Conv2D
      – f32: ailayer_conv2d_f32_default()
  • Batch Normalization
      – f32: ailayer_batch_norm_f32_default()
  • MaxPool2D
      – f32: ailayer_maxpool2d_f32_default()
  • Reshape
      – f32: ailayer_reshape_f32_default()
  • Flatten
      – f32: ailayer_flatten_f32_default()

Loss:

  • Mean Squared Error (MSE)
      – f32: ailoss_mse_f32_default()
      – q31: ailoss_mse_q31_default()
  • Crossentropy
      – f32: ailoss_crossentropy_f32_default(), ailoss_crossentropy_sparse8_f32_default()

Optimizer:

  • Stochastic Gradient Descent (SGD)
      – f32: aiopti_sgd_f32_default()
      – q31: aiopti_sgd_q31_default()
  • Adam
      – f32: aiopti_adam_f32_default()

Algorithmic:

AIfES Express

High-level functions to build a simple multilayer perceptron / fully connected neural network in a few lines of code.

Python tools

To help you get your neural network models from Python into AIfES and to quantize the weights to integer types, we provide some AIfES Python tools.

You can install the tools via pip from our GitHub repository with:

pip install https://github.com/Fraunhofer-IMS/AIfES_for_Arduino/raw/main/etc/python/aifes_tools.zip

Have a look at the automatic quantization example to see how the Python tools can be used to quantize a model to the Q7 integer type.

Structure and design concepts of AIfES

AIfES was designed as a flexible and extendable toolbox for running and training artificial neural networks on microcontrollers. All layers, losses and optimizers are modular and can be optimized for different data types and hardware platforms.

Example structure of a small FNN with one hidden layer in AIfES 2:

Modular structure of AIfES 2 backend: