AIfES 2 2.0.0
ailayer_dense_avr_pgm.h File Reference

Implementation of the Dense layer with parameters in PROGMEM for AVR controllers. More...

Go to the source code of this file.

Functions

ailayer_t *  ailayer_dense_f32_avr_pgm (ailayer_dense_f32_t *layer, ailayer_t *input_layer)
     Initializes and connects a Dense layer with the F32 AVR PGM implementation. More...
 
ailayer_t *  ailayer_dense_q7_avr_pgm (ailayer_dense_q7_t *layer, ailayer_t *input_layer)
     Initializes and connects a Dense layer with the Q7 AVR PGM implementation. More...
 
ailayer_t *  ailayer_dense_wt_q7_avr_pgm (ailayer_dense_q7_t *layer, ailayer_t *input_layer)
     Initializes and connects a Dense layer with the Q7 AVR PGM implementation and transposed weights. More...
 

Detailed Description

Implementation of the Dense layer with parameters in PROGMEM for AVR controllers.

Version
2.2.0

AVR controller implementations of the Dense layer in the F32 and Q7 data types. For more information about the Dense layer, refer to ailayer_dense.h.

This implementation accesses the weights and biases directly from the program memory (flash) of AVR controllers. This is useful if the weights are too large to fit into RAM.

Requires the AVR-libc header avr/pgmspace.h.
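
As a minimal illustration (standard AVR-libc usage, nothing declared in this header), the parameters are placed in flash by including avr/pgmspace.h and adding the PROGMEM attribute to the arrays:

// Sketch: keep the Dense layer parameters in program memory (flash) of an AVR controller.
// The values are the pretrained F32 parameters reused in the examples below.
#include <avr/pgmspace.h>

const float weights_data_dense[] PROGMEM = {-10.1164f, -8.4212f, 5.4396f, 7.297f, -7.6482f, -9.0155f};
const float bias_data_dense[] PROGMEM = {-2.9653f, 2.3677f, -1.5968f};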

Function Documentation

◆ ailayer_dense_f32_avr_pgm()

ailayer_t * ailayer_dense_f32_avr_pgm (ailayer_dense_f32_t * layer, ailayer_t * input_layer)

Initializes and connects a Dense layer with the F32 AVR PGM implementation.

The weights and bias have to be defined as constants in program memory (PROGMEM). The rest of the layer configuration is the same as for ailayer_dense_f32_default().

Example: Create the layer structure with pretrained weights:
In C:

// Use constant data only for inference. For training remove the const qualifier!!
const float weights_data_dense[] PROGMEM = {-10.1164f, -8.4212f, 5.4396f, 7.297f, -7.6482f, -9.0155f};
const float bias_data_dense[] PROGMEM = {-2.9653f, 2.3677f, -1.5968f};
ailayer_dense_f32_t dense_layer = {
    .neurons = 3,
    .weights.data = (float *) weights_data_dense,
    .bias.data = (float *) bias_data_dense
};

In C, C++ and on Arduino:

// Use constant data only for inference. For training remove the const qualifier!!
const float weights_data_dense[] PROGMEM = {-10.1164f, -8.4212f, 5.4396f, 7.297f, -7.6482f, -9.0155f};
const float bias_data_dense[] PROGMEM = {-2.9653f, 2.3677f, -1.5968f};
ailayer_dense_f32_t dense_layer = AILAYER_DENSE_F32_M(3, weights_data_dense, bias_data_dense);

Example: Create the layer structure for automatic parameter distribution:
In C:

ailayer_dense_f32_t dense_layer = {
    .neurons = 3
};

In C, C++ and on Arduino:

ailayer_dense_f32_t dense_layer = AILAYER_DENSE_F32_A(3);
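
When the layer is created for automatic parameter distribution, memory for the weights and bias is assigned later at model level. A minimal sketch of that step, assuming an already built and compiled model and the aialgo_sizeof_parameter_memory() / aialgo_distribute_parameter_memory() helpers of the AIfES core API (not declared in this header):

// Sketch only: assign parameter memory to all layers of a compiled model (aimodel_t model).
uint32_t parameter_memory_size = aialgo_sizeof_parameter_memory(&model);
void *parameter_memory = malloc(parameter_memory_size);

// Every layer, including this Dense layer, receives its slice of the buffer.
aialgo_distribute_parameter_memory(&model, parameter_memory, parameter_memory_size);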

Example: Initialize and connect the layer:

x = ailayer_dense_f32_avr_pgm(&dense_layer, x);

Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
    The (successfully) initialized layer structure.
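
For orientation, a hedged sketch of how this layer might sit inside a complete F32 model. The input layer, the aialgo_* calls and the AITENSOR_2D_F32 macro are assumed from the AIfES core API (e.g. included via aifes.h) and are not part of this header; the shapes follow the 2-input / 3-neuron example above.

// Sketch only: 2 inputs -> 3 neurons, with the Dense parameters read from PROGMEM.
uint16_t input_shape[] = {1, 2}; // one sample with 2 features
ailayer_input_f32_t input_layer = { .input_dim = 2, .input_shape = input_shape };

aimodel_t model;
model.input_layer = ailayer_input_f32_default(&input_layer);
model.output_layer = ailayer_dense_f32_avr_pgm(&dense_layer, model.input_layer);
aialgo_compile_model(&model);

// Working memory for intermediate results; the parameters themselves stay in PROGMEM.
uint32_t memory_size = aialgo_sizeof_inference_memory(&model);
void *memory = malloc(memory_size);
aialgo_schedule_inference_memory(&model, memory, memory_size);

// Run one inference.
float input_data[] = {1.0f, 0.0f};
float output_data[3];
uint16_t output_shape[] = {1, 3};
aitensor_t input_tensor = AITENSOR_2D_F32(input_shape, input_data);
aitensor_t output_tensor = AITENSOR_2D_F32(output_shape, output_data);
aialgo_inference_model(&model, &input_tensor, &output_tensor);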

◆ ailayer_dense_q7_avr_pgm()

ailayer_t * ailayer_dense_q7_avr_pgm (ailayer_dense_q7_t * layer, ailayer_t * input_layer)

Initializes and connects a Dense layer with the Q7 AVR PGM implementation.

The weights, bias and quantization parameters have to be defined as constants in program memory (PROGMEM). The rest of the layer configuration is the same as for ailayer_dense_q7_default().

Example: Create the layer structure with pretrained weights:
In C:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, -67, 44, 58, -61, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = {
    .neurons = 3,
    .weights = {
        .tensor_params = (aimath_q7_params_t *) &weights_q_params_dense,
        .data = (int8_t *) weights_data_dense
    },
    .bias = {
        .tensor_params = (aimath_q31_params_t *) &bias_q_params_dense,
        .data = (int32_t *) bias_data_dense
    },
    .base.result.tensor_params = (aimath_q7_params_t *) &result_q_params_dense
};

In C, C++ and on Arduino:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, -67, 44, 58, -61, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_M(3,
                                 weights_data_dense, &weights_q_params_dense,
                                 bias_data_dense, &bias_q_params_dense,
                                 &result_q_params_dense);
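
The quantized values above encode the same parameters as the F32 example; as an illustration (based on the shift and zero_point fields, not a definition from this header), the usual Q-format mapping recovers them:

// Illustration: real value ~= (quantized value - zero_point) / 2^shift
// Weights (shift = 3, zero_point = 0):  -81 / 2^3 = -10.125    (F32 example: -10.1164f)
// Bias    (shift = 10, zero_point = 0): -3036 / 2^10 = -2.9648 (F32 example: -2.9653f)
// Result  (shift = 3, zero_point = 41): real value = (q - 41) / 2^3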

Example: Create the layer structure for automatic parameter distribution (parameter buffer must be in PROGMEM):
In C:

ailayer_dense_q7_t dense_layer = {
    .neurons = 3
};

In C, C++ and on Arduino:

ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_A(3);

Example: Initialize and connect the layer:

x = ailayer_dense_q7_avr_pgm(&dense_layer, x);

Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
    The (successfully) initialized layer structure.

◆ ailayer_dense_wt_q7_avr_pgm()

ailayer_t * ailayer_dense_wt_q7_avr_pgm (ailayer_dense_q7_t * layer, ailayer_t * input_layer)

Initializes and connects a Dense layer with the Q7 AVR PGM implementation and transposed weights.

This implementation is the same as ailayer_dense_q7_avr_pgm() but with a transposed weights matrix/tensor.
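
The difference is only the storage order of the six weight values, as the two Q7 examples in this file show:

// Illustration: the same weights in both storage orders (3-neuron example).
// ailayer_dense_q7_avr_pgm():    {-81, -67, 44, 58, -61, -72}  // read as a 2x3 matrix
// ailayer_dense_wt_q7_avr_pgm(): {-81, 58, -67, -61, 44, -72}  // its 3x2 transpose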

The weights, bias and quantization parameters have to be defined as constants in program memory (PROGMEM). The rest of the layer configuration is the same as for ailayer_dense_q7_default().

Example: Create the layer structure with pretrained weights:
In C:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, 58, -67, -61, 44, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = {
    .neurons = 3,
    .weights = {
        .tensor_params = (aimath_q7_params_t *) &weights_q_params_dense,
        .data = (int8_t *) weights_data_dense
    },
    .bias = {
        .tensor_params = (aimath_q31_params_t *) &bias_q_params_dense,
        .data = (int32_t *) bias_data_dense
    },
    .base.result.tensor_params = (aimath_q7_params_t *) &result_q_params_dense
};

In C, C++ and on Arduino:

// Use constant data only for inference. For training remove the const qualifier!!
// Weights (8 bit quantized)
const aimath_q7_params_t weights_q_params_dense PROGMEM = { .shift = 3, .zero_point = 0 };
const int8_t weights_data_dense[] PROGMEM = {-81, 58, -67, -61, 44, -72};
// Bias (32 bit quantized)
const aimath_q31_params_t bias_q_params_dense PROGMEM = { .shift = 10, .zero_point = 0 };
const int32_t bias_data_dense[] PROGMEM = {-3036, 2425, -1635};
// Result (8 bit quantized)
const aimath_q7_params_t result_q_params_dense PROGMEM = { .shift = 3, .zero_point = 41 };
ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_M(3,
                                 weights_data_dense, &weights_q_params_dense,
                                 bias_data_dense, &bias_q_params_dense,
                                 &result_q_params_dense);

Example: Create the layer structure for automatic parameter distribution (parameter buffer must be in PROGMEM):
In C:

ailayer_dense_q7_t dense_layer = {
    .neurons = 3
};

In C, C++ and on Arduino:

ailayer_dense_q7_t dense_layer = AILAYER_DENSE_Q7_A(3);

Example: Initialize and connect the layer:

x = ailayer_dense_wt_q7_avr_pgm(&dense_layer, x);

Parameters
    *layer        The layer structure to initialize.
    *input_layer  The prior layer.
Returns
    The (successfully) initialized layer structure.