# snntoolbox.conversion

## snntoolbox.conversion.utils

This module performs modifications on the network parameters during conversion from analog to spiking.

- `normalize_parameters` – Normalize the parameters of a network.

@author: rbodo

`snntoolbox.conversion.utils.normalize_parameters(model, config, **kwargs)`

Normalize the parameters of a network.

The parameters of each layer are normalized with respect to the maximum activation, or the n-th percentile of activations.

Generates plots of the activity- and weight-distribution before and after normalization. Note that plotting the activity-distribution can be very time- and memory-consuming for larger networks.
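The core rescaling step of data-based normalization can be sketched as follows. This is a minimal NumPy illustration, not the snntoolbox implementation: `normalize_layer` and the scale factors are hypothetical names, and the scale factors would come from the recorded activation distributions.

```python
import numpy as np

def normalize_layer(weights, biases, prev_scale, scale):
    # Rescale this layer's parameters: weights by the ratio of the
    # previous layer's scale factor to this layer's, biases by this
    # layer's scale factor alone, so normalized activations stay <= 1.
    return weights * prev_scale / scale, biases / scale

w = np.array([[2.0, -1.0]])
b = np.array([0.5, 0.5])
w_norm, b_norm = normalize_layer(w, b, prev_scale=1.0, scale=2.0)
```

Dividing by the previous layer's scale factor as well as multiplying by this layer's keeps the rescaling consistent across the whole network, so the forward pass computes the same function up to a global scaling.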

`snntoolbox.conversion.utils.get_scale_fac(activations, percentile)`

Determine the activation value at the given percentile of the layer's activation distribution.

Parameters:

- activations (np.array) – The activations of cells in a specific layer, flattened to 1-d.
- percentile (int) – Percentile at which to determine the activation value.

Returns:

- scale_fac (float) – Maximum (or percentile) of activations in this layer. Parameters of the respective layer are scaled by this value.
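The percentile-based scale factor amounts to taking a percentile of the flattened activation distribution, for example via `np.percentile` (a sketch of the idea, not the toolbox's exact code):

```python
import numpy as np

def scale_factor(activations, percentile):
    # The value below which `percentile` percent of the flattened
    # activations fall; percentile=100 gives the plain maximum.
    return np.percentile(activations, percentile)

acts = np.array([0.1, 0.5, 2.0, 10.0])
```

Using a percentile below 100 makes the scale factor robust against a few outlier activations that would otherwise dominate the normalization.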
`snntoolbox.conversion.utils.get_percentile(config, layer_idx=None)`

Get percentile at which to draw the maximum activation of a layer.

Parameters:

- config (configparser.ConfigParser) – Settings.
- layer_idx (Optional[int]) – Layer index.

Returns:

- percentile (int) – Percentile.
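The percentile is read from a `configparser` settings object. The section and option names below are illustrative assumptions; the keys snntoolbox actually uses may differ:

```python
import configparser

# Hypothetical settings layout, for illustration only.
config = configparser.ConfigParser()
config.read_dict({'normalization': {'percentile': '99.9'}})

percentile = config.getfloat('normalization', 'percentile')
```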
`snntoolbox.conversion.utils.apply_normalization_schedule(perc, layer_idx)`

Transform the percentile according to a rule that depends on the layer index.

Parameters:

- perc (float) – Original percentile.
- layer_idx (int) – Layer index, used to decrease the scale factor in higher layers in order to maintain high spike rates.

Returns:

- perc (int) – Modified percentile.
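One plausible schedule is sketched below. The exact rule snntoolbox applies may differ; the point is only that lowering the percentile with depth shrinks the scale factor of higher layers, which keeps their normalized activations, and hence spike rates, high:

```python
def apply_schedule(perc, layer_idx):
    # Hypothetical rule (the actual snntoolbox schedule may differ):
    # reduce the percentile slightly with depth, clamped at zero.
    return max(0.0, perc - 0.02 * layer_idx)
```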
`snntoolbox.conversion.utils.get_activations_layer(layer_in, layer_out, x, batch_size=None)`

Get activations of a specific layer, iterating batch-wise over the complete data set.

Parameters:

- layer_in (keras.layers.Layer) – The input to the network.
- layer_out (keras.layers.Layer) – The layer for which to get the activations.
- x (np.array) – The samples for which to compute activations. With data of the form (channels, num_rows, num_cols), x has dimension (batch_size, channels * num_rows * num_cols) for a multi-layer perceptron, and (batch_size, channels, num_rows, num_cols) for a convolutional net.
- batch_size (Optional[int]) – Batch size.

Returns:

- activations (np.ndarray) – The activations of cells in the given layer. Has the same shape as layer_out.
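The batch-wise iteration over the data set can be sketched independently of Keras. Here `forward_fn` stands in for whatever callable maps an input batch to the target layer's activations; the function name is illustrative, not part of the snntoolbox API:

```python
import numpy as np

def activations_over_dataset(forward_fn, x, batch_size=32):
    # Iterate batch-wise over the whole data set and stack the layer
    # outputs, so the full activation tensor is built incrementally
    # instead of pushing the entire data set through at once.
    outs = [forward_fn(x[i:i + batch_size])
            for i in range(0, len(x), batch_size)]
    return np.concatenate(outs)

x = np.arange(10, dtype=float)
acts = activations_over_dataset(lambda batch: batch * 2.0, x, batch_size=4)
```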
`snntoolbox.conversion.utils.get_activations_batch(ann, x_batch)`

Compute layer activations of an ANN.

Parameters:

- ann (keras.models.Model) – Needed to compute activations.
- x_batch (np.array) – The input samples to use for determining the layer activations. With data of the form (channels, num_rows, num_cols), x_batch has dimension (batch_size, channels * num_rows * num_cols) for a multi-layer perceptron, and (batch_size, channels, num_rows, num_cols) for a convolutional net.

Returns:

- activations_batch (list[tuple[np.array, str]]) – Each tuple (activations, label) represents a layer of the ANN for which activations can be computed (e.g. Dense, Conv2D). activations has the same shape as the original layer, e.g. (batch_size, n_features, n_rows, n_cols) for a convolution layer; label is a string specifying the layer type, e.g. 'Dense'.
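The shape of the return value can be illustrated with a toy two-layer perceptron in plain NumPy standing in for the Keras model (the network and its weights here are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
x_batch = rng.random((2, 3))

# Toy two-layer perceptron; each tuple pairs a layer's activations with
# a string label for its type, mirroring the documented return format.
w1 = rng.random((3, 4))
w2 = rng.random((4, 2))
h = np.maximum(x_batch @ w1, 0.0)   # hidden Dense layer, ReLU
y = np.maximum(h @ w2, 0.0)         # output Dense layer, ReLU
activations_batch = [(h, 'Dense'), (y, 'Dense')]
```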
`snntoolbox.conversion.utils.try_reload_activations(layer, model, x_norm, batch_size, activ_dir)`