Introduction

Batch Normalization [1, 2] is a technique that normalizes the input of each layer to make training faster and more stable. In practice, it is an extra layer that we generally add after the computation layer (e.g. a convolution) and before the non-linearity.

It consists of 2 steps:

  1. Normalize the batch by first subtracting its mean $\mu$, then dividing it by its standard deviation $\sigma$.
  2. Further scale the result by a factor $\gamma$ and shift it by a factor $\beta$. These are the learnable parameters of the batch normalization layer; they allow the network to undo the normalization if a mean of 0 and a standard deviation of 1 are not what it actually needs.
$$ \Large \begin{aligned} &\mu_{\mathcal{B}} \leftarrow \frac{1}{m} \sum_{i=1}^{m} x_{i}\\ &\sigma_{\mathcal{B}}^{2} \leftarrow \frac{1}{m} \sum_{i=1}^{m}\left(x_{i}-\mu_{\mathcal{B}}\right)^{2}\\ &\widehat{x}_{i} \leftarrow \frac{x_{i}-\mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2}+\epsilon}}\\ &y_{i} \leftarrow \gamma \widehat{x}_{i}+\beta \equiv \mathrm{BN}_{\gamma, \beta}\left(x_{i}\right) \end{aligned} $$
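
To make these two steps concrete, here is a minimal sketch of the training-time computation on a batch of feature maps, written with plain PyTorch tensor operations (the tensor shapes and the comparison against nn.BatchNorm2d are just for illustration):

import torch
import torch.nn as nn

x = torch.randn(8, 3, 32, 32)  # a batch of 8 feature maps with 3 channels

# Step 1: normalize each channel with the batch statistics
mu = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
x_hat = (x - mu) / torch.sqrt(var + 1e-5)

# Step 2: scale by gamma and shift by beta (learnable, here left at their initial values)
gamma = torch.ones(1, 3, 1, 1)
beta = torch.zeros(1, 3, 1, 1)
y = gamma * x_hat + beta

# Sanity check against PyTorch's own layer (in training mode)
bn = nn.BatchNorm2d(3)
print(torch.allclose(y, bn(x), atol=1e-5))  # True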

Due to its effectiveness for training neural networks, batch normalization is now widely used. But how useful is it at inference time?

Once training has ended, each batch normalization layer possesses a specific set of $\gamma$ and $\beta$, but also of $\mu$ and $\sigma$, the latter two being computed as exponentially weighted (running) averages during training. This means that, during inference, the batch normalization layer acts as a simple linear transformation of what comes out of the previous layer, which is often a convolution.
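
Written out with the running statistics frozen, the batch normalization output for a pre-activation $z$ is indeed nothing more than an affine function of $z$:

$$ \Large \mathrm{BN}_{\gamma, \beta}(z)=\gamma \cdot \frac{z-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta=\frac{\gamma}{\sqrt{\sigma^{2}+\epsilon}} \cdot z+\left(\beta-\frac{\gamma \mu}{\sqrt{\sigma^{2}+\epsilon}}\right) $$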

Since a convolution is also a linear transformation, this means that both operations can be merged into a single linear transformation!

This would not only remove some unnecessary parameters, but also reduce the number of operations performed at inference time.



How to do that in practice?

With a little bit of math, we can easily rearrange the terms of the convolution to take the batch normalization into account.

As a little reminder, the convolution operation followed by the batch normalization operation can be expressed, for an input $x$, as:

$$ \Large \begin{aligned} z &=W * x+b \\ \text { out } &=\gamma \cdot \frac{z-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta \end{aligned} $$

So, if we re-arrange the $W$ and $b$ of the convolution to take the parameters of the batch normalization into account, as follows:

$$ \Large \begin{aligned} w_{\text {fold }} &=\gamma \cdot \frac{W}{\sqrt{\sigma^{2}+\epsilon}} \\ b_{\text {fold }} &=\gamma \cdot \frac{b-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta \end{aligned} $$

We can remove the batch normalization layer and still have the same results!

Note: usually, you don’t put a bias in a layer that precedes a batch normalization layer. It is useless and a waste of parameters, since any constant added by the bias is cancelled out when the batch normalization subtracts the mean.
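
Here is a minimal sketch of how the two folding formulas above could be implemented for a Conv2d / BatchNorm2d pair (the function name fold_conv_bn and the sanity check are mine, not from the original code):

import torch
import torch.nn as nn

def fold_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a new Conv2d whose weights and bias absorb the batch norm parameters."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    with torch.no_grad():
        # per output channel: gamma / sqrt(sigma^2 + eps)
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        # w_fold = gamma * W / sqrt(sigma^2 + eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        # b_fold = gamma * (b - mu) / sqrt(sigma^2 + eps) + beta
        b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
        fused.bias.copy_(scale * (b - bn.running_mean) + bn.bias)
    return fused

# Quick check: the fused convolution matches conv followed by batch norm in eval mode
conv, bn = nn.Conv2d(3, 16, 3, padding=1, bias=False), nn.BatchNorm2d(16)
bn(conv(torch.randn(8, 3, 32, 32)))   # one training-mode pass to get non-trivial running stats
bn.eval()
x = torch.randn(1, 3, 32, 32)
print(torch.allclose(fold_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5))  # True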


How efficient is it?

We will try it on 2 common architectures:

  1. VGG16 with batch norm
  2. ResNet50

Just for the demonstration, we will use the Imagenette dataset and PyTorch. Both networks will be trained for 5 epochs, and we will then look at what changes in terms of number of parameters and inference time.
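
The training code itself is not the point of the post; judging from the learner calls shown later, a minimal fastai setup along these lines would do (the dataset variant, image size and transforms here are my assumptions):

from fastai.vision.all import *

path = untar_data(URLs.IMAGENETTE_160)                    # Imagenette, 160 px version (assumed)
dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(128))
learn = cnn_learner(dls, vgg16_bn, metrics=accuracy)      # swap in resnet50 for the second experiment
learn.fit_one_cycle(5)
model = learn.model.eval()                                # the model used for the measurements below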


VGG16

Let’s start by training VGG16 for 5 epochs (the final accuracy doesn’t matter):

| epoch | train_loss | valid_loss | accuracy | time |
|-------|------------|------------|----------|-------|
| 0 | 1.985012 | 3.945934 | 0.226497 | 00:31 |
| 1 | 1.868819 | 1.620619 | 0.472611 | 00:31 |
| 2 | 1.574975 | 1.295385 | 0.576815 | 00:31 |
| 3 | 1.305211 | 1.161460 | 0.617325 | 00:32 |
| 4 | 1.072395 | 0.955824 | 0.684076 | 00:32 |

Then let’s look at its number of parameters:

Total parameters : 134,309,962
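
This count can be obtained with a one-liner (here model is the trained network):

print(f"Total parameters : {sum(p.numel() for p in model.parameters()):,}")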

We can get the initial inference time by using the %%timeit magic command:

%%timeit
model(x[0][None].cuda())
2.77 ms ± 1.65 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
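
The post does not show the folding routine applied to the whole network, but for architectures whose convolutional part is a plain nn.Sequential of Conv2d → BatchNorm2d → ReLU blocks, as in VGG16, a sketch could look like the following, reusing the fold_conv_bn function from above (fold_sequential is my name; residual networks like ResNet50 additionally require walking into their blocks):

import copy
import torch.nn as nn

def fold_sequential(seq: nn.Sequential) -> None:
    """Replace every Conv2d -> BatchNorm2d pair by a single fused Conv2d, in place."""
    layers = list(seq.children())
    for i in range(len(layers) - 1):
        if isinstance(layers[i], nn.Conv2d) and isinstance(layers[i + 1], nn.BatchNorm2d):
            seq[i] = fold_conv_bn(layers[i], layers[i + 1])  # fused convolution
            seq[i + 1] = nn.Identity()                       # the batch norm is no longer needed

folded_model = copy.deepcopy(model).eval()
sequential_blocks = [m for m in folded_model.modules() if isinstance(m, nn.Sequential)]
for seq in sequential_blocks:
    fold_sequential(seq)

PyTorch also ships a utility that performs this kind of Conv + BatchNorm fusion (torch.quantization.fuse_modules), which can be used instead of a hand-rolled traversal.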

So now if we apply batch normalization folding, we have:

Total parameters : 134,301,514

And:

%%timeit
folded_model(x[0][None].cuda())
2.41 ms ± 2.49 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

So 8,448 parameters are removed and, even better, inference is almost 0.4 ms faster! Most importantly, the folding is completely lossless: there is absolutely no change in terms of performance:

folded_learner.validate()
[0.9558241, tensor(0.6841)]

Let’s see how it behaves in the case of ResNet50!


ResNet50

Again, we start by training it for 5 epochs:

| epoch | train_loss | valid_loss | accuracy | time |
|-------|------------|------------|----------|-------|
| 0 | 2.076416 | 2.491038 | 0.246624 | 00:20 |
| 1 | 1.696750 | 1.517581 | 0.489427 | 00:19 |
| 2 | 1.313028 | 1.206347 | 0.606115 | 00:20 |
| 3 | 1.057600 | 0.890211 | 0.716943 | 00:21 |
| 4 | 0.828224 | 0.793130 | 0.740892 | 00:19 |

The initial number of parameters is:

Total parameters : 23,528,522

And inference time is:

%%timeit
model(x[0][None].cuda())
6.17 ms ± 13.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

After using batch normalization folding, we have:

Total parameters : 23,501,962

And:

%%timeit
final_model(x[0][None].cuda())
4.47 ms ± 8.97 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

This time, 26,560 parameters are removed and, even more impressive, the inference time is reduced by 1.7 ms! And still without any drop in performance:

final_learner.validate()
[0.7931296, tensor(0.7409)]


So, if we can reduce the inference time and the number of parameters of our models without any drop in performance, why shouldn’t we always do it?


I hope that this blog post helped you! Feel free to give me feedback or ask me questions if something is not clear enough.

Code available at this address!