Prune Transformers

Prune a transformer architecture with fasterai
Note

This example code is taken from the fastai docs

from fastai.text.all import *
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

pretrained_weights = 'gpt2'
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
model = GPT2LMHeadModel.from_pretrained(pretrained_weights)
path = untar_data(URLs.WIKITEXT_TINY)
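
The construction of the DataLoaders is omitted here; a condensed sketch of that step, following the fastai transformers tutorial, is shown below (the file names, the TransformersTokenizer transform, and the batch size/sequence length are taken from that tutorial and should be treated as assumptions, not fasterai code):

# Sketch of the DataLoaders construction, following the fastai transformers tutorial
df_train = pd.read_csv(path/'train.csv', header=None)
df_valid = pd.read_csv(path/'test.csv', header=None)
all_texts = np.concatenate([df_train[0].values, df_valid[0].values])

class TransformersTokenizer(Transform):
    def __init__(self, tokenizer): self.tokenizer = tokenizer
    def encodes(self, x):
        toks = self.tokenizer.tokenize(x)
        return tensor(self.tokenizer.convert_tokens_to_ids(toks))
    def decodes(self, x): return TitledStr(self.tokenizer.decode(x.cpu().numpy()))

# Train/validation split over the concatenated texts
splits = [range_of(df_train), list(range(len(df_train), len(all_texts)))]
tls = TfmdLists(all_texts, TransformersTokenizer(tokenizer), splits=splits, dl_type=LMDataLoader)
dls = tls.dataloaders(bs=8, seq_len=256)  # illustrative values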

Let’s create our fastai Learner.

learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), cbs=[DropOutput], metrics=Perplexity())
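
The DropOutput callback bridges fastai and HuggingFace: the model returns a tuple whose first element is the logits, and the callback keeps only that tensor so the loss function receives a plain tensor. A minimal definition, as in the fastai transformers tutorial:

class DropOutput(Callback):
    # Keep only the logits from the model's output tuple
    def after_pred(self): self.learn.pred = self.pred[0]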

And let’s try to extend a given prompt with the pretrained model.

prompt = "\n = Unicorn = \n \n A unicorn is a magical creature with a rainbow tail and a horn"
prompt_ids = tokenizer.encode(prompt)
inp = tensor(prompt_ids)[None]

preds = learn.model.generate(inp, max_length=40, num_beams=5, temperature=1.5)
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
tokenizer.decode(preds[0].cpu().numpy())
'\n = Unicorn = \n \n A unicorn is a magical creature with a rainbow tail and a horn on its head.\n\nA unicorn is a magical creature with a rainbow tail and a horn'
learn.validate()
(#2) [3.695716619491577,40.2744255065918]
learn.fit_one_cycle(1, 1e-4)
epoch train_loss valid_loss perplexity time
0 3.124115 2.844266 17.188944 07:50
Let's generate from the same prompt with the fine-tuned model. Training has moved the model to the GPU, so the input tensor must be sent there as well; a CPU input would make generate fail with a device-mismatch RuntimeError.

prompt_ids = tokenizer.encode(prompt)
inp = tensor(prompt_ids)[None]

preds = learn.model.generate(inp.cuda(), max_length=40, num_beams=5, temperature=1.5)

tokenizer.decode(preds[0].cpu().numpy())
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.

Make it sparse!

Let's now retrain our model, this time introducing sparsity.

learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), cbs=[DropOutput], metrics=Perplexity())

Unfortunately, the transformer model uses a custom layer, Conv1D, which is not part of PyTorch. To overcome this, we have to register this layer with our Granularities class so that fasterai knows what to sparsify.

Here, the Conv1D behaves like a Linear layer, i.e. the output is a linear transformation of the input, but the weight matrix is transposed and stored with shape (nx, nf).
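
A quick check of that claim (the transformers.pytorch_utils import path is for recent transformers versions; older releases expose Conv1D from transformers.modeling_utils):

import torch
from transformers.pytorch_utils import Conv1D

c = Conv1D(nf=12, nx=4)   # 4 input features -> 12 output features
x = torch.randn(2, 4)
print(c.weight.shape)     # torch.Size([4, 12]), i.e. (nx, nf)
print(torch.allclose(c(x), x @ c.weight + c.bias))  # True: y = x @ W + b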

doc(Conv1D)

Conv1D

Conv1D(nf, nx)

1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2). Basically works like a linear layer but the weights are transposed.

Args:
    nf (int): The number of output features.
    nx (int): The number of input features.

We can thus add the Conv1D granularity by using the add_granularity method, indicating the target module and the corresponding granularities it can handle (the same as Linear, so we can reuse them).

Granularities.add_granularity(Conv1D, Granularities._granularities_Linear)

Let's now define our SparsifyCallback. Let's say we want to make our model 30% sparse by removing the lowest-magnitude weights in each Conv1D layer (the large_final criteria keeps the weights with the largest absolute value at the end of training).

sp_cb = SparsifyCallback(sparsity=30, granularity='weight', context='local', criteria=large_final, schedule=one_cycle, layer_type=Conv1D)
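
To make the configuration concrete, here is an illustrative sketch of what local, weight-level, magnitude-based pruning computes for a single layer (this is not fasterai's actual implementation):

import torch

def local_magnitude_mask(w, sparsity):
    # Zero out the `sparsity`% smallest-magnitude weights of this layer,
    # keeping the largest ones (the idea behind the large_final criteria).
    k = int(w.numel() * sparsity / 100)
    if k == 0: return torch.ones_like(w)
    threshold = w.abs().flatten().kthvalue(k).values
    return (w.abs() > threshold).float()

w = torch.randn(768, 2304)         # e.g. the shape of a GPT-2 c_attn weight
mask = local_magnitude_mask(w, 30)
print((mask == 0).float().mean())  # ≈ 0.30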

We now only have to pass our callback to fastai.

learn.fit_one_cycle(1, 1e-4, cbs=sp_cb)
Pruning of weight until a sparsity of [30]%
Saving Weights at epoch 0
Sparsity at the end of epoch 0: [30.0]%
Final Sparsity: [30.0]%
Sparsity in Conv1D 9: 30.00%
Sparsity in Conv1D 10: 30.00%
[...the same 30.00% is reported for each of the 48 pruned Conv1D layers...]
Sparsity in Conv1D 148: 30.00%
Sparsity in Conv1D 149: 30.00%
epoch train_loss valid_loss perplexity time
0 3.151266 2.882525 17.859306 09:44
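
We can also verify the result directly on a layer, for instance the first attention projection (the module path below is the standard GPT2LMHeadModel layout):

# Fraction of exactly-zero weights in one pruned Conv1D layer
conv = learn.model.transformer.h[0].attn.c_attn
print((conv.weight == 0).float().mean())  # ≈ 0.30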

And we can check the prediction for the same prompt as before.

prompt_ids = tokenizer.encode(prompt)
inp = tensor(prompt_ids)[None]

preds = learn.model.generate(inp.cuda(), max_length=40, num_beams=5, temperature=1.5)

tokenizer.decode(preds[0].cpu().numpy())
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
'\n = Unicorn = \n \n A unicorn is a magical creature with a rainbow tail and a horn @-@ shaped head. The unicorn is a member of the <unk> <unk>'

That's it! You now have a sparse transformer that performs on par with the dense model. However, this model is currently no more efficient speed- and storage-wise: the pruned weights are only masked to zero. To get such a speed-up, I suggest you look at the granularity section.