Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained)

#universalcomputation #pretrainedtransformers #finetuning

Large-scale pre-training followed by fine-tuning is a common recipe for success with transformer models in machine learning. However, most such transfer learning is done when a model is pre-trained on the same or a very similar modality as the final task to be solved. This paper demonstrates that transformers can be fine-tuned on completely different modalities, for example from language to vision. Moreover, the authors show that this works while freezing the self-attention and feedforward layers, tuning less than 0.1% of all parameters. The paper further claims that language modeling is a superior pre-training task for such cross-domain transfer, and it runs a series of ablation studies to support these points.
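
To make the setup concrete, here is a minimal sketch of a Frozen Pretrained Transformer, assuming the HuggingFace transformers GPT-2 implementation. This is my own illustration, not the authors' code: the class name, the input/output dimensions, and the last-token readout are made-up choices for the example; only the LayerNorm parameters, positional embeddings, and the new input/output layers are left trainable, as described in the paper.

```python
# Minimal FPT sketch, assuming the HuggingFace `transformers` GPT-2 model.
# Names, dimensions and the last-token readout are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")  # language-pretrained backbone
        hidden = self.gpt2.config.n_embd  # 768 for the base model

        # Freeze the self-attention and feedforward blocks; keep only the
        # LayerNorm parameters (names like h.0.ln_1.weight, ln_f.bias) and
        # the positional embeddings (wpe) trainable.
        for name, param in self.gpt2.named_parameters():
            param.requires_grad = ("ln" in name) or ("wpe" in name)

        # Newly initialized, trainable input and output layers for the new modality.
        self.embed_in = nn.Linear(input_dim, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim), e.g. bit strings or flattened image patches
        h = self.gpt2(inputs_embeds=self.embed_in(x)).last_hidden_state
        return self.head(h[:, -1])  # read the prediction off the last token


model = FrozenPretrainedTransformer(input_dim=16, num_classes=2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.2%}")
```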

OUTLINE:
0:00 - Intro & Overview
2:00 - Frozen Pretrained Transformers
4:50 - Evaluated Tasks
10:05 - The Importance of Training LayerNorm
17:10 - Modality Transfer
25:10 - Network Architecture Ablation
26:10 - Evaluation of the Attention Mask
27:20 - Are FPTs Overfitting or Underfitting?
28:20 - Model Size Ablation
28:50 - Is Initialization All You Need?
31:40 - Full Model Training Overfits
32:15 - Again the Importance of Training LayerNorm
33:10 - Conclusions & Comments

Paper: https://arxiv.org/abs/2103.05247
Code: https://github.com/kzl/universal-comp...

Abstract:
We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language improves performance and compute efficiency on non-language downstream tasks. In particular, we find that such pretraining enables FPT to generalize in zero-shot to these modalities, matching the performance of a transformer fully trained on these tasks.
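
As an illustration of what fine-tuning on a different modality means in practice, here is a short sketch of how an image classification input such as CIFAR-10 can be turned into a token sequence for the frozen language model: the image is split into small patches and each flattened patch becomes one token. The 4x4 patch size and the helper name image_to_patch_tokens are my own assumptions for the example; the resulting tokens would then go through the trainable input projection of the FPT sketch above (input_dim=48, num_classes=10).

```python
# Sketch of presenting a vision task to the frozen language model: a 32x32
# RGB image becomes a sequence of 64 tokens, one per 4x4 patch (patch size
# and value ordering are assumptions for illustration).
import torch


def image_to_patch_tokens(images: torch.Tensor, patch: int = 4) -> torch.Tensor:
    # images: (batch, 3, 32, 32) -> tokens: (batch, 64, 48)
    b, c, h, w = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)  # (b, c, 8, 8, 4, 4)
    x = x.permute(0, 2, 3, 1, 4, 5)                             # (b, 8, 8, c, 4, 4)
    return x.reshape(b, (h // patch) * (w // patch), c * patch * patch)


batch = torch.rand(2, 3, 32, 32)       # stand-in for a CIFAR-10 mini-batch
tokens = image_to_patch_tokens(batch)  # torch.Size([2, 64, 48])
print(tokens.shape)
```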

Authors: Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-ki...
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
