
Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained)
#universalcomputation #pretrainedtransformers #finetuning
Large-scale pre-training followed by fine-tuning is a common recipe for success with transformer models in machine learning. However, most such transfer learning pre-trains the model on the same modality as the final task, or on a very similar one. This paper demonstrates that transformers can be fine-tuned across completely different modalities, for example from language to vision. Moreover, the authors show that this works while freezing the self-attention and feedforward layers, tuning less than 0.1% of all parameters. The paper further claims that language modeling is a superior pre-training task for such cross-domain transfer, and it goes through various ablation studies to make its point.
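
To make the setup concrete, here is a minimal sketch of the frozen-transformer recipe in PyTorch, assuming the HuggingFace transformers library with GPT-2 as the pretrained backbone; the input projection, output head, and dimensions are illustrative placeholders, not the paper's exact code (which is linked below).

import torch.nn as nn
from transformers import GPT2Model

backbone = GPT2Model.from_pretrained("gpt2")

# Freeze the entire pretrained backbone ...
for param in backbone.parameters():
    param.requires_grad = False

# ... then unfreeze only the groups fine-tuned inside the backbone:
# the layer norms ("ln_1", "ln_2", "ln_f") and positional embeddings ("wpe").
for name, param in backbone.named_parameters():
    if "ln" in name or name.startswith("wpe"):
        param.requires_grad = True

class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, backbone, input_dim, num_classes):
        super().__init__()
        # Freshly initialized input/output layers are trainable by default.
        self.input_proj = nn.Linear(input_dim, backbone.config.n_embd)
        self.backbone = backbone
        self.head = nn.Linear(backbone.config.n_embd, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, input_dim), e.g. a sequence of flattened patches
        hidden = self.backbone(inputs_embeds=self.input_proj(x)).last_hidden_state
        return self.head(hidden[:, -1])  # classify from the last position

model = FrozenPretrainedTransformer(backbone, input_dim=16, num_classes=10)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.3f}%)")

Only the layer norms, positional embeddings, and the new input/output layers receive gradients; the self-attention and feedforward weights stay fixed at their language-pretrained values.
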
OUTLINE:
0:00 - Intro & Overview
2:00 - Frozen Pretrained Transformers
4:50 - Evaluated Tasks
10:05 - The Importance of Training LayerNorm
17:10 - Modality Transfer
25:10 - Network Architecture Ablation
26:10 - Evaluation of the Attention Mask
27:20 - Are FPTs Overfitting or Underfitting?
28:20 - Model Size Ablation
28:50 - Is Initialization All You Need?
31:40 - Full Model Training Overfits
32:15 - Again the Importance of Training LayerNorm
33:10 - Conclusions & Comments
Paper: https://arxiv.org/abs/2103.05247
Code: https://github.com/kzl/universal-comp...
Abstract:
We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language improves performance and compute efficiency on non-language downstream tasks. In particular, we find that such pretraining enables FPT to generalize in zero-shot to these modalities, matching the performance of a transformer fully trained on these tasks.
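
As a concrete instance of such a modality, the sketch below (continuing the code above) turns a batch of images into the flat patch sequence the frozen transformer consumes; the 32x32 input size and 4x4 patches are illustrative assumptions rather than a faithful reproduction of the paper's preprocessing.

import torch

def image_to_patches(images, patch=4):
    # images: (batch, channels, height, width) ->
    # (batch, num_patches, channels * patch * patch)
    b, c, h, w = images.shape
    p = images.unfold(2, patch, patch).unfold(3, patch, patch)
    # (b, c, h/patch, w/patch, patch, patch): flatten each patch into one token
    return p.permute(0, 2, 3, 1, 4, 5).reshape(
        b, (h // patch) * (w // patch), c * patch * patch)

images = torch.randn(8, 3, 32, 32)    # a CIFAR-10-sized batch (assumed)
tokens = image_to_patches(images)     # (8, 64, 48)
vision_fpt = FrozenPretrainedTransformer(backbone, input_dim=48, num_classes=10)
logits = vision_fpt(tokens)           # (8, 10)
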
Authors: Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-ki...
BiliBili: https://space.bilibili.com/1824646584
If you want to support me, the best thing to do is to share the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n