CM3: A Causal Masked Multimodal Model of the Internet (Paper Explained w/ Author Interview)
#cm3 #languagemodel #transformer
This video contains a paper explanation and an incredibly informative interview with first author Armen Aghajanyan.
Autoregressive Transformers have come to dominate many fields in Machine Learning, from text generation to image creation and many more. However, there are two problems: first, the training data is usually scraped from the web in uni- or bi-modal form, throwing away much of the structure of the original websites, and second, language modelling losses are uni-directional. CM3 addresses both problems: it operates directly on HTML, including text, hyperlinks, and even images (via VQGAN tokenization), and can therefore be used in plenty of ways: text generation, captioning, image creation, entity linking, and much more. It also introduces a new training strategy called Causally Masked Language Modelling, which brings a level of bi-directionality into autoregressive language modelling. In the interview after the paper explanation, Armen and I go deep into the how and why of these giant models, go over the stunning results, and make sense of what they mean for the future of universal models.
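To make the causally masked objective concrete, here is a minimal sketch of the data transform (the sentinel name and the single-span simplification are my assumptions; the paper masks a small number of long spans): a span is cut out of the sequence, replaced by a sentinel, and re-generated at the end, where both its left and right context are already in the prefix.

```python
import random

def causally_mask(tokens, sentinel="<mask:0>"):
    # Pick one contiguous span to hide (CM3 masks a small number of
    # long spans; a single span keeps the sketch simple).
    start = random.randrange(len(tokens))
    end = random.randrange(start + 1, len(tokens) + 1)
    # The span is replaced by a sentinel and appended at the end after
    # a second sentinel, so a left-to-right model generates it with
    # bidirectional context already visible.
    return tokens[:start] + [sentinel] + tokens[end:] \
         + [sentinel] + tokens[start:end]

print(causally_mask(list("abcdefgh")))
# e.g. ['a', 'b', '<mask:0>', 'g', 'h', '<mask:0>', 'c', 'd', 'e', 'f']
```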
OUTLINE:
0:00 - Intro & Overview
6:30 - Directly learning the structure of HTML
12:30 - Causally Masked Language Modelling
18:50 - A short look at how to use this model
23:20 - Start of interview
25:30 - Feeding language models with HTML
29:45 - How to get bi-directionality into decoder-only Transformers?
37:00 - Images are just tokens
41:15 - How does one train such giant models?
45:40 - CM3 results are amazing
58:20 - Large-scale dataset collection and content filtering
1:04:40 - More experimental results
1:12:15 - Why don't we use raw HTML?
1:18:20 - Does this paper contain too many things?
Paper: https://arxiv.org/abs/2201.07520
Abstract:
We introduce CM3, a family of causally masked generative models trained over a large corpus of structured multi-modal documents that can contain both text and image tokens. Our new causally masked approach generates tokens left to right while also masking out a small number of long token spans that are generated at the end of the string, instead of at their original positions. The causal masking objective provides a hybrid of the more common causal and masked language models, enabling full generative modeling while also providing bidirectional context when generating the masked spans. We train causally masked language-image models on large-scale web and Wikipedia articles, where each document contains all of the text, hypertext markup, hyperlinks, and image tokens (from a VQVAE-GAN), provided in the order they appear in the original HTML source (before masking). The resulting CM3 models can generate rich, structured, multi-modal outputs while conditioning on arbitrary masked document contexts, and thereby implicitly learn a wide range of text, image, and cross-modal tasks. They can be prompted to recover, in a zero-shot fashion, the functionality of models such as DALL-E, GENRE, and HTLM. We set the new state of the art in zero-shot summarization, entity linking, and entity disambiguation while maintaining competitive performance in the fine-tuning setting. We can generate images unconditionally, conditioned on text (like DALL-E), and do captioning, all in a zero-shot setting with a single model.
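Because the model is trained on raw HTML, zero-shot tasks reduce to writing HTML-shaped prompts. The sketch below illustrates the idea; the exact template strings are my assumptions about the flavour of the prompting, not the paper's verbatim templates.

```python
# Hypothetical prompt builders for an HTML-native multimodal model.

def image_generation_prompt(caption: str) -> str:
    # DALL-E-style generation: condition on the alt text and let the
    # model complete the src attribute with VQGAN image tokens.
    return f'<img alt="{caption}" src="'

def captioning_prompt(image_tokens: str) -> str:
    # Captioning: mask out the alt text; under the causally masked
    # objective the model regenerates it at the end, with the image
    # tokens (the right context) already visible.
    return f'<img alt="<mask:0>" src="{image_tokens}"><mask:0>'

print(image_generation_prompt("a red sports car on a mountain road"))
```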
Authors: Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer
Links:
Merch: store.ykilcher.com
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n