
CM3: A Causal Masked Multimodal Model of the Internet (Paper Explained w/ Author Interview)
#cm3 #languagemodel #transformer
This video contains a paper explanation and an incredibly informative interview with first author Armen Aghajanyan.
Autoregressive Transformers have come to dominate many fields in Machine Learning, from text generation to image creation and beyond. However, there are two problems. First, the training data is usually scraped from the web, is only uni- or bi-modal, and throws away much of the structure of the original websites; second, language modelling losses are uni-directional. CM3 addresses both problems: it operates directly on HTML and includes text, hyperlinks, and even images (via VQGAN tokenization), and can therefore be used in plenty of ways: text generation, captioning, image creation, entity linking, and much more. It also introduces a new training strategy called Causally Masked Language Modelling, which brings a level of bi-directionality into autoregressive language modelling. In the interview after the paper explanation, Armen and I go deep into the how and why of these giant models, go over the stunning results, and make sense of what they mean for the future of universal models.
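As a rough illustration of the document format described above (not the authors' exact pipeline: the tag layout and the stand-in encoder below are illustrative assumptions), a web page could be serialized into a single stream where text, markup, hyperlinks, and discrete image codes appear in their original order:

```python
# Minimal sketch, assuming a simplified HTML layout and a toy stand-in for the
# VQGAN/VQVAE image tokenizer; not the authors' implementation.

def toy_vqgan_encode(image_bytes: bytes, n_codes: int = 16) -> list[int]:
    """Stand-in for a VQGAN encoder mapping an image to discrete integer codes."""
    return [b % 1024 for b in image_bytes[:n_codes]]  # dummy codes for illustration

def serialize_document(title: str, paragraphs: list[str], image_bytes: bytes) -> str:
    image_codes = toy_vqgan_encode(image_bytes)
    image_str = " ".join(str(c) for c in image_codes)
    parts = [f"<title>{title}</title>"]
    parts += [f"<p>{p}</p>" for p in paragraphs]
    parts.append(f'<img src="{image_str}">')  # image tokens kept where the image appeared
    return "".join(parts)

print(serialize_document("CM3", ["A causally masked multimodal model."], b"\x01\x02\x03"))
```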
OUTLINE:
0:00 - Intro & Overview
6:30 - Directly learning the structure of HTML
12:30 - Causally Masked Language Modelling
18:50 - A short look at how to use this model
23:20 - Start of interview
25:30 - Feeding language models with HTML
29:45 - How to get bi-directionality into decoder-only Transformers?
37:00 - Images are just tokens
41:15 - How does one train such giant models?
45:40 - CM3 results are amazing
58:20 - Large-scale dataset collection and content filtering
1:04:40 - More experimental results
1:12:15 - Why don't we use raw HTML?
1:18:20 - Does this paper contain too many things?
Paper: https://arxiv.org/abs/2201.07520
Abstract:
We introduce CM3, a family of causally masked generative models trained over a large corpus of structured multi-modal documents that can contain both text and image tokens. Our new causally masked approach generates tokens left to right while also masking out a small number of long token spans that are generated at the end of the string, instead of at their original positions. The causal masking objective provides a hybrid of the more common causal and masked language models, enabling full generative modeling while also providing bidirectional context when generating the masked spans. We train causally masked language-image models on large-scale web and Wikipedia articles, where each document contains all of the text, hypertext markup, hyperlinks, and image tokens (from a VQVAE-GAN), provided in the order they appear in the original HTML source (before masking). The resulting CM3 models can generate rich, structured, multi-modal outputs while conditioning on arbitrary masked document contexts, and thereby implicitly learn a wide range of text, image, and cross-modal tasks. They can be prompted to recover, in a zero-shot fashion, the functionality of models such as DALL-E, GENRE, and HTLM. We set a new state of the art in zero-shot summarization, entity linking, and entity disambiguation while maintaining competitive performance in the fine-tuning setting. We can generate images unconditionally, generate images conditioned on text (like DALL-E), and do captioning, all in a zero-shot setting with a single model.
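The causally masked objective described in the abstract can be pictured as a simple preprocessing step: cut out a span, replace it with a sentinel, and append the span contents at the end of the sequence so an ordinary left-to-right model still generates them, but with context from both sides of the gap. A minimal single-span sketch (the sentinel strings and word-level tokenization are assumptions for illustration, not the paper's exact tokenization):

```python
import random

# Sketch of causal masking: one span is removed, marked with a sentinel, and
# regenerated at the end of the sequence. Sentinel names are hypothetical.
def causally_mask(tokens: list[str], span_len: int = 3,
                  rng: random.Random = random.Random(0)) -> list[str]:
    start = rng.randrange(0, len(tokens) - span_len)
    span = tokens[start:start + span_len]
    masked = tokens[:start] + ["<mask:0>"] + tokens[start + span_len:]
    return masked + ["<mask:0>"] + span + ["<eos>"]

doc = "the quick brown fox jumps over the lazy dog".split()
# Prints the document with one span replaced by <mask:0> and moved to the end.
print(" ".join(causally_mask(doc)))
```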
Authors: Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer
Links:
Merch: store.ykilcher.com
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n