Do As I Can, Not As I Say: Grounding Language in Robotic Affordances (SayCan - Paper Explained)
#saycan #robots #ai
Large Language Models are excellent at generating plausible plans for real-world problems, but without interacting with the environment they have no way to estimate which of these plans are feasible or appropriate. SayCan combines the semantic capabilities of language models with a bank of low-level skills, each available to the agent as an individual policy to execute. SayCan automatically selects the best policy to execute by trading off the policy's usefulness for progressing toward the goal, as judged by the language model, against the policy's probability of executing successfully, as judged by the corresponding value function. The result is a system that can generate and execute long-horizon action sequences in the real world to fulfil complex tasks.
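The decision rule described above can be sketched in a few lines of Python: at each step, every available skill is scored by the product of the LLM's usefulness estimate and the value function's feasibility estimate, and the argmax is executed. This is a minimal illustrative sketch, not the authors' implementation; the function names (llm_score, value_fn, saycan_step) are assumptions made here for clarity.

```python
# Hedged sketch of SayCan's skill selection: score each low-level skill by
# (LLM usefulness) x (value-function feasibility) and execute the argmax.
# llm_score and value_fn are illustrative stand-ins, not the paper's API.

def saycan_step(skills, llm_score, value_fn, instruction, history, state):
    """Return the skill maximizing p_LLM(skill | instruction, history) * V(state, skill)."""
    return max(
        skills,
        key=lambda s: llm_score(instruction, history, s) * value_fn(state, s),
    )

def saycan_plan(skills, llm_score, value_fn, instruction, state,
                max_steps=10, done="done"):
    """Greedily chain skills until the terminating 'done' skill is selected."""
    history = []
    for _ in range(max_steps):
        skill = saycan_step(skills, llm_score, value_fn, instruction, history, state)
        history.append(skill)
        if skill == done:
            break
    return history
```

The multiplication is the key idea: a skill the language model loves but the robot cannot currently perform (low value) gets suppressed, and vice versa.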
Sponsor: Zeta Alpha
https://zeta-alpha.com
Use code YANNIC for 20% off!
OUTLINE:
0:00 - Introduction & Overview
3:20 - Sponsor: Zeta Alpha
5:00 - Using language models for action planning
8:00 - Combining LLMs with learned atomic skills
16:50 - The full SayCan system
20:30 - Experimental setup and data collection
21:25 - Some weaknesses & strengths of the system
27:00 - Experimental results
Paper: https://arxiv.org/abs/2204.01691
Website: https://say-can.github.io/
Abstract:
Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project's website and the video can be found at this https URL
Authors: Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yann...
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n