Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

---
Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.
---
The code used for ChatGPT is proprietary and not available for public examination.
---
This prevents the public from verifying that it does not pose a serious danger.
---
Nor can it be proven that its developers are not misrepresenting its true capabilities.
---
OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love
---
EPISODE LINKS:
Eliezer's Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer's Blog page: https://www.lesswrong.com/users/eliez...
Books and resources mentioned:
1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDP...
2. Adaptation and Natural Selection: https://amzn.to/40F5gfa
---
FAIR USE FOR EDUCATIONAL PURPOSES
---
Mirrored From:
https://www.youtube.com/@lexfridman