Robert Miles Archive Channel

    Superintelligence Mod for Civilization V (1:04:39)
    Why Does AI Lie, and What Can We Do About It? (9:23)
    We Were Right! Real Inner Misalignment (11:46)
    Intro to AI Safety, Remastered (18:04)
    Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... (10:19)
    The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment (23:23)
    Quantilizers: AI That Doesn't Try Too Hard (9:53)
    Sharing the Benefits of AI: The Windfall Clause (11:43)
    10 Reasons to Ignore AI Safety (16:28)
    9 Examples of Specification Gaming (9:39)
    Training AI Without Writing A Reward Function, with Reward Modelling (17:51)
    AI That Doesn't Try Too Hard - Maximizers and Satisficers (10:21)
    Is AI Safety a Pascal's Mugging? (13:40)
    A Response to Steven Pinker on AI (15:37)
    How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification (11:32)
    Why Not Just: Think of AGI Like a Corporation? (15:26)
    Safe Exploration: Concrete Problems in AI Safety Part 6 (13:45)
    Friend or Foe? AI Safety Gridworlds extra bit (3:46)
    AI Safety Gridworlds (7:22)
    Experts' Predictions about the Future of AI (6:46)
    AI learns to Create ̵Y̵o̵u̵T̵u̵b̵e̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1 (5:19)
    Why Would AI Want to do Bad Things? Instrumental Convergence (10:35)
    Intelligence and Stupidity: The Orthogonality Thesis (13:02)
    Scalable Supervision: Concrete Problems in AI Safety Part 5 (5:02)
    AI Safety at EAGlobal2017 Conference (5:29)