Pushing the Boundaries of AI, Cheaply and Efficiently: Murat Onen Explains

Large-scale AI models that enable next-generation applications like natural language processing and autonomous systems require intensive training and enormous amounts of power, and the monetary and environmental costs of that training are steep.

This is where analog deep learning comes into play. The idea is to develop a new type of hardware that accelerates the training of neural networks, offering a cheaper, more efficient, and more sustainable path forward for AI applications.
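The speedup rests on a well-known property of analog crossbar arrays: a matrix-vector product, the dominant operation in neural network training, is computed in a single physical step by the array itself rather than by many digital multiply-accumulate operations. The NumPy sketch below only simulates that idea; the variable names and the conductance range are illustrative assumptions, not details from the episode.

```python
import numpy as np

# Hypothetical sketch of an analog crossbar computation: weights are stored
# as device conductances G, input activations are encoded as voltages V,
# and Ohm's law plus Kirchhoff's current law give the output currents
# I = G @ V in one physical step. Here we merely simulate that behavior.

rng = np.random.default_rng(0)

weights = rng.standard_normal((4, 8))   # a trained layer's weights (arbitrary values)
g_max = 1e-6                            # assumed maximum device conductance, in siemens

# Map weights linearly onto the available conductance range.
conductances = g_max * weights / np.abs(weights).max()

inputs = rng.standard_normal(8)         # activations, encoded as input voltages
currents = conductances @ inputs        # the physical array produces this "for free"

print(currents)
```

In digital hardware this product costs one multiply-accumulate per weight; in the analog array every weight contributes simultaneously, which is where the efficiency gain comes from.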

Murat Onen, a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at MIT, explains.

Tune in to explore:

Conventional vs. novel methods of training neural networks
The difference between GPUs and CPUs and why it matters
Analog vs. digital machine operations
How long it will take for small- and full-scale analog systems to outperform conventional AI hardware
Press play for the full conversation.
