The Artificial Intelligence Threat - Atlantis Rising Magazine

Visit Atlantis Rising Research Group at https://www.atlantisrising.com/

Humanity faces a crisis. At the rapid rate Artificial Intelligence (AI) is developing, we will soon reach Ray Kurzweil's "Singularity"—the point where AI equals human intelligence. AI will then proceed beyond, to "superintelligence." Superintelligent AIs will be smart enough to destroy their creators. Unless, as a new theme in the AI world argues, we learn to impart human values to these machines—a process (the "value-loading problem") now seen as extremely tricky—we are toast. Yup, like toast, we will be eaten for breakfast. Maybe we last until lunch.

The AI Onslaught

AI has made tremendous progress. AI programs are current champions in a multitude of games—chess, checkers, backgammon, Othello, even Jeopardy. Unfortunately, each of these programs relies on logic and algorithms specific to its game; none generalizes to true artificial general intelligence (AGI)—the kind of intelligence humans are considered to have. AI has seen three major approaches. The first is the standard "symbolic" programming with which most are familiar; it has led to theorem provers, language-understanding programs, and problem solvers. Another approach—evolutionary algorithms—has attempted to "evolve" programs. A third, neural networks, has numerous achievements, generally in applying statistical techniques to detecting patterns; it boasts nearly 150,000 academic papers. But all three lead to a large black hole from which no clear exit is seen. On the other side stand two high, admittedly untaken, hills—common-sense knowledge and true language comprehension.

The two hills are connected. Here is a commonsense problem: Given a 12-inch cubical box, a razor blade, rubber bands, staples, a pencil, string, toothpicks, and a piece of cheese—create a mousetrap. We might make a "crossbow," where the pencil is inserted into a hole in the box's side, pulled back outside the box by the rubber bands, notched in place by the toothpick, with a string tied to the toothpick and the cheese. Or we might create a "beheader," with the razor blade notched into the pencil, one pencil end anchored in the box corner, the pencil-axe raised up by the toothpick, with string attached again to toothpick and cheese. Both solutions require concrete knowledge and experience of physical dynamics, forces, and properties of materials. Both are "analogic" solutions. The linguistic statement, "The mousetrap is a crossbow," is an analogy. Douglas Hofstadter of Gödel, Escher, Bach fame, in his recent tome (Surfaces and Essences), while showing indisputably that analogy is the foundational operation of human thought, ridicules all current AI language-understanding programs (Siri included). The problem is that language is based on analogy, and Hofstadter clearly doubts that computers, as we know them, can handle analogy at all.

But the black hole and the untaken hills are not seen as stopping the superintelligence onslaught. Oxford professor Nick Bostrom (Superintelligence: Paths, Dangers, Strategies), having taken us to the edge of the black hole in his book, veers away, placing his bet on whole brain emulation (WBE). WBE relies on neuroscientists to map, via neural recordings, the functioning of the brain. But this is a shaky bet; WBE is an Everest. We face an 85-billion-neuron brain with roughly 1,000 types of neurons, and we understand the function of none of these types. We do not know basic facts, such as how memory (our experience) is stored. We are quite certain that the brain is not using what we currently understand as "computation," but we do not know what this other form is. We face data from neural recordings so massive it will be measured in zettabytes, yet any interpretation will be completely dependent on a guiding theory—note, a theory—and we have no such theory. It will be, as neuroscientists Marcus and Freeman note (The Future of the Brain), like trying to learn what a laptop is and does by taking electrical recordings of its components when we have no theory of, or knowledge of, the existence of something called "software."
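The zettabyte claim is easy to sanity-check with back-of-envelope arithmetic. The 85-billion-neuron figure comes from the article; the sampling rate and sample size below are illustrative assumptions (values typical of extracellular spike recording), not figures from the source:

```python
# Rough estimate of data volume for recording every neuron in a human brain.
# NEURONS is the article's figure; SAMPLE_RATE and BYTES_PER_SAMPLE are
# assumed, illustrative values.
NEURONS = 85e9            # neurons in the human brain (article's figure)
SAMPLE_RATE = 30_000      # samples per second per neuron (assumed)
BYTES_PER_SAMPLE = 2      # 16-bit samples (assumed)
SECONDS_PER_DAY = 86_400

bytes_per_day = NEURONS * SAMPLE_RATE * BYTES_PER_SAMPLE * SECONDS_PER_DAY
zettabytes_per_day = bytes_per_day / 1e21

print(f"{zettabytes_per_day:.2f} ZB per day")  # prints "0.44 ZB per day"
```

Even a single day of full-brain recording under these modest assumptions lands at nearly half a zettabyte, so the article's order of magnitude is plausible.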

This is to say, we really have no clue what type of "device" the brain actually is. Yet Bostrom, with the rest of AI, sails serenely past this problem, confident without a qualm that we will have recreated the brain as a device of silicon and wires. Since the emulation would run on electronics, its transmission speeds could be boosted, say, 10,000-fold, allowing the device to develop rapidly, advance to superintelligence, and thus deliver the terrible threat of that future breakfast. But, but… what if the brain is a "device" that cannot be replicated in silicon at all?
