Tuesday, October 1, 2013

Be Afraid: Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

Since the start of the Industrial Revolution, humans have been worrying that the fancy machines they've created will one day get tired of serving humans and rise up in revolt. In fact, from Isaac Asimov to Battlestar Galactica, it's been such a common trope in science fiction books, movies and TV that the concept doesn't even really seem scary anymore. It's just movie nonsense. In the meantime, we've gotten so cozy with our machines that we trust them implicitly. My iPhone would never hurt me, would it? My Xbox and I are friends, right? My Roomba is interested only in eliminating dirt from my floor, isn't it?

According to James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, we are safe from our machines for now, but soon we should be afraid, very afraid. A future generation of machines is coming, and they could be nearly omnipotent. They also may not have our best interests at heart.

Barrat is talking about the fast-evolving field of Artificial Intelligence, or AI, which seeks to build machines smart enough to match or exceed human intelligence. He fears that developments in AI will almost inevitably lead to a technology we cannot control. Barrat breaks AI into three types. The first is limited AI, the kind already widely in use, driving, for example, Google's search engine, Netflix's recommendation system and the iPhone's Siri virtual assistant. The second type, which he expects to emerge in mere decades, is AGI, or Artificial General Intelligence. AGI will have intelligence and abilities roughly equivalent to those of human beings. Finally, ASI, or Artificial Super Intelligence, which far exceeds human smarts, will emerge, and when that happens, we'll really be in trouble.


In Our Final Invention, Barrat reviews the current AI situation. It does indeed seem destined for disaster:
  1. AI development is currently done mostly in secret, by military organizations or corporations wishing to gain an edge.
  2. Few who are developing AI perceive the danger.
  3. Even today, AIs employ "black box" technologies like neural nets, which are extremely powerful but which we don't really understand.
  4. AGIs and ASIs will be able to reproduce, modify and improve themselves, which could lead to extremely fast and unstoppable development.
  5. Artificial intelligences may see no reason to preserve human life.
The book engages each of these points in detail and looks at possible solutions for them. Along the way, Barrat introduces us to the characters, minds and stories behind the current work in AI. He explores and clearly explains many of the concepts driving AI development, such as brain modeling, human augmentation and genetic coding. But in the end his assessment is grim: unless we put foolproof safeguards in place now, he is convinced that we are unlikely to survive our final invention.

I'm not as convinced. So much of what he describes is such wild speculation that I'm not quite ready to go all anti-robot just yet. I don't believe AI can develop in only one human-destroying direction. But I have been convinced that the possibility of AI getting out of control exists, and that by bringing it to our attention Barrat is doing humanity a real service.

Be patient with the book. Barrat is so worried about the direction AI is headed that the first few chapters rush through their arguments, but he eventually settles down and settles in, and the book as a whole is both informative and engaging.


The publisher provided me with a time-limited electronic copy for the purposes of this review.