Wednesday, September 12, 2007

Will Super Smart Artificial Intelligences Keep Humans Around As Pets?

Robots, my friends. Robots.

Robots are the least of our worries. Robots would be little more than the minions of the superintelligent. And somewhere in this mess we will find ourselves either as the superiors, or kept around by the superiors as curiosities. So fuck robots! Let's talk about ginormous brains and the shit that might bring them about.
What is the "Singularity?" As Eliezer Yudkowsky, cofounder of the Singularity Institute, explained, the idea was first propounded by mathematician and sci-fi writer Vernor Vinge in the 1970s. Vinge found it difficult to write about a future in which greater than human intelligence arose. Why? Because humanity would stand in relation to that intelligence as an ant does to us today. For Vinge it was impossible to imagine what kind of future such superintelligences might craft. Vinge analogized that future to black holes which are singularities surrounded by an event horizon past which outside observers simply cannot see. Once the Singularity occurs the future gets very, very weird. According to Yudkowsky, the Event Horizon school is just one of the three main schools of thought about the Singularity. The other two are the Accelerationist and the Intelligence Explosion schools.

The best-known Accelerationist is inventor Ray Kurzweil, whose recent book The Singularity Is Near: When Humans Transcend Biology (2005) lays out the case for how exponentially accelerating information technology will spark the Singularity before 2050. In Kurzweil's vision of the Singularity, AIs don't take over the world: humans will have so augmented themselves with computer intelligence that we will essentially transform ourselves into superintelligent AIs.

Yudkowsky identifies mathematician I.J. Good as the modern initiator of the idea of an Intelligence Explosion. To Good's way of thinking, technology arises from the application of intelligence. So what happens when intelligence applies technology to improving intelligence? That produces a positive feedback loop in which self-improving intelligence bootstraps its way to superintelligence. How intelligent? Yudkowsky offered a thought experiment comparing current brain processing speeds with computer processing speeds. Sped up a million-fold, Yudkowsky noted, "you could do one year's worth of thinking every 31 physical seconds." While the three schools of thought vary on the details, Yudkowsky concluded, "They don't imply each other or require each other, but they support each other."
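
That 31-second figure is just arithmetic, and it checks out: a year is about 31.6 million seconds, so a mind running a million times faster lives a subjective year in roughly 31 of our seconds. Here's a quick sketch, where the million-fold factor is Yudkowsky's hypothetical and everything else is plain arithmetic:

    # Sanity-check of the "one year of thinking every 31 seconds" claim.
    # The million-fold speed-up is Yudkowsky's hypothetical; the rest is arithmetic.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million seconds
    SPEEDUP = 1_000_000                     # hypothetical million-fold speed-up

    real_seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
    print(f"One subjective year every {real_seconds_per_subjective_year:.1f} real seconds")
    # prints: One subjective year every 31.6 real seconds
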
Good to know I'm not the only one who thinks about such things. But what would a world of such intelligence really look like? Wouldn't so much computational power just lead to an ego explosion of equal proportions? Couldn't that lead to some uber-badness?
J. Storrs Hall said that his friend, economist Robin Hanson, pointed out to him that we already live with superhuman psychopaths (modern corporations) and we're not all dead.
That's optimism for you. It's also good news for those who think that computer sentience may come about on its own, without the aid of computer programmers. Yes, a hyperintelligent subroutine birthed by the computational power of computers linked to each other through some kind of interdependent network; an internet, so to speak.

But do we really need to be thinking about all this right now?
When are AIs likely to arise? Ray Kurzweil, who joined the Summit by video link, predicted that computational power sufficient to simulate the human brain will be available on a $1,000 laptop within the next 15 years. Kurzweil believes that AIs will come into existence before 2030. Peter Voss was even more bullish, declaring, "In my opinion AIs will be developed almost certainly in less than 10 years and quite likely in less than five years."
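
For what it's worth, you can run the projection behind a claim like Kurzweil's yourself. A rough sketch follows; the brain estimate of 10^16 calculations per second, the 2007 starting point, and the roughly one-year doubling time are all assumptions drawn from Kurzweil's published arguments, not numbers quoted in this post:

    import math

    # Toy projection of Kurzweil-style exponential growth in price-performance.
    # All three constants below are assumptions, not figures from the article.
    BRAIN_CPS = 1e16              # assumed calcs/sec to simulate a human brain
    CPS_PER_1000_USD_2007 = 1e11  # rough guess at $1,000 of compute in 2007
    DOUBLING_TIME_YEARS = 1.0     # assumed price-performance doubling time

    doublings = math.log2(BRAIN_CPS / CPS_PER_1000_USD_2007)
    years = doublings * DOUBLING_TIME_YEARS
    print(f"~{doublings:.0f} doublings, ~{years:.0f} years from 2007")
    # prints: ~17 doublings, ~17 years from 2007

Nudge any of those assumptions and the answer moves by years in either direction, which is presumably why the predictions at the Summit ranged from five years to decades.
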
So prepare yourselves for the future. It's coming at us pretty fast... or else humans like me just love to dream big.

Just go read the full article.

4 comments:

X said...

Won't the code we pass on to this future generation of AI be much the same as the genes and memes we pass on to our biological children? If so, wouldn't that make future AIs every bit as much "us" as future humans?

Unknown said...

Just as we are every bit as monkey as our ancestors. So it depends on which state you live in.

I think there is a torch carrying mob at your door.

X said...

This reminds me that unlike the way that Wizard of Oz syncs up with Dark Side of the Moon, Metropolis does not sync up with Herb Alpert and the Tijuana Brass.

Unknown said...

We were far too sober for that experiment.

Just to be sure that it wasn't some perfectly matched polar opposite that should have resulted in the destruction of the known universe, I think I need to try to sync it with my copy of Snoopy versus the Red Baron.