So I just finished P.W. Singer's new book Wired for War, about robotics, AI, and how technology will affect warfare in the 21st century. In one section he interviews Ray Kurzweil, with whose theories I've become acquainted over the last couple of years. If you haven't heard of him before, you can read up on him here. He basically argues that exponential trends will allow us to reach the Singularity, when machine intelligence will become super powerful in its computing and thinking capacity. I've become slightly fascinated with this concept, and Kurzweil places an estimate of 2045 as the date it will become reality, give or take a few years. Belief in the Singularity strikes me as almost a sort of scientific religious belief, except in this case Kurzweil is extrapolating past trends into assumptions about future developments, which he's had tremendous success with in the past. What do you guys think? I'm sort of intrigued by his ideas, because if true they would mean we could become essentially immortal and merge with machines by the middle of the 21st century (which would be super cool).
This is an interesting thread to find in here; I knew exactly what it was going to be about as soon as I saw it. To clarify for those who aren't familiar with the concept: the Singularity is the point at which man first builds an AI smarter and more capable than man himself. If you think about it for a second, the real implication is that if we are able to design a machine smarter than us, then presumably that machine will be able to design a machine smarter than it (since it will be able to detect its own design flaws that we cannot detect), and so on and so forth ad infinitum. If you haven't considered this before, let that roll around in your skull a bit and percolate.

It almost is a religious belief, in a sense. Think about what the effects of such a thing might be: an AI smart enough to unravel all the mysteries of the universe that we aren't smart enough to figure out, something with the ability to manipulate anything within the laws of physics - with almost all of its capabilities beyond human comprehension. The AI would virtually be a god. What would such a world look like? As they say, any sufficiently advanced technology is indistinguishable from magic. The Singularity becomes a point past which we cannot predict the future with any reliability at all.

And it seems almost inevitable. At some point - who knows when - we will design an AI smarter than us. I'd bet on that just as surely as I'd bet on the sun rising tomorrow morning. What happens after that is anybody's guess. I've seen some fantastical and - perhaps terrifyingly - plausible scenarios put forth by some creative sci-fi authors. There's a nice discussion of the Singularity with Kurzweil and Cory Doctorow here.
I was watching a show on Nat Geo the other day called Known Universe, and one physicist on it said that we may be as far away from interstellar travel as the Wright brothers were from Da Vinci's initial designs for flight - something on the order of hundreds of years. But then, using Kurzweil's logic, I looked at it a different way. It took about 400 years for us to go from Da Vinci's ideas for flight on paper to actual flying machines, but then only ~60 years to go from flying machines to flying people to the moon. The rate of technological development is certainly increasing; it's just hard to picture what that will look like in 40 years' time.
What if the computer becomes super intelligent, figures out the meaning of life (there is none), and then proceeds to commit suicide in despair?
The concept is very compelling, and it strikes a chord with me because of BSG. Cylons, anyone? Especially the new series, with resurrection.
These things are tricky, and I wouldn't recommend treating the rate of human invention as an entity in and of itself that can be either relied on in general or predicted for a particular bit of knowledge, because what we know is based on reality, and reality is its own thing that does not bend to our wishes. For example, there were bursts of activity in physics when we first discovered the easy stuff, then the hard stuff, and now we are looking at the really hard stuff. And guess what - it's really, really hard. We don't know when, or even if, some of these things can be figured out.

And that goes for computers as well. I've got books on AI programming going back to the mid-'70s, and the predictions in all of them turned out to be jokes. We've already got computers creating original mathematical proofs and electronic circuits, but thanks to evolution-inspired genetic algorithms, we know it doesn't take intelligence to do that. And computers already make computers: no one knows everything about a Pentium - it's just too complicated - so we have computers that design them. They are already way smarter than us in lots of ways - exactly those ways in which the human brain is deficient (memory, computation, consistency). That's why we build them.

I don't place a lot of stock in this guy. He gets press for a lot of obvious stuff and one good prediction he made for the wrong reason, and now he's taken a 50-year-old Isaac Asimov science fiction short story as a blueprint for the future.
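The genetic-algorithm point can be made concrete with a toy sketch. The snippet below (Python, purely illustrative - a trivial "count the 1-bits" problem, nothing like the real proof- or circuit-evolution systems mentioned above) evolves solutions by blind mutation and selection; at no point does anything in it "understand" the problem, it only scores candidates and keeps the winners:

```python
import random

random.seed(42)

TARGET_LEN = 20      # bits per candidate solution
POP_SIZE = 30
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(bits):
    # OneMax: count of 1-bits; scoring, not understanding
    return sum(bits)

def crossover(a, b):
    # single-point crossover of two parents
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(bits):
    # flip each bit with small probability
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            return gen, pop[0]
        # keep the top half, breed the rest from it
        survivors = pop[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return GENERATIONS, pop[0]

gen, best = evolve()
print(gen, fitness(best))
```

Swap in a harder fitness function (a circuit simulator, a proof checker) and the same mindless loop produces results - which is exactly why evolved designs don't demonstrate machine intelligence.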
I think the real implication is that the machines smarter than us will decide we are in the way and start to exterminate us. Man, that sounds like a great movie plot!
Kurzweil seems to think that computers and humans will be so integrated by 2045 that it'll be hard to tell us apart.
First - Second - an interesting discussion about the singularity here. Based on some other stuff I've seen, I'd guess that Kurzweil is having problems grappling with his own mortality. But I do agree with the discussion that there is a heck of a lot we still don't know about the human mind that would need to be understood before machine consciousness could occur. It's an interesting topic.
If anyone digs science-related/nerdy podcasts, SETI does a pretty good one hosted by Seth Shostak. It's actually quite similar to Radiolab in its presentation and content. http://radio.seti.org/ This week they did a pretty good show on computers, the Singularity, and computer-body integration.
PZ Myers on why this stuff can't work: http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does_not_understa.php http://scienceblogs.com/pharyngula/2010/08/kurzweil_still_doesnt_understa.php
More precisely, he's saying Kurzweil is dramatically underestimating the scope of the difficulty in duplicating a human brain, not that it's theoretically impossible. Which is good. I'd be disappointed in any scientist who said it would be impossible to duplicate a physical item that exists in our reality.
I think that's the one big stumbling block for most people: "the brain is too complicated." Even if it turns out to be 200% or 1000% harder to decode than we expect, that's only 2 or 3 exponential doublings, which doesn't take much time at all once you've hit the knee of the curve.
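For what it's worth, the doublings arithmetic roughly checks out, if you read "N% harder" as a multiplier on the estimated difficulty (an assumption on my part - the post doesn't say exactly what it means). A quick back-of-the-envelope calculation:

```python
import math

def doublings_needed(times_harder):
    # how many capacity doublings cover a problem that is
    # `times_harder` than the current estimate:
    # 2**d = times_harder  =>  d = log2(times_harder)
    return math.log2(times_harder)

# "1000% harder" read as ~10x the estimated difficulty:
print(round(doublings_needed(10), 2))  # ≈ 3.32 doublings
# "200% harder" read as ~2x:
print(round(doublings_needed(2), 2))   # = 1.0 doubling
```

The catch, of course, is the next post's point: if the true factor is billions rather than 10x, log2 of billions is still ~30+ doublings, and the "not much time" claim depends entirely on the exponent staying small.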
But it isn't 1000% harder. It's billions upon billions of times harder, and that's just for the tiny percentage of the brain's workings we know about. The most important - and maybe the only major - discovery from the mapping of the human genome is how little it actually tells us about humans. We can't even get started on the brain until we figure out protein folding, and that has proven to be very difficult. When a technological idea remains "10 years in the future" after more than a half century of work, you have to start questioning the underlying assumptions of that idea. Computers are miraculous as they are, and they are better off doing, and getting better at, the things we suck at. We don't need more human intelligences - we already have so many that we throw them away all the time.
I think the people who were saying AI was "ten years in the future" back when it first became fashionable in the '70s were in way over their heads - they didn't even know what they didn't know, and were attempting a top-down approach with such a limited amount of computing power that we can look back now and see how laughable it was. We may still just be getting the basics down, but now we can see how it can be done. No one is saying it can't be done; they're just arguing over the level of complexity.
If we can't tell them apart, then maybe it's already happened. Can any of you prove whether I am human or computer?