QUOTE(DavidReinold @ Jul 30 2012, 04:36 PM)
As it has been noted many times in this thread, humans are just organic computers.
Not quite. The architecture is completely different. Functionally, they're similar only in that both can gather information and analyze it to solve problems; the way they do it is completely different.
A neuron not only acts as a transistor, but also as a (slow) CPU and a theoretically limitless hard drive. Billions of neurons work together to solve complex problems without direct programming, which is part of what intuition is. The brain is not, by nature, a mathematical processor; it operates on abstract concepts, and it isn't comparable to any computer technology we have today. Not only that, but brains can essentially reprogram themselves at any time and are not bound to act on any particular algorithm.
Computers are made up of wires and transistors, storing data on metal platters with either electricity or magnetism. Given a correct algorithm, computers can solve a specialized set of complex problems, though not always in a timely manner (brute-force algorithms, anyone?). Computers are, by design, mathematical processors that operate on concrete bits. Computers will only reprogram themselves when told how to do so and are always bound to do what their algorithms tell them.
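To make the brute-force aside concrete, here's a toy sketch (my own hypothetical example, in Python) of a brute-force subset-sum search. It tries every subset until one works, so in the worst case the running time doubles with every number you add to the list:

```python
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Try every subset until one sums to target: O(2^n) in the worst case."""
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return combo  # found one; nothing "clever" happened here
    return None  # exhausted all 2^n subsets without a match

print(subset_sum_brute_force([3, 9, 8, 4, 5, 7], 15))  # → (8, 7)
```

Correct, but exhaustive: the machine grinds through every candidate, where a person might just "see" that 8 + 7 = 15.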
QUOTE(NoeL @ Jul 30 2012, 11:08 PM)
You can't make blanket statements about what robots can and can't do without actually having those robots. Why do you assume these kinds of behaviours would be limited to organic circuitry?
I'm basing it off the current technology and its momentum. i.e. I'm extrapolating. Maybe this will all prove possible with quantum computers.
QUOTE(NoeL)
Again, it comes off as somewhat arrogant to say "robots don't wonder".
Well, they don't. Quantum computers, maybe, but definitely not silicon computers. As I said earlier, computers are bound to do exactly what their algorithms dictate. Sure, curiosity can be simulated by having them constantly ask questions, but is that true curiosity? I would say it isn't. They're not deeply interested themselves; they're just told to ask lots of questions.
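The "simulated curiosity" I mean looks something like this (a deliberately crude sketch in Python; the templates and topics are made up). The program emits endless questions, but nothing in it corresponds to wanting the answers:

```python
def simulated_curiosity(topics):
    """Fill question templates mechanically; nothing here 'wonders'."""
    templates = ["What is {}?", "Why does {} happen?", "How does {} work?"]
    questions = []
    for topic in topics:
        for template in templates:
            questions.append(template.format(topic))
    return questions

for question in simulated_curiosity(["gravity", "memory"]):
    print(question)  # prints six questions, none of which the program cares about
```

Swap in a fancier question generator and the output gets more convincing, but the structure is the same: the questions come from the programmer's template, not from the machine's interest.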
QUOTE(NoeL)
How would they be dependent on us in any way?
You just don't get off-the-wall question-asking from robots. Intuitive leaps are the biggest reason science has advanced so far in the last century. Maybe robots would be able to progress without human intuition, but they would certainly be augmented by it, and together the two would progress faster than either population could alone. It'd be an alliance for knowledge and intelligence.
See, the thing is: computers have to know the solution to a problem before they can solve it. Yes, they can solve bits and pieces of a new problem, but it takes intuition to guess what the steps in between might be. Yes, a solution can be brute-forced, but that is often very inefficient and for some problems outright impossible. And then there's the matter of identifying the problem in the first place, which is easier with intuition. Intuition also helps predict the difference between theory and practice (although that might not be much of an issue for robots).
Unless we find a way to reproduce intuition (and not just simulate it), robots will need to join with humanity to further the advancement of both.
QUOTE(Chris Miller)
(tl;dr: No, I wouldn't give them rights, and I would severely restrict their higher cognitive functions)
If you're saying, "don't let AI reach the singularity or become self-aware," that would be a wonderful way to sidestep the issue of robot rights altogether. It would also avoid the (improbable) robot uprising. (Note: given the right circumstances, a robot uprising is inevitable, but those circumstances are extremely unlikely to arise before the singularity is reached and equally unlikely to be produced by robots.)
If that isn't what you're getting at, wouldn't that make you a bit of a bigot and/or a despot?
Edited by Beefster, 31 July 2012 - 12:30 PM.