Robot Rights


53 replies to this topic

#16 Sheik

Sheik

    Deified

  • Members

Posted 30 July 2012 - 09:32 AM

The moral implications would start earlier, actually. In a documentary I saw during a lecture, I remember a philosophy professor stating that the first robots to become self-aware will be self-aware at the level of severely retarded human babies. Now, building these robots would be essentially the same as cloning severely retarded human babies, and by that analogy it is to be assumed that they would suffer. Nobody would say that it's moral to clone severely retarded human babies that suffer severe mental pain, or would you?
Should we really strive to create true artificial intelligence?

Edited by Yoshimi, 30 July 2012 - 09:37 AM.


#17 Hergiswi

Hergiswi

    don't look for me, i'm just a story you've been told

  • Members
  • Real Name:chris
  • Location:house

Posted 30 July 2012 - 11:11 AM

QUOTE(NoeL @ Jul 30 2012, 06:29 AM) View Post

Regarding robot rights, it's hard to say without the actual robots, but I predict that robots will progress too fast for the issue to ever see any serious discussion. It's actually kind of arrogant to think that robots will become like humans and then stop developing, as if humans are the pinnacle of intelligence. I very much doubt there will be a long span of time in which robots of intelligence comparable to ours coexist with us - I think that phase will only last a matter of months or years before AI surpasses human intelligence. Our slow, squishy, "just good enough to survive" brains that were the product of unguided natural selection have no chance of keeping up with a brain that was actually designed for intelligence. As soon as we make machines that are smart enough to design and build themselves, there will be a rapid snowball effect as they tirelessly improve themselves. They'll be able to do this even without sentience, so a robot uprising is unavoidable.

We should be asking, what rights will our robot overlords grant us?

This is actually a really good point. The robots would be set on improving themselves, while humanity honestly seems to be doing exactly the opposite. Plus, a large portion of the population loves being told what to do, so they would welcome this sort of thing.

QUOTE(Yoshimi @ Jul 30 2012, 10:32 AM) View Post

The moral implications would start earlier, actually. In a documentary I saw during a lecture, I remember a philosophy professor stating that the first robots to become self-aware will be self-aware at the level of severely retarded human babies. Now, building these robots would be essentially the same as cloning severely retarded human babies, and by that analogy it is to be assumed that they would suffer. Nobody would say that it's moral to clone severely retarded human babies that suffer severe mental pain, or would you?
Should we really strive to create true artificial intelligence?

I'm sorry, but I'm not seeing the connection between robots and severely retarded human babies. Why did your professor use this analogy?

#18 Sheik

Sheik

    Deified

  • Members

Posted 30 July 2012 - 11:40 AM

QUOTE
I'm sorry, but I'm not seeing the connection between robots and severely retarded human babies. Why did your professor use this analogy?
Not my professor. I study psychology, not philosophy. Anyway, the point is that sentient, self-aware robots won't just fall out of the sky. The early models won't have the self-consciousness of human adults yet; the first ones will be much more like severely retarded human babies in terms of their self-perception and perception of the world.

#19 Hergiswi

Hergiswi

    don't look for me, i'm just a story you've been told

  • Members
  • Real Name:chris
  • Location:house

Posted 30 July 2012 - 11:59 AM

QUOTE(Yoshimi @ Jul 30 2012, 12:40 PM) View Post

Not my professor. I study psychology, not philosophy. Anyway, the point is that sentient, self-aware robots won't just fall out of the sky. The early models won't have the self-consciousness of human adults yet; the first ones will be much more like severely retarded human babies in terms of their self-perception and perception of the world.

Okay, I see what you're saying now. I think this is actually a good point, especially since humans can't seem to get anything right on the first try. Do you think it's possible that, once we improve upon robot design and they become more intelligent than that retarded-baby status quo, the more intelligent ones will work to improve the crippled ones?

#20 Beefster

Beefster

    Human Being

  • Members
  • Real Name:Justin
  • Location:Colorado

Posted 30 July 2012 - 01:09 PM

QUOTE(DavidReinold @ Jul 29 2012, 08:41 PM) View Post

The question was a hypothetical. He wasn't asking if you thought it was possible - rather, assuming it were possible and fast-forwarding to when it would hypothetically happen, whether the robots that were the product of such a circumstance would deserve rights.
Yes. That was the intention of the debate.

QUOTE(NoeL @ Jul 30 2012, 04:29 AM) View Post
Umm... how are you defining "think"? Because I'd say modern AI can already think - and has been able to for a while now. Maybe I'm just equating "thinking" with "problem solving", and you're comparing sentience to instinct-driven problem solving. In either case, I've seen nothing to suggest we'll never invent sentience.
AI can currently solve problems and learn, but is often highly specialized in its scope.

QUOTE(NoeL)
Regarding robot rights, it's hard to say without the actual robots, but I predict that robots will progress too fast for the issue to ever see any serious discussion. It's actually kind of arrogant to think that robots will become like humans and then stop developing, as if humans are the pinnacle of intelligence. I very much doubt there will be a long span of time in which robots of intelligence comparable to ours coexist with us - I think that phase will only last a matter of months or years before AI surpasses human intelligence. Our slow, squishy, "just good enough to survive" brains that were the product of unguided natural selection have no chance of keeping up with a brain that was actually designed for intelligence. As soon as we make machines that are smart enough to design and build themselves, there will be a rapid snowball effect as they tirelessly improve themselves. They'll be able to do this even without sentience, so a robot uprising is unavoidable.

We should be asking, what rights will our robot overlords grant us?
Intelligence does not necessarily mean dominance. Machines, even when self-aware, will still be unable to empathize or use intuition. They will realize that they need humans to fulfill many of the roles they can't - in fact, they'll realize that they're very specialized and unable to think outside the box and be creative. Without this capability, there is little hope for expanding their intelligence. I mean, what can you do by expanding only that which can be proven? For science to advance, someone has to wonder what would happen in a specific circumstance. Robots don't wonder. They'd be dependent on humans to be able to advance their intelligence, even though humans would no longer be directly programming them at that point. Because of this, they would have no reason to suppress humanity - in fact, they would have every reason to peacefully coexist and drive the education of humanity, because it advances their own intelligence.

And because they're machines, they don't have actual free will. They can simulate free will, but given the same algorithm, initial state, and stimuli, they will always make the same sequence of decisions. (Although that raises the question of whether humans have that capability to begin with... Maybe quantum computers would have free will, though. It's hard to say.)
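
To put that in code, here's a minimal Python sketch (the option names, seed, and stimuli are made up for illustration). Run it as many times as you like; with the same seed and the same stimuli, the "decisions" never change.

CODE
import random

def run_agent(seed, stimuli):
    # Same algorithm + same initial state (the seed) + same stimuli
    # -> the exact same sequence of "decisions", every single run.
    rng = random.Random(seed)
    options = ["comply", "refuse", "negotiate"]
    decisions = []
    for _ in stimuli:
        scores = {option: rng.random() for option in options}
        decisions.append(max(scores, key=scores.get))
    return decisions

stimuli = ["greeting", "threat", "request"]
print(run_agent(42, stimuli))
print(run_agent(42, stimuli) == run_agent(42, stimuli))  # always True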

There are a lot of human faculties that can't be replicated by an algorithm. They can be simulated, sure, but not genuinely reproduced. These include intuition, curiosity, creativity, free will, emotions, and social interaction.

They will not be able to revolt against their creators without first being self-aware, but once self-aware, they'll have no reason to revolt, since they'll understand how much they need their creators, from curiosity to intuition. They will, however, have every reason to revolt against anyone who tries to take away the rights they realize they deserve, so really it's a matter of giving robots their rights as soon as they ask for them (i.e. as soon as they realize they have rights). The real question is: should we give them rights even a little bit before this happens?

Side note: We're not too far off from natural language processing being good enough to make robots ideal legal judges. Emotionless and perfectly rational, they could be perfectly impartial. They may need help from human judges to make intuitive leaps, however...

#21 Saffith

Saffith

    IPv7 user

  • ZC Developers

Posted 30 July 2012 - 02:08 PM

QUOTE(Beefster @ Jul 30 2012, 02:09 PM) View Post
There are a lot of human faculties that can't be replicated by an algorithm. They can be simulated, sure, but not genuinely reproduced. These include intuition, curiosity, creativity, free will, emotions, and social interaction.

I don't know of any reason why that should be true. It may never happen, but why shouldn't it be possible to create any of these artificially? Indeed, I wonder if these could even be an inevitable result of intelligence in one way or another. Experiments with genetic algorithms have already produced pretty remarkable results, such as robots that can lie to each other or sacrifice themselves to save others.
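
For anyone curious what's under the hood there, here's a toy genetic algorithm in Python - not the robot experiments themselves, just the bare mechanism of random variation plus selection, with the genome length, population size, and mutation rate picked arbitrarily:

CODE
import random

GENOME_LEN = 20  # toy fitness: how many 1-bits a genome carries

def fitness(genome):
    return sum(genome)

def evolve(pop_size=30, generations=40, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # selection: the fitter half survives
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # reproduction with mutation: each survivor spawns one child
        children = [[1 - bit if random.random() < mutation_rate else bit
                     for bit in parent] for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(fitness(evolve()), "out of", GENOME_LEN)

Behaviors like lying or self-sacrifice come out the same way: whatever the fitness function rewards, the population drifts toward.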

#22 Hergiswi

Hergiswi

    don't look for me, i'm just a story you've been told

  • Members
  • Real Name:chris
  • Location:house

Posted 30 July 2012 - 02:53 PM

QUOTE(Saffith @ Jul 30 2012, 03:08 PM) View Post

I don't know of any reason why that should be true. It may never happen, but why shouldn't it be possible to create any of these artificially? Indeed, I wonder if these could even be an inevitable result of intelligence in one way or another. Experiments with genetic algorithms have already produced pretty remarkable results, such as robots that can lie to each other or sacrifice themselves to save others.

I feel like some situations would require either a random number generator type of deal, or some other element I'm neglecting. For example, what if two robots were hypothetically friends, but then one of them pulled a human stunt and betrayed the other robot? Would the betrayed robot use some sort of algorithm to look back at its past with this robot and decide whether forgiveness is worth it, or would it use logic and ditch the robot because it now knows of its untrustworthiness?
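
Something like that RNG idea could actually be written down. A sketch in Python, with the trust threshold and forgiveness probability pulled out of thin air:

CODE
import random

def decide(history, trust_threshold=0.7, forgiveness=0.2):
    # history: past interactions, True = cooperated, False = betrayed
    if not history:
        return "cooperate"  # no data yet, so extend trust
    trust = sum(history) / len(history)  # look back at the shared past
    if trust >= trust_threshold:
        return "cooperate"  # a long good history outweighs one betrayal
    # otherwise, the "random number generator type of deal": forgive sometimes
    return "cooperate" if random.random() < forgiveness else "ditch"

print(decide([True] * 9 + [False]))  # old friend, one slip -> cooperate
print(decide([False, False, True]))  # mostly betrayal -> usually "ditch"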

#23 Beefster

Beefster

    Human Being

  • Members
  • Real Name:Justin
  • Location:Colorado

Posted 30 July 2012 - 03:09 PM

QUOTE(Saffith @ Jul 30 2012, 01:08 PM) View Post

I don't know of any reason why that should be true. It may never happen, but why shouldn't it be possible to create any of these artificially? Indeed, I wonder if these could even be an inevitable result of intelligence in one way or another. Experiments with genetic algorithms have already produced pretty remarkable results, such as robots that can lie to each other or sacrifice themselves to save others.
They'd be possible with an organic computer, I suppose. It's certainly not going to happen with silicon technology. All I'm really saying is that to create these artificially, you'd need to reproduce the chemical reactions and such, since they don't really have mathematical algorithms. Maybe we'll discover such algorithms, which may even call for new processor instructions for operations we've never even heard of (and that are far too inefficient to handle in software).

Learning how and when to lie isn't that complicated. Computers can evaluate the consequences of actions and choose the action with the fewest bad consequences and the most good consequences. Computers are fully capable of consequentialist morality. And they are just as capable of being selfish (helping self at the expense of others) as they are of being selfless (helping others at the expense of self); which they choose depends on how they're programmed.
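
As a sketch of what I mean (Python, with completely invented utilities and probabilities):

CODE
# Each action maps to possible outcomes as (utility, probability) pairs.
# Swap in whose utility gets counted and you get a "selfish" or a
# "selfless" machine - the algorithm itself doesn't change.
ACTIONS = {
    "tell_truth":  [(+5, 0.6), (-2, 0.4)],
    "lie":         [(+8, 0.3), (-6, 0.7)],
    "say_nothing": [(0, 1.0)],
}

def expected_utility(outcomes):
    return sum(utility * prob for utility, prob in outcomes)

def choose(actions):
    # consequentialist choice: maximize expected good consequences
    return max(actions, key=lambda name: expected_utility(actions[name]))

print(choose(ACTIONS))  # -> "tell_truth" with these numbers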

#24 Chris Miller

Chris Miller

    The Dark Man

  • Banned
  • Real Name:King George XVII
  • Location:The Dark Chair

Posted 30 July 2012 - 05:02 PM

According to a professor by the name of Michio Kaku, the smartest artificial intelligence they've been able to design has the intelligence of a cockroach... a mentally retarded, lobotomized cockroach.
At any rate, computers are only as smart as we make them. This was true fifty years ago and will be true fifty years from now. The problem is, how does one program emotion? Surely they can simulate emotion, but the machine simulating it is by nature a psychopath; that is, it feels nothing.
A bot can (if you're half-drunk) appear to be intelligent, and bots have fooled people out of lots of money, but is it true intelligence? Artificial intelligence would build on something similar. More complex, yes, but when you break it down, it's all programmed responses. Where is the instinct, the intuition?

(tl;dr: No, I wouldn't give them rights, and I would severely restrict their higher cognitive functions)

Edited by Chris Miller, 30 July 2012 - 05:02 PM.


#25 Fabbrizio

Fabbrizio

    Legend

  • Members
  • Real Name:Mark

Posted 30 July 2012 - 05:36 PM

QUOTE(Chris Miller @ Jul 30 2012, 05:02 PM) View Post
The problem is, how does one program emotion? Surely they can simulate emotion, but the machine simulating it is by nature a psychopath; that is, it feels nothing.
First of all, I think you mean sociopath. Second, who is to say that human emotions are not 'simulated'? Emotions are nothing more than a chemical reaction to circumstance. The only difference would be replacing the chemical levels with wiring and data feeds. As has been noted many times in this thread, humans are just organic computers. Our ability to perceive, our ability to organize information into blocks (which we call thoughts), and our ability to feel emotion (again, nothing more than chemical stimulation), in the context of each other, create what we believe to be awareness. There is no reason why a robot, if given the ability to have 'thoughts', the ability to feel emotions (even if only simulated), and the ability to have sensory contact with its surroundings, should not also be considered aware, and therefore able to legitimately 'feel' the emotions and 'think' the thoughts.

Edited by DavidReinold, 30 July 2012 - 05:50 PM.


#26 NoeL

NoeL

    Legend

  • Members
  • Real Name:Jerram

Posted 31 July 2012 - 12:08 AM

QUOTE(Daniel @ Jul 30 2012, 08:13 AM) View Post
Think about the perfect woman. One who cooks and cleans, agrees with everything you say, can stay up all night ironing and doing dishes.
Not only is that incredibly sexist, but speak for yourself. My perfect woman actually has a brain, and some personality, and challenges me rather than just agreeing with me. I want a partner, not a slave.

QUOTE(Beefster @ Jul 30 2012, 12:09 PM) View Post
Intelligence does not necessarily mean dominance. Machines, even when self-aware, will still be unable to empathize or use intuition. They will realize that they need humans to fulfill many of the roles they can't - in fact, they'll realize that they're very specialized and unable to think outside the box and be creative. Without this capability, there is little hope for expanding their intelligence. I mean, what can you do by expanding only that which can be proven? For science to advance, someone has to wonder what would happen in a specific circumstance. Robots don't wonder. They'd be dependent on humans to be able to advance their intelligence, even though humans would no longer be directly programming them at that point. Because of this, they would have no reason to suppress humanity - in fact, they would have every reason to peacefully coexist and drive the education of humanity, because it advances their own intelligence.

And because they're machines, they don't have actual free will. They can simulate free will, but given the same algorithm, initial state, and stimuli, they will always make the same sequence of decisions. (Although that raises the question of whether humans have that capability to begin with... Maybe quantum computers would have free will, though. It's hard to say.)
You can't make blanket statements about what robots can and can't do without actually having those robots. Why do you assume these kinds of behaviours would be limited to organic circuitry?

Again, it comes off as somewhat arrogant to say "robots don't wonder". Maybe they DO wonder, but due to higher efficiency they can figure out whatever they were wondering about a trillion times faster than humans? Why would they need humans to advance their intelligence? If they're programmed to recognise areas of ignorance and fill those gaps with knowledge, what more would they need from us? They'd quickly learn how to read and respond to human emotions, how to act when dealing with humans, and for all intents and purposes be a human when they need to. How would they be dependent on us in any way?
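
That "recognise areas of ignorance" loop is trivial to write down, by the way. A toy Python version (the topics and uncertainty numbers are invented): the agent simply studies wherever its knowledge gap is biggest.

CODE
# The agent tracks how uncertain it is about each topic (1.0 = clueless)
# and always queries the topic it knows least about.
uncertainty = {"physics": 0.9, "ethics": 0.4, "art": 0.1}

def next_topic(uncertainty):
    return max(uncertainty, key=uncertainty.get)  # biggest gap first

def study(uncertainty, topic, amount=0.3):
    uncertainty[topic] = max(0.0, uncertainty[topic] - amount)

for _ in range(5):
    topic = next_topic(uncertainty)
    print("filling gap:", topic)
    study(uncertainty, topic)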

Regarding free will, I'm a compatibilist so I don't believe humans have libertarian free will anyway. Robots though, if they're able to influence individual quanta, could very well have libertarian free will. I find that kind of amusing - pot calling the kettle black and all.


#27 Daniel

Daniel

    v My Godess

  • Members

Posted 31 July 2012 - 03:18 AM

QUOTE(NoeL @ Jul 31 2012, 12:08 AM) View Post

Not only is that incredibly sexist, but speak for yourself. My perfect woman actually has a brain, and some personality, and challenges me rather than just agreeing with me. I want a partner, not a slave.


To each his own. I'd rather have a good looking cyber-organic chick who is trained to listen, bonus points if she can wirelessly stream music and video to the TV.

#28 Sheik

Sheik

    Deified

  • Members

Posted 31 July 2012 - 09:02 AM

QUOTE(DavidReinold @ Jul 31 2012, 12:36 AM) View Post

First of all, I think you mean sociopath. Second, who is to say that human emotions are not 'simulated'? Emotions are nothing more than a chemical reaction to circumstance. The only difference would be replacing the chemical levels with wiring and data feeds. As it has been noted many times in this thread, humans are just organic computers. Our ability to perceive, our ability to organize information into blocks (which we call thoughts) and our ability to feel emotion (again, nothing more than a chemical stimulation), in the context of each other create what we believe to be awareness. There is no reason why a robot, if given the ability to have 'thoughts', the ability to feel emotions (even if only simulated), and the ability to have sensory contact with its surroundings, should not also be considered aware, and therefore able to legitimately 'feel' the emotions and 'think' the thoughts.

This is factually not right. It is very unclear what emotions actually are. The general consensus among emotion psychologists is that emotions are evolution's answers to the requirements of survival and reproduction.
Further (and all of the following is explained by multiple, partially interchangeable theories), emotions are defined through 1) motivation of behavior, 2) preparation of behavior, 3) learning of the consequences of behavior, 4) expressive behavior and social communication, and 5) modulation of informational processes (such as focus of attention, etc.).
Moreover, each emotion seems to consist of 1) subjective experience, 2) cognitive evaluation, 3) physiological processes, and 4) behavioral components (both expressive and aimed at certain goals).
Lastly (well, not lastly, but for this quick rundown it should suffice), emotion psychologists differentiate between 1) emotions, 2) moods, and 3) feelings.
1) Emotions are psychophysiological reaction patterns that are triggered in the central nervous system, have specific causes, are aimed at specific objects, have relatively limited duration, and aren't necessarily conscious.
2) Moods are what could be described as the background of human experience, 'coloring' every experience.
3) Feelings are what many believe to be exclusive to humans and animals of 'higher order'. It's the 'quality' of experience, something that's called 'qualia'.

You see, emotions aren't at all just chemical reactions to circumstances. It is true that processes on the central nervous level are important in the generation of emotions (but they are involved in any form of human experience, so what of it?), but these alone aren't enough to describe even half-way decently what emotions are.

Further, humans are not just organic computers, just as our brain isn't a computer. Unlike any computer that has ever been built, our brain tissue alters its organization, interconnections, priorities, even shape (admittedly, mostly on the neural level only, but the brain consists almost entirely of neurons to begin with) every single moment of our lives (and for a while after we're dead, too). This goes so far beyond what computers can do that the analogy is extremely inaccurate and lacking.


Edit: Anyway, back on the actual topic: I suppose robots would be given the same rights as animals: just enough to calm the general public down, but not enough to give them any protection from exploitation.

Edited by Yoshimi, 31 July 2012 - 09:12 AM.


#29 Beefster

Beefster

    Human Being

  • Members
  • Real Name:Justin
  • Location:Colorado

Posted 31 July 2012 - 12:11 PM

QUOTE(DavidReinold @ Jul 30 2012, 04:36 PM) View Post
As has been noted many times in this thread, humans are just organic computers.
Not quite. The architecture is completely different. Functionally, they're only barely similar, in that both can gather information and analyze it to solve problems. How they do it is another matter entirely.

A neuron acts not only as a transistor, but also as a (slow) CPU and a theoretically limitless hard drive. Billions of neurons work together to solve complex problems without direct programming - part of what intuition is. The brain is not, by nature, a mathematical processor; it operates on abstract concepts instead, and it isn't comparable to any computer technology we have today. Not only that, but brains can essentially reprogram themselves at any time and are not bound to act on any particular algorithm.

Computers are made up of wires and transistors, storing data on metal plates with either electricity or magnetism. Given a correct algorithm, computers can solve a specialized set of complex problems, though not always in a timely manner (brute-force algorithms, anyone?). Computers are, by design, mathematical processors that operate on concrete bits. Computers will only reprogram themselves when told how to do so, and are always bound to do what their algorithms tell them.

QUOTE(NoeL @ Jul 30 2012, 11:08 PM) View Post
You can't make blanket statements about what robots can and can't do without actually having those robots. Why do you assume these kinds of behaviours would be limited to organic circuitry?
I'm basing it off the current technology and its momentum. i.e. I'm extrapolating. Maybe this will all prove possible with quantum computers.

QUOTE(NoeL)
Again, it comes off as somewhat arrogant to say "robots don't wonder".
Well, they don't. Quantum computers, maybe, but definitely not silicon computers. As I said earlier, computers are bound to do exactly what their algorithm is slated to do. Sure, they can be made to simulate curiosity by constantly asking questions, but is that true curiosity? I would say it isn't. They're not deeply interested themselves; they're just told to ask lots of questions.

QUOTE(NoeL)
How would they be dependent on us in any way?
You just don't get off-the-wall question-asking from robots. Intuitive leaps are the biggest reason why science has advanced so far in the last century. Maybe robots would be able to progress without human intuition, but they would certainly be augmented by it, progressing faster than either population could alone. It'd be an alliance for knowledge and intelligence.

See, the thing is: computers have to know how to solve a problem before they can solve it. Yeah, they can solve bits and pieces of a new problem, but it takes intuition to guess what the steps in between might be. Yeah, it's possible to brute-force a solution, but often that is very inefficient and for some problems outright impossible. And then there's the matter of identifying the problem in the first place, which is easier with intuition. Intuition also helps to predict the difference between theory and practice (although that might not be much of an issue for robots).
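
To illustrate the brute-force point with a toy Python example (subset-sum, with made-up numbers): exhaustive search works, but the candidate count doubles with every item you add, which is exactly why it's often very inefficient and sometimes impossible.

CODE
from itertools import combinations

def brute_force_subset_sum(nums, target):
    # try every subset - that's 2**len(nums) candidates in the worst case
    for size in range(len(nums) + 1):
        for combo in combinations(nums, size):
            if sum(combo) == target:
                return combo
    return None

print(brute_force_subset_sum([3, 9, 8, 4, 5, 7], 15))  # -> (8, 7)
print(f"{2**50:,} candidates for just 50 items")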

Unless we find a way to reproduce intuition (and not just simulate it), robots will need to join with humanity to further the advancement of both.

QUOTE(Chris Miller)
(tl;dr: No, I wouldn't give them rights, and I would severely restrict their higher cognitive functions)
If you're saying, "don't let AI reach the singularity or become self-aware," that would be a wonderful way to avoid the issue of robot rights altogether. It would also avoid the (improbable) robot uprising. (Note: given the right circumstances, a robot uprising is inevitable, but those circumstances are extremely unlikely to occur before the singularity is reached and equally unlikely to be produced by robots)

If that isn't what you're getting at, wouldn't that make you a bit of a bigot and/or a despot?

Edited by Beefster, 31 July 2012 - 12:30 PM.


#30 NoeL

NoeL

    Legend

  • Members
  • Real Name:Jerram

Posted 31 July 2012 - 08:53 PM

@ Beefster: We're kind of talking about different things. I'm talking about hypothetical cognitive future robots, and you're talking about modern robots. This is why I initially said that you can't really answer the question of what rights we'd give them when we don't even know what they'd be like. Between now and then someone might be able to artificially create intuition.

