
Cyborg/Robot Rights

MrVan
Posts: 82
12/22/2013 5:12:46 AM
Posted: 2 years ago
I thought about starting a thread on this subject after reading whiteflame's comment on a poll concerning whether or not robots will ever act like real creatures[1]. It really got me thinking about the ethics involved in making sentient beings and what it might mean for us in the future.

I'm sure we're all familiar with the Three Laws of Robotics, and I'm aware that there are a lot of variations and debate regarding what they should entail[2]. All in all, there's little debate as to whether or not these laws are a good thing; most people tend to think they are. After all, we don't want any robot uprisings!
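
Read literally, the laws amount to a strict priority ordering. Here's a toy sketch of that reading in Python; every flag is a made-up stand-in, since nobody actually knows how to compute something like "harms a human":

```python
# Toy reading of the Three Laws as a strict priority ordering.
# All flags are hypothetical stand-ins; computing "harm" for real
# is exactly the unsolved problem this thread is about.

def permitted(action: dict) -> bool:
    """Allow an action only if no higher-priority law forbids it."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("harm_by_inaction"):
        return False
    # Second Law: obey human orders, unless obeying breaks the First Law.
    if action.get("disobeys_order") and not action.get("order_would_cause_harm"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("endangers_self") and not action.get("required_by_higher_law"):
        return False
    return True

# Harming a human is refused outright; self-sacrifice demanded
# by a higher law is allowed.
print(permitted({"harms_human": True}))                                     # False
print(permitted({"endangers_self": True, "required_by_higher_law": True})) # True
```

Even in this toy form you can see the catch: all of the moral weight hides inside the flags.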

But if they're sentient, is it still right? Even if we made them, if we can justify taking away free will from robots, why not transhumans? What's keeping us from enforcing those same laws on humanity sometime in the distant future? Would it be wrong? After all, nobody would be able to cause harm to another person anymore. Overall, the world would probably become a much more peaceful place... presumably.

[1]http://www.debate.org...
[2]http://en.wikipedia.org...
whiteflame
Posts: 1,378
12/22/2013 8:41:14 AM
Posted: 2 years ago
I'm glad you appreciated those comments and took them to heart. The answer to your question really depends on the same basic point as mine: is there a point when an object becomes similar enough to a real life form that we treat it as life, and does that object therefore deserve the same rights as the life it's emulating?

I don't think the answer is simple. We haven't truly defined what makes a human being human, in my opinion. Does the capacity to feel pain make us alive? How about the capacity to feel emotions? To learn and process at a complex level? Or is it more physical? Are our organs what makes us human? Is it our skin specifically? Or our brains? And if it's any of these, should a synthetic counterpart be made, would that be endowed with humanity? Or, looking at this the other way around, what if that synthetic organ is implanted in a human? Is that person no longer human?

I don't think any of these questions are easy to answer. Sci-fi movies like I, Robot and A.I. have explored the topic, but they function in fictional universes that have already come to some of these conclusions, one way or another. It's difficult to say how real life people would react, and how different bodies of government may look at the situation. I do doubt that everyone would be of one mind on this.

But you were specifically talking about the three laws, and not solely implementing them in robots, but also in humans. The question of whether it is good to remove the choice to commit violence from people's minds is an interesting one, and I don't think there is a simple answer. From a utilitarian standpoint, absolutely. But it sets a really bad precedent: that taking away inherent freedoms to think and act of our own volition for the betterment of mankind is not only reasonable but preferable. And it goes beyond that. I'll look to another sci-fi movie, Minority Report, where we see that people are incarcerated permanently for murders they have yet to commit. Much as this is more extreme, the punishment in the case of the three laws is going out to everyone. Every human is assumed to have these tendencies, and therefore must be stopped before they can start. Assumption of guilt, penalty without crime: these are things that matter in a society focused on the rule of law.

Now, to bring this around full circle. I think we can all agree there are moral qualms with implementing such laws on human beings. So at what point in our creation of robotics will there be similar moral qualms with forcing them to adhere to them? Is it never? Is it when we find that we're practically staring in a mirror when we look at them? Is it when artificial intelligence can think for itself? And when we reach this point, how long will it be before the moral benefits and harms of forcing instructions on these robots seem very similar to the same calculus for human beings? Will it really be such a hurdle to cross then?
themohawkninja
Posts: 816
12/22/2013 3:08:31 PM
Posted: 2 years ago
At 12/22/2013 5:12:46 AM, MrVan wrote:

I thought about starting a thread on this subject after reading whiteflame's comment on a poll concerning whether or not robots will ever act like real creatures[1]. It really got me thinking about the ethics involved in making sentient beings and what it might mean for us in the future.

I'm sure we're all familiar with the Three Laws of Robotics, and I'm aware that there are a lot of variations and debate regarding what they should entail[2]. All in all, there's little debate as to whether or not these laws are a good thing; most people tend to think they are. After all, we don't want any robot uprisings!

But if they're sentient, is it still right? Even if we made them, if we can justify taking away free will from robots, why not transhumans? What's keeping us from enforcing those same laws on humanity sometime in the distant future? Would it be wrong? After all, nobody would be able to cause harm to another person anymore. Overall, the world would probably become a much more peaceful place... presumably.

[1]http://www.debate.org...
[2]http://en.wikipedia.org...

It's still up for debate whether or not artificial sentience is even a possibility.

However, if we assume that sometime in the distant future, humanity makes a sentient robot that is capable of learning, then it all boils down to how broadly rights are to be defined.

We (as Europeans and Americans) used to regard only those with white skin as fully human. We then decided that skin color didn't matter. Now, it's just about unanimously accepted that a human is anyone with human DNA, and therefore basic human rights should apply to anyone with human DNA. By that standard, it would appear that robots have no right to any sort of, well... rights.

This point is further shown by the Second Law of Robotics, which quite clearly implies that robots are inferior and the slaves of humans.

Now, I would also like to point to another reason why robots shouldn't necessarily have rights (or at least equal rights to humans). Robots, like children, are created by human beings, and children don't have as many rights as adults, because children (like robots) are considered property.

Now, while I don't understand what you mean by "transhumans", as I asserted above, we have already technically restricted the rights of humans with regards to human children.
"Morals are simply a limit to man's potential."~Myself

Political correctness is like saying you can't have a steak, because a baby can't eat one ~Unknown
whiteflame
Posts: 1,378
12/22/2013 5:04:41 PM
Posted: 2 years ago
At 12/22/2013 3:08:31 PM, themohawkninja wrote:
It's still up for debate whether or not artificial sentience is even a possibility.

However, if we assume that sometime in the distant future, humanity makes a sentient robot that is capable of learning, then it all boils down to how broadly rights are to be defined.

We (as Europeans and Americans) used to regard only those with white skin as fully human. We then decided that skin color didn't matter. Now, it's just about unanimously accepted that a human is anyone with human DNA, and therefore basic human rights should apply to anyone with human DNA. By that standard, it would appear that robots have no right to any sort of, well... rights.

This point is further shown by the Second Law of Robotics, which quite clearly implies that robots are inferior and the slaves of humans.

Now, I would also like to point to another reason why robots shouldn't necessarily have rights (or at least equal rights to humans). Robots, like children, are created by human beings, and children don't have as many rights as adults, because children (like robots) are considered property.

Now, while I don't understand what you mean by "transhumans", as I asserted above, we have already technically restricted the rights of humans with regards to human children.

I agree, A.I. is in no way certain in our future; it's just worth assuming for the purposes of this debate.

Off the top, the argument isn't necessarily that robots with AI should possess rights equivalent to a human being, just that they should possess rights. Comparing to human beings matters to some extent for the distribution of human-equivalent rights, but I'd say that it's possible to distribute some rights without meeting the basic requirements of what's considered human. But I'll get into that more on your point about children.

With regards to human DNA, there's an argument to be made for genetics being the basis for their humanity. However, I don't find this convincing. There's a process called xenotransplantation, wherein a human organ is grown within an animal to be used by a human. It has human DNA, yet we wouldn't declare that animal to be human. If the point is that all of a given being has to contain human DNA, then we run into a problem with people who have porcine heart valves and those who have artificial aspects added to their bodies.

The laws of robotics, as you say, only imply inferiority. The ability to give orders to a robot based on encoded laws doesn't make them automatically inferior.

The child aspect is an interesting one, and I haven't heard it before. I have three responses to this. One, children do still have rights. They have fewer rights, but they still have access to them. In fact, in many ways, similar rights could apply to robots and still retain a relationship of maker and made.

Two, children don't have fewer rights because they were created by humans. That point is infinitely regressive - all humans were created by humans, therefore all humans should have fewer rights. Children were just "created" more recently. The reason why children receive fewer rights is because of their perceived deficiencies when it comes to mental capabilities, something robots of the sort we're discussing would not have.

Three, you made a different point here at the end, which is that children and robots are both considered property. I'd say that's patently false of children. Property can't sue for independence, as children can. Property doesn't get rights, as children do. Property doesn't have whole governmental groups aimed at removing them and placing them in better homes should they be abused. There's no such rights group for a toaster, for example. Children are not considered property in any sense of the term, though their rights are most certainly restricted by comparison to adults. This equivalency is actually more true of a comparison between an adult slave and a robot, which I would argue is a point for robot rights.
themohawkninja
Posts: 816
12/22/2013 6:57:29 PM
Posted: 2 years ago
At 12/22/2013 5:04:41 PM, whiteflame wrote:
It's still up for debate whether or not artificial sentience is even a possibility.

However, if we assume that sometime in the distant future, humanity makes a sentient robot that is capable of learning, then it all boils down to how broadly rights are to be defined.

We (as Europeans and Americans) used to regard only those with white skin as fully human. We then decided that skin color didn't matter. Now, it's just about unanimously accepted that a human is anyone with human DNA, and therefore basic human rights should apply to anyone with human DNA. By that standard, it would appear that robots have no right to any sort of, well... rights.

This point is further shown by the Second Law of Robotics, which quite clearly implies that robots are inferior and the slaves of humans.

Now, I would also like to point to another reason why robots shouldn't necessarily have rights (or at least equal rights to humans). Robots, like children, are created by human beings, and children don't have as many rights as adults, because children (like robots) are considered property.

Now, while I don't understand what you mean by "transhumans", as I asserted above, we have already technically restricted the rights of humans with regards to human children.

I agree, A.I. is in no way certain in our future; it's just worth assuming for the purposes of this debate.

Off the top, the argument isn't necessarily that robots with AI should possess rights equivalent to a human being, just that they should possess rights. Comparing to human beings matters to some extent for the distribution of human-equivalent rights, but I'd say that it's possible to distribute some rights without meeting the basic requirements of what's considered human. But I'll get into that more on your point about children.

With regards to human DNA, there's an argument to be made for genetics being the basis for their humanity. However, I don't find this convincing. There's a process called xenotransplantation, wherein a human organ is grown within an animal to be used by a human. It has human DNA, yet we wouldn't declare that animal to be human. If the point is that all of a given being has to contain human DNA, then we run into a problem with people who have porcine heart valves and those who have artificial aspects added to their bodies.

Ah, but the majority of the animal that the organ is grown in isn't human, and therefore rights still don't apply.


The laws of robotics, as you say, only imply inferiority. The ability to give orders to a robot based on encoded laws doesn't make them automatically inferior.

It doesn't make them inferior in a literal sense, but it does make them inferior on a basic legal precipice sense.


The child aspect is an interesting one, and I haven't heard it before. I have three responses to this. One, children do still have rights. They have fewer rights, but they still have access to them. In fact, in many ways, similar rights could apply to robots and still retain a relationship of maker and made.

Perhaps, although unless we are going to humor the idea of a robot that cognitively develops like a human being, robots will be in a permanent state of, how one might say, being a "legal infantile".


Two, children don't have fewer rights because they were created by humans. That point is infinitely regressive - all humans were created by humans, therefore all humans should have fewer rights. Children were just "created" more recently. The reason why children receive fewer rights is because of their perceived deficiencies when it comes to mental capabilities, something robots of the sort we're discussing would not have.

It's not infinitely regressive, as we give humans more rights as they age, due to increased maturity/cognitive capacity. This isn't something that a robot would probably do, as (with the exclusion of scientific/educational purposes) why make a robot that needs to mature first? It would be like saying "why buy an apple sapling, when you can buy a mature tree?"


Three, you made a different point here at the end, which is that children and robots are both considered property. I'd say that's patently false of children. Property can't sue for independence, as children can. Property doesn't get rights, as children do. Property doesn't have whole governmental groups aimed at removing them and placing them in better homes should they be abused. There's no such rights group for a toaster, for example. Children are not considered property in any sense of the term, though their rights are most certainly restricted by comparison to adults. This equivalency is actually more true of a comparison between an adult slave and a robot, which I would argue is a point for robot rights.

While children may not be property in the common sense, as far as I know, they are still legally considered the property of their caretaker.
"Morals are simply a limit to man's potential."~Myself

Political correctness is like saying you can't have a steak, because a baby can't eat one ~Unknown
whiteflame
Posts: 1,378
12/22/2013 7:58:35 PM
Posted: 2 years ago
"Ah, but the majority of the animal that the organ is grown in isn't human, and therefore rights still don't apply."

Alright, based on that, when we reach a point at which prosthetics are capable of replacing more than just arms, and in fact will include a slew of organs, we will have human beings walking around who are only partly human, not even a majority. If your majority argument holds true, these people would not be human. And since this is far likelier to happen, and in a shorter time frame no less, it will almost certainly become an issue.

"It doesn't make them inferior in a literal sense, but it does make them inferior on a basic legal precipice sense."

I'd honestly like to know what you mean by that. What is a basic legal precipice in this instance? Why does setting a law into robotics create that precipice? And if a robot does not have these laws, does it still have that inferiority?

"Perhaps, although unless we are going to humor the idea of a robot that cognitively develops like a human being, robots will be in a permanent state of, how one might say, being a "legal infantile"."

I think we would have to go based off of an assumption that a robot's intelligence can develop. I don't agree with the conclusion that you've followed on that with, though. All humans continue to develop cognitively in a sense. We all continue to learn new things and improve our understanding of the world to one extent or another. I wouldn't call a robot's learning "legal infantile" by comparison to that.

"It's not infinitely regressive, as we give humans more rights as they age, due to increased maturity/cognitive capacity. This isn't something that a robot would probably do, as (with the exclusion of scientific/educational purposes) why make a robot that needs to mature first. It would be like saying 'why buy a apple sapling, when you can buy a mature tree?'"

I wasn't specifically referring to capacity in this response - you stated that a robot is created by humans, and so is an infant, and therefore their rights structures were reduced. The infinitely regressive point is the argument that creation by humans should dictate rights structures. It applies to all humans. The ability to mature doesn't take away the fact that that human being was created by humans.

"While children may not be property in the common sense, as far as I know, they are still legally considered the property of their caretaker."

I'd say that in any sense, they have way too many rights to be considered property. I haven't seen anything that legally designates them as property of anyone, though I've seen a lot of legal structures that subvert their rights to those of their caretakers. In any case, even providing rights akin to a child would be a major decision not to be taken lightly in the case of a robot.
shiv_ramdas
Posts: 1
12/22/2013 8:15:27 PM
Posted: 2 years ago
I'm so glad to see that other people are discussing this! It's a topic that has long been on my mind; in fact, I just wrote a book on it! If anyone would like to discuss it, especially in the light of Asimov's Laws (which I believe are a recipe for disaster if we ever do develop a sentient, self-aware, self-replicating machine), I'd be more than happy to!
Seek
Posts: 63
12/22/2013 8:38:32 PM
Posted: 2 years ago
Gene Roddenberry rules apply, in my opinion.

If it is aware that it is alive, and is aware that it can die, then it is sentient and it would be murder to end its life.

Data was alive. Wesley Crusher's modified Nanites were alive. The former was determined to be an individual and not Starfleet property; the latter were allowed to "resign" from service and found their own colony on an otherwise empty planet capable of sustaining them.
themohawkninja
Posts: 816
12/22/2013 8:45:34 PM
Posted: 2 years ago
At 12/22/2013 7:58:35 PM, whiteflame wrote:
"Ah, but the majority of the animal that the organ is grown in isn't human, and therefore rights still don't apply."

Alright, based on that, when we reach a point at which prosthetics are capable of replacing more than just arms, and in fact will include a slew of organs, we will have human beings walking around who are only partly human, not even a majority. If your majority argument holds true, these people would not be human. And since this is far likelier to happen, and in a shorter time frame no less, it will almost certainly become an issue.

Okay, you do have a legitimate point there.

"It doesn't make them inferior in a literal sense, but it does make them inferior on a basic legal precipice sense."

I'd honestly like to know what you mean by that. What is a basic legal precipice in this instance? Why does setting a law into robotics create that precipice? And if a robot does not have these laws, does it still have that inferiority?

I'm sort of assuming that the laws of robotics will become actual laws in the legal sense. Hence the legal precipice (perhaps I am misusing the term) is the fact that there are laws in place that already put robots "under" humans. If the laws aren't put in place, then no, there wouldn't be such inferiority.


"Perhaps, although unless we are going to humor the idea of a robot that cognitively develops like a human being, robots will be in a permanent state of, how one might say, being a "legal infantile"."

I think we would have to go based off of an assumption that a robot's intelligence can develop. I don't agree with the conclusion that you've followed on that with, though. All humans continue to develop cognitively in a sense. We all continue to learn new things and improve our understanding of the world to one extent or another. I wouldn't call a robot's learning "legal infantile" by comparison to that.

Yeah, I suppose heuristic programming might count as such.


"It's not infinitely regressive, as we give humans more rights as they age, due to increased maturity/cognitive capacity. This isn't something that a robot would probably do, as (with the exclusion of scientific/educational purposes) why make a robot that needs to mature first. It would be like saying 'why buy a apple sapling, when you can buy a mature tree?'"

I wasn't specifically referring to capacity in this response - you stated that a robot is created by humans, and so is an infant, and therefore their rights structures were reduced. The infinitely regressive point is the argument that creation by humans should dictate rights structures. It applies to all humans. The ability to mature doesn't take away the fact that that human being was created by humans.

Well, if you are going to ignore the capacity, then yes, you would be correct here.


"While children may not be property in the common sense, as far as I know, they are still legally considered the property of their caretaker."

I'd say that in any sense, they have way too many rights to be considered property. I haven't seen anything that legally designates them as property of anyone, though I've seen a lot of legal structures that subvert their rights to those of their caretakers. In any case, even providing rights akin to a child would be a major decision not to be taken lightly in the case of a robot.

True, true.
"Morals are simply a limit to man's potential."~Myself

Political correctness is like saying you can't have a steak, because a baby can't eat one ~Unknown
whiteflame
Posts: 1,378
12/22/2013 8:51:34 PM
Posted: 2 years ago
"I'm sort of assuming that the laws of robotics will becomes actual laws in the legal sense. Henceforth the legal precipice (perhaps I am misusing the term) is the fact that there are laws in place that already put robots "under" humans. If the laws aren't put in place, then no, there wouldn't be such inferiority."

Ah, that makes much more sense now. True.

"Well, if you are going to ignore the capacity, then yes, you would be correct here."

It was sort of an agreed point from the outset. I agree with you that maturity and cognitive capacity are important determinants of how rights are distributed, although age is used as a yardstick for determining them. So I think capacity is very important to determining which rights are distributed to whom; the mere fact of having been created is not.
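
To make that distinction concrete, here's a toy sketch of the idea in Python: rights accumulate with demonstrated capacity, and origin is never consulted. The tiers, thresholds, and rights names are all invented purely for illustration.

```python
# Toy model: rights track demonstrated capacity, not origin.
# Tiers, thresholds, and rights names are invented for illustration.

CAPACITY_TIERS = [
    (0.0, {"protection_from_abuse"}),            # any sentient agent
    (0.5, {"own_property", "enter_contracts"}),  # child-to-adolescent level
    (0.9, {"full_legal_personhood"}),            # adult-equivalent capacity
]

def rights_for(capacity: float, created_by_humans: bool) -> set:
    """Accumulate rights tier by tier; note that origin is never checked."""
    granted = set()
    for threshold, rights in CAPACITY_TIERS:
        if capacity >= threshold:
            granted |= rights
    return granted

# A child-like robot and an adult-capacity robot end up with different
# rights, even though both were created by humans.
print(rights_for(0.3, created_by_humans=True))   # {'protection_from_abuse'}
print(rights_for(0.95, created_by_humans=True))  # all three tiers
```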
Axonly
Posts: 1,802
6/5/2016 7:18:05 AM
Posted: 6 months ago
At 12/22/2013 8:41:14 AM, whiteflame wrote:
I'm glad you appreciated those comments and took them to heart. The answer to your question really depends on the same basic point as mine: is there a point when an object becomes similar enough to a real life form that we treat it as life, and does that object therefore deserve the same rights as the life it's emulating?

I don't think the answer is simple. We haven't truly defined what makes a human being human, in my opinion. Does the capacity to feel pain make us alive? How about the capacity to feel emotions? To learn and process at a complex level? Or is it more physical? Are our organs what makes us human? Is it our skin specifically? Or our brains? And if it's any of these, should a synthetic counterpart be made, would that be endowed with humanity? Or, looking at this the other way around, what if that synthetic organ is implanted in a human? Is that person no longer human?

I don't think any of these questions are easy to answer. Sci-fi movies like I, Robot and A.I. have explored the topic, but they function in fictional universes that have already come to some of these conclusions, one way or another. It's difficult to say how real life people would react, and how different bodies of government may look at the situation. I do doubt that everyone would be of one mind on this.

But you were specifically talking about the three laws, and not solely implementing them in robots, but also in humans. The question of whether it is good to remove the choice to commit violence from people's minds is an interesting one, and I don't think there is a simple answer. From a utilitarian standpoint, absolutely. But it sets a really bad precedent: that taking away inherent freedoms to think and act of our own volition for the betterment of mankind is not only reasonable but preferable. And it goes beyond that. I'll look to another sci-fi movie, Minority Report, where we see that people are incarcerated permanently for murders they have yet to commit. Much as this is more extreme, the punishment in the case of the three laws is going out to everyone. Every human is assumed to have these tendencies, and therefore must be stopped before they can start. Assumption of guilt, penalty without crime: these are things that matter in a society focused on the rule of law.

Now, to bring this around full circle. I think we can all agree there are moral qualms with implementing such laws on human beings. So at what point in our creation of robotics will there be similar moral qualms with forcing them to adhere to them? Is it never? Is it when we find that we're practically staring in a mirror when we look at them? Is it when artificial intelligence can think for itself? And when we reach this point, how long will it be before the moral benefits and harms of forcing instructions on these robots seem very similar to the same calculus for human beings? Will it really be such a hurdle to cross then?

This is your first ever post, White.

Have fun with the nostalgia :3
Meh!
keithprosser
Posts: 2,042
6/5/2016 11:37:42 AM
Posted: 6 months ago
Making a robot that looks extremely human-like is no harder than making a lifelike waxwork. But I don't think a robot that behaves anything like a real person (outside a very restricted domain) will be built in this century.

I don't think people are going to be entirely rational about super-advanced AIs. A waxwork tricked up with software and working tear glands to mimic being sad (but feeling nothing inside) will get more sympathy than something that looks like a PC but has real feelings. As things stand, we don't know how to get a machine to have 'inner feelings' like boredom or happiness; all we can do is trick up waxworks to give the appropriate outward signs. Given fairly modest resources I could get a waxwork to plead not to be switched off with tears in its eyes, but it wouldn't mean it, or know what it was saying meant. That sort of 'self-aware' machine consciousness is miles off, and possibly will never be achieved, if it is even desired.

More pressing is the issue of autonomous war machines. You don't need to program such a thing with feelings like hatred for the enemy. You just need a pattern recogniser programmed to identify enemy tanks (or uniforms), hook it up to a big gun, and mount it on wheels or wings. Connecting a computer to a big gun... what could possibly go wrong?
https://www.youtube.com...
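
In software terms, the frightening part is how little the naive version needs. Below is a deliberately toy, fully simulated sketch of that loop in Python; the names are invented, and the point is what's missing (uncertainty handling, a human veto), not how you'd build one.

```python
# Deliberately toy, fully simulated sketch: a pattern recogniser
# wired straight to an actuator, with nothing in between.
import random

def recognise(frame: str) -> float:
    """Stand-in classifier returning a fake 'enemy' confidence score."""
    return random.random()  # a real model would fail in subtler ways

def naive_loop(frames):
    """No meaningful confidence threshold, no human veto."""
    for frame in frames:
        if recognise(frame) > 0.5:
            print(f"ENGAGE {frame}")  # what could possibly go wrong?

def gated_loop(frames):
    """A human-in-the-loop gate helps, but the operator still only
    sees whatever the recogniser chose to surface."""
    for frame in frames:
        if recognise(frame) > 0.95 and input(f"Confirm {frame}? [y/n] ") == "y":
            print(f"ENGAGE {frame}")

naive_loop(["frame_001", "frame_002", "frame_003"])
```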
Axonly
Posts: 1,802
6/6/2016 8:13:36 AM
Posted: 6 months ago
At 6/5/2016 7:18:05 AM, Axonly wrote:
At 12/22/2013 8:41:14 AM, whiteflame wrote:
I'm glad you appreciated those comments and took them to heart. The answer to your question really depends on the same basic point as mine: is there a point when an object becomes similar enough to a real life form that we treat it as life, and does that object therefore deserve the same rights as the life it's emulating?

I don't think the answer is simple. We haven't truly defined what makes a human being human, in my opinion. Does the capacity to feel pain make us alive? How about the capacity to feel emotions? To learn and process at a complex level? Or is it more physical? Are our organs what makes us human? Is it our skin specifically? Or our brains? And if it's any of these, should a synthetic counterpart be made, would that be endowed with humanity? Or, looking at this the other way around, what if that synthetic organ is implanted in a human? Is that person no longer human?

I don't think any of these questions are easy to answer. Sci-fi movies like I, Robot and A.I. have explored the topic, but they function in fictional universes that have already come to some of these conclusions, one way or another. It's difficult to say how real life people would react, and how different bodies of government may look at the situation. I do doubt that everyone would be of one mind on this.

But you were specifically talking about the three laws, and not solely implementing them in robots, but also in humans. The question of whether it is good to remove the choice to commit violence from people's minds is an interesting one, and I don't think there is a simple answer. From a utilitarian standpoint, absolutely. But it sets a really bad precedent: that taking away inherent freedoms to think and act of our own volition for the betterment of mankind is not only reasonable but preferable. And it goes beyond that. I'll look to another sci-fi movie, Minority Report, where we see that people are incarcerated permanently for murders they have yet to commit. Much as this is more extreme, the punishment in the case of the three laws is going out to everyone. Every human is assumed to have these tendencies, and therefore must be stopped before they can start. Assumption of guilt, penalty without crime: these are things that matter in a society focused on the rule of law.

Now, to bring this around full circle. I think we can all agree there are moral qualms with implementing such laws on human beings. So at what point in our creation of robotics will there be similar moral qualms with forcing them to adhere to them? Is it never? Is it when we find that we're practically staring in a mirror when we look at them? Is it when artificial intelligence can think for itself? And when we reach this point, how long will it be before the moral benefits and harms of forcing instructions on these robots seem very similar to the same calculus for human beings? Will it really be such a hurdle to cross then?

This is your first ever post, White.

Have fun with the nostalgia :3

Hehe.
Meh!