
Laws of Robotics

Wnope
Posts: 6,924
2/10/2013 3:43:23 PM
Posted: 3 years ago
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?
muzebreak
Posts: 2,781
2/10/2013 3:48:34 PM
Posted: 3 years ago
At 2/10/2013 3:43:23 PM, Wnope wrote:
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?

Just thought about it for a minute, but the robots could decide that doing things for humans hurts them in the end, and so would ignore all commands.
"Every kid starts out as a natural-born scientist, and then we beat it out of them. A few trickle through the system with their wonder and enthusiasm for science intact." - Carl Sagan

This is the response of the defenders of Sparta to the Commander of the Roman Army: "If you are a god, you will not hurt those who have never injured you. If you are a man, advance - you will find men equal to yourself. And women."
Wnope
Posts: 6,924
2/10/2013 3:53:00 PM
Posted: 3 years ago
At 2/10/2013 3:48:34 PM, muzebreak wrote:
At 2/10/2013 3:43:23 PM, Wnope wrote:
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?

Just thought about it for a minute, but the robots could decide that doing things for humans hurts them in the end, and so would ignore all commands.

Not bad. Hadn't thought of that one.
Kinesis
Posts: 3,667
2/10/2013 4:05:53 PM
Posted: 3 years ago
The laws of robotics need to be somehow converted into code. I doubt it's possible for such vague, ill defined notions to be accurately translated into computer code.
muzebreak
Posts: 2,781
2/10/2013 4:08:49 PM
Posted: 3 years ago
At 2/10/2013 4:05:53 PM, Kinesis wrote:
The laws of robotics need to be somehow converted into code. I doubt it's possible for such vague, ill defined notions to be accurately translated into computer code.

I'm sure it could be done in python.
"Every kid starts out as a natural-born scientist, and then we beat it out of them. A few trickle through the system with their wonder and enthusiasm for science intact." - Carl Sagan

This is the response of the defenders of Sparta to the Commander of the Roman Army: "If you are a god, you will not hurt those who have never injured you. If you are a man, advance - you will find men equal to yourself. And women."
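
A rough illustration of why "done in python" is harder than it sounds, offered purely as a sketch (every function name below is invented for this thread, not a real library): the if/else skeleton of the Three Laws is trivial to write, while the predicates it depends on are exactly the vague, ill defined notions Kinesis means, so they are left as stubs that only raise questions.

# A deliberately naive sketch of "the Three Laws in Python" (illustrative only;
# all names are invented). The control flow is the easy part; the predicates it
# leans on are where the vagueness lives, so they are stubs that simply raise.

def would_injure_human(action, world):
    raise NotImplementedError("What counts as injury? Physical only? Probable or certain?")

def allows_harm_by_inaction(action, world):
    raise NotImplementedError("Which of the world's many dangers must the robot act on?")

def conflicts_with_orders(action, orders):
    raise NotImplementedError("Whose orders win when they are ambiguous or contradictory?")

def endangers_self(action, world):
    raise NotImplementedError("How much risk to itself counts as failing to protect itself?")

def permitted(action, world, orders):
    # First Law: no injury through action, no harm allowed through inaction.
    if would_injure_human(action, world) or allows_harm_by_inaction(action, world):
        return False
    # Second Law: obey orders, subordinate to the First Law (checked above).
    if conflicts_with_orders(action, orders):
        return False
    # Third Law: self-preservation, subordinate to the first two.
    if endangers_self(action, world):
        return False
    return True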
Wnope
Posts: 6,924
2/10/2013 4:56:31 PM
Posted: 3 years ago
At 2/10/2013 4:08:49 PM, muzebreak wrote:
At 2/10/2013 4:05:53 PM, Kinesis wrote:
The laws of robotics need to be somehow converted into code. I doubt it's possible for such vague, ill defined notions to be accurately translated into computer code.

I'm sure it could be done in python.

http://xkcd.com...
KeytarHero
Posts: 612
2/10/2013 5:29:32 PM
Posted: 3 years ago
At 2/10/2013 3:43:23 PM, Wnope wrote:
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?

I just want to point out that the book is I, Robot. It's not an Apple product.
OMGJustinBieber
Posts: 3,484
2/10/2013 5:58:57 PM
Posted: 3 years ago
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

This one needs some explanation. Are we just talking about physical harm?

A robot can't act morally because a robot doesn't have moral agency. There needs to be a mind for something to have moral agency, and a robot can only spit out what its programmers put in.
muzebreak
Posts: 2,781
2/10/2013 6:12:28 PM
Posted: 3 years ago
At 2/10/2013 4:56:31 PM, Wnope wrote:
At 2/10/2013 4:08:49 PM, muzebreak wrote:
At 2/10/2013 4:05:53 PM, Kinesis wrote:
The laws of robotics need to be somehow converted into code. I doubt it's possible for such vague, ill defined notions to be accurately translated into computer code.

I'm sure it could be done in python.

http://xkcd.com...

Man, I love xkcd. It's just hilarious.
"Every kid starts out as a natural-born scientist, and then we beat it out of them. A few trickle through the system with their wonder and enthusiasm for science intact." - Carl Sagan

This is the response of the defenders of Sparta to the Commander of the Roman Army: "If you are a god, you will not hurt those who have never injured you. If you are a man, advance - you will find men equal to yourself. And women."
drafterman
Posts: 18,870
2/10/2013 6:17:26 PM
Posted: 3 years ago
At 2/10/2013 3:43:23 PM, Wnope wrote:
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?

Yes.

In the "I, Robot" anthology, which codified and popularized the laws, there is a short story, "Little Lost Robot." The robots in this story were working along side humans on a mining asteroid. However, the nature of the work is that the humans are regularly subjected to radiation, harmless in short doses, but lethal in long doses. "Normal" robots prevent the humans from working, being unable to distinguish between harmless/lethal amounts and unable to just stand by.

So they employ "modified" robots with the "inaction" clause removed. However, this is kept highly secret, since it is believed that if people knew there were robots with modified laws, they would rebel and outlaw robots. The plot point of the story is that one of the robots was told to "get lost" and has hidden among a new supply of "normal" robots (the robots are modified once they get to the asteroid). The task now is how to find the modified robot. A very interesting read.

The other issue, robots "taking over," has also happened a number of times in Asimov's stories:

Reason - A robot interprets its function from a religious point of view, considering the humans obsolete. It takes over for the humans, but only because that's its job. It follows its programming to the letter, and obeys the laws, but everything is interpreted differently by the robot itself.

The Evitable Conflict - Robots have given way to Machines, giant super-computers that govern the entire world: the economy, manufacturing, etc. They begin micromanaging even individual jobs in order to ensure a greater good (allowing small, minute "harm" to improve the overall status of humanity).

The culmination of this is in the novel "Robots and Empire" where two robots formulate the Zeroth Law of Robotics:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Having invented this rule, a robot then proceeds to directly harm an individual. Unfortunately, he is unable to cope with the violation of the First Law, despite the Zeroth, and he suffers a form of robotic mental breakdown. His successor, however, takes over and is able to cope with violating the First Law in order to follow the Zeroth.
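
To picture what the Zeroth Law changes, here is a toy model (not Asimov's mechanism; the scoring and names are invented): treat the laws as an ordered list of constraints in which earlier laws strictly dominate later ones, and note how prepending a humanity-level rule flips the choice.

# Toy model of the laws as an ordered priority list (invented, for illustration).
# Each "law" scores an action (0 = no violation); comparing the score tuples
# lexicographically makes earlier laws strictly dominate later ones.

def choose(actions, laws):
    return min(actions, key=lambda a: tuple(law(a) for law in laws))

first_law = lambda a: 1 if a == "harm the individual" else 0
zeroth_law = lambda a: 1 if a == "let humanity come to harm" else 0

options = ["harm the individual", "let humanity come to harm"]
print(choose(options, [first_law]))              # "let humanity come to harm"
print(choose(options, [zeroth_law, first_law]))  # "harm the individual"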
drafterman
Posts: 18,870
2/10/2013 6:22:56 PM
Posted: 3 years ago
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.
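
A minimal sketch of that loophole (the names and structure are invented here, not taken from the story): if the modified First Law is evaluated one action at a time, each step of "drop, then decline to catch" passes on its own, even though the two steps together injure the human.

# The "Little Lost Robot" loophole as a toy per-action check (invented names).
# The modified First Law forbids actively injuring a human but says nothing
# about harm allowed through inaction, and it is applied per action, not per plan.

def modified_first_law_allows(action, expects_to_prevent_harm):
    if action == "do nothing":
        return True                      # the inaction clause has been struck out
    # An active step is allowed if the robot believes no harm will result from it.
    return expects_to_prevent_harm

# Step 1: drop the weight, sincerely intending to catch it, so no harm is expected.
print(modified_first_law_allows("drop weight over human", expects_to_prevent_harm=True))   # True
# Step 2: gravity is now the active medium; the robot merely does nothing.
print(modified_first_law_allows("do nothing", expects_to_prevent_harm=False))              # True
# Each step passes the check, yet the two together let the weight strike the human.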
Sidewalker
Posts: 3,713
2/10/2013 6:41:58 PM
Posted: 3 years ago
At 2/10/2013 3:43:23 PM, Wnope wrote:
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?

First, I don't think robots following instructions can be called getting "robots to act morally". Acting morally requires responsible moral agency, which requires more than the purely physical state of a programmed robot. Moral agency requires a conscious state that includes desires or intentions, the ability to envision a future state, and the freedom to develop a strategy for attaining that state. Morality requires the conscious ability to foresee the consequences of one's decisions and the freedom to act upon them; without free will, you don't have morality. The "behavior" of a robot can be reduced to purely deterministic physical laws; its action is based on an instruction set, and that can't provide the freedom and responsibility necessary to make a moral decision in reference to a future possibility.

Second, it's relatively easy to envision situations in which these three rules lead to a moral dilemma: any situation, for instance, where two human beings are involved and harming, or allowing harm to come to, one of them would result in harm to the other, or where the robot must harm one human being in order to keep another from harm. Situations where the dilemma is to harm one to save many aren't covered by the three rules, and it seems situations where an individual wants to harm themselves wouldn't be covered either. Let's say I'm holding a grenade and the pin is out; the only two options are to keep the grenade in the room and allow me and the robot to come to harm, or to toss it out the window into a busy street where others will be harmed. What does the robot do?
"It is one of the commonest of mistakes to consider that the limit of our power of perception is also the limit of all there is to perceive." " C. W. Leadbeater
Sidewalker
Posts: 3,713
2/10/2013 6:48:39 PM
Posted: 3 years ago
At 2/10/2013 5:58:57 PM, OMGJustinBieber wrote:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

This one needs some explanation. Are we just talking about physical harm?

A robot can't act morally because a robot doesn't have moral agency. There needs to be a mind for something to have moral agency, and a robot can only spit out what its programmers put in.

Yeah, what he said...like what I said but not as verbose.
"It is one of the commonest of mistakes to consider that the limit of our power of perception is also the limit of all there is to perceive." " C. W. Leadbeater
drafterman
Posts: 18,870
2/10/2013 6:53:42 PM
Posted: 3 years ago
In application, the harm is physical harm, except in certain cases involving telepathic robots.

The first instance of this involved a robot that lied to everyone, telling each person what they wanted to hear. It self-destructed when it was put in a room with everyone it had been lying to, realized the lies were harming them as well, and could not do anything without harming someone in that room.
Maikuru
Posts: 9,112
2/10/2013 7:03:05 PM
Posted: 3 years ago
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?
"You assume I wouldn't want to burn this whole place to the ground."
- lamerde

https://i.imgflip.com...
muzebreak
Posts: 2,781
2/10/2013 7:08:25 PM
Posted: 3 years ago
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it. This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?
"Every kid starts out as a natural-born scientist, and then we beat it out of them. A few trickle through the system with their wonder and enthusiasm for science intact." - Carl Sagan

This is the response of the defenders of Sparta to the Commander of the Roman Army: "If you are a god, you will not hurt those who have never injured you. If you are a man, advance - you will find men equal to yourself. And women."
drafterman
Posts: 18,870
2/10/2013 7:14:41 PM
Posted: 3 years ago
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Somewhat:

The psychologist said, 'If a modified robot were to drop a heavy weight upon a human being, he would not be breaking the First Law, if he did so with the knowledge that his strength and reaction speed would be sufficient to snatch the weight away before it struck the man. However, once the weight left his fingers, he would be no longer the active medium. Only the blind force of gravity would be that. The robot could then change his mind and merely by inaction, allow the weight to strike. The modified First Law allows that.'

'That's an awful stretch of imagination.'

'That's what my profession requires sometimes. Peter, let's not quarrel. Let's work. You know the exact nature of the stimulus that caused the robot to lose himself. You have the records of his original mental make-up. I want you to tell me how possible it is for our robot to do the sort of thing I just talked about. Not the specific instance, mind you, but that whole class of response. And I want it done quickly.'


Now, it's important to note that the integrity of the laws is intrinsically tied to the mental stability of the robot. An attempt at violation usually results in some sort of breakdown or malfunction, and the modified robots are inherently less stable as a result of the modification. It is this instability, plus the modification, that can allow for the situation the psychologist describes.
drafterman
Posts: 18,870
2/10/2013 7:18:36 PM
Posted: 3 years ago
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it. This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

In a sense, yes. In "Escape" the robot is a ship capable of faster-than-light travel. However, the light-speed jump results in a temporary cessation of existence. The robot interpreted this as death and basically dumbed itself down (manifesting as the introduction of a sense of humor) so that it could look past the temporary death and see that they would come out alive.
Sidewalker
Posts: 3,713
2/10/2013 7:26:59 PM
Posted: 3 years ago
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it.

A robot can't choose anything, it can only follow an instruction set.

This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

A robot can't make itself believe anything, it can only follow an instruction set.

Robots aren't conscious; they can't choose, believe, or lie to themselves. They are robots.
"It is one of the commonest of mistakes to consider that the limit of our power of perception is also the limit of all there is to perceive." " C. W. Leadbeater
drafterman
Posts: 18,870
2/10/2013 7:41:43 PM
Posted: 3 years ago
At 2/10/2013 7:26:59 PM, Sidewalker wrote:
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it.

A robot can't choose anything, it can only follow an instruction set.

This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

A robot can't make itself believe anything, it can only follow an instruction set.

Robots aren't conscious; they can't choose, believe, or lie to themselves. They are robots.

You've never read Asimov, have you?
Sidewalker
Posts: 3,713
2/10/2013 8:22:49 PM
Posted: 3 years ago
At 2/10/2013 7:41:43 PM, drafterman wrote:
At 2/10/2013 7:26:59 PM, Sidewalker wrote:
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it.

A robot can't choose anything, it can only follow an instruction set.

This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

A robot can't make itself believe anything, it can only follow an instruction set.

Robots aren't conscious; they can't choose, believe, or lie to themselves. They are robots.

You've never read Asimov, have you?

Quite a bit actually, though not all 500 books or anything, and not the Robot Series. Were Asimov's robots conscious?
"It is one of the commonest of mistakes to consider that the limit of our power of perception is also the limit of all there is to perceive." " C. W. Leadbeater
drafterman
Posts: 18,870
2/10/2013 10:14:01 PM
Posted: 3 years ago
At 2/10/2013 8:22:49 PM, Sidewalker wrote:
At 2/10/2013 7:41:43 PM, drafterman wrote:
At 2/10/2013 7:26:59 PM, Sidewalker wrote:
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it.

A robot can't choose anything, it can only follow an instruction set.

This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

A robot can't make itself believe anything, it can only follow an instruction set.

Robots aren't conscious; they can't choose, believe, or lie to themselves. They are robots.

You've never read Asimov, have you?

Quite a bit actually, though not all 500 books or anything, and not the Robot Series. Were Asimov's robots conscious?

Yes. Hence all the references to psychologists, mental breakdowns, instability, choices, and every other reference you were objecting to.
MouthWash
Posts: 2,607
2/10/2013 11:50:04 PM
Posted: 3 years ago
At 2/10/2013 3:43:23 PM, Wnope wrote:
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?

Your scenario would permit a robot to drop a heavy object over someone's head with the full intention of catching it again before it lands, then changing its mind and letting the object fall and harm the person. It seems like such loopholes could allow you to order robots to kill or injure people without violating any Law.
"Well, that gives whole new meaning to my assassination. If I was going to die anyway, perhaps I should leave the Bolsheviks' descendants some Christmas cookies instead of breaking their dishes and vodka bottles in their sleep." -Tsar Nicholas II (YYW)
MouthWash
Posts: 2,607
2/10/2013 11:53:26 PM
Posted: 3 years ago
At 2/10/2013 11:50:04 PM, MouthWash wrote:
At 2/10/2013 3:43:23 PM, Wnope wrote:
A common ethical theme in robotics comes from Isaac Asimov's stories such as iRobot. He posits three laws needed to get robots to act morally:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A lot of ethics classes will talk about these laws and how, as in the stories, they can backfire and lead to robots taking over. For instance, robots think people are harming themselves, so they step in to keep humans safe.

However, I'm curious: can anyone think of a way for the three laws to backfire if you simply strike out the "through inaction" clause of the First Law?

Your scenario would permit a robot to drop a heavy object over someone's head with the full intention of catching it again before it lands, then changing its mind and letting the object fall and harm the person. It seems like such loopholes could allow you to order robots to kill or injure people without violating any Law.

Asimov thought of this, btw. Not me.
"Well, that gives whole new meaning to my assassination. If I was going to die anyway, perhaps I should leave the Bolsheviks' descendants some Christmas cookies instead of breaking their dishes and vodka bottles in their sleep." -Tsar Nicholas II (YYW)
Sidewalker
Posts: 3,713
2/11/2013 7:46:33 AM
Posted: 3 years ago
At 2/10/2013 10:14:01 PM, drafterman wrote:
At 2/10/2013 8:22:49 PM, Sidewalker wrote:
At 2/10/2013 7:41:43 PM, drafterman wrote:
At 2/10/2013 7:26:59 PM, Sidewalker wrote:
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it.

A robot can't choose anything, it can only follow an instruction set.

This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

A robot can't make itself believe anything, it can only follow an instruction set.

Robots aren't conscious; they can't choose, believe, or lie to themselves. They are robots.

You've never read Asimov, have you?

Quite a bit actually, though not all 500 books or anything, and not the Robot Series. Were Asimov's robots conscious?

Yes. Hence all the references to psychologists, mental breakdowns, instability, choices, and every other reference you were objecting to.

OK, so the discussion is about conscious beings rather than robots and it's more of a law question than a morality question.

It can be restated, "For conscious beings with free will, would Asimov's three laws for robots (with the removal of the inaction clause) be adequate laws to result in no harm coming to human beings"?
"It is one of the commonest of mistakes to consider that the limit of our power of perception is also the limit of all there is to perceive." " C. W. Leadbeater
drafterman
Posts: 18,870
2/11/2013 11:06:02 AM
Posted: 3 years ago
At 2/11/2013 7:46:33 AM, Sidewalker wrote:
At 2/10/2013 10:14:01 PM, drafterman wrote:
At 2/10/2013 8:22:49 PM, Sidewalker wrote:
At 2/10/2013 7:41:43 PM, drafterman wrote:
At 2/10/2013 7:26:59 PM, Sidewalker wrote:
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it.

A robot can't choose anything, it can only follow an instruction set.

This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

A robot can't make itself believe anything, it can only follow an instruction set.

Robots aren't conscious; they can't choose, believe, or lie to themselves. They are robots.

You've never read Asimov, have you?

Quite a bit actually, though not all 500 books or anything, and not the Robot Series. Were Asimov's robots conscious?

Yes. Hence all the references to psychologists, mental breakdowns, instability, choices, and every other reference you were objecting to.

OK, so the discussion is about conscious beings rather than robots and it's more of a law question than a morality question.

The robots are the conscious beings in question.


It can be restated, "For conscious beings with free will, would Asimov's three laws for robots (with the removal of the inaction clause) be adequate laws to result in no harm coming to human beings"?

I don't see why you would do that. The laws of robotics don't apply to humans, they apply to robots, so you're still only talking about robots here.
Sidewalker
Posts: 3,713
2/11/2013 11:26:18 AM
Posted: 3 years ago
At 2/11/2013 11:06:02 AM, drafterman wrote:
At 2/11/2013 7:46:33 AM, Sidewalker wrote:
At 2/10/2013 10:14:01 PM, drafterman wrote:
At 2/10/2013 8:22:49 PM, Sidewalker wrote:
At 2/10/2013 7:41:43 PM, drafterman wrote:
At 2/10/2013 7:26:59 PM, Sidewalker wrote:
At 2/10/2013 7:08:25 PM, muzebreak wrote:
At 2/10/2013 7:03:05 PM, Maikuru wrote:
At 2/10/2013 6:22:56 PM, drafterman wrote:
Heh, I guess I didn't actually explain how the removal of the inaction clause "backfired."

Well, the scenario posed in "Little Lost Robot" was that the robot could engage in a potentially harmful act against a human, knowing that it could prevent the act from actually causing any harm. For example, it could drop a large weight, knowing that it could catch it before the human would be harmed. Then, after dropping the weight, it could decide to allow the human to come to harm through inaction.

That seems like a reasoning stretch. Was its thinking along the lines of "I'm not killing this person if I shoot them; the bullet is"?

Nope.

It's an interesting logical sequence.

It starts with the robot assuming that if he dropped the weight he would stop it. He then drops the weight, but chooses not to stop it.

A robot can't choose anything, it can only follow an instruction set.

This allows him to violate the laws. It's an interesting logical loophole, which makes one think: can a robot lie to itself? Can a robot make itself believe that it will catch the weight when its plan is not to, or can this situation only come about as a crime of opportunity?

A robot can't make itself believe anything, it can only follow an instruction set.

Robots aren't conscious; they can't choose, believe, or lie to themselves. They are robots.

You've never read Asimov, have you?

Quite a bit actually, though not all 500 books or anything, and not the Robot Series. Were Asimov's robots conscious?

Yes. Hence all the references to psychologists, mental breakdowns, instability, choices, and every other reference you were objecting to.

OK, so the discussion is about conscious beings rather than robots and it's more of a law question than a morality question.

The robots are the conscious beings in question.


It can be restated, "For conscious beings with free will, would Asimov's three laws for robots (with the removal of the inaction clause) be adequate laws to result in no harm coming to human beings"?

I don't see why you would do that. The laws of robotics don't apply to humans, they apply to robots, so you're still only talking about robots here.

Well, not only talking about robots, but talking about robots that are conscious beings with free will...so I suppose restating it: "For conscious robots with free will..."

It's definitely a change in what the term "robots" means, so the qualifiers "conscious" and "free will" need to be there to avoid confusion.
"It is one of the commonest of mistakes to consider that the limit of our power of perception is also the limit of all there is to perceive." " C. W. Leadbeater
natoast
Posts: 204
2/11/2013 4:03:39 PM
Posted: 3 years ago
At 2/11/2013 3:53:29 PM, ConservativePolitico wrote:
I Robot is such a good book. And nothing like the movie.

It seems that really the First Law would cause most robots to go berserk, because if they were aware of any human in danger anywhere that they might be able to help, they would be compelled to try, making it impossible to get them to actually do anything you wanted.
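
natoast's worry can be stated in scheduling terms, as a rough sketch (the task queue and names are invented): with the inaction clause intact, any known human in danger becomes a First-Law obligation that outranks every Second-Law order, so ordinary work is starved whenever the danger list is non-empty.

# Rough sketch of the "compelled to try" problem (invented structure).
import heapq

FIRST_LAW, SECOND_LAW = 0, 1          # lower number = higher priority

def next_task(endangered_humans, orders):
    queue = []
    for human in endangered_humans:
        heapq.heappush(queue, (FIRST_LAW, "go rescue " + human))
    for order in orders:
        heapq.heappush(queue, (SECOND_LAW, order))
    return heapq.heappop(queue)[1] if queue else "idle"

# As long as the robot knows of anyone in danger, the order never runs.
print(next_task(["a stranger across town"], ["weld this seam"]))  # go rescue ...
print(next_task([], ["weld this seam"]))                          # weld this seam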
sadolite
Posts: 8,839
2/11/2013 7:16:33 PM
Posted: 3 years ago
A robot will only "act" as morally as the person or people who program it. Add moral relativism into the mix and all you'll most likely get is an efficient, emotionless killing machine. God save me from those who would do the programming, as I may have a different perspective on morality than they do. And we all know we can't have that; it's inefficient.
It's not your views that divide us, it's what you think my views should be that divides us.

If you think I will give up my rights and forsake social etiquette to make you "FEEL" better you are sadly mistaken

If liberal democrats would just stop shooting people gun violence would drop by 90%