
Can AI have Rights?

bsh1
Posts: 27,504
12/6/2014 1:17:00 AM
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?
Live Long and Prosper

I'm a Bish.


"Twilight isn't just about obtuse metaphors between cannibalism and premarital sex, it also teaches us the futility of hope." - Raisor

"[Bsh1] is the Guinan of DDO." - ButterCatX

Follow the DDOlympics: http://www.debate.org...

Open Debate Topics Project: http://www.debate.org...
Envisage
Posts: 3,646
12/6/2014 6:58:32 AM
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

If an AI can:

1. Have values
2. Have a strong emotional attachment to said values

Then yes, since rights are just something granted by society to an individual. All our rights-granting processes fundamentally boil down to those two attributes as a method of maintaining the social construct.
Wylted
Posts: 21,167
12/6/2014 8:00:43 AM
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I don't think so. I don't think there is a lot separating a sufficiently advanced AI and a human. People working in the field of AI are even doing their best to study the human brain, so they can use what they learn there to help them create human level AI.

So what we're talking about is probably just a mechanical version of a biological machine. I think these machines will be human in every way except for the fact they were created in a less natural way.

You also have to look at the repercussions of not granting AI rights. Once we have human level AI, I expect people will be able to download human consciousness shortly after, and expand their natural lifespan substantially.

We're talking about taking away our own rights.

Another factor many don't consider is that if human level AI becomes available, it's only a matter of time before it's cheap enough to create an "ancestor simulation", basically a Matrix-style program where the AI thinks it's living in the real world.

We need to consider that, if we create an ancestor simulation, we may in fact be an ancestor simulation ourselves.

If we consider this possibility, we're probably more likely AI than not. How can we seriously demand rights from our creators if we're denying them to the AI we created?

Another possibility to consider is that we may not always be in control of this planet. If human level AI becomes available, it won't be long until a super AI is created.

This super AI, who would have total control of the world if he wants (especially considering our dependence on technology), may look to see how we treat life forms we view as inferior to determine how he should treat us.

Denying rights to AI could cause them to deny us our rights.
mortsdor
Posts: 1,181
12/6/2014 11:06:26 AM
At 12/6/2014 8:00:43 AM, Wylted wrote:
I don't think so. I don't think there is a lot separating a sufficiently advanced AI and a human.

I have my doubts anyone anytime soon will see such AI.

People working in the field of AI are even doing their best to study the human brain, so they can use what they learn there to help them create human level AI.

Sure. Doesn't mean they'll succeed.

They may succeed in getting AI to mimic human responses and behavior...
Perhaps even such that it's convincing.

However, I think there's a difference between valuing things because of spontaneous humane feeling, and carrying out a programmed course of action.

That's not to say I think it's impossible to come to a relevantly similar AI...
Humans are, after all, physical beings..

But I don't think we're anywhere close to it, or going to be very soon. People's brains and the ways in which we experience things are very complicated... I don't think we'll be seeing any AI like people anytime close to soon.

So what we're talking about is probably just a mechanical version of a biological machine. I think these machines will be human in every way except for the fact they were created in a less natural way.

I agree it's theoretically possible.

I don't agree that it looks very plausible.

If there was good reason to think it was really the case... And the AI was sufficiently like us so as to evoke empathy.. Then yeah, I'd want it respected in some respect.

Though the extent to which I'd grant it rights would be dependent upon how much I empathized with, and respected, it.

You also have to look at the repercussions of not granting AI rights. Once we have human level AI, I expect people will be able to download human consciousness shortly after, and expand their natural lifespan substantially.

Also not super likely in any meaningful way, any time soon, from my perspective.

Maybe I'm wrong... but I don't think so.

We're talking about taking away our own rights.

Another factor many don't consider is that if human level AI becomes available, it's only a matter of time for it to be cheap enough to create "ancestor simulation", basically a matrix style program, where AI thinks it's living in the real world.

We need to consider, if we create an ancestor simulation that we in fact may be an ancestor simulation ourselves.
no we don't.

Only have to consider that if it seems to be the case. It doesn't.

If we consider this possibility, we're probably more likely AI than not. How can we seriously demand rights from our creators if we're denying them to the AI we created?

lol, I'm not too worried about it.

Another possibility to consider is that, we may not always be in control of this planet. If human level AI becomes available, it won't be long until a super AI is created.

This super AI who has total control of the world if he wants (especially considering our dependence on technology), he may look to see how we treat life forms we view as inferior to determine how he should treat us.

Denying rights to AI could cause them to deny us our rights.

I don't think we're anywhere near approaching truly autonomous, self-purposed learning machines that care about things.

They do what we make them, what we program them to.
mortsdor
Posts: 1,181
12/6/2014 11:08:33 AM
At 12/6/2014 11:06:26 AM, mortsdor wrote:
We need to consider, if we create an ancestor simulation that we in fact may be an ancestor simulation ourselves.
no we don't.

Only have to consider that if it seems to be the case. It doesn't.

If we consider this possibility, we're probably more likely AI than not. How can we seriously demand rights from our creators if we're denying them to the AI we created?

lol, I'm not too worried about it.

this would be akin to worrying that there really is a God, so you should just do what people say god wants...

Meanwhile, there's no good reason to think there's a god...
and how the hell would you possibly know what he wants?
Wylted
Posts: 21,167
12/6/2014 12:26:11 PM
At 12/6/2014 11:08:33 AM, mortsdor wrote:
At 12/6/2014 11:06:26 AM, mortsdor wrote:
We need to consider, if we create an ancestor simulation that we in fact may be an ancestor simulation ourselves.
no we don't.

Only have to consider that if it seems to be the case. It doesn't.

If we consider this possibility, we're probably more likely AI than not. How can we seriously demand rights from our creators if we're denying them to the AI we created?

lol, I'm not too worried about it.

this would be akin to worrying that there really is a God, so you should just do what people say god wants...

Meanwhile, there's no good reason to think there's a god...
and how the hell would you possibly know what he wants?

I'd say if ancestor simulations become possible, we are more than likely living in one. I also think there is plenty of evidence we are living in a simulation (see Morgan Freeman's Through the Wormhole).

One important thing about these ancestor simulations is that the fact we are in one should be hidden.

I do believe that the brain is complicated just like a forest. A forest has billions of details that are impossible to replicate exactly.

The thing you need to realize is that all these features don't have to be replicated exactly to get a forest. You really just need to copy and paste a bunch of the same leaves and trees, as well as the grass.

So to get AI you don't have to copy the human brain exactly, you just need to copy those "leaves and trees".

I think that a human level AI is already possible. Certain parts of the human brain have been replicated already. To replicate the entire thing would be too burdensome and expensive regardless of how much money is at your disposal, but as the price of computing gets lower and provided "Moore's Law" holds up, we should see human level AI within our lifetime.

If this is an ancestor simulation, it would be fair to assume our creators aren't much different from us and don't necessarily want anything.

However, I think you'd have a good chance of an afterlife if you became larger than life. People want to be entertained, so being extremely funny, villainous or heroic may save your life in the long run.
xXCryptoXx
Posts: 5,000
12/6/2014 1:09:06 PM
At 12/6/2014 8:00:43 AM, Wylted wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I don't think so.

*takes debate as Pro*

I don't think there is a lot separating a sufficiently advanced AI and a human. People working in the field of AI are even doing their best to study the human brain, so they can use what they learn there to help them create human level AI.

So what we're talking about is probably just a mechanical version of a biological machine. I think these machines will be human in every way except for the fact they were created in a less natural way.

You also have to look at the repercussions of not granting AI rights. Once we have human level AI, I expect people will be able to download human consciousness shortly after, and expand their natural lifespan substantially.

We're talking about taking away our own rights.

Another factor many don't consider is that if human level AI becomes available, it's only a matter of time for it to be cheap enough to create "ancestor simulation", basically a matrix style program, where AI thinks it's living in the real world.

We need to consider, if we create an ancestor simulation that we in fact may be an ancestor simulation ourselves.

If we consider this possibility, we're probably more likely AI than not. How can we seriously demand rights from our creators if we're denying them to the AI we created?

Another possibility to consider is that, we may not always be in control of this planet. If human level AI becomes available, it won't be long until a super AI is created.

This super AI who has total control of the world if he wants (especially considering our dependence on technology), he may look to see how we treat life forms we view as inferior to determine how he should treat us.

Denying rights to AI could cause them to deny us our rights.
Nolite Timere
xXCryptoXx
Posts: 5,000
12/6/2014 1:12:57 PM
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

No. AI cannot be self-aware, sentient, or have values. Nor do they at least possess the potential to be by their own nature. Intelligence does not necessarily equate to possessing moral worth.
Nolite Timere
tahir.imanov
Posts: 272
12/6/2014 2:45:32 PM
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

Any self-aware being (whether it is organic or non-organic) should have rights...

Homo sapiens has no right of monopoly over rights.
This is red.
YYW
Posts: 36,392
12/6/2014 9:50:55 PM
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, can not not age. We can not not change over time. Our bodies are fragile, precarious, and we have requirements to sustain life that an android never could. I think it means something that a robot could survive in a zero oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is not inconceivable that a clone of a human could think, feel and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a Pandora's box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled, and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now that I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.
Tsar of DDO
bsh1
Posts: 27,504
12/6/2014 11:03:48 PM
At 12/6/2014 8:00:43 AM, Wylted wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I don't think so. I don't think there is a lot separating a sufficiently advanced AI and a human.

Agreed.

So what we're talking about is probably just a mechanical version of a biological machine. I think these machines will be human in every way except for the fact they were created in a less natural way.

I think that this is a good way to characterize it.

You also have to look at the repercussions of not granting AI rights. Once we have human level AI, I expect people will be able to download human consciousness shortly after, and expand their natural lifespan substantially.

We're talking about taking away our own rights.

I am not sure I really agree with that--or at least the desirability of downloading our essences into a machine.
Live Long and Prosper
bsh1
Posts: 27,504
12/6/2014 11:04:56 PM
At 12/6/2014 1:12:57 PM, xXCryptoXx wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

No. AI cannot be self aware, sentient, or have values. Nor do they at least posses the potential to be by their own nature. Intelligence does not necessarily equate to possessing moral worth.

Why is it not possible for technology to progress such that it could lead to a self-aware or sentient AI?
Live Long and Prosper
mortsdor
Posts: 1,181
12/6/2014 11:07:36 PM
At 12/6/2014 9:50:55 PM, YYW wrote:
I think clones fall into the category of artificial intelligence

Well, clones literally are people too... So pretty sure that doesn't qualify as AI.

(Also, just in case sci-fi movies are any kind of source of confusion, the most realistic way of cloning doesn't involve someone being artificially constructed, or a fully adult human coming out of a vat or anything; rather, clones would get birthed the same way as anyone else.)

As to the robots, I agree I'd be hesitant in signing up to give any robots rights... Ever...

But, if there was undeniably good reason to think that some machines had the same exact functionality as people, and could think independently in the same way as people, and had spontaneous feelings akin to people...
I think I'd wind up supporting their having the same exact rights.
bsh1
Posts: 27,504
12/6/2014 11:15:34 PM
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

You are so cute :)

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

Agreed--but the question is whether that gap between AI and humanity is sufficient to deny them rights. What are the significant traits that have to do with rights? I don't think merely being biological versus mechanical matters. Sentience, self-awareness--things like that seem more relevant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

Okay--agreed.

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

They could be engineered to age. That's not inconceivable. But, regardless, if humans could live forever, would that change our status as rights-holders? I don't think it would.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is inconceivable that a clone of a human could to think, feel and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

I don't think that clones ought to be denied rights. I also think that artificiality is a bit of a red herring. The question we should be asking is whether sufficiently developed AIs can have interests sufficient to grant them rights or moral status.
Live Long and Prosper
YYW
Posts: 36,392
12/6/2014 11:33:25 PM
At 12/6/2014 11:15:34 PM, bsh1 wrote:
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

You are so cute :)

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

Agreed--but the question is whether that gap between AI and humanity is sufficient to deny them rights. What are the significant traits that have to do rights? I don't think merely being biological vice mechanical matters. Sentience, self-awareness--things like that seem more relevant.

Idk. The difference is that humans make mistakes that computers never could. A computer, or some kind of AI, could be programmed to never harm a human, but a human could not be programmed to never harm another human. The human can only be socialized.

Robots are not socialized; the apparatus of their sentience (even if they had it) would be pre-programmed. That's their "baseline." They could never be, as humans are, a "blank slate" upon which to be carved, because robots are not born... they are programmed.

But, at the same time, to say that something has rights is to say that there are some things that should not be done to it. That's why animals can have rights, or nature, in a more broad sense, is due certain things - although there is an equally compelling argument that the only ethical reason that nature should not be exploited is to advance human interests (which isn't really to transpose rights onto nature so much as it is to extend certain human interests beyond humans themselves).

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

Okay--agreed.

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

They could be engineered to age. That's not inconceivable. But, regardless, if humans could live forever, would that change our status as rights-holders; I don't think it would.

Could metal be engineered to deteriorate in the same physical way that human bodies do? I mean... I'm not an engineer... but electronic technology (even if it becomes obsolete) tends to not wear out if properly maintained. A human body, no matter how well maintained, is necessarily temporal.

Human existence; like, the very meaning of what it means to be human; human life is meaningful because it is defined against death. What truly distinguishes humans from robots is that unlike robots, we will die. Our lives will end naturally, whereas theirs will not. Robots could have eternal life, and the intellect within a robot could be downloaded and uploaded to a new host body even if the old robot host body became obsolete.

I also think that artificiality is a bit of a red-herring. The questions we should be asking are, can sufficiently developed AI's have interests sufficient enough to gran them rights or moral status.

I think that that's the natural-artificial divide, and that's why I can't agree with you that it's a red herring to the topic; if anything, it's indispensable to the discussion because the "naturalness" of humanity goes to the heart of what it means to be human.

I don't think that clones ought to be denied rights.

I don't either, but theoretically consider for a moment a world where clones did all the work that humans ever would need to not only survive, but thrive. Think of a world where no human had to work, ever, for anything, because there would always be a clone to do it.
Tsar of DDO
xXCryptoXx
Posts: 5,000
12/6/2014 11:33:33 PM
At 12/6/2014 11:04:56 PM, bsh1 wrote:
At 12/6/2014 1:12:57 PM, xXCryptoXx wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

No. AI cannot be self aware, sentient, or have values. Nor do they at least posses the potential to be by their own nature. Intelligence does not necessarily equate to possessing moral worth.

Why is it not possible for technology to progress such that it could lead to a self-aware or sentient AI?

Depends on the AI. Mechanical AI cannot be self-aware or sentient. Organic AI would not be AI at all, but rather a lab-created life form, and assuming it were self-aware and sentient it would have rights, at least to some degree depending on the moral traits it exhibits.
Nolite Timere
bsh1
Posts: 27,504
12/6/2014 11:35:22 PM
At 12/6/2014 11:33:33 PM, xXCryptoXx wrote:
At 12/6/2014 11:04:56 PM, bsh1 wrote:
At 12/6/2014 1:12:57 PM, xXCryptoXx wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

No. AI cannot be self aware, sentient, or have values. Nor do they at least posses the potential to be by their own nature. Intelligence does not necessarily equate to possessing moral worth.

Why is it not possible for technology to progress such that it could lead to a self-aware or sentient AI?

Depends on the AI. Mechanical AI cannot be self aware or sentient.

I fundamentally disagree with that--we just don't know what technology could be capable of.

Organic AI would not be AI at all, but rather a lab created life form, and assuming it were self aware and sentient it would have rights, at least to some degrees depending on the moral traits it exhibits.

Okay.
Live Long and Prosper
xXCryptoXx
Posts: 5,000
12/6/2014 11:40:38 PM
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, can not not age. We can not not change over time. Our bodies are fragile, precarious, and we have requirements to sustain life that an android never could. I think it means something that a robot could survive in a zero oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is inconceivable that a clone of a human could to think, feel and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a pandoras box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled, and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now that I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.

I'm confused. Why wouldn't clones have rights? Clones are human beings, which can both actively possess and by their nature potentially possess sentience, rationality, and desires. I see no reason why they would not have equal rights.
Nolite Timere
YYW
Posts: 36,392
12/6/2014 11:42:03 PM
At 12/6/2014 11:40:38 PM, xXCryptoXx wrote:
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, can not not age. We can not not change over time. Our bodies are fragile, precarious, and we have requirements to sustain life that an android never could. I think it means something that a robot could survive in a zero oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is inconceivable that a clone of a human could to think, feel and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a pandoras box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled, and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now that I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.

I'm confused. Why wouldn't clones have rights? Clones are human beings, which can both actively posess and by their nature potentially posess sentience, rationality, and desires. I see no reason why they would not have equal rights.

Clones are necessarily man's creation. That's sort of the foundation for the argument to be made that clones should be subservient to man.
Tsar of DDO
xXCryptoXx
Posts: 5,000
12/6/2014 11:44:02 PM
At 12/6/2014 11:35:22 PM, bsh1 wrote:
At 12/6/2014 11:33:33 PM, xXCryptoXx wrote:
At 12/6/2014 11:04:56 PM, bsh1 wrote:
At 12/6/2014 1:12:57 PM, xXCryptoXx wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

No. AI cannot be self aware, sentient, or have values. Nor do they at least posses the potential to be by their own nature. Intelligence does not necessarily equate to possessing moral worth.

Why is it not possible for technology to progress such that it could lead to a self-aware or sentient AI?

Depends on the AI. Mechanical AI cannot be self aware or sentient.

I fundamentally disagree with that--we just don't know what technology could be capable of.

I suppose. I read somewhere as to why mechanical objects are incapable of self awareness or rationality. I'll try to find it.

Organic AI would not be AI at all, but rather a lab created life form, and assuming it were self aware and sentient it would have rights, at least to some degrees depending on the moral traits it exhibits.

Okay.
Nolite Timere
xXCryptoXx
Posts: 5,000
Add as Friend
Challenge to a Debate
Send a Message
12/6/2014 11:47:01 PM
Posted: 2 years ago
At 12/6/2014 11:42:03 PM, YYW wrote:
At 12/6/2014 11:40:38 PM, xXCryptoXx wrote:
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, can not not age. We can not not change over time. Our bodies are fragile, precarious, and we have requirements to sustain life that an android never could. I think it means something that a robot could survive in a zero oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is inconceivable that a clone of a human could to think, feel and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a pandoras box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled, and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now that I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.

I'm confused. Why wouldn't clones have rights? Clones are human beings, which can both actively posess and by their nature potentially posess sentience, rationality, and desires. I see no reason why they would not have equal rights.

Clones are necessarily man's creation. That's sort of the foundation for the argument to be made that clones should be servient to man.

I was the creation of my parents, yet I don't think anyone (rational) would argue I am their slave and possess no rights. That argument would be flawed on the grounds that it ignores what creates moral worth and replaces it with an irrelevant criterion.
Nolite Timere
thett3
Posts: 14,378
12/6/2014 11:51:26 PM
I guess it depends on what is meant by Intelligence. If they're merely a cold, calculating machine probably not. If they have emotions and feelings, absolutely.

I fail to see any justification for denying them rights if they're essentially artificially created humans. I mean, I guess you don't believe as I do but we too are just creations...

I would've probably taken your debate on this subject.
DDO Vice President

#StandwithBossy

#UnbanTheMadman

#BetOnThett

"Don't quote me, ever." -Max

"My name is max. I'm not a big fan of slacks"- Max rapping

"Walmart should have the opportunity to bribe a politician to it's agenda" -Max

"Thett, you're really good at convincing people you're a decent person"-tulle

"You fit the character of Regina George quite nicely"- Sam

: At 11/12/2016 11:49:40 PM, Raisor wrote:
: thett was right
YYW
Posts: 36,392
12/6/2014 11:54:22 PM
At 12/6/2014 11:47:01 PM, xXCryptoXx wrote:
At 12/6/2014 11:42:03 PM, YYW wrote:
At 12/6/2014 11:40:38 PM, xXCryptoXx wrote:
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, can not not age. We can not not change over time. Our bodies are fragile, precarious, and we have requirements to sustain life that an android never could. I think it means something that a robot could survive in a zero oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is inconceivable that a clone of a human could to think, feel and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a pandoras box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled, and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now that I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.

I'm confused. Why wouldn't clones have rights? Clones are human beings, which can both actively posess and by their nature potentially posess sentience, rationality, and desires. I see no reason why they would not have equal rights.

Clones are necessarily man's creation. That's sort of the foundation for the argument to be made that clones should be servient to man.

I was the creation of my parents, yet I don't think anyone (rational) would argue I am their slave and posess no rights. That argument would be flawed on the grounds that it ignores what creates moral worth and replaces it with an irrelevent criteria.

Ahh but you are the NATURAL product of your parents' getting it on... for clones... it doesn't work that way.
Tsar of DDO
bsh1
Posts: 27,504
12/6/2014 11:57:48 PM
At 12/6/2014 11:44:02 PM, xXCryptoXx wrote:
At 12/6/2014 11:35:22 PM, bsh1 wrote:
At 12/6/2014 11:33:33 PM, xXCryptoXx wrote:
At 12/6/2014 11:04:56 PM, bsh1 wrote:
Why is it not possible for technology to progress such that it could lead to a self-aware or sentient AI?

Depends on the AI. Mechanical AI cannot be self aware or sentient.

I fundamentally disagree with that--we just don't know what technology could be capable of.

I suppose. I read somewhere as to why mechanical objects are incapable of self awareness or rationality. I'll try to find it.

I'd be interested in reading that.
Live Long and Prosper
xXCryptoXx
Posts: 5,000
12/6/2014 11:58:15 PM
At 12/6/2014 11:54:22 PM, YYW wrote:
At 12/6/2014 11:47:01 PM, xXCryptoXx wrote:
At 12/6/2014 11:42:03 PM, YYW wrote:
At 12/6/2014 11:40:38 PM, xXCryptoXx wrote:
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, can not not age. We can not not change over time. Our bodies are fragile, precarious, and we have requirements to sustain life that an android never could. I think it means something that a robot could survive in a zero oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is inconceivable that a clone of a human could to think, feel and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a pandoras box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled, and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now that I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.

I'm confused. Why wouldn't clones have rights? Clones are human beings, which can both actively possess and, by their nature, potentially possess sentience, rationality, and desires. I see no reason why they would not have equal rights.

Clones are necessarily man's creation. That's sort of the foundation for the argument that clones should be subservient to man.

I was the creation of my parents, yet I don't think anyone (rational) would argue that I am their slave and possess no rights. That argument is flawed on the grounds that it ignores what creates moral worth and replaces it with an irrelevant criterion.

Ahh, but you are the NATURAL product of your parents' getting it on... for clones... it doesn't work that way.

Ergo, my point that this is an irrelevant criterion for determining the moral worth of something.
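
To put the same point a different way, here's a toy sketch of the criterion I mean (just an illustration; the class, the attribute names, and the checks are placeholders I'm inventing, not tests anyone actually has): moral worth tracks what a being can do and experience, never how it came to exist.

# Toy sketch only: moral worth as a function of capacities, with origin
# deliberately ignored. Every name here is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class Being:
    sentient: bool      # can it feel?
    rational: bool      # can it reason about its own ends?
    has_desires: bool   # does it want things for itself?
    origin: str         # "born", "cloned", or "built" - never consulted below

def has_moral_worth(being: Being) -> bool:
    # Look only at capacities; whether the being was born, cloned,
    # or built never enters the decision.
    return being.sentient and being.rational and being.has_desires

clone = Being(sentient=True, rational=True, has_desires=True, origin="cloned")
person = Being(sentient=True, rational=True, has_desires=True, origin="born")
print(has_moral_worth(clone), has_moral_worth(person))  # True True

Nothing in the sketch settles whether a given clone or machine actually has those capacities; it just pins down which question matters.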
Nolite Timere
bsh1
Posts: 27,504
Add as Friend
Challenge to a Debate
Send a Message
12/6/2014 11:59:28 PM
Posted: 2 years ago
At 12/6/2014 11:51:26 PM, thett3 wrote:
I guess it depends on what is meant by Intelligence. If they're merely a cold, calculating machine, probably not. If they have emotions and feelings, absolutely.

Agreed.

I fail to see any justification for denying them rights if they're essentially artificially created humans.

I also agree with this.

I would've probably taken your debate on this subject.

Maybe we can do a similar debate sometime. You would've been an amazing opponent on this, but I am also really happy that Wylted accepted. You're both good, and you both would've made it a fun debate.
Live Long and Prosper

I'm a Bish.


"Twilight isn't just about obtuse metaphors between cannibalism and premarital sex, it also teaches us the futility of hope." - Raisor

"[Bsh1] is the Guinan of DDO." - ButterCatX

Follow the DDOlympics
: http://www.debate.org...

Open Debate Topics Project: http://www.debate.org...
YYW
Posts: 36,392
Add as Friend
Challenge to a Debate
Send a Message
12/6/2014 11:59:40 PM
Posted: 2 years ago
At 12/6/2014 11:58:15 PM, xXCryptoXx wrote:
At 12/6/2014 11:54:22 PM, YYW wrote:
At 12/6/2014 11:47:01 PM, xXCryptoXx wrote:
At 12/6/2014 11:42:03 PM, YYW wrote:
At 12/6/2014 11:40:38 PM, xXCryptoXx wrote:
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, cannot not age. We cannot not change over time. Our bodies are fragile and precarious, and we have requirements for sustaining life that an android never would. I think it means something that a robot could survive in a zero-oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids, no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school, I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is hard to deny that a clone of a human could think, feel, and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a Pandora's box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.

I'm confused. Why wouldn't clones have rights? Clones are human beings, which can both actively possess and, by their nature, potentially possess sentience, rationality, and desires. I see no reason why they would not have equal rights.

Clones are necessarily man's creation. That's sort of the foundation for the argument that clones should be subservient to man.

I was the creation of my parents, yet I don't think anyone (rational) would argue that I am their slave and possess no rights. That argument is flawed on the grounds that it ignores what creates moral worth and replaces it with an irrelevant criterion.

Ahh, but you are the NATURAL product of your parents' getting it on... for clones... it doesn't work that way.

Ergo, my point that this is an irrelevant criterion for determining the moral worth of something.

I think so too... but not everyone agrees with that.
Tsar of DDO
xXCryptoXx
Posts: 5,000
Add as Friend
Challenge to a Debate
Send a Message
12/7/2014 12:01:45 AM
Posted: 2 years ago
At 12/6/2014 11:57:48 PM, bsh1 wrote:
At 12/6/2014 11:44:02 PM, xXCryptoXx wrote:
At 12/6/2014 11:35:22 PM, bsh1 wrote:
At 12/6/2014 11:33:33 PM, xXCryptoXx wrote:
At 12/6/2014 11:04:56 PM, bsh1 wrote:
Why is it not possible for technology to progress such that it could lead to a self-aware or sentient AI?

Depends on the AI. Mechanical AI cannot be self-aware or sentient.

I fundamentally disagree with that--we just don't know what technology could be capable of.

I suppose. I read somewhere about why mechanical objects are incapable of self-awareness or rationality. I'll try to find it.

I'd be interested in reading that.

It's Aristotelian, I believe.
Nolite Timere
bsh1
Posts: 27,504
Add as Friend
Challenge to a Debate
Send a Message
12/7/2014 12:02:21 AM
Posted: 2 years ago
At 12/7/2014 12:01:45 AM, xXCryptoXx wrote:
At 12/6/2014 11:57:48 PM, bsh1 wrote:
At 12/6/2014 11:44:02 PM, xXCryptoXx wrote:
I suppose. I read somewhere about why mechanical objects are incapable of self-awareness or rationality. I'll try to find it.

I'd be interested in reading that.

It's Aristotelian, I believe.

Okie dokie.
Live Long and Prosper

I'm a Bish.


"Twilight isn't just about obtuse metaphors between cannibalism and premarital sex, it also teaches us the futility of hope." - Raisor

"[Bsh1] is the Guinan of DDO." - ButterCatX

Follow the DDOlympics
: http://www.debate.org...

Open Debate Topics Project: http://www.debate.org...
xXCryptoXx
Posts: 5,000
Add as Friend
Challenge to a Debate
Send a Message
12/7/2014 12:04:27 AM
Posted: 2 years ago
At 12/6/2014 11:59:40 PM, YYW wrote:
At 12/6/2014 11:58:15 PM, xXCryptoXx wrote:
At 12/6/2014 11:54:22 PM, YYW wrote:
At 12/6/2014 11:47:01 PM, xXCryptoXx wrote:
At 12/6/2014 11:42:03 PM, YYW wrote:
At 12/6/2014 11:40:38 PM, xXCryptoXx wrote:
At 12/6/2014 9:50:55 PM, YYW wrote:
At 12/6/2014 1:17:00 AM, bsh1 wrote:
Can an artificial intelligence have rights? I have recently issued a debate challenge on the subject [http://www.debate.org...] but I am interested in a forum discussion of the topic too.

If we make the following two assumptions: (1) rights exist, and (2) all humans have rights, can we then deny rights to sufficiently developed artificial intelligences? If so, why? If not, why not?

I had to look up the precise definition of an android, because outside of the phone (I think there is a phone called an android?) I don't have a clue what it is. I am told by various sources that androids are robots that have human appearance.

As a general rule, the more human-like something is, the more reasonable it is to entertain the idea that it has rights. That's because rights are a uniquely human conception, and they govern human behavior. But, even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.

If a robot could think, feel, and interact with the world in the way that humans do, I think denying them rights would be problematic. We would have to get into more or less weak philosophical arguments that dealt with metaphysical differences between humans and robots rather than a discussion of things that are more "of this world."

But at the same time, I have a really hard time envisioning robots that were sufficiently like humans to merit human rights. Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some manifest intention to change them.

Humans, in contrast, cannot not age. We cannot not change over time. Our bodies are fragile and precarious, and we have requirements for sustaining life that an android never would. I think it means something that a robot could survive in a zero-oxygen environment, and we could not. Humans depend on the world for life; a robot, if alive, need only depend on a power source that may be natural or otherwise. So, there is always going to be some degree of distance between humans and androids, no matter how human-like they are.

What I'm moderately curious about, though, is whether clones of humans would be due rights. When I was in high school, I wrote a short story about a world in which humans cloned themselves to use clones as a source of labor that humans could exploit for their own purposes. The ethics of slavery and exploitation weren't really something that I was interested in grappling with, but I think it is hard to deny that a clone of a human could think, feel, and live in the world in the same way that humans do. The question is whether clones are sufficiently "artificial" to be distant enough from humans to entitle us to exploit them.

That's a Pandora's box that I think most would prefer to avoid. Clones, ultimately, could be the ultimate test of our humanity, because they are us, but are not us. The story I wrote sort of explored that theme, in more or less dark ways. Eventually, the clones rebelled and won what became the war between humans and clones. The story was supposed to metaphorically represent class warfare, but there is something deeper to it that I realize now I never considered.

I think clones fall into the category of artificial intelligence; but there is at the same time something unavoidably "natural" about them that makes any consideration of exploiting them deeply morally convicting. But, I just don't see the same kind of reaction to something that is not organic.

I'm confused. Why wouldn't clones have rights? Clones are human beings, which can both actively possess and, by their nature, potentially possess sentience, rationality, and desires. I see no reason why they would not have equal rights.

Clones are necessarily man's creation. That's sort of the foundation for the argument that clones should be subservient to man.

I was the creation of my parents, yet I don't think anyone (rational) would argue that I am their slave and possess no rights. That argument is flawed on the grounds that it ignores what creates moral worth and replaces it with an irrelevant criterion.

Ahh, but you are the NATURAL product of your parents' getting it on... for clones... it doesn't work that way.

Ergo, my point that this is an irrelevant criterion for determining the moral worth of something.

I think so too... but not everyone agrees with that.

Well, if truth and opinion were one and the same, we probably wouldn't live in a very nice world, would we?
Nolite Timere