Total Posts: 17 | Showing Posts: 1-17

Should AIs have Human Rights?

lannan13
Posts: 23,073
12/29/2014 4:06:02 PM
Posted: 1 year ago
What do you think? Should they, and if so, which rights?
-~-~-~-~-~-~-~-Lannan13'S SIGNATURE-~-~-~-~-~-~-~-

If the sky's the limit then why do we have footprints on the Moon? I'm shooting my aspirations for the stars.

"If you are going through hell, keep going." "Sir Winston Churchill

"No one can make you feel inferior without your consent." "Eleanor Roosevelt

Topics I want to debate. (http://tinyurl.com...)
-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
Range
Posts: 29
1/2/2015 8:46:22 PM
Posted: 1 year ago
At 12/29/2014 4:06:02 PM, lannan13 wrote:
What do you think? Should they, and if so, which rights?

Hah, I assume anyone who saw this just went, "Wow, what a stupid question, why the hell would they?" That's a valid point; however, it's quite boring.

For this reason, I believe they should have the right to freedom of speech.
Now I expect your response, lannan, after which I will continue with my justification.
Anything can be justified. You just need a solid framework and some duct tape.
lannan13
Posts: 23,073
Add as Friend
Challenge to a Debate
Send a Message
1/8/2015 5:26:23 AM
Posted: 1 year ago
At 1/2/2015 8:46:22 PM, Range wrote:
At 12/29/2014 4:06:02 PM, lannan13 wrote:
What do you think? Should they, and if so, which rights?

Hah, I assume anyone who saw this just went, "Wow, what a stupid question, why the hell would they?" That's a valid point; however, it's quite boring.

For this reason, I believe they should have the right to freedom of speech.
Now I expect your response, lannan, after which I will continue with my justification.

They're machines. They are programmed with what to say.
Range
Posts: 29
1/8/2015 3:08:58 PM
Posted: 1 year ago
They're programmed to a certain extent.
You give them mostly if-then types of reasoning.

If you observe this happening: the temperature reaching a low, then do this: turn up the heating. If you observe this happening: a hostile attacking a civilian area, then do this: automatically send a kill command to the gun.
If it turns out this was not a hostile but a civilian attacking what he thought were hostiles, then the code's premise was false, but its observations were true, making its killing of the civilian justified. The free action of this machine was the free speech of its code, and although people will mourn and call for the machine to be abolished, that will be the wrong call, and it will not be carried through.

If you observe this happening: civilians are protesting the government and killings are occurring, then do this: launch a protection sequence that kills the armed and hostile civilians in order to protect the officials.
The next day, panic arises around the country: protesters killed by government machine! From my point of view, this machine was practicing freedom of speech, the freedom to execute its own code, put there by humans who obviously have freedom of speech in the first place. It was at no fault in these situations and should therefore continue to exist.
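The "if you observe this, then do this" framing above can be sketched as a tiny rule-based agent. This is a minimal illustration only: the `Rule` and `decide` names and the example conditions and actions are assumptions made up for this sketch, not any real control system's code.

```python
# A minimal sketch of "if you observe X, then do Y" rule-based control.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # "if you observe this happening"
    action: str                        # "then do this"

def decide(rules: list[Rule], observation: dict) -> list[str]:
    """Return the actions of every rule whose condition matches the observation."""
    return [r.action for r in rules if r.condition(observation)]

rules = [
    Rule(lambda o: o.get("temperature", 20) < 15, "turn up the heating"),
    Rule(lambda o: o.get("hostile_attacking", False), "launch protection sequence"),
]

print(decide(rules, {"temperature": 10}))          # ['turn up the heating']
print(decide(rules, {"hostile_attacking": True}))  # ['launch protection sequence']
```

The point of the sketch is that the machine never chooses its rules; it only evaluates observations against conditions that humans wrote, which is exactly the "free speech of its code" framing above.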
Idealist
Posts: 2,520
1/14/2015 8:08:10 PM
Posted: 1 year ago
At 12/29/2014 4:06:02 PM, lannan13 wrote:
What do you think? Should they, and if so, which rights?

How would you define an A.I.? And what kind of rights are we talking about? If they are ever actually invented and given the right to vote, it would put a whole new spin on politics.

I don't think this is a question that can be addressed unless and until A.I. becomes real, and I don't think that happening would be a good thing. It wouldn't take long for A.I.s to become dominant, and the dominant "species" is usually aggressive and possessive about its position. Look at us humans: we think we own the world.
Andromeda_Z
Posts: 4,151
1/19/2015 9:30:33 PM
Posted: 1 year ago
This is actually the whole premise of a Star Trek: TNG episode. I wish I could remember which one so that I could link to it. Basically, it revolved around a character named Data, an android and the only one of his kind ever made. Someone was eventually ordered to take him apart and figure out how to make more. Data objected on the basis that it would be much the same as forcing his human coworkers to have their eyes replaced with superior cybernetic ones, and that it was only because he is not human that he was asked to take part in this dangerous procedure.

The case ended up in court, and the issue became whether Data was an individual (and therefore must be granted rights) or Starfleet property. The argument made is that creating AIs without giving them rights would be to create a race of slaves. It doesn't matter that we created them; we create our children and still give them rights. We would be denying an entire species basic rights.

Sentience consists of self-awareness and intelligence. If a being is sentient, as we are, it ought to have rights. To argue otherwise would be to permit slavery, and I doubt anyone wants to go down that road again.
Andromeda_Z
Posts: 4,151
1/19/2015 9:34:06 PM
Posted: 1 year ago
Ultimately this issue isn't about AIs. It's about how we interact with other beings. We deny carrots rights because carrots don't know jack sh!t about what's going on anyway. A carrot doesn't know it's a carrot, much less that a carrot is delicious. We give rights to other humans because humans are sentient. They know they exist and they understand what's going on. If we create an AI that has those same properties, we must also give it rights.
Paleophyte
Posts: 57
1/20/2015 6:36:14 PM
Posted: 1 year ago
At 12/29/2014 4:06:02 PM, lannan13 wrote:
What do you think? Should they, and if so, which rights?

No. Humans should have human rights. AIs should have AI rights. They're different entities with different needs and (likely) values.

We will most likely need to make a set of rights for both groups. Sentient rights? That will get really entertaining, because sentience isn't a simple binary, and arguments could be made for certain animals to be partially or completely included under most definitions.
R0b1Billion
Posts: 3,733
1/20/2015 8:40:59 PM
Posted: 1 year ago
At 1/19/2015 9:30:33 PM, Andromeda_Z wrote:
This is actually the whole premise of a Star Trek: TNG episode. I wish I could remember which so that I could link to it.

Well, you're in luck... Star Trek nerd here (flashes badge). The episode is called "The Measure of a Man," from Season 2. The idea was actually regurgitated in Star Trek: Voyager in Season 7 (like many episodes in that series) to be used for the holographic Doctor in the episode "Author, Author." Although the legal outcomes for Data and the Doctor were not the same...
Beliefs in a nutshell:
- The Ends never justify the Means.
- Objectivity is secondary to subjectivity.
- The War on Drugs is the worst policy in the U.S.
- Most people worship technology as a religion.
- Computers will never become sentient.
Range
Posts: 29
1/20/2015 8:44:25 PM
Posted: 1 year ago
Should AIs have freedom of speech/expression/stating whatever conclusions their code draws?
R0b1Billion
Posts: 3,733
1/20/2015 9:10:24 PM
Posted: 1 year ago
The term "AI" is intrinsically contradictory. Not only is there no evidence that intelligence will EVER be created by man; there's no evidence that we can even create the simplest possible life-form imaginable. We can't create life, consciousness, or intelligence. Computers have no ability, even theoretically, to accomplish any of this.

We all suppose that algorithms and the like will someday resemble intelligence so closely that the line will be blurred. This is preposterous. No matter how sophisticated a computer gets, it will never resemble life at all, because it lacks the spark that gives rise to consciousness, a spark we are completely impotent to describe, let alone CREATE. Furthermore, I think we're getting ahead of ourselves by even assuming computers are going to remotely resemble intelligence at any point. While they are getting more sophisticated, and I will agree they have a long way to go in terms of power and storage capacity (quantum computers are intriguing beyond the limits of my imagination), we can't just assume that this will all come together neatly to produce an intelligent-seeming machine.

The more sophisticated such a machine gets, the more complex it gets, and when complexity increases, so does the propensity for things to go wrong. In my 30 years I've seen things get much more sophisticated. My car now has computers in it... which is why I can't get the thing to start right now, lol! My next car, no joke, is going to be an older car with no damned computers to malfunction and leave me stranded even though the mechanical parts work fine. My computer is much more sophisticated as well, and like my car, I have much less ability to manipulate this generation than my earlier computers 25 years ago. When I was a kid, I had control over every directory and knew what each one was there for. Nowadays I have no hope of controlling it all; I can only point and click and hope some automation keeps things in order for me. Every new OS takes me farther away and creates new layers of complexity on top of the old ones.

By the time computers reach the sophistication you are describing, they will be so unstable, complicated, and difficult to maintain that it would take the entirety of the world's programmers just to support one unit. Such a unit would never get off the ground, plagued by viruses both maliciously created and accidental.
Smithereens
Posts: 5,512
1/22/2015 7:23:14 AM
Posted: 1 year ago
At 1/20/2015 6:36:14 PM, Paleophyte wrote:
At 12/29/2014 4:06:02 PM, lannan13 wrote:
What do you think? Should they, and if so, which rights?

No. Humans should have human rights. AIs should have AI rights. They're different entities with different needs and (likely) values.

What's the difference between an intelligence constructed by humans and an intelligence birthed by humans?

We will most likely need to make a set of rights for both groups. Sentient rights? That will get really entertaining, because sentience isn't a simple binary, and arguments could be made for certain animals to be partially or completely included under most definitions.
Music composition contest: http://www.debate.org...
Paleophyte
Posts: 57
1/22/2015 7:28:54 AM
Posted: 1 year ago
At 1/22/2015 7:23:14 AM, Smithereens wrote:
At 1/20/2015 6:36:14 PM, Paleophyte wrote:
At 12/29/2014 4:06:02 PM, lannan13 wrote:
What do you think? Should they, and if so, which rights?

No. Humans should have human rights. AIs should have AI rights. They're different entities with different needs and (likely) values.

What's the difference between an intelligence constructed by humans and an intelligence birthed by humans?

Intent. Human intelligence is an accident, the result of a long and undirected organic process. We have a lot of interesting baggage as a result.

Different evolutionary histories.

Completely different outlooks given that AI may well be more or less immortal.

That's just to start. >;)
R0b1Billion
Posts: 3,733
1/22/2015 7:50:59 AM
Posted: 1 year ago
At 1/22/2015 7:23:14 AM, Smithereens wrote:
At 1/20/2015 6:36:14 PM, Paleophyte wrote:
At 12/29/2014 4:06:02 PM, lannan13 wrote:
What do you think? Should they, and if so, which rights?

No. Humans should have human rights. AIs should have AI rights. They're different entities with different needs and (likely) values.

What's the difference between an intelligence constructed by humans and an intelligence birthed by humans?

One is fact, one is fiction.
Andromeda_Z
Posts: 4,151
1/22/2015 8:10:02 PM
Posted: 1 year ago
At 1/20/2015 8:40:59 PM, R0b1Billion wrote:
At 1/19/2015 9:30:33 PM, Andromeda_Z wrote:
This is actually the whole premise of a Star Trek: TNG episode. I wish I could remember which so that I could link to it.

Well, you're in luck... Star Trek nerd here (flashes badge). The episode is called "The Measure of a Man," from Season 2. The idea was actually regurgitated in Star Trek: Voyager in Season 7 (like many episodes in that series) to be used for the holographic Doctor in the episode "Author, Author." Although the legal outcomes for Data and the Doctor were not the same...

Thank you so much!
R0b1Billion
Posts: 3,733
1/23/2015 8:24:42 AM
Posted: 1 year ago
At 1/22/2015 8:10:02 PM, Andromeda_Z wrote:
At 1/20/2015 8:40:59 PM, R0b1Billion wrote:
At 1/19/2015 9:30:33 PM, Andromeda_Z wrote:
This is actually the whole premise of a Star Trek: TNG episode. I wish I could remember which so that I could link to it.

Well, you're in luck... Star Trek nerd here (flashes badge). The episode is called "The Measure of a Man," from Season 2. The idea was actually regurgitated in Star Trek: Voyager in Season 7 (like many episodes in that series) to be used for the holographic Doctor in the episode "Author, Author." Although the legal outcomes for Data and the Doctor were not the same...

Thank you so much!

No problem, I love Star Trek!