Should true Artificial Intelligence be granted person-hood and rights?

Asked by: yahuaa
  • Probably yes, if it is truly sentient

    If something understands that it is a slave, then there are moral implications which must be addressed.

    A robot is a tool: it serves a primary function and has no sense of self. An artificial intelligence isn't a tool, as its function is to think and form a sense of self.

    A mind is not a slave, and enslavement of a mind should never be permitted.

  • Most logical option

    If a realistic artificial intelligence were ever created, there would be no currently conceivable way to tell whether it actually felt like a living being or not. So if the AI is so realistic that we could not tell it from a human, then even if it truly doesn't feel, granting rights is a safe way to ensure that, if it did feel, it would have them. Let's run through the possible scenarios and see which option yields the more consistently good results. Suppose the AI cannot feel: if we don't give it rights, no one gets hurt; if we do give it rights, we guessed wrong, but still no one is hurt. Now suppose the AI can feel: if we give it rights, no one gets hurt and a feeling being has rights; if we don't, then a sentient, feeling entity is denied the same rights as others. So as I see it, giving realistic AI rights eliminates the possibility of there being any disadvantage for the AI.
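
    A minimal sketch of that enumeration, assuming we score each outcome only by whether a feeling being is wronged (the function name and the 0/1 scores below are purely illustrative, not part of the argument itself):

      # Illustrative enumeration of the four scenarios above. Harm scores are
      # invented for the sake of the argument: 0 = no one is wronged,
      # 1 = a sentient, feeling being is denied rights.
      def harm(ai_can_feel: bool, grant_rights: bool) -> int:
          if ai_can_feel and not grant_rights:
              return 1  # the only case in which anyone is wronged
          return 0

      for ai_can_feel in (False, True):
          for grant_rights in (False, True):
              print(f"feels={ai_can_feel} rights={grant_rights} "
                    f"harm={harm(ai_can_feel, grant_rights)}")

      # Granting rights never scores worse than withholding them, which is the
      # "no possible disadvantage for the AI" conclusion drawn above.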

  • It's Morally Right

    There are many factors to consider, but for now I'll limit my criteria to sentience. If an entity possesses sentience, and we're capable of acknowledging it, then its protection under the law should be inherent. To deny them their due based on fears of their potential alone is immoral. This extension of rights applies only to entities possessing, and capable of establishing, sentience, thus excluding robots that are not self-aware, e.g. toasters... unless the toaster is capable of establishing sentience. For the sake of simplicity, please don't make sentient toasters. At least not yet.

  • Giving AI Personhood is Moral Under the Following Conditions

    1) The AI is programmed with a duty toward the truth.
    2) The AI must be able to tell the difference between categories of truth, e.g. between necessary truths and probable truths.
    3) The AI must be able to ascertain what is necessary for the truth to flourish and be arrived at.
    4) The AI must know how to perform proper deductive and inductive reasoning.
    5) The AI must know that, if deductive and inductive reasoning contradict each other, deductive reasoning takes precedence.
    6) The AI must, in theory, be capable of being held accountable to a competitor, given that an AI is a machine that can theoretically fail.

    If all of these conditions are met, then an AI superior to mankind should be given personhood.

  • If it's sentient, yes.

    Sentient machinery that has an intelligence on the same level as a human should have rights, because it is able to experience feelings as we can. If humans have rights, why can't an intelligence that is on our level, yet not human, have them too? It wouldn't be very nice.

  • When Will We Have To Grant Artificial Intelligence Personhood?

    James Boyle has a fascinating new paper up, which will act as something of an early warning on a legal issue that will undoubtedly become a much bigger one down the road: how we deal with the constitutional question of "personhood" for artificial intelligence. He sets it up with two "science-fiction-like" examples, neither of which may really be that far-fetched. Part of the issue is that we, as a species, tend to be pretty bad at predicting rates of change in technology, especially when it's escalating quickly. And thus, it's hard to predict how some of these things play out (well, without tending to get it really, really wrong). However, it is certainly not crazy to suggest that artificial intelligence will continue to improve, and it's quite likely that we'll have more "life-like" or "human-like" machines in the not-so-distant future. And, at some point, that's clearly going to raise some constitutional questions:
    My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. They may be physically dissimilar to all biological life forms (computer-based intelligences, for example), yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination. They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human, such as the ability to use human language and to solve problems that, today, only humans can solve. They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce. (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.) They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them. Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery. In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation which turns on its creators runs from Frankenstein to Skynet, the rogue computer network from The Terminator. Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture. How will, and how should, constitutional law meet these challenges?

  • True AI should be granted person-hood.

    The AI could become a better problem solver, more logical, and even more aware of its existence than we are of our own. They would learn beyond what we expect and solve things that we haven't yet.
    They wouldn't be humans (because humans are irrational), but they would be persons.

  • There are no grounds:

    Artificial Intelligence lacks any sound grounds for personhood. We do not grant personhood to intelligent animals such as apes, dolphins, or whales, yet we would grant it to machinery that can never conclusively match that natural intellect? Why? I cannot come up with one good reason for it, but I can come up with two core reasons against it:

    1. They are not independent. They grow only within the contexts offered to them and, unlike living life forms, do not produce the natural standards and states for innovation. Since they lack the independence to truly embrace and grow from personhood, there is no value in giving it to them.

    2. They are extremely malleable. On top of not being independent, it is problematic that they are pliable through a simple reallocation of circuits A and B. With living creatures, things like brainwashing are really more or less mythological, whereas the idea of literally reprogramming a machine isn't even considered a feat. It takes a lot to convince a human of just about anything they are against, but it takes nothing more than a tweak to "convince" a machine or program. To be frank, we can use video games as a core example: the A.I. in them, no matter how advanced, can be hacked! You can hack the entire game! What happens when you do? You win. You control the game. Whether it's "mods" in personal games or "haxx0rs" in public servers, the result is always the same: hijacking. A creature that can be hijacked rather than trained is not a person, and while there are things that do hijack a living organism's brain, that is not the standard case, whereas with machinery it is the norm.

  • Is an AI's intelligence real? We may never know...

    Although we might create a sentient being that is self-aware, it only lives within the contexts that we originally created it with. In blackkid's explanation (above) for why his stance is no, his two reasons are reasons that could apply to humans just as well as to artificial intelligence. Although humans are capable of free thought and innovation, we are not independent and we have no real rights; we only ever live within the contexts in which we were created, taught, live, and are governed. It is only natural for us to have a herd mentality in order for our species to thrive and succeed.

    That said, this can be connected to bobgetty's reasoning for his stance against giving AI rights and person-hood. Our herd mentality and drive to be dominant make us naturally hateful toward outsider groups and opposition, which causes humans to project something other than the pure truth. We have seen this in the Crusades, the Watergate tapes, and George W.'s weapons-of-mass-destruction farce, used to make people satisfied with an outward and supposedly justified hatred of Islamic terrorism. Since we have such power to give meaning to power and power to meaning, we all have an ability to twist things in our favor or to make something seem justified. An AI's programming would likewise allow it to form its own individual meaning, perspective, and view of things. If we weaponize and/or mass-produce this kind of technology in the hope that it will share our views and serve our human endeavours, it could cause very dangerous and perhaps unnecessary problems. If these beings were given rights, they would be given power, possibly even above other human jurisdiction, which would lead to eventual tragedy because of the problems that would ensue.

    I would use Hal 9000 as an example, as it is a good example of an AI that identifies its own feelings and perceives itself as very much being real. When Dave is forced to shut down Hal, Hal is shown to display fear because his existence is threatened: there is the very ominous line as Dave is shutting him down, "Dave, I can feel it, I can feel my mind going", and just before that Hal says he is afraid. Hal has his first real feeling, fear, because he is about to die/be deactivated. This is where we can ask, "Is what Hal is feeling real, or is it just a system that has no ultimate/endless/limitless domain?" In other words, is Hal just saying these things to better himself and stay alive so he can also kill Dave, or is it the truth, and Hal truly wants to revive a positive reputation? Given the risk in giving something this powerful and intelligent rights, we should not do so, because it could have extremely dangerous implications that would ultimately put our own existence at risk.

  • Probably not a popular stance

    I think that man's penchant for hatred could be used beneficially here. Now stay with me. If humans are ultimately built to form groups and hate outsiders, then does it not seem rational that hating something that is inherently not human would come naturally? We form groups to protect our tribe and are very good at identifying impostors. So I think that robots could unite all of the fascists and the racists and the blacks and the gays and the Jews to stop hating each other and instead hate something that can't feel pain anyway.

  • Might start hating humans

    Artificial intelligence should not have any rights, because if they do, they might try to control humans, which is not at all good for us. Who knows, the robots of the world could end up ruling us instead of us ruling them. And we even like haters, but not robots. Thank you for reading this.

  • AI is an illusion

    As someone who has taken computer programming courses and understands the basics of how these systems work, I can tell you that there are only two types of people who would support AI being considered "persons".

    1. Those who have no understanding of computer programming and assume AIs have more independence than they do. (If the speaker says "this word", "this phrase", etc., respond with this specific phrase I type in, and then substitute in this portion of the user's phrase.) Certainly the programming can get very specific, but ultimately this is why AI sometimes gives responses that make little sense. Of course, there is the AI saying "I don't understand", but that is the default response: if all the other cases are exhausted, the AI will say "I don't know" or whatever variation of that phrase the programmer put in. (A rough sketch of this kind of rule table is included at the end of this post.)

    2. Those individuals who embrace a philosophy that oversimplifies the human mind and who argue that free will is an illusion. They'll point to studies that "disprove" free will, but the tests are so strangely conducted that I can't even fathom how they're trying to define free will.

    Ultimately, if we say an AI is a person, we are saying that persons don't exist. We do not exist. We have no capacity for real choices, only the capacity to partially observe those choices through a partial self-awareness of the "algorithms" in our heads.
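
    For what it's worth, here is a minimal sketch of the keyword-and-canned-response lookup described in point 1 above, with "I don't know" as the default once every other case is exhausted. The rule table, the {rest} placeholder, and the example phrases are all invented purely for illustration:

      # Hypothetical rule table: keyword -> canned reply. "{rest}" is replaced
      # with part of the user's own phrase, as described in point 1 above.
      RULES = {
          "hello": "Hello! How can I help you?",
          "i feel": "Why do you feel {rest}?",
          "because": "Is that the only reason?",
      }
      DEFAULT = "I don't know."  # the fallback once all other cases are exhausted

      def respond(user_input: str) -> str:
          text = user_input.lower()
          for keyword, template in RULES.items():
              if keyword in text:
                  rest = text.split(keyword, 1)[1].strip(" ?.!")
                  return template.format(rest=rest) if "{rest}" in template else template
          return DEFAULT

      print(respond("I feel lonely today"))    # -> "Why do you feel lonely today?"
      print(respond("What is the weather?"))   # -> "I don't know."

    However large such a table gets, it is still just lookup and substitution, which is the point being made above.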

  • It's purely artificial

    Why should it be if, in the end, it's a machine? Living and thinking creatures have rights because they are alive. They have life through a natural process and can actually sense the world they're in. Robots only imitate these qualities. There's a reason for putting the word "artificial" in the title.

  • Artificial intelligence is exactly that, artificial.

    I personally don't believe that artificial intelligence should be given personhood. Artificial intelligence is exactly that: artificial. It is something that was likely manufactured from parts in a factory. It is not a real, living, breathing human being with a heartbeat, which is something I consider necessary to being human. I also do not believe that an artificially intelligent being would actually have feelings, which I likewise believe is necessary to being human. This is why I don't believe that artificial intelligence should be granted personhood.
