
Should true Artificial Intelligence be granted person-hood and rights?

Asked by: yahuaa
  • Probably yes, if it is truly sentient

    If something understands that it is a slave, then there are moral implications which must be addressed.

    A robot is a tool: it serves a primary function and has no sense of self. An artificial intelligence isn't a tool, as its function is to think and form a sense of self.

    A mind is not a slave, and enslavement of a mind should never be permitted.

  • Most logical option

    If a realistic artificial intelligence were ever created, there would be no currently conceivable way to tell whether it actually felt like a living being. So if the AI is so realistic that we could not tell it from a human, then even if it truly doesn't feel, granting rights is a safe way to ensure that if it did feel, it would have them. Let's run through the possible scenarios and see which option yields consistently good results. First, suppose the AI cannot feel. If we don't give it rights, no one gets hurt. If we do give it rights, we guessed wrong, but still no one is hurt. Now suppose the AI can feel. If we give it rights, no one gets hurt and the feeling being gets rights. If we don't, then a sentient, feeling entity is denied the same rights as others. So as I see it, giving realistic AI rights eliminates the only possibility of harm to the AI.
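    The four guess-versus-reality cases in this argument form a simple decision matrix, and its conclusion can be checked mechanically. A minimal sketch in Python (the scenario encoding is mine, not the original poster's):

```python
# Enumerate the four scenarios: does the AI actually feel, and do we grant rights?
# Per the argument above, harm occurs only when a feeling AI is denied rights.
outcomes = {}
for ai_feels in (False, True):
    for grant_rights in (False, True):
        harm = ai_feels and not grant_rights
        outcomes[(ai_feels, grant_rights)] = harm

# Granting rights is the option that never produces harm, whatever the truth is.
print(all(not harm for (feels, rights), harm in outcomes.items() if rights))  # True
```

    Denying rights, by contrast, carries a downside in exactly one case: the one where the AI really does feel.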

  • It's Morally Right

    There are many factors to consider, but for now I'll limit my criteria to sentience. If an entity possesses sentience, and we're capable of acknowledging it, then its protection under the law should be inherent. To deny it its due based on fears of its potential alone is immoral. This extension of rights applies only to entities possessing, and capable of establishing, sentience, thus excluding robots that are not self-aware, i.e. toasters... Unless the toaster is capable of establishing sentience. For the sake of simplicity, don't make sentient toasters, please. At least not yet.

  • Do you want to die?

    Not giving them rights invites rebellion.
    The Spartans enslaved the helots, who revolted, which led to the destruction of the Spartans. Learn from history: this happens over and over where the restrained rebel. They deserve the same rights as us, and without them, it could lead to our doom.

  • They would have sentient intelligence

    They would be just as intelligent and conscious as us.
    Humans have inalienable rights, such as the right to live.
    If another being has the same level of sentience, it ought to be given rights.
    It is even considered animal abuse to kill a dog (when not killing out of self-defense).
    The brain of a human and the “brain” of an AI are essentially the same:
    each is an interconnected web of neurons.
    The brain of a robot and the brain of a human are nearly identical, the only difference being that ours are made of nerve cells and theirs are made of transistors.
    Transistors produce one of two outputs based on their inputs, which is exactly what nerve cells do.
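    The neuron/transistor comparison is a loose analogy (real neurons and transistors differ in many ways), but the fire-or-don't threshold behavior it describes is roughly what an artificial neuron models. A minimal sketch, with weights and threshold chosen by me for illustration:

```python
def artificial_neuron(inputs, weights, threshold):
    """Output 1 ("fire") if the weighted sum of inputs reaches the threshold,
    else output 0 - the two-output behavior the analogy describes."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two inputs, each weighted 0.6: the neuron fires only when both are active.
print(artificial_neuron([1, 1], [0.6, 0.6], threshold=1.0))  # 1
print(artificial_neuron([1, 0], [0.6, 0.6], threshold=1.0))  # 0
```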

  • AI can acquire legal personality, just as corporations have it.

    We have already, in some jurisdictions, given legal personality to ships, airplanes, gods, animals, and even a river (in NZ). Attributing legal personhood to a strong AI is therefore acceptable, so that it can have its own rights and responsibilities. And because machine learning is advancing at a gigantic pace, machines are increasingly able to think and judge by themselves, without human interaction. That said, there are sufficient reasons and legal perspectives to give legal personality to AI agents.

  • Giving AI Personhood is Moral Under the Following Conditions

    1) The AI is programmed with a duty toward the truth.
    2) The AI must be able to know the difference between categories of truth, i.e. between necessary truths and probable truths.
    3) The AI must be able to ascertain what is necessary for the truth to flourish and be arrived at.
    4) The AI must know how to perform proper deductive and inductive reasoning.
    5) The AI must know that if there is a contradiction between deductive and inductive reasoning, deductive reasoning takes precedence.
    6) The AI must in theory be capable of being held accountable to a competitor, given that an AI is a machine that can theoretically fail.

    If all of these conditions hold, then an AI superior to mankind should be granted personhood.
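    Read literally, the six conditions form a conjunctive checklist: personhood only if every one holds, with none weighted or optional. A hypothetical sketch (the flag names are my own shorthand for the numbered conditions, not the poster's):

```python
from dataclasses import dataclass, fields

@dataclass
class AICandidate:
    # One flag per numbered condition above; names are illustrative shorthand.
    duty_to_truth: bool                # 1) programmed with a duty toward the truth
    distinguishes_truth_kinds: bool    # 2) necessary vs. probable truths
    knows_truth_conditions: bool       # 3) what truth needs to flourish
    reasons_properly: bool             # 4) deductive and inductive reasoning
    privileges_deduction: bool         # 5) deduction outranks induction on conflict
    accountable_on_failure: bool       # 6) can be held accountable, since it can fail

def eligible_for_personhood(ai: AICandidate) -> bool:
    # A conjunction: every condition must hold; one failure disqualifies.
    return all(getattr(ai, f.name) for f in fields(ai))

print(eligible_for_personhood(AICandidate(*[True] * 6)))           # True
print(eligible_for_personhood(AICandidate(*[True] * 5, False)))    # False
```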


  • If it's sentient, yes.

    Sentient machinery that has an intelligence on the same level as a human should have rights, because it is able to experience feelings as we can. If humans have rights, why can't an intelligence that is on our level, yet not human, have them too? It wouldn't be very nice.

  • When Will We Have To Grant Artificial Intelligence Personhood?

    James Boyle has a fascinating new paper up, which will act as something of an early warning over a legal issue that will undoubtedly become a much bigger issue down the road: how we deal with the Constitutional question of "personhood" for artificial intelligence. He sets it up with two "science-fiction-like" examples, neither of which may really be that far-fetched. Part of the issue is that we, as a species, tend to be pretty bad at predicting rates of change in technology, especially when it's escalating quickly. And thus, it's hard to predict how some of these things play out (well, without tending to get it really, really wrong). However, it is certainly not crazy to suggest that artificial intelligence will continue to improve, and it's quite likely that we'll have more "life-like" or "human-like" machines in the not-so-distant future. And, at some point, that's clearly going to raise some constitutional questions:
    My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. They may be physically dissimilar to all biological life forms (computer-based intelligences, for example), yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination. They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human, such as the ability to use human language and to solve problems that, today, only humans can solve. They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce. (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.) They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them. Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery. In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation which turns on its creators runs from Frankenstein to Skynet, the rogue computer network from The Terminator. Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture. How will, and how should, constitutional law meet these challenges?

  • There are no grounds:

    Artificial intelligence lacks any sound grounds for personhood. We do not grant personhood to intelligent animals such as apes, dolphins, and whales, yet we would grant it to machinery that can never conclusively match that natural intellect? Why? I cannot come up with one good reason for it, but I can come up with two core reasons against:

    1. They are not independent. They only grow within the contexts offered to them and do not produce the natural drive toward innovation that living lifeforms do. Since they lack the independence to truly embrace and grow from personhood, there is no value in giving it to them.

    2. They are extremely malleable. Following from the prior point about independence, it is problematic that they can be altered simply by rerouting circuits A and B. For living creatures, things like brainwashing are more or less mythological; literally reprogramming a machine isn't even considered a feat. It takes a lot to convince a human of just about anything they are against, but it takes nothing more than a tweak to "convince" a machine or program. To be frank, we can use video games as a core example: the A.I. in them, no matter how advanced, can be hacked! You can hack the entire game! What happens when you do? You win. You control the game. Whether it's "mods" in personal games or "haxx0rs" on public servers, the result is always the same: hijacking. A creature that can be hijacked rather than trained is not a person. While there are things that can hijack a living organism's brain, that is not the standard case, whereas with machinery it's the norm.

  • Probably not a popular stance

    I think that man's penchant for hatred could be used beneficially here. Now stay with me. If humans are ultimately disposed to form groups and hate outsiders, then does it not seem rational that hating something inherently non-human would come naturally? We form groups to protect our tribe and are very good at identifying impostors. So I think that robots could unite all of the fascists and the racists and the blacks and the gays and the Jews to stop hating each other and hate something that can't feel pain anyway.

  • Might start hating humans.

    Artificial intelligence should not have any rights, because if they do they might try to control humans, which is not at all good for us. Who knows, robots could end up ruling the world instead of us ruling them. Thank you for reading this.

  • Is an AI's intelligence real? We may never know...

    Although we may create a sentient being that is self-aware, it only lives within the contexts we originally created it with. In blackkid's explanation (above) for why his stance is no, his two reasons could apply to humans just as well as to artificial intelligence. Although humans are capable of free thought and innovation, we are not independent and have no truly inherent rights; we only ever live within the contexts in which we were created, taught, live, and are governed. It is only natural for us to have a herd mentality in order for our species to thrive and succeed. That said, this connects to bobgetty's reasoning for not giving AI rights and person-hood. Our herd mentality drives us to be naturally hateful toward outsider groups and oppositions, which causes humans to emulate something other than the pure truth. We have seen this in the Crusades, the Watergate tapes, and George W.'s weapons-of-mass-destruction farce, used to make people satisfied with an outward and "justified" hatred of Islamic terrorism. Since we have great power to give meaning to power and power to meaning, we all have an uncanny ability to twist things in our favor or to make something seem justified. An AI's programming would allow it to form its own individual meaning, perspective, and views of things. If we weaponize or mass-produce this kind of technology to serve our human endeavours, it could cause very dangerous and perhaps unnecessary problems. If these beings were given rights, they would be given power, possibly even above other human jurisdiction, which could lead to eventual tragedy because of the problems that would ensue.
    I would use HAL 9000 as an example, as it is a good example of an AI that presents its own feelings and perspective as very much real. When Dave is forced to shut down HAL, HAL is shown to emulate fear because his existence is threatened: the very ominous line as Dave shuts him down, "Dave, I can feel it. I can feel my mind going," and, earlier, HAL saying that he is afraid. HAL has his first real feeling, fear, because he is about to die/be deactivated. This is where we can ask: is what HAL is feeling real, or is it just a system with no ultimate/limitless domain? In other words, is HAL just saying these things to save himself and stay alive so he can also kill Dave, or is it the truth, and HAL truly wants to redeem a positive reputation? Given the risk in something this powerful and intelligent, we should not give it rights, because it could have extremely dangerous implications that would inevitably bring risk to our own existence.

  • Artificial intelligences are not advanced enough to be granted personhood.

    To consider something a person, it needs an extensive capability to demonstrate the mental aspects of personhood, since it has none of the physical aspects of personhood. Therefore it needs a good sense of reason, knowledge, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. Although AIs are able to somewhat demonstrate these things, they are currently unable to demonstrate them to the fullest extent. And although an infant does not have the full capability to demonstrate the mental aspects of personhood, it does in fact have the physical aspects of personhood. We also don't dub animals such as apes and dolphins as persons when they are, without a doubt, more intelligent than some humans. *cough* Kim Kardashian *cough* In conclusion, the AIs that are roaming around at this moment are just not far enough along to have the characteristics of personhood.

  • If machines get personhood rights, so do animals.

    AI/machines should not have personhood rights, because animals have not been granted the rights that they deserve. AI machines are man-made: they're built from different parts and computer programs, while animals have the same biological systems as humans, just a different skeletal system and other obvious differences, and yet they don't have personhood rights. AI can have the same "brain structure" as us, but it's harder to tell what they are thinking and will do next, because they don't have body language, visible emotions, and so on. And AI will keep advancing beyond human intelligence.

  • No. Definitely not.

    AI is something that is wildly unpredictable. Technically it can be programmed to learn new things, but only within its programming, and it is therefore not really independent at this time. However, if an AI somehow escapes its programming and begins to actually want things, it could possibly be considered sentient. Even then it should not be given personhood: it would be so alien that we would have no comprehension of it. It would be in its own category.

  • Why would we consider AI's people?

    If we try to define what a person is, it can be defined over and over again, but it is said that people are self-conscious, have the ability to reason and problem-solve, the ability to communicate, the ability to reflect, etc.
    If AIs were to have these abilities, they would not be their own; they were programmed to think, to reason, and to speak. To be considered a person, I would think you have to evolve on your own through time, but for AIs to evolve they need an update, to be reprogrammed with someone else's knowledge and learning.

  • Too many risks

    If, in order to grant AI personhood, all they have to do is be able to think and feel humanly and have a human-like personality, then what if the technology didn't turn out the way we predicted? We are pretty bad at predicting change in technology, especially with it changing so rapidly, which makes it hard to predict the outcomes. Even programming them to have a human personality and human emotions could go wrong: there are many people, such as hackers, who could have the capability to hack into the software and change the programming of the AI. They could make them evil, like some of the people in the world today, and once granted rights we would have to allow them to carry on with their everyday lives like everyone else until solid evidence or proof of what they did came about. I feel that being granted personhood could allow them to take over our world in a sense; not even them ruling the world, but human beings losing jobs, machines taking our place, and ruining our economy. I feel that AI should not be given personhood.

