The Instigator: Pro (for) - 8 Points
The Contender: Con (against) - 2 Points

The Ethics of Star Trek: Holograms' Rights

Post Voting Period
The voting period for this debate has ended.
After 2 votes the winner is...
Voting Style: Open with Elo Restrictions
Point System: 7 Point
Started: 3/15/2015
Category: TV
Updated: 1 year ago
Status: Post Voting Period
Viewed: 1,651 times
Debate No: 71744
Debate Rounds (5)
Comments (50)
Votes (2)





This is my fifth installment in a 12-part series on the "Ethics of Star Trek." I think Star Trek holds a wealth of moral and philosophical quandaries that are fertile ground for debate and controversy. While some may see these kinds of TV show-related debates as "fluff" topics, I think that the serious ethical implications that underpin the topics I have selected belie that idea.

My hope for this debate and for this series is that I will be able to have some fun delving into my inner nerd and my inner trekkie, while still having some lively and informative discussions.

In this debate, particularly, I hope to explore the concepts behind and ethics of autonomy, artificial intelligence, and moral worth through the examples of specific holograms. This is similar to my earlier debate about androids and whether they can have rights, but I think holograms are substantively different from androids, since they are non-corporeal. You must have an Elo of 2,500 to vote on this debate, and have completed 3 debates to be able to accept.

Full Topic

That the Moriarty hologram, the villagers of the Yaderan Colony, and/or “the Doctor” can hold moral rights.


Moral - 'concerned with the ethical principles of right and wrong'
Right - 'something that one may properly claim as being entitled to' and/or 'an entitlement to (not) perform or have others (not) perform certain action(s) and to (not) be in certain states'


This debate deals with information located in the Star Trek: Deep Space Nine and Star Trek: The Next Generation episodes "Shadowplay," "Elementary, Dear Data," and "Ship in a Bottle." It also involves events portrayed in the whole Star Trek: Voyager series. Moriarty, particularly, is an incredibly intelligent hologram, designed to rival an android in intellectual ability. Twice he seized control of the Enterprise from its crew and nearly thwarted their attempts to defeat him and regain the vessel.

The Yaderan Colony is a village made up almost entirely of holograms, who have lived there generation after generation (yes, their programming kills them off and allows new ones to be created), passing down their cultural traditions.

Finally, the Doctor is the chief medical officer on Voyager. His hobbies include writing and opera, and he is known for his cowardice as much as for his sarcastic wit. Originally confined to sickbay, he was later given a device that allowed him to move throughout the ship and to function more as a "normal" member of the crew.


1. No forfeits
2. Any citations or foot/endnotes must be provided in the text of the debate
3. No new arguments in the final round
4. Maintain a civil and decorous atmosphere
5. No trolling or semantics
6. My opponent accepts all definitions and waives his/her right to add definitions
7. Showing 1 right is sufficient to affirm
8. The BOP is shared
9. Violation of any of these rules or of any of the R1 set-up merits a loss


R1. Acceptance
R2. Pro's Case, Con's Case
R3. Pro rebuts Con's Case, Con rebuts Pro's Case
R4. Pro defends Pro's Case, Con defends Con's Case
R5. Pro rebuts Con's Case, Con rebuts Pro's Case, both Crystallize

Thanks in advance to whoever accepts; I look forward to a debate that takes me where no man has gone before!


Couldn't resist the temptation...

I accept!
Debate Round No. 1


Thanks to WYMM for accepting. I anticipate that this will be a fun debate! Before I begin, I want to note that I may be using arguments from other debates I've had on DDO; I say this to avoid any accusations of self-plagiarism and to be up-front with the issue. This is not prohibited by the rules of the debate.


C1. A Fundamental Due: Needless Suffering

SC1. Preventing Needless Suffering is a Fundamental Due

Perhaps the most fundamental due we ascribe to beings is the right to be free from needless or wanton suffering. In fact, one could make the argument that one of the primary purposes of rights is to prevent needless suffering. A person deprived of liberty, unable to own property, and without any guarantees of bodily integrity is a person apt to be subject to needless suffering, and so the existence of these rights shields against that likelihood.

Moreover, even if preventing needless suffering weren't a goal or justification for the existence of certain rights, we can affirm that it coincides with the moral community's understanding of rights and justice--someone who is needlessly suffering is, by necessity, having at least one of their rights violated and is being treated unjustly.

A third and final justification for the importance of preventing needless suffering is an appeal to the idea of Justice itself. Justice is fairness or reasonableness, as well as giving each their due. Needless suffering is neither fair, nor reasonable, nor due--it is, by its very nature, needless.

SC2. The Holograms in Question can Suffer

The available evidence indicates that the Holograms in question can indeed suffer. The Yaderan colonists, for instance, exhibit genuine fear, love, happiness, etc. during the episode "Shadowplay." Chief Security Officer Odo specifically observes that "they appear to think, feel, and reproduce." [1] The same is true of Moriarty, who displayed love, fear of death, and so forth. [2]

The Doctor most certainly is capable of feeling, and even of experiencing emotional pain: "He even contemplated deactivating it permanently after his 'daughter' was fatally wounded in an accident and he could not take the pain that came with losing her. However, Tom Paris helped The Doctor learn to cope with the negatives as well as the positives in having a family and The Doctor returned to the program to mourn the loss of his daughter with his wife and son." [3]

C2. Give and Take

Relationships where one party uses the other as a tool merit some kind of compensation for the party used as a tool. An employer uses his employees as tools, and so he must pay them. Give and take is a concept naturally intertwined with justice. If we imagine justice as a scale (recall the idea of fairness), then it would be unbalanced (unfair) if I failed to give back. If that employer takes your time without paying you, then the scale tips noticeably in his favor. Similarly, when a thief steals, we acknowledge the victim's right to be reimbursed for the value of the stolen object(s).

It seems obviously unfair and unreasonable to take from holograms, without giving anything in return. For instance, it seems unfair to demand of the Doctor that he treat the wounded, while denying him the right to engage in pastimes or enjoy recreation.

C3: Interests of their Own

SC1. Interests are Linked with Rights

"Now, there is a very important insight expressed in the requirement that a being have interests if he is to be a logically proper subject of rights. This can be appreciated if we consider just why it is that mere things cannot have rights. Consider a very precious 'mere thing'--a beautiful natural wilderness, or a complex and ornamental artifact, like the Taj Mahal. Such things ought to be cared for, because they would sink into decay if neglected, depriving some human beings or perhaps even all human beings, of something of great value. Certain persons may even have as their own special job the care and protection of these valuable objects. But we are not tempted in these cases to speak of "thing-rights" correlative to custodial duties, because, try as we might, we cannot think of mere things as possessing interests of their own. Some people may have a duty to preserve, maintain, or improve the Taj Mahal; but they can hardly have a duty to help or hurt it, benefit or aid it, succor or relieve it. Custodians may protect it for the sake of a nation's pride and art lovers' fancy; but they don't keep it in good repair for 'its own sake,' or for 'its own true welfare,' or 'well-being.' A mere thing, however valuable to others, has no good of its own." [4]

SC2. The Holograms in Question are Not "Mere Things"

Clearly, the holograms have a "good" of their own--a good attested to by Moriarty's own fear of death. Once their sentience is realized, they acquire the ability to pursue purposes, to self-actualize, and this constitutes their welfare. Just as my rights protect my ability to pursue happiness (they have utility in that respect), holograms' rights would do the same for them. It would allow them to access their dreams without fear of losing that chance or having it unjustly degraded. A chair cannot hope to have purposes or a welfare of its own, yet a hologram does have these things. Therefore, it has interests, the denial of which would be a wrong.

I apologize for the brevity of my arguments, but as I am running out of time, I will leave things there. Over to Con...!


1 -
2 -
3 -
4 -


Thanks to Bsh1 for what is sure to be an interesting debate!
I'll be presenting my constructive case this round.
It will be taking the general form of the following syllogism:

P1) The existence of moral rights is contingent on personal autonomy
P2) Holograms do not possess personal autonomy
C1) Holograms do not have moral rights

Since this is a valid syllogism, all I have to do to affirm the conclusion is demonstrate that both premises are true.
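The validity of this form can in fact be checked mechanically. Here is a minimal Lean 4 sketch (the predicate names are invented for illustration and are not part of the debate) showing that the conclusion follows whenever both premises hold:

-- A minimal Lean 4 sketch (illustrative only; predicate names are invented,
-- not taken from the debate). It machine-checks that the syllogism's form
-- is valid: if both premises hold, the conclusion follows.
variable {Being : Type}
variable (Hologram Autonomous HasMoralRights : Being → Prop)

theorem syllogism_valid
    (p1 : ∀ b, HasMoralRights b → Autonomous b)  -- P1: rights require autonomy
    (p2 : ∀ b, Hologram b → ¬ Autonomous b)      -- P2: holograms lack autonomy
    : ∀ b, Hologram b → ¬ HasMoralRights b :=    -- C1: holograms lack rights
  fun b hHolo hRights => p2 b hHolo (p1 b hRights)

Validity, of course, only transfers truth: if either premise fails, the conclusion is left unsupported.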

P1) Personal Autonomy & Rights

What is personal autonomy? It is the capability to exercise full conscious control over one's own self (aka "free will"). Without it, morality becomes meaningless. If reality were wholly deterministic, then its resident beings would operate mechanically, driven solely by naturalistic processes, and would not have any ability to exert conscious control over themselves; they would more or less be inanimate objects--completely devoid of any moral significance whatsoever. Suppose that a large boulder falls off a rocky ledge and crushes an unsuspecting passer-by to death. To hold this boulder morally culpable for its actions would be ridiculous, because it didn't really "commit" any action in the first place; it lacks consciousness and has absolutely no control over what happens to it. On the flip side, if a person were to use dynamite to destroy the boulder, we would not consider this person to have done anything morally wrong either. Why? It is for precisely the same reason--the boulder lacks moral significance; it does not have any of the moral rights or responsibilities which are intrinsic to ethically relevant beings.

Without personal autonomy, a being is reduced to an impersonal machine--the moral equivalent of a rock or a tree stump. On its own, the ability to experience pain cannot grant ethical relevance; inflicting suffering upon a being which lacks the faculties of free will and consciousness is essentially just stimulating the flow of electrical signals through its nervous system... causing a physical reaction within an inanimate object is not immoral by any reasonable standard, and thus the capacity for suffering is not an adequate criterion for moral significance. Only if the being is *animate* can it truly have any such intrinsic value, and the possession of personal autonomy is precisely what distinguishes the animate from the inanimate. Therefore, a being's moral significance is completely dependent upon the possession of personal autonomy, and since moral rights obviously can't apply to a being which is not even morally significant, the conclusion follows: the existence of moral rights is contingent on personal autonomy.

By the way, having read my opponent's debates before, I should note that "personal autonomy" is in no way connected to "rationality". Infants, sentient animals, and the mentally disabled are perfectly capable of having basic rights because they still are autonomous to some extent; they are able to freely move about and exert control over their own actions, even if they lack the cognitive capacity to understand their moral significance. Accepting personal autonomy as a criterion for ethical relevance does not conflict with our ethical intuitions at all.

P2) Holograms are not Autonomous

I assume that my opponent will be fair and agree that holograms in the Star Trek universe must abide by the same physical and metaphysical limitations that would bind holograms in the real world; after all, this debate is about applied ethics, implying that it should have some relevance to reality. It would be abusive of my opponent to define holograms as being virtually identical to human beings and then somehow expect me to argue that they shouldn't have the same rights as humans. I will be arguing that because of the limitations of computation in the real world, it is not possible for holograms to truly possess personal autonomy; it is only possible for their creators to produce a convincing illusion of autonomy in them through highly complex programming.

To simplify this a bit, picture two children who want to buy a teddy bear. One child is led into an expansive mega-store full of nothing but rows upon rows of different varieties and designs of teddy bear, while the other child is led into a build-a-bear workshop with all the tools and materials necessary to build his own teddy bear in any sort of style he wants to. The first child may have an enormous variety of options at his fingertips, but ultimately his freedom of choice is still very limited--if for some reason he wants a teddy bear which is not included in the mega-store's inventory, then there is nothing he can do about it; he is entirely restricted to the limited choices available to him--his seemingly unlimited range of choices is merely an illusion. Meanwhile, the second child is virtually without constraint as to what kind of teddy bear he can buy, as he has been empowered with the freedom to build one in any fashion he desires; he truly possesses the freedom of choice.

If it wasn't already clear, the first child's situation represents "artificial intelligence", whereas the second child's situation represents true free will. Ultimately, artificially intelligent computers are limited by the choices afforded to them by their programming; no matter how vast the array of potential decisions may seem, they are still impersonal and mechanistic devices which lack the faculties of consciousness and free will that distinguish the animate from the inanimate. A computer program can, at best, mimic consciousness; it is simply not possible to actually replicate something like consciousness through syntax (i.e. 1's and 0's). A being with personal autonomy is able to fully control its own actions to do as it pleases; a hologram's actions are completely directed by and limited to its programming. It is not conscious, and it is not autonomous--it is an inanimate object which has been designed to merely emulate autonomy. This premise is affirmed.


The syllogism has been shown to be logically sound, and thus its conclusion holds true: holograms do not have moral rights. They fail to meet the fundamental qualifier for moral significance (personal autonomy), and thus cannot be said to have any sort of moral significance. The resolution is negated.

Back to Pro!

Disclaimer: If, by some incredible fluke, I happen to be wrong about this, I offer my most sincere apologies to any and all holograms which I have offended. I did not mean to objectify you. Sorry.
Debate Round No. 2


Thanks, WYMM! I will now rebut Con's case.


P1. Personal Autonomy

Con talks about the importance of autonomy in establishing moral culpability. To use his words, it is in virtue of the fact that a being makes conscious choices that it has "moral significance." There are three concerns I have here.

Firstly, Con makes an effort to claim that babies and the mentally handicapped "are perfectly capable of having basic rights" because "'personal autonomy' is in no way connected to 'rationality'." But, this doesn't seem plausible. If, for instance, a baby chooses to pick up and toss an electric device into the tub as he watches his mom wash the dog (and the dog is electrocuted to death), is the baby to be held to account for his actions? No--we deny that the child is morally culpable because he was unable to appreciate the consequences of his actions. In other words, the baby is not culpable because he could not make rational choices. Similarly, if I have 2 options (one which is right and one which is wrong), and I lack the capacity for determining which is ethically right and which is ethically wrong, I cannot be blamed for choosing the ethically wrong option.

Con asserts that morality would become meaningless if reality were deterministic, because people could never be culpable for their actions. But, using that logic, if rationality too is a necessary precondition for culpability, then rationality is necessary for morality as well. If we cannot assess our decisions to determine or understand which actions were right and which were wrong, we can never be blamed for our actions since we will perpetually be unable to appreciate what we are doing. Ergo, rationality matters if Con wants personal autonomy to work the way he wants it to.

And, even if you don't buy that, Con defines autonomy as "conscious control"; clearly, the baby throwing electronics lacks that. Research tells us that babies develop consciousness sometime between 5 and 15 months. [1] It also seems entirely plausible that many of the mentally ill are not conscious either, but react in a merely reflexive fashion.

Consequently, Con's arguments both support the idea that agents must be rational and conscious. Babies and the mentally impaired are neither, and so Con must necessarily regard them as lacking moral significance. But he doesn't, and that's the inherent contradiction in his first premise.

Secondly, (a) Con's boulder example doesn't really prove anything. It commits an omitted variable bias, where it fails to consider other factors that may contribute to a thing's moral status. (b) Why is choice necessary to have moral worth? Even if we grant that choice is necessary to act morally/to be a moral agent, why does that necessarily imply that choice is necessary to be worthy of moral consideration? One doesn't seem to follow from the other. For instance, a citizen of a nation might be considered a legal actor in society, someone who has certain responsibilities that non-citizens don't, but if a non-citizen is attacked on the streets, they still have legal worth and the government must come to their aid. Even if you don't buy the example I give, though, Con still needs to explain the link, otherwise his argument is just a non sequitur.

Thirdly, Con says that without autonomy, one is reduced to the level of a tree stump. Given my first objections, it seems that Con would agree that babies are moral tree stumps. Fortunately, however, there are other qualities that distinguish babies from tree stumps so as to prevent this diminution in status. Tree stumps, for example, have no interests and don't suffer (see my case for a fuller explanation).

Con attempts to argue that suffering is not a moral criterion because it's merely a physical reaction. But, I don't see why this objection succeeds. Con is missing a premise to his argument here; he has failed to explain why physicality--why the fact that suffering is merely an electrical response--matters, morally speaking.

Even more importantly, thought, rationality, and consciousness are also products of biological and neurological processes. I can only make autonomous choices because of my biological and physical condition. Does this then invalidate autonomy as a standard for rights?

But, even if you don't buy my attacks on Con's P1, if I can dismantle his P2, I will still have undermined his conclusion. Remember, only one of his premises needs to be false for the syllogism to fail to establish the conclusion he draws from them. So I don't need to defeat both of his premises; I only need to take out one. In some cases, I only need to win one argument to invalidate a premise.

P2. Holograms

Con begins with a lengthy discussion on debate parameters. The holograms described in the OP must have all of the traits that they are described as possessing in Star Trek Canon. This is, first and foremost, a debate about the ethics of Star Trek, and thus asks us to put ourselves in the position of the actors of the Star Trek universe. It is not a debate about the ethics of real-life Earth. And, while this debate seeks to apply broad principles to reality (such as autonomy, fairness, etc.), it does not seek to apply specifics. For instance, I might have a debate about the ethics of the Q Continuum (a society of virtually omnipotent beings); while the general principles are applicable to real-life, many details are not. In other words, I specifically chose Star Trek because it is otherworldly, and because it is not an exact analog of Earth; any attempts to force this debate into a real-world paradigm thus violate the principles of this debate. Also, there are significant differences between humans and holograms even in the Star Trek universe, so it doesn't seem abusive to frame the debate in this fashion.

Con then gives us the children and the teddy bear example. He explains that the differences between the two children (one being humanity, the other being holograms) can be boiled down again to autonomy. Con writes: "artificially intelligent computers are limited by the choices afforded to them by their programming; no matter how vast the array of potential decisions may seem, they are still impersonal and mechanistic devices which lack the faculties of consciousness and free will that distinguish the animate from the inanimate."

Firstly, the obvious response to this argument is: aren't we, in our own way, machines as well? Humans have limits to our programming too. There are concepts we cannot grasp, things we cannot do, and so forth. Why are the limits of human design any different from the limits of artificial programming? Con never explains. Con also commits an ipse dixit fallacy when he suggests--without warrant--that the holograms in question lack free will. Humans were programmed to breathe, and holograms were designed to do certain things, but the fact that we were programmed with built-in goals doesn't mean that we cannot exercise free will by expanding beyond our initial programming through the choices we make. Clearly, the holograms in question have done just that. The Doctor, for instance, went from being a medical instrument to something that wanted to play music and to write prose. He acquired desires and expanded beyond the limits that he was initially given. That is the hallmark of free will.

Secondly, Con also suggests that it is impossible for computers to gain consciousness. But aren't our brains just complex computers? If we could develop it, why is it impossible that, at some point, machines could become complex enough to develop it as well?

Thirdly, Con's arguments pretty much rest solely on his intuition. He has no evidence that artificial intelligences could not become sufficiently complex to have a consciousness. He has no evidence that they could not, in the future, develop free will or defy their own programming. He has no evidence that computers couldn't display traits like creativity, ingenuity, or independence. His case is built almost totally on assumptions.

The following replies will concern themselves with showing that presumptions that holograms lack (or are incapable of) consciousness, or that human beings certainly possess it, are not logical.

Fourthly, "There is no way we can determine if other people's subjective experience is the same as our own. We can only study their behavior...Nils Nilsson writes 'If a program behaves as if it were multiplying, most of us would say that it is, in fact, multiplying." [2]

Fifthly, "[I]magine that engineers have invented a tiny computer that simulates the action of an individual neuron. What would happen if we replaced one neuron at a time? Replacing one would clearly do nothing to change conscious awareness. Replacing all of them would create a digital computer that simulates a brain." If we believe Con, "then conscious awareness must disappear during the procedure (either gradually or all at once)...[T]here would be no point during the procedure when he can claim that conscious awareness ends and mindless simulation begins." [2]

Sixthly, suppose that a human being is born that is not actually a human being, but a being who is a "zombie"; they lack some property of autonomy (like free will). "This new animal would reproduce just as any other human and eventually there would be more of these zombies. Natural selection would favor the zombies, since their design is (we could suppose) a bit simpler. Eventually the humans would die out. [I]t is...impossible to know whether we are all zombies or not. Even if we are all zombies, we would still believe that we are not." [2] The impact of this argument is fairly straightforward: if we assign rights to ourselves, and we are zombies (or cannot disprove that we are zombies), then there doesn't seem to be any distinction between us and a machine.


1 -
2 -


I concede the debate.
Sorry for wasting your time.

I promise there's a good reason for this :(
Debate Round No. 3


Thanks to Con for that gracious concession, though I am puzzled as to precisely why he chose to concede.


WillYouMarryMe forfeited this round.
Debate Round No. 4


Vote Pro. Thanks.


WillYouMarryMe forfeited this round.
Debate Round No. 5
50 comments have been posted on this debate. Showing 1 through 10 records.
Posted by bsh1 1 year ago
Why did you concede?
Posted by WillYouMarryMe 1 year ago
fvck. I'm going to have to refute physicalism to win this....
Posted by bsh1 1 year ago
Posted by WillYouMarryMe 1 year ago
haha I hate when that happens. I wish there was a way to go back and make small edits, but that would be too easily abused by some people...
Posted by bsh1 1 year ago
All the spelling errors...grrr...
Posted by WillYouMarryMe 1 year ago
Posted by bsh1 1 year ago
I posted.
Posted by WillYouMarryMe 1 year ago
writing my* -____-
Posted by WillYouMarryMe 1 year ago
wtf I think my space bar was broken while writingmy argument
Posted by WillYouMarryMe 1 year ago
well actually, if you posted today, that would be fine too...
2 votes have been placed for this debate. Showing 1 through 2 records.
Vote Placed by tejretics 1 year ago
Agreed with before the debate: Tied (0 points)
Agreed with after the debate: Tied (0 points)
Who had better conduct: Con (1 point)
Had better spelling and grammar: Tied
Made more convincing arguments: Pro (3 points)
Used the most reliable sources: Pro (2 points)
Total points awarded: Pro 5, Con 1
Reasons for voting decision: Concession.
Vote Placed by 1harderthanyouthink 1 year ago
Agreed with before the debate: Tied (0 points)
Agreed with after the debate: Tied (0 points)
Who had better conduct: Con (1 point)
Had better spelling and grammar: Tied
Made more convincing arguments: Pro (3 points)
Used the most reliable sources: Tied
Total points awarded: Pro 3, Con 1
Reasons for voting decision: Concession