The Instigator
Pro (for)
0 Points
The Contender
Con (against)
0 Points

Utilitarianism is the Best Available Ethical Theory

Post Voting Period
The voting period for this debate has ended.
after 0 votes the winner is...
It's a Tie!
Voting Style: Open Point System: 7 Point
Started: 6/19/2014 Category: Philosophy
Updated: 7 years ago Status: Post Voting Period
Viewed: 3,053 times Debate No: 56890
Debate Rounds (4)
Comments (15)
Votes (0)




1st: Acceptance only.

2nd: New arguments and rebuttals.
3rd: New arguments and rebuttals.
4th: Rebuttals and summaries.


1. No assumptions (such as the goodness or badness of freedom, rights, societal views, chocolate, emotion, etc.) are accepted by both parties without proof.
2. Con must not just attack Utilitarianism, but also provide another ethical theory that is superior to Utilitarianism.
3. Star Trek cannot be mentioned. Don't ask.


I accept!
Debate Round No. 1



Utility: "the state of being useful, profitable, or beneficial," [1]. Within the context of 2P1, this means useful for promoting potential value.


2P1. Utilitarianism Best Handles Moral Uncertainty

Value "denotes something's degree of importance, with the aim of determining what ... is best to do," [2]. Value is that which is ethically significant, which definitionally must be maximized.

To attempt to determine what is valuable, living sentient entities attempt to construct ethical systems. However, because living sentient entities have limited knowledge, we cannot know which, if any, of the ethical systems available is/are perfect, and thus we don't know what, if anything, is actually valuable.

So how should we act when we do not know what the ultimate goal(s) is/are, if there is/are one/any?

In this dearth of knowledge, we must attempt to maximize the chance that we can determine what is valuable. Thus, we must keep the maximum number of sentient entities alive and ensure their maximum ability to reason, because only living sentient entities are likely to significantly increase the probability that we can determine what is valuable. (Non-living and non-sentient entities don't usually develop or test ethical or scientific theories, and both of those actions are vital for increasing available knowledge and the chance of ascertaining what is valuable.) Further, if what is valuable is ever determined, only living sentient entities would be able to act towards it.

Thus, we must value:

1. Maximizing the number of living sentient entities, and

2. Maximizing the capability of living sentient entities to objectively reason

Utilitarianism best achieves these goals, because it values both keeping the maximum number of living sentient entities alive (because life is a prerequisite for utility and determining what is valuable) and in maximizing potential utility by giving living sentient entities long, interconnected, educated, scientifically engaged, relatively comfortable, and relatively free lives (because a high quality of life, high level of knowledge, and high level of productivity increase utility both for the person and society in general, and thus increase the probability of determining what is valuable).

No other ethical system values both of these as highly as Utilitarianism does, and some value neither, and thus they fall short of the burdens of moral uncertainty.


2P2. Utilitarianism Best Handles Conflict Between Values

Under many other systems, such as deontology, it is nearly impossible to make a decision that will not have internal value conflict. For example, should I remove the right to life of one person or remove the right of free speech of ten? This is especially true in public policy, which must benefit some groups at the expense of others.

Under action-based systems, there are two paths. If rights and duties are absolute, there is effectively no way to resolve this problem. If rights and duties are pragmatic and situational, then one has already conceded that the system is not truly action-based: one must weigh the consequences of each action and determine the better of the two worlds (i.e., Util), leading back to consequentialism.

Under virtue ethics systems, if one must choose between harming one virtue for the gain of another, one must weigh which virtue is worth more in the long run, and ultimately this becomes a form of Utilitarian weighing.

However, under Utilitarianism, one simply counts up the gains and costs of an action as opposed to an alternative, and chooses between the two. There is never unresolvable value conflict under Utilitarianism, because everything is quantifiable, or at least rankable, and thus can be weighed against an alternative.
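The counting-up described above can be sketched as a toy calculation. The actions, effect categories, and utility numbers below are purely hypothetical assumptions for illustration, not part of any formal utilitarian calculus:

```python
# Toy sketch of utilitarian weighing: sum the (signed) hypothetical
# gains and costs of each alternative, then pick the larger total.
# All names and numbers here are illustrative assumptions.

def total_utility(effects):
    """Sum the signed utility contributions of an action's effects."""
    return sum(effects.values())

option_a = {"lives saved": +50, "property damage": -10}   # total: 40
option_b = {"lives saved": +30, "property damage": -2}    # total: 28

# Because every effect is expressed as a number, the two options are
# always comparable -- no unresolvable conflict remains.
best = max([option_a, option_b], key=total_utility)
print(total_utility(option_a), total_utility(option_b))
```

The point of the sketch is structural: once every consideration is mapped onto a common scale, `max` always returns an answer, which is the "no unresolvable conflict" claim above.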


2P3. Rule Utilitarianism Best Maximizes Utility

Rule Utilitarianism avoids most of the problems that might result from Act Utilitarianism and also allows for a theory of rights to emerge. Rule Utilitarianism is also especially useful for government action, because it acknowledges that government policy is rarely a one-and-done single action but instead an ongoing policy.

Rule Utilitarianism takes a long-term look at maximizing utility and sets in place those rules that, if followed, would result in greater total utility than if everyone always acted in their own favor. For example, in an Act Utilitarian world, it would be beneficial to steal from stores whenever it would make you better off by more than it would reduce the utility of the storekeeper. Rule Utilitarianism acknowledges that rules are necessary to maximize utility: a world in which shopkeepers can stock wares without having them stolen gains more utility from commerce than an Act world would gain from stealing.

This also allows for easier obedience in everyday life, as we generally run our lives by societal rules. We don't steal, insult others, or break traffic laws, because society has ingrained these rules into us and because they maximize utility, effectively preventing us from needing to perform a utility calculus for every action.

Moreover, under Rule Utilitarianism it may be recognized that allowing people autonomy in their own affairs generally produces a greater increase in utility than attempting to run everyone's lives from an Act perspective, and this allows a broader theory of rights to emerge. For example, even though it might be Act Utilitarian to silence dissenting press during a rough time, a Rule Utilitarian can look both to the general benefits of maintaining a free flow of information to the populace and to the long-term loss of trust in government that such treatment would cause.







2C0. Prerequisite
The definition of value (2P1) is out of context and contradictory to utilitarianism.
From pro's source: “In ethics, value denotes something's degree of importance, with the aim of determining what action or life is best to do or live (Deontology), or at least attempt to describe the value of different actions (Axiology).”
Also, the definition of utility is given (2CDefinitions), but the formal definition of utilitarianism is not. “Utilitarianism: doctrine that the useful is the good and that the determining consideration of right conduct should be the usefulness of its consequences; specifically : a theory that the aim of action should be the largest possible balance of pleasure over pain or the greatest happiness of the greatest number” – Merriam Webster FULL DEFINITION [12].

To 2P1. On the contrary, Utilitarianism ensures moral uncertainty, unless of course consequences are 100% certain. Also, 2P1 says that we should be utilitarians and value maximizing the number of living sentient entities. This implies the value of sentience and the value of life. That utilitarians value sentience I would somewhat agree, because pleasure and pain are important to utilitarians. However, life itself has no intrinsic value in any formal, unadjusted academic utilitarian theory. The second value of pro's concoction implies value in reason. The problem here is not a utilitarian lack of value in reason, but the conflict between reason and sentience: moral uncertainty over which is valued more. Finally, in 2P1, the reason given for these values is to find the real values.

To 2P2.
See 2C2A/B.

To 2P3. Rule Utilitarianism: “An action is morally right if and only if it does not violate the set of rules of behavior whose general acceptance in the community would have the best consequences--that is, at least as good as any rival set of rules or no rules at all” [6]. This is great because it’s a sort of ‘shortcut’ to the blown-out calculus. However, Rule Utilitarianism, like any other utilitarianism, is in favor of the best consequences. The addition is the extension to the best for the community. Therefore, if I get home from cheating on my spouse, and am asked where I have been, Rule Utilitarianism collapses into Act Utilitarianism. Even though truthfulness is generally accepted as a moral rule, this incident could justify lying as the greater good for the community. If genocide is generally accepted to promote better consequences, then genocide is morally right.

2C1. Consequentialism, regardless of type, is fundamentally mistaken.

Utilitarianism is consequentialism and bases morality in consequences. For a utilitarian, no action whatsoever is wrong in itself. Whether an act is morally right depends only on the consequences of that act or of something related to that act, NOT on the act itself. The end can always justify the means. This is a huge issue with utilitarianism. Imagine the most grotesque and inhumane action possible; there is likely a scenario in which it is justified by utilitarianism. And regardless of justifiability, utilitarianism requires that it be calculated before it can be judged wrong.

2C1A. Felicific Calculus
Utilitarianism utilizes the Felicific Calculus when promoting the greater good. The pain-pleasure of one moral decision is weighed against that of another [2]. June 21, 2014: a 20-year-old woman was raped and hanged from a tree in Pakistan – by three men [3]. Before a utilitarian can pass a moral judgment in this case, the theory requires that they take into account the pleasures experienced by the three men.

2C1B. Lack of Moral Obligation
Imagine: You see a train hurtling down a track heading towards five people – they are tied to the track. Fortunately, you can flip a switch that will divert the train onto another track away from the five. However, this comes at a cost… there is one person tied to the other track. Now, most people would not hesitate to flip the switch in light of the five they are saving. Very rarely is the one’s life valued over the five. This is why utilitarianism is often an easily accepted theory. Until…
Imagine the same situation. This time, the only way to save the five is by pushing someone in front of the train. While this would be the utilitarian decision, the only people that actually did this (in the simulated experiment) were sociopaths [4]. Sociopaths have no sense of moral obligation [5], which could be why they have no trouble pushing the man in front of the train.
A utilitarian is required to push the man in front of the track in this case, even if this man is his family member. A utilitarian is required to leave the train un-switched if the five are orphaned babies that nobody cares about, and the one is a member of One Direction.
Utilitarianism revokes personal autonomy.

2C1B. Controlling the Consequences
How can a person be held morally responsible for that which is out of their control? And, since a utilitarian’s behavior is based on their calculation of consequences, should their action be judged on that calculation, or on the actual consequence? Really, the morality of the action is just that: the morality of the action itself, regardless of consequence.

2C2. Alternatives

While pro does the utilitarian shuffle in justifying (or un-justifying) the examples given, I will provide alternatives that reduce the moral uncertainty. Each of the following sub points addresses the morality of the action itself, regardless of the consequence. The means is the end in these cases.

2C2A. Contractarianism [10]
Contractarianism legitimizes political authority because the moral norm for right or wrong action (and how right or wrong) is defined by a contract: a mutual agreement of morals. The purpose of government is to hold that contract. An example is human rights.

2C2B. Human Rights
Type of contractarianism; for a full list of your human rights, as defined by the United Nations, see The Universal Declaration of Human Rights [7].
Human rights are rights: “rights are entitlements (not) to perform certain actions, or (not) to be in certain states; or entitlements that others (not) perform certain actions or (not) be in certain states” [8].
Human rights are universal: all living persons are born with these rights. One’s human rights are not to be discriminated against. The second strongest just cause for war (after self-defense) is to protect the most fundamental human right, life; i.e., going to war to prevent or end genocide.
Human rights have high priority: some call this inalienable, but this term can be misunderstood. Inalienable does not mean that rights are sacred; this would annihilate the death penalty. Rather, inalienable means that the rights cannot be taken away or given away [9].

2C2C. Kantian Ethics
Kantian philosophy is quite arguably the most beautifully constructed theory I have ever read. Kantian ethical theory ties in to his philosophy. Central to Kantian philosophy is human autonomy, understanding, and rationality. Because humans share rationality and freedom of reason, Kant would say that morality is also known. He gives his theory of morality through categorical imperatives:
The first categorical imperative: “Act only on that maxim through which you can at the same time will that it should become a universal law.” Basically, do not do it if it kant be applied to everything. Before I give moral examples, I will provide concrete and tangible examples for comparison:
White is light in color.
To say, “All white should be dark in color” is logically contradictory. Either there is no white, or, it should not be dark in color.
(Continue applying structure)
A square is made of straight sides.
[…] “All squares should have round sides” […] either there is no square, or, it should not have round sides.
Truth is factual.
“All truth should be fictional;” either there is no truth, or, truth should not be fictional.
Property is given.
“All property should be taken;” either there is no property, or, property should not be taken.
Life is alive.
“All life should be dead;” either there is no life, or, life should not be killed.
This first categorical imperative extends to a number of contradictions: suicide (dying should improve life), breaking the law (all rules should be broken), laziness (all talents should be unused).
The second categorical imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” Basically, this means not to use people merely as a means to an end. This is because using someone takes away his or her rational autonomy, thereby violating the first categorical imperative.

2C3. Why the alternatives, and non-consequentialism, could be considered better than utilitarianism.
The alternatives given put people in control, and hold them responsible. Are people obligated to do what is right? If so, then morality should be based in actions. Or are people obligated to bring about what is right? If so, then morality should be based in consequences. But what if the outcome is wrong? The action must be done before the effect can occur; the effect does not precede the action. Man is responsible and obligated to perform right action before right consequence. Utilitarianism puts the cart before the horse...

Debate Round No. 2


3P0: Definitions

3P0A: “The definition of value ... is ... contradictory to utilitarianism.”

1: The definition doesn’t really affect the debate; value is that which should be maximized.

3P0B: “ ‘Utilitarianism: ... right conduct should be ... usefulness of ... consequences; specifically ... the greatest happiness of the greatest number[.]’ “

1: Con’s definition is fine until the semicolon. Utilitarianism is not inherently based on maximizing happiness, but the public good [1].

This can be seen in the definition of the word “utility”, or “the state of being useful, profitable, or beneficial,” [2].

Utility is that which maximizes the public good; different philosophers have different views of what this is.

As the IEP states, “Hedonistic Utilitarianism is rarely endorsed ... mainly because of its reliance on Prudential Hedonism as opposed to its utilitarian element. Non-hedonistic versions of utilitarianism are about as popular as the other leading theories of right action, especially when ... the actions of institutions ... are being considered,” [3].

Some of this confusion may come from Bentham and Mill’s use of happiness in their calculations of utility [1].

Regardless, I do not use happiness as a measure of utility; as I have pointed out in 2CVD, I base utility on that which maximizes potential value, namely that of increasing probability of determining value, and that through quality of life. (See 2CVD for further subsections of my definition of utility.)


3P1: Moral Uncertainty

3P1A: “Utilitarianism ensures ... uncertainty, unless ... consequences are ... certain.”

1: If one knows the impact of a consequence and the probability of it occurring versus the probability of another consequence occurring, one can multiply the probability by the impact to determine the actual value of each action and choose accordingly.

Most actions (stealing a car, saving a baby) have large-scale, probable, immediate impacts (broken rules, more lives).

Most actions (stealing a car, saving a baby) also have large-scale, improbable, and/or long-term impacts (nuclear holocaust, cure for cancer).

However, these improbable events can be effectively discounted, because they are so unlikely that their actual value is very, very insignificant.

Moreover, they will likely cancel out with other very, very unlikely events with a value in the opposite direction.

Moreover, any single action is unlikely to cause significant long-term deviation from what the world would have been without it, and so one can ignore most long-term impacts for small actions.
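The probability-times-impact weighing described in point 1, and the discounting of improbable events, can be sketched as a toy expected-value calculation. The events and numbers below are hypothetical assumptions for illustration:

```python
# Sketch of probability-weighted weighing: each outcome contributes
# probability * impact, so highly improbable outcomes contribute
# almost nothing and can effectively be discounted.
# All outcomes and numbers here are hypothetical.

def expected_value(outcomes):
    """Sum probability * impact over a list of (probability, impact) pairs."""
    return sum(p * impact for p, impact in outcomes)

# Hypothetical action "saving a baby":
save_baby = [
    (0.99, +100),        # probable, immediate impact: one more life
    (1e-9, +1_000_000),  # improbable, long-term impact: cures cancer
]

# The improbable term contributes only 1e-9 * 1_000_000 = 0.001,
# which is negligible next to the probable term's 99.
print(round(expected_value(save_baby), 3))
```

This mirrors the argument above: the very unlikely, very large impacts barely move the total, so the calculation is dominated by the probable, immediate consequences.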

2: By “moral uncertainty”, I do not merely mean that, under some ethical systems, we may not know what to do; I mean that, epistemologically, we do not know what is actually valuable, because we do not have perfect knowledge. Unless Con can objectively prove that something is valuable, we cannot assume that anything is valuable; in this absence, the only thing we can do is maximize potential value and attempt to find value.

3P1B: “Utilitarians value sentience ... because pleasure and pain is important to utilitarians. However, life ... has no intrinsic value[.]”

1: Utilitarianism within this debate is nonhedonic. See 3P0B.2.

2: Life has instrumental value because it increases the probability of determining value.

3P1C: “[T]he conflict of reason and sentience.”

1: I don’t see a conflict between reason, or “the power of the mind to think, understand, and form judgments by a process of logic,” [4], and sentience, or “the ability to experience sensations,” [5].

3P1D: “[T]he ... reason for ... values: to find ... values.”

1: I have justified this. See 3P1A.2.


3P2: Value Conflict

1: Con does not address that multiple values often conflict with each other and make decision-making impossible, as I show in my responses to 2C2B and 2C2C.


3P3: Rule Utilitarianism

3P3A: “Rule Utilitarianism collapses into Act Utilitarianism. ... [T]his incident could justify lying as the greater good[.]”

1: Rules also have utility, and violating them reduces that utility.

If everyone always acted towards the public good, then people would steal from stores whenever it would increase their wealth more than it would decrease the storeowner’s.

However, if everybody acted like this, nobody would open stores, and the total utility would be far less than one where anti-theft laws were enforced.

Only in situations where violating an anti-theft law would generate more utility than the loss of utility from the now-weaker-from-less-enforcement rule, such as stealing a gun in order to stop a suicide bomber, would the violation be justified.

3P3B: “If genocide ... better consequences, then ... morally right.”

1: Utilitarianism attempts to keep people alive and well, not to mass-murder them. Genocide would ONLY be justified to prevent a far greater loss of life.

2: Con makes moral assumptions about the badness of genocide. As agreed in Round 1, “No assumptions are accepted ... without proof.” Note, however, that I am not arguing that this action is correct.


2C1: Against Consequentialism

2C1A: “[A] ... woman was raped ... by three men[.] ... [U]tilitarian ... take into account the pleasures experienced by the three men.”

1: Utilitarianism within this debate is nonhedonic. See 3P0B.2.

2: Even from a happiness-based calculus, the suffering and death of the woman far negatively outweigh the pleasure of the three men.

3: Con makes moral assumptions about the badness of rape. See 3P3B.2.

2C1B: “[T]he only people that ... did ... were sociopaths. .... A utilitarian ... leave ... train un-switched if ... five are orphaned babies ... nobody cares about, and ... one is a member of One Direction. Utilitarianism revokes ... autonomy.”

1: According to Con’s source, Con means psychopaths, not sociopaths [6]. Further, Con’s source asserts merely that psychopaths do push, rather than that only psychopaths push [6].

2: 18% of those tested would push when asked in their native language, versus 44% when asked in a second language [7]. Apparently psychopathy comes and goes with language.

3: Only ~1% of people are psychopaths [8], though 18% would push [7]. According to Con, at least 1 in 5 are emotionless murderers.

4: One Direction has negative utility, and babies positive. Push away!

5: Con asserts that Utilitarianism removes autonomy without providing a reason. Further, Con makes moral assumptions about the goodness of autonomy. See 3P3B.2.
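The arithmetic behind points 2 and 3 can be checked directly. The percentages below are the figures cited above ([7], [8]); everything else is a straightforward subtraction:

```python
# Check on points 2-3: if only psychopaths (~1% of people) would push,
# at most ~1% of respondents could report pushing. The cited studies
# report 18% (native language) and 44% (second language), so the large
# majority of pushers cannot be psychopaths.

psychopath_rate = 0.01       # ~1% of people, per source [8]
push_native = 0.18           # per source [7]
push_second_language = 0.44  # per source [7]

# Even attributing every psychopath to the pushing group, at least
# 17 percentage points of pushers are non-psychopaths.
non_psychopath_pushers = push_native - psychopath_rate
print(round(non_psychopath_pushers, 2))
```

The gap between 1% and 18% (let alone 44%) is the basis for the rebuttal that "pushers are psychopaths" cannot be the whole story.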

2C1B: “[C]an a person be ... responsible for that which is out of their control? ... [S]hould ... action be defined on ... calculation, or actual consequence?”

1: No. Hence why we don’t punish people for not stopping earthquakes, and do for starving their children. Why is this a problem for Utilitarianism?

2: Reeducation should be based on calculation. Driving drunk is a result of a bad calculation in need of reeducation. Nothing else should matter in terms of criminal courts, etc.


2C2: Alternatives

2C2A: Contractarianism

1: Con provides a description, rather than reasons to accept.

2C2B: Human Rights

1: See 2C2A.1.

2: Any system of rights faces the value conflict problem of 2P2.

Imagine that I must either remove one person’s right to X or another’s right to Y.

If both are equal, then I cannot choose, and am stuck.

If one is more important than the other, this implies that there is a higher way to judge which right is more important, which means that a system of rights relies on another, higher level of morality, such as utilitarianism, and is not a moral system in itself.

3: Further, rights provide no limit; why only certain rights, and not others? This, again, implies a higher level of morality, to determine what should and should not be a right.

2C2C: “Kantian philosophy is ... beautifully constructed[.] .... Because humans share rationality and freedom of reason ... morality is also known. .... ‘Act only on that maxim through which you can at the same time will that it should become a universal law.’ .... ‘[T]reat humanity ... always ... as an end.’ ... This ... takes away ... rational autonomy ... violating the first categorical imperative.”

1: Beauty is not truth.

2: Please show how the existence of rational, free humans makes morality known.

3: Kant’s First Formulation relies on non-specificity. While action A, or “stealing”, if universalized, would result in a non-existence of property rights, action B, or “stealing on Wednesdays while wearing a red shirt in London at the Tea & Spices Shop during a lunar eclipse”, would not. All one must do to evade the First Formulation is define actions as extremely specific. What is too specific and what is not? Kant provides no method.

4: Kant’s First Formulation is absurd. Action C, “making a poor person not poor”, if universalized, would result in a (at least temporary) non-existence of the poor, making Action C immoral. So, in fact, would ANY action that causes a complete change. Action D, “promoting my entry-level employee”, would result in a non-existence of entry-level employees, making Action D immoral. This is ridiculous.

5: Imagine that I must choose between stealing and lying. Which do I do? See 2C2B.2.

6: Kant’s Second Formulation relies on the First, which has been proven absurd and contradictory. (And the unstated Third is pretty much the First.)


2C3. Choose Alternatives

2C3A: “The alternatives ... put people in control, and hold them responsible.”

1: Unlike Utilitarianism how?

2C3B: “[M]orality ... in consequences. ... [W]hat if the outcome is wrong?”

1: See 2C1B.2.

2: Well-intentioned actions can cause harm, and vice versa. Utilitarianism recognizes this.













alyfish126 forfeited this round.
Debate Round No. 3


I ask that voters not penalize Con for forfeiture.

Con has not successfully demonstrated any ethical system to be superior to Utilitarianism, while Pro has demonstrated Utilitarianism to be useful for three reasons. Vote Pro.

Thanks for the debate, aly. I would love to have it again, if you so desire.


Con has given some reason as to why utilitarianism is a backwards ethical theory, and provided the alternatives that our laws and human rights are built on. If you are utilitarian, then vote con because it would be more profitable or beneficial to con than to pro. If you are not utilitarian, vote for con because pro did not prove utilitarianism is the best.

Thanks for the debate, Fuzz! I enjoyed it and I would love another go as well... To be continued!?
Debate Round No. 4
15 comments have been posted on this debate.
Posted by alyfish126 7 years ago
No problemo!
Posted by FuzzyCatPotato 7 years ago
if u posted @ 0 mins, that'd give me till Thurs @ 6PM. that'd be great ty for ur help! ^_^
Posted by alyfish126 7 years ago
Dang!! I'll post it at the last minute.. I think you'll have most of thursday that way
Posted by FuzzyCatPotato 7 years ago
From Monday till Wednesday, I'm going to be gone. So I might not be able to post a response. :/
Posted by alyfish126 7 years ago
About dang time!!
Posted by FuzzyCatPotato 7 years ago
Sorry for using the Deontology definition rather than the Axiology definition, it was a few characters shorter.
Posted by FuzzyCatPotato 7 years ago
Sorry for the wait, 5 debates of mine needed an argument. I'll post tomorrow.
Posted by FuzzyCatPotato 7 years ago
Finally, someone else used my numbering system!
I'll take that as a concession, then?
Posted by alyfish126 7 years ago
Let the games begin!
No votes have been placed for this debate.
