The Instigator
longjonsilver
Pro (for)
Losing
12 Points
The Contender
YaleMM
Con (against)
Winning
21 Points

Consequentialism is the most justifiable ethical theory.

The voting period for this debate does not end.
Voting Style: Open | Point System: 7 Point
Started: 12/18/2007 | Category: Society
Updated: 9 years ago | Status: Voting Period
Viewed: 4,975 times | Debate No: 673
Debate Rounds (3)
Comments (4)
Votes (11)

 

longjonsilver

Pro

Natural rights and deontological rules are arbitrary claims. There is not one sound justification for a deontological ethical system. I challenge you to prove me wrong. If the means justify the ends, then how do we know what just means are?

Moral relativism is weak. You cannot claim that the ethical status of a particular action is founded in the disposition of the majority. Was slavery acceptable? It seems to me that moral relativism would have to say that it was.

What does this leave? The only remaining option: consequentialism. Everyone subconsciously knows this. That's why almost every debate you see on this website banks on the effects of a policy or idea. Any other argument must function off of some kind of implicit axiom that could easily be rejected by an opponent. Utility is the only thing that is intrinsically valuable, and disutility is the only thing that is intrinsically evil. How could any ethical system function off of anything other than the maximization of the valuable and the minimization of the undesirable? Moreover, consequentialist views, at minimum, should be the only form of acceptable debate in government and the public, because it is the only system of thought that uses axioms that EVERYONE accepts.
YaleMM

Con

A system of ethics, it seems to me, is best defined as a system of behavior whose aim is to produce moral results. Since consequentialist morality most often defines itself as "morality aimed at 'good,'" it is tough to formulate arguments against - I have a couple, but the bulk of my argumentation is going to be ethical, not moral.

First the moral argument:

So there are two basic responses to, and problems with, a purely utilitarian framework. The first is the Utility Vampire (or monster, which sounds funnier) and the second is the repugnant conclusion.

- Utility Vampire: So if you have someone in your system who gets a LOT of utility from torturing people, like, -way- more utility than the disutility it produces, is this a moral action?
- Repugnant Conclusion: Doesn't utilitarianism suggest that the optimal situation for a society to be in is to have the maximum number of citizens possible while still maintaining the barest possible positive utility value? That seems awful, intuitively.

But really, again, the proposition was about ethics. So while I sort of agree with you that ontology is a weird way of looking at -morality-, it seems like a perfectly justifiable way of looking at ethics, and also the closest to the system of ethics we actually employ most often.
Using consequentialism as an ethical tool depends entirely on individual actors' ability to determine the course of action that is going to have the most moral consequences in ALL situations. That's a huge burden for individuals to bear, one at which they are going to fail, and often; it also will not produce results that everyone will agree upon as being beneficial. There will be argument after argument about what produces the most utility. It's impossible to determine 100% one way or the other, so you are just going to have bunches of people making the wrong choices and arguing they were the right ones.
Better, then, to establish a set of rules, of ontological principles, that guide individuals towards what we usually consider to be moral ends. Sometimes these will be wrong, but I contend it will be less often than individuals privately making utility determinations for every action. Sometimes we call rules like this "rights" - exceptions to the standard of utility.

But -also-! Haha, I'm going to go ahead and make one final argument about a pet subject, moral relativism. Your own argumentation is extremely relativistic. "The undesirable" is what you claim disutility represents. I have made arguments before about infanticide that I sort of believe. Undesirable! I alluded to this above as well; the fact is that it is absurdly difficult to try and establish truly universal utility and disutility standards. Even the harm principle is pretty anemic.

We're all relativists underneath.
Debate Round No. 1
longjonsilver

Pro

YaleMM thank you for your response.

//Utility Monster (I've always preferred monster to vampire.)
This seems to be the standard strategy of attacks against utilitarianism. Frequently people go to great lengths to find a thought experiment that is not only highly unlikely (in a probability sense) but also impossible for the earth we live on. Realistically there aren't any utility monsters, and the existence of one is almost logically impossible. Imagine, say, a 20-year-old. If he fed himself to the utility monster, then the opportunity cost would be so large it would be impossible for the monster to be happier on net. A 20-year-old has somewhere around 60 years of utility ahead of him. It would be virtually impossible for even a "monster" to get that much happiness from one meal. So not only does this monster not exist, but if he did, he would have diminishing marginal utility like everyone else. This means that overloading someone's utility will never maximize the existence of utility, because an individual's utility graph is logarithmic.

Furthermore, if you don't like the last response, then it could be said that in this crazy situation it might be right to feed ourselves to the monster. I'll admit that intuitively it seems wrong, but we can't really answer that question, since we do not live in that type of world.

Also, I have made arguments before that utility gained from the direct disutility of others may, in a utilitarian society, be disregarded. The idea functions similarly to John Locke's idea of forfeiture. We do not necessarily have to respect the utility of those who get their utility from the disutility of others, because their desires are counterproductive and malicious. There are several reasons for this that may root from effects, but I don't think it's necessary to analyze them at this point.

//Repugnant Conclusion
I don't see why it implies this situation. In your last example you were telling me that we would have fewer people; here you're telling me that we would have more people. It has never been proven to me that utilitarianism would ask for a maximal number of people. If anything, you could take the same logic and turn it the other way around: we should have a minimum number of people, so that everyone can bask in an abundance of resources, so as to maximize utility and minimize pain. Just for a quick numbers game, I'll show you that more people are not intrinsically good even on a planet where everyone had a positive utility. World: 10 people, all with 10 units of happiness, making 100 units. World after Population Boost Policy X: 99 people with 1 unit of happiness, making 99 units. Now, I know that this is terribly simplified and kind of trivial, but the point is to show that more people are not necessarily good. I'd argue that it is impossible to know if any kind of population manipulation could help the total utility, so for this reason we should remain with a free market in regard to population.
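The numbers game above can be sketched as a quick back-of-the-envelope calculation. This is only a toy model using the hypothetical figures from the round, not a real measurement of utility:

```python
# Toy model from the round: total utility is the simple sum of
# identical individual utilities.
def total_utility(population, per_person_utility):
    return population * per_person_utility

before = total_utility(10, 10)  # 10 people at 10 units each -> 100 units
after = total_utility(99, 1)    # 99 people at 1 unit each  -> 99 units

# More people, everyone still positive, yet less total utility.
print(before, after, after < before)  # 100 99 True
```

On this simple summing rule, "mere addition" of barely-happy people can lower the total, which is exactly the point of the example.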

Overall: both of these arguments are based on intuitive assumptions that have not been proven and are not necessarily shared by all. I think any kind of disproof of the desirability of these two situations would in itself prove why a utilitarian wouldn't support them. I know it's kind of cheap, but that's why I say consequentialism is the easiest ethical theory to justify. So, the main point is that if you were to disprove the morality of these two situations, it would almost certainly rest on the undesirability of the outcome. And if you actually prove that the outcome is not desirable, then you have justified why a consequentialist would not support your thought experiments. It's a win-win for a consequentialist. It's almost true by definition.

As for the rest: I mistakenly did not clarify that I use ethics and morality interchangeably but I will try to acknowledge your differentiation.
Here you have framed me with something I, and most consequentialists, do not believe. It is no secret that it would be a shady world if you left moral decisions entirely in the hands of the people. Obviously it would not be acceptable or desirable (by all standards) to give people the right to do anything as long as it was in the name of utility, specifically for the reasons you have given. It is not acceptable for a standard individual to consider the utility of others and make rash solo decisions. Because of this, ANY utilitarian society would address this problem, most likely by creating a set of rules. (This is why I think rule and act utilitarian societies converge.) These rules would not be arbitrary or based on the fallible intuition of a select few, but rather on the assumption that the set of rules that maximizes utility would be the best system. This could be measured in a fairly objective sense. The goal of ethics should be to support a maximally successful economy (keeping in mind diminishing marginal utility) with minimal infringements on the functions of life that people enjoy.

But it is specifically for the reasons that you've stated that a utilitarian society would have a set of rules and not allow each individual to handle complex decisions on his own. The way I see it, this part of our argument could easily be viewed as discourse between two utilitarians on what a utilitarian society should look like. For this reason I think rights are not "exceptions" but rather tools for the maximization of utility.

Finally: I'm a little unsure about the intricacies of your last paragraph, so if I bastardize it, please forgive me. Remember, an argument against the difficulty of deciding what utility is in a practical sense is not necessarily an argument against the philosophy's truth, but rather against its practicality. But if these difficulties exist, then utilitarianism would merely support whatever system produced a successful society. Also, I think utilitarianism can be as relativistic as what you are saying. Of course different people prefer different things, and therefore utilitarian societies that encompass different people will almost always differ in appearance. However, the basic goals will be the same for all. My arguments were mainly against those (if there are any) who say that the morality of something is based upon what most people think is moral, and of course I used the cliché example of slavery. The reason for this is that most people once thought that slavery was moral.
YaleMM

Con

So first off, thanks for the interesting debate.
But secondly, just to make this clear: while it may seem that you have a tautologically true position ("that morality is best which produces good"), such that any argument I make as to why some other morality is better, so long as it is rooted in producing good, will be buying into that utility structure, I think I actually have the significantly more definitional side of this discussion - a fact I don't think you are going to be able to get out from under.

Look at it this way: You say that you can't imagine a more clear vision of morality than that morality which produces utility. You say in essence that the only thing that is intrinsically (definitionally) true is that good actions are good. But that sounds like a universal truth. Like an -ontological-truth. Actions that are utile, are inherently good. That's ontology.
Gotcha.

The fact is that we can't escape our intuitions, and our intuitions all line up like this, so calling it Utilitarianism simply because the -only- ontological maxim the theory holds is "it is good to produce good" doesn't make it any different than a very singular application of Ontological Morality.

But addressing the specific arguments:

- Utility Monster -
Your 4 responses:
1) Low probability of finding one
2) Couldn't enjoy eating people that much
3) Maybe we should feed them, but they don't exist, so so what?
4) Possibly societies can ignore monsters.

1) Not enough of a defense here. The moral realm is the realm of the theoretical, and as such has to be capable of robust responses even to out-there claims. It is possible to envision a world in which this kind of monster is more common; would morality cease to function there? If so, that's a pretty thin morality.
2) I just don't buy this, but it also seems to highlight the relativism of utility's definition. 60 years of utility is what? 60...utiles? Quantitatively measuring these things seems ridiculous. Anyway, for the sake of argument, I promise, PROMISE, that this utility monster freaking loves eating people. Moreover, he loves it more when they are in a position to lose utility, so much so that his utility gain every time is X + a billion utiles, where X is the amount they stand to lose. Moral?
3) This is the part where you go: "Yeah, maybe moral, but we don't have to deal with it because this is intuitive analysis in a made-up world." Which is a good response, but I'm not sure a sufficient one. The problem is that morality is bound to our intuitions, and if I can construct what amount to false sentences using your moral terms, then the moral system seems incorrect, contraindicated by our understanding of the universe.
4) Why would societies want to? If the society is utilitarian, and these monsters are -hugely- satisfied, then logically (though not intuitively) we just feed the monsters.
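The stipulation in point 2 can be put in arithmetic terms. This is a toy sketch with made-up numbers, meant only to show why a pure summing rule always approves the monster:

```python
# Hypothetical "utility monster" from point 2: whenever a victim
# stands to lose X utiles, the monster gains X + one billion utiles.
BONUS = 1_000_000_000

def net_change(victim_loss):
    monster_gain = victim_loss + BONUS
    # The victim's loss always cancels out of the total.
    return monster_gain - victim_loss

# However large the victim's stake, the sum of utilities rises by the
# same billion utiles, so a purely aggregative rule endorses the act
# every single time.
print(net_change(60), net_change(10**12))  # 1000000000 1000000000
```

The point of the construction is that no value of X ever tips the calculation against the monster.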

- Repugnant Conclusion -

Also called the Mere Addition Paradox, and the issue here is again one of incoherence in utilitarianism. It's totally boring, though. Like, I think the paradox is a valid one, and I'll present it here in total if it becomes really important, but I kind of hate talking about it and I'm not resting on it very heavily. Anyway, your last claim under this argument is telling, in which you indicate that it doesn't matter because we shouldn't adopt a policy based on this paradox given the impossibility of knowing if it will work. This is A) a concession of the impossibility of using a utilitarian calculus properly in anything except the very small scale, and B) pragmatic analysis that you will later say is unimportant: "Remember, an argument against the difficulty of deciding what utility is in a practical sense is not necessarily an argument against the philosophy's truth, but rather against its practicality."

So. Ethics!

I'm sorry this isn't what you imagined the debate to be about; it's one of the things I find most interesting about philosophy, the divide between ethics and morals. It is what I believe is the constructive debate to be had, so thanks for engaging in it.

Ethics are a practical subject, so yes, the problems that I illustrated in reaching a group consensus about what is utile in any given situation still apply.
When you say "These rules would not be arbitrary or based on the fallible intuition of a select few, but rather on the assumption that the set of rules that maximizes utility would be the best system," it seems like you are just asserting that the assumption-based rule set that is trying to maximize that really vague goal wouldn't be arbitrary, intuitive, or exclusive to a few.

But! To advance what I think is a clever argument: ethics is something that must be taught, and must be easily understood, if it is to be particularly effective at creating moral ends. Bearing this in mind, I think that your "rule-based ethics" is basically just another way of saying ontological. In teaching, we wouldn't sit kids down and go, "Ok, now if you take your sister's toys it reduces net utility"; we say, "Stealing is wrong." The ethics that is most enduring, most easily accessible, and most in concert with our other notions and intuitions about the order present in the universe is that kind of ontological presentation. Ontology, while it may be a bit dodgy, is really -effective- as an ethical system of behavior to advance; on that ground I find it eminently easy to justify.

The problem with discussions about morality like this one is that if -you- get to say: "any system that has good results is utilitarian" then conversely -I- get to say: "any system that proposes an absolute is ontological."
In the end, we are talking about the same thing. The serious question then is which of us gets there better.

I think it's me.

I think it's easier to understand and obey propositions of moral truth than it is to weigh utility constantly, be wrong frequently, and never escape the relativism you purport to defeat.
Debate Round No. 2
longjonsilver

Pro

Definitional side? I'm unsure as to what you mean here. My claim is that most of your attacks try to pin me to an action that will result in an awful society. What I'm saying is that you can't make these attacks because you are implicitly assuming the principle of utility.

Sorry if I boiled down to something tautological like "good things are good." What I mean to get to is that the fulfillment of wants is good. This must be true because everyone does this for themselves continuously. It is impossible to bring me a situation in which the fulfillment of a want is bad unless you stipulate that the fulfillment harms the wants of others. Naturally the violation of someone's negative want would follow as the intrinsic bad, whereas anything between would be morally neutral. It's not meant to be any kind of universal truth or divine type idea, it's merely what people want. Furthermore, if every individual seeks to fulfill his wants then society's actions should follow. In other words, if everyone acts consequentially to themselves then society should act consequentially to itself.

This 3rd paragraph of yours is a little hard for me to interpret but I think I have a response that will work. I think you want to claim that our intuitions line up on more than they actually do. It is my claim that the ONLY common factor between all humans is the desire to fulfill wants.

As for the utility monster.
Your Responses:
1) Not enough defense. What about a world with many UMs?
2) Quantifying. A stipulation that corrects for my objection.
3) Morality is bound to intuition. If we construct something false then the system is false.
4) Ignoring contradicts the principle of utility.

My Responses:
1) Other than mitigation, what was mainly intended with this response was another implicit argument. (Sorry for not making it clearer.) The point is that hypothetical, earthly-impossible, extreme examples will most likely exist for any theory. Sure, it may cast some form of intuitive doubt onto the idea, but to say that this type of thought experiment can be considered decisive would throw us into nihilism. For example, what if everyone's intuition held that it was acceptable to enslave someone (as it once did)? Would this now become moral? You may not think that this is a powerful enough example, but you should at least see my point. With enough deliberation, almost any kind of thought experiment can arise. The question is how important and probable the example is. I say that this one is too far-fetched. As for your last concern under this objection: I'm not exactly sure what the society would do. I would guess that people would be rationed to the UMs. Whatever the response was, the system wouldn't fall apart.
2) My goals were mitigation. (I'll address quantifying later.)
3) Here is where my biggest objection lies. This can and will serve as my response to most all of your case. I have two objections here. First, I would like to question your assumption: morality is NOT bound to our intuitions. Bodily reaction to an idea is nowhere near relevant to truth. For example, it is almost everyone's natural intuition that heavier objects fall faster; however, this is simply not true. Intuition is frequently flawed and is by no means infallible. (For more depth here, look for a moral dilemmas debate that I will propose soon.) But even if intuition isn't flawed right now, it is certainly possible to imagine that everyone could have a clearly flawed intuition. (Take, for example, the slavery case that I proposed earlier.) Moreover, everyone's intuition is different. Therefore, intuition cannot be a defining mark, or otherwise an individual's action could be both good/right and bad/wrong. That's logically impossible and proves that intuition cannot establish any moral truths.

Now, if for some reason you disagree with my evaluation of intuition, then I will bring you a second argument working off of your assumption that morality is bound to our intuition. The problem here is that intuition will inevitably differ according to the planet that you live on. The point is that you are trying to apply an Earthly intuition to a situation that does not exist on Earth. Intuition roots from our existence, experiences, past, present, ancestors, biology, and more. It would be unjust to apply Earth intuition to the workings of a far different hypothetical society. Seeing that we don't know what the disposition of intuition in this place would be, it is impossible for you to claim that it is intuitively false. In fact, this proves that it could be intuitively true.
4) You're right that it does maximize utility to let the UM do whatever she wants, but I think you're unjustly categorizing me into act utilitarianism. (I know I claimed that they might converge, and I still uphold that, but for the purpose of this round I will let them separate.) The point of rule utilitarianism is to apply the principle of utility to the set of all possible rules for society and uphold the one that maximizes wellbeing. This makes it much easier to overcome your pragmatic worries about calculation and is one of the reasons that I support rule consequentialism. The claim is that society would inevitably include a rule that says that utility gained specifically from the disutility of others shall not be respected. Therefore, a rule consequentialist society would not allow this act.

Onward to the Repugnant Conclusion.
Your argument is that:
A) With my response I have conceded the impossibility of using a utilitarian calculus.
B) I have stated that pragmatic analysis is unimportant.

My responses:
A) Not true. With my response I have conceded the impossibility of knowing the EXACT total utility in the present and future, but this has not eliminated utilitarianism. One point is that there is reason to believe that if adding more people harms everyone, then the disutility of others in society would prevent this. But in truth I don't need to show that it would decrease utility, because the burden is on you to show that in some way it would likely increase utility. Until then, a utilitarian society wouldn't do anything in regards to population manipulation. Also, if necessary, you can cross-apply my attacks on your intuitive belief that this is wrong. You haven't shown why it would be, and, as I've said before, if you were to do this, then you would undoubtedly rely on the negative consequences of population inflation, meaning that you would show exactly why a utilitarian society wouldn't take that action.
B) When I say that pragmatic analysis is not important, I am claiming that pragmatism can never disprove utilitarianism in an ultimate-truth sense. Utilitarianism could easily be the ultimate truth in morality; it may just be that, unfortunately, it cannot be practiced. But in truth I think you have bastardized utilitarianism, and I will talk about it under ethics.

Unfortunately I have reached my character limit and must keep ethics as short as possible.
My claim is that any system that maximizes wellbeing AND was chosen because of this is, in fact, consequentialist. Your arguments are nothing other than a rule consequentialist's. If the underlying reason why we select your "Ontological" rules rests on consequentialism, then I have won this round, regardless of what we teach children. You have based your reasoning for ontological rules on the consequences of their existence. Your strategy (not deliberately) has never once been to claim that anything is intrinsically or a priori bad. That is what you needed to do to win this round. Instead, you have chosen the path of rule consequentialism and tried to win by pinning me to act utilitarianism.

PS: I have loved reading everything you have written, it's much better than what I experience face to face with my friends. I look forward to your response.
YaleMM

Con

Thanks, it has been a pleasure.

So there are two main concerns within this discussion. There is a moral issue, and an ethical issue.

The moral issue disintegrates in the final analysis, because of what I noted in my second argument - neither of us has a unique claim to the moral system under discussion, conceded by both of us to be "Actions are good when they create good in the world." I haven't been arguing very strongly for some kind of more elaborate ontological construction, rather I have been attempting to appropriate that moral stance under an ontological view. I have succeeded to the best of my knowledge.

You say: "What I mean to get to is that the fulfillment of wants is good. ... It's not meant to be any kind of universal truth or divine type idea, it's merely what people want."

The trouble is that even if it is not -meant- to be a universal truth, that is what you are outlining. Every instance in which your rhetoric becomes "this must be true" or "this is always true" or "definitionally" you are, tacitly, conceding an ontological truth as the basis for your consequentialism.

You note my strategy as lacking an illustration that "anything is intrinsically or a priori bad." To which I would respond twofold.
First, I do not have to; I merely have to indicate that something is intrinsically good, which I have done by agreeing with you that actions which create good are intrinsically good. Because good is intrinsically good, any action that creates it is consistent with an ontological morality that only has that one dictum.
Secondly, I could argue that any action that creates disutility is -intrinsically- bad by being connected to disutility.

So again, we are at a draw. I must argue for a system which produces good results in the world, and thereby I am arguing for a utilitarian system. You must argue that utility is -always- good, intrinsically good, and good without exception. That makes you argue for an ontological system.

What this wash on the moral level leaves us with, is the second concern - Ethics.

Ethics are different from morality insofar as they make no truth claim about the nature of the universe. Ethics are systems that attempt to -achieve- morality. Their measure is therefore in efficacy, not in truth.
It is possible, bearing this in mind, to have an Ontological Ethics that supports a Consequentialist Morality.

That is what I have justified in this round.

I have justified it by appeals to common intuition, by statements of its accessibility to those being taught the ethical system, and by pointing out some of the difficulties in a strictly utilitarian ethic.
I think you answered a lot of those difficulties admirably, but since the moral issue is one that can't particularly impact the round, and it was primarily claims of moral truth that informed your ethical analysis, I think I win the day. Saying that the "underlying reason why we select your 'Ontological' rules rests on consequentialism" is not an argument as to my Ontological Ethics being less effective, and thereby less justified, than your utilitarian ethical system.

No one uses Utilitarianism in a way that doesn't just look like Ontology. No one gets out the calculator and goes: "Ok, if I tell my friend his boyfriend is cheating on him, that will create 60 points of disutility, which I can offset with the generally utile outcome of honesty, worth 70 points of utility."
What we do instead is ontological. We employ an ethical system in which we use intuition (which, yes, varies, although perhaps less widely than you indicated) and we say to ourselves: "This is the right thing to do."

That is the ethical system that is most easily justified, because it's the one that works like we want it to. Underlying this is the Consequentialist position, yes, we all want things to work out well; but underlying that is the Ontological position: things working out well is universally, unfailingly, the right way for the world to work.

All we really have is our intuition. It is intuition that solves the problem of causality, of free will. We rely on our intuitions to tell us that Modus Ponens works. What we do, then, when we are constructing an ethical system is pick one that works, that feels correct, and that we can teach to others to guarantee a perpetuated morality.

It is this that is easiest to justify, and this isn't Consequentialism.

Thank you for an excellent debate.
Debate Round No. 3
4 comments have been posted on this debate. Showing 1 through 4 records.
Posted by Daxitarian 9 years ago
Daxitarian
What I would have liked to see more of in this debate are other theories that are consequentialist but not utilitarian (e.g. Aristotelian virtue ethics or ethical egoism).

The reason that I think consequentialist ethical theories are better than deontological is that they better account for the things we like from the other theory. I might like freedoms and rights, not because they adhere to any sort of procedural view of the good, but because they yield better results.

And procedural ethical theories mostly trace their origins to Kant, and that CAN'T be right (get it, Kant/can't? I make funny, right?)
Posted by spencetheguy 9 years ago
spencetheguy
Success is a journey, not a destination. The path we as individuals make is more important than the result.
Posted by YaleMM 9 years ago
YaleMM
That's interesting and true, but even were I to concede that I was defending what you outline, I don't see a really compelling reason to compromise my maxim there. It's like, yeah, he would be happier, but I don't have to defend maximizing happiness. Also, so what if it's immoral? Doesn't mean you can't do it. Intuitively your trap functions; it's like "but it -shouldn't- be immoral," and since I'm a fan of intuitions I would, hypothetically, suggest examining your maxim.
Posted by dullurd 9 years ago
dullurd
yalemm, I don't think it's unfair to introduce constraints of reality to moral systems. It certainly doesn't make a morality "thin" that it doesn't work under wildly unusual conditions. There are test cases that will make every reasonable moral system fall. For example, you're defending deontology; I think one could agree that a reasonable deontological maxim could be "don't punch people in the face unless it's self-defense." But then let's say there's a "masochist monster" who LOVES to be punched in the face and furthermore, he's a superhero, so getting punched won't damage him in any way. So when a Kant disciple ponders punching masochist monster in the face, the main reasons for that maxim are null and void, but due to the uncompromising nature of deontology, the maxim stands.
11 votes have been placed for this debate. Showing 1 through 10 records.
Vote Placed by Oolon_Colluphid 9 years ago: more convincing arguments - longjonsilver (3 points). Total points awarded: longjonsilver 3, YaleMM 0.
Vote Placed by spencetheguy 9 years ago: more convincing arguments - YaleMM (3 points). Total points awarded: longjonsilver 0, YaleMM 3.
Vote Placed by jwebb893 9 years ago: more convincing arguments - YaleMM (3 points). Total points awarded: longjonsilver 0, YaleMM 3.
Vote Placed by Eiffel1300 9 years ago: more convincing arguments - YaleMM (3 points). Total points awarded: longjonsilver 0, YaleMM 3.
Vote Placed by Princeton_A 9 years ago: more convincing arguments - YaleMM (3 points).
Total points awarded:03 
Vote Placed by YaleMM 9 years ago
YaleMM
longjonsilverYaleMMTied
Agreed with before the debate:--Vote Checkmark0 points
Agreed with after the debate:--Vote Checkmark0 points
Who had better conduct:--Vote Checkmark1 point
Had better spelling and grammar:--Vote Checkmark1 point
Made more convincing arguments:-Vote Checkmark-3 points
Used the most reliable sources:--Vote Checkmark2 points
Total points awarded:03 
Vote Placed by tus_mama 9 years ago
tus_mama
longjonsilverYaleMMTied
Agreed with before the debate:--Vote Checkmark0 points
Agreed with after the debate:--Vote Checkmark0 points
Who had better conduct:--Vote Checkmark1 point
Had better spelling and grammar:--Vote Checkmark1 point
Made more convincing arguments:Vote Checkmark--3 points
Used the most reliable sources:--Vote Checkmark2 points
Total points awarded:30 
Vote Placed by gogott 9 years ago
gogott
longjonsilverYaleMMTied
Agreed with before the debate:--Vote Checkmark0 points
Agreed with after the debate:--Vote Checkmark0 points
Who had better conduct:--Vote Checkmark1 point
Had better spelling and grammar:--Vote Checkmark1 point
Made more convincing arguments:-Vote Checkmark-3 points
Used the most reliable sources:--Vote Checkmark2 points
Total points awarded:03 
Vote Placed by mikelwallace 9 years ago
mikelwallace
longjonsilverYaleMMTied
Agreed with before the debate:--Vote Checkmark0 points
Agreed with after the debate:--Vote Checkmark0 points
Who had better conduct:--Vote Checkmark1 point
Had better spelling and grammar:--Vote Checkmark1 point
Made more convincing arguments:-Vote Checkmark-3 points
Used the most reliable sources:--Vote Checkmark2 points
Total points awarded:03 
Vote Placed by longjonsilver 9 years ago
longjonsilver
longjonsilverYaleMMTied
Agreed with before the debate:--Vote Checkmark0 points
Agreed with after the debate:--Vote Checkmark0 points
Who had better conduct:--Vote Checkmark1 point
Had better spelling and grammar:--Vote Checkmark1 point
Made more convincing arguments:Vote Checkmark--3 points
Used the most reliable sources:--Vote Checkmark2 points
Total points awarded:30