
Utilitarianism: Responding to Objections

Fkkize
Posts: 2,149
5/15/2015 9:25:18 AM
Hello again!
Here I will respond to the thread "Your Objection to Utilitarianism" http://www.debate.org...
Thanks again to everyone who participated! I rephrased some objections (Os) to shorten them and make their points clearer. If you think that I (1) misrepresented your O or (2) my answer was not satisfying, please let me know.

Caveat: what is "utility"?
I think it is very important to clarify this first. This term has been, and still is, used with different meanings, including the:
1. Mental state account
2. Informed desire account
3. Objective list account

I subscribe to the mental state account, for a couple of reasons (not important right now). When Bentham talked about "pleasure" and "pain" he meant positive/negative mental states, and this is how I use the terms.

The Responses (in no particular order)

1. Ultimately the ends justify the means
This is not an unrestricted truth: not all ends justify all means. Of course, only utility-maximizing ends justify the corresponding means, but stated that way the objection is circular. Showing what is wrong with a particular end justifying a particular means would be a real objection.

2. Consequentialists need to consider an infinite number of possible consequences
It is undoubtedly true that we cannot calculate every consequence of every single action, both because of our lack of knowledge and because of the complexity of such calculations. However, I do not propose "actual" consequentialism, but "expected" consequentialism instead.
Actual consequences vs. expected consequences, an example:
Imagine a mother cooking lunch for her children. What she does not know is that the ingredients are poisonous, hence her children fall ill.
It is not reasonable to blame her, since the expected consequences were not harmful in any way. Another one:
Imagine a mother cooking lunch for her children, willfully using poisonous ingredients. However her children do not fall ill (perhaps she did not use enough poison/ the kids were immune, something along those lines).
Utility-minimizing consequences were expected, so it is reasonable to blame the mother.
We might not be able to calculate every consequence of our actions, but we don't have to in order to know approximately what will and what won't promote welfare.
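To make the distinction above concrete, here is a toy calculation. It is only an illustrative sketch: the probabilities and utility numbers are invented, and "expected" consequentialism is modeled as the probability-weighted utility of an act's foreseeable outcomes rather than of whatever actually happens.

```python
# "Expected" consequentialism: judge an act by the probability-weighted
# utility of its foreseeable outcomes, not by what actually happens.
# All probabilities and utilities below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for foreseeable results."""
    return sum(p * u for p, u in outcomes)

# An ordinary lunch: the foreseeable outcome is fed, satisfied children.
ordinary_lunch = [(0.99, 10),    # children fed and satisfied
                  (0.01, -5)]    # minor mishap (burnt food, etc.)

# A knowingly poisoned lunch: harm is the foreseeable outcome,
# even if the children happen not to fall ill.
poisoned_lunch = [(0.9, -100),   # children fall ill
                  (0.1, 10)]     # poison fails, children unharmed

print(expected_utility(ordinary_lunch) > 0)   # blameless in expectation
print(expected_utility(poisoned_lunch) < 0)   # blameworthy in expectation
```

The second mother is blameworthy even in the world where the poison fails, because the expected, not the actual, consequences carry the moral weight.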

3. Nozick's Utility Monster / The Omelas O
The utility monster is a hypothetical being which receives much more pleasure from every action than everyone else, ergo we should abandon our lives and serve it.
Omelas is a utopia because of the suffering of a single child. No suffering, no utopia.

I won't attempt to get around these objections by disputing whether they could actually be the case and so on (perhaps I could, who knows). I listed them together for one reason: I don't claim that U is a viable ethics for all possible worlds. As I see it, both scenarios are unrealistic and should not be taken as serious Os against an ethics that condemns very real problems (global warming, world hunger) like no other theory.

4. There are no higher/lower pleasures
I am not sure whether I should interpret Cowboy0108's O as "Treating all pleasures equally is bad" or "Not treating all pleasures equally is bad". Therefore I will just briefly explain how it is done.
Pleasure, as described above, means "positive state of mind". This can be anything: studying, partying, you name it. Restricting it to mean only "pleasures of the mind" or "pleasures of the body" is paternalistic and ultimately detrimental to society. It should be noted, however, that this does not mean that being an uneducated partier or a joyless scholar is equally desirable to a mix of both. Combine the two as you like. This is of course only unconditionally true as long as we are talking about an individual in a moral vacuum.

5. U fails to take into account rights
Here we have to differentiate between two kinds of rights: natural and legal.
U certainly is not concerned with natural rights, that is true, but "U is not natural law" is again a rather question-begging objection.
Morality is defined by welfare and rights are only useful so long as they actually promote welfare. If a right conflicts with the welfare of people, the utilitarian chooses welfare over rights. In reality rights protect against theft, murder and discrimination among other things, as such they promote welfare.

6. The organ donor O
The world is not full of ideal utilitarians. Allowing for such practices in reality opens the floodgates for abuses of all kinds: the wealthy using their power to "harvest" the poor, great fear in possible victims, and a perverse disregard for one's health. Moreover, you can't just randomly kill strangers and have their organs "fit" into others; it is at best a hypothetical situation.

Common-sense morality differentiates between direct and indirect harms/benefits. Take the common disapproval of the organ donor thought experiment and compare it to cases where people unwaveringly reject pollution control because it would lessen their abundance, or exploit the poor in other countries because they want cheap t-shirts, causing immense suffering and hundreds of deaths. To a utilitarian these "accidental" harms count just as much as intended ones. The utilitarian stands out not because of her cruelty or inhumanity, but because of her honesty and responsibility.

7. The witch O
I felt the need to list this one separately, because my response to it is decidedly different.
Context: "A witch is blamed for a wolf spirit that haunts the town. The town forms a mob to kill the witch, yet it's evident that the wolf spirit is caused by the corruption of the inhabitants of the town. You may side with the witch and kill the mob, or try to maximize unity and let the mob kill the witch."

I never played the game so I might be missing some important detail of witches, wolf spirits, corruption or the situation in general, however as I have stated before I am not claiming to have an ethics fit for all possible worlds and I apologize if my answer is not applicable because of my lack of knowledge.
It is obvious that the interests of the mob lack internal rationality.
A preference held by X is internally rational if and only if X would still hold it given proper information (relevant knowledge of the situation and its consequences). X then judges the rationality of her preference on her own. Since such an "ideally informed state" is probably not attainable, we can at least give greater weight to those preferences which are less likely to be discarded given the evidence. Why accept this filter? It is in the individual's own interest, because misguided preferences will more often than not stand in the way of her own goals.
If it is likely that the angry mob would stop trying to kill the witch given proper information, we should give greater weight to the interests of the witch and either help her escape or appease the mob; if everything else fails, I think it might just be justifiable to kill the mob, but again, I am not familiar with the situation.
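The filter described above can be sketched as a toy calculation. The function name, intensities and probabilities are all invented for illustration; the point is only that a preference gets discounted by how unlikely it is to survive proper information.

```python
# A toy sketch of the "internal rationality" filter: weight each party's
# preference by the estimated probability that it would survive proper
# information. Intensities and probabilities are invented for illustration.

def weighted_interest(intensity, p_survives_information):
    """Discount a preference by how likely it is to persist once its holder
    is properly informed about the situation and its consequences."""
    return intensity * p_survives_information

# The mob's wish to kill the witch rests on a false belief about the wolf
# spirit, so it is unlikely to survive proper information.
mob = weighted_interest(intensity=50, p_survives_information=0.1)

# The witch's interest in staying alive would survive any information.
witch = weighted_interest(intensity=100, p_survives_information=1.0)

print(witch > mob)   # the witch's interest receives greater weight
```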

8. People can and will lie to get what they want out of a utilitarian.
@Daktoria: The way I interpreted your first O is "Utilitarians will be exploited". If I am wrong about that, please let me know.
If a utilitarian were short-sighted and naive, then yes, that would be the case. However, maximizing utility in the long term is what Us strive for; if this means that a U has to do what appears to be selfish in the short term, so be it; this is perfectly in line with utilitarianism. No U would expect you to sell all your possessions and live like a saint.
: At 7/2/2016 3:05:07 PM, Rational_Thinker9119 wrote:
:
: space contradicts logic
Fkkize
Posts: 2,149
5/15/2015 9:27:01 AM
9. U denies the prerequisites of morality (which are the individual's life and freedom to live it)
The way this is phrased implies that U ignores these things all the time. This is not the case. In some cases, sure, U does ignore those two, but restricting freedom to some extent is the whole point of morality; all theories do this. As for someone's "right to life", it is ignored to some extent by all other ethics too; they are just less obvious about it. Trolley problem, anyone? Not to speak of world poverty again. Moreover, I am not sure why these would be the "prerequisites" of morality.

10. People are statisticians not statistics
(This one I saw in another thread)
People think this is an objection because of two assumptions: that everything involving calculation is unemotional, and that everything unemotional is inherently bad. Neither is true. There is no other ethics that revolves more around feelings than "mental state" utilitarianism. The objection results from a confusion of means (calculation) and ends (personal happiness).

11. U commits the naturalistic fallacy
I am a moral fictionalist and I construct ethics as a hypothetical imperative. I reject the existence of moral facts and categorical imperatives.

Moral fictionalism basically consists of two theses:
1. linguistic - utterances in moral discourse do not aim at literal truth
2. ontological - the entities the discourse revolves around do not actually exist
It is like talking about Superman vs. Batman.

I don't think it is possible to have any sensible moral discourse (ought) completely disconnected from personal desires and interests (is). Therefore I roughly formulate my hypothetical imperative like this:
If you want to live in a world with a maximum of welfare and happiness then you should act like a utilitarian.
Surrealism
Posts: 265
5/15/2015 11:18:54 AM
At 5/15/2015 9:25:18 AM, Fkkize wrote:
2. Consequentialists need to consider an infinite number of possible consequences
It is undoubtedly true that we cannot calculate every consequence of every single action, both because of our lack of knowledge and because of the complexity of such calculations. However, I do not propose "actual" consequentialism, but "expected" consequentialism instead.
Actual consequences vs. expected consequences, an example:
Imagine a mother cooking lunch for her children. What she does not know is that the ingredients are poisonous, hence her children fall ill.
It is not reasonable to blame her, since the expected consequences were not harmful in any way. Another one:
Imagine a mother cooking lunch for her children, willfully using poisonous ingredients. However her children do not fall ill (perhaps she did not use enough poison/ the kids were immune, something along those lines).
Utility-minimizing consequences were expected, so it is reasonable to blame the mother.
We might not be able to calculate every consequence of our actions, but we don't have to in order to know approximately what will and what won't promote welfare.

How has this answered the problem? I still need to find all the expected consequences, of which there will be an infinite number. Are you saying that it is dependent on what people want to happen? If so, then either you're encouraging people to be shortsighted to save time, or you're advocating deontology.
Ceci n'est pas une signature.
Fkkize
Posts: 2,149
5/15/2015 11:34:42 AM
At 5/15/2015 11:18:54 AM, Surrealism wrote:

How has this answered the problem? I still need to find all the expected consequences, of which there will be an infinite number. [1] Are you saying that it is dependent on what people want to happen?[2] If so, then either you're encouraging people to be shortsighted to save time, [3] or you're advocating deontology.

[1] No, the whole point of expected consequences is to limit the number of consequences under consideration. Could you list a few of these unlimited expected consequences that could result from the mother cooking lunch? As I see it, the expected consequence of lunchtime is satisfied children; if you want to calculate a lot longer, they will starve, decreasing utility.
[2] No, I say it is about what people expect to happen, not what they want to happen.
[3] If I wanted to sit around calculating everything that will happen if I donate blood, I would sit there all day for the rest of my life, never able to donate blood. In the end this is not shortsightedness; it is a way to increase utility.
I could think for a year to arrive at the conclusion that my donation will not go to a potential criminal, or I could just make 6 donations (or however many are allowed in one year) instead, because, prima facie, donating blood is a welfare-increasing act.
Fkkize
Posts: 2,149
5/15/2015 11:47:29 AM
At 5/15/2015 11:44:16 AM, Kozu wrote:
What if I don't care about well-being?

I don't think it is possible to have any sensible moral discourse (ought) completely disconnected from personal desires and interests (is). Therefore I roughly formulate my hypothetical imperative like this:
If you want to live in a world with a maximum of welfare and happiness then you should act like a utilitarian.
Kozu
Posts: 381
5/15/2015 11:59:14 AM
At 5/15/2015 11:47:29 AM, Fkkize wrote:
At 5/15/2015 11:44:16 AM, Kozu wrote:
What if I don't care about well-being?

I don't think it is possible to have any sensible moral discourse (ought) completely disconnected from personal desires and interests (is). Therefore I roughly formulate my hypothetical imperative like this:
If you want to live in a world with a maximum of welfare and happiness then you should act like a utilitarian.

Sounds like U isn't for me if I'm a masochist.
Fkkize
Posts: 2,149
5/15/2015 12:16:57 PM
At 5/15/2015 11:59:14 AM, Kozu wrote:

Sounds like U isn't for me if I'm a masochist.

I subscribe to the mental state account, for a couple of reasons (not important right now). When Bentham talked about "pleasure" and "pain" he meant positive/ negative mental states

As long as you are not an amoral masochist, you can still be a utilitarian. Pain being a pleasure to some, so to speak, is entirely compatible with U.
Cowboy0108
Posts: 420
5/15/2015 12:32:29 PM
4. There are no higher/lower pleasures
I am not sure whether I should interpret Cowboy0108's O as "Treating all pleasures equally is bad" or "Not treating all pleasures equally is bad". Therefore I will just briefly explain how it is done.
Pleasure, as described above, means "positive state of mind". This can be anything: studying, partying, you name it. Restricting it to mean only "pleasures of the mind" or "pleasures of the body" is paternalistic and ultimately detrimental to society. It should be noted, however, that this does not mean that being an uneducated partier or a joyless scholar is equally desirable to a mix of both. Combine the two as you like. This is of course only unconditionally true as long as we are talking about an individual in a moral vacuum.
According to Mill, the higher pleasures (pursuits of the mind) should rank above the lower pleasures. My argument was that what is considered a pleasure is different for everyone, and all pleasures should be lumped together into one pleasure, not higher and lower.
Fkkize
Posts: 2,149
5/15/2015 12:38:42 PM
At 5/15/2015 12:32:29 PM, Cowboy0108 wrote:
According to Mill, the higher pleasures (pursuits of the mind) should rank above the lower pleasures. My argument was that what is considered a pleasure is different for everyone, and all pleasures should be lumped together into one pleasure, not higher and lower.
OK, thanks for clarifying that. However I do not subscribe to Mill's account of "utility" and I don't see why anyone should. He uses "utility" sometimes in the broad sense like I do and sometimes in the narrow sense you describe. I fully acknowledge that what is a pleasure to some, is a pain to others.
Cowboy0108
Posts: 420
5/15/2015 12:44:25 PM
At 5/15/2015 12:38:42 PM, Fkkize wrote:
At 5/15/2015 12:32:29 PM, Cowboy0108 wrote:
According to Mill, the higher pleasures (pursuits of the mind) should rank above the lower pleasures. My argument was that what is considered a pleasure is different for everyone, and all pleasures should be lumped together into one pleasure, not higher and lower.
OK, thanks for clarifying that. However I do not subscribe to Mill's account of "utility" and I don't see why anyone should. He uses "utility" sometimes in the broad sense like I do and sometimes in the narrow sense you describe. I fully acknowledge that what is a pleasure to some, is a pain to others.
My philosophy professor loved to say that the higher pleasures were the way to go. He just thought about it from a smart person perspective, not the perspective of 99% of the population.
Fkkize
Posts: 2,149
5/15/2015 12:49:29 PM
At 5/15/2015 12:44:25 PM, Cowboy0108 wrote:
At 5/15/2015 12:38:42 PM, Fkkize wrote:
At 5/15/2015 12:32:29 PM, Cowboy0108 wrote:
According to Mill, the higher pleasures (pursuits of the mind) should rank above the lower pleasures. My argument was that what is considered a pleasure is different for everyone, and all pleasures should be lumped together into one pleasure, not higher and lower.
OK, thanks for clarifying that. However I do not subscribe to Mill's account of "utility" and I don't see why anyone should. He uses "utility" sometimes in the broad sense like I do and sometimes in the narrow sense you describe. I fully acknowledge that what is a pleasure to some, is a pain to others.
My philosophy professor loved to say that the higher pleasures were the way to go. He just thought about it from a smart person perspective, not the perspective of 99% of the population.
To some extent, sure: we should promote education, and a population of idiots won't really contribute to the utilitarian goal. But giving a higher status to pleasures of the mind by definition is paternalistic, revisionary and irrelevant to the people.
Kozu
Posts: 381
5/15/2015 2:17:53 PM
At 5/15/2015 12:16:57 PM, Fkkize wrote:
At 5/15/2015 11:59:14 AM, Kozu wrote:

Sounds like U isn't for me if I'm a masochist.

I subscribe to the mental state account, for a couple of reasons (not important right now). When Bentham talked about "pleasure" and "pain" he meant positive/ negative mental states

As long as you are not an amoral masochist then you can still be a utilitarian. Pain being a pleasure to some, so to say, it is entirely compatible with U.

Maybe I struggle to see the compatibility when my positive state of mind is so drastically different from another's. How can they be considered equally under U's plan to essentially maximize a positive state of mind, when the things needed to achieve those positive states vary so greatly?
Fkkize
Posts: 2,149
5/15/2015 2:36:15 PM
At 5/15/2015 2:17:53 PM, Kozu wrote:
As long as you are not an amoral masochist then you can still be a utilitarian. Pain being a pleasure to some, so to say, it is entirely compatible with U.

Maybe I struggle to see the compatibility when my positive state of mind is so drastically different from another's.[1] How can they be considered equally under U's plan to essentially maximize a positive state of mind, when the things needed to achieve those positive states vary so greatly?[2]
[1] But then it's a personal issue, not a problem of utilitarianism. If you are intolerant to other people's interests then yes, U is probably not for you.
[2] U is egalitarian and aggregative. Could you explain why we should treat them differently on non-particularistic/egoistic grounds?
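To illustrate "egalitarian and aggregative" with a toy sketch (the names and numbers are invented, and this is only one simple way to model aggregation): welfare is summed across persons, and a unit of positive mental state counts the same no matter whose it is or what produces it.

```python
# A minimal sketch of egalitarian aggregation: one unit of positive mental
# state counts the same no matter whose it is or what produces it.
# Names and numbers are invented for illustration.

def total_utility(welfare_by_person):
    """Sum everyone's welfare; nobody's welfare is privileged or discounted."""
    return sum(welfare_by_person.values())

# The masochist's pain-derived pleasure counts exactly like anyone else's.
society = {"masochist": 8, "scholar": 8, "partier": 8}
print(total_utility(society))   # 24
```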
Surrealism
Posts: 265
5/15/2015 2:36:34 PM
At 5/15/2015 11:34:42 AM, Fkkize wrote:
At 5/15/2015 11:18:54 AM, Surrealism wrote:

How has this answered the problem? I still need to find all the expected consequences, of which there will be an infinite number. [1] Are you saying that it is dependent on what people want to happen?[2] If so, then either you're encouraging people to be shortsighted to save time, [3] or you're advocating deontology.

[1] No, the whole point of expected consequences is to limit the number of consequences under consideration. Could you list a few of these unlimited expected consequences that could result from the mother cooking lunch? As I see it, the expected consequence of lunchtime is satisfied children; if you want to calculate a lot longer, they will starve, decreasing utility.
[2] No, I say it is about what people expect to happen, not what they want to happen.
[3] If I wanted to sit around calculating everything that will happen if I donate blood, I would sit there all day for the rest of my life, never able to donate blood. In the end this is not shortsightedness; it is a way to increase utility.
I could think for a year to arrive at the conclusion that my donation will not go to a potential criminal, or I could just make 6 donations (or however many are allowed in one year) instead, because, prima facie, donating blood is a welfare-increasing act.

[1] I expect that I will be satisfied by eating my lunch. I can then expect that following that, I will work better. I expect that because my company is run by immoral people, my work will go to an immoral cause. I expect that the prominence of this immorality will bring more attention and cause people to shut down the operations. I expect that this will lead to people forgetting about it, and so on forever.

[2] I meant that people may subconsciously twist their expectations to fit their own desires.

[3] Ah, but you don't know whether, several thousand iterations down the line, something horrible will happen that makes your decision a bad one, or whether several million more will lead to something wonderful. Unless you can account for everything, your decision rests on the assumption that the infinite possible outcomes will somehow all balance out. You don't know that.
Fkkize
Posts: 2,149
5/15/2015 3:01:41 PM
At 5/15/2015 2:36:34 PM, Surrealism wrote:

[1] I expect that I will be satisfied by eating my lunch. I can then expect that following that, I will work better. I expect that because my company is run by immoral people, my work will go to an immoral cause. I expect that the prominence of this immorality will bring more attention and cause people to shut down the operations. I expect that this will lead to people forgetting about it, and so on forever.[1]

[2] I meant by that that people may subconsciously twist their expectations to fit their own desires.[2]

[3] Ah, but you don't know that several thousand iterations down the line something horrible will happen that makes your decision a bad one. Or that several million more will lead to something wonderful. [3] Unless you can account for everything, your decision is based only on the assumption [4] that out of the infinite possible outcomes, that you somehow know that they will all balance out. You don't.

[1] But then you are not talking about the act of making lunch for your children anymore, and you are, again, conflating expected and actual consequences.
[2] Then they lack internal rationality as outlined above.
[3] That is why I don't subscribe to actual consequentialism.
[4] It is not just an arbitrary assumption; you are being pretty disingenuous about this. From personal experience and statistics you can form the informed and reasonable opinion that both cooking lunch and donating blood are, on average, considerable increases in utility compared to not feeding your children and not donating blood.
Moreover, take any moral duty, like "thou shalt not kill": how do you know for certain, given your reasoning, that it will not ultimately lead to the annihilation of mankind? One might say that deontology is not concerned with consequences, but proposing duties that could lead to the extinction of the human race is positively irrational. Either you accept that we can make do with expected consequences, or you have to discard your deontology, too.
n7
Posts: 1,360
5/15/2015 4:02:07 PM
At 5/15/2015 9:25:18 AM, Fkkize wrote:
Hello again!
Here I will respond to the thread "Your Objection to Utilitarianism" http://www.debate.org...
Thanks again to everyone who participated! I rephrased some Os to shorten them and make the point clearer. If you think that I (1) misrepresented your O or (2) you feel like my answer was not satisfying please let me know.

Caveat: what is "utility"?
I think it is very important to clarify this first. This term has been and is still used with different meanings. Some of which include the:
1. Mental state account
2. Informed desire account
3. Objective list account

I subscribe to the mental state account, for a couple of reasons (not important right now). When Bentham talked about "pleasure" and "pain" he meant positive/ negative mental states and this is how I use the terms.

The Responses (in no particular order)

1. Ultimately the ends justify the means
This is not an unrestricted truth. Not all ends justify all means, of course only utility maximizing ends justify according means, but then it is a circular objection. Showing what is wrong with particular ends justifying particular means would be an objection.

2. Consequentialists need to consider an infinite number of possible consequences
It is undoubtedly true that we cannot calculate every consequence of every single action, both because of our lack of knowledge and because of the complexity of such calculations. However, I do not propose "actual" consequentialism, but "expected" consequentialism instead.
Actual consequences vs. expected consequences, an example:
Imagine a mother cooking lunch for her children. What she does not know is that the ingredients are poisonous, hence her children fall ill.
It is not reasonable to blame her, since the expected consequences were not harmful in any way. Another one:
Imagine a mother cooking lunch for her children, willfully using poisonous ingredients. However her children do not fall ill (perhaps she did not use enough poison/ the kids were immune, something along those lines).
Utility-minimizing consequences were expected, so it is reasonable to blame the mother.
We might not be able to calculate every consequence of our actions, but we don't have to in order to know approximately what will and what won't promote welfare.

3. Nozick's Utility Monster / The Omelas O
The utility monster is a hypothetical being which receives much more pleasure from every action than everyone else, ergo we should abandon our lives and serve it.
Omelas is a utopia because of the suffering of a single child. No suffering, no utopia.

I won't attempt to get around these objections by disputing whether they could actually be the case and so on (perhaps I could, who knows). I listed them together for one reason: I don't claim that U is a viable ethics for all possible worlds. As I see it, both scenarios are unrealistic and should not be taken as serious Os against an ethics that condemns very real problems (global warming, world hunger) like no other theory.
This misses the point of thought experiments entirely. They are not the same thing as counterfactual arguments. It is irrelevant that Omelas isn't realistic; it shows that rights aren't taken into account, which makes U not a good moral theory. Thought experiments are tools which help us understand and reason about certain subjects. Whether or not they are realistic is completely irrelevant.
404 coherent debate topic not found. Please restart the debate with clear resolution.


Uphold Marxist-Leninist-Maoist-Sargonist-n7ism.
Surrealism
Posts: 265
5/15/2015 4:16:33 PM
At 5/15/2015 3:01:41 PM, Fkkize wrote:
At 5/15/2015 2:36:34 PM, Surrealism wrote:

[1] I expect that I will be satisfied by eating my lunch. I can then expect that following that, I will work better. I expect that because my company is run by immoral people, my work will go to an immoral cause. I expect that the prominence of this immorality will bring more attention and cause people to shut down the operations. I expect that this will lead to people forgetting about it, and so on forever.[1]

[2] I meant by that that people may subconsciously twist their expectations to fit their own desires.[2]

[3] Ah, but you don't know that several thousand iterations down the line something horrible will happen that makes your decision a bad one. Or that several million more will lead to something wonderful. [3] Unless you can account for everything, your decision is based only on the assumption [4] that out of the infinite possible outcomes, that you somehow know that they will all balance out. You don't.

[1] But then you are not talking about the act of making lunch for your children anymore, and you are, again, conflating expected and actual consequences.
[2] Then they lack internal rationality as outlined above.
[3] That is why I don't subscribe to actual consequentialism.
[4] It is not just an arbitrary assumption; you are being pretty disingenuous about this. From personal experience and statistics you can form the informed and reasonable opinion that both cooking lunch and donating blood are, on average, considerable increases in utility compared to not feeding your children and not donating blood.
Moreover, take any moral duty, like "thou shalt not kill": how do you know for certain, given your reasoning, that it will not ultimately lead to the annihilation of mankind? One might say that deontology is not concerned with consequences, but proposing duties that could lead to the extinction of the human race is positively irrational. Either you accept that we can make do with expected consequences, or you have to discard your deontology, too.

[1] Yes I am. I am talking about what I expect to result from making lunch thousands of iterations down the line. Why would expected consequences terminate after thinking about them just once? What sets up this barrier? What if I set it up even earlier and conclude that donating blood is bad because I expect it will hurt my arm, and refuse to look at any consequences beyond that? Obviously I should look beyond that because when I look only at what immediately follows I get a biased worldview. I must look beyond for justification, and that leads to infinite consequences.

[2] I'll drop this as it's really a separate discussion.

[3] Expected consequentialism doesn't get rid of the infinite consequences, it just means that you only care about the consequences you expect. You can still expect a potentially infinite number of things to happen.

[4] The problem is, you don't have enough experience to account for all the infinite expected consequences. You can expect a small range of outcomes, but as I mentioned before that leads to bias. You can change your decision a thousand times just by expecting more distant consequences. If I expect that donating blood will create a slave society in ten million years, don't I have an obligation not to donate blood?
Ceci n'est pas une signature.
Fkkize
5/15/2015 4:23:19 PM
At 5/15/2015 4:02:07 PM, n7 wrote:
I won't attempt to get around these objections by showing how they would actually be the case and so on (perhaps I could, who knows). I listed them together for one reason: I don't claim that U is a viable ethics for all possible worlds. As I see it, both scenarios are not realistic and should not be taken as serious Os against an ethics that condemns very real problems (global warming, world hunger) like no other theory.
This misses the point of thought experiments entirely. They're not the same thing as counterfactual arguments. It's irrelevant that Omelas isn't realistic; it shows that rights aren't taken into account, which makes it not a good moral theory. Thought experiments are tools which help us understand and reason about certain subjects. Whether they are realistic or not is completely irrelevant.
I fail to see how the suffering of one child could possibly create a utopia or how such a creature could exist. Sure, I can argue that the rights of the little girl would not be accounted for; however, this is simply not going to happen. It relies on so many assumptions that it is reasonable to discount the experiment.
"A thought experiment only produces correct conclusions, when you actually do what the experiment asks you of."(Carrier 2005)
If some circumstance is physically impossible I don't see why it should be of interest in the actual world.
: At 7/2/2016 3:05:07 PM, Rational_Thinker9119 wrote:
:
: space contradicts logic
Fkkize
5/15/2015 4:53:16 PM
At 5/15/2015 4:16:33 PM, Surrealism wrote:
[1] Yes I am. I am talking about what I expect to result from making lunch thousands of iterations down the line.[1] Why would expected consequences terminate after thinking about them just once? What sets up this barrier?[2] What if I set it up even earlier and conclude that donating blood is bad because I expect it will hurt my arm, and refuse to look at any consequences beyond that? Obviously I should look beyond that because when I look only at what immediately follows I get a biased worldview.[3] I must look beyond for justification, and that leads to infinite consequences.

[2] I'll drop this as it's really a separate discussion.

[3] Expected consequentialism doesn't get rid of the infinite consequences, it just means that you only care about the consequences you expect. You can still expect a potentially infinite number of things to happen. [4]

[4] The problem is, you don't have enough experience to account for all the infinite expected consequences. You can expect a small range of outcomes, but as I mentioned before that leads to bias. [5] You can change your decision a thousand times just by expecting more distant consequences. [6] If I expect that donating blood will create a slave society in ten million years, don't I have an obligation not to donate blood [7]?

[1] Again, if you want to calculate stuff this much you are decreasing utility.
[2] Forming an informed opinion about similar acts is more than enough. It might be worthwhile to think hard about politics, where a single change in the current system can have drastic outcomes; however, you simply know the outcomes of most everyday acts from experience. Following laws and rules of thumb is far better at maximizing utility than trying to get the single best outcome out of every action you perform. This is known as indirect utilitarianism, as opposed to the direct utilitarianism you attack.
[3] Then you're lacking internal rationality.
[4] With "expected consequences" I don't mean "anything you can come up with", I mean more or less what experience tells you.
[5] This definitely needs solid justification. I can make a very strong inductive argument in favor of blood donations.
[6] But you don't need to. Look at what it costs you (basically nothing) and look at its average, statistical benefits. If you have an informed opinion about blood donations I don't see what would speak against it.
[7] Then you would neither be rational nor have any experience with blood donations nor have looked up any information about such donations. Moreover you ignored the part where I applied the same ridiculousness to deontology.

If you demand 100% certainty about everything before you accept it, you should perhaps start questioning causality and the predictive power of science.
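The "expected consequences" idea Fkkize appeals to amounts to weighting foreseeable outcomes by their probability, based on experience and statistics, rather than chasing every actual downstream consequence. A minimal sketch of that calculation (the function name, probabilities, and utility values are all invented purely for illustration):

```python
# Toy model of "expected consequentialism": evaluate an act by the
# probability-weighted utility of its foreseeable outcomes, drawn from
# experience/statistics, rather than by every actual downstream consequence.
# All probabilities and utility values below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs whose probabilities sum to 1."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# Donating blood: a near-certain minor cost (sore arm) plus a modest
# chance of a large benefit (helping save a life).
donate = [(0.95, -1.0), (0.05, 100.0)]
# Not donating: nothing happens.
abstain = [(1.0, 0.0)]

# Expected utility of donating is positive, so it beats abstaining.
assert expected_utility(donate) > expected_utility(abstain)
```

On this picture, a wild scenario like "donating blood creates a slave society in ten million years" simply gets no weight, because experience and statistics give it no appreciable probability.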
Fkkize
5/15/2015 5:30:19 PM
At 5/15/2015 4:02:07 PM, n7 wrote:
Moreover, physical possibility really is relevant to thought experiments in ethics. Consider the staff of Archytas: it is not about ethics, but it demonstrates the importance of physical possibility to thought experiments:
https://books.google.de...
n7
5/15/2015 8:06:30 PM
At 5/15/2015 4:23:19 PM, Fkkize wrote:
At 5/15/2015 4:02:07 PM, n7 wrote:
I won't attempt to get around these objections by showing how they would actually be the case and so on (perhaps I could, who knows). I listed them together for one reason: I don't claim that U is a viable ethics for all possible worlds. As I see it, both scenarios are not realistic and should not be taken as serious Os against an ethics that condemns very real problems (global warming, world hunger) like no other theory.
This misses the point of thought experiments entirely. They're not the same thing as counterfactual arguments. It's irrelevant that Omelas isn't realistic; it shows that rights aren't taken into account, which makes it not a good moral theory. Thought experiments are tools which help us understand and reason about certain subjects. Whether they are realistic or not is completely irrelevant.
I fail to see how the suffering of one child could possibly create a utopia or how such a creature could exist. Sure, I can argue that the rights of the little girl would not be accounted for; however, this is simply not going to happen. It relies on so many assumptions that it is reasonable to discount the experiment.
Again, that's irrelevant.
"A thought experiment only produces correct conclusions, when you actually do what the experiment asks you of."(Carrier 2005)
Why? Also, Carrier is a historian not a philosopher.
If some circumstance is physically impossible I don't see why it should be of interest in the actual world.

Because it's used to derive things about the actual world. For example, if a very strong man doesn't know which woman he wants to marry, someone can ask him: if both women were dangling from a cliff and he could only save one, which would he choose? The fact that in the real world he's strong enough to pull them both up doesn't mean he can't benefit from the thought experiment.
404 coherent debate topic not found. Please restart the debate with clear resolution.


Uphold Marxist-Leninist-Maoist-Sargonist-n7ism.
n7
5/15/2015 8:09:02 PM
At 5/15/2015 5:30:19 PM, Fkkize wrote:
At 5/15/2015 4:02:07 PM, n7 wrote:
Moreover physical possibility is really relevant to thought experiments in ethics.
Consider the staff of Archytas: it is not about ethics, but it demonstrates the importance of physical possibility to thought experiments:
https://books.google.de...

Why is it relevant to ethics? If the thought experiment is based on the principles of ethics, that's all that would matter.

Archytas' argument is about the nature of space itself and therefore relates to physical possibility. It doesn't mean that if a thought experiment isn't physically possible it's not important.
n7
5/15/2015 8:14:38 PM
Thought experiments have helped us understand the nature of physics, even if they're not realistic. There's no way Maxwell's demon can exist, but it still helps us understand thermodynamics.
Surrealism
5/15/2015 10:56:06 PM
At 5/15/2015 4:53:16 PM, Fkkize wrote:
At 5/15/2015 4:16:33 PM, Surrealism wrote:
[1] Yes I am. I am talking about what I expect to result from making lunch thousands of iterations down the line.[1] Why would expected consequences terminate after thinking about them just once? What sets up this barrier?[2] What if I set it up even earlier and conclude that donating blood is bad because I expect it will hurt my arm, and refuse to look at any consequences beyond that? Obviously I should look beyond that because when I look only at what immediately follows I get a biased worldview.[3] I must look beyond for justification, and that leads to infinite consequences.

[2] I'll drop this as it's really a separate discussion.

[3] Expected consequentialism doesn't get rid of the infinite consequences, it just means that you only care about the consequences you expect. You can still expect a potentially infinite number of things to happen. [4]

[4] The problem is, you don't have enough experience to account for all the infinite expected consequences. You can expect a small range of outcomes, but as I mentioned before that leads to bias. [5] You can change your decision a thousand times just by expecting more distant consequences. [6] If I expect that donating blood will create a slave society in ten million years, don't I have an obligation not to donate blood [7]?

[1] Again, if you want to calculate stuff this much you are decreasing utility.
[2] Forming an informed opinion about similar acts is more than enough. It might be worthwhile to think hard about politics, where a single change in the current system can have drastic outcomes; however, you simply know the outcomes of most everyday acts from experience. Following laws and rules of thumb is far better at maximizing utility than trying to get the single best outcome out of every action you perform. This is known as indirect utilitarianism, as opposed to the direct utilitarianism you attack.
[3] Then you're lacking internal rationality.
[4] With "expected consequences" I don't mean "anything you can come up with", I mean more or less what experience tells you.
[5] This definitely needs solid justification. I can make a very strong inductive argument in favor of blood donations.
[6] But you don't need to. Look at what it costs you (basically nothing) and look at its average, statistical benefits. If you have an informed opinion about blood donations I don't see what would speak against it.
[7] Then you would neither be rational nor have any experience with blood donations nor have looked up any information about such donations. Moreover you ignored the part where I applied the same ridiculousness to deontology.

If you demand 100% certainty about everything before you accept it, you should perhaps start questioning causality and the predictive power of science.

[1] Not necessarily. If I find that my decision helps me to prevent enormous suffering a thousand years down the line, it was worth taking the time to consider, wasn't it?

[2] But the problem is, you assume that some of these actions have no negative consequences only on the basis of limited experience. We have no reason to believe that an action that increases utility in the short term in our experience doesn't decrease it in the long term.

[3] If I'm lacking internal rationality by setting up a barrier too early, then I'm lacking internal rationality if I set up a barrier anywhere. I would lack internal rationality by considering anything less than infinite time.

[4] I don't mean anything I can come up with either. If you have a reason to believe a consequence at every iteration, you can continue forever.

[5] Not really, because induction doesn't actually work.

[6] Like I said, if you look in a narrow window it closes off consequences, even expected ones. If I only look five seconds forward, all I see is that I have a sore arm. That seems like a decrease in utility to me.

[7] How would I be irrational? What if I'd looked at all of human history and constructed a careful and precise argument about how this blood donation leads to the downfall of mankind? Maybe it's wrong, but I believe it so I expect that there will be a decrease in utility. In addition, I do not subscribe to deontology either.
Fkkize
5/15/2015 10:59:51 PM
At 5/15/2015 8:06:30 PM, n7 wrote:

If some circumstance is physically impossible I don't see why it should be of interest in the actual world.

Because it's used to derive things about the actual world. For example, if a very strong man doesn't know which woman he wants to marry, someone can ask him: if both women were dangling from a cliff and he could only save one, which would he choose? The fact that in the real world he's strong enough to pull them both up doesn't mean he can't benefit from the thought experiment.

Irrelevant.
I am not saying that every thought experiment needs to be realistic; Maxwell's demon and Gyges' ring are examples.
I can consider all day how a theory would discriminate against invisible pink unicorns or immortal space giants, or perversely benefit utility monsters, but those fictional cases really don't matter when I have to deal with very real problems.
As I have said, I do not claim to have an ethics fit for every possible world; if such scenarios obtained, I would stop being a utilitarian. But they will never happen, and if they do, we can rethink the theory then. Right now nobody is going to suffer from these examples, and as such they could not matter less.
Fkkize
5/15/2015 11:48:18 PM
At 5/15/2015 10:56:06 PM, Surrealism wrote:
[1] Not necessarily. If I find that my decision helps me to prevent enormous suffering a thousand years down the line, it was worth taking the time to consider, wasn't it?[1]

[2] But the problem is, you assume that some of these actions have no negative consequences only on the basis of limited experience. We have no reason to believe that an action that increases utility in the short term in our experience doesn't decrease it in the long term.[2]

[3] If I'm lacking internal rationality by setting up a barrier too early, then I'm lacking internal rationality if I set up a barrier anywhere. I would lack internal rationality by considering anything less than infinite time.[3]

[4] I don't mean anything I can come up with either. If you have a reason to believe a consequence at every iteration, you can continue forever.[4]

[5] Not really, because induction doesn't actually work. [5]

[6] Like I said, if you look in a narrow window it closes off consequences, even expected ones. If I only look five seconds forward, all I see is that I have a sore arm. That seems like a decrease in utility to me. [6]

[7] How would I be irrational? What if I'd looked at all of human history and constructed a careful and precise argument about how this blood donation leads to the downfall of mankind?[7] Maybe it's wrong, but I believe it so I expect that there will be a decrease in utility. In addition, I do not subscribe to deontology either[8].

[1] Ok, go forth, try right now to calculate anything that will happen in ten years as a result of you reading this right now.
[2] First, these statistics are built on the experience of hospitals; to say that they are somehow limited enough to doubt is again very disingenuous. Second, we have very good reasons to believe that blood donations have both positive short-term and long-term effects.
[3] If you had read my outline you would know why this is nonsense.
[4] Just nothing. You keep beating the same dead horse over and over. If you seriously do not think that following laws and rules of thumb can in any way have a predictable outcome, then I think this conversation is over.
[5] Inductive arguments are used to give more or less good reasons to believe a certain conclusion, I am not sure whether you are aware of the practical implications of doubting induction in this way. Politics, medicine and science among many other things would be impossible. Of course you can fall back on sceptical arguments all you want, but always at a cost to yourself.
[6] Then you lack internal rationality to a degree that makes your opinion void. Seriously read the outline of the rationality filter I gave. It is about degrees, not absolutes. Calculating more consequences gives your judgement higher weight, however ridiculously lengthy calculations will inevitably decrease utility. I am still not proposing the direct utilitarianism you attack. If you want to continue to deny that following the laws of a democratic state or rules of thumb concerning the consequences of everyday actions is generally utility increasing, then this conversation is over.
[7] Then again you lack internal rationality. This filter exists precisely to filter out the interests of mad people, those who were erroneous in their thinking and ignorant folk, among others. If you waste your time falsely making this argument you are decreasing utility. If you waste your time unnecessarily calculating the outcomes of any action you are decreasing utility. Don't do it.
[8] You can apply it to virtue ethics, discourse ethics, DCT and every other possible ethical theory as well. How do you know for certain that whatever rule you follow will not ultimately annihilate mankind because you followed it?
n7
5/16/2015 12:03:33 AM
At 5/15/2015 10:59:51 PM, Fkkize wrote:
At 5/15/2015 8:06:30 PM, n7 wrote:

If some circumstance is physically impossible I don't see why it should be of interest in the actual world.

Because it's used to derive things about actual world. For example, if a very strong man doesn't know which women he wants to marry someone can ask him if both women were dangling from a cliff and he could only save one, which would he choose. The fact that in the real world he's strong enough to pull them both up doesn't mean he can't befit from the thought experiment.

Irrelevant.
I am not saying that every thought experiment needs to be realistic; Maxwell's demon and Gyges' ring are examples.
I can consider all day how a theory would discriminate against invisible pink unicorns or immortal space giants, or perversely benefit utility monsters, but those fictional cases really don't matter when I have to deal with very real problems.
As I have said, I do not claim to have an ethics fit for every possible world; if such scenarios obtained, I would stop being a utilitarian. But they will never happen, and if they do, we can rethink the theory then. Right now nobody is going to suffer from these examples, and as such they could not matter less.

The point of the utility monster isn't that they exist or might exist; it's that it shows utilitarianism isn't egalitarian. Also, thought experiments and counterfactual reasoning are two different things.
Fkkize
5/16/2015 12:12:36 AM
At 5/16/2015 12:03:33 AM, n7 wrote:
The point of the utility monster isn't that they exist or might exist; it's that it shows utilitarianism isn't egalitarian.[1] Also, thought experiments and counterfactual reasoning are two different things.[2]
[1] Fully aware of that. However, I do not even suggest that U is a viable ethical theory in any world other than our own. It is of course logically possible for U to not be egalitarian, but physically it is not, at least to the best of our knowledge. If we were to discover utility monsters then U would become inapplicable in its entirety and I would simply abandon it.
[2] Fully aware of that.
Surrealism
5/16/2015 1:33:23 AM
At 5/15/2015 11:48:18 PM, Fkkize wrote:
At 5/15/2015 10:56:06 PM, Surrealism wrote:
[1] Not necessarily. If I find that my decision helps me to prevent enormous suffering a thousand years down the line, it was worth taking the time to consider, wasn't it?[1]

[2] But the problem is, you assume that some of these actions have no negative consequences only on the basis of limited experience. We have no reason to believe that an action that increases utility in the short term in our experience doesn't decrease it in the long term.[2]

[3] If I'm lacking internal rationality by setting up a barrier too early, then I'm lacking internal rationality if I set up a barrier anywhere. I would lack internal rationality by considering anything less than infinite time.[3]

[4] I don't mean anything I can come up with either. If you have a reason to believe a consequence at every iteration, you can continue forever.[4]

[5] Not really, because induction doesn't actually work. [5]

[6] Like I said, if you look in a narrow window it closes off consequences, even expected ones. If I only look five seconds forward, all I see is that I have a sore arm. That seems like a decrease in utility to me. [6]

[7] How would I be irrational? What if I'd looked at all of human history and constructed a careful and precise argument about how this blood donation leads to the downfall of mankind?[7] Maybe it's wrong, but I believe it so I expect that there will be a decrease in utility. In addition, I do not subscribe to deontology either[8].

[1] Ok, go forth, try right now to calculate anything that will happen in ten years as a result of you reading this right now.
[2] First, these statistics are built on the experience of hospitals; to say that they are somehow limited enough to doubt is again very disingenuous. Second, we have very good reasons to believe that blood donations have both positive short-term and long-term effects.
[3] If you had read my outline you would know why this is nonsense.
[4] Just nothing. You keep beating the same dead horse over and over. If you seriously do not think that following laws and rules of thumb can in any way have a predictable outcome, then I think this conversation is over.
[5] Inductive arguments are used to give more or less good reasons to believe a certain conclusion, I am not sure whether you are aware of the practical implications of doubting induction in this way. Politics, medicine and science among many other things would be impossible. Of course you can fall back on sceptical arguments all you want, but always at a cost to yourself.
[6] Then you lack internal rationality to a degree that makes your opinion void. Seriously read the outline of the rationality filter I gave. It is about degrees, not absolutes. Calculating more consequences gives your judgement higher weight, however ridiculously lengthy calculations will inevitably decrease utility. I am still not proposing the direct utilitarianism you attack. If you want to continue to deny that following the laws of a democratic state or rules of thumb concerning the consequences of everyday actions is generally utility increasing, then this conversation is over.
[7] Then again you lack internal rationality. This filter exists precisely to filter out the interests of mad people, those who were erroneous in their thinking and ignorant folk, among others. If you waste your time falsely making this argument you are decreasing utility. If you waste your time unnecessarily calculating the outcomes of any action you are decreasing utility. Don't do it.
[8] You can apply it to virtue ethics, discourse ethics, DCT and every other possible ethical theory as well. How do you know for certain that whatever rule you follow will not ultimately annihilate mankind because you followed it?

[1] The whole point is that I can't. If I could, it would prove that Utilitarianism did work. The fact that I can't means that Utilitarianism is not viable.

[2] Only in the amount of time that we've observed them. We can't really extrapolate beyond that. Does pain in the distant future or immense pleasure in the distant future not matter for some reason?

[3] I read your outline, and it doesn't explain anything. It says that the decision we would make while having the amount of information as close as possible to reality is the moral one. Any time I consider a finite amount of consequences, it's possible for me to consider one more. Ergo, I am always morally obligated to approach a greater understanding of the consequences to achieve internal rationality.

[4] Even if we assume induction to be true, it only works for comparable amounts. A sample size of 200 can't be reasonably extrapolated to 80 billion. So any amount of consequences you consider can't be extrapolated to a meaningful amount of actual consequences.

[5] Basically ditto here. Although I will add that while we do use induction on a pragmatic everyday level, when we're talking about abstract philosophical concepts, we need substantive proof on a deeper level.

[6] Alright then. Obviously we can agree that looking ahead five seconds is suboptimal. And you clearly believe that looking ahead ten thousand years is also suboptimal. So tell me then: how far ahead should we look? Where's the barrier?

[7] Ditto.

[8] Well I don't know that it won't annihilate mankind. But the thing is, non-consequentialist standards of morality simply don't care if it annihilates mankind. Deontology, for example, cares about intentions, which are independent of consequences. If you drop a nuclear bomb on a city, that doesn't mean anything if your intention was to give ice cream to children. Deontology and other moral standards (which I don't believe in) aren't saddled with the burden of caring about the consequences of actions. When you adopt a consequentialist standard of morality, you have to care about the consequences.