
Trolley Problem does not measure ethics!

R0b1Billion
Posts: 3,733
5/25/2014 1:05:35 AM
Posted: 2 years ago
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.

Second, it is too simplistic - all the pertinent moral details are left out. In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation? If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

Third, it is unrealistic. It's sad that we make moral choices at so many moments during the day, yet our tool to measure morality is a scenario that one could conceivably live billions of lives before actually being in a similar situation, simply because it is so weird and unlikely to occur. I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved. It would take either a statistical probability of epic proportions or else a criminal mastermind to engineer.
Beliefs in a nutshell:
- The Ends never justify the Means.
- Objectivity is secondary to subjectivity.
- The War on Drugs is the worst policy in the U.S.
- Most people worship technology as a religion.
- Computers will never become sentient.
wrichcirw
Posts: 11,196
5/27/2014 7:32:47 AM
Posted: 2 years ago
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.

How is prudence divorced from ethics?

Second, it is too simplistic - all the pertinent moral details are left out. In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation? If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

The simplicity is due to the attempt to control for all other externalities. This is how any valid scientific experiment is conducted.

Third, it is unrealistic. It's sad that we make moral choices at so many moments during the day, yet our tool to measure morality is a scenario that one could conceivably live billions of lives before actually being in a similar situation, simply because it is so weird and unlikely to occur. I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved. It would take either a statistical probability of epic proportions or else a criminal mastermind to engineer.

Newton's theory of gravity is unrealistic when taken at its core assertion: that our physical mass moves the world to the same proportional extent that the world moves us... most people would consider it far more realistic that "what goes up must come down," i.e., that we are anchored to the earth and not the other way around.

Regardless, Newton's theories are largely sound and eminently practical, and are something we take into consideration every single time we get out of bed in the morning. I'd rather not have to exert so much force to get out of bed, but FVCK! GRAVITY!
At 8/9/2013 9:41:24 AM, wrichcirw wrote:
If you are civil with me, I will be civil to you. If you decide to bring unreasonable animosity to bear in a reasonable discussion, then what would you expect other than to get flustered?
Graincruncher
Posts: 2,799
5/27/2014 8:30:54 AM
Posted: 2 years ago
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.

It measures the criteria upon which prudence is judged, which are also the criteria for moral decision-making. It tells us things like:

- How much weight do individuals give to direct agency vs. inaction?
- What criteria do people take into account when doing the moral calculus required to 'minimize harm'?
- What is a 'just act'?
- When acting within limitations, how do people adjust their idealised morality to non-ideal situations and what does this tell us about the other factors involved in doing so?

Second, it is too simplistic - all the pertinent moral details are left out.

Most people's moral codes - it is wrong to steal, it is wrong to kill etc. - are too simplistic. The whole point of an initially simple system is to force the person to consider how they'd apply their moral ideals in a non-ideal environment. It is then useful to consider why they would apply them in such-and-such way rather than another.

In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation?

It doesn't matter. The question is, if faced with a particular problem, what would your decision-making process be and why? We never have all the information about anything and we often have to act quickly, almost (or actually) instinctively.

If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

No. The scenario works because it is layered and progressive. You start off with simple examples that test a moral position, then you gradually alter the scenario for further investigation. In this way you test the validity of reasoning from simple to complex. If you start off with lots of information then the question is 1) easier and 2) provides less useful data.

Third, it is unrealistic. It's sad that we make moral choices at so many moments during the day, yet our tool to measure morality is a scenario that one could conceivably live billions of lives before actually being in a similar situation, simply because it is so weird and unlikely to occur.

It is a thought experiment. If you don't like it, I suggest you avoid the fields of philosophy and science entirely.

I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved. It would take either a statistical probability of epic proportions or else a criminal mastermind to engineer.

If you think the Trolley Problem is about working out what you'd do in that specific scenario for the sake of itself, it is no wonder you do not understand the value and intent of it. It is akin to moral stress-testing, seeing at which point certain moral propositions bend or break, how different ones interact and are prioritised based on differing circumstances and so on and so forth. They are not really about the situation under examination, but about what your answer says about you and your moral decision-making processes, the values underlying them and how these things interact.
sdavio
Posts: 1,801
5/27/2014 9:40:51 AM
Posted: 2 years ago
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
...

I don't think the trolley problem is manipulative at all; it points out the problems with some systems of morality which attempt to hold semantics above common sense. I.e., if a theory says you should let more people die simply because some action is put into an arbitrary category called 'violence', there is a problem with the theory.

Second, it is too simplistic - all the pertinent moral details are left out. In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation? If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

"It's more complicated than that" is a non-argument which can be used as a response to literally anything. If you think an additional detail should be added, it would be more productive to spell out why that detail would be relevant, since any moral judgement will necessarily be made with less than the 'entire story'.

I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved.

Such an analysis of the pros and cons of saving people or causing suffering or death would seem to be the situation in any war, in providing charity or relief to poor people, in torturing someone to get information, etc. I don't see how it's unrealistic.
"Logic is the money of the mind." - Karl Marx
dylancatlow
Posts: 12,255
5/27/2014 10:51:29 AM
Posted: 2 years ago
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.


Ethics deals with the question: how should we act? Since the Trolley Problem presents us with the choice of acting in two different ways, it is an ethical dilemma by definition. There's a difference between "what should you do" and "are you moral if you do X".
R0b1Billion
Posts: 3,733
5/28/2014 9:28:14 AM
Posted: 2 years ago
At 5/27/2014 7:32:47 AM, wrichcirw wrote:
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.

How is prudence divorced from ethics?

They are completely different. One measures good will, the other measures ability. If they are not divorced, then somebody with a high IQ is always ethical, and somebody with a low IQ is always not. To accentuate this point, recognize the relationship of privilege and responsibility. Somebody with a low IQ has less responsibility to be ethical (because they have less ability to understand ethics, less ability to hurt others, etc.) - therefore they must be separate! Prudence is such an over-arching, general virtue however, that I suppose nothing good is really "divorced" from it, but again that is speaking very generally.

Second, it is too simplistic - all the pertinent moral details are left out. In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation? If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

The simplicity is due to the attempt to control for all other externalities. This is how any valid scientific experiment is conducted.

CORRECT! And this is not science, it is philosophy! Isolating the variables associated with, say, gravity is a different thing than isolating a person in space and time. Gravity is identical everywhere. On Mars in the year 1,000,000 BCE, the variables involved in gravity were identical to those here and now. But people are different. You can't just hypothetically pop a person into an isolated world, created in your own mind, who has no history, no personality, no intelligence, etc. Who was his mother? Where did he grow up? How did he come to be here? A person does not exist as if she were in some computer-generated landscape, created artificially and independently of the universe. She makes a series of decisions to be in that location that CANNOT BE ISOLATED FROM THE MORAL ANALYSIS, if for no other reason than that it's a physically impossible situation. By creating the TP, we are necessarily creating a scenario that is IMPOSSIBLE to exist in reality. Vice cannot be isolated like gravity; it depends on consciousness and emotion. As far as science is concerned, consciousness does not exist.

Third, it is unrealistic. It's sad that we make moral choices at so many moments during the day, yet our tool to measure morality is a scenario that one could conceivably live billions of lives before actually being in a similar situation, simply because it is so weird and unlikely to occur. I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved. It would take either a statistical probability of epic proportions or else a criminal mastermind to engineer.

Newton's theory of gravity is unrealistic when taken at its core assertion: that our physical mass moves the world to the same proportional extent that the world moves us... most people would consider it far more realistic that "what goes up must come down," i.e., that we are anchored to the earth and not the other way around.

You're confusing something which is intrinsically unrealistic (the TP) with something that is apparently unrealistic ONLY because we are taught the wrong thing from experience (gravity).
wrichcirw
Posts: 11,196
5/28/2014 9:43:20 AM
Posted: 2 years ago
At 5/28/2014 9:28:14 AM, R0b1Billion wrote:
At 5/27/2014 7:32:47 AM, wrichcirw wrote:
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.

How is prudence divorced from ethics?

They are completely different. One measures good will, the other measures ability.

Ridiculous classifications. Does prudence measure ability or good will? No. Does ethics measure ability or good will? No.

If they are not divorced, then somebody with a high IQ is always ethical, and somebody with a low IQ is always not. To accentuate this point, recognize the relationship of privilege and responsibility. Somebody with a low IQ has less responsibility to be ethical (because they have less ability to understand ethics, less ability to hurt others, etc.) - therefore they must be separate! Prudence is such an over-arching, general virtue however, that I suppose nothing good is really "divorced" from it, but again that is speaking very generally.

How is prudence related to IQ?

Second, it is too simplistic - all the pertinent moral details are left out. In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation? If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

The simplicity is due to the attempt to control for all other externalities. This is how any valid scientific experiment is conducted.

CORRECT! And this is not science, it is philosophy! Isolating the variables associated with, say, gravity is a different thing than isolating a person in space and time. Gravity is identical everywhere. On Mars in the year 1,000,000 BCE, the variables involved in gravity were identical to those here and now. But people are different. You can't just hypothetically pop a person into an isolated world, created in your own mind, who has no history, no personality, no intelligence, etc. Who was his mother? Where did he grow up? How did he come to be here? A person does not exist as if she were in some computer-generated landscape, created artificially and independently of the universe. She makes a series of decisions to be in that location that CANNOT BE ISOLATED FROM THE MORAL ANALYSIS, if for no other reason than that it's a physically impossible situation. By creating the TP, we are necessarily creating a scenario that is IMPOSSIBLE to exist in reality. Vice cannot be isolated like gravity; it depends on consciousness and emotion. As far as science is concerned, consciousness does not exist.

This is again absolutely ridiculous. You're making ridiculous assumptions and classifications that are internally inconsistent and do not logically flow even if they were.

1) People and planets both change. Nothing is static.
2) Wondering about how different people have different mothers is similar to wondering about the origins of the planets in our own solar system vs planets in other solar systems.
3) Science and philosophy are not mutually exclusive.
4) Scientific methods can be applied to human decision-making.

Third, it is unrealistic. It's sad that we make moral choices at so many moments during the day, yet our tool to measure morality is a scenario that one could conceivably live billions of lives before actually being in a similar situation, simply because it is so weird and unlikely to occur. I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved. It would take either a statistical probability of epic proportions or else a criminal mastermind to engineer.

Newton's theory of gravity is unrealistic when taken at its core assertion: that our physical mass moves the world to the same proportional extent that the world moves us... most people would consider it far more realistic that "what goes up must come down," i.e., that we are anchored to the earth and not the other way around.

You're confusing something which is intrinsically unrealistic (the TP) with something that is apparently unrealistic ONLY because we are taught the wrong thing from experience (gravity).

The internal logic of the TP can easily be applied to realistic scenarios, just as much so as Newton's theories of gravity. I had a debate that involved several of them.
dylancatlow
Posts: 12,255
5/28/2014 10:11:14 AM
Posted: 2 years ago
You're essentially saying that in answering the question "should a mentally ill person murder someone", we do not invoke ethics because they are not morally responsible. This is obviously absurd.
R0b1Billion
Posts: 3,733
5/28/2014 4:51:17 PM
Posted: 2 years ago
At 5/27/2014 8:30:54 AM, Graincruncher wrote:
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.

It measures the criteria upon which prudence is judged, which are also the criteria for moral decision-making. It tells us things like:

- How much weight do individuals give to direct agency vs. inaction?

... nothing to do with morality.

- What criteria do people take into account when doing the moral calculus required to 'minimize harm'?

Minimizing harm is not moral calculus. Political committees can be set up for "ethics" but I disagree with the way they use the term. Morality/ethics is a personal decision, usually grappling with selfishness vs selflessness. Minimization of harm implies the immoral decision has already been made, and it's simply being justified somehow.

- What is a 'just act'?

I don't think the TP helps us with this inquiry!

- When acting within limitations, how do people adjust their idealised morality to non-ideal situations and what does this tell us about the other factors involved in doing so?

Morality is not about ideal vs non-ideal, at least to somebody who understands it well enough. There is no ideal right versus a less ideal right. The TP, the way it is set up, is simply playing out a result after a moral decision was already decided.

Second, it is too simplistic - all the pertinent moral details are left out.

Most people's moral codes - it is wrong to steal, it is wrong to kill etc. - are too simplistic.

Stealing and killing are actions that depend on motives. I will examine these motives, not make general statements about actions that could possibly be good or bad.

The whole point of an initially simple system is to force the person to consider how they'd apply their moral ideals in a non-ideal environment. It is then useful to consider why they would apply them in such-and-such way rather than another.

In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation?

It doesn't matter. The question is, if faced with a particular problem, what would your decision-making process be and why? We never have all the information about anything and we often have to act quickly, almost (or actually) instinctively.

I disagree; I would say we are never simply thrust into a meaningless situation and asked to kill one person versus another. In REAL life, we make a series of decisions (which contain moral value) that lead us to such a situation, and by then the morality has pretty much been played out and we are simply making the best of a bad situation. If I rip off a bunch of drug dealers and they all come to my house to kill me, would I start my moral analysis of the situation as soon as they knock on my door? Any good ethicist would point out that I shouldn't have ripped off (or even dealt with) such people in the first place, and while there's a lot more ethical meaning in this scenario than in the TP (which is designed to eliminate any ethics, not isolate them), I'd argue most of the ethics involved has already been played out.

If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

No. The scenario works because it is layered and progressive. You start off with simple examples that test a moral position, then you gradually alter the scenario for further investigation. In this way you test the validity of reasoning from simple to complex. If you start off with lots of information then the question is 1) easier and 2) provides less useful data.

"Further investigation" of what? There's no ethical dilemma to begin with, you're simply measuring some sort of ability.

Third, it is unrealistic. It's sad that we make moral choices at so many moments during the day, yet our tool to measure morality is a scenario that one could conceivably live billions of lives before actually being in a similar situation, simply because it is so weird and unlikely to occur.

It is a thought experiment. If you don't like it, I suggest you avoid the fields of philosophy and science entirely.

The trademark response of somebody who's beaten.

I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved. It would take either a statistical probability of epic proportions or else a criminal mastermind to engineer.

If you think the Trolley Problem is about working out what you'd do in that specific scenario for the sake of itself, it is no wonder you do not understand the value and intent of it. It is akin to moral stress-testing, seeing at which point certain moral propositions bend or break, how different ones interact and are prioritised based on differing circumstances and so on and so forth. They are not really about the situation under examination, but about what your answer says about you and your moral decision-making processes, the values underlying them and how these things interact.

Again, the TP has no ethical basis. Just because it involves a decision to kill, does NOT mean there's ethics being tested. This is an incredibly simplistic way of looking at ethics.
R0b1Billion
Posts: 3,733
5/28/2014 4:56:10 PM
Posted: 2 years ago
At 5/27/2014 9:40:51 AM, sdavio wrote:
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
...

I don't think the trolley problem is manipulative at all; it points out the problems with some systems of morality which attempt to hold semantics above common sense. I.e., if a theory says you should let more people die simply because some action is put into an arbitrary category called 'violence', there is a problem with the theory.

Good point - the TP is designed to negate a specific theory, one which I don't hold. So it is entirely meaningless to me and anybody who doesn't hold that particular view.

Second, it is too simplistic - all the pertinent moral details are left out. In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation? If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

"It's more complicated than that" is a non-argument which can be used as a response to literally anything. If you think an additional detail should be added, it would be more productive to spell out why that detail would be relevant, since any moral judgement will necessarily be made with less than the 'entire story'.

I explained why we need more detail, and that is because the moral component of the problem has been left out. The TP describes more of a result of moral action which is neatly cut out of the equation, and places us suddenly and meaninglessly in the shoes of somebody poised to kill or not kill. No matter what course is taken in the TP, the ethics are not affected at all. Bad decision #1 versus bad decision #2 - who cares? I want to know what I did to get into this scenario. If it was a criminal mastermind, it's not my fault NO MATTER WHAT decision I make. If it's my own unethical decisions that put me there, then it's my fault NO MATTER WHAT decision I make. So you see, the TP is absolutely useless.

I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved.

Such an analysis of the pros and cons of saving people or causing suffering or death would seem to be the situation in any war, in providing charity or relief to poor people, in torturing someone to get information, etc. I don't see how it's unrealistic.

I would analyze those situations case-by-case, and probably wouldn't be aided by the TP :P
R0b1Billion
Posts: 3,733
5/28/2014 4:57:34 PM
Posted: 2 years ago
At 5/27/2014 10:51:29 AM, dylancatlow wrote:
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What you are actually able to do is irrelevant, since morality is not dependent upon one's ability to carry out one's wishes effectively, only that those wishes, as reflected in one's actions, be just.


Ethics deals with the question: how should we act? Since the Trolley Problem presents us with the choice of acting in two different ways, it is an ethical dilemma by definition. There's a difference between "what should you do" and "are you moral if you do X".

I think that's a simplistic definition of ethics. "How should we act" could be used in countless examples which are clearly unethical. Should I press the A button or B button in Mario Bros? ad infinitum...
dylancatlow
5/28/2014 5:06:19 PM
At 5/28/2014 4:57:34 PM, R0b1Billion wrote:
At 5/27/2014 10:51:29 AM, dylancatlow wrote:
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What your actual ability to do is irrelevant, since morality is not dependent upon one's ability to effectively carry out their wishes, only that their wishes, as reflected in their actions, be just.


Ethics deals with the question: how should we act? Since the Trolley Problem presents us with the choice of acting in two different ways, it is an ethical dilemma by definition. There's a difference between "what should you do" and "are you moral if you do X".

I think that's a simplistic definition of ethics. "How should we act" could be used in countless examples which are clearly unethical. Should I press the A button or B button in Mario Bros? ad infinitum...

That doesn't mean ethics doesn't deal with the question: how should we act? It's just that ethics only pertains to certain "acts". Namely, those with moral consequences. You're confusing two concepts: whether someone is moral (which depends on their intentions) and morality itself i.e. what is right (not whether someone is moral).
PeacefulChaos
5/28/2014 5:29:23 PM
There are several different versions of the trolley problem.

The general solution is that you don't have the right to take another's life. So, in the "fat man" scenario, where you have the option to push the fat man off the ledge and onto the tracks, saving the five people, it would not be morally permissible to push the man.

But in the general scenario, where you are operating the trolley and you have the choice between the two tracks, it would be better to take the one with only one person on it, as in the end you have no choice but to take someone's life.
R0b1Billion
5/28/2014 5:48:27 PM
At 5/28/2014 5:06:19 PM, dylancatlow wrote:
At 5/28/2014 4:57:34 PM, R0b1Billion wrote:
At 5/27/2014 10:51:29 AM, dylancatlow wrote:
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.

First, it measures prudence, not ethics. Is there any question during the process that your only desire is to minimize harm to everybody? What your actual ability to do is irrelevant, since morality is not dependent upon one's ability to effectively carry out their wishes, only that their wishes, as reflected in their actions, be just.


Ethics deals with the question: how should we act? Since the Trolley Problem presents us with the choice of acting in two different ways, it is an ethical dilemma by definition. There's a difference between "what should you do" and "are you moral if you do X".

I think that's a simplistic definition of ethics. "How should we act" could be used in countless examples which are clearly unethical. Should I press the A button or B button in Mario Bros? ad infinitum...

That doesn't mean ethics doesn't deal with the question: how should we act? It's just that ethics only pertains to certain "acts". Namely, those with moral consequences. You're confusing two concepts: whether someone is moral (which depends on their intentions) and morality itself i.e. what is right (not whether someone is moral).

Not "consequences." If you are a consequentialist then we won't see eye to eye. Ethics is means-based in my humble-but-asskicking-opinion.
R0b1Billion
5/28/2014 6:07:16 PM
At 5/28/2014 9:43:20 AM, wrichcirw wrote:

How is prudence divorced from ethics?

They are completely different. One measures good will, the other measures ability.

Ridiculous classifications. Does prudence measure ability or good will? No. Does ethics measure ability or good will? No.

I like those definitions, want to compare them with yours? Like I said, "prudence" is one of the most generalized terms in the English language so let's not delve into semantic BS...

If they are not divorced, then somebody with a high IQ is always ethical, and somebody with a low IQ is always not. To accentuate this point, recognize the relationship of privilege and responsibility. Somebody with a low IQ has less responsibility to be ethical (because they have less ability to understand ethics, less ability to hurt others, etc.) - therefore they must be separate! Prudence is such an over-arching, general virtue however, that I suppose nothing good is really "divorced" from it, but again that is speaking very generally.

How is prudence related to IQ?

It's not. Prudence is classically interchangeable with "wisdom," in that it measures your ability to apply knowledge to your every-day life experiences.

Second, it is too simplistic - all the pertinent moral details are left out. In particular, my actions (and certainly the actions of the people getting killed) before the situation came to be should be analyzed. For instance, am I making this decision because I was involved in creating the situation? If so, then I'd be negatively culpable no matter what I did. Am I being dropped into it randomly? In that case my impetus is to minimize harm and either choice is the moral thing to do. You want me to discover the ethics of the TP? Then give me a backdrop to the story!

The simplicity is due to the attempt to control for all other externalities. This is how any valid scientific experiment is conducted.

CORRECT! And this is not science, it is philosophy! Isolating the variables associated with, say, gravity is a different thing than isolating a person in space and time. Gravity is identical everywhere. On Mars in the year 1,000,000 BCE the variables involved in gravity are identical to here and now. But people are different. You can't just hypothetically pop a person into an isolated world, created in your own mind, who has no history, no personality, no intelligence, etc. Who was his mother? Where did he grow up? How did he come to be here? A person does not exist as if she were in some computer-generated landscape, created artificially and independent of the universe. She makes a series of decisions to be in that location that CANNOT BE ISOLATED FROM THE MORAL ANALYSIS, if for no other reason than it's a physically-impossible situation. By creating the TP, we are necessarily creating a scenario that is IMPOSSIBLE to exist in reality. Vice cannot be isolated like gravity; it depends on consciousness and emotion. As far as science is concerned, consciousness does not exist.

This is again absolutely ridiculous. You're making ridiculous assumptions and classifications that are internally inconsistent and do not logically flow even if they were.
Yes, I'm "ridiculous" and "internally inconsistent," but you're not really combating my points effectively.
1) People and planets both change. Nothing is static.
I'm talking about consciousness, not "people" as a physical object.
2) Wondering about how different people have different mothers is similar to wondering about the origins of the planets in our own solar system vs planets in other solar systems.
You can suddenly *pop* spheres into orbit around a star and it doesn't really change anything about the scientific analysis of how they got there. Science is cut-and-dried in that sense, but philosophy is not. The moral component of the TP happened before the point where you have the choice to kill or not. If it was a mastermind who put you there, you are not culpable regardless. If you made mistakes and now have to kill, you are culpable regardless. At no point does your choice make any moral difference. If anything, it's just a measure of your non-ethical ability to save lives. Saving lives is ethically value-less. If I suddenly masterminded a way to save every person on the Earth from dying, it doesn't matter one bit; I am simply endowed with great ability. Morality is about my ability to resist selfishness, not maximize utility, or lives, or happiness, or anything like that.
3) Science and philosophy are not mutually exclusive.
Sure they are. Philosophical topics which reach the realm of empiricism cease to be philosophical. The stars used to be a philosophical topic (metaphysics) and are now captured by astronomy. You can use science to mold your philosophy, but there is no direct connection between the two. Science studies phenomena; philosophy studies noumena.
4) Scientific methods can be applied to human decision-making.
You can try, but there's always a disconnect between consciousness and the empirical world. If I do something immoral and feel bad later, we can apply science by polling me on how I feel, but as you can see that's a very indirect method! We have machines now that can measure brain activity, but how far we are actually going to be able to take that is as of yet unknown.
Third, it is unrealistic. It's sad that we make moral choices at so many moments during the day, yet our tool to measure morality is a scenario that one could conceivably live billions of lives before actually being in a similar situation, simply because it is so weird and unlikely to occur. I would argue that the unlikeliness of being thrust into a scenario where you have to kill to save is incredible unless there is some element of culpability surrounding the persons involved. It would take either a statistical probability of epic proportions or else a criminal mastermind to engineer.

Newton's theory of gravity is unrealistic when taken at its core assertion, that our physical mass moves the world to a proportional extent that the world moves us...most people would consider far more realistic that "what goes up must come down", i.e. we are anchored to the earth and not the other way around.

You're confusing something which is intrinsically unrealistic (the TP) with something that is apparently unrealistic ONLY because we are taught the wrong thing from experience (gravity).

The internal logic of the TP can easily be applied to realistic scenarios, just as much so as Newton's theories of gravity. I had a debate that involved several of them.
I would be interested to see what usefulness could come from the TP. I can't see how studying the TP is any more useful than studying how Luke Skywalker deals with using "the force" :P
R0b1Billion
5/28/2014 6:11:20 PM
At 5/28/2014 5:29:23 PM, PeacefulChaos wrote:
There are several different versions of the trolley problem.

All of which are complete rubbish.

The general solution is that you don't have the right to take another's life. So, in the "fat man" scenario, where you have the option to push the fat man off the ledge and onto the tracks, saving the five people, it would not be morally permissible to push the man.

But in the general scenario, where you are operating the trolley and you have the choice between the two tracks, it would be better to take the one with only one person on it, as in the end you have no choice but to take someone's life.

That's all well and fine, but what usefulness is there? The scenarios are abstract beyond any realistic standard, and the "right" and "wrong" answers don't reflect ethical reasoning at all. Ethics is about fighting the urge to act selfishly; there's no selfish vs selfless component to these problems, you're simply popped meaninglessly into a no-win situation and asked to make a choice. No matter what choice you make, there is no change in culpability.
PeacefulChaos
5/28/2014 6:16:51 PM
At 5/28/2014 6:11:20 PM, R0b1Billion wrote:
At 5/28/2014 5:29:23 PM, PeacefulChaos wrote:
There are several different versions of the trolley problem.

All of which are complete rubbish.

The general solution is that you don't have the right to take another's life. So, in the "fat man" scenario, where you have the option to push the fat man off the ledge and onto the tracks, saving the five people, it would not be morally permissible to push the man.

But in the general scenario, where you are operating the trolley and you have the choice between the two tracks, it would be better to take the one with only one person on it, as in the end you have no choice but to take someone's life.

That's all well and fine, but what usefulness is there? The scenarios are abstract beyond any realistic standard, and the "right" and "wrong" answers don't reflect ethical reasoning at all. Ethics is about fighting the urge to act selfishly; there's no selfish vs selfless component to these problems, you're simply popped meaninglessly into a no-win situation and asked to make a choice. No matter what choice you make, there is no change in culpability.

To clarify, is your view on ethics that it's concerned with "selfishness"? That is to say, moral acts would be those where you "give" something beneficial, whilst immoral acts would be where you refuse to give?

More specifically, if you help someone fix their car, that would be giving your time and energy for a beneficial cause, correct? But if you did not help them fix their car, it would be a selfish act, as you refuse to give up your time and energy and leave them helpless.

Is this a correct interpretation of your views, or have I missed your point?
Graincruncher
5/28/2014 6:34:31 PM
At 5/28/2014 4:51:17 PM, R0b1Billion wrote:
... nothing to do with morality.

I got part way through a lengthy reply, but then realised I want to check something: is English your first language?
R0b1Billion
5/28/2014 9:46:25 PM
So much ad hom in this thread... I can only assume I am making people nervous about their assumptions :)
dylancatlow
5/28/2014 9:53:16 PM
At 5/28/2014 9:46:25 PM, R0b1Billion wrote:
So much ad hom in this thread... I can only assume I am making people nervous about their assumptions :)

No, your idiocy is just frustrating.
R0b1Billion
5/28/2014 10:20:24 PM
At 5/28/2014 6:16:51 PM, PeacefulChaos wrote:

To clarify, is your view on ethics is that it's concerned with "selfishness"? That is to say, moral acts would be those where you "give" something beneficial, whilst immoral acts would be where you refuse to give?

I wouldn't include the part about giving something that's beneficial... that seems to include judgment/valuation of ends and I care not for those. Simply put, our intelligence gives us privileges but also responsibilities - intelligent beings must deal with the propensity to artificially help ourselves. I think what confuses most is that we look at lower animals, say, a lion eating a gazelle, and say "hey, he's selfish too - it's natural!" But it really isn't. Selfishness by a human, like buying a home on the water that restricts others from accessing it (not to mention animals that want a drink) is different than a lion's kill. The lion's kill is 100% beneficial to the environment, and is in fact vital to the cycle of life our ecosystem depends on. Humans, because of our intelligence, must work to stay "in bounds" to keep our actions naturally beneficial, while avoiding artificiality that harms the environment. Artificiality gives us wonders, but when it contradicts natural phenomena it not only hurts the environment but it hurts us personally as well. And that's a big point - I recognize that morality benefits the user just as much as everyone else. Being a good person feels good, makes us healthy, and most importantly makes us loved by others.

More specifically, if you help someone fix their car, that would be giving your time and energy for a beneficial cause, correct? But if you did not help them fix their car, it would be a selfish act, as you refuse to give up your time and energy and leave them helpless.

Is this a correct interpretation of your views, or have I missed your point?

Your scenario is on the right track, but it still lacks information. What is your other choice? If you were just going to play video games, then yeah. If you were going to tend to your family, then it might even be selfish to help him because you aren't thinking of them. But that still isn't enough info. Why are you helping him? Is it for ONLY the reason of being helpful? You could have bad motives, like perhaps you really want to trap him into owing you a favor because you know you need him later on and haven't yet told him. My point here is two-fold: first, there is no relevance of value. I don't add the utility of helping versus the utility of not, I simply let the ends fall away as meaningless. Second, even a seemingly "good" act can be bad (and vice-versa) depending on the reasoning used to do it. What matters, or the moral analysis, is why you are doing it. If your reasoning includes one of several key immoral impetuses, namely pride, greed, or indulgence, then you can be sure you are not doing the right thing - even if the ends seem great and you can justify it. In fact you always WILL justify it, that's where strife comes from. Another way to look at it is Kant's Categorical Imperative - act in a way which, if repeated by others, would not cause harm.
R0b1Billion
5/28/2014 10:21:13 PM
At 5/28/2014 9:53:16 PM, dylancatlow wrote:
At 5/28/2014 9:46:25 PM, R0b1Billion wrote:
So much ad hom in this thread... I can only assume I am making people nervous about their assumptions :)

No, your idiocy is just frustrating.

I apologize for my stupidity...
PeacefulChaos
5/28/2014 10:26:33 PM
At 5/28/2014 10:20:24 PM, R0b1Billion wrote:

I wouldn't include the part about giving something that's beneficial... that seems to include judgment/valuation of ends and I care not for those.

Ah, the reason I included that is because if I didn't, then "giving" could be interpreted to include anything. For example, I could "give" my time and energy to go kill someone with a deadly poison (which I would also be giving up, which I had to buy with money that I gave up), but that isn't a good thing.


Artificiality gives us wonders, but when it contradicts natural phenomena it not only hurts the environment but it hurts us personally as well. And that's a big point - I recognize that morality benefits the user just as much as everyone else. Being a good person feels good, makes us healthy, and most importantly makes us loved by others.

More specifically, if you help someone fix their car, that would be giving your time and energy for a beneficial cause, correct? But if you did not help them fix their car, it would be a selfish act, as you refuse to give up your time and energy and leave them helpless.

Is this a correct interpretation of your views, or have I missed your point?

Your scenario is on the right track, but it still lacks information. What is your other choice? If you were just going to play video games, then yeah. If you were going to tend to your family, then it might even be selfish to help him because you aren't thinking of them. But that still isn't enough info. Why are you helping him? Is it for ONLY the reason of being helpful? You could have bad motives, like perhaps you really want to trap him into owing you a favor because you know you need him later on and haven't yet told him. My point here is two-fold: first, there is no relevance of value. I don't add the utility of helping versus the utility of not, I simply let the ends fall away as meaningless. Second, even a seemingly "good" act can be bad (and vice-versa) depending on the reasoning used to do it. What matters, or the moral analysis, is why you are doing it. If your reasoning includes one of several key immoral impetuses, namely pride, greed, or indulgence, then you can be sure you are not doing the right thing - even if the ends seem great and you can justify it. In fact you always WILL justify it, that's where strife comes from. Another way to look at it is Kant's Categorical Imperative - act in a way which, if repeated by others, would not cause harm.

So you judge an act by its intentions instead of its ends?

This makes sense, as good intentions would be developing your own character regardless of the outcome, but what if you have purely good intentions and the outcome is something disastrous, such as hundreds of people dying? Surely this cannot be considered good. But at the same time, the person didn't want or mean for the people to get hurt.
R0b1Billion
5/28/2014 10:52:06 PM
At 5/28/2014 10:26:33 PM, PeacefulChaos wrote:

So you judge an act by its intentions instead of its ends?

Definitely. This is the basis of nearly all religion, and if one does not respect the fact that "the ends don't justify the means," then you allow any act to be justified somehow. You become a consequentialist or a utilitarian. Ethics becomes an infinitely complex search for valuing and predicting outcomes, and perhaps most dangerous of all, the strong gain a distinct advantage over the meek because they are able to create THE PERCEPTION of value (e.g., wealth, indulgences, destruction of enemies). Our perceptions of value are distorted and we often don't even know what's really good for us (one need only look at smokers, gamblers, TV-watchers, etc. to see that truth) so we become essentially addicted to ends that aren't even good.

This makes sense, as good intentions would be developing your own character regardless of the outcome, but what if you have purely good intentions and the outcome is something disastrous, such as hundreds of people dying? Surely this cannot be considered good. But at the same time, the person didn't want or mean for the people to get hurt.

This is where faith comes in. Faith is a virtue, and I challenge you to wake up tomorrow and resist selfishness and see if any bad comes from it. Give a waitress an extra few bucks; let somebody pass you or go first at a stop sign; do something for somebody without expecting payment; resist indulgences you know you'd be better off without; hold back your anger if you feel like hurting somebody. I think you will find that these decisions are never going to result in hurting others.

I believe the seven sins, which were created by the church in response to Jesus' teachings, are the best list of immoral impetuses to avoid: pride, greed, lust, envy, sloth, wrath, and gluttony (each of the preceding examples was based off of one of these). All day long we are affected by these desires that change how we naturally would act... one way to look at it is, looking back at your day while lying in bed at night, how would you have liked the day to have gone? You would like to say you were productive, ate healthy and got exercise, didn't let your anger go/hurt anybody, and helped others out as opposed to made their lives more difficult. All these things help yourself as much as they do anybody else. But immorality acts on us not at night when we reflect back (or when we plan ahead), it affects us in real time. It makes us buy that fast food we didn't want to, makes us snap on that coworker we probably shouldn't have, etc. Immorality is weakness that stops us from achieving our goals. So when you ask me about the potential danger of people dying because I valued the means over the ends, my only response is that it's simply impossible to make the wrong choice using this type of reasoning. Hurting people is only ever a result of the justification involved in suppressing your doubts about doing something you know is wrong.

With limitless examples right out of everyday life to choose from (that we all struggle with to an amazingly high extent), it is nothing short of insane to concentrate on something like the TP to analyze morality.
The_Fool_on_the_hill
5/28/2014 11:41:21 PM
At 5/25/2014 1:05:35 AM, R0b1Billion wrote:
The Trolley Problem (TP) puts one in a scenario where they are asked to make a choice to kill in order to save. The scenario is manipulative for several reasons.
R0b1Billion: First, it measures prudence, not ethics.

The Fool: Are you going to support that claim, or do you consider it ideologically self-evident?

R0b1Billion: Is there any question during the process that your only desire is to minimize harm to everybody?

The Fool: I believe you are questioning it right now..

R0b1Billion: What your actual ability to do is irrelevant, since morality is not dependent upon one's ability to effectively carry out their wishes, only that their wishes, as reflected in their actions, be just.

The Fool: Would you agree that if justice were a number, it would be even, or at least evenly split in half?

In other words, there is a certain intellectual necessity to having "Just" desires.
As justice requires balancing of biases, and knowing what follows as fair.

To think in "ratio": thus the expression "ratio-cination." Or something like that.

In concerto, we are less harsh in judging children for crimes, and I believe it is because we feel that they have not mastered their rational capacity enough to be deemed fully responsible. And the same is true with animals, no?

Look at it this way, a foolish way perhaps, but sincere and true.

Would you not agree that you know more about ethics now than you did when you were five years old?

If so, then it's fair to say that morality is something learned, more or less, and so, correspondingly, we should expect a slow learner to have a harder time being moral.

I stress "being moral", because it is not enough to simply desire to be moral, nor is it simply enough to desire what is moral; for its demonstration, I give the following title:

Good will is not good enough

Let's use our moral "punching bag", Hitler, for this one, again..

For he would say, and I think sincerely thought it would be best if we were to get rid of what was considered bad genes of humanity, so that humanity would be stronger and suffer less in the long run.

Insofar as this is true, and there is good reason to believe it, then this would be a case where the general intention is good, but he is nonetheless immoral.

Another example?
Okay, let's say somebody thought that we would all be better off if we were dead, because that person had come to that conclusion themselves, and so killed 1000 people for this cause.

By virtue of being better off, the person had a good intention, to better us; nonetheless, these would not be moral actions, nor would that person be a moral person by virtue of their good-intended actions.

The Fool: In fact, it can be argued pretty well, as it quite famously was by Socrates, that "all intention is good intention."

Everybody wants what is good. For nobody truly wants what is bad, by virtue of being bad.
For we are all guided by, a positive intuition; a good feeling we get when we do one thing or another. And correspondingly deterred by a negative intuition, which indicates to us that something is bad.

These feelings, in themselves, share something with the good and the bad, respectively, but are in a love and hate relationship; like opposites, they attract and repel, and the Yin and the Yang mix and mingle flirtatiously, giving birth to many confusions.

It takes well-structured and measured reason to demarcate the offspring of the Tao effectively.

What seems good at the time, and perhaps even is for certain times, can, like a well, run dry, or reveal itself to be poisonous in summation. And so it is the same with beliefs. For as the world changes, beliefs must change in synchronization with the world to maintain their status.

In conclusion, it seems that, whether we like it or not, morality is a type of knowledge which can be used more efficiently by those with the right intellect to pave the way to justice. For we can be in moral error, and of fallacious morality, as we can be with any type of belief and/or judgment.

Or at least that's the word on the hill. A fool's word at best. Take it for what it's worth.

Perhaps everything.

Perhaps nothing.

Against The Ideologist

Perhaps 1 billion, Even..
<(8D)

https://www.youtube.com...
"The bud disappears when the blossom breaks through, and we might say that the former is refuted by the latter; in the same way when the fruit comes, the blossom may be explained to be a false form of the plant's existence, for the fruit appears as its true nature in place of the blossom. These stages are not merely differentiated; they supplant one another as being incompatible with one another." G. W. F. HEGEL
The_Fool_on_the_hill
Posts: 6,071
Add as Friend
Challenge to a Debate
Send a Message
5/29/2014 12:57:27 AM
Posted: 2 years ago
The Interdependence of Morality and Intelligence 1.1(Edit)

R0b1Billion: First, it measures prudence, not ethics.

The Fool: Are you going to support that claim, or do you consider it ideologically self-evident?

R0b1Billion: Is there any question during the process that your only desire is to minimize harm to everybody?

The Fool: I believe you are questioning it right now.

R0b1Billion: What your actual ability to do is irrelevant, since morality is not dependent upon one's ability to effectively carry out their wishes, only that their wishes, as reflected in their actions, be just.

The Fool: Would you agree that if justice were a number, it would be even, or at least evenly split in half?

In other words, there is a certain intellectual necessity to having "just" desires, as justice requires a balancing of biases and a knowledge of what follows as fair.

To think "in ratio"; thus the expression "ratio-cination." Or something like that.

In concert, we are less harsh in judging children for crimes, and I believe it is because we feel they have not mastered their rational capacity enough to be deemed fully responsible. And the same is true with animals, no?

Look at it this way, a foolish way perhaps, but sincere and true.

Would you not agree that you know more about ethics now than you did when you were five years old?

If so, then it's fair to say that morality is something learned, more or less, and so we should expect a slow learner to have a harder time learning how to be moral.

I stress "being moral," because it is not enough simply to desire to be moral, nor is it enough simply to desire what is moral. For its demonstration, I give the following title:

Good will is not good enough

Let's use our moral "punching bag," Hitler, again for this one.

For he said, and I think sincerely believed, that it would be best if we rid humanity of what he considered bad genes, so that we would be stronger and suffer less in the long run.

Insofar as he thought this, and there is good reason to believe he did, it is a genuine case where the general intention is good, yet he is nonetheless immoral, regardless of that intention and of any act carried out in the attempt to fulfill it.

Another example?
Okay, let's say somebody thought that we would all be better off dead, because that person had come to that conclusion on their own, felt that God or Jesus wanted it, and acted accordingly.

By virtue of aiming to make us better off, the person had a good intention: to better us. Nonetheless, these would not be moral actions, nor would that person be a moral person by virtue of their good intentions. Why? It's not Justified True Belief; not knowledge, which goes hand in hand with intelligence.

In fact, it can be argued pretty well, as it quite famously was by Socrates, that "all intention is good intention."

Everybody wants what is good. For nobody truly wants what is bad, by virtue of what it means to be bad.
For we are all guided by a positive intuition, a good feeling we get when we do one thing or another, and correspondingly deterred by a negative intuition, which indicates to us that something is bad.

These feelings in themselves share something with the good and the bad, respectively, but are in a love-and-hate relationship. Like opposites, they attract and repel; the Yin and Yang mix and mingle flirtatiously, giving birth to many confusions.

It takes diligently structured reason and precise measurement to most effectively demarcate the children of the Tao.

What seems good at the time, and perhaps for certain times "is" in a way, can, like a well, run dry, or reveal itself to be poisonous, or become so when combined with something else. And so it is with beliefs and actions. For as the world changes, beliefs must change in synchronization with the world to maintain their sincerity with reality.

In conclusion, it seems that, whether we like it or not, morality is a type of knowledge which can be used more efficiently by those with the right intellect to pave the way to justice: one who can better level out their biases. Thus Justice herself carries a balance, a measure.

And an imbalance is a type of moral error; a set of values can be morally fallacious, as can any type of belief or judgment. For even religious morality has improved since the days of burning people at the stake.
And the necessary condition of moral progress is a change in the belief of what is moral.

Or at least that's the word on the hill. A fool's word at best. Take it for what it's worth.

Perhaps everything.

Perhaps nothing.

Against The Ideologist

Perhaps 1 billion, Even..
<(8D)

https://www.youtube.com...
"The bud disappears when the blossom breaks through, and we might say that the former is refuted by the latter; in the same way when the fruit comes, the blossom may be explained to be a false form of the plant's existence, for the fruit appears as its true nature in place of the blossom. These stages are not merely differentiated; they supplant one another as being incompatible with one another." G. W. F. HEGEL
Graincruncher
Posts: 2,799
Add as Friend
Challenge to a Debate
Send a Message
5/29/2014 7:30:35 AM
Posted: 2 years ago
At 5/28/2014 9:46:25 PM, R0b1Billion wrote:
So much ad hom in this thread... I can only assume I am making people nervous about their assumptions :)

Mine was an honest question. The fact you've apparently taken it as an insult is an answer in itself. I think I'll save myself the headache and leave others to try and deal with your idiocy.