I'd like to offer some advice to make things go a bit more smoothly. There's a widespread view that all beliefs are political, you can't be apolitical, and anyone arguing for a belief opposing yours must be an enemy. To me, that view is pretty much a type error. Beliefs are value-neutral. Only arguments for or against beliefs can be political or not.
More specifically, some arguments are rational (based on evidence) while other arguments are political (based on who benefits and who loses). You can be a very civil person, but still reach for political arguments when defending your beliefs, and thus cause net harm. Or you can be a rude person, but drawn to arguing based on evidence, and thus cause net benefit. It's up to you.
Now go forth and make a flamewar :-)
So if people choose to believe in something because doing so has certain consequences, then a belief can be political.
EDIT: To expand on this a little... It seems to me that you divorce a belief from its consequences. Since there are a lot of beliefs that have immediate and direct political and social consequences, I think that this separation is questionable.
If you have a belief, you will probably act on it. Having a belief and _not_ acting on it _at all_ seems rather useless and abstract to me. I'm not saying that this never happens, but in general, if a person has a belief, they will act on it.
So, to be blunt: for some beliefs, having them is a political act.
Nearly all beliefs are of this type. I believe the Earth orbits the Sun because it helps me predict the seasons.
In a more general sense, consider the "laws of physics". People believe in them not because of their consequences, but because they explain reality (and quite well). So people believe in gravity because it has worked for them in the past, and because it has been verified.
Also... _not_ believing in gravity will not make much of a difference for your life (as long as you don't start jumping off cliffs of course).
EDIT: I forgot about religion. AFAIK, most people believe in religion because they have been taught/indoctrinated/raised to believe in it, and _not_ because they have (deeply) thought about the social consequences of their particular religion.
You might be implying that but it's worth stating as a particular case.
I'd say this article probably fits that category, and is why the flamewar will emerge :)
For example, there is plentiful evidence that women are less represented in certain STEM fields. But is that because they are ill-suited to those fields, or because the fields themselves are ill-suited to women? Do we accept this lack of representation as an inevitable consequence (it is this way, and it ought to be so)? Or do we broaden the possibilities and consider what these fields might be like, and what they might accomplish, if they were more female-friendly?
I'd take a wild stab in the dark that prior political world view is probably a greater predictor than most for which of those options seems the more appealing.
Formally:
1) A => B
2) I would really like B to be true, because it has some benefits.
3) Therefore A must be true.
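Put concretely, the invalidity of this pattern can be checked mechanically. A quick truth-table search (just an illustration of the logical point, not anything from the thread) looks for a case where A => B holds and B is true, yet A is false:

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q is false only when p is true and q is false."""
    return (not p) or q

# Search for a counterexample: A => B holds and B is true, yet A is false.
# If one exists, the inference 1)-3) is invalid.
counterexamples = [
    (a, b) for a, b in product([False, True], repeat=2)
    if implies(a, b) and b and not a
]
print(counterexamples)  # [(False, True)] -- B can be true while A is false
```

So wanting B, and even getting B, tells you nothing about whether A holds.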
If we are in search of truth and want to build a consistent model of the world then we just can't accept this kind of reasoning.
Also, even if A really turns out to be false, it doesn't mean that B can't be true and our world will forever be sad. For example, even if it turns out that all people are not created equal, we can still live in a just and enlightened society which treats everyone fairly.
And of course the implication A => B may not even hold. Is it really true that if all people were created equal, society would benefit?
I wonder why you didn't go a step further to postulate "only actions can be political or not". Of course argument is action, but it could be claimed that only argument qua action is political.
> If we are in search of truth and want to build a consistent model of the world then we just can't accept this kind of reasoning.
Yes, you can. It's called generalizing. Remember, we're not doing math here, this is ethics.
Think about all the different situations where being treated equally to someone else is a good thing. Now you generalize that into "everyone is created equal", and that turns into a justification for all these situations.
If you want to criticise the generalization into A, there is a way:
Construct some C so that
1) A => C
2) C is not beneficial
Alternatively, you can attack the implication "A => B" or you can question whether or not B is beneficial.
> and because it has been verified.

That may be true for you, but it is essentially an appeal to authority. Most scientists claim to be falsificationists, and the currently agreed scientific method is one of Popperian falsification.
Your comment on cliffs betrays an ignorance of epistemology: people didn't always have a gravitational theory, and as far as we know no animals have one, yet that doesn't mean they jump off cliffs. One's beliefs about reality don't fundamentally change reality.
If the gap is big enough, I think the result becomes indistinguishable from a theocracy, with analogues to blasphemy and ostracization for describing reality too closely, enforced by the state.
The valid generalization would be "everyone should be treated equally" not "everyone is created equal" which is a totally different thing.
These attributes are mostly orthogonal though. Most evidence, even if conclusive, will show that certain approaches benefit one group more than some others. There is no such thing as purely evidence-based policy, because there is no agreement on what to optimize for.
I think this whole evidence-based fad completely misses the point. It insinuates that there is a perfect possible outcome that will benefit all equally, and that is simply not the case. And by focusing on the outcomes, it detracts from the real sticking point: let's talk about how to harmonize goals, instead of only results.
EDIT: I think what you're doing here is criticising A by constructing A' and arguing that it is better (by having the same consequences while being less general). That is a good way of criticising the generalization as well.
GP said:
>> People can hold beliefs because they have certain consequences.
> Nearly all beliefs are of this type. I believe the Earth orbits the Sun because it helps me predict the seasons.
I think there are a lot of beliefs that are held not because they have beneficial consequences, but for other reasons.
Religion as a belief is held by most people because they were brought up with it (converts notwithstanding).
Scientific theories are believed (i.e. held true, used to explain reality) because they predict things that actually happen and have not (yet) been falsified.
Both of these are not believed primarily because their belief has actual consequences IMHO.
1) The placebo effect predicts that believing I'll get over a bout of the flu quickly increases my chances of getting over the flu quickly
2) I believe in the placebo effect and would like to get over the flu quickly
3) Therefore, I choose to believe I will get over the flu quickly.
There is no reason besides practicality to believe proposition 3), but if propositions 1) and 2) are accepted, proposition 3) follows naturally. To my own way of thinking, some beliefs are self-justifying, but I'd be interested to hear how your worldview deals with this example.
edit: spacing
For example, my decision whether to take an umbrella today must be based on my honest best guess whether it'll rain today (and the relative utilities of various outcomes). If I shift my best guess one inch away from what's warranted by evidence, to obey social pressure or something, then acting-as-if the new belief was true will predictably lead to lower expected utility for me. That holds always, no matter how controversial the belief.
When social pressure is weak, it mostly makes people lie about their beliefs, while still acting-as-if their best guess was true. When social pressure gets strong, people start acting-as-if false things were true, and get lower utility. No amount of pressure can change the fact that actions based on accurate beliefs lead to higher expected utility. That's why I'm not a fan of social pressure on beliefs. For both individuals and societies, the best consequences are achieved by believing what is true.
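The umbrella argument can be made numeric. Here is a toy expected-utility sketch (all probabilities and utilities are invented for illustration) showing that an action chosen under a belief shifted away from the evidence scores worse against the actual odds:

```python
# Toy expected-utility model of the umbrella decision (all numbers invented).
P_RAIN = 0.6  # honest best guess from the evidence

# utility[(action, it_rains)]
utility = {
    ("umbrella", True): 0,      ("umbrella", False): -1,   # mild hassle of carrying it
    ("no_umbrella", True): -10, ("no_umbrella", False): 1,
}

def expected_utility(action, p_rain):
    """Expected utility of an action given a probability of rain."""
    return p_rain * utility[(action, True)] + (1 - p_rain) * utility[(action, False)]

def best_action(believed_p):
    """The action a rational agent takes, given what it believes p(rain) to be."""
    return max(["umbrella", "no_umbrella"],
               key=lambda a: expected_utility(a, believed_p))

# Acting on the honest estimate vs. a socially-pressured estimate,
# both evaluated against the *actual* probability of rain:
honest = best_action(P_RAIN)      # chooses "umbrella"
pressured = best_action(0.1)      # shifted belief -> chooses "no_umbrella"
print(expected_utility(honest, P_RAIN))     # -0.4
print(expected_utility(pressured, P_RAIN))  # -5.6
```

The agent acting on its honest estimate does better in expectation, which is the point: distorting the belief distorts the action, and reality grades the action against the true probabilities.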
> Yes, you can. It's called generalizing. Remember, we're not doing math here, this is ethics.
To strive for a consistent set of beliefs grounded in reality is, as far as I am concerned, closer to most branches of philosophy than to mathematics, and it seems particularly relevant to ethics (e.g. you could construct a set of beliefs about what counts as 'good' or 'evil' while ignoring all perception of reality, but would the result be desirable?).
Two minor points:
1) What is truth? How do you know whether something is true? By perceiving it? Is your perception not influenced by belief?
I'm not saying that we cannot know truth, ever. But we have to keep in mind we might be wrong, too...
2) Believing in something might _make_ it become true. Or, to put it differently: Belief may lead to actions which change reality - essentially a self-fulfilling prophecy.
I might consider this an edge case and include it in my worldview as an exception ;)