1. elliot+(OP)[view] [source] 2025-07-08 10:09:51
Ah, I understand; you're exactly right that I misinterpreted the notation P(#). I was treating each model as assigning binary truth values to the propositions (e.g., physicalism might reject all but Postulate #1, while an anthropocentric model might affirm only #1, #2, and #6), and placing the probability distribution over those models instead. I think the expected-value computation ends up with the same downstream result: a distribution over the propositions.
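
Concretely, here's a rough sketch of the computation I had in mind; the postulate list, the two models, and the weights below are made up purely for illustration:

    # Sketch: each "model" assigns a binary truth value to every postulate,
    # and we hold a probability distribution over the models. The probability
    # of a postulate is then its expected truth value, a weighted sum over models.
    # (The models and weights here are illustrative, not claims about the actual view.)

    postulates = ["#1", "#2", "#3", "#4", "#5", "#6"]

    # Each model maps a postulate to True/False.
    models = {
        "physicalism":     {"#1": True, "#2": False, "#3": False, "#4": False, "#5": False, "#6": False},
        "anthropocentric": {"#1": True, "#2": True,  "#3": False, "#4": False, "#5": False, "#6": True},
    }

    # Probability assigned to each model (sums to 1).
    model_probs = {"physicalism": 0.6, "anthropocentric": 0.4}

    # P(postulate) = sum over models of P(model) * [model affirms postulate]
    prop_probs = {
        p: sum(model_probs[m] * models[m][p] for m in models)
        for p in postulates
    }

    print(prop_probs)
    # -> {'#1': 1.0, '#2': 0.4, '#3': 0.0, '#4': 0.0, '#5': 0.0, '#6': 0.4}

With only a handful of named models this is just a weighted sum, which is why I expect it to land in the same place as putting the distribution directly on the propositions.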

By incoherent I was referring to the internal inconsistencies of a model, not the probabilistic claims; i.e., a model that denies your own consciousness but accepts the consciousness of others is a difficult one to defend. I agree with your statement here.

Thanks for your comment; I enjoyed thinking about this. I learned the estimating-distributions approach from the rationalist/betting/LessWrong folks and think it works really well, but I've never thought much about how it applies to something unfalsifiable.

replies(1): >>hiAndr+u6
2. hiAndr+u6[view] [source] 2025-07-08 11:32:51
>>elliot+(OP)
You're welcome! Probability distributions over inherently unfalsifiable claims are exotic territory at first, but when I watch actual philosophers debate in the wild, I often find a back-and-forth of such claims that looks very much like two people shifting likelihood values around. I take this as evidence that such a process is what's "really" going on one level removed from the arguments and their background assumptions themselves.