3 Comments
Mechanics of Aesthetics

" For an interesting alternative exposition, see Jaynes’ Probability Theory: The Logic of Science (Ch. 1-2)."

Great rec! The hilarious, snarky remarks left by Jaynes everywhere in the book do not hurt its appeal.

Mahin Hossain

In a graduate logic seminar at St Ands I learned a really good way to think about sigma-algebras. Imagine you're playing some variant of a game called the Observer Game, where you make observations about what happened according to a ruleset. There are many rulesets, but every ruleset must be logically closed (if "X happened" is a possible observation under your ruleset, then "X didn't happen" must also be a possible observation under your ruleset, and so on). A sigma-algebra is a ruleset for some variant of the Observer Game. This is why the smallest possible sigma-algebra is {⊥, ⊤}, corresponding to the two observations "nothing happened"/"something happened".
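
To make that closure idea concrete, here is a minimal sketch of my own (not from the post or the comment), restricted to a finite sample space, where countable unions reduce to finite unions; the function name is_sigma_algebra and the example sets are made up for illustration:

# Sketch: check the "ruleset" closure conditions for a family of subsets
# of a finite sample space omega (standard sigma-algebra axioms).
from itertools import chain, combinations

def is_sigma_algebra(omega, family):
    """Return True if `family` is a sigma-algebra on the finite set `omega`."""
    omega = frozenset(omega)
    family = {frozenset(a) for a in family}
    if omega not in family:                               # "something happened" must be observable
        return False
    if any(omega - a not in family for a in family):      # closed under "X didn't happen"
        return False
    for r in range(2, len(family) + 1):                   # closed under unions (finite = countable here)
        for subsets in combinations(family, r):
            if frozenset(chain.from_iterable(subsets)) not in family:
                return False
    return True

omega = {1, 2, 3, 4}
print(is_sigma_algebra(omega, [set(), omega]))            # True: the smallest ruleset, "nothing"/"something"
print(is_sigma_algebra(omega, [set(), {1}, omega]))       # False: missing the complement {2, 3, 4}

The trivial family [set(), omega] passing the check is exactly the {⊥, ⊤} ruleset described above.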

Joe

I confess I got a bit confused by some of the technical details, but I'm curious as to the justification for introducing "things we definitely know" or "things we outright believe" at all (I know it's commonplace to do so, but I still don't see what justifies it).

If you take a large but finite set of things you (think you) definitely know, do you not hold some non-zero probability that at least one will be refuted? Suppose we define "to know a proposition" as something like "to have a kind of gut feeling about it". Does this really correlate perfectly with truth? The chance that "I have recently started having delusions" seems concretely above zero (though still tiny, I hope), and conditioning on this, I would be wary of any extremely high confidences.

Similarly, I don't see a clean divide between observations and inferences, and so I'm struggling to understand the difference you've defined between credences and confidences (though it may be that I haven't studied your definitions with enough care or rigour, apologies). Why, other than to save time/energy, should I have credence/confidence 1 in the world being round? Maybe I misunderstood and you wouldn't claim that anyway - I'll have a read again at some point.

I'm curious whether you think we have different underlying models of belief - I'm not currently able to formalise mine with as much rigour as you, but do you see anything contradictory/unworkable about the way I'm treating beliefs?
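
One way to spell out the conditioning point (my own worked sketch; D, ε and p are hypothetical placeholders, not notation from the post): write D for "I have recently started having delusions", let ε = P(D) > 0, and let p = P(round | D) < 1 reflect the wariness under delusion. Then by the law of total probability,

P(\mathrm{round}) = P(\mathrm{round} \mid \lnot D)\,P(\lnot D) + P(\mathrm{round} \mid D)\,P(D) \le 1 \cdot (1 - \varepsilon) + p\,\varepsilon = 1 - \varepsilon(1 - p) < 1,

so any non-zero probability of delusion caps the credence strictly below 1.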
