11 Comments
Nathan Ormond

Great Post. To go in a direction that you weren't going in, as an autistic person, my personal opinion is that philosophy is very autistic, but I don't think it should simply be viewed as a green-pasture special interest.

In my opinion, mainstream academic philosophy, through the problems it motivates and the way it socialises and inducts curious autistic innocents into being confused about what ordinary words mean, trapping them in a tangled knot of pseudo-problems, actually develops a kind of disorder that requires therapy to cure! And I sincerely mean this: the only hope in hell people typically have is one-on-one interventions that press on cognitive dissonance and bring people's attention away from the way they've been socialised to use words in philosophical contexts, back to paying attention to the ordinary.

my name is council roadworks.

That's a deeply disturbing, pessimistic, and saddening thought. I hope it's not true. Philosophy is somewhat my only real hope; maybe the fact that I intuit that supports what you proposed.

Max Slavin

Great stuff! A few points.

It's unclear how philosophy can reliably recognize when a theory becomes overly complex or ad hoc. Unlike empirical sciences, which test theories against observable predictions, or math, which relies on formal proofs, philosophy often depends on intuitions and thought experiments. These are tools with no clear external standard of validation.

There are of course internal criteria like coherence, explanatory scope, and parsimony. One could even argue that intuitions (e.g. moral intuitions) are a kind of data. But these standards are soft, often contested, and highly susceptible to bias or a kind of philosophizing that makes an external observer think that what he’s reading is unrealistic gibberish. Without empirical tools, theory creep sometimes becomes unstoppable.

The AlphaFold2 analogy also doesn't work here. Its predictions are rigorously checked against empirical protein structures. Philosophy lacks that kind of feedback loop. Without it, how do we distinguish genuine insight from elegant but unconstrained speculation?

Oak

Thanks for the comment! Some thoughts:

To my mind, intuitions and thought experiments are nothing special: humans, like other primates and social animals more generally, have robust recognitional capacities for conditions like knowledge and fairness. (I take it that the "intuitively" in a sentence like "This set-up is intuitively fair" just specifies that we're relying on this natural pre-theoretical ability to judge fairness, rather than making a reflective theory-laden judgment.) To decide prospectively between different courses of action, a creature also needs to be able to know, with more or less specificity, what would result from each: we do this by applying the same recognitional capacities in the imagination. (I take it that "thought experiments" exploit this capacity; if anything, using clean hypothetical cases allows us to be more precise than using real-life scenarios, with all their confounding factors.) Our imaginative capacities—the analogue of AlphaFold—are constantly being validated against real-world data: if some course of action (our own, or another's) has unforeseen consequences, we are surprised and update our world-model accordingly. We also typically recognize when we cannot possibly know what would happen in some given situation—this feels like uncertainty or confusion.

So, skepticism most plausibly targets the reliability of our recognitional capacities for some condition of interest, rather than the applicability of such capacities to hypothetical cases. I think that, actually, this sort of skepticism is often correct: lots of philosophically-relevant judgments seem to be artifacts of less-than-fully-reliable heuristics. But I take it that the sciences seem to tell us that we—like other animals—can pretty reliably assess conditions like knowledge or fairness; at the same time, they seem to tell us not to trust supposed recognitional capacities for, say, bewitchment or divinity.

In general, then, our imaginative and recognitional capacities—the things at work in thought experiments and intuitive judgments—are open to outside scrutiny and feedback.

Max Slavin

Nice! I really appreciate the epistemological Bayesianism you’re using to describe how our imaginative capacities function. I don’t see a better alternative. I also agree that thought experiments are a useful tool for clarifying counterfactuals, isolating distinctions, etc. Two points:

1. I’m not convinced that our recognitional capacities are particularly reliable for assessing things like knowledge or fairness.

Sure, those aren’t too autistic can often tell if someone knows what they’re talking about or if they’re being fair or selfish. But even there, reliability is often questionable (insert personal anecdote; here’s one I like: https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms). A decent trolley problem or a Gettier case recks the intuitive machinery that didn’t evolve do deal with fringe cases. We’re back to square one, with most of the proposed solutions feeling suspiciously emotivist (did someone say “shrimp welfare”?). And in metaphysics, our intuitions often aren’t even remotely reliable.

2. The Bayesian feedback loop is necessarily subjective, and it gets messy once we try to agree on intersubjective standards.

What counts as evidence? How should we update priors? Are we even in the same epistemic universe? Plenty of people reject Bayesianism altogether, and once that happens, the whole feedback picture starts to unravel. Worse, in intellectual circles, things only deteriorate: intellectuals love a failed theory and quietly despise grounding their views in what has actually worked under real-world conditions. Too many hedgehogs, not enough foxes.
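The prior-updating worry can be made concrete. Under standard Bayesian conditioning, observers who start from very different priors over the same hypotheses are pushed toward agreement by shared evidence, but only if they agree on the likelihoods, i.e. on what counts as evidence for what. A minimal sketch (the coin example, numbers, and function names are illustrative, not from the discussion):

```python
# Hypothetical sketch: two observers with opposed priors about whether a
# coin is biased toward heads (p = 0.8) or fair (p = 0.5) condition on
# the same observed flips via Bayes' rule.

def posterior(prior, heads, tails, p_biased=0.8):
    """Posterior probability of the 'biased' hypothesis after the data."""
    like_biased = (p_biased ** heads) * ((1 - p_biased) ** tails)
    like_fair = (0.5 ** heads) * (0.5 ** tails)
    num = prior * like_biased
    return num / (num + (1 - prior) * like_fair)

optimist = posterior(0.9, heads=40, tails=10)  # strong prior for "biased"
skeptic = posterior(0.1, heads=40, tails=10)   # strong prior for "fair"
print(abs(optimist - skeptic))  # small: shared evidence swamps the priors
```

The convergence here depends entirely on both parties accepting the same likelihood model; if they dispute what the flips are evidence of, no amount of conditioning brings them together, which is exactly the "same epistemic universe" problem.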

I guess we aren’t completely blind, but I’m still skeptical about the entire philosophical project. The discipline has no agree-upon mechanisms to restrain bloated, degenerative theorizing. Especially in domains where the intuitions are fragile and the priors are incompatible.

PS: Is having a Substack now a requirement for all Oxford philosophers? I graduated a few years ago, and that wasn't the case back in the late 10s/early 20s. :)

Oak

(See reply to previous version of the comment for the above points—the disagreement might come down to my being more optimistic about formal methods.) As for Oxford Substacks, yeah, it seems to be getting more popular—I know of at least half a dozen blogs by philosophy students here besides Matthew (who's visiting for the year) and Amos, though it doesn't seem to have been explicitly coordinated.

Max Slavin

Agreed.

Uncoordinated and decentralized, a new Cathedral is forming… Scary stuff. :)

[Comment deleted, Apr 9]
Oak

Measurement instruments used in the sciences are also less-than-fully accurate; our recognitional capacities for things like knowledge or fairness, while also imperfect, are certainly reliable enough to be useful. I agree that intuitive judgments about particular cases don’t get us very far in some areas of metaphysics (although it doesn’t feel too far off areas like rational choice theory or semantics); here, as with mathematics, formal regimentation becomes even more important—I do think that a fair bit of metaphysical theorizing is guilty of overfitting to rather unreliable data. Incidentally, formal regimentation also enables feedback from mathematics: some intuitively attractive principles about structure are inconsistent in higher-order logic, for instance, such that those working in formal metaphysics seem to take coarse-grained views about the individuation of properties much more seriously.

Lance S. Bush

I'm not even convinced intuitions are an actual tool; I think they may be an imagined one. If intuitions are merely beliefs or dispositions to believe, it's unclear how they could serve as much in the way of evidence for anything, since they'd just be descriptive psychological facts about the philosopher's state of mind. If they're supposed to do any work, they may need features that simply don't correspond to actual psychological processes human beings have. I suspect intuitions as sui generis states do not refer to any actual processes going on in human minds, and that the whole idea of philosophical intuitions serving as a tool is fundamentally misguided. Which is to say, I don't simply think intuitions aren't reliable...I think there quite literally are no intuitions, and that the philosophers who believe that there are have imbibed a legacy of pseudopsychological speculation rooted in mysticism.

Oak

(Reply copied from your note, which I see now is not a direct reply to the post.)

In this post, I follow Williamson’s habit of deliberately avoiding the word “intuition” to avoid this sort of baggage (that is, of appearing to treat philosophical judgments as exceptional in some way). But do you dispute the claim that humans and other primates have the capacity to react to conditions like knowledge or unfairness—including in the imagination (for instance, to prospectively evaluate different courses of action)—with enough reliability in favorable conditions to be useful?

my name is council roadworks.

This was fun to read, to the extent I understood it. Thanks.
