On Zombie Philosophy
Physics has empiricism. If your physical theory doesn’t make a testable prediction, physicists will make fun of you. Theories that do make predictions are tested, then adopted or refuted based on the evidence. Physics is trying to describe things that exist in the physical universe, so physicists have the luxury of just looking at stuff and seeing how it behaves.
Mathematics has rigor. If your mathematical claim can’t be broken down into the language of first-order logic or a similar system with clearly defined axioms, mathematicians will make fun of you. Claims that can be reduced to their fundamentals are then verified step by step, with no opportunity for sloppy thinking to creep in. Mathematics deals with ontologically simple entities, so it has no need to rely on human intuition or fuzzy high-level concepts in language.
Philosophy has neither of these advantages. That doesn’t mean it’s unimportant; on the contrary, philosophy is what created science in the first place! But without any way of grounding itself in reality, it’s easy for an unscrupulous philosopher to go off the rails. As a result, much of philosophy ends up being people finding justifications for what they already wanted to believe, rather than any serious attempt to derive new knowledge from first principles. (Notice how much broader the spread of disagreement is among philosophers, on basically every aspect of their field, than among mathematicians and physicists.)
This is not a big deal when philosophy is a purely academic exercise, but it becomes a problem when people are turning to philosophers for practical advice. In the field of artificial intelligence, things are moving quickly, and people want guidance about what’s to come. Should we consider AI to be a moral patient? Does moral realism imply that advanced AI will automatically prioritize humans’ best interests, or does the is-ought problem prevent that? What do concepts like “intelligence” and “values” actually mean?