Being a skeptic can be hard on one’s soul. That’s always been true, ever since the invention of skepticism as a philosophy of life back in the times of Pyrrho of Elis (365–275 BCE) (Bett 2019). It’s hard not just because it requires rigorous intellectual self-discipline but because—let’s face it—much of the world ain’t skeptical at all. Indeed, skeptics such as Socrates of Athens (470–399 BCE) have historically been deemed dangerous enough to occasionally be condemned to death on the charge of “corrupting the youth,” that is, teaching critical thinking.
If you, like me, have ever felt a bit dispirited and perhaps even overwhelmed by the burden of skepticism, cheer up. Here comes my pep talk. Of sorts.
Let’s start with the basics, if you don’t mind. Skeptics have a bad reputation, rooted in the dictionary definition of the word: “a person inclined to question or doubt accepted opinions.” But that’s not really what the word originally meant. The English term comes from the Greek skeptikos (pl. skeptikoi), meaning “inquirer” (which is why the running joke with the editor of SI is that the title of this magazine is redundant: “Inquirer Inquirer,” like ATM machine).
In other words, skeptics are not people who are inclined to disbelief. On the contrary, they are prone to inquiry and therefore to adjust, or proportion, their beliefs to the available evidence—which just as importantly means that true skeptics are always open to changing their mind should the evidence warrant it.
As I said, people have been doing skepticism for a long time, and arguably the first book on pseudoscience was written in 44 BCE by Roman philosopher Marcus Tullius Cicero (see Fernandez-Beanato 2020). Called On Divination, it was a systematic takedown of astrology and other types of fortune-telling. In book one, section 7, Cicero wisely writes: “To hasten to give assent to something erroneous is shameful in all things.” Indeed.
Notice that Cicero says that it is “shameful” to agree to notions we have good reason to suspect may be wrong, which implies that there is a moral dimension to the practice of skepticism. That dimension was spelled out clearly by mathematician William Kingdon Clifford, who in 1877 published an essay titled “The Ethics of Belief” in the journal The Contemporary Review. Clifford wrote that “It is wrong always, everywhere, and for anyone to believe anything on insufficient evidence.”

The gloves were off, and shortly thereafter philosopher and psychologist William James responded to Clifford with another famous essay, “The Will to Believe,” published in 1896. James argued that there are some circumstances in which it is permissible to believe despite the absence of evidence. Pretty much the only example he could come up with, however, was the existence of God on the grounds that such belief—even though unsubstantiated—has positive effects both at the individual and social levels. Sure.
The Variety of Challenges Faced by Skeptics
Skepticism is an uphill battle—and not just because it is usually shunned by society. One major obstacle for the practicing skeptic is what is informally known as Brandolini’s Law (Brandolini 2013), also referred to as the bullshit asymmetry principle. Italian programmer Alberto Brandolini articulated it in this fashion: “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.”

I’m sure we have all encountered Brandolini’s Law in our skeptical practice. You barely have time to respond to one point raised by a purveyor of pseudoscience and they have already thrown at you another ten “but what about …” statements, each of which takes a few seconds to articulate and several minutes to respond to. That’s why I stopped doing formal debates against creationists a long time ago; the deck is rhetorically stacked against you, and there’s no way to come out of the experience with anything better than a mild loss.
Speaking of bullshit, a major recommended source on the topic is the essay aptly titled “On Bullshit,” published in 2005 by philosopher Harry Frankfurt. In it, Frankfurt states that “One of the most salient features of our culture is that there is so much bullshit.” He goes on to helpfully distinguish lying from bullshitting. You see, the liar, to be effective, has to have some sense of the truth, if nothing else to steer clear of it. The bullshitter, by contrast, doesn’t care and possibly doesn’t even know where the truth is; he just mixes truths, half-truths, lies, and made-up stories to manipulate the audience and achieve his goal, which is usually political or financial gain. Or both.
I promise the “pep” part of this article is coming. But first we need to ask the obvious question: Why do so many people fall for pseudoscience and associated bullshit? It’s complicated, but a lot of it has to do with several cognitive biases that come with being human. These have been intensely studied by psychologists (Kahneman 2012) and appear to map pretty well with what philosophers have for a long time been describing as (informal) logical fallacies (Pirie 2015).
Cognitive biases probably evolved as useful heuristics to navigate the relatively simple life that characterized human beings for much of their existence as a species, back in the Pleistocene. But in today’s complex world of fake news, alternative facts, and AI-generated deep fakes, biases are mostly dangerous. Unfortunately, the available evidence shows two disconcerting facts: 1) pretty much everyone is affected (yes, including skeptics), and 2) even being made aware of biases doesn’t do much to help the individual counter them.
What does help is to mindfully engage in open dialogue with other people, but I’ll get to that in a minute. First, the somewhat good news.
What, Then, Are Skeptics to Do?
Look, skeptics are reality based, yes? Well, the reality is, we ain’t ever gonna win this fight. Skepticism, critical thinking, science, and education will likely never turn the world into a paradise of enlightenment where everyone proportions their beliefs to the evidence and behaves in a way that is consciously ethical. History, if nothing else, should have taught us this much by now.
But in a sense, that’s actually the good news. If we accept reality and abandon quixotic goals, we can get down to the real business of a skeptic: to inquire into issues and educate as many people as will listen. The goal is not to change the world (not likely at all) but to make it a slightly better place, one changed mind at a time (far more feasible).
Some contemporary authors insist on telling us that we can’t do even that. Social psychologist Jonathan Haidt (2001), for instance, has famously argued that Homo sapiens is not really a rational species, as Aristotle said, but rather a rationalizing one. According to Haidt, we have strong gut feelings about things—for instance, moral judgments—and when challenged we make up plausible-sounding reasons why such feelings are justified.
There is indeed some empirical evidence to support such a contention, but not as a general rule of human behavior and most definitely not as an exceptionless rule. Otherwise, presumably, we would have to conclude that Haidt himself writes papers that engage in rationalizations. The reality is that we rationalize largely under two circumstances: 1) when we have only low-quality information available to us, or 2) when we engage in motivated reasoning. The first can be countered by providing better-quality information, the second by, again, engaging in dialogue.
But how do we do the latter? By taking seriously a much maligned ancient approach to the problem of knowledge and persuasion: rhetoric (Reames 2024). Rhetoricians know how to counter rationalization—to the extent that it is possible—by, for instance, using the Socratic method (Farnsworth 2021).
If you read any of the Socratic dialogues by Plato, such as the Laches, which is about the meaning of courage, you will notice a pattern. First, Socrates doesn’t lecture other people on why they are wrong (too many skeptics have precisely that tendency, so we need to work on that). Rather, he asks questions. These questions are not random, of course, but aim at eliciting a preliminary response from Socrates’s interlocutors. “What is courage?” “I think courage is X.” The conversation then continues by way of additional questions meant to explore the statements made so far. At some point, Socrates says something along the lines of: “But wait a minute, earlier you said X, but now you just said Y. And X and Y seem to be in tension with each other. How do we resolve that?”
The other guy was usually surprised and would concede the point, in part because Socrates kept talking in collaborative, not adversarial, terms. This is a joint project, not a debate. After several rounds of this back and forth, the dialogue often ended in aporia, that is, confusion. Nobody, allegedly including Socrates, knew what the answer was. But such confusion—fundamentally the admission that perhaps we don’t know as much as we thought we did—is the beginning of wisdom and leads to continued inquiry.
The Socratic approach works because it induces what modern psychologists call cognitive dissonance. At some point, one or more of the interlocutors in the typical dialogue face the fact that they hold two beliefs (X and Y above) that don’t really go together very well. That’s an uncomfortable psychological state and often (though not always) prompts people to pause and reconsider. (At other times, it prompts them to storm out in a huff and a puff, which also happened in some of the Socratic dialogues.) So that should be the goal of an effective skeptic: neither to convince people on the spot (because that seldom works), nor certainly to insult and belittle them (which pretty much never works). Instead the goal is to trigger cognitive dissonance so that they may begin to admit to themselves that perhaps they need to do some more thinking.
More Challenges for the Modern Skeptic
There are other seemingly hard-to-overcome challenges facing the modern skeptic. For instance, you may have heard that one more explanation for why people are not going to listen to reason is that reason itself evolved not to discover truths about the world but to manipulate others. This is the so-called “Machiavellian” theory, named after the notorious (and much misunderstood) Renaissance Italian political writer Niccolò Machiavelli, author of The Prince.
The theory was first proposed by primatologist Frans de Waal (1982), based on the observation of complex “political” behaviors within groups of chimpanzees, including alliances between individuals and other kinds of maneuvering that seem to aim at better social positioning. If that’s what intelligence is for, then as the argument developed by other authors goes, we are simply not wired to inquire into the nature of the universe and are instead prone to manipulate or being manipulated by others.
Setting aside that other evolutionary biologists have criticized the Machiavellian theory on empirical grounds, and the further point that it’s really difficult to establish what any trait actually evolved for, let alone one as complex as the structure of the brain, there is of course nothing to preclude human intelligence from performing well at more than one task. We certainly do engage in social politics, but we also just as clearly continue to discover things about how the world works. Imagine how silly it would be for someone to argue (and to be clear, this is not what de Waal did!) that personal computers were invented “for” a particular task—say word processing—and that therefore we shouldn’t expect them to be any good at several other tasks, such as web browsing or spreadsheet calculations.
Perhaps a more difficult challenge got started as recently as 2022 with the release of ChatGPT and the onset of publicly available generative AI software (gAI). This is not the place for an in-depth discussion of the pros and cons of the technology, but a paper by philosopher Michael Townsen Hicks and collaborators (Hicks et al. 2024) investigated gAI’s famous “hallucinations” and argued that these machines behave as bullshit generators in the sense of bullshit already discussed above.
In part, they conclude:
Calling [gAI’s] mistakes “hallucinations” isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. … Calling chatbot inaccuracies “hallucinations” feeds into overblown hype about their abilities among technology cheerleaders. … Calling these inaccuracies “bullshit” rather than “hallucinations” isn’t just more accurate; it’s good science and technology communication in an area that sorely needs it.
We just can’t catch a break, can we?
The Importance of (Virtuous) Persuasion
But do not despair. More help is forthcoming, again from the field of rhetoric. Still one of the best treatises on the topic is Aristotle’s Ars Rhetorica. In book II, he discusses the three means of persuasion we have available when we wish to change someone’s mind: logos, ethos, and pathos. Logos is the only one to which, usually, skeptics and scientists pay attention: it means to get one’s arguments and evidence straight, so that what we are trying to convey actually is, to the best of our knowledge, the truth.
That’s good, but we ain’t going to get very far if we ignore the other two tools. Ethos comes from the Greek word for “character,” and it has to do with establishing our credibility. With most people, especially outside of an academic environment, this isn’t just a matter of sporting “PhD” or “MD” after one’s name. It means being aware of our audience and making a connection that establishes trust. In fact, there is a whole philosophical and psychological literature on trust, which skeptics would be well served to read (see McLeod 2020). It’s much harder than we might at first think to lead people to trust us, even if we have done our homework in terms of logos.
Third, there is pathos, perhaps the most difficult for skeptics and scientists to wrap their minds around. It has to do with connecting emotionally with one’s audience, because if people don’t get the sense that you care about them, they will be prone to reject what you say. In my experience, many colleagues recoil from this suggestion on the grounds that it sounds a lot like emotional manipulation. In a sense, it is. Deal with it, because you are not trying to convince perfectly rational machines but actual flesh-and-blood people, people who need not just good arguments but also trust in and emotional connections with a speaker before they will consider changing their minds—especially about something they care about or that contributes to their identity as human beings (e.g., belief in creationism because they are good Christians).
Both Socrates and Aristotle belong to a broader tradition known as virtue ethics. The notion is that what’s most important in life is one’s own character, which determines what sort of inclinations we have to behave “virtuously” (i.e., helpfully toward others) or not.
A spinoff of virtue ethics is the notion of virtue epistemology (Baehr n.d.), the idea that we ought to work toward improving our grasp of reality because it is the right thing to do. In other words, as Clifford argued in his essay on the ethics of belief, knowledge isn’t just a matter of getting things right; it has an ethical dimension: we ought to strive to get things right.
Virtue epistemologists have developed a list of virtues we should practice and of corresponding vices we should do our best to avoid. Among the virtues, they include benevolence (i.e., applying the principle of charity), conscientiousness, discernment, honesty, humility, studiousness, and several others. Vices comprise close-mindedness, dishonesty, dogmatism, gullibility, self-deception, and wishful thinking.
The problem is that it’s far too easy for all of us, including skeptics, to delude ourselves that we clearly belong to the first category, the virtuous, and not the second, the vicious. But do we? There are ways to check, for instance by asking yourself (or, better yet, asking someone else) whether you pass the following tests.
Did I carefully consider the other person’s arguments rather than dismissing them out of hand?
Did I interpret what the other person was saying in a charitable fashion?
Did I entertain the possibility that I may be wrong?
Am I actually an expert in this matter? Why am I talking about it?
Did I check the reliability of my sources?
Am I simply repeating someone else’s opinion?
Be honest with yourself. How often did you fail one of the entries in the above checklist? Let’s all work on this together; good skepticism begins at home.
At the end of the day, I am struck by the usefulness of a metaphor articulated by Carl Sagan in the subtitle of his classic book, The Demon-Haunted World (Sagan 1995), first published thirty years ago. Sagan referred to science, and by extension reason, as a candle in the dark. Our job is not to expand that candle’s light until the entire world is illuminated by it. History clearly tells us that is a fool’s errand. It can’t be done, probably ever. If we set our standards that high, then we are bound to be seriously disappointed, spiral into depression, and give up.
Instead, our job as skeptics is to make sure the candle remains lit, that those people who care about truth, science, and reason will always find kindred spirits in both moral and practical support. If we are very lucky, we will contribute to more candles being lit and passed around. That’s doable, has been done in the past, and can be done again. So, what are we waiting for?
References
Baehr, Jason S. N.d. Virtue epistemology. Internet Encyclopedia of Philosophy (peer reviewed). Online at https://iep.utm.edu/virtue-epistemology/.
Bett, Richard. 2019. How to Be a Pyrrhonist: The Practice and Significance of Pyrrhonian Skepticism. Cambridge, UK: Cambridge University Press.
Brandolini, Alberto. 2013. Original statement of Brandolini’s Law. X. Online at https://x.com/ziobrando/status/289635060758507521.
de Waal, Frans. 1982. Chimpanzee Politics: Power and Sex among Apes. Baltimore, MD: Johns Hopkins University Press.
Farnsworth, Ward. 2021. The Socratic Method: A Practitioner’s Handbook. Boston, MA: David R. Godine.
Fernandez-Beanato, Damian. 2020. Cicero’s demarcation of science: A report of shared criteria. Studies in History and Philosophy of Science Part A 83: 97–102.
Frankfurt, Harry. 2005. On Bullshit. Princeton, NJ: Princeton University Press.
Haidt, Jonathan. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4): 814–834.
Hicks, Michael Townsen, James Humphries, and Joe Slater. 2024. ChatGPT is bullshit. Ethics and Information Technology 26(2): 1–10.
Kahneman, Daniel. 2012. Thinking, Fast and Slow. New York, NY: Penguin.
McLeod, Carolyn. 2020. Trust. Stanford Encyclopedia of Philosophy (peer reviewed). Online at https://plato.stanford.edu/entries/trust/.
Pirie, Madsen. 2015. How to Win Every Argument: The Use and Abuse of Logic. London, UK: Bloomsbury Academic.
Reames, Robin. 2024. The Ancient Art of Thinking for Yourself: The Power of Rhetoric in Polarized Times. New York, NY: Basic Books.
Sagan, Carl. 1995. The Demon-Haunted World: Science as a Candle in the Dark. New York, NY: Ballantine Books.
