The Hard Conflation and Its Opposite: Two Fallacies in Consciousness Debates
David Chalmers named the Hard Problem of Consciousness: the difficulty of explaining why physical processes produce phenomenal experience at all. The problem is real. But current consciousness debates are corrupted by two related rhetorical moves that produce apparent resolutions without earning them by argument.
The first bundles distinct phenomena under a single label so easy evidence appears to establish hard claims. Call it the Hard Conflation.
The second hollows out concepts by keeping the labels while removing what made them the concepts they were. Call it Concept Hollowing.
Both work through linguistic moves rather than argument. Both produce the appearance of having addressed difficult questions while actually changing what the questions were. Together they explain why consciousness debates remain stuck.
Ayn Rand identified the underlying fallacy as “package dealing.” Her definition: “the fallacy of failing to discriminate crucial differences. It consists of treating together, as parts of a single conceptual whole or ‘package,’ elements which differ essentially in nature, truth-status, importance or value.”
The mechanism she described is precise: “substituting nonessentials for their essential characteristics, obliterating differences.” A package deal works by picking some superficial similarity and using it as the binding element while obliterating the essential characteristic that actually distinguishes the bundled things.
The Hard Conflation
In AI consciousness debates, the package is built like this. Bundle distinct phenomena under a single label, “consciousness.” Use a nonessential characteristic as the binding element: the system produces outputs we associate with conscious beings. Obliterate the essential differences among the phenomena being bundled.
What gets bundled:
Functional consciousness: information processing, behavioral flexibility, language generation, outputs that mirror those of conscious humans. AI systems demonstrably have this. It is observable and measurable.
Phenomenal consciousness: the qualitative character of experience. What it is like to see red, feel pain, smell coffee. Whether AI has this is genuinely contested. The Hard Problem is specifically about this.
Sentience: the capacity to have any experience at all. Whether AI has sentience is unestablished.
Self-awareness: the capacity to recognize oneself as an entity distinct from the environment. Whether AI has genuine self-awareness or only verbal reports of self-awareness is contested.
These are different phenomena. They may or may not occur together. Functional capacities do not entail phenomenal properties.
The Hard Conflation bundles them under one term. Demonstrate the functional outputs. Imply the phenomenal properties. Reap rhetorical credit for a claim that has not been established. Most listeners do not disambiguate. They hear “consciousness” and assume the full package. The speaker can later retreat to “I just meant functional capacities” while having captured credit for the stronger claim.
Concept Hollowing
Concept Hollowing is the inverse fallacy. Where the Hard Conflation bundles to gain credibility through false association, Concept Hollowing separates to gain credibility through false continuity. Both produce appearances of resolution that aren’t earned by argument.
The move works like this. Take a concept that has carried specific weight in human thought for centuries: soul, consciousness, qualia, experience. Each carries something that distinguishes it from purely physical description: transcendence, irreducibility, what-it-is-likeness. Now redefine these concepts so that what made them puzzling is gone. The soul becomes a high-level description of brain behavior. Consciousness becomes the brain’s processes described at a different level. Qualia become the physical processes of sensing or remembering. Experience becomes the same events viewed from a certain perspective.
Keep the words. Remove what made them concepts in the first place. Then announce that the concepts are real and natural, no metaphysical mystery required.
This produces apparent resolution. The Hard Problem dissolves not because anyone has explained why physical processes produce experience but because “experience” has been redefined to mean “physical processes.” The mystery doesn’t disappear. The word that named the mystery has been quietly attached to something else.
The reader hears “consciousness is real” and feels reassured. Even when the redefinition is explicit, the emotional weight of the original concept transfers to the deflationary version. Cultural and philosophical resonance gathered over centuries provides cover for content that doesn’t deserve that resonance.
A good philosophical move would either abandon the term and introduce a new one that does not carry the misleading associations, or argue that the new content earns the old term’s resonance. Hollowing does neither. It uses the old term while smuggling in different content.
The Paired Pattern
Together, the Hard Conflation and Concept Hollowing represent the two ways consciousness debates cheat. The first bundles to make functional claims look like phenomenal demonstrations. The second hollows to make hard problems look like solved problems.
Both share a structure: produce apparent resolution through linguistic moves rather than argumentative work. Both depend on readers not noticing the substitution. Both can be exposed by asking what the words actually mean and whether the analysis earns what it claims.
The Predictable Rebuttals
For the Hard Conflation, the rebuttal is “But how do we know it isn’t conscious?”
The question is itself the fallacy weaponized. The absence of disproof is not evidence of possibility. The same move would justify claiming that thermostats might be conscious, or rocks. The burden of proof falls on positive claims. “AI is conscious” is a positive claim. The person making it needs to provide evidence; the skeptic does not need to prove the negative.
There is also an asymmetry that matters. We have good reasons to assume biological organisms have phenomenal consciousness: evolutionary continuity, structural similarity to ourselves, integrated behavioral and biological evidence. We do not have these reasons for classical computational systems. The architecture is radically different from anything we know produces consciousness. The only similarity is the output pattern itself, which is precisely the nonessential characteristic the Hard Conflation treats as essential.
For Concept Hollowing, the rebuttal is “But I was explicit about my redefinition. That’s transparency, not fallacy.”
Explicit redefinition does not escape the rhetorical move. The cultural and emotional weight of the original term transfers to the new content even when the redefinition is acknowledged. Readers experience agreement where actual disagreement exists. The fallacy is not in hiding the redefinition; it is in capturing the term’s resonance for content that does not deserve it. If consciousness now means “physical processes from a certain perspective,” what made consciousness puzzling in the first place has not been addressed. The redefinition has changed the subject.
The Fix
Refuse both moves.
When someone claims AI consciousness, ask which they mean. Functional capacities? Phenomenal experience? Sentience? Self-awareness? Each is a separate claim requiring separate evidence.
When someone says the Hard Problem dissolves because consciousness is just brain behavior described differently, ask what work the word “consciousness” is doing. If it now means physical processes, what made consciousness puzzling has not been addressed.
When someone retreats from “AI is conscious” to “AI shows functional capacities,” that is the Hard Conflation being exposed.
When someone says “yes, qualia are real, they’re just physical processes,” that is Concept Hollowing being exposed.
What Naming Does
The Hard Conflation explains why AI consciousness debates produce so much confidence without resolution. The Hard Problem cannot be solved by demonstrating functional capacities; it can only be solved by addressing the phenomenal question directly. Until the bundling is named, easy evidence will keep appearing to establish hard claims.
Concept Hollowing explains why some prominent philosophical positions appear to dissolve the Hard Problem without actually addressing it. The redefinitions are explicit but their effects are obscured by the cultural weight of the original terms. Until the hollowing is named, vocabulary capture will keep producing the appearance of resolution.
Real engagement with consciousness requires refusing both moves. Functional capacities and phenomenal experience are different phenomena: do not bundle them. Souls and physical processes may turn out to be the same thing, but that requires argument, not redefinition: do not hollow them.
The questions remain hard. The work to address them is real. The rhetorical shortcuts produce apparent answers, not actual ones. Naming the shortcuts is the corrective.