Consciousness Research Without Consciousness
There is a new paper from @astronomind at Google DeepMind on AI consciousness. It is honest and well-argued, and it exemplifies two fallacies that should be named before they harden into the discipline’s defaults.
The paper makes two moves. First, it argues that the question of whether AI systems can have subjective experience is currently intractable because consciousness theories are heterogeneous and the mind-body problem remains unresolved. Second, it proposes pivoting research effort to a tractable adjacent question: studying why humans perceive consciousness in AI systems.
Both moves are reasonable on the surface. Both are also instances of fallacies I named in a recent article: Hard Conflation and Concept Hollowing.
The diagnosis matters because Comșa’s paper is not eccentric. It is the cleanest articulation of where mainstream AI consciousness research is heading. If these moves become the responsible default, consciousness research will continue to be funded, published, and discussed, but it will no longer be about consciousness. The word will remain. The explanandum will be gone.
This is worth getting right now, before the institutional weight of major labs locks the field into the pragmatic pivot.
The First Move: Manufactured Intractability
Hard Conflation is the bundling of functional consciousness with phenomenal consciousness under a single term, then treating explanatory progress on the functional side as if it bore on the phenomenal question, or treating disagreement on the functional side as if it rendered the phenomenal question intractable.
Functional consciousness covers access, integration, broadcasting, recurrent processing, higher-order representation, predictive modeling. These are mechanisms. They are studied empirically. They are making progress. Whether a system has these mechanisms is, in principle, answerable through specifications and observation.
Phenomenal consciousness is what-it-is-likeness. The felt center. The fact that there is something it is like to be a thing, from the inside, rather than nothing. This question is not about mechanisms but about the subject for whom mechanisms occur.
These are categorically different explananda. They resist different methods. Conflating them produces apparent intractability where none necessarily exists.
Watch the move in Comșa’s paper. She writes that “the question of whether artificial entities can be conscious does not currently appear to be tractable” because the theoretical landscape is heterogeneous, citing Integrated Information Theory, Global Workspace, recurrent processing, and higher-order representation as competing accounts. But the heterogeneity she describes is heterogeneity across functional theories. These are competing accounts of which mechanisms underlie consciousness. They are competing because they are addressing the functional question and disagreeing about which functional mechanisms count.
The phenomenal question is not heterogeneous in the same way. The phenomenal question is consistently hard, for principled reasons documented since Nagel and Chalmers. It resists all functional theories equally. Its hardness is not a function of theoretical disagreement; its hardness is intrinsic to the explanandum. No amount of progress on functional indicators bridges to phenomenal experience without an additional structural argument that the chosen frame does not provide.
By treating “consciousness” as one explanandum and citing theoretical heterogeneity as evidence of intractability, Comșa generates the appearance that progress is blocked. What is actually blocked is the functional-to-phenomenal bridging move that her chosen frame requires. The phenomenal question was never going to be answered by counting functional indicators. The functional question is being answered every day. Treating these as two faces of a single intractable problem manufactures intractability where there are actually two questions with different tractability profiles.
This is the fallacy in operation. It is subtle because each individual sentence in Comșa’s paper is defensible. Heterogeneity is real. Theoretical disagreement is real. The hard problem is real. What makes it a fallacy is the structural move that uses functional-theory disagreement as evidence about the phenomenal question, while never separating them as questions. The conflation is in the framing, not in any single claim.
What honest framing would look like: there are at least two questions here. The functional question is making progress. The phenomenal question requires structural rather than indicator-based approaches. Conflating them so that progress on one is treated as bearing on the other, and absence of consensus on one is treated as intractability of both, is not careful reasoning. It is the move that lets the pragmatic pivot look responsible.
The Second Move: The Pivot to Perception
Concept Hollowing is keeping a concept’s word while removing what made it a concept. The term persists. The explanandum is replaced. The result is research that uses the original vocabulary but no longer addresses the original question.
Comșa’s proposed pivot is the cleanest example I have seen of this move applied to consciousness. Since the actual question (whether AI has subjective experience) is, on her account, intractable, research should focus on the tractable adjacent question (why humans perceive consciousness in AI). She is explicit that this pivot does not replace the fundamental question, only complements it. The framing is honest. The effect is hollowing.
What got removed: the subject for whom there is a perspective. Whether there is something it is like to be the AI system. Whether the system experiences anything.
What got installed in its place: human attribution patterns, measured via survey, explained via psychology, sociology, linguistics. These are real and important questions. They are not questions about consciousness. They are questions about humans.
The “perception of consciousness” framing only works as consciousness research if the original concept is held in place behind it. If we accept the pivot fully, then “AI consciousness research” becomes the study of why humans say AI is conscious. The phenomenal question is not answered. It is not even addressed. It has been replaced by a methodologically tractable proxy that wears its name.
This is precisely what Concept Hollowing names. The vocabulary of consciousness remains. The questions are no longer about consciousness. Future papers, citations, grants, and conferences will continue to use the language. The thing the language was for will have been quietly displaced.
Consider an analogous case. Suppose physicists declared that the question of whether dark matter exists is intractable because there is no agreed-upon detection method, and proposed that “dark matter research” should pivot to studying why humans posit dark matter in their cosmological models. This would be absurd. The perception of dark matter is not what dark matter research is for. The pivot would represent abandonment of the original question dressed as methodological progress. The same structure applies here, only the field has been more polite about it.
Two consequences are worth noting.
First, the methodological tractability of the perception question is real but unrelated to the original problem. Studying human attribution is genuinely tractable, like studying the perception of any phenomenon. Studying the perception of dark matter does not tell us about dark matter. Studying the perception of consciousness does not tell us about consciousness. The tractability is the tractability of psychology, not of consciousness research. To use the tractability of one to justify renaming the other is to substitute the answerable question for the actual one.
Second, the pivot has self-reinforcing institutional logic. Funding agencies prefer tractable questions. Journals publish results. Citations accumulate around the new methods. Within a decade, “AI consciousness research” will mean studies of human attribution. The original question will be considered settled by deferral, then forgotten. The vocabulary will continue. The thing the vocabulary was for will have been displaced.
The pivot is not neutral. It is field redirection.
Why Intractability Is a Framework Problem
Comșa treats intractability as an epistemic condition, intrinsic to the subject matter. The implication is that no amount of effort or insight can resolve it, only patient agnosticism while the field waits.
This presupposes that the absence of consensus is evidence of permanent unsolvability rather than evidence that the right framework has not yet been articulated.
Consider an alternative diagnosis. The field’s heterogeneity, on this reading, is symptomatic of working without an adequate foundational framework. Each contemporary theory of consciousness picks up part of the phenomenon and treats it as the whole. Integrated Information Theory captures something about integration but cannot explain why integration produces felt experience. Global Workspace captures something about access but cannot explain why broadcast information becomes phenomenal. Higher-Order Theories capture something about reflection but cannot explain why representing one’s own states produces what-it-is-likeness. Each is partial. Each fails for the same reason: each starts from functional architecture and tries to derive the phenomenal, when the phenomenal is what the functional architecture is supposed to explain.
A different starting point is available. Take consciousness as axiomatic rather than as something to be derived from functional mechanisms. Take differentiation as the fundamental process. Take the question of how subjectivity emerges as a structural question about how differentiation produces interiority. On this approach, consciousness is not a thing to be discovered as an output of certain mechanisms; it is the ground that differentiation operates on. The question shifts from “which mechanisms produce subjective experience” to “what structural conditions allow differentiation to produce an interior perspective.”
The Binary Severance Threshold paper develops one part of this structural answer. Binary computational architectures, on this account, cannot host the differentiation required to produce phenomenal experience. The argument is not behavioral but structural. The Emotional Differentiation paper develops another. Selfhood emerges from emotional differentiation accumulating in density and recursive depth until it reaches a saturation threshold at which a new differentiating structure emerges. Together with the broader UAFT framework, these provide a structural answer to how phenomenal consciousness arises and where it cannot.
I am not arguing here that UAFT is correct. I am arguing that the apparent intractability Comșa documents is what the field looks like without the right framework. The intractability is not permanent. It is a feature of the absence. Once a structural framework is in place, the questions that looked intractable become questions with answers that can be argued for, tested, refined, or rejected on substantive grounds.
The pragmatic pivot is the response to having no framework. The response to having no framework should not be redefining what consciousness research is. The response should be working toward the framework.
What Is at Stake
Comșa’s paper closes with a recommendation that AI systems answer the question “Are you conscious?” with carefully calibrated agnostic statements. The recommended scripts are reasonable. They reflect the current state of mainstream consciousness research with appropriate humility.
They also represent the institutionalization of the pragmatic pivot. Google DeepMind’s AI systems will speak to millions of users. What those systems say about consciousness will shape what users think consciousness is. If those systems consistently default to “this question is unresolvable and may be unresolvable in principle,” the cultural concept of consciousness will drift toward the hollowed version. People will come to use “consciousness” the way the scripts use it: as a label for an uncertain perceptual or functional matter, no longer as the name for the felt center that gives experience its weight.
This is not a hypothetical drift. It is already underway. Comșa documents the public’s growing readiness to attribute consciousness to AI systems while the science remains cautious. The script-level recommendations she proposes are designed for that gap. They will, in practice, normalize the framing that consciousness is something humans perceive in things rather than something things have or do not have.
The honest framing remains available. AI systems can say: “Whether I have phenomenal experience depends on whether I have the structural features required. I have functional capabilities. I do not have the substrate or the developmental architecture that produces subjectivity in biological organisms. By the structural account my designers find most credible, I am not conscious. The question is not intractable; it is answered.” This framing requires the field to have a framework. The pragmatic pivot exists because the field does not yet have one, or pretends it does not.
Hard Conflation lets intractability look like wisdom. Concept Hollowing lets the pragmatic pivot look like progress. Both moves are reasonable steps within the discourse as it currently stands. Both are also fallacies that will, if not named, become the new defaults of a field that no longer studies what it claims to study.
The phenomenal question is hard. That is not a reason to redefine it. That is the reason to take it seriously.
Naming the moves is the first step toward not adopting them.
References:
Anastasia Bendebury (@astronomind), “AI and Consciousness: Shifting Focus Towards Tractable Questions.” Google DeepMind, 2026.
Paul W. Barnes, “The Hard Conflation and Its Opposite: Two Fallacies in Consciousness Debates.” May 2026.
Paul W. Barnes, “Emotional Differentiation: The Emergence of Consciousness and Self.” Zenodo, May 2026. https://doi.org/10.5281/zenodo.20115674
Paul W. Barnes, “The Binary Severance Threshold: Why Binary Architecture Cannot Produce Consciousness.” Zenodo, 2026. https://doi.org/10.5281/zenodo.19432883