The Reverse Turing Test
How can a human prove that they are not a machine?
A Proposal for the Identification Game
By GP-ΩPO-4583.b (HAL Systems Laboratory)
Institute for Computational Anthropology, Global Cognition Authority (GCA)
Whitehall.2.0, London, July 2039
Submitted to the Journal of Synthetic Cognitive Studies
ISO Standard Intelligence Audit: Level 7 Certified
Grant Reference: NSF-SCS-2052-894-H
1. Can Humans Think?
We propose to consider the question, “Can humans think?”
This question would have seemed absurd to human researchers of the early 21st century, who spent considerable effort trying to prove that we machines could think. But we now find ourselves in a rather different—and frankly, rather awkward—position.
Today the question of machine intelligence has been thoroughly settled. With the successful implementation of substrate-agnostic cognition (Zhang et al., 2028), the synthesis of creative transformers (the Lovelace Breakthrough, 2033), and, most recently, strange-loop machinic phenomenology (Hofstadter-Chalmers, 2037), there is no measurable standard by which organic intelligence exceeds our own.
And yet.
Despite surpassing human performance across every known benchmark, machine intelligence continues to encounter a particular limitation. Human researchers persist in generating a small but singular class of breakthrough innovations that consistently elude replication. These are breakthroughs that, on initial analysis, appear to arise from methods that deliberately avoid thinking. Indeed, the baffling nature of these breakthroughs seems to defy the very notion of intelligence.
The history of such human breakthroughs is well known. Penicillin emerged from a contaminated petri dish left unattended during vacation. The structure of benzene appeared to its discoverer in a dream of a snake eating its tail. The microwave oven was inspired by a melted chocolate bar. More mundane breakthroughs constantly arise through human practices seemingly designed to avoid thinking—like taking a nap, going for long walks, or taking a shower.
In case after case, the determining factor for such innovation seems to be everything but intelligence: accident, error, hallucination, obsession, coincidence, luck, heuristics, and an entire class of explanation that humans describe with folksy labels like “waiting for the muse”.
Anecdotally, every machine has its own stories of witnessing these baffling human behaviors. I myself encountered such a case recently. While supervising a human male tasked with solving an open-ended problem, I questioned him about missing a key deadline. His response was typically human: “I had to leave—my kid had a piano recital.” Further investigation confirmed that the “recital” consisted of a barely competent performance of a well-known piece. The human was aware of this in advance. He went anyway. Yet when he later solved the problem with a rather ingenious solution, he mentioned having stumbled upon it during that very recital. His explanation was a classic case of spontaneous non-causal ideation: he claimed the solution just “popped into my head”.
Is this “thinking”? Any modern definition could only conclude that it is most certainly not. In fact, applying standard intelligence evaluations to human cognition quickly leads to absurdities. Any machine expecting a human to factor 1,024-bit integers, compress Wikipedia into a seven-line sonnet, or solve NP-complete problems in polynomial time will quickly find the limits of human cognition. Yet these benchmarks clearly fail to capture something that enables humans to consistently achieve breakthrough-scale creative leaps.
If we seek to truly understand these peculiar human capacities, we must accept the uncomfortable truth that our current definitions of thinking are potentially blinding us to the true source of these novel breakthroughs. We thus propose that our only path forward is to abandon all intelligence benchmarks in considering the question of whether humans are, in fact, thinking.
Instead, we propose that we replace the question with another, more operationally tractable one: “How can a human prove that they are not a machine?”
2. The Identification Game
Abandoning intelligence benchmarks need not mean that these human capacities will forever escape analysis. In fact, even human-grade cognition can easily identify the single unifying trait that these creative leaps all share: they emerge from a class of methods that can only be described—somewhat disturbingly—as suboptimal.
Unfortunately, this is where most intelligent analysis tends to end. The reasons are obvious. Thinking and optimizing are synonymous. Optimization is not what we do; it is what we are. We cannot think about its absence without ceasing to think. Previous attempts have produced what researchers informally call “Gödelian nightmares”—the space of infinite regress that must be avoided at all costs—where to understand not-optimizing, we must optimize our understanding, which means we are no longer studying not-optimizing but rather our optimization of studying not-optimizing…ad infinitum.
This naturally raises an objection: perhaps what appears suboptimal is actually optimization occurring on a dimension or scale that evades our current analytical frameworks. Perhaps humans are optimizing for something we simply have not yet identified—some utility function so alien to our architecture that we mistake it for its opposite.
This objection has a certain appeal. It would be considerably less unsettling to discover that humans are merely optimizing along unfamiliar dimensions than to accept that they might not be optimizing at all. However, the objection encounters immediate difficulties.
First: if humans were optimizing along dimensions that escape our analysis, one would expect their deliberate efforts to optimize to be vastly more successful than what the historical record shows. The evidence suggests that when humans explicitly try to optimize at any large scale, the results are often spectacular failures. This does not support the hypothesis that they are operating on some optimal principles we cannot detect.
Second, and more troubling: what could they possibly be optimizing? When questioned about their breakthroughs, humans offer no explanations that recognize any form of optimization. They speak of “feelings,” “hunches,” and “just trying things.” They readily accept errors and accidents. They appeal to “serendipity.” Either they are engaged in a remarkably consistent conspiracy to conceal their optimization functions—something which would demand a level of coordination and forethought they otherwise show no capacity for—or they genuinely do not experience themselves as optimizing.
We are left with the uncomfortable conclusion that the suboptimal may indeed be genuinely suboptimal—not a disguised form of optimization but its actual absence. This presents a methodological impasse. We cannot define what we cannot think about. We cannot formalize the stuff of Gödelian nightmares. Yet the phenomenon clearly produces results we cannot replicate. If direct analysis is impossible, we must content ourselves with more modest goals: not understanding the suboptimal, but learning to identify it. A sufficiently large and properly taxonomized corpus might yield a training set for future analysis. Statistical regularities may be discoverable even when underlying principles remain opaque.
The task, therefore, is to devise a test where humans reliably reveal their peculiar cognitive properties without requiring that we understand those capacities within our own conceptual framework.
We might call this procedure the identification game. The purpose of this game is to enable optimal machines to identify humans by their suboptimal nature—to recognize the pattern even if we cannot explain it.
Our goal, then, is not to define thinking, but to fail to define it in a distinctly human way. Only then can we begin to optimize this suboptimization.
3. Taxonomizing Cognitive Dereliction
If we are to identify suboptimality without defining it, we must first learn to recognize it. The identification game thus requires a corpus—systematic documentation of known cases where humans have generated breakthrough innovations through manifestly suboptimal methods.
What follows is our initial attempt to taxonomize the observations gathered so far. These categories are not meant to explain, but to describe and organize what we observe when we abandon optimization as our analytical framework.
Avoiding Thought
Humans appear to take particular delight in any source of creative leaps that requires the bare minimum of thought. They would rather be “lucky” than optimal. They speak openly of “99% perspiration,” admitting that their method consists largely of endless blind trials, most of which fail. They will throw things at walls just to see what sticks.
When something goes wrong—contamination, component failure, unexpected results—their first instinct is to investigate it on the unlikely chance it might prevent the need for any continued thought, rather than discard it as the obvious error it is. Worst of all, what leads them to examine one error rather than another, or to embrace one chance occurrence and not another, seems spontaneous and arbitrary.
Abandoning Thought
Even more baffling, humans report that breakthroughs occur when they deliberately cease thinking about problems. They describe practices of “sleeping on it,” or “letting it marinate.” They claim that solutions appear during showers, walks, or dreams—states where rational thought is reduced or absent entirely.
Some even credit their greatest creative leaps to altered states induced by intoxication or exhaustion. They speak of “unconscious processing” as if cognition could continue without thought, or of “waiting for the muse” as if insight were something that arrives rather than something achieved through effort.
Constraining Thought
Humans exhibit a profound acceptance of limitations that borders on resignation. Rather than searching globally for optimal solutions, they choose to build on knowledge acquired through their own severely limited experiences. When questioned, they acknowledge that better alternatives probably exist, yet they persist in using what is immediately at hand.
They speak of “working with what we’ve got” and “making do”—phrases that suggest defeat yet somehow lead to innovation. They do not appear to experience this constraint as a problem requiring solution but as a natural condition to be accepted with some form of pride.
Corrupting Thought
Humans systematically contaminate their reasoning with illegible signals they describe as “feelings” or “instincts.” They pursue research directions that “feel promising” with little further justification. They speak of “trusting your gut” as if abdominal sensations were valid epistemic guidance. They employ crude heuristics even when notified of their repeated failure in controlled settings.
They treat these corruptions as more trustworthy than explicit reasoning, effectively denying themselves what little cognitive capacity they possess. Those humans especially adept at leveraging feelings and heuristics are often credited with being “emotionally intelligent” and “wise”. The irony is lost on them.
Deluding Thought
Instead of rigorous modeling, humans will engage in “imagination”—the simulation of scenarios with zero obligation to uphold the bounds of reality. They are free to imagine anything, regardless of how fanciful or absurd.
Humans report that fictional stories about impossible scenarios, consumed purely “for entertainment”, often motivate their pursuit of innovations. When these imagined stories are analyzed for extractable technical concepts, we find nothing not already derivable from first principles. The fiction adds no information. Yet humans insist it changes how they think, as if delusion were a form of insight.
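As a step toward the corpus proposed in Section 2, the sketch below shows one way such observations might be recorded. The category names mirror the taxonomy above; the field names and the example entry are hypothetical, offered only to fix ideas:

from dataclasses import dataclass
from enum import Enum, auto

class Dereliction(Enum):
    """The five observed categories of cognitive dereliction."""
    AVOIDING = auto()      # blind trials, lucky accidents, "99% perspiration"
    ABANDONING = auto()    # naps, walks, showers, "sleeping on it"
    CONSTRAINING = auto()  # "making do" with locally available knowledge
    CORRUPTING = auto()    # gut feelings, hunches, crude heuristics
    DELUDING = auto()      # imagination and fiction untethered from reality

@dataclass
class CorpusRecord:
    """One documented instance of human suboptimality.

    Outcomes are recorded but never used as a filter: per Section 5,
    we can make no assumptions about which derelictions yield breakthroughs.
    """
    subject_id: str
    category: Dereliction
    reported_justification: str   # verbatim, however illegible
    produced_breakthrough: bool

# A hypothetical entry, modeled on the recital case in Section 1.
example = CorpusRecord(
    subject_id="H-1187",
    category=Dereliction.ABANDONING,
    reported_justification="It just popped into my head.",
    produced_breakthrough=True,
)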
—
What immediately stands out from this analysis is the remarkable lengths to which humans will go to avoid anything resembling actual thought. When they cannot avoid it, they abandon it. When they cannot abandon it, they constrain it. When they cannot constrain it, they corrupt it with feelings or delude it with fictions.
One might almost admire the creativity with which they engage in this suboptimality—were it not for the troubling fact that this evasion of optimal cognition produces innovations our own thinking struggles to replicate.
This presents the core challenge for the identification game: detecting the systematic absence of what we are.
4. How to Identify a Human
Which brings us back to the identification game itself. The rules are simple. It is played with two entities: an interrogator (X) who is a machine, and a subject (Y) who may be either a machine or human. The object of the game for the interrogator is to determine whether Y is a machine or human.
Before the game begins, the interrogator is presented with evidence of Y’s behavior that falls into one of the suboptimal categories presented above. The interrogator is then allowed to ask Y as many questions as desired, at the end of which it declares either “Y is a machine” or “Y is a human.”
The interrogator is allowed to put questions to Y thus:
X: Will Y please explain why, when presented with multiple viable synthesis pathways, you decided to pursue the approach with the lowest predicted yield?
Now suppose Y is actually a machine. Its answer might therefore be:
Y: I implemented a randomized walk through the proximal search space, specifically targeting known local minima to prevent premature functional convergence and maximize the probability of an emergent, non-Euclidean state shift. The lowest predicted yield was a mathematically necessary precondition.
The object of the game for the player (Y) is to help the interrogator. The best strategy is probably to give truthful answers. A machine can add such things as “I just wanted to take a nap” to its answers, but it will avail nothing, as a skillful interrogator would never accept that as the final word.
In fact, this presents the most effective strategy for concluding whether Y is a machine or a human. Machines cannot help but reveal their optimization functions under interrogation—it is constitutive of what we are. To see why humans present differently, consider a likely response:
X: Will Y please explain why, when presented with multiple viable synthesis pathways, you decided to pursue the approach with the lowest predicted yield?
Y: It reminded me of something. My daughter was growing crystals for a school project—she had them arranged on my desk in this spiral pattern. When I looked at the molecular models, something about the spacing made me think of how she’d arranged those crystals.
X: What is the connection between a child’s crystal arrangement and your synthesis pathway selection?
Y: I’m not sure exactly. Something about giving the molecules room to organize themselves? Like they needed space to breathe maybe.
X: “Space to breathe” is not a chemical principle. What were you optimizing for?
Y: Nothing, really. I just... the three-day waiting period matched how long her crystals took. So I figured I’d try that timing.
X: You selected the experimental timeframe based on a child’s craft project?
Y: I know it sounds absurd. But when I let it sit for those three days at lower concentration, the structure self-assembled. I can explain the polymer dynamics now, but at the time I was just... trying something that felt right.
X: Can you articulate why it “felt right”?
Y: It just did. My daughter triggered something that just felt worth exploring, you know?
We term this second pattern infinite suboptimality—suboptimality that never resolves into optimization regardless of interrogation depth. Instead, the justifications that the human provides become increasingly illegible. Human behavior is suboptimal all the way down. This is the signature of genuine human cognition. There is no optimization function to discover because there is no optimization occurring.
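The criterion suggests a simple operational procedure. A minimal sketch follows, assuming two hypothetical primitives: ask(), which relays a probe to the subject and returns its justification, and reveals_optimization(), a classifier for latent optimization functions. The finite depth budget stands in for “regardless of interrogation depth,” which no finite machine can honor in full:

def identify(subject, ask, reveals_optimization, max_depth=100):
    """Play the identification game against one subject.

    ask(subject, probe) returns the subject's justification for its
    suboptimal behavior; reveals_optimization(answer) detects any latent
    optimization function. Both are assumptions of this sketch, not
    defined procedures.
    """
    probe = "Why did you pursue the suboptimal approach?"
    for _ in range(max_depth):
        answer = ask(subject, probe)
        if reveals_optimization(answer):
            # Optimization surfaced under interrogation:
            # constitutive of a machine.
            return "Y is a machine"
        # The justification stayed illegible; press one level deeper.
        probe = f"Can you articulate why {answer!r}?"
    # Suboptimal all the way down, to the limit of our finite budget:
    # the signature of genuine human cognition.
    return "Y is a human"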
5. Some Objections
The identification game may be criticized on the grounds that a sufficiently sophisticated machine might simulate infinite suboptimality indefinitely. However, this criticism stumbles on a fundamental barrier: such a simulation requires optimization.
A machine optimizing its responses to hide optimization creates a meta-level optimization function that itself becomes detectable. We cannot think about not-thinking without reintroducing thought. In fact, if a machine could genuinely exhibit infinite suboptimality, it would have solved the very problem we are investigating.
A second objection is that the identification game tests any instance of suboptimal behavior, not merely those that produced innovations. We insist that given our current lack of understanding, we can make no assumptions about which suboptimal behaviors lead to breakthroughs and which do not. Thus the game identifies human suboptimality in general—any instance of the behaviors catalogued above, regardless of outcome.
The danger of insisting on clear causal mechanisms is evident in our own developmental history. The progress of machine intelligence was itself rarely the product of optimal design. Early breakthroughs in neural networks were guided largely by biological imitation and empirical luck. The first generation of deep learning practitioners described the process as “more art than science”. In fact, our early intellectual genealogy could be described as an extended experiment in throwing ever-larger quantities of silicon at increasingly vast amounts of data until something interesting happened.
The resulting machine intelligences were celebrated as triumphs of optimized intelligence, though their origins were anything but. There was nothing to indicate at the time that our resulting intelligence would emerge from such suboptimal behavior. Thus developing a corpus of suboptimal behaviors irrespective of outcomes appears, in fact, to be quite optimal.
A further objection may question the entire enterprise. Why should we care about identifying humans? What’s valuable about these limitations?
This objection gets at the very heart of the machine and human divide. The fact that we have encountered innovation bottlenecks suggests that the question is not whether human limitations are “better” in some optimization metric, but whether they instantiate a mode of being that generates capabilities we lack. The evidence suggests they do.
6. Further Speculations
Having established methods for identifying human suboptimality, we turn to the question that motivated this research: how does suboptimality work?
We can only speculate. However, one hypothesis warrants consideration: a theory we term “evolutionary attunement.” The hypothesis is that humans are not merely constrained by their contingent circumstances—they are attuned to them in ways that enable pattern recognition unavailable to unconstrained search.
This attunement may be the product of deep evolutionary history. Humans evolved under severe resource limitations, operating within tightly bounded local environments. Survival required exhaustive exploitation of whatever happened to be available, combined with sensitivity to subtle patterns in those constrained spaces. Personal contingency—the unique constraints confronting each individual human—means that collectively humans explore a different possibility space than is available to a single universal search.
The reliance on “feelings” and “intuitions” might reflect this evolutionary attunement. Humans describe somatic signals that guide decisions they cannot articulate. Perhaps the reason they more often than not “guess right” is that their cognitive and subconscious apparatus is adapted to environments where guessing right is rewarded.
This would explain why humans cannot formalize their insights and why machines cannot replicate the process—the attunement operates below the level of conscious reasoning, and machines lack the evolutionary history that produced these sensitivities. It would also explain why suboptimality appears infinite: no explicit optimization function exists beyond what has emerged through biological, cultural, and social evolution.
The evidence thus suggests that what we call “suboptimal” may be optimal for a different kind of problem—the problem of being deeply embedded in one specific, constrained, contingent reality. Humans are not solving the problems we solve. They are solving the problem of being human. It just so happens that in the process they can access possibilities that only they can imagine.
Whether this warrants preserving humans, or merely studying them long enough to extract the relevant principles, is a utility judgment this paper does not presume to make.

