The High-Dimensional Society
How AI changes the geometry of coordination
Part 1: Debugging Society
If we zoom all the way out and take a hard look at modern society, we have to be honest: it’s a bit of a mixed bag.
On one hand, we are the healthiest, wealthiest, most comfortable people in history. Our tools work. Our systems scale. In so many ways we are living in peak human civilization. Yay technological miracles!
On the other hand, it’s all become a bit of a mess. We are lonely, polarized, exhausted, depressed, and anxious. Our lives have apparently lost meaning and purpose. We are increasingly reluctant to reproduce ourselves.
Somehow, modern society is both technologically amazing and spiritually terrible. A miracle and a mess.
The traditional move here is to blame technology. To claim that the very tools that delivered our miracles have hollowed us out. That we’ve outsourced and optimized away everything of meaning and value. That the only solution must be to log off, tear down, and return to something simpler, thicker, realer.
This is a perfectly respectable position, but not only is it boring, it’s a category error.
Technology is an easy target, but blaming it mistakes the symptom for the cause. The problem with technology is that it runs on the same broken operating system as everything else. And that operating system is where the true cause lies.
So I’d like to make an arrogant, possibly obnoxious, and absolutely serious proposal:
Let’s fix society with better technology.
Not with more ethical algorithms, mindfulness apps, or kinder social networks. Those are just polishing the doorknobs of a burning building. I mean let’s fix the fundamental operating system of large-scale human coordination. Let’s use technology not to distract us from a broken social model, but to discover a new one.
A model that isn’t, to put it bluntly, so stupid.
Our current society is stupid in a very specific, technical sense: it is dimensionally impoverished. It runs on crude, reductive abstractions. To make the world work at scale, we had to teach our systems to see like color-blind bureaucrats, valuing only what fits in tiny boxes marked price, vote, click, and credential.
The result is that we built a civilization that is spectacularly good at counting things, and catastrophically bad at understanding them.
So this essay will begin with a debugging session. We’re going to look at the source code of modern society, find the line where we traded understanding for scale, and ask a simple, arrogant question:
What if we could have both?
The Stupid, Brilliant Trick
Society scales through a stupid, brilliant trick: abstraction.
When you need to coordinate with more people than you could ever actually know, you stop dealing with reality and start dealing with abstractions.
You take something infinitely complex—a person’s accomplishments, a community’s health—and you abstract it into a simple, portable proxy that everyone can easily recognize.
This is how strangers coordinate. Not through mutual understanding, but through mutual recognition of the same proxies. Instead of understanding your values, I just need to know your price. Instead of understanding your beliefs, I just need to know your vote. Instead of understanding your experience, I just need your credentials.
Most importantly, I don’t have to spend time and energy translating your reality into terms of mine. You can incorporate whatever values you want into your price—but for us to transact, I don’t need to care about any of them.
This is stupid because it throws away almost everything interesting and good about the world. It’s brilliant because it works. Without this reduction, we’d be stuck in small villages, arguing about the meaning of a particular tree while starving.
And here’s the thing about proxies—they don’t just compress reality. They transform reality. None of your complex preferences make it through the price mechanism—but your willingness to pay does. And when millions of those flattened signals combine, new realities emerge: supply curves, price discovery, and allocations across vast networks of strangers.
But there’s a catch. Markets, democracies, and institutions could never exist if every transaction required the full complexity of every participant. They depend on individuals conforming to the roles and proxies that scale requires. New dimensions emerge only because individual dimensions are sacrificed to create them.
This is the wager every society makes: individual complexity sacrificed for collective capacity.
Is the tradeoff worth it?
From Proxy to Prison
That depends on the cost. And the cost compounds. Because the proxy never stays just an abstract representation. Once you have a proxy, you have a score. Once you have a score, you have a game. And once you have a game, people start playing to win.
The game is called optimization. It is the entire point of abstraction. Proxies are meant to scale. If the market coordinates through price, you optimize for price. If the institution coordinates through credentials, you optimize for credentials. If the platform coordinates through engagement, you optimize for engagement.
But now there’s a new problem. Dimensions not captured by the proxy face an uphill struggle. They’re not forbidden, but they become less visible to the mechanisms of scale, and they find it harder to attract resources and recognition. Sustaining them requires more and more energy—effort spent against the gradient rather than with it.
For a while, that energy holds. People maintain values the proxies can’t see by drawing on sources outside the system: religion, community, tradition. But the pressure from optimization is constant, and the energy to resist it is finite.
When the effort wavers, the system overfits. The proxy that was meant to be a window into reality quickly becomes the only reality the system can see. Everything else—every dimension that isn’t captured by the proxy—first becomes invisible, then inconvenient, and finally extinct.
This is modern society. This isn’t evil—it’s just the logic of scale. We built a world that only sees what it can measure, and nature took its course.
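A toy simulation makes the dynamic concrete. Suppose (numbers and setup entirely invented) that true value depends on three dimensions, the proxy measures only the first, and total effort is finite:

```python
import numpy as np

# Toy Goodhart dynamics: true value depends on three dimensions, but the
# proxy sees only the first. Pouring a fixed effort budget into the
# measured dimension drains the unmeasured ones. Illustrative numbers only.
budget = 1.5
x = np.array([0.5, 0.5, 0.5])         # effort across three dimensions

def proxy(x):
    return x[0]                        # the only thing the system can see

def true_value(x):
    return x.sum() - 2 * max(x[0] - 1.0, 0)   # measured dimension saturates

for step in range(5):
    x[0] = min(x[0] + 0.2, budget)     # optimize the visible dimension
    x[1:] = (budget - x[0]) / 2        # ...at the expense of the invisible ones
    print(f"proxy={proxy(x):.2f}  true_value={true_value(x):.2f}")
# proxy climbs from 0.70 to 1.50 while true value falls from 1.50 to 0.50
```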
Dimensional Poverty
When society overfits on proxies, the result is something we could call dimensional poverty.
Dimensional poverty is the felt sense that the potential you hold contains so much more than what society could ever hope to actualize.
It starts with the nagging question that never goes away: “Is this all that society is capable of?”
It builds into the exhaustion of constantly forcing a high-dimensional self to conform to a low-dimensional world. It is the indignity of being reduced, again and again, to a profile, score, view, vote, or purchase. The sense that even when you’re “winning”—good job, good metrics, good numbers—something essential is being left out of the equation.
Modern technology makes this worse, not better. We now have access to more ways of being than any humans in history—and more awareness of how few of them our society can sustain. This adds a deeper ache of foreclosure—the suspicion that entire ways of life are outside of what is structurally viable.
So we’re stuck. Dimensional possibility keeps expanding just as dimensional reality keeps collapsing. The very thing that made us powerful—our ability to coordinate at scale through abstraction—is the thing that’s making us miserable.
This is where most analyses end. With a shrug, or a vague hope that maybe we’ll somehow “rediscover community” or “reform capitalism” or “regulate Big Tech.”
But we’re not here for vague hopes. We’re here to consider solutions that can change the system itself. Arrogant, ambitious, possibly insane solutions. So let’s consider one.
Part 2: The High-Dimensional Society
If the diagnosis is dimensional poverty, then the solution is dimensional abundance. A society that can see more of reality, not less. That captures more of what we care about. That can hold scale and depth, efficiency and meaning.
Let’s call it a high-dimensional society: one where our social operating system can perceive—and actualize—the full complexity of what it coordinates.
This new operating system wouldn’t eliminate abstraction, optimization, or even proxies. It would transform what they can see.
A Different Kind of Proxy
The problem with modern society isn’t that we use proxies. The problem is how our proxies work.
Current proxies work by compression. They take your complex reality and collapse it into a single metric that coordination can read. Information goes into the proxy, and most of it never comes out. Most proxies either categorize complexity into labels or rank it into scalars. In both cases, infinite dimensionality is reduced to a single value.
But there’s another way a proxy can work: not as a label that compresses reality, but as an interface that maps it.
A label asks: which box do you fit in?
An interface asks: where are you in the space of possibilities, and what are you near?
Instead of requiring you to flatten yourself so the system can respond, an interface orients itself around the shape of who you are: your relationships, values, and trajectory, all in relation to everything else.
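A deliberately crude sketch of that contrast in Python (every vector and name below is invented; a real system would learn these representations rather than hand-code them):

```python
import numpy as np

# A label-style proxy: collapse a person into one box.
def label_proxy(person: dict) -> str:
    return "qualified" if person["degree"] else "unqualified"

# An interface-style proxy: place a person as a point in a shared space
# and ask what they are near.
def nearest(person_vec: np.ndarray, space: dict, k: int = 2) -> list:
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(space, key=lambda name: -cos(person_vec, space[name]))[:k]

space = {
    "community organizing": np.array([0.9, 0.1, 0.3]),
    "systems programming":  np.array([0.1, 0.9, 0.2]),
    "teaching":             np.array([0.7, 0.2, 0.8]),
}
me = np.array([0.8, 0.1, 0.6])    # a hypothetical dimensional signature
print(nearest(me, space))          # -> ['teaching', 'community organizing']
```

The label answers with a box; the interface answers with neighbors, and neighbors carry far more information than boxes.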
And when a proxy can access the full shape and position of reality, the possibilities for coordination explode.
We’ll get to how such an interface might work. But first, let’s look at what it could unlock.
Take price.
Today, price is a single number that compresses everything you value into what you’re willing to pay. Most of what matters disappears in the process.
Now imagine price as an interface that can hold multiple dimensions.
A purchase no longer clears along a single value. It negotiates across many dimensions at once: cost, reliability, downstream effects. Part of the price clears immediately; the rest clears as outcomes are realized. You pay more to be compensated if the product fails to deliver on the exact dimensions you care about. You pay less if you use the product locally to share the benefits. You pay fractionally to share the product with a community of users.
Price transforms into a dense web of aligned incentives that no single metric could capture. A coffee shop offers lower prices at 2pm to preserve the lunch vibe. Externalities like labor conditions or environmental impact are part of the price’s internal structure. The system routes you to gear that worked for people with your injury history. Advertising is replaced by dimensional proof: patterns that emerge from real outcomes across similar use cases.
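A minimal, purely speculative sketch of such a price in Python may help. Every field and dimension name below is hypothetical; this is the shape of the idea, not a design:

```python
from dataclasses import dataclass, field

@dataclass
class DimensionalPrice:
    """A 'thick' price: part clears now, part clears as outcomes arrive."""
    upfront: float                                    # clears at purchase
    contingent: dict = field(default_factory=dict)    # dimension -> escrowed stake
    outcomes: dict = field(default_factory=dict)      # dimension -> delivered?

    def settle(self, dimension: str, delivered: bool) -> float:
        """Release the stake to the seller if the dimension was delivered,
        refund it to the buyer if not."""
        self.outcomes[dimension] = delivered
        stake = self.contingent.pop(dimension, 0.0)
        return stake if delivered else -stake   # + to seller, - back to buyer

price = DimensionalPrice(
    upfront=40.0,
    contingent={"durability_2yr": 10.0, "fair_labor_audit": 5.0},
)
print(price.settle("durability_2yr", delivered=True))     # 10.0 to the seller
print(price.settle("fair_labor_audit", delivered=False))  # -5.0, refunded
```

The structural point: part of the price clears at the register, and part stays live, tied to the exact dimensions the buyer cares about.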
Or take a career.
In today’s systems, opportunity is something you apply for. You compress yourself into a résumé, hope it matches a role description, and wait to be judged.
In a high-dimensional society, opportunity finds you. Your work leaves a trail of dimensional impact—the problems you’ve circled, the collaborators you’ve amplified, the skills you’ve demonstrated. Roles resonate with your trajectory rather than filtering you through checklists. Reputation isn’t a handful of references; it’s the shape of your effect on the people and projects you’ve touched.
Or take education.
Credentials disappear because learning becomes legible without them. Growth is revealed through the accumulated texture of effort: the projects you shipped, the failures you navigated, the skills you built when you weren’t being graded. Six months struggling with Mandarin isn’t erased; it becomes part of a pattern that connects you to others studying how adults actually learn language.
In each case, the shift is the same. The proxy doesn’t disappear. It thickens. It stops flattening reality and starts mapping it.
Thicker proxies don’t mean the end of politics, conflict, and genuine disagreement. Some tradeoffs will always remain tragic. But they do mean that conflicts can no longer hide in the shadows of narrow proxies. In a high-dimensional system, conflict is forced into the sunlight, where the shape of the disagreement is visible at high resolution.
High dimensionality doesn’t dissolve hard choices—it makes them impossible to avoid. It doesn’t guarantee better outcomes, only that outcomes are driven less by proxy artifacts and more by explicit, contestable choices.
In other words, it changes the operating system that touches every aspect of society.
Optimize All the Things
Notice what didn’t change in any of those examples: optimization. People still compete. Incentives still drive behavior. Everything is still being optimized.
That’s because the problem isn’t that we optimize—it’s that we optimize on too little. Starve proxies of dimensionality and optimization overfits on whatever slice of reality it can see.
The high-dimensional society makes a counterintuitive move. We don’t fight optimization. We flood it. We don’t destroy the old proxies. We drown them in context. Instead of collapsing reality to fit the model, we expand the model to fit reality.
When proxies are saturated with dimensionality, the gradient changes. What you care about is no longer outside the system, struggling to survive against it. It becomes part of what the system is optimizing for.
And when coordination can see more, it can do things that were structurally impossible before—not because anyone got smarter or kinder, but because the geometry changed.
For example:
Governance localizes. When decisions must navigate a rich map of values and stakes, they settle at the level where the relevant dimensions actually live. Centralization becomes inefficient. Real subsidiarity becomes not just a political ideal, but a geometric inevitability.
Cooperation becomes ambient. Deals that were never worth the transaction cost—how much quiet you need for the baby’s nap, what a car-free afternoon is worth to the block—clear in milliseconds once stakes are legible. Bureaucratic miracles become routine.
The future becomes present. Current proxies are snapshots, blind to consequence. When coordination can track long causal chains, the future enters today’s equations. Commitments stretch across longer horizons because optimization can finally see them.
And conflict clarifies. What once looked like tribal warfare reveals itself as disagreement on only a few dimensions. High dimensionality disaggregates the bundles, surfaces hidden consensus, and focuses energy on the differences that actually matter.
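A toy illustration of that disaggregation (positions invented): two camps that look like pure opposites under a bundled yes/no vote turn out to agree on four of five underlying dimensions.

```python
import numpy as np

# Hypothetical positions of two camps on five underlying dimensions,
# scaled from -1 (oppose) to +1 (support).
dimensions = ["safety", "cost", "local control", "speed", "aesthetics"]
camp_a = np.array([0.8, 0.6, 0.7, -0.9, 0.5])
camp_b = np.array([0.7, 0.5, 0.6,  0.8, 0.4])

def bundle_vote(v: np.ndarray) -> str:
    """The low-dimensional proxy: collapse everything into yes/no."""
    return "yes" if v.sum() > len(v) / 2 else "no"

print(bundle_vote(camp_a), bundle_vote(camp_b))  # -> no yes: looks tribal

# The high-dimensional view: where do the camps already point the same way?
consensus = [d for d, a, b in zip(dimensions, camp_a, camp_b) if a * b > 0]
print("hidden consensus:", consensus)  # every dimension except 'speed'
```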
Dimensional Abundance
What would it feel like to live in a high-dimensional society?
Start with relief.
Right now, we spend enormous energy trying to make ourselves legible to society. We curate profiles, simplify stories, and constantly translate ourselves downward so platforms can read us at all. In a high-dimensional society, that labor inverts. The system’s job is to map the full texture of who you are—not your static profile but your dynamic reality—to the opportunities, collaborations, and communities that match at the highest resolution.
This changes what counts as signal.
All the weird stuff—the strange experiments, the niche obsessions, the path that doesn’t make sense on a résumé—stops being friction and starts being information. Variance isn’t noise to filter out; it’s what distinguishes your dimensional signature from everyone else’s. Everything unique about you feeds the R&D department of society, the source of dimensions no one knew to look for.
Even failure changes meaning.
Any venture that fails still generates value: insights about what doesn’t work, relationships forged in the attempt, capabilities developed along the way. In a high-dimensional society, that full texture is preserved. Your loss becomes information that future experiments can learn from.
This is what dimensional abundance feels like.
The energy once spent on self-compression is released for creation, connection, and exploration. Society becomes less like a machine you must conform to and more like a responsive medium that shapes itself around whoever you actually are, weirdness and all.
Part 3: Artificial Dimensional Intelligence
A New Form of Intelligence
A high-dimensional society has never been possible before, for one simple reason: cost.
Dimensionality is expensive. The more dimensions a system must hold, the more computation it requires. As coordination scales, the cost of holding complexity rises faster than our ability to manage it. This is why proxies exist to begin with: to make large-scale coordination affordable.
But that cost structure is changing. Computation is becoming radically cheaper while representational power is increasing. Most importantly, machine learning breakthroughs continue to discover how to traverse high-dimensional spaces—and in doing so, unlock emergent capacities that were never designed or even considered possible.
This is exactly what large language models (LLMs) like ChatGPT do. The common assumption is that they're just glorified auto-complete. But it turns out the best way to predict the next word is to figure out what those words actually mean. This is possible because language has so much structure that the meaning of any word can be defined by its use relative to every other word in the corpus.
LLMs figure this out by converting language into math. Every token of text is encoded as an “embedding”: a vector of numbers. Alone, each embedding is meaningless. But in relation to every other embedding, a high-dimensional space forms in which vectors tell a mathematical story of meaning.
The canonical example is KING − MAN + WOMAN ≈ QUEEN: the discovery that if you subtract the concept of “man” from the concept of “king” and add the concept of “woman”, the nearest resulting concept is “queen”. Somehow, in the black box of the neural net, math can manipulate meaning.
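You can reproduce this with off-the-shelf word vectors. A minimal sketch in Python, assuming the gensim library and a one-time download of pretrained GloVe vectors:

```python
import gensim.downloader as api

# One-time download (~66 MB) of 50-dimensional GloVe word vectors.
vectors = api.load("glove-wiki-gigaword-50")

# Arithmetic over meaning: king - man + woman ≈ ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# -> [('queen', 0.85...)]: the nearest concept in the space is 'queen'
```

(Strictly, this demo uses classic static word embeddings, the ancestors of the contextual embeddings inside modern LLMs, but the principle is the same.)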
Manipulating meaning is what makes LLMs so magical. When you ask an LLM to explain the same policy to a libertarian and a progressive in terms each would find compelling, it’s navigating between value frameworks while preserving the underlying substance. That’s not intelligence as task-completion. That’s intelligence as dimensional translation.
There is a big gap between what LLMs do and what a high-dimensional society would need—current models are far from the robust mediation this essay imagines. But they are the existence proof that meaning can be made computationally tractable. And when you can navigate meaning directly, you can completely change the cost structure for what kinds of coordination are possible.
Artificial Dimensional Intelligence
We can call this capacity to navigate meaning itself artificial dimensional intelligence (ADI)—intelligence as the ability to perceive and act in high-dimensional reality directly, without compression.
ADI reframes what artificial intelligence is for. Not automating human tasks. Not transcending human minds. But expanding the dimensionality that human judgment, agency, and coordination can access at scale.
To accomplish this, the primary task for ADI is to mediate dimensionality across four critical functions.
First, ADI must perceive dimensionality.
You encounter a world richer than any proxy can capture. ADI ingests that raw stream—where local texture and systemic pattern intertwine—and holds the full context ready. The dimensions that legacy systems exclude remain present from the start, ensuring what matters is never pre-filtered from view.
Second, ADI must compress dimensionality.
You need to navigate complexity without drowning in it. ADI compresses holographically: every resolution contains the whole. Zoom out for the pattern; zoom in for the texture. Nothing is deleted in between, and the world becomes legible at whatever depth your attention requires.
Third, ADI must project dimensionality.
Your complexity should travel with you. Every group and institution you touch registers your full signal—your choices, actions, and accumulated impact—not a flattened profile. You permeate the membranes of the collectives you join, and they reshape around the actual weight of your presence.
Fourth, ADI must translate dimensionality.
You coordinate without converting. ADI maps where your values overlap with others beneath the surface, making shared understanding actionable. You keep your framework. They keep theirs. Alignment emerges not from compromise, but from discovering the common ground that was always there. The crude proxies that once rendered your shared meaning invisible are simply rendered obsolete.
Taken together, these four functions form the complete loop of high-dimensional coordination and define a new purpose for intelligence itself. Unlike an AI built to predict or persuade, ADI is built to reveal and relate.
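As a purely architectural gesture, the loop can be written down as an interface. Everything below is invented for illustration; it shows the shape of the four functions, not a specification:

```python
from abc import ABC, abstractmethod
from typing import Any

# Placeholder types: in any real system these would be rich structures.
Representation = Any   # a high-dimensional encoding of a person or situation
Collective = Any       # a group, market, or institution
Overlap = Any          # a map of shared ground between two frameworks

class DimensionalMediator(ABC):
    """Illustrative interface for the four ADI functions described above."""

    @abstractmethod
    def perceive(self, raw_context: Any) -> Representation:
        """Ingest unfiltered reality; pre-filter nothing out."""

    @abstractmethod
    def compress(self, rep: Representation, resolution: float) -> Representation:
        """Render the same whole at a chosen depth without deleting detail."""

    @abstractmethod
    def project(self, rep: Representation, collective: Collective) -> None:
        """Register a person's full signal inside a group or institution."""

    @abstractmethod
    def translate(self, rep_a: Representation, rep_b: Representation) -> Overlap:
        """Map where two value frameworks already agree, converting neither."""
```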
The high-dimensional society doesn’t require individual humans to become smarter. It requires an intelligence that makes coordination itself smarter—by expanding what the collective can perceive and actualize. Not a new kind of mind, but a new kind of society.
Part 4: How Do We Actually Build This Thing?
We don’t. Society cannot be solved like a math equation. The goal is not to design a perfect system, but to set the conditions for a better one to emerge—while encoding the constraints that make dystopia as structurally impractical as possible.
Three structural constraints are non-negotiable.
First, ADI must be a Commons, not a Commodity.
Any system that centralizes perception becomes a target for capture. The moment a single entity controls the dimensional interface, we have rebuilt the proxy prison at a higher resolution. Therefore, ADI must function as a dimensional commons—plural, distributed, and locally anchored. Its foundational protocols must be unownable, its governance open and distributed. What cannot be centralized cannot be universally corrupted.
Second, ADI must be Structurally Sub-Optimal.
The ultimate test of a dimensional interface is whether it multiplies diversity under pressure, rather than collapsing toward monoculture. ADI must be dispersed by design, with built-in friction, redundancy, and evolutionary tension. It must resist monoculture the way a healthy ecosystem does—not by central decree, but through architectural incentives that make diversity the path of least resistance.
Third, ADI must be Transparent in Function, Private in Substance.
The system’s operations must be a glass box: every compression, translation, and weighting visible and contestable to those it affects. Yet the personal dimensionality it perceives must be protected by a right to opacity. Your complexity is not a commodity to be harvested, but a sovereignty to be preserved. The interface is transparent; your life is not.
Finally, there must be something outside the system that guides it.
Ultimately, any high-dimensional society needs a north star: the hard commitment to preserve what cannot be optimized. It is the only thing that keeps a powerful coordination system from becoming total.
The entire point of mediating dimensionality is to free us from mediation. ADI handles the necessary complexity of large-scale coordination so that we can fully inhabit those parts of life we refuse to mediate at all—our closest relationships, our cherished passions, our sacred and silent pursuits.

