How do you think about our technological future? Do you look forward to that future with a sense of hope? Or dread? Or some mix of both?
What’s the difference between a techno-optimist and a techno-pessimist anyway?
Oddly enough, the difference doesn’t seem to be about specific technologies. If you read enough manifestos and opinion pieces, you’ll discover that specific technologies are rarely mentioned. And the difference isn’t about wanting to see a future with more or less technology in it. Even the pessimist will recognize the potential of technology to solve real problems and improve the quality of our lives.
In fact, the difference doesn’t seem to be about technology at all. At least not directly. The difference seems to be about how we determine which technologies are adopted by society, and how that adoption happens. In other words, the difference seems to be about what constrains technology.
By constraints, I mean all the things that both limit and enable technological innovation. Constraints are what impact and guide the social adoption of technology. They include incentives, regulations, norms, policies, guidelines, resources, status, and even existing technologies. They come from all aspects of society—the market, the state, culture, philosophy, religion, the planet, and our individual agency.
What do pessimists see when they look at our current system of constraints? They see privacy-eroding surveillance systems, social scoring networks, and disinformation campaigns. They see biodiversity loss, ocean acidification, and species extinction. They see isolation, depression, and anxiety that seem to correlate with our new online existence at population scale. In other words, they don’t see much compatibility with human flourishing.
Unless these constraints change, why would pessimists expect the future to be any better? Wouldn’t any advanced technology that came out of our current constraints only make things worse?
So the pessimists are not pessimistic about technology per se. They are pessimistic about our ability to steward technology, particularly advanced technologies, in ways that clearly align with human flourishing and the other values they care about. It’s an inherent lack of trust in our constraints that leads to the general sense of dread that many pessimists have about the future.
The optimist, on the other hand, tends to misunderstand constraints altogether. They think that a constraint is just a limit, and anything that limits technology is necessarily bad. They see the free market as the best way to accelerate innovation, so anything that prevents the market from maximizing this acceleration should be removed.
But this is a mistake. Constraints are much more than limits. Constraints also enable. Imagine if there were no limits on the market. Technologies that emerged from an unconstrained market would still encounter limits; those limits would just be downstream of the market. They would be reactive instead of proactive. Limits like public backlash, legal challenges, or political regulation are going to be much harsher precisely because they are reactive. So by wanting to remove all constraints, the optimists are actually removing the only effective means of acceleration.
But what if both are correct, in their own way? What if the pessimist is right in that something about our current constraints seems to be responsible for all the crappy technological outcomes we sense? And what if the optimist is right in that the key to unlocking innovation is more about constraints and less about technology?
This is why I am proposing a constraint theory of technology. It offers a new perspective on how to responsibly guide our future with advanced technology. Instead of focusing on specific technological outcomes or utopian/dystopian scenarios, we need to focus on implementing the right system of constraints that can enable a broad spectrum of technological outcomes compatible with human flourishing.
To understand this theory, we first need to understand exactly what a constraint is. The job of a constraint is not just to limit. The proper job of the constraint is to find the right limit that maximizes possibility. That may sound paradoxical, but freedom is paradoxical, and proper constraints are about enabling freedom. So let’s start there.
The paradox of freedom
“We might fancy some children playing on the flat grassy top of some tall island in the sea. So long as there was a wall round the cliff’s edge they could fling themselves into every frantic game and make the place the noisiest of nurseries. But the walls were knocked down, leaving the naked peril of the precipice. They did not fall over; but when their friends returned to them they were all huddled in terror in the center of the island; and their song had ceased.”
- G.K. Chesterton
The paradox of freedom is that it can only flourish through constraint, like Chesterton's playground at the edge of a cliff.
A fence along the cliff edge does not restrict freedom; it enables freedom. It removes the possibility of falling over the edge from the child's consciousness, so they can play without fear and hence with maximum freedom. The fence is the constraint that, by limiting one negative possibility, enables a much larger space of positive possibilities. It removes one freedom to enable others.
Or consider art. The constraint of any artistic medium sets the boundaries that define creativity. A haiku imposes severe limits on the poetic form, but these limits are precisely what can push creative expression into the sublime. The process of art itself is an enabling constraint. All art starts with some vision that you attempt to make real. That vision changes the very first instant you begin to actualize it. The first dab of paint becomes a constraint that defines every subsequent brush stroke. This is the process that transforms the work from vision into art. Art is always pushing against the very limits of its own constraints, often transgressing them to define new forms of possibility.
Or consider evolution. Evolution does not pursue every possible variation. It’s not allowed to, because evolution has evolved to conserve what works, and to enforce constraints that ruthlessly protect these features. Up to 5% of human DNA has remained unchanged for 200 million years and is responsible for constraining functional genetic expression.1 Yet these are the exact constraints that enable the variations in the remaining 95% to be adaptive.
In each example, the constraint is acting as the limit that maximizes possibility. The possibility space here is defined not by quantity, but by quality. Constraints narrow quantity to make more quality possible. This is how a limit becomes enabling.
This tension between limit and possibility means that every constraint is a balancing act. On the one hand, the limit may not exclude enough negative possibility for the positive possibility to actualize. On the other hand, in the effort to exclude the negative, the limit may go too far and restrict too much of the possibility space.
This is especially true with technology. Without the right balance, you can get the pessimist’s nightmare of sub-optimal technological outcomes. You can also get what the optimist fears: humanity denied all the benefits that would come from the innovations that aren't happening.
Or in our case, you can get both.
A brief history of constraints
To understand how constraints define our technological future, we need to understand where constraints come from and how they work.
Foundational Constraints
Some constraints are foundational. They are relatively stable and provide a broad consensus. They are external to technology and are big enough to judge, guide and evaluate technology on their own terms.
Religion has traditionally played a powerful constraining role. Technology in ancient China was seen as a qi, or a means of mediating engagement with the cosmos.2 In the Middle Ages, the constraint of putting technology in service of divine honor drove much of the era's innovation in architecture, materials, and art, most visibly in its cathedrals and artworks. Religion can still be a powerful constraint today, as seen in the complex adoption rituals of religious communities like the Amish.
Philosophy is also capable of establishing shared principles that can act as a judge and evaluator of technological progress. In Ancient Greece, technology was seen more as an art form, and any technology that wasn’t in service of virtue was seen as something less noble, something to be pursued only when necessary. Yet like religion, philosophy seems unlikely to be a productive constraint at scale in a multipolar world. We seem to have given up on ideas of natural law or the moral philosophy of what C.S. Lewis called “The Dao”3: shared beliefs strong enough to constrain technology on sheer principle.
The planet is the most fundamental constraint, the limit of last resort. To exceed planetary limits is to invite disaster. The carrying capacity of our planet is real. Technologies that encroach upon planetary limits require planetary-scale constraints, much like how international geopolitics was entirely recast to constrain nuclear technologies. Like all limits, the planetary one also has the potential to enable innovation, as seen in the exponential growth of battery and renewable energy capacities.
Situational Constraints
Other constraints are situational, responding to technological change and societal forces. These influences provide important constraints, though their authority and power will be more contextual and diffuse.
The state can uniquely enable technological innovation through huge government programs like the Manhattan Project, the Apollo program, or Operation Warp Speed. Constraints like FDA trials, while certainly flawed, provide enabling limits for sensitive technologies like drug discovery and therapeutics. Defense spending and DARPA have played significant historical roles in enabling disruptive innovations.
Culture provides grounding norms that steer innovation in accordance with a society's deepest beliefs and ideals. Yet in pluralistic, fragmented societies, culture may struggle to impose anything more than vague or superficial values. Or worse, technologies can fall prey to “culture wars”, where a weaponization of values grinds technological progress to a halt.
Ethics provides frameworks for assessing the impact of technology on individuals, communities, and the environment. Failure to address ethical concerns can lead to backlash and public distrust at the state and cultural levels, so integrating ethical principles into technological development processes is essential for enabling innovation.
Finally, there are personal constraints. We each set up guidelines about what an appropriate relationship to technology should look like. Yet as these collective technological forces become more powerful, more economically embedded, and more inscrutable, we each will increasingly find ourselves with less agency to assert any kind of meaningful technological sovereignty.
The market as an innovation idiot savant
And then there’s the market, the biggest constraint of them all.
The market is both foundational and situational, a category all its own. It far and away plays the biggest role in both limiting and enabling the possibility space of future technologies. One of the best ways to predict the technologies of tomorrow is to study the market signals of today.
The most remarkable aspect of the market is that it doesn’t really care about technology, at least not directly. Technology just happens to be the best way to give the market what it does care about: more ways to meet customer demands cheaper, faster, and more efficiently. Innovation is a side effect.
In this way, the market is like the idiot savant of technology constraints—a giant, unplanned incubator of innovation; prone to waste and redundancy and remarkable inefficiency; unwilling to cede to any values beyond profit and loss; externalizing any costs to society and the environment that it can get away with; blind to any second-order effects that exceed its immediate time horizon; all driven by the madness of advertising and the need to stimulate demand.
And yet this idiocy is somehow responsible for the vast majority of our technological progress. From a certain angle, it can appear nothing short of miraculous. The free market, with its “invisible hand” of decentralized coordination, funnels the productive forces of millions of innovators into a socially positive feedback loop.
The profit incentive creates a simple mechanism for assuring that technologies become well adopted—those that provide value to the customer are rewarded, while those that harm the customer are not. The collective wisdom of the market is the closest thing we have to an objective arbiter of technology.
The fact that the market has no need for values or religion or philosophy is a feature, not a bug. We can skip all the political debates, the religious uncertainty, and the cultural confusion. The price signal cuts through them all, showing us which technologies are possible and how we can make them real. We just need to convert them into profit.
We put up with the market’s idiocy because it has become such an innovation savant. The sheer success of the market has allowed it to drown out all other constraints. Other sources that have traditionally played the role of balancing the market—of productively guiding its impulses and checking its excesses—no longer seem capable of doing so.
So our technological future is left largely in the hands of the market. Yet how many of us look at the market and take comfort in its ability to constrain advanced technology in ways that are compatible with human flourishing?
Exactly.
The market is not big enough
If we are seeking a system of constraints that can combine advanced technology with human flourishing, then the market is necessary but not sufficient.
Advanced technology both exposes the weaknesses inherent to the market and demands constraints beyond what the market can bear. A few simple examples make it clear that the market, particularly in its current form, is simply not up to the job.
1. Advanced technology breaks trial and error
The market depends on a trial-and-error process that is extremely effective when trials are iterative, errors are immediate, and there is a market signal to reverse them. Otherwise it turns tragic. Leaded gasoline persisted for over half a century before it was finally addressed, and we’re still dealing with the toxic aftermath. What is the modern-day equivalent? We’re very early in discovering all the ways that microplastics are impacting both our ecosystems and our internal chemistries.
It’s not just about external environmental costs. We’re just now beginning to understand the effects of teenagers mediating their entire social life through digital technologies. The cost of this error may be a generation of lost youth. What kind of trial might have prevented this? Not one that the market would have any interest in running.
Advanced technologies can't rely on simple trial-and-error iteration under market constraints. The timelines are too long, the risks are too catastrophic, and the second-order effects may not reveal themselves until it's too late.
2. The market is not accountable to anything outside of itself
The market also fails to account for any values that cannot be made legible to its standards of profit and growth. Because the market is accountable to nothing outside of itself, it can only respond to questions of value when other forces—like public outrage, regulation, or political sanctions—turn them into overwhelming market signals.
Is digital technology making us lazier? More atomized? More fractured? Is it commodifying core experiences of what it means to be human? Unless it’s impacting near-term profit or growth, the market does not (and cannot) care.
As advanced technologies encroach further on the human condition, how will their impacts be converted into a pricing mechanism? They can’t. How do you put a price on human flourishing, sentient rights, or the moral weight of an artificial agent? You don’t. The market has no capacity to incorporate larger values unless something bigger than the market demands that it do so.
3. The market monopolizes vital decisions
The future of most advanced technologies currently rests in the hands of a small handful of actors. The trajectory of AI is largely controlled by the leadership of a few big tech companies and AI labs. Why would we allow such vital decisions to be monopolized by the market?
Part of the reason is that the market has so few mechanisms for incorporating external signals. What frameworks or tools do we have to precisely articulate the values that should constrain innovation? Our ability as a society to enforce our democratic values onto the technological landscape is almost non-existent.
The market also excels at ignoring outside constraints. Any external influences must be able to play and win on the market’s own terms. This means overcoming the dynamics of game theory, first-mover advantage, and regulatory capture. The history of the market suggests that only legal requirements can overcome these dynamics, and often much too late.
4. The market forecloses too much of the possibility space
Think of all the possible technologies that could exist but don’t simply because the market could never make them profitable at scale. Entire domains of technological possibility are foreclosed because they cannot be converted into viable business models or revenue streams.
This is particularly true for technologies that could directly promote human flourishing, which involves values that often resist quantification and profit. The market is not big enough for all the technologies that a flourishing future demands.
None of these are actual problems with the market. They only become problems when we become so enamored with the market’s power to drive innovation that we allow it to take over the entire burden of technological constraint.
In other words, it becomes a problem when we remove all constraints on the market’s ability to productively constrain technology.
Constraint-first futures
So where do we go from here? If the market is not sufficient to steward advanced technology, what is? What would a viable system of constraints look like?
We need to think about our technological future less as a collection of technologies and more as a system of constraints. We will never have the capacity to plan and implement a future around specific technologies that will guarantee some measure of human flourishing. But we can plan and implement systems that enable a broad spectrum of technological possibilities that are within the bounds of flourishing.
We need to be thinking “constraint-first” and start enabling a viable system of technological constraints. The following are a few steps to start with.
Empower the commons to expand the possibility space
We need to open up the possibility space of all the technologies that the market ignores, yet are crucial to human flourishing. While the state sometimes plays this role, the commons is a more appropriate container for stewarding technologies that directly impact human well-being.
Free from market pressures, a “digital commons” could provide enabling constraints to unlock technologies that elevate civic discourse, protect privacy and identity, manage reputation and social graphs, establish and maintain public knowledge repositories, coordinate public deliberation, and other public goods that the market would never touch.
Such a commons could still leverage the best features of the market by translating community values into price signals that incentivize competition to ensure quality and efficiency.
Initiate a new field of constraint design
Constraints are themselves a technology. Every constraint can be radically limited and enabled by the constraints it is embedded in. The field of “constraint design” should be established to explore and develop best practices for creating and managing the most productive constraints.
For example, rather than constraints being purely external, limiting forces, we should explore models where the process of constraining technology itself becomes participatory and empowering for stakeholders.
This could take the form of decentralized governance protocols for managing advanced AI systems' objective functions, or stakeholder voting to adjust and calibrate constraint parameters on limit versus enablement, or open-sourcing constraint protocols for public auditing and remixing.
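To make this less abstract, here is a minimal sketch of one of the patterns named above: stakeholders voting to calibrate a constraint parameter between limit and enablement. Every name here, along with the median voting rule and the damping factor, is a hypothetical illustration, not an existing governance protocol.

```python
from dataclasses import dataclass

@dataclass
class ConstraintParameter:
    """A tunable dial between full enablement (0.0) and full limitation (1.0)."""
    name: str
    value: float
    min_value: float = 0.0
    max_value: float = 1.0

def recalibrate(param: ConstraintParameter, votes: list[float]) -> ConstraintParameter:
    """Nudge the constraint toward the stakeholder median.

    Moving only partway each round (damping) keeps any single vote
    from swinging the limit violently, which is itself a constraint
    on the constraint process.
    """
    if not votes:
        return param
    median = sorted(votes)[len(votes) // 2]
    new_value = param.value + 0.2 * (median - param.value)  # 20% per round
    param.value = min(max(new_value, param.min_value), param.max_value)
    return param

# Example: stakeholders vote to tighten a (hypothetical) autonomy cap.
cap = ConstraintParameter("autonomy_budget_cap", value=0.5)
cap = recalibrate(cap, votes=[0.7, 0.8, 0.6, 0.9, 0.7])
print(round(cap.value, 3))  # 0.54: drifting toward the 0.7 median, not jumping to it
```

The design choice worth noticing is the damping: a well-designed constraint process constrains itself, so that no single voting round can swing a critical limit all at once.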
Empower an individual right of constraint
Powerful yet user-friendly tools enabling "constraint customization" at the individual level could help mitigate the failure of higher-level constraints. Technology could become a flexible service respecting our diverse values, not a binary take-it-or-leave-it imposition.
For example, imagine social media where you can implement different algorithms from a public trust, tweak them with simple tools, or reproduce settings from those you trust. Imagine new settings to route content based on values you care about. Imagine ignoring comments that exceed a polarization threshold, getting alerts on how usage is affecting your attention, or helping amplify constructive threads for others.
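As a rough sketch of what those settings might look like in code, consider a feed whose limits and rankers are owned by the user. The Post fields, the polarization score, and the idea of installing a ranker published by a public trust are all assumptions for illustration, not any real platform's API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    polarization: float      # 0.0 to 1.0, assumed to come from a scoring service
    constructiveness: float  # likewise an assumed upstream score

@dataclass
class FeedConstraints:
    """User-owned settings; every field here is a hypothetical example."""
    max_polarization: float = 0.6
    boost_constructive: bool = True
    rankers: list[Callable[[Post], float]] = field(default_factory=list)

def build_feed(posts: list[Post], c: FeedConstraints) -> list[Post]:
    # The limit: drop anything above the user's polarization threshold.
    visible = [p for p in posts if p.polarization <= c.max_polarization]
    # The enablement: rank with whatever algorithms the user has installed.
    def score(p: Post) -> float:
        base = sum(r(p) for r in c.rankers)
        return base + (p.constructiveness if c.boost_constructive else 0.0)
    return sorted(visible, key=score, reverse=True)

# "Installing" a ranking algorithm, e.g. one published by a public trust.
reward_substance = lambda p: min(len(p.text), 280) / 280  # toy ranker
my_feed = FeedConstraints(max_polarization=0.5, rankers=[reward_substance])
```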
If the user has more control to moderate their own feed, there’d be less need to impose top-down moderation or draconian speech restrictions, and fewer opportunities for governments to corrupt moderation processes.
Incentivize constraint entrepreneurship
While constraints are often positioned as barriers to entrepreneurship, we could flip this framing. There are vast economic opportunities in developing core constraint capabilities that enable advanced technologies to bloom sustainably. Innovators who unlock the most enabling constraints should be richly rewarded.
Imagine whole new industries devoted to tools for better trial-and-error, innovation prediction markets, ethics auditing, or security mindsharing between firms. Or decentralized markets for trading and dynamically pricing risk estimates and "allowances" on transformative R&D initiatives.
By putting incentives and investors behind vital constraint infrastructure, we cultivate an entire entrepreneurial ecosystem devoted to responsibly unleashing technological progress. This is how constraints can turn into an innovation superpower.
Establish “separation of technology and control”
Much like the separation of church and state, or the separation of powers into branches of government, we may need enforced checks and balances when it comes to transformative technologies and the entities that control them.
This could take the form of imposing functional separations on research and commercialization, or keeping core protocols in the commons, or dividing development pipelines into isolated modules working without full context.
The concern is technological apotheosis: when the owners of advanced technologies achieve such centralized omnipotence and convergence that they become an autonomous power beyond the control of any human institution. Separation of technology and control prevents any one actor from ever having the possibility of achieving such dominance.
Convince the market that constraints are a good thing
What the market doesn’t realize is that constraints are in its own best interest. The more constraints the market can provide, the less need there is for outside constraints to intervene. Advanced technologies that emerge from the market will increasingly become targets of culture wars, virtue signaling, and political regulation4. The goal of the market should be to ensure that new technologies never reach that point.
Sometimes the market recognizes this. You can see the AI industry navigating this with its incorporation of red teaming, a small step toward upgrading trial and error. Other viable options exist to improve beta testing and iterative trials, like safe-to-fail probes and broader-spectrum observation. This may require better epistemological tools to properly analyze relevant data beyond the obvious first-order effects, but these are investments the market should be willing to make.
Enshrine “off-ramps” into critical systems
For certain advanced technologies, can we codify protocols that could enforce discontinuation "off-ramps" or quarantine measures when clear tripwires are triggered? Pre-agreed decision engines, immune response plans, and “kill switches” should be built into the technological infrastructure itself in anticipation of any worst-case scenarios.
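Here is a minimal sketch of what codifying such tripwires could look like, assuming telemetry signals like an eval-score delta or a human-review rate are available; the metric names, thresholds, and responses are placeholders rather than any real standard.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Tripwire:
    """A pre-agreed condition that, once triggered, forces an off-ramp."""
    name: str
    check: Callable[[dict], bool]  # inspects system telemetry
    response: str                  # "quarantine" or "shutdown"

TRIPWIRES = [
    Tripwire("capability_jump",
             check=lambda t: t.get("eval_score_delta", 0.0) > 0.25,
             response="quarantine"),
    Tripwire("oversight_loss",
             check=lambda t: t.get("human_review_rate", 1.0) < 0.10,
             response="shutdown"),
]

def off_ramp(telemetry: dict) -> Optional[str]:
    """Evaluate every tripwire; the harshest triggered response wins."""
    triggered = [tw for tw in TRIPWIRES if tw.check(telemetry)]
    if any(tw.response == "shutdown" for tw in triggered):
        return "shutdown"
    return "quarantine" if triggered else None

print(off_ramp({"eval_score_delta": 0.30, "human_review_rate": 0.50}))  # quarantine
```

The point of writing the decision rule down in advance is that the off-ramp is pre-agreed rather than negotiated after the harm is already visible.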
This isn't a regressive principle, but a simple recognition of our inability to reliably forecast technological outcomes. The more confident we can be in rolling back a technology in the worst case scenarios, the more confident we can be in developing it.
Recognize fundamental “limits”
As powerful as any system of constraints will be, we must accept that certain technologies may defy the limits of any constraint that we could devise. Whether it be recursively self-improving AI, molecular bio-nanotechnology, or merging our consciousness with the machine—there may be hard limits on what we can "constrain" in any traditional sense.
In such cases, what if the only viable constraint is the courage to simply not go there? To demarcate intrinsically human boundaries and honor the mystery. No amount of technological capability necessarily obligates us to transgress all limits. Accepting that we don’t need to explore every possible future may be the key to ensuring that we have a future at all.
Constraining our way to a future of human flourishing
In summary, the constraint theory of technology offers a new perspective on how to responsibly steer our future with advanced technology towards outcomes that align with human flourishing.
Rather than focusing on specific technological goals or dystopian/utopian scenarios, we need to focus on developing the right system of constraints that can enable a broad spectrum of possible futures that remain compatible with our deepest values.
This requires rethinking our relationship to constraints. Instead of seeing them merely as limits, we need to recognize their enabling role in maximizing the possibilities that we can explore. By embracing an ethos of "constraint-first" technological development, we increase our chances of realizing a future where technology and human flourishing can advance as co-evolving forces.
Ultimately, the path forward demands nothing less than renegotiating our relationship to technological power itself.
1. John Smart calls this the 95/5 rule of evolutionary development.
2. See Yuk Hui’s The Question Concerning Technology in China for a fascinating (if dense) investigation into Chinese technological history and development. Qi here is 器, a standard Chinese word meaning container, vessel, or instrument, which Hui places in a Dao-Qi duality.
3. See The Abolition of Man for Lewis’s prediction of what happens when man abandons traditional moral realism. It does not go well.
4. A future where the government mandates FDA-like clinical trials for all advanced technologies is not an impossibility.
5. I would be chill with the simple constraint of “humans continue to exist as biological beings.”