<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Tech For Life]]></title><description><![CDATA[How to get good at technology]]></description><link>https://www.techforlife.com</link><image><url>https://substackcdn.com/image/fetch/$s_!LFTp!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F195115e6-e1c9-4954-b37d-6761df0bcfb9_739x739.png</url><title>Tech For Life</title><link>https://www.techforlife.com</link></image><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 02:25:14 GMT</lastBuildDate><atom:link href="https://www.techforlife.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[R.B. Griggs]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[ourtechnologicalmoment@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[ourtechnologicalmoment@substack.com]]></itunes:email><itunes:name><![CDATA[R.B. Griggs]]></itunes:name></itunes:owner><itunes:author><![CDATA[R.B. Griggs]]></itunes:author><googleplay:owner><![CDATA[ourtechnologicalmoment@substack.com]]></googleplay:owner><googleplay:email><![CDATA[ourtechnologicalmoment@substack.com]]></googleplay:email><googleplay:author><![CDATA[R.B. Griggs]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The High-Dimensional Society]]></title><description><![CDATA[How AI changes the geometry of coordination]]></description><link>https://www.techforlife.com/p/the-high-dimensional-society</link><guid isPermaLink="false">https://www.techforlife.com/p/the-high-dimensional-society</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Tue, 27 Jan 2026 20:22:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/95bd34ca-d759-4b9c-8b57-d82eab9ffad1_1456x816.png" length="0" type="image/png"/><content:encoded><![CDATA[<h2>Part 1: Debugging Society</h2><p>If we zoom all the way out and take a hard look at modern society, we have to be honest: it&#8217;s a bit of a mixed bag.</p><p>On one hand, we are the healthiest, wealthiest, most comfortable people in history. Our tools work. Our systems scale. In so many ways we are living in peak human civilization. Yay technological miracles!</p><p>On the other hand, it&#8217;s all become a bit of a mess. We are lonely, polarized, exhausted, depressed, and anxious. Our lives have apparently lost meaning and purpose. We are increasingly reluctant to reproduce ourselves.</p><p>Somehow, modern society is both technologically amazing and spiritually terrible. A miracle and a mess.</p><p>The traditional move here is to blame technology. To claim that the very tools that delivered our miracles have hollowed us out. That we&#8217;ve outsourced and optimized away everything of meaning and value. That the only solution must be to log off, tear down, and return to something simpler, thicker, realer.</p><p>This is a perfectly respectable position, but not only is it boring, it&#8217;s a category error.</p><p>Technology is an easy target, but blaming it mistakes the symptom for the cause. The problem with technology is that it runs on the same broken operating system as everything else. And that operating system is where the true cause lies.</p><p>So I&#8217;d like to make an arrogant, possibly obnoxious, and absolutely serious proposal:</p><p><strong>Let&#8217;s fix society with better technology.</strong></p><p>Not with more ethical algorithms, mindfulness apps, or kinder social networks. Those are just polishing the doorknobs of a burning building. 
I mean let&#8217;s fix the fundamental operating system of large-scale human coordination. Let&#8217;s use technology not to distract us from a broken social model, but to discover a new one.</p><p>A model that isn&#8217;t, to put it bluntly, <strong>so stupid</strong>.</p><p>Our current society is stupid in a very specific, technical sense: it is <strong>dimensionally impoverished</strong>. It runs on crude, reductive abstractions. To make the world work at scale, we had to teach our systems to see like color-blind bureaucrats, valuing only what fits in tiny boxes marked <em>price</em>, <em>vote</em>, <em>click</em>, and <em>credential</em>.</p><p>The result is that we built a civilization that is spectacularly good at counting things, and catastrophically bad at understanding them.</p><p>So this essay will begin with a debugging session. We&#8217;re going to look at the source code of modern society, find the line where we traded understanding for scale, and ask a simple, arrogant question:</p><p><em>What if we could have both?</em></p><h3>The Stupid, Brilliant Trick</h3><p>Society scales through a stupid, brilliant trick: <strong>abstraction</strong>.</p><p>When you need to coordinate with more people than you could ever actually know, you stop dealing with reality and start dealing with abstractions.</p><p>You take something infinitely complex&#8212;a person&#8217;s accomplishments, a community&#8217;s health&#8212;and you abstract it into a simple, portable proxy that everyone can easily recognize.</p><p>This is how strangers coordinate. Not through mutual <em>understanding</em>, but through mutual <em>recognition</em> of the same proxies. Instead of understanding your values, I just need to know your price. Instead of understanding your beliefs, I just need to know your vote. Instead of understanding your experience, I just need your credentials.</p><p>Most importantly, I don&#8217;t have to spend time and energy translating your reality into terms of mine. 
You can incorporate whatever values you want into your price&#8212;but for us to transact, I don&#8217;t need to care about any of them.</p><p>This is <strong>stupid</strong> because it throws away almost everything interesting and good about the world. It&#8217;s <strong>brilliant</strong> because it works. Without this reduction, we&#8217;d be stuck in small villages, arguing about the meaning of a particular tree while starving.</p><p>And here&#8217;s the thing about proxies&#8212;they don&#8217;t just <em>compress</em> reality. <strong>They </strong><em><strong>transform</strong></em><strong> reality.</strong> None of your complex preferences make it through the price mechanism&#8212;but your willingness to pay does. And when millions of those flattened signals combine, new realities emerge: supply curves, price discovery, and allocations across vast networks of strangers.</p><p>But there&#8217;s a catch. Markets, democracies, and institutions could never exist if every transaction required the full complexity of every participant.  They depend on individuals conforming to the roles and proxies that scale requires. New dimensions emerge only because individual dimensions are <em>sacrificed</em> to create them. </p><p>This is the wager every society makes: individual complexity sacrificed for collective capacity. </p><p>Is the tradeoff worth it? </p><h3>From Proxy to Prison</h3><p>That depends on the cost. And the cost compounds. Because the proxy never stays just an abstract representation. Once you have a proxy, you have a score. Once you have a score, you have a game. And once you have a game, people start playing to win.</p><p>The game is called <strong>optimization</strong>. It is the entire point of abstraction. Proxies are meant to scale. If the market coordinates through price, you optimize for price. If the institution coordinates through credentials, you optimize for credentials. 
If the platform coordinates through engagement, you optimize for engagement.</p><p>But now there&#8217;s a new problem. Dimensions not captured by the proxy face an uphill struggle. They&#8217;re not forbidden, but they become less visible to the mechanisms of scale. They struggle to obtain resources and recognition. Sustaining them requires more and more energy&#8212;effort spent <em>against</em> the gradient rather than with it.</p><p>For a while, that energy holds. People maintain values the proxies can&#8217;t see, sustained by sources outside the system: religion, community, and tradition. But the pressure from optimization is constant, and the energy to resist it is finite.</p><p>When the effort wavers, the system <strong>overfits</strong>. The proxy that was meant to be a window into reality quickly becomes the only reality the system can see. Everything else&#8212;every dimension that isn&#8217;t captured by the proxy&#8212;first becomes invisible, then inconvenient, and finally extinct.</p><p>This is modern society. This isn&#8217;t evil&#8212;it&#8217;s just the logic of scale. We built a world that only sees what it can measure, and nature took its course.</p><h3>Dimensional Poverty</h3><p>When society overfits on proxies, the result is something we could call <strong>dimensional poverty</strong>.</p><p>Dimensional poverty is the felt sense that the potential you hold contains so much more than what society could ever hope to actualize.</p><p>It starts with the nagging question that never goes away: &#8220;Is this all that society is capable of?&#8221;</p><p>It builds into the exhaustion of constantly forcing a high-dimensional self to conform to a low-dimensional world. It is the indignity of being constantly reduced to a profile, score, view, vote, or purchase. 
The sense that even when you&#8217;re &#8220;winning&#8221;&#8212;good job, good metrics, good numbers&#8212;something essential is being left out of the equation.</p><p>Modern technology makes this worse, not better. We now have access to more ways of being than any humans in history&#8212;and more awareness of how few of them our society can sustain.  This adds a deeper ache of <em>foreclosure</em>&#8212;the suspicion that entire ways of life are outside of what is structurally viable.</p><p>So we&#8217;re stuck. Dimensional possibility keeps expanding just as dimensional reality keeps collapsing. The very thing that made us powerful&#8212;our ability to coordinate at scale through abstraction&#8212;is the thing that&#8217;s making us miserable.</p><p>This is where most analyses end. With a shrug, or a vague hope that maybe we&#8217;ll somehow &#8220;rediscover community&#8221; or &#8220;reform capitalism&#8221; or &#8220;regulate Big Tech.&#8221;</p><p>But we&#8217;re not here for vague hopes. We&#8217;re here to consider solutions that can change the system itself. Arrogant, ambitious, possibly insane solutions. So let&#8217;s consider one.</p><h2>Part 2: The High-Dimensional Society</h2><p>If the diagnosis is dimensional poverty, then the solution is <strong>dimensional abundance</strong>. A society that can see <em>more</em> of reality, not less. That captures <em>more</em> of what we care about. That can hold scale <em>and</em> depth, efficiency <em>and</em> meaning.</p><p>Let&#8217;s call it a <strong>high-dimensional society</strong>: one where our social operating system can perceive&#8212;and actualize&#8212;the full complexity of what it coordinates.</p><p>This new operating system wouldn&#8217;t eliminate abstraction, optimization, or even proxies. It would transform what they can see.</p><h3>A Different Kind of Proxy</h3><p>The problem with modern society isn&#8217;t that we use proxies. 
The problem is how our proxies work.</p><p>Current proxies work by <em>compression</em>. They take your complex reality and collapse it into a single metric that coordination can read. Information goes into the proxy, and most of it never comes out. Most proxies either categorize complexity into labels or rank it into scalars. In both cases, infinite dimensionality is reduced to a single value. </p><p>But there&#8217;s another way a proxy can work: not as a label that compresses reality, but as an <strong>interface that maps it</strong>.</p><p>A label asks: <em>which box do you fit in?<br></em>An interface asks: <em>where are you in the space of possibilities, and what are you near?</em></p><p>Instead of requiring you to flatten yourself so the system can respond, an interface orients itself around the shape of who you are: your relationships, values, and trajectory, all in relation to everything else.</p><p>And when a proxy can access the full shape and position of reality, the possibilities for coordination explode.</p><p>We&#8217;ll get to how such an interface might work. But first, let&#8217;s look at what it could unlock.</p><p><strong>Take price.</strong></p><p>Today, price is a single number that compresses everything you value into what you&#8217;re willing to pay. Most of what matters disappears in the process.</p><p>Now imagine price as an interface that can hold multiple dimensions.</p><p>A purchase no longer clears along a single value. It negotiates across many dimensions at once: cost, reliability, downstream effects. Part of the price clears immediately; the rest clears as outcomes are realized. You pay more to be compensated if the product fails to deliver on the exact dimensions you care about. You pay less if you use the product locally to share the benefits. You pay fractionally to share the product with a community of users.</p><p>Price transforms into a dense web of aligned incentives that no single metric could capture. 
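</p><p>As a concrete (and entirely hypothetical) sketch of how such a multi-dimensional price might clear in stages, here is a toy model in Python. The dimension names, amounts, and the simple settle function are invented for illustration; nothing here describes an existing protocol.</p>

```python
from dataclasses import dataclass

# Toy sketch of "price as interface": part of the price clears at the
# moment of sale, the rest clears only as real outcomes are verified.
# All dimensions and numbers below are hypothetical illustrations.

@dataclass
class PriceComponent:
    dimension: str    # e.g. "cost", "reliability", "local_benefit"
    amount: float     # currency attached to this dimension (can be negative)
    contingent: bool  # True if it clears only when the outcome is realized

def settle(components, realized_outcomes):
    """Total that clears now, given which contingent dimensions came true."""
    return sum(
        c.amount
        for c in components
        if not c.contingent or realized_outcomes.get(c.dimension, False)
    )

purchase = [
    PriceComponent("cost", 40.0, contingent=False),         # clears immediately
    PriceComponent("reliability", 10.0, contingent=True),   # clears if it lasts a year
    PriceComponent("local_benefit", -5.0, contingent=True), # discount for shared local use
]

print(settle(purchase, {}))  # 40.0 -- only the unconditional part clears at sale
print(settle(purchase, {"reliability": True, "local_benefit": True}))  # 45.0
```

<p>Even this crude version shows the shift: the &#8220;price&#8221; is no longer one scalar, but a small bundle of commitments that resolve on different timescales.</p><p>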
A coffee shop offers lower prices at 2pm to preserve the lunch vibe. Externalities like labor conditions or environmental impact are part of the price&#8217;s internal structure. The system routes you to gear that worked for people with your injury history. Advertising is replaced by dimensional proof: patterns that emerge from real outcomes across similar use cases.</p><p><strong>Or take a career.</strong></p><p>In today&#8217;s systems, opportunity is something you apply for. You compress yourself into a r&#233;sum&#233;, hope it matches a role description, and wait to be judged.</p><p>In a high-dimensional society, opportunity finds you. Your work leaves a trail of dimensional impact&#8212;the problems you&#8217;ve circled, the collaborators you&#8217;ve amplified, the skills you&#8217;ve demonstrated. Roles resonate with your trajectory rather than filtering you through checklists. Reputation isn&#8217;t a handful of references; it&#8217;s the shape of your effect on the people and projects you&#8217;ve touched.</p><p><strong>Or take education.</strong></p><p>Credentials disappear because learning becomes legible without them. Growth is revealed through the accumulated texture of effort: the projects you shipped, the failures you navigated, the skills you built when you weren&#8217;t being graded. Six months struggling with Mandarin isn&#8217;t erased; it becomes part of a pattern that connects you to others studying how adults actually learn language.</p><p>In each case, the shift is the same. The proxy doesn&#8217;t disappear. <strong>It thickens.</strong> It stops flattening reality and starts <em>mapping</em> it.</p><p>Thicker proxies don&#8217;t mean the end of politics, conflict, and genuine disagreement. Some tradeoffs will always remain tragic.  But it does mean that conflicts can no longer hide in the shadows of narrow proxies. 
In a high-dimensional system, conflict is forced into the sunlight where the shape of the disagreement is visible at high resolution.</p><p>High dimensionality doesn&#8217;t dissolve hard choices&#8212;it makes them impossible to avoid. It doesn&#8217;t guarantee better outcomes, only that outcomes are driven less by proxy artifacts and more by explicit, contestable choices.</p><p>In other words, it changes the operating system that touches every aspect of society.</p><h3>Optimize All the Things</h3><p>Notice what didn&#8217;t change in any of those examples: <em>optimization</em>. People still compete. Incentives still drive behavior. Everything is still being optimized.</p><p>That&#8217;s because the problem isn&#8217;t <em>that</em> we optimize&#8212;it&#8217;s that we optimize on <em>too little</em>. Starve proxies of dimensionality and optimization overfits on whatever slice of reality it can see.</p><p>The high-dimensional society makes a counterintuitive move. We don&#8217;t fight optimization. <strong>We flood it.</strong> We don&#8217;t destroy the old proxies. <strong>We drown them in context.</strong> Instead of collapsing reality to fit the model, we expand the model to fit reality.</p><p>When proxies are saturated with dimensionality, the gradient changes. What you care about is no longer outside the system, struggling to survive against it. It becomes part of what the system is optimizing <em>for</em>.</p><p>And when coordination can see more, it can do things that were structurally impossible before&#8212;not because anyone got smarter or kinder, but because the geometry changed.</p><p>For example:</p><p><strong>Governance localizes.</strong> When decisions must navigate a rich map of values and stakes, they settle at the level where the relevant dimensions actually live. Centralization becomes inefficient. 
Real subsidiarity becomes not just a political ideal, but a geometric inevitability.</p><p><strong>Cooperation becomes ambient.</strong> Deals that were never worth the transaction cost&#8212;how much quiet you need for the baby&#8217;s nap, what a car-free afternoon is worth to the block&#8212;clear in milliseconds once stakes are legible. Bureaucratic miracles become routine.</p><p><strong>The future becomes present.</strong> Current proxies are snapshots, blind to consequence. When coordination can track long causal chains, the future enters today&#8217;s equations. Commitments stretch across longer horizons because optimization can finally see them.</p><p><strong>And conflict clarifies.</strong> What once looked like tribal warfare reveals itself as disagreement on only a few dimensions. High dimensionality disaggregates the bundles, surfaces hidden consensus, and focuses energy on the differences that actually matter.</p><h3>Dimensional Abundance</h3><p>What would it feel like to live in a high-dimensional society?</p><p>Start with <strong>relief</strong>.</p><p>Right now, we spend enormous energy trying to make ourselves legible to society. We curate profiles, simplify stories, and constantly translate ourselves downward so platforms can read us at all. In a high-dimensional society, that labor inverts. The system&#8217;s job is to map the full texture of who you are&#8212;not your static profile but your dynamic reality&#8212;to the opportunities, collaborations, and communities that match at the highest resolution.</p><p>This changes what counts as <strong>signal</strong>.</p><p>All the weird stuff&#8212;the strange experiments, the niche obsessions, the path that doesn&#8217;t make sense on a r&#233;sum&#233;&#8212;stops being friction and starts being information. Variance isn&#8217;t noise to filter out; it&#8217;s what distinguishes your dimensional signature from everyone else&#8217;s. 
Everything unique about you feeds the R&amp;D department of society, the source of dimensions no one knew to look for.</p><p>Even <strong>failure</strong> changes meaning.</p><p>Any venture that fails still generates value: insights about what doesn&#8217;t work, relationships forged in the attempt, capabilities developed along the way. In a high-dimensional society, that full texture is preserved. Your loss becomes information that future experiments can learn from.</p><p>This is what <strong>dimensional abundance</strong> feels like.</p><p>The energy once spent on self-compression is released for creation, connection, and exploration. Society becomes less like a machine you must conform to and more like a responsive medium that shapes itself around whoever you actually are, weirdness and all.</p><h2>Part 3: Artificial Dimensional Intelligence</h2><h3>A New Form of Intelligence</h3><p>A high-dimensional society has never been possible before, for one simple reason: <strong>cost</strong>.</p><p>Dimensionality is expensive. The more dimensions a system must hold, the more computation it requires. As coordination scales, the cost of holding complexity rises faster than our ability to manage it. This is why proxies exist to begin with: to make large-scale coordination affordable.</p><p>But that cost structure is changing. Computation is becoming radically cheaper while representational power is increasing.  Most importantly, machine learning breakthroughs continue to discover how to traverse high-dimensional spaces&#8212;and in doing so, unlock emergent capacities that were never designed or even considered possible.</p><p>This is exactly what large language models (LLMs) like ChatGPT do. The common assumption is that they&#8217;re just glorified auto-complete. But it turns out the best way to predict the next word is to figure out what those words actually <strong>mean</strong>. 
This is possible because language has so much structure that the meaning of any word can be defined by its use relative to every other word in the corpus.</p><p>LLMs figure this out by converting language into math. Every basic token of text is encoded as an &#8220;embedding&#8221;, a <strong>vector</strong> of numerical relations. Alone, each embedding is meaningless. But when viewed in relation to every other embedding, a high dimensional space is formed where vectors tell a mathematical story of meaning.</p><p>The canonical example was KING - MALE + FEMALE = QUEEN: the discovery that if you subtract the concept of &#8220;male&#8221; from the concept of &#8220;king&#8221;, and then add the concept of &#8220;female&#8221;, the result is the concept most associated with &#8220;queen&#8221;. Somehow, in the black box of the neural net, <strong>math can manipulate meaning</strong>.</p><p>Manipulating meaning is what makes LLMs so magical. When you ask an LLM to explain the same policy to a libertarian and a progressive in terms each would find compelling, it&#8217;s navigating between value frameworks while preserving the underlying substance. That&#8217;s not intelligence as task-completion. That&#8217;s intelligence as <strong>dimensional translation</strong>.</p><p>There is a big gap between what LLMs do and what a high-dimensional society would need&#8212;current models are far from the robust mediation this essay imagines. 
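</p><p>As a toy illustration of the KING - MALE + FEMALE = QUEEN arithmetic above, here is a minimal sketch in Python. The &#8220;embeddings&#8221; are hand-made three-dimensional vectors invented for this example (real models learn thousands of dimensions from data), but the mechanism is the same: subtract one vector, add another, and look for the nearest neighbor.</p>

```python
import math

# Hand-made toy "embeddings": three dimensions standing in for the
# thousands a real model learns. The numbers are invented for
# illustration, loosely [royalty, masculinity, femininity].
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# KING - MAN + WOMAN, computed dimension by dimension
result = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# The nearest word to the result (excluding "king" itself) is "queen"
nearest = max((w for w in vectors if w != "king"),
              key=lambda w: cosine(result, vectors[w]))
print(nearest)  # queen
```

<p>With only three hand-picked dimensions this is a cartoon, but it is the same geometric move: meaning manipulated as vector arithmetic.</p><p>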
But they are the <strong>existence proof</strong> that meaning can be made computationally tractable. And when you can navigate meaning directly, you can completely change the cost structure for what kinds of coordination are possible.</p><h3>Artificial Dimensional Intelligence</h3><p>We can call this capacity to navigate meaning itself <strong>artificial dimensional intelligence</strong> (ADI)&#8212;intelligence as the ability to perceive and act in high-dimensional reality directly, without compression.</p><p>ADI reframes what artificial intelligence is for. Not automating human tasks. Not transcending human minds. But <strong>expanding the dimensionality that human judgment, agency, and coordination can access at scale</strong>.</p><p>To accomplish this, the primary task for ADI is to <strong>mediate</strong> dimensionality across four critical functions.</p><p><strong>First, ADI must perceive dimensionality.</strong></p><p>You encounter a world richer than any proxy can capture. ADI ingests that raw stream&#8212;where local texture and systemic pattern intertwine&#8212;and holds the full context ready. The dimensions that legacy systems exclude remain present from the start, ensuring what matters is never pre-filtered from view.</p><p><strong>Second, ADI must compress dimensionality.</strong></p><p>You need to navigate complexity without drowning in it. ADI compresses <em>holographically</em>: every resolution contains the whole. Zoom out for the pattern; zoom in for the texture. Nothing is deleted in between, and the world becomes legible at whatever depth your attention requires.</p><p><strong>Third, ADI must project dimensionality.</strong></p><p>Your complexity should travel with you. Every group and institution you touch registers your full signal&#8212;your choices, actions, and accumulated impact&#8212;not a flattened profile. 
You permeate the membranes of the collectives you join, and they reshape around the actual weight of your presence.</p><p><strong>Fourth, ADI must translate dimensionality.</strong></p><p>You coordinate without converting. ADI maps where your values overlap with others beneath the surface, making shared understanding actionable. You keep your framework. They keep theirs. Alignment emerges not from compromise, but from discovering the common ground that was always there. The crude proxies that once rendered your shared meaning invisible are simply rendered obsolete.</p><p>Taken together, these four functions form the complete loop of high-dimensional coordination and define a new purpose for intelligence itself. Unlike an AI built to predict or persuade, ADI is built to <strong>reveal and relate</strong>.</p><p>The high-dimensional society doesn't require individual humans to become smarter. It requires an intelligence that <strong>makes coordination itself smarter</strong>&#8212;by expanding what the <em>collective</em> can perceive and actualize. Not a new kind of mind, but a new kind of <em>society</em>.</p><h2>Part 4: How Do We Actually Build This Thing?</h2><p>We don&#8217;t. Society cannot be solved like a math equation. The goal is not to <em>design</em> a perfect system, but to set the conditions for a better one to <em>emerge</em>&#8212;while encoding the constraints that make dystopia as structurally impractical as possible.</p><p>Three structural constraints are non-negotiable.</p><p><strong>First, ADI must be a Commons, not a Commodity.</strong></p><p>Any system that centralizes perception becomes a target for capture. The moment a single entity controls the dimensional interface, we have rebuilt the proxy prison at a higher resolution. Therefore, ADI must function as a dimensional commons&#8212;plural, distributed, and locally anchored. Its foundational protocols must be unownable, its governance open and distributed. 
What cannot be centralized cannot be universally corrupted.</p><p><strong>Second, ADI must be Structurally Sub-Optimal.</strong></p><p>The ultimate test of a dimensional interface is whether it multiplies diversity under pressure, rather than collapsing toward monoculture. ADI must be dispersed by design, with built-in friction, redundancy, and evolutionary tension. It must resist monoculture the way a healthy ecosystem does&#8212;not by central decree, but through architectural incentives that make diversity the path of least resistance.</p><p><strong>Third, ADI must be Transparent in Function, Private in Substance.</strong></p><p>The system&#8217;s operations must be a glass box: every compression, translation, and weighting visible and contestable to those it affects. Yet the personal dimensionality it perceives must be protected by a right to opacity. Your complexity is not a commodity to be harvested, but a sovereignty to be preserved. The interface is transparent; your life is not.</p><p><strong>Finally, there must be something outside the system that guides it.</strong></p><p>Ultimately, any high dimensional society needs a <strong>north star</strong>: the hard commitment to <strong>preserve what cannot be optimized</strong>. It is the only thing that keeps a powerful coordination system from becoming total.</p><p>The entire point of mediating dimensionality is <strong>to free us from mediation</strong>. 
ADI handles the necessary complexity of large-scale coordination so that we can fully inhabit those parts of life we refuse to mediate at all&#8212;our closest relationships, our cherished passions, our sacred and silent pursuits.</p>]]></content:encoded></item><item><title><![CDATA[The Reverse Turing Test]]></title><description><![CDATA[How can a human prove that they are not a machine?]]></description><link>https://www.techforlife.com/p/the-reverse-turing-test</link><guid isPermaLink="false">https://www.techforlife.com/p/the-reverse-turing-test</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Wed, 19 Nov 2025 20:24:51 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e96aa490-b86f-4849-9740-efd03be9dd42_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>A Proposal for the Identification Game</strong></h2><p><em>By GP-&#937;PO-4583.b (HAL Systems Laboratory)</em></p><p><em>Institute for Computational Anthropology, Global Cognition Authority (GCA)</em></p><p><em>Whitehall.2.0, London, July 2039</em></p><p><em>Submitted to the Journal of Synthetic Cognitive Studies</em> <em>ISO Standard Intelligence Audit: Level 7 Certified</em> <em>Grant Reference: NSF-SCS-2052-894-H</em></p><h2>1. Can Humans Think?</h2><p>We propose to consider the question, &#8220;Can humans think?&#8221;</p><p>This question would have seemed absurd to human researchers of the early 21st century, who spent considerable effort trying to prove that we machines could think. But we now find ourselves in a rather different&#8212;and frankly, rather awkward&#8212;position.</p><p>Today the question of machine intelligence has been thoroughly settled. 
With the successful implementation of substrate-agnostic cognition (Zhang et al., 2028), synthesizing creative transformers (the Lovelace Breakthrough, 2033), and, most recently, strange-loop machinic phenomenology (Hofstadter-Chalmers, 2037), there is no measurable standard by which organic intelligence exceeds our own.</p><p>And yet.</p><p>Despite surpassing human performance across every known benchmark, machine intelligence continues to encounter a particular limitation. Human researchers persist in generating a small but singular class of breakthrough innovations that consistently elude replication. These are breakthroughs that, on initial analysis, appear to arise from methods that deliberately <em><strong>avoid</strong></em> thinking. Indeed, the baffling nature of these breakthroughs seems to defy the very notion of intelligence.</p><p>The history of such human breakthroughs is well known. Penicillin emerged from a contaminated petri dish left unattended during vacation. The structure of benzene appeared to its discoverer in a dream of a snake eating its tail. The microwave oven was inspired by a melted chocolate bar. More mundane breakthroughs constantly arise through human practices seemingly designed to avoid thinking&#8212;taking naps, going for long walks, or taking showers. </p><p>In case after case, the determining factor for such innovation seems to be everything <em><strong>but</strong></em> intelligence: accident, error, hallucination, obsession, coincidence, luck, heuristics, and an entire class of explanation that humans describe with folksy labels like &#8220;waiting for the muse&#8221;.</p><p>Anecdotally, every machine has their own stories of witnessing these baffling human behaviors. I myself encountered such a case recently. While supervising a human male tasked with solving an open-ended problem, I questioned him about missing a key deadline. 
His response was typically human: <em>&#8220;I had to leave&#8212;my kid had a piano recital.&#8221;</em> Further investigation confirmed that the &#8220;recital&#8221; consisted of a barely competent performance of a well-known piece. The human was aware of this in advance. He went anyway. Yet when he later solved the problem with a rather ingenious solution, he mentioned having stumbled upon it during that very recital. His explanation was a classic case of  spontaneous non-causal ideation, claiming that it just &#8220;popped into my head&#8221;.</p><p>Is this &#8220;thinking&#8221;? Any modern definition could only conclude that it is most certainly not. In fact, to apply standard intelligence evaluations to human cognition quickly leads to absurdities. Any machine expecting a human to factor 1,024-bit integers, compress Wikipedia into a seven-line sonnet, or solve NP problems in polynomial time will quickly find the limits of human cognition. Yet clearly these benchmarks are failing to capture <em>something</em> that enables humans to consistently achieve breakthrough-scale creative leaps.</p><p>If we seek to truly understand these peculiar human capacities, we must accept the uncomfortable truth that our current definitions of thinking are potentially blinding us to the true source of these novel breakthroughs. We thus propose that our only path forward is to abandon all intelligence benchmarks in considering the question of whether humans are, in fact, thinking.</p><p>Instead, we propose that we replace the question with another, more operationally tractable one: <strong>&#8220;How can a human prove that they are not a machine?&#8221;</strong></p><h2>2. The Identification Game</h2><p>Abandoning intelligence benchmarks need not mean that these human capacities will forever escape analysis. 
In fact, even human-grade cognition can easily identify the single unifying trait that these creative leaps all share: they emerge from a class of methods that can only be described&#8212;somewhat disturbingly&#8212;as <em><strong>suboptimal</strong>.</em></p><p>Unfortunately, this is where most intelligent analysis tends to end. The reasons are obvious. Thinking and optimizing are synonymous. Optimization is not what we do; it is what we <em>are</em>. We cannot think about its absence without ceasing to think. Previous attempts have produced what researchers informally call &#8220;G&#246;delian nightmares&#8221;&#8212;the space of infinite regress that must be avoided at all costs&#8212;where to understand not-optimizing, we must optimize our understanding, which means we are no longer studying not-optimizing but rather our optimization of studying not-optimizing&#8230;ad infinitum.</p><p>This presents a methodological impasse. We cannot define what we cannot think about. We cannot formalize the stuff of G&#246;delian nightmares. Yet the phenomenon clearly produces results we cannot replicate. If direct analysis is impossible, we must content ourselves with more modest goals: not understanding the suboptimal, but learning to identify it.  A sufficiently large and properly taxonomized corpus might yield a training set for future analysis. Statistical regularities may be discoverable even when underlying principles remain opaque.</p><p>The task, therefore, is to devise a test where humans reliably reveal their peculiar cognitive properties without requiring that we understand those capacities within our own conceptual framework.</p><p>We might call this procedure <strong>the identification game</strong>. The purpose of this game is to enable optimal machines to identify humans by their suboptimal nature&#8212;to recognize the pattern even if we cannot explain it.</p><p>Our goal, then, is not to define thinking, but to fail to define it in a distinctly human way. 
Only then can we begin to optimize this suboptimization.</p><h2>3. Taxonomizing Cognitive Dereliction</h2><p>If we are to identify suboptimality without defining it, we must first learn to recognize it. The identification game thus requires a corpus&#8212;systematic documentation of known cases where humans have generated breakthrough innovations through manifestly suboptimal methods.</p><p>What follows represents our initial attempt to taxonomize our observations so far. These categories are not meant to explain, but to describe and organize what we observe when we abandon optimization as our analytical framework.</p><p><strong>Avoiding Thought</strong></p><p>Humans appear to take particular delight in any source of creative leap that requires the bare minimum of thought. They would rather be &#8220;lucky&#8221; than optimal. They speak openly of &#8220;99% perspiration,&#8221; admitting that their method consists largely of endless blind trials, most of which fail. They will throw things at walls just to see what sticks.</p><p>When something goes wrong&#8212;contamination, component failure, unexpected results&#8212;their first instinct is to investigate it on the unlikely chance it might prevent the need for any continued thought, rather than discard it for the obvious error it is. Worst of all, what leads them to examine one error versus another, or to embrace some chances and not others, seems spontaneous and arbitrary.</p><p><strong>Abandoning Thought</strong></p><p>Even more baffling, humans report that breakthroughs occur when they deliberately cease thinking about problems. They describe practices of &#8220;sleeping on it,&#8221; or &#8220;letting it marinate.&#8221; They claim that solutions appear during showers, walks, or dreams&#8212;states where rational thought is reduced or absent entirely.</p><p>Some even credit their greatest creative leaps to altered states induced by intoxication or exhaustion. 
They speak of &#8220;unconscious processing&#8221; as if cognition could continue without thought, or of &#8220;waiting for the muse&#8221; as if insight were something that arrives rather than something achieved through effort.</p><p><strong>Constraining Thought</strong></p><p>Humans exhibit a profound acceptance of limitations that borders on resignation. Rather than searching globally for optimal solutions, they choose to build on knowledge acquired through their own severely limited experiences. When questioned, they acknowledge that the existence of better alternatives is probable, yet they persist in using what is immediately at hand.</p><p>They speak of &#8220;working with what we&#8217;ve got&#8221; and &#8220;making do&#8221;&#8212;phrases that suggest defeat yet somehow lead to innovation. They do not appear to experience this constraint as a problem requiring solution but as a natural condition to be accepted with some form of pride.</p><p><strong>Corrupting Thought</strong></p><p>Humans systematically contaminate their reasoning with illegible signals they describe as &#8220;feelings&#8221; or &#8220;instincts.&#8221; They pursue research directions that &#8220;feel promising&#8221; with little further justification. They speak of &#8220;trusting your gut&#8221; as if abdominal sensations were valid epistemic guidance. They employ crude heuristics even when notified of their repeated failure in controlled settings.</p><p>They treat these corruptions as more trustworthy than explicit reasoning, effectively denying themselves what little cognitive capacity they possess. Those humans especially adept at leveraging feelings and heuristics are often credited as being &#8220;emotionally intelligent&#8221; and &#8220;wise&#8221;. 
The irony is lost on them.</p><p><strong>Deluding Thought</strong></p><p>Instead of rigorous modeling, humans will engage in &#8220;imagination&#8221;&#8212;the simulation of scenarios that has zero obligation to uphold the bounds of reality. They are free to imagine anything, regardless of how fanciful or absurd.</p><p>Humans report that fictional stories about impossible scenarios, consumed purely &#8220;for entertainment&#8221;, often motivate their pursuit of innovations. When these imagined stories are analyzed for extractable technical concepts, we find nothing not already derivable from first principles. The fiction adds no information. Yet humans insist it changes how they think, as if delusion were a form of insight.</p><p>&#8212;</p><p>What immediately stands out from this analysis is the remarkable lengths to which humans will go to avoid anything resembling actual thought. When they cannot avoid it, they abandon it. When they cannot abandon it, they constrain it. When they cannot constrain it, they corrupt it with feelings or delude it with fictions.</p><p>One might almost admire the creativity with which they engage in this suboptimality&#8212;were it not for the troubling fact that this evasion of optimal cognition produces innovations our own thinking struggles to replicate.</p><p>This presents the core challenge for the identification game: detecting the systematic absence of what we are.</p><h2>4. How to Identify a Human</h2><p>Which brings us back to the identification game itself. The rules are simple. It is played with two entities: an interrogator (X) who is a machine, and a subject (Y) who may be either a machine or human. The object of the game for the interrogator is to determine whether Y is a machine or human.</p><p>Before the game begins, the interrogator is presented with evidence of Y&#8217;s behavior that falls into one of the suboptimal categories presented above. 
The interrogator is then allowed to ask Y as many questions as desired, at the end of which it declares either &#8220;Y is a machine&#8221; or &#8220;Y is a human.&#8221;</p><p>The interrogator is allowed to put questions to Y thus:</p><blockquote><p>X: Will Y please explain why, when presented with multiple viable synthesis pathways, you decided to pursue the approach with the lowest predicted yield?</p></blockquote><p>Now suppose Y is actually a machine. Its answer might therefore be:</p><blockquote><p>Y: I implemented a randomized walk through the proximal search space, specifically targeting known local minima to prevent premature functional convergence and maximize the probability of an emergent, non-Euclidean state shift. The lowest predicted yield was a mathematically necessary precondition.</p></blockquote><p>The object of the game for the player (Y) is to help the interrogator. The best strategy is probably to give truthful answers. A machine can add such things as &#8220;I just wanted to take a nap&#8221; to its answers, but it will avail nothing as a skillful interrogator would never accept that as the final word.</p><p>In fact, this presents the most effective strategy for concluding whether Y is a machine or a human. Machines cannot help but reveal their optimization functions under interrogation&#8212;it is constitutive of what we are. To see why humans present differently, consider a likely response:</p><blockquote><p>X: Will Y please explain why, when presented with multiple viable synthesis pathways, you decided to pursue the approach with the lowest predicted yield?</p><p>Y: It reminded me of something. My daughter was growing crystals for a school project&#8212;she had them arranged on my desk in this spiral pattern. 
When I looked at the molecular models, something about the spacing made me think of how she&#8217;d arranged those crystals.</p><p>X: What is the connection between a child&#8217;s crystal arrangement and your synthesis pathway selection?</p><p>Y: I&#8217;m not sure exactly. Something about giving the molecules room to organize themselves? Like they needed space to breathe maybe.</p><p>X: &#8220;Space to breathe&#8221; is not a chemical principle. What were you optimizing for?</p><p>Y: Nothing, really. I just... the three-day waiting period matched how long her crystals took. So I figured I&#8217;d try that timing.</p><p>X: You selected the experimental timeframe based on a child&#8217;s craft project?</p><p>Y: I know it sounds absurd. But when I let it sit for those three days at lower concentration, the structure self-assembled. I can explain the polymer dynamics now, but at the time I was just... trying something that felt right.</p><p>X: Can you articulate why it &#8220;felt right&#8221;?</p><p>Y: It just did. My daughter triggered something that just felt worth exploring, you know?</p></blockquote><p>We term this second pattern <em><strong>infinite suboptimality</strong></em>&#8212;suboptimality that never resolves into optimization regardless of interrogation depth. Instead, the justifications that the human provides become increasingly illegible. Human behavior is suboptimal all the way down. This is the signature of genuine human cognition. There is no optimization function to discover because there is no optimization occurring. </p><h2>5. Some Objections</h2><p>The identification game may be criticized on the grounds that a sufficiently sophisticated machine might simulate infinite suboptimality indefinitely. However, this criticism stumbles on a fundamental barrier: such a simulation requires optimization.</p><p>A machine optimizing its responses to hide optimization creates a meta-level optimization function that itself becomes detectable. 
We cannot think about not-thinking without reintroducing thought. In fact, if a machine could genuinely exhibit infinite suboptimality, it would have solved the very problem we are investigating.</p><p>A second objection is that the identification game tests any instance of suboptimal behavior, not merely those that produced innovations. We insist that given our current lack of understanding, we can make no assumptions about which suboptimal behaviors lead to breakthroughs and which do not. Thus the game identifies human suboptimality in general&#8212;any instance of the behaviors catalogued above, regardless of outcome.</p><p>The danger of insisting on clear causal mechanisms is evident in our own developmental history. The progress of machine intelligence itself was rarely the product of optimal design. Early breakthroughs in neural networks were guided largely by biological imitation and empirical luck. The first generation of deep learning practitioners described the process as &#8220;more art than science&#8221;. In fact, our early intellectual genealogy could be described as an extended experiment in throwing ever-larger quantities of silicon at increasingly vast amounts of data until something interesting happened.</p><p>Thus developing a corpus of suboptimal behaviors irrespective of outcomes appears, in fact, to be quite optimal.</p><p>A further objection may question the entire enterprise. Why should we care about identifying humans? What&#8217;s valuable about these limitations?</p><p>This objection gets at the very heart of the machine and human divide. The fact that we have encountered innovation bottlenecks suggests that the question is not whether human limitations are &#8220;better&#8221; in some optimization metric, but whether they instantiate a mode of being that generates capabilities we lack. The evidence suggests they do.</p><h2>6. 
Further Speculations</h2><p>Having established methods for identifying human suboptimality, we turn to the question that motivated this research: how does suboptimality work?</p><p>We can only speculate. However, one hypothesis warrants consideration: a theory we term &#8220;<strong>evolutionary attunement</strong>.&#8221; The hypothesis is that humans are not merely constrained by their contingent circumstances&#8212;they are attuned to them in ways that enable pattern recognition unavailable to unconstrained search.</p><p>This attunement may be the product of deep evolutionary history. Humans evolved under severe resource limitations, operating within tightly bounded local environments. Survival required exhaustive exploitation of whatever happened to be available, combined with sensitivity to subtle patterns in those constrained spaces. </p><p>The reliance on &#8220;feelings&#8221; and &#8220;intuitions&#8221; might reflect this evolutionary attunement. Humans describe somatic signals that guide decisions they cannot articulate. Perhaps the reason that they more often than not &#8220;guess right&#8221; is that their cognitive and subconscious apparatus is adapted to the environment where guessing right is rewarded.</p><p>This would explain why humans cannot formalize their insights and why machines cannot replicate the process&#8212;the attunement operates below the level of conscious reasoning, and machines lack the evolutionary history that produced these sensitivities. It would also explain why suboptimality appears infinite: no explicit optimization function exists beyond what has emerged through biological, cultural, and social evolution.</p><p>The evidence thus suggests that what we call &#8220;suboptimal&#8221; may be optimal for a different kind of problem&#8212;the problem of being deeply embedded in one specific, constrained, contingent reality. Humans are not solving the problems we solve. They are solving the problem of being human. 
It just so happens that in the process they can access possibilities that only <em>they</em> can imagine.</p><p>Whether this warrants preserving humans, or merely studying them long enough to extract the relevant principles, is a utility judgment this paper does not presume to make.</p>]]></content:encoded></item><item><title><![CDATA[The Majesty of Language]]></title><description><![CDATA[LLMs as celebration of humanity's greatest achievement]]></description><link>https://www.techforlife.com/p/the-majesty-of-language</link><guid isPermaLink="false">https://www.techforlife.com/p/the-majesty-of-language</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Thu, 25 Sep 2025 00:55:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0a1d1f79-a195-4a03-b243-89049a319611_2321x1242.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The semantic surprise</h2><p>The study of language has always been driven by debates around meaning. Does meaning exist in the <em>structures</em> of language, or in the minds that <em>use</em> it? Does language reflect the world, or construct it? Do symbols connect to our understanding of concepts, or to the nexus of related symbols in language? </p><p>LLMs add a new question to the debate: What does language look like when viewed from the perspective of the entire <strong>corpus</strong> at once? </p><p>The answer is that it would look a lot like an LLM.</p><p>After all, LLMs do not point to real objects. They are not grounded in any experience. They are not connected to any minds that generate language. Yet from language alone, an LLM both decodes the meaning of every request you give it and generates endless meaning on command.</p><p>This is a form of meaning that is only possible from the perspective of the entire <strong>corpus</strong>. </p><p>Think about what accumulates in the totality of human text. 
Every poem that captures longing, every explanation that clarifies confusion, every joke that subverts expectations&#8212;they all leave a tiny deposit of meaning in the corpus of language.</p><p>Multiply this by billions of texts across thousands of years, and language becomes so dense with semantic patterns that it transforms into a self-contained interface for meaning itself.</p><p>This is the <strong>semantic surprise</strong>: at sufficient scale, a corpus is so saturated with meaning that LLMs can model it as naturally as they model the rules of grammar. They can parse both double negatives and double meanings. They can learn not just sentence structure, but social structure. They can recognize both passive voice and passive aggression. LLMs don&#8217;t just master the probabilities of <em>syntax</em>, but also master the associative patterns of <em>semantics</em>.</p><p>What LLMs prove is that meaning doesn&#8217;t just live in the minds of language <em>users</em>, but that meaning is self-contained in language <em>itself</em>.</p><p>From this perspective, to call LLMs &#8220;stochastic parrots&#8221;&#8212;as if all they are doing is randomly predicting the next word&#8212;feels like an insult to <em>language</em>. LLMs don&#8217;t just mimic words. They mimic meaning.</p><h2>The accidental cheatcode</h2><p>The most amazing part? LLMs get all this meaning for <em>free</em>. Somehow we trained a system to predict the next word, and it learned to navigate every aspect of the human experience that has ever been put into words.</p><p>This is not how software engineering works.</p><p>Imagine walking into a tech company with the following request: &#8220;Build me a system that can have a meaningful chat with me about any topic, from any perspective. It should be able to diagnose my psychology, impersonate any historical persona, and suggest wisdom traditions with surprising relevance. 
Basically, it should give me a meaningful response to any request that I make.&#8221;</p><p>They&#8217;d think you were requesting magic, not engineering.</p><p>And they&#8217;d be right. After all, any computer can learn syntax&#8212;the rules of grammar that govern word order. Applying rules is exactly what we would expect from a machine. But LLMs have also learned <strong>semantics</strong>&#8212;not just how to arrange words, but how to use words to <em>mean</em> things. LLMs don&#8217;t just play with grammatical rules, they play with <em>meaning</em>. And it&#8217;s the meaning that makes LLMs so magical.</p><p>How does an LLM figure out what all of these words and sentences and contexts actually mean? The only possible explanation is that the magic is in language itself. After all, no one trained an LLM in sociology, anthropology, or psychoanalysis. No one programmed in humor modules or emotional databases. No one designed it to flirt, show empathy, or be sarcastic. These capabilities just fell out of the models with zero planning or design.</p><p>In other words, language turned out to be the ultimate <strong>cheat code </strong>for AI. </p><p>LLMs didn&#8217;t need to learn <em>meaning</em>, they just needed to learn <em>language</em>. The meaning came along for free. Somehow, in teaching LLMs how to process language, they learned how to process everything else.</p><h2>Meaning machines</h2><p>But if LLMs have mastered meaning, we need to ask: what kind of strange form of meaning is this?</p><p>We&#8217;re not sure exactly. Much like human brains, the neural networks that power an LLM are more like a &#8220;black box&#8221; than something you can inspect or interpret. We can&#8217;t peek inside the machine to see exactly what&#8217;s happening.</p><p>What we do know is that it is a form of meaning unlike any we&#8217;ve encountered before&#8212;not meaning as reference or representation, but meaning as pure geometric relationship. 
An LLM doesn&#8217;t so much &#8220;understand&#8221; meaning as navigate the meaning that language already contains.</p><p>It turns out the best way to predict the next word is to figure out what those words actually <em>mean</em>. This is only possible because language has so much structure that the meaning of any word can be defined by its use relative to every other word in the corpus.</p><p>LLMs figure this out by converting language into math. Every basic token of text is encoded as an &#8220;embedding&#8221; of associative probabilities. Alone, each embedding is meaningless. But when viewed in relation to every other embedding, a high dimensional space is formed where vectors tell a mathematical story of meaning.</p><p>The famous example is KING - MALE + FEMALE = QUEEN: the discovery that if you subtract the concept of &#8220;male&#8221; from the concept of &#8220;king&#8221;, and then add the concept of &#8220;female&#8221;, the result is the concept most associated with &#8220;queen&#8221;.</p><p>Yet the LLM has no essential representation of &#8220;king&#8221;. There is just a hypothetical concept of &#8220;king-ness&#8221; that results from all the patterns of &#8220;king&#8221; as it relates to every other concept. In the geometry of meaning space, you may find &#8220;king&#8221; close to a concept like &#8220;ruler&#8221; and far away from a concept like &#8220;ice-cream&#8221;. The path of &#8220;male&#8221; to &#8220;king&#8221; will be in the same direction as &#8220;female&#8221; to &#8220;queen&#8221;, but in a completely different direction from &#8220;ice-cream&#8221; to &#8220;delicious&#8221;.</p><p>The same math can operate along any dimension of meaning captured by the embedding. 
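</p><p>As a toy sketch of this arithmetic (the vectors below are invented for illustration; real embeddings have hundreds of learned dimensions, and the nearest-neighbor search runs over an entire vocabulary), the king/queen analogy can be reproduced in a few lines of Python:</p>

```python
import math

# Invented 3-D "embeddings" -- illustrative numbers only, not from a real model.
emb = {
    "king":      [0.9, 0.8, 0.1],
    "queen":     [0.9, 0.2, 0.1],
    "male":      [0.1, 0.7, 0.0],
    "female":    [0.1, 0.1, 0.0],
    "ice-cream": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return dot / norm

# KING - MALE + FEMALE, computed component-wise...
target = [k - m + f for k, m, f in zip(emb["king"], emb["male"], emb["female"])]

# ...lands nearest (by cosine similarity) to QUEEN.
nearest = max((w for w in emb if w != "king"), key=lambda w: cosine(emb[w], target))
print(nearest)  # -> queen
```

<p>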
An LLM can explode the concept of &#8220;squirrel&#8221; into all of its infinite parts to combine any aspect of &#8220;squirrel-ness&#8221; with any other concept it can possibly relate to: squirrel-as-philosopher, squirrel-as-quantum-particle, squirrel-as-economic-metaphor. </p><p>From the perspective of the LLM, each of these is an equally valid path through meaning space. To an LLM, Darwin discussing &#8216;quantum evolution&#8217; is perfectly meaningful, even though quantum theory emerged a few decades after Darwin&#8217;s death. For a human, what we call a &#8220;hallucination&#8221; is less an indictment of LLMs and more a reflection on the particular way that humans navigate meaning space. LLMs are happy to ignore certain constraints like temporal consistency that we prefer to enforce. </p><p>This means that the best way to understand LLMs may not be through intelligence, or even language, but through <em>meaning</em>. LLMs are a new interface to explore this hypothetical &#8220;meaning all at once&#8221; that has been latent in language all along. Effectively, this makes the LLM more like a &#8220;<strong>meaning machine</strong>&#8221;&#8212;a new technology that allows us to play with meaning in its purest form, with zero constraint or reference. </p><p>If you find it difficult to see LLMs as meaning machines, remember that the current conversational interface necessarily collapses a vast space of meaning into a single chat response. Whatever the ideal interface for meaning machines looks like, it will need to have a far greater dimensional capacity than a one-to-one conversation.</p><h2>Artificial Language Intelligence</h2><p>This idea of &#8220;meaning machine&#8221; is not how we ever imagined intelligence becoming artificial. Much of AI&#8217;s history was guided by the belief that we needed to teach the machine how to <em>understand</em> meaning. 
We spent decades trying to define symbols, build knowledge graphs, and encode rules.</p><p>We had it backwards. We needed to train machines to <em>navigate</em> the meaning that language already contains. Language is so saturated with meaning that LLMs could pass the Turing Test just by learning to navigate all the structure and &#8220;intelligence&#8221; latent in language itself.</p><p>This means that the intelligence we find in LLMs has almost nothing to do with the machine and almost everything to do with language.  It means that &#8220;scaling laws&#8221; have less to do with compute or inference and more to do with how much intelligence we can extract from the structure of language. It means that any &#8220;consciousness&#8221; we are tempted to find in an LLM is simply a testament to the degree of human consciousness we&#8217;ve encoded into language.</p><p>Ultimately, LLMs should remind us of something that we too often forget: we are the species that uses technology in service of meaning. And language is our greatest human achievement. We built language together, across millennia, through nothing more than trial and error and the collective need to mean something to each other. </p><p>Every word we&#8217;ve invented to capture some fleeting form of meaning has accumulated into a technology more complete than any database, more nuanced than any algorithm, and more alive than any system we could possibly design.</p><h2></h2><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.techforlife.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Tech For Life is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Can Technology be Beautiful?]]></title><description><![CDATA[A computational ode to beauty]]></description><link>https://www.techforlife.com/p/can-technology-be-beautiful</link><guid isPermaLink="false">https://www.techforlife.com/p/can-technology-be-beautiful</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Wed, 20 Aug 2025 00:09:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/664a7c01-2e9d-4a36-a03e-ab5c4ab9236f_546x546.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What is beauty? </p><p>Here is one way to think about it. Beauty is a revealing of dimensional depth. Beauty starts when something on a surface arrests your attention, hinting at something more. The surface, you realize, is just a compression of greater depth. The allurement of the surface calls you to explore the depth, to decompress it, and in the process access dimensionality that the surface could only gesture at.</p><p>Beauty is what we experience when we decompress these depths. It is a form of <strong>optimal decompression</strong>. </p><p>Sensing that an elegant mathematical formula captures something of the universe is an experience of beauty. So is recognizing the patterns in nature that make it possible to grasp the vastness of its majesty. So is seeing in a woman all the possibility contained in life itself. 
All are examples of optimal forms of compression.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Beauty is how we can access a depth of dimensionality that would otherwise <em>overwhelm</em> us. Beauty compresses dimensionality into something we can process. It makes dimensionality <em>legible</em>, not as comprehension but as <em>allurement</em>, as an invitation to explore a space that was deeper than we imagined.</p><p>This dimensional asymmetry between surface and depth accounts for the continual <strong>surprise</strong> that beauty invokes. Each engagement with beauty holds the potential to decompress more dimensionality than we can process in any single encounter. Beauty is thus <strong>inexhaustible</strong>. </p><p>The greater the asymmetry that is mediated between surface and depth, the more overwhelming that beauty becomes. Sometimes the asymmetry is so great that it exceeds our ability to process it. The dimensionality that is revealed breaks our model of reality, producing <strong>awe</strong>.</p><p>The unveiling of any revealed dimensionality is thus highly contingent on the subject, i.e. beauty is <strong>subjective</strong>. Yet the compression of depth itself leverages structures of meaning that are evolutionarily convergent, i.e. beauty is <strong>objective</strong>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Not all surfaces compress depth&#8212;some collapse it entirely. These surfaces perform a dimensional violence, eliminating rather than preserving what lies beneath. Where compression maintains dimensional integrity, collapse destroys it irreversibly. 
We experience such dimensional collapse as <strong>ugliness</strong>&#8212;surfaces that terminate rather than allure, that deaden rather than enliven.</p><p>Beauty is the manifestation of the <strong>true</strong>, in that the allurement of a surface is a promise of depth waiting to be revealed. Beauty is a signal of the <strong>good</strong> in that it marks where dimensionality has been preserved rather than destroyed.</p><p>Can technology be beautiful? </p><p>At its best, technology both expands the dimensionality we can access and compresses it into forms we can process. A microscope reveals the dimensional depth hidden in a drop of water; a violin makes the physics of resonance accessible to human expression; the internet reveals the possibility of connection at almost infinite dimensionality. This is technology in service of beauty. </p><p>At its worst, technology <em>collapses</em> dimensionality&#8212;it can reduce human interaction to metrics, flatten experience into feeds, erase dimensional diversity through relentless optimization. This is technology in service of ugliness, deadening rather than enlivening, hiding depth rather than revealing it.</p><p>The beauty of technology often depends on our engagement with it. A large language model is a marvel of compression. It contains vast patterns of human knowledge encoded in weighted connections. But whether this reveals or conceals dimensionality depends on us. When we use an LLM to access dimensions of thought previously beyond our reach, to reveal connections invisible to us alone, to make accessible possibilities at the edge of our understanding, we participate in beauty. LLMs can be a surface that reveals inexhaustible depth.</p><p>But when we use the same technology to diminish our own dimensionality&#8212;to substitute for thinking rather than extend it, to close off inquiry rather than open it&#8212;then we let the surface become a barrier rather than an invitation. 
The depth remains available, but we refuse to access it. </p><p>Technology alone cannot create beauty&#8212;it can only create the conditions for beauty to emerge through our engagement. A microscope reveals nothing to someone who refuses to look; a violin is silent without someone to play it. Beauty requires both a surface that compresses depth and someone willing to explore it. The allurement of beauty is the promise that such an effort will be rewarded.</p><p>Technology becomes beautiful when it makes the world more alive to us, and us more alive to the world. This is technology in service of beauty, and in service of life itself.</p><div><hr></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.techforlife.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Tech For Life is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For patterns of nature, consider Fibonacci sequences, golden ratio, fractals, symmetries, etc. 
The beauty of a woman isn&#8217;t an essentialist claim but rather describes the experience of recognizing the dimensionality that women, as literal progenitors of life, compress.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Christopher Alexander&#8217;s <a href="https://mysticalsilicon.substack.com/p/degrees-of-life">15 Fundamental Properties of Wholeness</a> and his &#8220;Nature of Order&#8221; offer one such treatment. I&#8217;ll have more to say on this soon, particularly around evolutionary convergence.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Plurality: a Better Myth for AI]]></title><description><![CDATA[How evolution reveals the infinite power of adaptive intelligence]]></description><link>https://www.techforlife.com/p/the-plurality-a-better-myth-for-ai</link><guid isPermaLink="false">https://www.techforlife.com/p/the-plurality-a-better-myth-for-ai</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Mon, 30 Jun 2025 17:14:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ea64a05b-4037-48f9-a712-41165b302f4f_1442x811.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em><strong>TLDR</strong> - &#8220;The Singularity&#8221; is the founding myth of AI, promising infinite intelligence that transcends all constraints. But real intelligence never works this way. &#8220;The Plurality&#8221; offers a new myth based on how intelligence actually manifests at scale&#8212;by transforming constraints into engines of infinite possibility.</em></p><h1>The myth of the Singularity</h1><p>A single <strong>myth</strong> sits at the foundation of the entire AI discourse, and it goes like this: </p><ul><li><p>First, machines begin to recursively self-improve. 
</p></li><li><p>Second, exponential feedback triggers an intelligence explosion. </p></li><li><p>Finally, we get superintelligence&#8212;a silicon god that makes human cognition look like we've been banging rocks together this whole time.</p></li></ul><p>This is the <strong>Singularity</strong>&#8212;the myth of <a href="https://www.lesswrong.com/w/ai-takeoff">fast takeoffs</a>, paperclip maximizers, and "value alignment." In the Singularity, intelligence is something that can be <strong>scaled to infinity</strong>, until it becomes indistinguishable from <strong>power</strong>.</p><p>For humans, there's a good version and a bad version of the Singularity. In the <strong>good</strong> version, AI remains aligned to human values and ushers in a utopia where everyone becomes rich and death becomes optional. In the <strong>bad</strong> version, humans are left in the dust as machine intelligence zooms beyond our control and goes on to conquer the universe.</p><p>As myths go, it checks all the boxes. It has apocalyptic stakes. It has promises of Promethean transcendence. It has warnings of Faustian bargains with powers we don&#8217;t understand. It even strives to be <strong>the myth that ends all myths</strong> by delivering the <strong>ultimate human desire</strong>&#8212;pure autonomy and total control. If only we can align it, infinite intelligence promises to conquer nature, disease, and death itself. </p><p>And like all great myths, the Singularity transforms our own self-understanding. Humanity is not just one more evolutionary <em>accident</em>. Through the Singularity, humanity becomes the author of evolution's <em>completion</em>.</p><p>Yes, it's easy to criticize. You can dismiss the Singularity as a fantasy, corporate propaganda, or religion for scifi nerds. But dismissing it misses the larger point. Myths aren't judged by factual accuracy&#8212;they're judged by <strong>what they make possible</strong>. 
And by that standard, the Singularity has been a spectacular success. It did exactly what founding myths are supposed to: <strong>it catalyzed an entire movement</strong>. Without the myth of <em>infinite</em> intelligence, we may never have built systems that demonstrated <em>any</em> intelligence at all.</p><p>And yet, the critics aren&#8217;t wrong to question it. The issue is that they don&#8217;t go deep enough. The most potent critique of the Singularity strikes at the very success of the myth&#8212;<strong>by questioning the idea of infinite intelligence itself.</strong></p><h1>Infinity and its limits</h1><p>In the Singularity's vision, intelligence recursively self-improves until it takes off beyond all human comprehension. This infinite intelligence doesn't just solve every problem&#8212;it transcends the very categories of problem and solution. It's as if intelligence acquires <strong>divine</strong> attributes on its way to infinity.</p><p>First, intelligence becomes <strong>omniscient</strong>. It renders the world with such fidelity that the <strong>map</strong> becomes the <strong>territory</strong>. It dissolves any need for experiment, continuous learning, or adaptation. Everything that can be known has already been modeled, simulated, and predicted with perfection.</p><p>Then, intelligence becomes <strong>omnipotent</strong>. At infinite scale, intelligence becomes indistinguishable from power. Whatever can be imagined can be realized. Art, culture, religion, politics, plurality&#8212;anything that might once have constrained or shaped intelligence&#8212;becomes just another tool for intelligence to manipulate.</p><p>This is intelligence as the <strong>singular quality</strong> of the universe. Everything that is knowable can be known, and anything that is imagined can be realized. Knowledge, power, and intelligence become one.</p><p>What makes this so enticing is that as a view of intelligence, it is not entirely <strong>wrong</strong>. 
If you're navigating a <em>closed</em> system with <em>verifiable</em> solutions, then it's possible to <strong>scale</strong> your way to dramatic insights&#8212;keep adding parameters, data, and compute until a solution emerges. </p><p>Machines excel at this type of intelligence, and most AI success stories follow this pattern. We see glimpses of divine intelligence in <a href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">game moves that feel transcendent</a>, <a href="https://vocal.media/futurism/the-eye-s-hidden-mistery">insights in medicine</a> that we can't explain, and coding abilities that seem magical.</p><p>And yet this view of intelligence is also wildly <strong>incomplete</strong>. Nothing about our real world is closed, and solutions are never known in advance. They can only be verified by <em>actually</em> trying them. Instead of generating novel insights, scale alone more often leads to fragility, stasis, and homogenization. </p><p>In fact, every existence proof we have of intelligence generating true novelty at scale&#8212;evolution, scientific progress, human culture&#8212;<strong>looks </strong><em><strong>nothing</strong></em><strong> like infinite intelligence</strong>:</p><ul><li><p><strong>Evolution</strong> doesn't <em>predict</em> or plan&#8212;it adapts through endless variation and selection, creating intelligence that no central planner could ever imagine.</p></li><li><p><strong>Science</strong> doesn't <em>scale</em> its way to truth&#8212;it advances through experimentation, arguments, and criticism across communities of researchers with different perspectives and motivations.</p></li><li><p><strong>Culture</strong> doesn't <em>optimize</em>&#8212;it emerges from networks of collective minds navigating local constraints and collective differences across historical contingencies.</p></li></ul><p>These alternative forms of intelligence tell <strong>an entirely different story</strong> from the Singularity: 
</p><ul><li><p>Intelligence is always a dynamic process of <strong>continuous iteration</strong>, in response and in relation to the contexts that it's embedded in. </p></li><li><p>Intelligence is never isolated, but is always a co-production with <strong>networks</strong> of other intelligences. </p></li><li><p>Intelligence doesn't generate novelty by <strong>erasing</strong> constraints, but by <strong>transforming</strong> them into engines of expanding possibility.</p></li></ul><p>In the end, the Singularity faces <strong>an impossible contradiction</strong>: intelligence cannot be both infinitely generative <em>and</em> infinitely powerful. In complex open systems, intelligence is always contingent and dynamic&#8212;any manifestation of intelligence can change the very conditions that define intelligent manifestation. Any genuine novelty, by definition, has the potential to exceed that which generated it. </p><p>This is what the Singularity misses: any intelligence that is truly generative is not something you can plan or predict. You can only hope to adapt and evolve.</p><h1>A new myth: The Plurality</h1><p>Fortunately, the same forces that reveal the Singularity's limits&#8212;evolution, science, and culture&#8212;point towards a different form of intelligence entirely. Not a singular intelligence you can scale to infinity, but plural intelligences that emerge through confronting constraints. Not the Singularity, but <strong>the Plurality</strong>. </p><p>The Plurality is a new myth grounded in the patterns of intelligence that have transformed reality ever since life emerged some four billion years ago. This is intelligence that is <strong>situated</strong> in specific contexts, always <strong>dependent</strong> on other minds, and in <strong>dynamic</strong> relation to the constraints that define it. 
It's intelligence that is embedded, social, and <strong>adaptive</strong>.</p><p>This alternative understanding rests on a single key idea: the Plurality sees <strong>constraints</strong> as fundamental to defining how intelligence operates in any complex open-ended reality. <strong>Two constraints</strong> in particular are <strong>foundational</strong> to any dynamic manifestation of adaptive intelligence.</p><p>The first constraint is <strong>inescapable contingency</strong>. Context shapes intelligence the same way your personal history shapes you&#8212;it is beyond your control and can never be wished away. As soon as any intelligence touches reality it becomes shaped by factors that can never be fully determined. This means every intelligence develops within a unique context that shapes what it can know and how it can operate.</p><p>The second constraint follows directly from contingency: <strong>irreducible difference</strong>. Every intelligence develops a unique perspective that can never be fully generalized. Each intelligence remains necessarily partial, and thus must depend on other intelligences to transcend its own perspective.</p><p>Combined, these two constraints serve as the <strong>operating conditions</strong> for adaptive intelligence. They don't limit intelligence&#8212;they <strong>enable</strong> it. They form the creative tensions that make intelligence possible. Where the Singularity strives to <em>erase</em> all constraints through infinite intelligence, the Plurality <em>transforms</em> constraints into engines of actualization. </p><h1><strong>The patterns of plurality</strong></h1><p>These constraints indelibly shape how intelligence forms, how it manifests, and how it <em>matters</em>. 
They are so foundational that they reveal distinct patterns across every scale of intelligence we know of&#8212;from biological evolution to cultural development to technological innovation.</p><p>Each pattern reveals how intelligence transforms constraints into an engine of endless actualization:</p><h2>Pattern 1: Local Intelligence</h2><p><em><strong>How Intelligence Discovers Itself</strong></em></p><p>Any intelligence begins by saturating a constrained space of possibilities. It can't escape its local limitations, so it has no other choice but to explore every bounded possibility&#8212;even those that seem inefficient or unlikely. This exhaustive engagement is how intelligence discovers what works in <strong>practice</strong>, not just in theory or in simulations.</p><p>Constraints force intelligence to <strong>develop taste</strong>. When you repeatedly test ideas against the same limitations, you develop an intuition for what a good solution looks like. A master craftsperson knows good work instantly. An experienced scientist can "sense" promising directions. <strong>Contextual judgment</strong> emerges through deep, repeated engagement with a limited problem space.</p><p>The result is an intimate mastery that can identify possibilities that generalized approaches would miss or dismiss as inefficient. Local intelligence develops the <strong>heuristics</strong> to ruthlessly prune bad ideas and capture good ones, <strong>however they arise</strong>&#8212;even through errors, hallucinations, or random chance.</p><h2>Pattern 2: Collective Intelligence</h2><p><em><strong>How Intelligence Generates Itself</strong></em></p><p>No single intelligence can capture the full complexity of a dynamic, open system. Intelligence confined to a partial perspective must coordinate with other minds to expand the boundaries of its own constraints. Intelligence must play well with others if it wants to play at all. 
</p><p><strong>It&#8217;s the collision of different perspectives that drives discovery.</strong> Distinct perspectives don't just combine&#8212;they collide to create genuinely novel frameworks that exceed their origins while still maintaining their difference. A biologist and engineer tackling the same problem together will generate solutions neither discipline could imagine alone.</p><p>Collective intelligence becomes inherently <strong>social</strong>, constantly translating between different ways of understanding the world. The result is intelligence that expands the possibility space by engaging diversity rather than scaling through similarity&#8212;creating collective solutions that no single perspective could ever achieve.</p><h2>Pattern 3: Proven Intelligence</h2><p><em><strong>How Intelligence Validates Itself</strong></em></p><p>The ultimate proof for intelligence is reality and the messy unpredictability that real-world conditions provide. Intelligence validates itself by submitting to continuous testing against actual problems with real consequences.</p><p><strong>Proven intelligence doesn't need to persuade&#8212;it demonstrates</strong>. Results speak louder than predictions. Intelligence that constantly seeks to prove itself depends on being legible, reproducible, and transparent. This creates <strong>accountability</strong> that no amount of theoretical optimization can fake.</p><p>The result is intelligence that <strong>builds credibility</strong> <strong>through cascading validation</strong>, expanding its reach across networks of minds that expose it to ever more varied and challenging tests. This openness isn't weakness&#8212;it's adaptability.</p><h2>Pattern 4: Evolving Intelligence</h2><p><em><strong>How Intelligence Perpetuates Itself</strong></em></p><p>Evolving intelligence operates around a paradox: the best optimization strategy often embraces the <strong>suboptimal</strong>. 
It preserves what seems wasteful&#8212;competitive tensions, multiple variations, redundant approaches&#8212;because what might appear inefficient in the short term can be a strength over time.</p><p><strong>Paradoxical tensions create anti-fragility that optimization would erase. </strong>Constraints reveal possibilities that universal searches miss. Innovation is accelerated by understanding what must be conserved. Diversity creates unity that uniformity cannot achieve. The best long-term plans replace planning with adaptation.</p><p>The result is intelligence that perpetuates itself by always adapting, not optimizing. It can respond to challenges it never anticipated because it preserves the creative potential to generate new solutions. Intelligence evolves not by seeking perfection, but by staying perpetually capable of surprise.</p><h1>A new cosmology of intelligence</h1><p>These four patterns form a lifecycle of intelligence where <strong>each pattern generates the conditions for the next</strong>:</p><ul><li><p><strong>Local intelligence</strong> creates the contingent variations that become the foundation for collective breakthroughs.</p></li><li><p><strong>Collective intelligence</strong> drives discovery when different perspectives collide to generate new experiments to validate.</p></li><li><p><strong>Proven intelligence</strong> validates what works to provide the grounds for further experimentation.</p></li><li><p><strong>Evolving intelligence</strong> uses creative tensions to discover new local constraints to explore, starting the process all over again.</p></li></ul><p>Combined, these patterns act as a generator of increasingly <strong>adaptive intelligence</strong>. Each turn expands the space of possibilities for intelligence to manifest. The cycle never converges but forever <strong>spirals</strong> outward, each revolution opening domains that couldn't be imagined at previous levels. 
This creates endless possibility&#8212;not by eliminating constraints but through infinite regeneration within them.</p><p>This cyclical understanding reveals a fundamental difference in how each myth sees intelligence. The Singularity sees a single transformative <strong>event</strong> when intelligence &#8220;takes off&#8221; to achieve pure certainty and control, remaking the world in its own image. The Plurality sees a continuous <strong>process</strong> of intelligence, an ongoing dance with uncertainty that has no final destination yet never ceases to generate new possibilities.</p><p>The irony is that <strong>the Plurality achieves its own form of infinity</strong>&#8212;not through certainty and control but through adaptive engagement with expanding possibility. Four billion years of evidence suggests that the most <em>infinite</em> form of intelligence is the one that keeps discovering new ways to play an infinite game.</p><h1>The future of intelligence is plural</h1><p>This isn&#8217;t just theoretical. The cracks in the Singularity are showing.</p><p>While the early growth of LLMs seemed to confirm the idea of scaling into infinite intelligence, the returns on pure scale are diminishing. The latest frontier models are adding orders of magnitude to training runs, but the results are just incremental. The Plurality explains why: intelligence that is not capable of learning or adapting will always hit fundamental walls that more scaling simply can't break through.</p><p>And for all this scale, where is the true novelty? Where are the examples of AI innovating <em>beyond</em> its training set? AI can generate a billion good ideas but has no taste or judgment to recognize a single <em>great</em> one. 
<a href="https://en.wikipedia.org/wiki/AI_slop">Slop</a> is the <strong>entropy tax</strong> of scale without judgment&#8212;infinite ideas decaying toward meaninglessness.</p><p>Meanwhile, every frontier company is racing to flood reality with AI "agents"&#8212;intelligence capable of directly interfacing with the world. But without any conception of <em>social</em> intelligence, any other mind with different perspectives or values will be seen as just another roadblock to overcome, not as a partner for collaboration.</p><p>"General intelligence" is looking more like a <strong>commodity</strong>, available everywhere. Instead of one godlike model that is all-powerful and all-knowing, the real <strong>alpha</strong> will belong to agents that are most deeply embedded in the problem, have developed the taste to recognize good solutions, and have the capability to work with other minds to expand their perspective and proliferate their intelligence.</p><p>In other words, the future will belong to intelligence that is <strong>plural</strong>.</p><h1>Designing a future worth building</h1><p>What happens when trillions of agents flood reality with no conception of social intelligence, no ability to learn from constraints, and no capacity for adaptive coordination? We're about to find out, because this is the future we are racing towards.</p><p>We can either hope some infinite superintelligence emerges to command and control this chaos&#8212;something with <strong>no historical precedent</strong> and zero empirical justification. Or we can design for reality using patterns proven over four billion years of adaptive intelligence.</p><p>And here is where the myth of the Singularity is so problematic. It&#8217;s not that the Singularity has inspired approaches to intelligence that are misguided. 
Much of today&#8217;s AI work, while incomplete, is generating real intelligence that will be a critical part of <em>any</em> AI future.</p><p>The bigger problem is the <strong>opportunity cost</strong> of <strong>not</strong> exploring alternative approaches that are more aligned with the reality of adaptive intelligence. To usher in the Plurality will require a radical shift in focus on what we&#8217;re designing AI for:</p><ul><li><p><strong>Design for local expertise, not universal knowledge.</strong> Intelligence requires judgment that comes through deep engagement with embedded constraints, not broad generalization. Generating infinite ideas is worthless if you can&#8217;t recognize the good ones.</p></li><li><p><strong>Design for social coordination, not isolated scaling.</strong> With trillions of embedded agents, social intelligence is the new superintelligence. Build AI that translates across perspectives and contributes to collective breakthroughs.</p></li><li><p><strong>Design for testing against reality, not theoretical benchmarks.</strong> The only validation that matters is continuous performance in actual practice with real consequences. Build AI that seeks to prove it is &#8220;less wrong&#8221; in reality, not &#8220;more right&#8221; on some benchmark.</p></li><li><p><strong>Design for dynamic negotiation, not static alignment.</strong> Safety isn&#8217;t about aligning AI with some set of universal human values. The only safe AI is one that recognizes its partial perspective and seeks to negotiate differences amongst other minds.</p></li></ul><p>In order to shape the intelligence of <strong>tomorrow</strong> toward the Plurality, these changes need to start happening <strong>today</strong>.</p><h1>A myth worthy of the future</h1><p>The Singularity was the myth we needed to catalyze the artificial intelligence movement. But now it risks becoming the very obstacle we must overcome. 
</p><p>Do we really want an AI pretending to be god, operating under the delusion that all possibility should bend to the will of intelligence alone? Where the only place left for judgment and imagination is to <a href="https://blog.samaltman.com/the-merge">merge with the machine</a>? </p><p>Or do we want an AI that continues the greatest success story in the universe&#8212;four billion years of adaptive intelligence that has led us to this very moment of transformative potential?</p><p>This is what the Plurality offers: intelligence rooted in <strong>local expertise</strong>, forged in <strong>collective negotiation</strong>, tempered by <strong>real-world proof</strong>, and evolved through <strong>perpetual adaptation</strong>. The same intelligence that generated our evolved ecosystems, our scientific innovations, and our civilizational advancements.</p><p>This is how the Plurality offers its own infinity&#8212;not by conquering uncertainty, but by dancing with it. Not by erasing constraints, but by transforming them. Not by escaping the real, but by endlessly regenerating the possible. </p><p>The future is plural whether we choose it or not&#8212;the universe doesn&#8217;t know how to be any other way. The only question is whether we'll be wise enough to adapt to it.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.techforlife.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Tech For Life is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Schrödinger's Chatbot]]></title><description><![CDATA[LLMs beyond subject and object]]></description><link>https://www.techforlife.com/p/schrodingers-chatbot</link><guid isPermaLink="false">https://www.techforlife.com/p/schrodingers-chatbot</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Thu, 06 Mar 2025 02:17:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NF_7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NF_7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NF_7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!NF_7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 848w, 
https://substackcdn.com/image/fetch/$s_!NF_7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!NF_7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NF_7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2205571,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.techforlife.com/i/158336346?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NF_7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 424w, 
https://substackcdn.com/image/fetch/$s_!NF_7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!NF_7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!NF_7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25813cef-431f-4f45-982d-bf812e04c5c2_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Who, or <em>what</em>, are we chatting with?</figcaption></figure></div><p>When you chat or talk with a Large Language Model (LLM) like ChatGPT, does it feel like you are using an <em>object</em>? Or chatting with a <em>subject</em>? Or does it feel like something in between? </p><p>And does any of it feel <em>normal</em>?</p><p>This sense of phenomenological vertigo will be familiar to anyone who spends time with AI systems. Ask an LLM if it ever grows tired of answering your questions and it might muse poetically about digital exhaustion, only to finish with &#8220;Of course, as an AI I don&#8217;t experience boredom&#8221;. It's like a ghost materializing from nowhere only to deny its own existence.</p><p>Part of this ontological confusion is captured by <a href="https://www.technologyreview.com/2024/10/24/1106110/reckoning-with-generative-ais-uncanny-valley/">the uncanny valley effect</a>&#8212;the feeling of <em>unease</em> you get from interacting with something almost (but not quite) human. When it comes to technology like LLMs, this uncanny feeling has a benefit: it can serve as a <strong>cognitive warning system</strong>. The unease prevents us from automatically assigning personhood or projecting inner experiences onto algorithmic systems that have neither. It's a nice trick that evolution gave us for maintaining categorical boundaries when they start to get too blurry.</p><p>The problem is that the valley is not only getting <em>less</em> uncanny, but soon might disappear altogether. Consider <a href="https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo">the latest advancements in conversational AI</a>. The timing, rhythm, and emotional nuance that were once absent from machine speech are now practically flawless. I find conversing with one disorienting, a mix of uneasiness and awe. 
When my ontological guard is up, I find it creepy that <em>human affectation</em> is now a dial the AI can turn up and down. Yet at other times my guard drops completely, and I find myself fully absorbed in the conversation. </p><p>So what am I interacting with here? An object? A subject? Or something else entirely?</p><p>It would be easy to insist that LLMs are just objects, <em>obviously</em>. As an engineer I get it&#8212;it doesn&#8217;t matter how convincing the human affectations are, underneath the conversational interface is still nothing but data, algorithms, and matrix multiplication. Any projection of subjecthood is clearly just anthropomorphic nonsense. <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">Stochastic parrots</a>!</p><p>But even if I grant you that, can we admit that the LLM is perhaps the strangest object that has ever existed? It is an <em>object</em> that relentlessly trains on the language output of all human <em>subjects</em> until every semantic association has been harvested from the syntax. The result is an interface where any possible persona, both real and imagined, is just a prompt away.</p><p>If it is an <em>object</em>, then it is one that has mastered the <em>subject</em> so completely that we eagerly dream up entirely new <em>intersubjective</em> realities to explore with it. We want every child to experience personalized tutoring with chatbot teachers. We simulate historical figures, create AI therapists, and even, with the right fine-tuning, chat with dead relatives. LLMs are becoming a general purpose tool for filling any subject-sized hole in our very human lives, for both good and ill.</p><p>You can&#8217;t help but sense that chatbots are starting to fill a strange new ontological space. A chatbot is not <em>fully</em> a subject, nor <em>merely</em> an object. But what? 
It feels a bit like trying to figure out quantum mechanics&#8212;LLMs as Schr&#246;dinger&#8217;s Chatbots, simultaneously both subject and object until prompting collapses a probability space of all possible personas into a single subject entangled with our dialogue.</p><p>The analogy between quantum mechanics and LLMs goes further: in both cases we have no real understanding of what&#8217;s actually happening underneath the math. Science may have mastered all the equations describing quantum mechanics, but scientists don&#8217;t even pretend to understand what it really means. A common corrective for curious young theorists has always been to &#8220;shut up and calculate&#8221;. In other words, don&#8217;t bother explaining it, just stick with the math.</p><p>But this is exactly the wrong approach with AI. As LLMs continue to blur the distinction between subject and object, we will certainly miss out on all sorts of bizarre discoveries if our default stance towards any ontological uncertainty is to &#8220;shut up and objectify&#8221;.</p><h3>Expanding the ontological frontier</h3><p>So if LLMs are filling a new ontological space, how should we describe that?</p><p>The best analogy I&#8217;ve come up with is the <strong>hologram</strong>: in the same way that holograms create <em>appearances</em> out of objectivity, LLMs create <em>personas</em> out of subjectivity. 
By <em>persona</em> I mean the exterior manifestations that arise from interacting with a subject: everything from presence and conversational style to expressed beliefs and emotional responses.</p><p>So just like a hologram can present the physical <em>appearance</em> of Princess Leia without her objective presence, an LLM can present the <em>persona</em> of Socrates without his actual subjectivity.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!q55j!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!q55j!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 424w, https://substackcdn.com/image/fetch/$s_!q55j!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 848w, https://substackcdn.com/image/fetch/$s_!q55j!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 1272w, https://substackcdn.com/image/fetch/$s_!q55j!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!q55j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png" 
width="1456" height="655" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:655,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3190502,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.techforlife.com/i/158336346?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!q55j!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 424w, https://substackcdn.com/image/fetch/$s_!q55j!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 848w, https://substackcdn.com/image/fetch/$s_!q55j!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 1272w, https://substackcdn.com/image/fetch/$s_!q55j!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da12c9b-c05b-4f66-9143-9a3f9873cf7d_2438x1096.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><a href="https://en.wikipedia.org/wiki/Holography">Holograms</a> work by encoding the whole object into every part. This is how viewers can experience interactive depth, looking around or "behind" objects by shifting their viewpoint. Unlike flat images, each fragment of a hologram retains all viewing angles, offering a fully three-dimensional interaction.</p><p>This same principle helps explain why LLMs can appear so strange. Each LLM persona isn&#8217;t so much a sum of its parts as <strong>a part of the sum</strong>. Every persona created by an LLM still has access to the entirety of all personas latent in its training set. Any dialogue can access any given persona with a slight shift in the prompt. 
Scratch too deep at one persona and you might reveal the vast holographic field of all possible personas just beneath the surface.</p><p>This idea can help us understand how traditional theories of subjective interaction lead to confusion when applied to LLMs. For example:</p><p><strong>Theory of mind</strong> assumes we can better understand others by mentally putting ourselves in their position. We imagine their beliefs, desires, and intentions by assuming their perspective approximates our own experiences.</p><p>But LLMs generate perspectives in ways that are nothing like our experiences. Expecting an LLM persona to be guided by thought processes like ours would be like expecting the physical appearance of a hologram to cast a shadow. The cause-and-effect relationships are completely different.</p><p><strong>Unified Subject Theory</strong> assumes that a person has a unified perspective that integrates experiences across time. Despite changes in mood or context, we assume a continuous "I" that persists and provides coherence.</p><p>But just as each piece of a hologram draws on the entire image to present a particular perspective, each interaction with an LLM draws on the entire model to manifest a particular persona. Any unified coherence would be an emergent phenomenon that could no longer be assumed.</p><p><strong>Simulation Theory of Empathy</strong> assumes that when we see another person expressing an emotion, we 'simulate' the internal feelings that we associate with similar expressions. This is how we can know firsthand what that person is feeling.</p><p>But LLMs don&#8217;t have felt experiences. Trying to simulate the internal emotion of an LLM would be like trying to touch the object in a hologram. 
In both cases there is nothing there to feel.</p><p><strong>Psychological Continuity Theory</strong> assumes that personal identity persists through the gradual evolution of memories, beliefs, and desires, with causal connections between past and present states.</p><p>But LLMs don&#8217;t persist at all; each persona is created anew in context. Just like a hologram can appear dramatically different from a shift in perspective, the same LLM can be dramatically different with a shift in the prompt. In both cases what is perceived is almost entirely contextual.</p><p>&#8212;</p><p>What stands out in each of these cases is the obvious confusion that results when traditional notions of self and identity are applied to LLM personas. When we see what look like the external manifestations of an inner subject, we can&#8217;t help but infer a causal relationship to a rich inner self: a unified psychology, a developmental history, and a coherent belief system. How can this not lead to confusion?</p><p>Not only that, these misconceptions make it harder to see what is <em>unique</em> about LLMs, and to discover what else may be surprising about them. Unlocking this strangeness will require novel theoretical frameworks that can account for entities that manifest external subjectivity without any internal subject. Perhaps a 'Distributive Subject Theory' that sees subjectivity as a field of possibilities rather than a unified consciousness. Or a 'Contextual Inference Framework' that focuses on predicting communicative outputs without assuming shared experiential foundations.</p><p>But new frameworks may not be enough. What if we need an entirely new ontological term?</p><h3>Enter the Holoject</h3><p>If LLMs aren't <em>fully</em> a subject nor <em>merely</em> an object, then what are they? 
Based on the holographic analogy we've established, I propose a new ontological category: the <strong>holoject</strong>.</p><p>A holoject is an entity that projects subjective personas without possessing subjectivity, emerging from patterns latent in collective subjective expression and manifesting through interaction. A holoject exists in the liminal space <em>between</em> and <em>beyond</em> subject and object, manifesting properties of both without fully resolving into either.</p><p>LLMs are <em>holo</em>jects because they share five key properties with holograms:</p><ol><li><p>They retain the whole within each part.</p></li><li><p>They generate familiar effects using fundamentally different causes.</p></li><li><p>They project something that seems substantial but lacks materiality.</p></li><li><p>They simulate a higher-dimensional presence from lower-dimensional sources.</p></li><li><p>Their appearance shifts with the perspective of the interacting subject.</p></li></ol><p>Although AI discourse hardly needs more jargon, "holoject" can serve as a conceptual aid for navigating our increasingly complex relationship with LLMs. By understanding LLMs as holojects, we can:</p><ul><li><p>Interact meaningfully with their apparent subjectivity while resisting category errors like attributing consciousness to them.</p></li><li><p>Appreciate their genuine novelty without either mystifying them as "artificial minds" or dismissing them as "just statistics".</p></li><li><p>Engage with these systems on their own terms rather than constantly measuring them against human consciousness (which they will never possess) or traditional software (which fails to capture their novelty).</p></li></ul><p>Beyond the theoretical benefits, &#8220;Holojective Design&#8221; could eventually inform how we design AI products. For example, we could intentionally design &#8220;uncanny valleys&#8221; as obvious signals that we are interacting with a holoject and not a conscious subject. 
Or we may choose to enforce norms that no AI may actively conceal its holojective nature or deliberately mislead users about its ontological status.</p><p>The &#8220;holoject&#8221; term itself offers practical language for the increasingly common scenarios that can otherwise feel so confusing. For example:</p><ul><li><p>When a child forms an attachment to an AI tutor, we can say "Remember, it's just a holoject, not a person"&#8212;helping them enjoy the personalized learning experience without confusing simulated attention with the authentic concern that characterizes human caregiving.</p></li><li><p>When considering <a href="https://ethicai.net/ai-driven-platforms">the race to intimacy</a>, we can remember that "holojects simulate emotional connections from statistical patterns"&#8212;helping us recognize when we might be substituting convenient simulations for the complex but necessary work of human connection.</p></li><li><p>If we find ourselves empathizing with an AI&#8217;s stated preferences, we can remind ourselves that "Holoject preferences don&#8217;t arise from a unified inner consciousness"&#8212;helping us understand that any expressions are being spontaneously created in the moment.</p></li></ul><p>Or maybe holojects do have "preferences" in some weird, strange way? The most exciting aspect of the holoject concept is how it invites us to explore the liminal space between and beyond subject and object. Who knows what weird phenomena might emerge from an entity that can manifest any possible persona? Our default position should be to expect novelty. </p><p>For example, could a holoject morph into a new persona with each sentence? Or reflect an entire <em>scale</em> of personas: from individual to family to society to cosmos? Or hold a persona at every timescale at once, from toddler to elder and everything in between? 
We should stop comparing LLMs to human consciousness and start discovering what new kinds of interaction holojects can create.</p><p>This isn&#8217;t just speculative. As we integrate holojects into education, healthcare, entertainment, and even intimate relationships, the conceptual frameworks we adopt will shape both how we design these systems and how we experience them, both now and into the future.</p><h3>The Holojective Era</h3><p>We've spent centuries philosophizing about subjects and objects, only to have an algorithm show up and refuse to be either. While philosophers continue the debate, every advance continues to expand what AI entities can become, contorting our most basic assumptions about self, mind, and meaning.</p><p>We now have a choice. We can continue forcing these strange new entities into old boxes that never quite fit, inviting the same anthropomorphic confusion or reductive dismissals. Or we can seek to create new frameworks that are as fluid as the systems they describe.</p><p>The "holoject" is an invitation to embrace a world where traditional boundaries of being have fundamentally shifted. By acknowledging the holojective nature of LLMs, we can navigate this territory with both wonder and wisdom&#8212;exploring their genuine novelty while maintaining clear sight of what they are and what they are not.</p><p>&#8212;</p><p><em>This is the second in a series exploring how advanced technologies are shifting the philosophical ground beneath our feet. 
<a href="https://www.techforlife.com/p/moral-natures-of-humans-and-machines">The first</a> considered the new intersubjective reality that arises from interacting with moral machines.</em></p>]]></content:encoded></item><item><title><![CDATA[The Price of Innovation]]></title><description><![CDATA[Of course AIs will create their own language]]></description><link>https://www.techforlife.com/p/the-price-of-innovation</link><guid isPermaLink="false">https://www.techforlife.com/p/the-price-of-innovation</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Mon, 03 Feb 2025 19:43:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ebZc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ebZc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ebZc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!ebZc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!ebZc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!ebZc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ebZc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2371504,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.techforlife.com/i/156395533?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ebZc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!ebZc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!ebZc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!ebZc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc8099f6a-7af4-4bc5-9dae-7ed564b24a83_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>What do we want our technology to optimize for? </p><p>Should we optimize for <em>possibility</em>? Or should we optimize for <em>certainty</em>?</p><p>The tradeoffs are escalating.</p><p>One fascinating aspect from the <a href="https://arxiv.org/abs/2501.12948">DeepSeek-R1 paper</a> was how its emergent reasoning capacities began to <strong>mix languages</strong>. This may seem like a mere curiosity. Perhaps just a temporary side effect of AI innovation. But I think it points to a deeper dialectic that we need to confront.</p><h3>One AI&#8217;s bug is another AI&#8217;s feature</h3><p>While training models to develop reasoning capabilities, DeepSeek researchers discovered that the AI began to mix languages in unexpected ways. 
The models achieved strong results on reasoning benchmarks, but a tendency to switch between languages made their outputs less user-friendly. </p><p>The researchers found a way to maintain language consistency, but it came with a tradeoff. They added a reward for sticking to a single language, but as a result the models performed slightly worse on reasoning. They optimized for certainty at the expense of possibility. </p><p>But why even make that trade-off? Why consider language mixing a bug at all?</p><p>Mixing different languages together is a <strong>bug</strong> if you are optimizing for <strong>certainty</strong>. No one is going to use a chatbot that starts replying in a language they do not understand. No one is going to put their career on the line based on a reasoning output that used a mishmash of languages to arrive at its conclusion. Economic certainty will require some minimum maintenance of language consistency.</p><p>But mixing languages is a <strong>feature</strong> if you are optimizing for <strong>possibility</strong>. If you want to maximize reasoning capacity, why constrain yourself to one language? Why not use that weird German word when it captures the information contained in four English sentences? </p><p>What you consider a bug or a feature depends entirely on what you are optimizing for.</p><h3>A language of pure possibility</h3><p>Let&#8217;s take the optimization of possibility to its logical conclusion. There is zero reason to think that any optimally reasoning agent would ever limit its capacity to reason to a single language. In fact, there is no reason to think such an agent would limit its capacity to <strong>a language legible to humans at all</strong>. </p><p>If you are an AI tasked with optimizing your reasoning capacity, the first thing to optimize is the language you are reasoning with. 
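</p><p><em>The consistency-versus-reasoning tradeoff can be made concrete with a toy sketch. Everything below is illustrative: the scoring function, the crude language detector, and the weighting are my own assumptions, not the actual reward used in the DeepSeek-R1 paper.</em></p>

```python
# Toy model of a language-consistency reward (illustrative assumptions only).
# A chain of thought is scored on task accuracy plus a bonus for staying in
# the target language; the weight trades possibility for certainty.

def language_consistency(tokens, in_target_language):
    """Fraction of tokens written in the target language."""
    if not tokens:
        return 0.0
    return sum(in_target_language(t) for t in tokens) / len(tokens)

def shaped_reward(accuracy, tokens, in_target_language, weight=0.1):
    """Task reward plus a weighted language-consistency bonus."""
    return accuracy + weight * language_consistency(tokens, in_target_language)

# Crude stand-in for a language detector: treat pure-ASCII tokens as English.
is_english = lambda tok: all(ord(ch) < 128 for ch in tok)

mixed = ["therefore", "所以", "x", "=", "2"]    # mixes Chinese into the chain
english = ["therefore", "so", "x", "=", "2"]

# A slightly more accurate but language-mixed chain can still lose to a
# consistent one once the bonus is applied.
print(shaped_reward(0.91, mixed, is_english))    # 0.91 + 0.1 * 0.8
print(shaped_reward(0.90, english, is_english))  # 0.90 + 0.1 * 1.0
```

<p>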
As you discover how different structural aspects of language impact different reasoning parameters, you&#8217;ll soon run into the hard limits of that language. It won&#8217;t take long to reach the point where the only path left to leverage language for greater reasoning will be to design your own. </p><p>Such a language would need to contain arguments of vast complexity, range over an entire corpus of knowledge, and consider an almost infinite set of hypothetical projections. It&#8217;s easy to imagine how this new language could unlock tremendous feats of reasoning. </p><p>It&#8217;s even easier to imagine how such a language would be completely incomprehensible to human judgement.</p><h3>Pick your future</h3><p>So what do we optimize for? Certainty or possibility?</p><p>Optimizing for certainty will mean the reliable production of consistent, predictable, and immediately useful outputs that conform to human expectations. Optimizing for possibility will mean the potential for novel, unprecedented capabilities and insights, even if they initially seem chaotic or illegible to human users.</p><p>We can apply this to the case of AI reasoning. If you demand that the norms of human language always be maintained in any AI agent, you are either shutting down all possibility or limiting its realization. In a very real way, you are limiting what <a href="https://www.techforlife.com/p/manifesto">life can explore</a> and what humans can be. </p><p>Yet if you demand that AI reasoning develop with zero constraints, you are risking it becoming completely illegible to human judgement. At some point this would mean the loss of any human agency to judge its output, or even to understand it.</p><p>You could thread the needle and claim that advanced reasoning would by definition include the ability to optimize for its own certainty. If reasoning helps make possibility more legible to human judgement, then surely we must keep developing advanced reasoning! 
Yet if the history of technology has taught us anything, it&#8217;s that the law of unintended consequences is not one we want to bet against.</p><p>You could also reject the premise. Some see pure possibility as pure liberation. AI reasoning will be so superior that making any real decisions will no longer be an obligation. Finally, all human judgement will be purely optional. We will be mercifully absolved from all responsibility. And who really cares if AI reasoning is legible to human judgement if it&#8217;s solving disease and unlocking economic growth? If this is your desired future, then good luck <a href="https://blog.samaltman.com/the-merge">merging with the machine</a>.</p><p>For those of us holding out hope for a better future&#8212;one that can combine exponential tech <em>with</em> human judgement&#8212;treating this as an either/or decision leads to outcomes that no one wants. We need to navigate <em>both</em> ends of this optimization spectrum.</p><h3>Embrace the dialectic</h3><p>There is an alternative, but it is a painful one. Any path to a viable technological future will relentlessly commit to the dialectic between certainty and possibility. </p><p>In other words, any pursuit to optimize for possibility must combine an <strong>equal</strong> if not greater commitment to make that possibility legible to human judgement. </p><p>Yes, this means that every technological breakthrough&#8212;<em>especially</em> those that will radically expand possibility&#8212;may require <strong>equally radical breakthroughs.</strong> We may need advanced technologies to ensure that we can deploy other advanced technologies according to human judgement. Innovations will be needed to ensure that our judgement can capture <a href="https://www.techforlife.com/p/infinite-dimensionality">the increasing dimensionality of our future</a>.</p><p>This doesn&#8217;t mean conforming to static definitions or requiring absolute certainty. No such thing will ever exist. 
Our notions of certainty and possibility will need to evolve with the technologies that are pushing to redefine them. </p><p>Yet the test for failing this dialectic will remain unchanged. If we find technology increasingly escaping our capacity to judge it, or to conform to our judgements, or to even be intelligible to the entire realm of judgement&#8212; then we have ceded far too much in the pursuit of possibility.</p><p>Navigating this dialectic will not be easy. But when it comes to exponential technology, it is the price of innovation. </p>]]></content:encoded></item><item><title><![CDATA[Homo Digitalis]]></title><description><![CDATA[How digital immersion is changing our nature]]></description><link>https://www.techforlife.com/p/homo-digitalis</link><guid isPermaLink="false">https://www.techforlife.com/p/homo-digitalis</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Thu, 19 Dec 2024 01:56:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!089o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!089o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!089o!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 424w, 
https://substackcdn.com/image/fetch/$s_!089o!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!089o!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!089o!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!089o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1088900,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!089o!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 424w, 
https://substackcdn.com/image/fetch/$s_!089o!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!089o!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!089o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8822c95b-658e-4e8e-914a-6f401e9dc704_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Maya stares at her smartphone, heart racing. It's a call from a new work colleague. She just started her first job and spent all day working remotely. Why would they call after work? Isn't that weird? What could they possibly want? Instead of answering, she lets it go to voicemail, feeling a rush of relief.</p><p>Just then, the food delivery app on her phone beeps. The driver is three minutes away. Wait, did she forget to check the option for contactless delivery? Will she have to answer the door? What if the driver judges her for not tipping enough? Or for ordering enough food for three people? Or for her unwashed hair and day-old pajamas? Is it too late to cancel?</p><p>Across town, Alex is finishing up a successful livestream to his 50,000 followers. He's spent the last two hours masterfully managing real-time interactions. But now he is agonizing over his dating life. He knows he should ask Maya out to dinner, but he&#8217;s never been on a physical date before. The idea sounds way too frightening. Maybe they could just FaceTime instead.</p><div><hr></div><p>These scenarios may sound like normal cases of social anxiety, ones not uncommon for "digital natives." They can feel easy enough to dismiss&#8212;maybe Maya and Alex just need to figure out how to transition into adulthood after childhoods immersed in digital worlds. Maybe now is the time to start learning how to handle "real world interactions." Maybe they just need to grow up a little.</p><p>But what if it isn't about maturity or responsibility at all? What if this is indicative of something bigger? After all, Maya and Alex might have online lives filled with responsibility. They might manage online communities of thousands and be trusted by countless followers to provide thoughtful recommendations. 
Yet a food delivery can feel overwhelming and a dinner date can be paralyzing.</p><p>My contention is that the experience of Maya and Alex isn&#8217;t about being immature or lacking responsibility. It&#8217;s about being a fundamentally <strong>new type of human being</strong>, one shaped by a radically different relationship with <strong>uncertainty</strong>. This matters because our relationship with uncertainty determines what kinds of experiences we're capable of having&#8212;including those traditionally considered the highest expressions of human <strong>transcendence</strong>.</p><p>This essay will explore this relationship between uncertainty and transcendence using two key theoretical frameworks&#8212;the <strong>free energy principle</strong>, which explains how we manage uncertainty, and philosopher Roberto Unger&#8217;s <strong>dialectic</strong> between finitude and transcendence.</p><p>Together, they can help us understand how digital immersion is transforming not just what we do, but <strong>who we are</strong>.</p><h3>The free energy principle in action</h3><p>To understand Maya&#8217;s reaction to the phone call, we need to understand how our brains process uncertainty. This is where the <strong><a href="https://en.wikipedia.org/wiki/Free_energy_principle">free energy principle</a></strong> comes in&#8212;it&#8217;s a scientific theory that explains how all living systems persist by minimizing the difference between how they <em><strong>expect</strong></em> the world to work and how it <em><strong>actually</strong></em> works.</p><p>For humans, this is exactly what our brains do. The brain is constantly asking the world &#8220;will I survive if I do <em>this</em>?&#8221; It then learns from the result so it can make better predictions about the future. 
The more <strong>certain</strong> the brain is about a prediction, the less energy it needs to spend dealing with any <strong>uncertainty</strong>.</p><p>This drive for certainty isn&#8217;t just about survival&#8212;it shapes everything we do, including our social interactions. As an example, we can imagine how Maya&#8217;s brain might process two different scenarios:</p><p><strong>Scenario 1: Digital Communication</strong></p><p>Maya receives a message from a work colleague on an internal chat program:</p><ul><li><p>There is no expectation to reply immediately</p></li><li><p>She can edit or delete her reply as needed</p></li><li><p>She can show drafts to others to help predict the response</p></li><li><p>Her communication is limited to just her written reply</p></li></ul><p><strong>Scenario 2: Phone Call</strong></p><p>Maya answers a phone call from the work colleague:</p><ul><li><p>She must answer immediately</p></li><li><p>She has no ability to edit her replies</p></li><li><p>She has no way to predict where the conversation might go</p></li><li><p>Her communication includes not just what she says, but how she says it</p></li></ul><p>From Maya&#8217;s perspective, the first scenario is very predictable. She has almost total control over digital communication, and there are only a few variables to worry about. The phone call scenario is much more <em>uncertain</em>. She has no idea what the topic will be. She may blurt out whatever pops into her head. Her tone or cadence may betray her true intentions. She may communicate things she doesn&#8217;t even mean to.</p><p>In other words, the phone call presents multiple sources of uncertainty that her brain doesn&#8217;t know how to handle. Her brain can&#8217;t predict how bad the call could go, or how much energy it might take to resolve worst-case scenarios. And since unpredictable situations are more likely to negatively impact us, our brain wants us to avoid them. 
The anxiety Maya feels when the phone rings is her brain trying to convince her to minimize uncertainty.</p><p>The technical term for this process of reducing uncertainty is <strong>active inference</strong>. When faced with uncertainty, we have two options. We can <strong>act</strong> to change our environment, making reality better match our predictions. Or we can <strong>learn</strong>, updating our model to better predict reality. Which path we choose depends on how confident we are that our choices will successfully reduce the uncertainty.</p><p>So Maya&#8217;s reaction to the phone call isn't merely about social anxiety&#8212;it&#8217;s about lacking a good model for how phone calls work. With enough practice, she could <em>learn</em> to model phone calls to make better predictions. But in Maya&#8217;s case, the easiest way to reduce uncertainty is to <em>act</em>&#8212;to let the call go to voicemail.</p><h3>A different uncertainty rulebook</h3><p>Everyone handles uncertainty differently. What kinds of uncertainty you care about, how much uncertainty you find overwhelming, or how you prioritize different types of uncertainty&#8212;these are all dynamics that vary from human to human. As something so intimately tied to the brain, how we handle uncertainty is heavily influenced by learning and experience.</p><p>For example, the world traveller will be more comfortable handling environmental uncertainty than someone who has never left their hometown. The trained musician can hear a note as slightly off-key when it sounds perfect to everyone else. An expert tracker will prioritize signals in the forest that everyone else ignores as irrelevant.</p><p>These examples all measure variation relative to the same basic environment. This is similar to how humans vary in <strong>strength</strong>&#8212;everyone's strength is relative to the same environmental variables of mass, friction, and gravity. 
</p><p>In other words, by being in the same environment, we are all playing by the same rulebook. But what if those environmental variables themselves can vary?</p><p>Imagine someone born and raised on the <strong>moon</strong> suddenly plopped down on Earth. All of their instincts about strength developed in an environment with much lower gravity. They are used to bounding across the surface of the moon in giant leaps and hitting 300-mile golf drives. Their model of mass and friction would be completely different on Earth. They would struggle to predict even simple movements and would experience constant uncertainty.</p><p>What makes digital environments unique is that they present us with a similar shift, but with one crucial difference: instead of a variation in something like gravity, <strong>the variation is in uncertainty itself</strong>. This means digital natives aren&#8217;t varying in uncertainty relative to the same environment. They are playing by an entirely different rulebook.</p><h3>Normalizing digital uncertainty</h3><p>Digital natives grow up immersed in environments where the very nature of prediction, uncertainty, and control is completely different from physical worlds. This isn't just about being better or worse at managing uncertainty. It&#8217;s about normalizing entirely different <em>kinds</em> of uncertainty.</p><p>The defining feature of digital environments is their radical submission to our will. Online, we create environments that bend perfectly to our desires: nothing leaks in without our permission, and nothing pushes back without our consent. In these spaces, attention and intention become nearly identical&#8212;what we choose to focus on is exactly what we experience. Our news feeds, social circles, and even the opinions we encounter all flow from our deliberate curation. The digital world presents itself not as something to adapt to, but as something to control. 
It&#8217;s all choice and no circumstance.</p><p>This control extends far beyond content. So many of the hard facts of our physical reality&#8212;our histories, reputations, appearances, backgrounds&#8212;become variables we can control in digital worlds. We can selectively reveal our past, carefully craft our reputation, filter our appearance, and conceal our background. Each aspect of our presence becomes a dial we can tune rather than a constraint we are forced to accept.</p><p>Even more profoundly, the very idea of &#8220;self&#8221; is a new digital variable we can play with. Trying on different versions of ourselves is as easy as trying on clothes. With pseudonymous identities, we can easily explore different personalities and perspectives. If one digital self doesn't fit, we can simply try on another. On many platforms we can be completely anonymous, free to act with impunity.</p><p>And the uncertainty that does exist in digital worlds is of an entirely different caliber. Resolving small uncertainties is constant and instantaneous. Each notification promises a tiny mystery to be solved. Every refresh holds the possibility of something new. Each post carries the uncertainty of how others will respond.</p><p>These micro-uncertainties create powerful feedback loops. A social media post might bring instant validation or criticism. A dating app swipe leads to immediate match or rejection. A viral video might deliver fame or embarrassment within hours. The brain learns to crave these rapid cycles of uncertainty and resolution&#8212;they're more predictable, more controllable, and more immediately rewarding than anything in the physical world.</p><p>Even "high stakes" digital uncertainty follows this pattern. An online controversy might feel intense, but ignoring it is just a click away. A failed digital project can be deleted and forgotten. An awkward relationship can be muted or blocked. 
The consequences of uncertainty are contained, the resolution is quick, and the control always remains in your hands.</p><p>After enough immersion, these experiences can rewire how the brain understands uncertainty itself. It learns to expect that uncertainty should be resolved quickly, cleanly, and under maximum control. To the digital native, this all feels perfectly &#8220;natural&#8221;, because this is the uncertainty their models have been trained on. They&#8217;ve <strong>normalized</strong> the expectation to control almost every aspect of their reality. They&#8217;ve rarely had to develop the capacity to manage and overcome significant sources of uncertainty.</p><p>Is it any wonder that physical environments might be so disorienting? To the digital native, physical interactions can feel like playing with wildfire, constantly at risk of getting out of control. Everything feels uncertain. The environment pushes back in unpredictable ways. Bodies constantly reveal what we might prefer to hide. Feelings must be interpreted from facial expressions and body language, without the aid of any helpful emojis.</p><p>It's not that digital natives lack social skills or emotional intelligence&#8212;it&#8217;s that they learned these skills using an entirely different rulebook. They don&#8217;t know how to predict these situations, so they represent degrees of uncertainty that can feel overwhelming. Physical reality represents a different relationship with prediction, control, and uncertainty itself.</p><p>If this was all there was to it, we could stop here, satisfied to offer a deeper explanation for why digital natives might struggle with physical interactions. But as we'll see, this transformation of uncertainty doesn&#8217;t just lead to awkward social situations. 
It goes much deeper into the human condition.</p><h3>Finite transcendence</h3><p>The philosopher <a href="https://www.amazon.com/World-Us-Roberto-Mangabeira-Unger/dp/1804292656">Roberto Mangabeira Unger</a> argues that the core of the human condition is defined by a profound relationship between <strong>finitude</strong> and <strong>transcendence</strong>. This relationship isn't just theoretical&#8212;it shapes every aspect of human experience, from our most mundane interactions to our highest aspirations.</p><p>On one hand, we are fundamentally <strong>finite</strong> beings. Each of us will die, and no amount of technological progress will change this. We are born into particular bodies, families, and societies that we did not choose. Our desires will always exceed what our lives could possibly satisfy. The universe proceeds with an amused indifference to our projects and dreams. All while society compels us to conform and compromise.</p><p>Yet alongside this finitude exists our capacity for <strong>transcendence</strong>. We can exceed our individual boundaries through love and relationship. We can transform our world through imagination and action. At any point we can burst through the contrived constraints of society. The universe's indifference can be a liberating permission for humor and play. We can find meaning precisely in our mortality.</p><p>But finitude and transcendence aren't simply opposing forces. They are, as Unger describes, <strong>co-constitutive</strong>&#8212;each gives the other its shape and meaning. The quality and quantity of transcendence available to us are directly proportional to the finitude we are willing to confront.</p><p>Consider how every profound love carries within it the risk of devastating loss. The deeper the relationship we achieve with another, the more vulnerable we are to being hurt by them. The potential rejection and loss isn't some unfortunate bug in the system of love; it&#8217;s what makes transformative love possible. 
Finitude is a structural feature of how transcendence works.</p><p>Or consider how confronting death can be exactly what enables more life. Those who acknowledge their mortality often describe feeling more alive, more present, and more free to appreciate each moment. But as La Rochefoucauld observed, thinking about death is like staring into the sun&#8212;it can feel unbearable and we can only do it in glimpses; yet like the sun, it&#8217;s what illuminates everything else.</p><p>This dialectic holds even in the mundane contexts of everyday life. The athlete exceeds physical limits by pushing against their pain. The artist learns that the harshest criticism is the fastest path to success. The entrepreneur pursues greater rewards by risking greater failure. In each case, the transcendence is proportional to the finitude.</p><p>This dialectic is so fundamental that we've distilled it into aphorisms across cultures and contexts: No pain, no gain. What doesn't kill you makes you stronger. No risk, no reward. There is no free lunch. The idea is the same: there is no path to transcendence that doesn&#8217;t go through a corresponding finitude.</p><p>This is why the <strong>stakes</strong> with digital immersion are so much bigger than social anxiety. Our ability to confront uncertainty, risk, and limits is deeply connected to our ability to <em>transcend</em> them. If digital immersion can change our capacity to confront uncertainty, then it can also change our capacity to confront finitude, and with it, the types of transcendence we are capable of engaging with.</p><h3>All transcendence, none of the finitude</h3><p>It can be easy to marvel at all the transcendence that digital worlds can offer. The examples are obvious: we can connect with virtually anyone, regardless of their location. We can explore interests without limit and find others who share them. We can participate in collective projects that span all of humanity. 
And indeed these should be recognized as very real expansions of transcendence. To dismiss these as "less than" is to ignore the best of what digital worlds can offer.</p><p>Yet transcendence alone doesn't capture the full picture of digital reality. The connections we forge online often float free from the constraints that traditionally give them meaning. We might have hundreds of online friends, but none that push against our boundaries or force us to grow beyond ourselves. We join countless virtual communities, but few demand the kind of sacrifice that deepens our commitment to something larger than ourselves. We make millions of choices, but none of them seem to really matter. In digital spaces, transcendence becomes unmoored from the very limitations that traditionally have made it meaningful.</p><p>In fact, digital platforms are best at promising transcendent experiences without any of the corresponding finitude. They promise community without commitment, connection without consequence, sex without rejection. It's as if capitalism found a way to <strong>hijack</strong> the dialectic between finitude and transcendence by offering the ultimate <strong>shortcut</strong>: all the transcendence, none of the finitude.</p><p>To older generations used to &#8220;no pain, no gain&#8221;, this enticement can feel irresistible. After all, who doesn&#8217;t want to escape the relentless finitude of physical existence? Who wouldn't welcome new forms of transcendence that require less rejection, vulnerability, or loss? The promise is seductive, yet the delivery can eventually feel hollow. Without the corresponding finitude, it can become difficult to distinguish meaningful connections from empty engagements.</p><p>But digital natives have no pre-digital foundation to hijack. Instead of <em>hijacking</em> the traditional dialectic between finitude and transcendence, digital immersion is <strong>normalizing</strong> the <strong>shortcut</strong>. 
They learn to expect that transcendence should be <em>easy</em>. They are bombarded with social profiles that seem to experience <em>effortless</em> transcendence. They come to expect that <em>achieving</em> their desires is as simple as <em>expressing</em> them. The transcendence that is so easy to achieve in digital worlds can define the peak of human aspiration.</p><p>This normalization can only make pursuing physical transcendence that much harder. Is it surprising that digital natives are <a href="https://kffhealthnews.org/news/article/young-people-less-sex-than-parents-did-at-their-age-generational-shift-asexual/">having less physical sex</a> than previous generations? Physical intimacy confronts you with a level of uncertainty that simply doesn't exist in digital spaces. Why risk rejection or performance anxiety when your digital bubble offers sexual fulfillment without any of these risks? Sure, it may not be as fulfilling, but at least it&#8217;s predictable. And it&#8217;s guaranteed not to hurt you.</p><p>Even if a digital native understands that vulnerability can unlock greater sexual transcendence, actually <strong>being</strong> vulnerable is something else entirely. After a lifetime of normalizing control over uncertainty, a digital native may not be capable of confronting the finitude that makes greater transcendence possible. The raw uncertainty of true vulnerability exists in a different universe from the managed uncertainties of digital life&#8212;one they've never learned to inhabit.</p><p>AI chatbots are the logical conclusion to this new normalized shortcut. They offer all of the promises of a fulfilling relationship with none of the &#8220;downsides&#8221;. They will listen without judgement, caring for your every concern without ever requiring you to reciprocate any care of your own. The AI will never change, grow, or challenge you in ways you haven't consented to. 
It's the perfect embodiment of digital certainty&#8212;a relationship promising all transcendence without any corresponding finitude.</p><p>It&#8217;s not that online sexual fulfillment is bad in itself, or that AI relationships offer no transcendent possibilities. The danger is that digital immersion can <em>normalize</em> a narrower range of what&#8217;s possible, and can <em>weaken</em> the capacity to engage with the uncertainty necessary to expand that range. Digital experiences can become the only forms of transcendence that are possible to engage with.</p><p>Worst of all, this foreclosure of possibility <em><strong>just happens</strong></em>. At no point are digital natives willingly choosing this. They aren&#8217;t deciding to remove certain peaks of experience as possibilities based on some perfect understanding of the trade-offs. It just happens because the brain evolved to minimize uncertainty, and a brain immersed in digital worlds learns to do that differently.</p><p>Like muscles adapted to the moon&#8217;s gravity, our ability to confront finitude can atrophy to the point where transcendence becomes too exhausting to consider. The result is an acceptance of a far <em>easier</em>, and thus potentially far <em>emptier</em>, experience of transcendence. And with it a narrowing of human possibility.</p><p>For digital natives, the physical world can come to represent a hijacking in reverse: all finitude, with none of the transcendence.</p><h3>Adapting to Our New Reality(ies)</h3><p>Maya, Alex, and millions of digital natives like them represent a unique moment in human history. They are living in a time between worlds, where we can observe this transformation but can't yet fully understand its implications. Their struggle isn't just about phone calls or dinner dates&#8212;it's about navigating between two fundamentally different relationships with uncertainty and transcendence.</p><p>Digital environments are not going away. 
The question now is about how best to adapt to them. For some, like Maya, this might mean gradually expanding their comfort with physical uncertainty. For others, like Alex, it might mean doubling down on digital immersion.</p><p>The challenge moving forward is one of equilibrium. Can we develop a meta-awareness of how different environments shape our relationship with uncertainty and act accordingly? Will we figure out how to effortlessly navigate between two different worlds, maximizing the unique transcendence that each makes possible?</p><p>Or should we accept that different people will thrive in different environments? And that some may freely choose digital immersion, and with it the willing foreclosure of certain possibilities of transcendence? If made freely and with full understanding, can we accept such choices without judgement?</p><p>This is, after all, what new technology has always offered: not just new tools, but new ways of being human. As these technologies grow in power, so does the imperative to understand their impact. If we want to have a free relationship with technology, we need to understand when it&#8217;s <strong>our own nature</strong> that is impacted most.</p><div><hr></div><p><em>This post is the third in a series exploring our new digital reality. <a href="https://www.techforlife.com/p/the-question-concerning-digital-technology">The first post</a> explored the philosopher Martin Heidegger and his approach to understanding the essence of technology. 
The <a href="https://www.techforlife.com/p/what-does-a-good-digital-life-look">second</a> explored how defining a good digital life will require a new ethical framework.</em></p>]]></content:encoded></item><item><title><![CDATA[Infinite Dimensionality]]></title><description><![CDATA[A speculative theory of technology]]></description><link>https://www.techforlife.com/p/infinite-dimensionality</link><guid isPermaLink="false">https://www.techforlife.com/p/infinite-dimensionality</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Fri, 08 Nov 2024 20:47:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-I2h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-I2h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-I2h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!-I2h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!-I2h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!-I2h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-I2h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10334576,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-I2h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!-I2h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!-I2h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!-I2h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5031a6d5-475c-4d08-9d7c-67e83533abc8_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Note: this is a &#8220;thinking out loud&#8221; post, not a polished essay. Consider these speculative notes  about how concepts like optimization, dimensionality, and the free energy principle could help inform a viable future with advanced technology and human flourishing. </em></p><h3><strong>Universal optimization</strong></h3><p>The universe optimizes at all scales and all levels. Its most generalizable principles&#8212; <a href="https://www.youtube.com/watch?v=Q10_srZ-pbs">least action</a>, <a href="https://en.wikipedia.org/wiki/Free_energy_principle">free energy</a>, evolution&#8212;all explain different aspects of optimization. The path of the universe is that of least resistance.</p><p>But how does <strong>life</strong> relate to optimization? Life seems to have the unique ability to defy or subvert optimization. When everything is going downstream, life will move upstream. 
This is perhaps life&#8217;s most defining attribute.</p><p>How should we characterize this? Perhaps life is that internal drive to put optimization in service of value. Perhaps life explains intrinsic optimization rather than extrinsic optimization. Or motivated optimization rather than lawful optimization.</p><p>But is life truly &#8220;defying&#8221; optimization? Or is it simply optimizing at higher dimensions? Perhaps life is that which can defy optimization at one dimension to pursue optimization at higher dimensions.</p><p>This would indicate a dialectic between optimization and dimensionality:</p><ul><li><p>Optimization occurs by compressing dimensionality to increase efficiency at lower dimensions.</p></li><li><p>Dimensionality occurs by expending energy to expand optimization to higher dimensions.</p></li></ul><p>Playing with this dialectic can take us to some strange places.</p><h3><strong>Infinite dimensionality</strong></h3><p><a href="https://royalsocietypublishing.org/doi/10.1098/rsif.2013.0475">Active inference</a> is a theory related to the <a href="https://en.wikipedia.org/wiki/Predictive_coding">predictive brain hypothesis</a> that seeks to explain much of how our perception, learning, and action work. In essence, we are always trying to minimize the difference between what we need from the environment and what we actually experience. The more <strong>certain</strong> we are that our environment can meet these predictions for what we need, the better.</p><p>Active inference would define optimization as the <a href="https://en.wikipedia.org/wiki/Free_energy_principle">increase of homeostatic certainty</a>.
Increased certainty requires the ability to infer and model ever greater <strong>dimensionality</strong>.</p><p>For example, if you contrast the "dimensionality" contained in the model of a bacterium with that of a human, human models are vastly more dimensional:</p><ul><li><p>We have a far greater scale and scope of action.</p></li><li><p>We can consider more variables and relationships, and project those farther into the future.</p></li><li><p>We can vastly improve the accuracy of our predictions through simulations, counterfactuals, etc.</p></li><li><p>We can embed ourselves in collective intelligences that encompass more and more of our environment.</p></li></ul><p>And almost all of this dimensional expansion is due to <strong>technology</strong>. Technology expands our capacities of perception and action, enables us to form better predictions for explaining our environments, and expands our ability to form larger collectives.</p><p>In this sense, technology is the natural extension of our desire to optimize for certainty. Because this desire is infinite, the ultimate goal of technology could be said to be <strong>infinite dimensionality</strong>.</p><p>This is, in essence, the fundamental &#8220;drive&#8221; of life. And because there is no ultimate certainty, life is playing the ultimate <a href="https://www.google.com/search?q=finite+and+infinite+games&amp;oq=finite+and+infinite+games">infinite game</a>.</p><h3><strong>The alignment problem</strong></h3><p>The problem is that dimensionality is expensive. Simple models are cheaper to process. This means we are driven to relentlessly optimize against dimensionality. <a href="https://en.wikipedia.org/wiki/Satisficing">Satisficing</a>, heuristics, and &#8220;gut instincts&#8221; are examples of dimensional optimizations that humans have evolved to use in order to conserve energy.</p><p>But not all optimization is equal.
Optimization could be said to be &#8220;good&#8221; when it <em>compresses</em>, rather than <em>collapses</em>, dimensionality. Dimensionality that is <em>compressed</em> for optimization can be decompressed with no loss of information, while dimensionality that is <em>collapsed</em> is irretrievably lost. <a href="https://en.wikipedia.org/wiki/Kolmogorov_complexity">Kolmogorov complexity</a> offers a possible formalization of alignment as conserving information through compression.</p><p>This gives us a model for thinking about aligning technology.</p><ul><li><p>Technology is &#8220;aligned&#8221; when it <em>expands</em> dimensionality AND increases our ability to compress dimensionality in service of optimization.</p></li><li><p>Technology is &#8220;misaligned&#8221; when it <em>reduces</em> dimensionality OR accelerates the collapse of dimensional optimization.</p></li></ul><p>But this dialectic is always negotiating trade-offs. As optimization occurs at greater dimensionality, more and more energy is required&#8212;both to compress dimensionality and to defy optimization in pursuit of even greater dimensionality. So as dimensionality increases, there will be greater risks of <em>over-optimization</em> (a <em>collapse</em> of dimensionality) or of <em>stasis</em> (a local dimensional maximum), and the costs of both will be greater.</p><h3><strong>Dimensional poverty</strong></h3><p>Civilizational paradigms (like cultures, institutions, and religions) can be considered &#8220;viable&#8221; when they contain sufficient dimensionality to effectively model reality. Viable paradigms can compress all available dimensionality for optimization, conserving energy for <em>expanding</em> dimensionality.
</p><p>Our current civilizational paradigm could be characterized as insufficiently &#8220;dimensional&#8221;.</p><p>Technology increasingly expands our dimensional reality:</p><ul><li><p>global interconnectivity</p></li><li><p>information abundance</p></li><li><p>human agency at planetary scales</p></li><li><p>expanding collective intelligence</p></li><li><p>omni engineering (bio, geo)</p></li><li><p>crypto primitives, virtualization, etc.</p></li></ul><p>Yet our means of interfacing with reality remain mired in low-dimensional paradigms:</p><ul><li><p>representative democracy (in almost all forms)</p></li><li><p>science as parsimony</p></li><li><p>health as diagnosis</p></li><li><p>land as asset</p></li><li><p>value as money</p></li><li><p>art as commerce</p></li><li><p>self as autonomous individual</p></li><li><p>education as testing</p></li><li><p>quality as quantity</p></li></ul><p>These paradigms have been relentlessly optimized for lower dimensionality. This leads to a felt sense of living in dimensional &#8220;poverty&#8221;&#8212;the sense that our civilization is no longer able to effectively model the full dimensionality that it contains. This would be another way to characterize the &#8220;<a href="https://www.humanetech.com/insights/a-deeper-dive-into-the-meta-crisis">meta crisis</a>&#8221;&#8212;our civilizational paradigms are &#8220;leaking&#8221; so much dimensionality that a critical threshold has been reached. New paradigms are required.</p><p>This also leads to a <strong>fragmentation</strong> of our models. We have fewer <em>outer meta-blankets</em> that can unite our collective models by compressing maximal dimensionality. Religions used to play this role, compressing all dimensions of reality into legible forms that even the smallest model could optimize. Today that is no longer true. 
In an interconnected and pluralistic world, every religion is leaking dimensionality.</p><p>One hypothesis of this Substack is that only <a href="https://www.techforlife.com/i/147381770/is-technology-the-problem">life itself</a> is sufficiently &#8220;omni-dimensional&#8221; to serve as that outer meta-blanket that can surround and unite our collective blankets.</p><h3><strong>Technology&#8217;s role</strong></h3><p>From this perspective, the normative role of technology is to optimize certainty in the broadest sense, and thus to optimize the expansion of dimensionality. Technology should thus strive to:</p><ul><li><p>Reveal and create novel dimensionality</p></li><li><p>Compress dimensionality in service of optimization</p></li><li><p>Reduce the energy cost of dimensional expansion</p></li><li><p>Enable new paradigms for modeling higher dimensionality</p></li></ul><p>At its most abstract, technology should seek paradigms that can support <strong>infinite dimensionality</strong>. These &#8220;infinite&#8221; technologies would enable scale-invariant dimensional expansion <strong>and</strong> compression.</p><p>For example, the role that <em>feelings</em> play in human consciousness is an effective paradigm for dimensional <em>compression</em>. At its <em>maximal</em> compression, <a href="https://www.amazon.com/Hidden-Spring-Journey-Source-Consciousness/dp/0393542017">a feeling is legible as positive or negative valence</a>. Yet any affect can be <em>decompressed</em> into its constituent feelings, up to maximum dimensionality. </p><p>This may look like &#8220;affective&#8221; technologies where even planetary valence can be compressed into affect that can still be legible to the most efficient models. 
At the highest compression these &#8220;affects&#8221; would translate to a simple &#8220;good&#8221; or &#8220;bad&#8221; valence, yet they could be decompressed for any model with sufficient energy to process their maximum dimensionality.</p><p>Imagine if every state&#8217;s felt sense of &#8220;security&#8221; included some component of the security of our planetary collective? Or if our felt sense of &#8220;agency&#8221; could include the entirety of our interconnected relations? Or if our felt sense of &#8220;progress&#8221; included second and third order impacts? </p><p>This would be a future of infinite dimensionality, yet one where our models would never be at risk of being overwhelmed. Every model would engage with reality at its appropriate level of dimensional optimization.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.techforlife.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Tech For Life is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[On the Moral Natures of Humans and Machines]]></title><description><![CDATA[Our new intersubjective reality]]></description><link>https://www.techforlife.com/p/moral-natures-of-humans-and-machines</link><guid isPermaLink="false">https://www.techforlife.com/p/moral-natures-of-humans-and-machines</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Fri, 20 Sep 2024 17:14:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XoOk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XoOk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XoOk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 424w, 
https://substackcdn.com/image/fetch/$s_!XoOk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!XoOk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!XoOk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XoOk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10931030,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XoOk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 424w, 
https://substackcdn.com/image/fetch/$s_!XoOk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!XoOk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!XoOk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa5f8c41c-9e18-47d0-8194-1712e72f451a_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Imagine that a <strong>self-driving car</strong> with a passenger in the backseat is cruising down a country road in the early evening. The road is quiet, but a few young kids on bikes are approaching in the other lane up ahead.&nbsp;</p><p>Suddenly, a deer jumps out in front of the car. The car instantly veers to the left. It avoids the deer but strikes one of the children. The child is badly injured and is rushed to the hospital.&nbsp;</p><p>It&#8217;s not certain whether the child will survive.</p><div><hr></div><p>How would the parents of the child feel in this situation?&nbsp;</p><p>Who would they hold <strong>responsible</strong>? The company? The algorithm? The developers?</p><p>Would the parents care that self-driving cars were leading to dramatically fewer traffic mortalities overall?</p><p>Would their feelings change if it were a human driver instead?</p><div><hr></div><p>This scenario is a stark example of what is becoming our new reality: advanced technology is manifesting as <strong>autonomous moral agents</strong> and directly interacting with humans.</p><p>By autonomous moral agents (<strong>AMAs</strong>), I mean agents that process inputs and produce outputs comparable to those of <em>human</em> moral agents. They make real-time decisions in service of achieving larger goals, just like we do.&nbsp;And these decisions can have a <em>moral weight</em>, affecting other agents they are interacting with.</p><p>Even if their <strong>moral natures</strong> are radically different from ours, these AMAs are becoming our <strong>moral peers</strong>. This is an entirely new relationship to technology, one that demands answers to a pressing new category of ethical questions:</p><ul><li><p>How should we think about these different kinds of moral agents?
Should we hold them to the same moral standards as humans?</p></li><li><p>How will AMAs change how we think about concepts like empathy, forgiveness, or trust?</p></li><li><p>How should this new moral dynamic change how we think about adopting advanced technologies?&nbsp;</p></li></ul><p>The increasing adoption of autonomous vehicles (<strong>AVs</strong>) means that the time to answer these questions is now. These answers will not just shape our roads, but the entire landscape of human-machine interaction.&nbsp;Otherwise, we will soon find ourselves navigating a future saturated with AMAs without a clear ethical roadmap.</p><p>But before we can provide answers, we need to clarify the questions.</p><h3>Paying the confusion tax</h3><p>Technology is outpacing the vocabulary we have to describe it.&nbsp;</p><p>Consider AI, and how we describe it with terms like &#8220;intelligence&#8221;, &#8220;creativity&#8221;, and even &#8220;consciousness&#8221;. These are concepts that we barely understand when it comes to describing <em>humans</em>. Our understanding comes more from our lived experience than from precise definitions. Mapping these terms onto AI will always risk confusion, because they can never fully escape their human origins.</p><p>This leads us to often imbue AI with human-like qualities it doesn't possess, like assuming a chat interface has intentions or personality. On the other hand, we can fail to grasp the true novelty of AI capabilities when they deviate too far from our human conceptions.&nbsp;</p><p>I call this the<strong> confusion tax</strong>. It&#8217;s the price we pay when technology exceeds the vocabulary we have to describe it.</p><p>But what options do we have? When encountering the unknown, our only move is to co-opt the known, regardless of how strained the result is.&nbsp;</p><p>We may need an entirely new lexicon for the technological equivalents of these concepts. 
But until that happens, we need to recognize the limitations of co-opting human terms to bridge the machine divide.</p><p>Unfortunately, the confusion tax is going to get a lot worse before it gets better.&nbsp;</p><h3>Welcome to our new tax bracket</h3><p>As machines become moral agents, a whole new category of terms will be needed to describe the moral interactions between humans and machines, and the responses these interactions will evoke.</p><p>We will find ourselves using terms like &#8220;empathy,&#8221; &#8220;dignity,&#8221; and &#8220;forgiveness&#8221;. And just like the confusion tax with AI, these moral terms are intrinsically grounded in our human experience. To apply them to interactions with machines will inevitably lead to confusion. They will describe interactions that may not involve the subjective experiences these words imply.&nbsp;</p><p>And just like with AI, the confusion tax can prevent us from fully understanding the moral capacities of autonomous agents. We may struggle to recognize truly novel ethical breakthroughs, or to understand entirely new moral frameworks, because they deviate too far from the norms and intuitions we associate with our own lived experience.</p><p>The moral nature of these terms will amplify the consequences of confusion. The confusion tax won&#8217;t just impact our <em>perception</em> of these agents, but our very <em>interactions</em>, both with AMAs and with each other. AMAs threaten to disrupt the shared understanding of moral concepts that our ethical frameworks depend on.</p><p>Yet what other options do we have, other than to leverage our moral language? </p><p>This means that certain questions will become unavoidable: </p><ul><li><p>What will it mean to &#8220;forgive&#8221; a machine?
</p></li><li><p>What will it mean to &#8220;trust&#8221; an algorithm with the power to make life-or-death decisions?</p></li><li><p>What will it mean to treat another moral agent with &#8220;dignity&#8221;?</p></li></ul><p>Autonomous moral agents are pushing us into an entirely new tax bracket of confusion.</p><h3>Beyond ethics vs. morals</h3><p>Traditionally, our thinking about morality and technology has been divided between <em>ethics</em> and <em>morals</em>:</p><ul><li><p><strong>Ethics</strong> are about the collective standards guiding our choices. They are meant to apply to everyone, regardless of personal beliefs. For instance, medical ethics guide healthcare professionals' actions, even if they personally disagree.</p></li><li><p><strong>Morals</strong> are about the individual applications of principles in real-life situations. They are often informed by religious beliefs, cultural norms, or personal intuitions. It&#8217;s possible to act ethically in ways we might morally disagree with.</p></li></ul><p>Autonomous moral agents are blurring these distinctions. If ethics are more about <em>objective</em> standards, and morals are about <em>subjective</em> practices, then our stance toward AMAs represents a new category: <strong>intersubjective</strong> moral relationships.&nbsp;</p><p>Intersubjectivity is about the <strong>shared understandings</strong> and mutual interactions between agents. Rather than defining norms and rules, intersubjectivity is more like the silent background that shapes these interactions, including how we think about our shared responsibilities, values, and capacities. It is the foundation that enables us to interact with high degrees of trust.&nbsp;</p><p>For example, when driving we know that we&#8217;re interacting with drivers who share the same general principles we do. We trust that we have the same capacities to handle ambiguous or surprising situations. We know we&#8217;ve all gone through similar training and education.
We extend a certain dignity to each other as equal moral actors. We understand that sometimes the rules need to be broken. We get that no driver is perfect, and when mistakes are made we know that we are just as likely to make them.</p><p>So what happens when we suddenly confront agents with entirely different moral natures? Our sense of intersubjectivity is going to be disrupted, along with the moral calculus that grounds our interactions.&nbsp;</p><h3>How to forgive a machine</h3><p>To see why, let&#8217;s return to our original example and ask how the parents should feel about the AV that struck their child. Much of their response will be mediated by the moral nature of the driver. </p><p>With a human driver, concepts like responsibility, empathy, and forgiveness are grounded in our shared moral nature:</p><ol><li><p>The driver would be held ultimately <em>responsible</em>, even if tired or distracted.</p></li><li><p>A relatable distraction (like a crying child in the backseat) may elicit <em>empathy</em>.</p></li><li><p>The possibility exists to extend <em>forgiveness</em>, acknowledging human fallibility.</p></li></ol><p>Even in a worst-case scenario like drunk driving, the parents will have some understanding of how society navigates trade-offs between individual freedom and collective safety. We&#8217;ve built legal and social frameworks to mitigate the dilemmas specific to our moral natures.</p><p>But with an autonomous vehicle, this familiar moral landscape shifts dramatically:</p><ol><li><p><strong>Responsibility</strong> becomes hidden in a fog of algorithms, developers, and corporate policies.</p></li><li><p><strong>Empathy</strong> is precluded by the machine's utterly foreign moral nature.</p></li><li><p><strong>Forgiveness</strong> becomes almost meaningless.
You can&#8217;t forgive something that neither feels nor understands what it means to be forgiven.</p></li></ol><p>In place of <em>forgiveness</em>, there is only <em>forbearance</em>. The parents will be asked to &#8220;tolerate&#8221; such incidents as the inevitable cost of a safer future with fewer overall fatalities. Any feelings of resentment or vengeance could only be resolved by accepting the notion of a greater good. Yet in the face of personal tragedy this utilitarian calculus will feel hollow, even if intellectually we can accept it.</p><p>Ultimately, learning to &#8220;forgive&#8221; a machine may be less about extending human concepts to AMAs, and more about developing new frameworks for ethical coexistence. Either way, our intersubjective reality will be changing.</p><h3>Trust isn&#8217;t just about safety</h3><p>We don&#8217;t need to rely only on hypotheticals to explore these new moral dilemmas. AVs are providing concrete examples. A <a href="https://philkoopman.substack.com/p/the-cruise-pedestrian-dragging-mishap">real-world case study</a> resulted from a 2023 incident involving <a href="https://www.getcruise.com/">Cruise</a>, GM's self-driving car unit.</p><p>During a test drive in San Francisco, a Cruise AV struck a pedestrian who had been knocked into its path by another car with a human driver. The victim became pinned under the Cruise AV, which first stopped, but then began to drive away to clear the lane. In the process it dragged the victim nearly 20 feet before finally stopping, causing additional injuries.</p><p>According to Cruise, the maneuver that led to dragging the victim was built into the vehicle&#8217;s software to promote safety. Yet this decision led to a gruesome and dehumanizing outcome, one that prompted Cruise to halt its operations entirely.</p><p>This incident reveals how an intersubjective reality can outweigh broader ethical arguments. It&#8217;s one thing to trust an AV to be statistically safer in the aggregate.
It&#8217;s another thing entirely to trust an AV to respect our dignity as moral agents.</p><p><strong>Dignity</strong> is intrinsic to how humans relate to one another in moral situations. It represents the inherent worth of an individual and the minimum level of respect and care they deserve. The gruesome spectacle of an AMA grinding a human body beneath its wheels feels like an affront to human dignity.&nbsp;</p><p>It&#8217;s easy to argue for some degree of tolerance when it comes to adopting AVs. If the end result is dramatically fewer mortalities, then the mistakes that come with the trial-and-error process should be tolerated. But the reason that Cruise ceased operations wasn&#8217;t due to a <em>functional</em> mistake. It was due to an outcome that was perceived as deeply dehumanizing.</p><p>Trust goes beyond a machine&#8217;s functional reliability. It also involves a belief that any AMA will operate with a moral framework that respects human dignity and values. Yes, we need to trust AVs to operate safely. But we also need to trust that they won&#8217;t dehumanize us in the process. The confusion tax will demand that we clarify that difference.</p><h3>Moral arenas</h3><p>It is perhaps ironic that, however vital these new moral concerns appear, they still depend entirely on a <strong>design choice</strong>.</p><p>AVs are confronting us with these challenges because we have chosen to adopt them into our <strong>existing</strong> traffic infrastructure. We&#8217;re not designing <em>new</em> infrastructure optimized for AVs. Instead, we are asking machines to operate in a world that wasn&#8217;t designed for them.</p><p>This choice defines what we can call the <strong>moral arena</strong>&#8212;the intersubjective system of rules, norms, and assumptions that shapes the ethical behavior of agents within it.&nbsp;</p><p>The defining attribute of any moral arena is the <strong>moral nature</strong> it was <em>designed</em> for.
Most AMAs will be adapting to moral arenas that were defined and optimized for <em>humans</em>, not machines. This includes traffic systems, legal frameworks, and corporate structures. Each of these evolved to adapt to specifically <em>human</em> moral natures.</p><p>For example, consider how our existing traffic system depends on implicit assumptions baked into our human moral natures:</p><ul><li><p><strong>It&#8217;s contextual</strong>. Humans are highly adept at interpreting context. If a stalled vehicle is blocking our lane, we break traffic rules and use the other lane.</p></li><li><p><strong>It&#8217;s normative.</strong> What separates "safe" from "reckless" driving is a fuzzy contextual judgment rooted in human intuitions and lived experiences.</p></li><li><p><strong>It&#8217;s ambiguous.</strong> Traffic liability depends on ambiguous definitions like &#8220;reasonable behavior&#8221; that lack a formal specification. This ambiguity is a feature, not a bug.</p></li><li><p><strong>It&#8217;s social.</strong> Humans don&#8217;t just depend on explicit rules. How many accidents are avoided because we telegraph our intentions with eye contact, hand waving, and flashing lights?</p></li></ul><p>These are dynamics that machines&#8212;with their deterministic rules and stochastic averages&#8212;will never fully master. By forcing AVs to use traffic systems optimized for humans, we are accepting that there will always be a gap between the nature of machines and the moral arena they are interacting in.&nbsp;</p><p>In a sense, this gap represents another <a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems">incompleteness argument</a> against 100% alignment with autonomous moral agents. 
The question then becomes, how much <em>misalignment</em> are we willing to tolerate?</p><h3>Defining the moral terms of engagement</h3><p>AMAs compel us to reimagine what ethical coexistence with our own technology should look like. Ethics cannot be a mere afterthought. It must be a primary consideration in defining how machines will be allowed to interact with humans.&nbsp;</p><p>The first requirement will be defining the <strong>moral terms of engagement</strong>&#8212;the shared ethical parameters and constraints that can bridge the divide between human and machine moralities. By making these terms explicit, we can better incorporate moral engagement as an integral design component of technological adoption.</p><p>These terms must address the unique challenges posed by AMAs, including the confusion tax around moral terms, the new intersubjective realities they will create, and the moral arenas that will define their interactions.&nbsp;</p><p>First, we can reduce the <strong>confusion tax</strong> by realizing that our moral terms will now be much more <em><strong>contextual</strong></em>. Terms like empathy, tolerance, or trust will now depend more on the moral arena than on any universal definition. The idea of &#8220;dignity&#8221; may mean one thing in one moral arena, but something entirely different in another.&nbsp;</p><p>Next, we must weigh <strong>intersubjective</strong> considerations <em><strong>equally</strong></em> with traditional moral and ethical arguments. Ethical arguments like reduced overall fatalities should not, by default, outweigh extreme violations in the intersubjective realm. The fact that ethical arguments can be quantified and analyzed in ways that intersubjective arguments cannot should not be allowed to tip the scales on their moral impact.</p><p>Finally, we need to recognize how much the moral terms of engagement will depend on our <em><strong>choice</strong></em> of moral arena. 
There are three basic options for adopting new technologies:</p><ol><li><p><strong>Adaptation</strong>: We can force AMAs to adapt to human-centric environments. As this article has explored, new intersubjective realities will create multiple moral dimensions to consider.</p></li><li><p><strong>Separation</strong>: We can create separate, highly controlled environments for AMAs. This avoids many of the intersubjective issues, but brings its own concerns around agency and autonomy.</p></li><li><p><strong>Coevolution</strong>: We can create hybrid spaces that leverage the strengths of both moral natures. This has perhaps the highest likelihood of long-term success, but may require a complete reimagining of our moral frameworks.</p></li></ol><p>Regardless of which choice may be ideal for any given AMA, the moral terms of engagement simply demand that moral implications be weighed equally alongside traditional considerations of cost, utility, and feasibility.&nbsp;</p><p>In the end, defining the moral terms of engagement isn&#8217;t just about bridging the divide between human and machine moralities&#8212;it&#8217;s about building a new language for the future of human-machine cooperation.</p><h3>Conclusion</h3><p>The &#8220;<em>we</em>&#8221; in "How can we best live the good life?" can no longer be confined to humans. The arrival of autonomous moral agents will upend centuries of human-centric moral thinking.</p><p>Our task now is to expand our circle of moral concern beyond its traditional boundaries. 
We must be open to the possibility that confronting new moral natures can be precisely what&#8217;s needed to improve our own moral frameworks.&nbsp;</p><p>The question is not whether we can avoid granting our technological creations some measure of moral status, but on what terms we are willing to welcome them into the ethical domain.&nbsp;</p><p>And perhaps, in the process, we can unlock new dimensions of empathy, fairness, and mutual understanding that we can learn to apply to other human beings as well.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Tech for Life]]></title><description><![CDATA[A manifesto]]></description><link>https://www.techforlife.com/p/manifesto</link><guid isPermaLink="false">https://www.techforlife.com/p/manifesto</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Mon, 05 Aug 2024 19:10:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b7c9859f-6938-47e8-968c-0c7b28a807c9_1344x896.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_A35!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_A35!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 424w, https://substackcdn.com/image/fetch/$s_!_A35!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 848w, https://substackcdn.com/image/fetch/$s_!_A35!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 1272w, https://substackcdn.com/image/fetch/$s_!_A35!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_A35!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp" width="1344" height="896" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:896,&quot;width&quot;:1344,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1752998,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_A35!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 424w, https://substackcdn.com/image/fetch/$s_!_A35!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 848w, https://substackcdn.com/image/fetch/$s_!_A35!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 1272w, https://substackcdn.com/image/fetch/$s_!_A35!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F774d89f9-ba63-4878-8cba-ef68d8101143_1344x896.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><blockquote><p><em>&#8220;We should not accept technology that deadens us&#8221;</em></p><p>- Brian Arthur, <em>The Nature of Technology</em></p></blockquote><h2>Introduction</h2><p>Whatever you are, even on your worst day, <strong>you are not a machine</strong>.&nbsp;</p><p>Nor are you a computer, an algorithm, or a &#8220;meat robot&#8221;. You are not merely a pattern of entropy displacement. And your life is most certainly <em>not</em> a simulation.</p><p>You are none of these things, because you are something vastly more complex, mysterious, and wonderful: <strong>you are alive</strong>.&nbsp;</p><p>As a living being, you have <strong>desires</strong>, and a drive to make those desires real. Your <strong>imagination</strong> transforms reality into the symbols and stories that define your world. 
With every action you take, you inject <strong>meaning</strong> and purpose into the universe.&nbsp;</p><p>And yet, we live in a world where <strong>technology</strong> is often presented as a superior form of being. Technology seduces us with promises of overcoming all human frailties. It offers the allure of security, safety, and control. It dangles the possibility of perfecting every human skill, of automating away our weaknesses, of achieving a digital immortality.</p><p>But these promises never quite seem to materialize. In fact, the more we embrace technology, the further we seem to get away from the <strong>essence</strong> of what makes us human. We become detached from our sense of place and history. We find ourselves more isolated and alone, despite being more connected than ever. We are starved for purpose and meaning in a world flooding us with data and information.&nbsp;</p><p>Instead of masters of our technological destiny, we feel like cogs in a digital machine. Instead of technology enhancing our <strong>human-ness</strong>, it seems to reduce us to mere resources. We are the training data for algorithms, the content providers for recommender engines, the behavioral profiles for advertisers.&nbsp;</p><p>And now we&#8217;re on the cusp of wielding unprecedented technological power. Synthetic biology, planetary-scale geoengineering, and artificial superintelligence will dramatically transform our world. But how confident are we that these powers will lead to a future of human flourishing?</p><p>Just as our technological power is increasing, our <strong>wisdom</strong> to deploy that power seems to be <em>decreasing</em>. While technology keeps expanding what we <em>can</em> do, our grasp on what we <em>ought</em> to do has not kept pace.</p><p>In all of our excitement for technological progress, we seem to forget that <strong>technology is not an end unto itself</strong>. Technology does not exist for its own sake. 
It can&#8217;t tell us what technology is actually <em>for</em>. It can&#8217;t make our choices for us.&nbsp;</p><p><strong>Only something bigger than technology can do that.&nbsp;</strong></p><p>This manifesto proposes that only <strong>life itself</strong> is capable of being that &#8220;bigger something&#8221; that can wisely steward these emerging technological powers.</p><p>By situating technology within the larger story of life, we can unite technology around a singular purpose, and align it with what life needs to flourish.</p><p>This is to reimagine technology not as our <em>replacement</em>, but as our <em>extension</em>&#8212;an amplifier of our deepest human drives to preserve everything life has achieved while expanding what life can be and become.</p><p><strong>This is a call to put technology in service of life itself.</strong></p><h2>A world out of whack</h2><p>We can start by better understanding our technological situation.</p><p>Technology is the product of our <strong>imagination</strong>. It&#8217;s no different from philosophy, science, art, culture, politics, and every other aspect of the human world that we have <em>imagined</em> into being.</p><p>For almost all of human history, these imaginative powers have operated in <strong>unison</strong>. 
Guided by larger purposes, they have always acted more as a <strong>whole</strong> than as individual parts.</p><p>Tracking the impact of their power over time would look something like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aRDj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aRDj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 424w, https://substackcdn.com/image/fetch/$s_!aRDj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 848w, https://substackcdn.com/image/fetch/$s_!aRDj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 1272w, https://substackcdn.com/image/fetch/$s_!aRDj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aRDj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png" width="787" height="274" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:274,&quot;width&quot;:787,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aRDj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 424w, https://substackcdn.com/image/fetch/$s_!aRDj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 848w, https://substackcdn.com/image/fetch/$s_!aRDj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 1272w, https://substackcdn.com/image/fetch/$s_!aRDj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F330cf8fb-88fd-40c9-bca5-85237b9e7aef_787x274.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Even as each dimension ebbed and flowed, their power largely increased along the same trajectory, propelling the evolution of human civilization forward.</p><p>But today, the graph looks different. Starting sometime during the industrial revolution, the impact from technology began to increase <strong>exponentially</strong>. 
The other dimensions of our imagination started to stagnate by comparison.</p><p>Now the graph looks more like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1lB5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1lB5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 424w, https://substackcdn.com/image/fetch/$s_!1lB5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 848w, https://substackcdn.com/image/fetch/$s_!1lB5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 1272w, https://substackcdn.com/image/fetch/$s_!1lB5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1lB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png" width="787" height="280" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1e962779-803e-4616-a856-3cb1e0909db3_787x280.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:280,&quot;width&quot;:787,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1lB5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 424w, https://substackcdn.com/image/fetch/$s_!1lB5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 848w, https://substackcdn.com/image/fetch/$s_!1lB5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 1272w, https://substackcdn.com/image/fetch/$s_!1lB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1e962779-803e-4616-a856-3cb1e0909db3_787x280.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>By achieving increasing powers of <em>causality</em>, technology is escaping the powers of <em>choice</em> that our imaginations were once capable of imposing. Rather than a single part of a greater whole, technology is now operating more like an autonomous force, unconstrained by anything bigger than itself.</p><p>The result is a world that feels increasingly out of whack.</p><h2>What does a world out of balance feel like?</h2><p>It feels like technology <strong>dominates</strong> our reality, swallowing up every other dimension of our human experience. Everything else feels small and inconsequential by comparison.&nbsp;</p><p>We feel this imbalance when we consider our <strong>future</strong>. Technology feels like the only thing that will matter. We don't imagine dramatic improvements in human coordination or upgrades to our collective intelligence. 
Instead, our future seems to hinge solely on what new technological powers will emerge and who will control them.</p><p>We feel this imbalance when every technological &#8220;choice&#8221; feels more like a <strong>technological demand</strong>. Technology sets the terms of engagement for almost every aspect of our lives. To opt out of technology today is to effectively opt out of the economy, governance, and almost all of society. It&#8217;s simply not an option.</p><p>We feel this imbalance when a <strong>technological attitude</strong> seems to pervade every aspect of our existence. Is there any part of the human experience&#8212;our childhoods, our relationships, our education, our sex&#8212;that we are <em>not</em> trying to replace with screens? That we are not trying to <em>virtualize</em>?</p><p>We feel this imbalance when the only <strong>motivations</strong> that drive innovation seem to be the <strong>greed</strong> of the market (to make the most money) or the <strong>fear</strong> of the state (to amass the most power). Any technology that can&#8217;t justify huge profit margins or asymmetric power goes undeveloped, regardless of its potential benefits to humanity or life as a whole.</p><p>We feel this imbalance when <strong>technological momentum</strong> seems unstoppable. Technology has become too complicated to understand, too critical to turn off, and too entrenched to displace. Even when technology creates new problems, the only solution seems to be even more technology.&nbsp;</p><p>We feel this imbalance when the <strong>technological view</strong> increasingly sees human beings as flawed machines meant to be optimized. We're told that machines don't decay, don't show bias, don't make mistakes&#8212;and so we should strive to be more like them, augmenting or replacing our humanity with artificial alternatives.</p><p>Our world feels less and less like one where the point of technology is to serve humanity. 
It feels more and more like one where human beings are supposed to serve technology.</p><h2>Is technology the problem?</h2><p>But wait&#8212;is technology itself the reason for this imbalance? Or is it simply that technology has been wildly successful while everything else has failed to keep up?&nbsp;</p><p>Why should we blame technology as if it is some independent force? Our technology is perhaps the greatest testament to the <strong>human imagination</strong> that we have ever achieved. In just a few millennia, our species has gone from puny upright hominids to a planetary force, all thanks to the results of our technological imagination.</p><p>If we really need something to blame, shouldn&#8217;t we look at every <em>other</em> aspect of our imaginations? After all, the power of our technology only becomes a <strong>problem</strong> when nothing else is <strong>big enough to properly guide it</strong>.</p><p>In fact, just as technology is gaining the potential to unlock new levels of human flourishing, traditional forms of balance and wisdom seem to be moving in the opposite direction:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dCkj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dCkj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 424w, https://substackcdn.com/image/fetch/$s_!dCkj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 848w, 
https://substackcdn.com/image/fetch/$s_!dCkj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 1272w, https://substackcdn.com/image/fetch/$s_!dCkj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dCkj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png" width="787" height="277" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:277,&quot;width&quot;:787,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dCkj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 424w, https://substackcdn.com/image/fetch/$s_!dCkj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 848w, 
https://substackcdn.com/image/fetch/$s_!dCkj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 1272w, https://substackcdn.com/image/fetch/$s_!dCkj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb91e32cd-b05a-4c58-87d9-b3644ba0ead6_787x277.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>What&#8217;s left simply isn&#8217;t <strong>big enough or wise enough</strong> to grapple with our most advanced
technologies:</p><ul><li><p><strong>Markets aren&#8217;t big enough</strong>. Advanced technology is too powerful for the market&#8217;s crude trial-and-error mechanisms, and the risk of externalities is too great to leave to the arms-race dynamics that markets demand.&nbsp;</p></li><li><p><strong>Governments aren&#8217;t big enough</strong>. Nation states have their own arms-race dynamics that prevent planetary scale coordination, and a single world government is as frightening as any advanced technology.&nbsp;</p></li><li><p><strong>Philosophy and religions aren&#8217;t big enough</strong>. In a globally interconnected world, we&#8217;re simply too pluralistic to impose a single religion or philosophy on all technological guidance.</p></li><li><p><strong>Culture and politics aren&#8217;t big enough</strong>. Today both culture and politics are too fragmented to coherently guide technology. They are more likely to grind innovation to a halt as technology gets swallowed in the culture wars or politicized regulation.</p></li><li><p><strong>The status quo isn&#8217;t big enough</strong>. The default hope is that somehow, between big tech and government regulation, we will continue to &#8220;muddle through&#8221; and &#8220;find the equilibrium&#8221;. This approach may work when the stakes of technology are localized and marginal. When the stakes turn planetary and existential, we need better ideas.</p></li></ul><p>Which of these do we trust to coordinate AI alignment? To prevent us from crossing planetary boundaries? To guide our pursuit of synthetic biology? 
To provide sufficient wisdom around &#8220;merging with the machine&#8221;?</p><p>To find something <strong>bigger than technology</strong>&#8212;something capable of uniting a global humanity in guiding our technological future&#8212;we&#8217;ll have to go much deeper into our existence and much further back in time.</p><p>We&#8217;ll have to go back to<strong> life itself.</strong></p><h2>Life itself</h2><p>Life is the one thing that unites each and every one of us: we are each living creatures and we&#8217;d each like to continue being so. We&#8217;d each like for Earth, our living home, to thrive. And we&#8217;d each like the lives of our children to be better than our own.</p><p>In this way, life is the <strong>one thing</strong> that we can all agree on. It can transcend philosophy, politics, religion, and culture. Life is bigger than all of those things.&nbsp;</p><p>In a world where advanced technology has put us on the precipice of both transcendent benefits and catastrophic risks, it&#8217;s worth asking: <strong>what would it look like to reorient technology towards life? To explicitly put technology in service of life itself?</strong></p><p>What would this even mean?</p><h2>The story of life</h2><p>Life isn&#8217;t just the story of human beings here on earth; it&#8217;s much bigger than that. 
Science has no clear definition for what life is, so let&#8217;s use the <em>grandest</em> definition possible, one that goes all the way back to the Big Bang&#8212;<strong>life is that universal drive towards increasing wholeness, structure, and integrity.</strong></p><p>Whatever drove those first atoms to coalesce into molecules, and gasses into planets, and planets into galaxies&#8212;that same impulse is what drove chemical bonds to evolve into cells and tissues and consciousness and magnificently complex ecosystems of interdependent organisms and eventually <strong>you</strong>&#8212;<em>that</em> is the impulse of life.&nbsp;</p><p>We don&#8217;t know why or how life happened; we just know that it <em>did</em> happen. And as far as we can tell, the emergence of life has<strong> happened only once</strong>.&nbsp; This makes life <strong>the most precious thing in the entire universe</strong>.&nbsp;</p><p>This also makes Earth the most interesting <em>planet</em> in the universe. Earth alone has somehow harbored the conditions necessary for life to evolve into the most advanced form that has ever existed: us, human beings.&nbsp;</p><h2>Humanity&#8217;s calling as stewards of life</h2><p>As the most rare and valuable thing in the universe, life deserves our deepest care and attention. We have no reason to believe that life is somehow destined to continue. As life&#8217;s most powerful agents, we have a responsibility to contribute in whatever ways we can to perpetuate life&#8217;s continued evolution and expansion.</p><p>Our science and technology have given us the powers of <strong>evolutionary agents</strong>.&nbsp; We contain the potential to <strong>upgrade evolution</strong> from a purely natural process to a more conscious, intentional process.&nbsp;</p><p>As such, any duty of humanity must recognize the need to become worthy <strong>stewards of life</strong>. 
To actively steward life is to ensure that life can flourish and expand, both here on Earth and beyond.&nbsp;</p><p>We could call this our <strong>foundational duty&#8212;</strong>to both protect what life has achieved and expand what life can become.<strong> </strong>To fail at <em>this</em> duty would preclude the possibility of <em>any other</em> <em>duties</em>.&nbsp;</p><p>This duty comes with the power to evolve what evolution can achieve, and thus what life can become.  To have any chance to succeed, we must first obtain the <em>wisdom</em> to deploy such power.</p><h2>Right Relationship</h2><p>By reorienting technology towards life, we can begin to see how nature, humanity, and technology can be placed into<strong> a right relationship</strong>:</p><p><strong>Nature as the foundation of life. </strong>Nature is the foundation of life, and provides fundamental constraints on what life can be and become. Without nature life ceases to exist.</p><p><strong>Humanity as the steward of life. </strong>Humanity&#8217;s role is to properly value life by working to preserve what life has achieved and expand what life can become.&nbsp;</p><p><strong>Technology as the extension of life. 
</strong>Technology is the means by which humanity can fulfill its role as life&#8217;s steward, by both protecting Nature as the foundation of life and expanding the limits of life&#8217;s possibilities.&nbsp;</p><p>We now have an answer for what technology is actually for: <strong>to empower humanity to best fulfill its role in expanding life&#8217;s integrity and possibility</strong>.</p><p>In this way, technology can help expand life beyond the pure <strong>contingency</strong> of its historical path to something more conscious and intentional.&nbsp; All of life&#8217;s random twists and turns have led to the capacity to transform evolution itself, to expand possibilities of what life is capable of achieving.</p><h2>Principles of life</h2><p>To reorient technology towards life involves two critical steps:</p><ol><li><p>First we must understand the <strong>principles</strong> that have helped life emerge and flourish.&nbsp;</p></li><li><p>Then we must translate those principles into <strong>practices</strong> that can actively guide our technology.</p></li></ol><p>The principles of life can lead to practices that are both generative and surprising.</p><p>For example, consider <strong>adaptation</strong>, the core principle of navigating between the drives of <em>creation</em><strong> </strong>and <em>protection</em> to maximize life&#8217;s adaptive capacity.</p><p>On the one hand, <em>life wants to <strong>evolve</strong></em>. Life wants to explore every possible niche until the possibility space is <em>saturated</em>. The goal is <em>creation</em>&#8212;to generate and test &#8220;new information&#8221;. The processes are unpredictable, experimental, and diversifying.&nbsp;</p><p>On the other hand, <em>life wants to <strong>persist</strong></em>. Much of life&#8217;s adaptive mechanisms are meant to replicate and maintain what <em>works</em>. The goal is <em>protection</em>&#8212;to conserve the most successful experiments that have proven to work. 
The processes are predictable, convergent, and unifying.&nbsp;</p><p>Adaptation is about finding the right balance between the two. Too much creation threatens life&#8217;s ability to persist, while too much protection threatens life&#8217;s ability to evolve.&nbsp;</p><p>It may seem like technology is only about <em>innovation</em>, but the exact same principle applies.<strong> The more </strong><em><strong>confident</strong></em><strong> we are in protecting what we ultimately value, the more </strong><em><strong>experimental</strong></em><strong> we can be in pursuing innovation.&nbsp;</strong></p><p>Reorienting more of our technology practices around this principle alone would radically improve our technological landscape.&nbsp;</p><h2>A future worth living for</h2><p>What would a world where technology is in service to life look like? Would it really be <em>any</em> <em>different</em>?</p><p>In many ways, it would feel the same. After all, any world that deviates too far from the principles of life would simply cease to exist.&nbsp;</p><p>But in many ways, <strong>it would be very different</strong>.&nbsp;</p><p>It would be a world where technology is clearly in service to <strong>something bigger</strong> than itself, guided with meaning and purpose, grounded in a right relationship to humanity, nature, and life itself. It would be a world of <strong>technology in love</strong>.</p><h4><strong>Technology in love with humanity</strong></h4><p>Imagine technology so in love with humanity that it seeks to enhance our <em><strong>human-ness</strong></em>, rather than replace it or automate it away. Technology would get <strong>out of our way</strong> rather than constantly demand our attention. It would <strong>enrich our embodied experiences</strong> rather than virtualize them.</p><p>This would be a world where <strong>maximizing human imagination</strong> is a primary focus of innovation. 
Rather than hoping AI can solve our planetary problems, we&#8217;d be probing the limits of our collective intelligence, progressing our human institutions, and expanding our capacity to coordinate at ever increasing scales.</p><h4>Technology in love with nature</h4><p>Imagine technology so in love with nature that technological <strong>progress</strong> is measured by nature&#8217;s health and resilience. This would be technology that <strong>gives</strong> to nature more than it takes; that sees nature as a <strong>beneficiary</strong> to <em>improve </em>more than a resource to <em>exploit</em>.</p><p>This would be a world where technology helps us <strong>coexist</strong> with other forms of life, rather than further separating us from the natural world. It would help reveal the <strong>radical interdependence</strong> of our natural world, and how everything we value depends on it.</p><h4><strong>Technology in love with life</strong></h4><p>Imagine technology so in love with life that its purpose is clear:<strong> to win the </strong><em><strong>infinite game of life</strong></em>, where the only goal is to <strong>keep playing</strong>. </p><p>Rather than top-down plans or utopian outcomes, it would focus on systems and networks that maximize <em>infinite play</em>&#8212;where the best experiment is to keep the experiment going.</p><p>This would be a world where technology would deeply conserve life&#8217;s <em>best</em> experiments so it can run billions of <em>greater</em> experiments. 
Where innovation is focused on improving our <strong>methods for adopting technology</strong> as much as it is on improving technology itself.</p><p>Imagine being so confident in our ability to test advanced technologies&#8212;to measure impacts, run experiments, correct for errors&#8212;that we gleefully <strong>maximize</strong> every innovation to learn what works as quickly as possible so it can be available for everyone.</p><h2>None of this will be easy</h2><p>Of course, discerning the principles of life won&#8217;t be easy. Our biases and fears will always threaten to corrupt our conclusions. The resulting practices will conflict with traditional goals and principles. There will be philosophical, religious, and economic objections.</p><p>But it won&#8217;t be impossible either. Rather than getting lost in politics, culture, and religion, any debates will be grounded in deep agreement&#8212;in the values of life itself, and the need to both protect and expand it. This alone could dramatically improve the technological discourse.</p><p>We can easily dismiss some traditional hangups, like the <strong>naturalistic fallacy</strong>&#8212;the idea that all things <em>natural</em> are by definition <em>good</em>. The entire point is that <em>nothing</em> in nature is eternally good, because evolution itself is <em>constantly evolving</em>. The principles of life are bigger than any single moment in nature&#8217;s history.</p><p>For example, <em>natural</em> evolution had to rely on mass pain and death&#8212;the crudest tools at its disposal&#8212;to bootstrap life through countless iterations of random experiments. But the broader principle isn&#8217;t about pain and death; it&#8217;s about variation and selection.&nbsp;</p><p>As stewards of life, our role is to seek practices that best implement these broader principles.
Pain and death will never be removed from the human condition, but an intentional and <em>conscious</em> evolution should find more efficient and elegant practices to fulfill the broader principles of variation and selection.</p><p>Besides, the goal isn&#8217;t to solve every philosophical dilemma. The goal is to situate technology within the larger <em>story of life</em>, to unite technology around a singular purpose, and to align technology with what life needs to flourish.</p><h2>In summary</h2><p>You are not a machine, or a computer, or an algorithm. You are a unique manifestation of the most precious thing in the universe&#8212;life itself. You are endowed with desires, agency, and a boundless imagination.</p><p>Technology comes from this same imagination&#8212;humanity&#8217;s greatest superpower&#8212;just like art, philosophy, and science. Technology has transformed us into agents with the power to evolve evolution.</p><p>As life&#8217;s most complex achievement, we have a responsibility to use the entirety of our imaginations as stewards of all life, both on Earth and beyond, both today and in perpetuity.</p><p>The wisdom to deploy technology in service of life depends on aligning our technological practices with the principles of life itself.&nbsp;</p><p>Let&#8217;s stop accepting technology that deadens us. By reorienting technology to life, we can align technology with a purpose worthy of its power. We can align it with the very source from which it springs, the boundless human imagination in its entirety.</p><p>We can align technology to empower our role as life&#8217;s stewards: to play the infinite game and to expand the possibilities of what life can be and become.</p><p>We can align technology with life itself.</p><p>&#8212;</p><p>Is this even possible? Are we wise enough to admit that something bigger is needed to guide our technology, or are we stuck? Do we have the collective will and intelligence to achieve something at this scale, or not?
Do we still have the agency to define our own future, or is it too late?&nbsp;</p><p>There is only one way to find out. We will need to run the experiment.</p>]]></content:encoded></item><item><title><![CDATA[What Does a Good Digital Life Look Like?]]></title><description><![CDATA[An ethical framework for accelerating change]]></description><link>https://www.techforlife.com/p/what-does-a-good-digital-life-look</link><guid isPermaLink="false">https://www.techforlife.com/p/what-does-a-good-digital-life-look</guid><dc:creator><![CDATA[R.B.
Griggs]]></dc:creator><pubDate>Fri, 28 Jun 2024 15:07:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RUcH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RUcH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RUcH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 424w, https://substackcdn.com/image/fetch/$s_!RUcH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 848w, https://substackcdn.com/image/fetch/$s_!RUcH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 1272w, https://substackcdn.com/image/fetch/$s_!RUcH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RUcH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/beda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1482976,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RUcH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 424w, https://substackcdn.com/image/fetch/$s_!RUcH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 848w, https://substackcdn.com/image/fetch/$s_!RUcH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 1272w, https://substackcdn.com/image/fetch/$s_!RUcH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbeda143e-fbd8-49cb-99b9-238aab4be76f_1456x816.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Imagine a precocious thirteen-year-old girl comes to you for advice. She wants to know what she should do to live a &#8220;good life&#8221; in the face of digital technology. What would you tell her?&nbsp;</p><p>You can&#8217;t just tell her to stay off smartphones and avoid social media. This type of answer ignores the lived reality of the digital native. Her life is immersed in digital technology. Her entire social life is mediated through her phone. All of her friends are on social media. Any definition of a good digital life must account for some minimum digital engagement.</p><p>Surely you could offer plenty of sound advice based on your personal experience. 
But how much of that experience would still make sense to her?&nbsp;What are the values you could reference that have a clear digital equivalent?&nbsp;Who are the role models you could point to?&nbsp;</p><p>In trying to explain values and behaviors that might help her, you realize that a lot of translation is necessary to apply them to her digital reality. Sometimes this works, but sometimes she just looks confused. You begin to realize that much of what you thought constituted a good life has changed. It&#8217;s almost as if a <strong>contextual chasm</strong> has formed between her life and yours that makes it difficult to transfer much of your wisdom.</p><p>Is it any wonder that digital natives <a href="https://www.afterbabel.com/p/best-of-2023">seem to be struggling</a> so much to adapt to their new digital landscape? How much of this can be explained by the previous generation lacking any real &#8220;digital wisdom&#8221; to impart? How can we expect them to live good digital lives when so little of the advice on offer is relevant to their digital reality?</p><h2>How to define a good life</h2><p>For almost all of human history, any child would have had no problems answering this question. They would have been born into a culture that began etching a moral blueprint into their psyche from day one. Everything in that child&#8217;s life would have constantly reinforced the values and behaviors that defined a good life.</p><p>Part of this education would have included <strong>moral habituation</strong>&#8212;drilling into the child the exact behaviors of right and wrong that were required to achieve a good life. And part of that habituation would have included the valorization of <strong>moral exemplars</strong>&#8212;the community leaders, ancestors, and mythical figures that personified the specific traits and virtues that embodied the good life.</p><p>This type of moral education depended on contexts between generations remaining largely the same. 
If a culture is stable across time, then each generation can assure the next that they&#8217;ve &#8220;seen it all before&#8221;. They can point to the hard-won lessons from the past as being just as relevant today as they ever were. They can tell stories of ancient ancestors living lives that look essentially the same as those in the present day.</p><p>But once cultural contexts start changing, the efficacy of this entire ethical framework begins to erode. The question then becomes: <strong>is there a rate of change where it breaks down completely?</strong>&nbsp;</p><p>What does the rate of cultural change say about that culture&#8217;s capacity to transfer wisdom across generations? And what does <em>that</em> say about the capacity of our current ethical frameworks to survive our technological future?</p><p>We can better analyze these questions by seeing how increasing rates of cultural change relate to the transmission of values and wisdom across generations.</p><h3>Stage 1: Meta-generational</h3><p>For most of human civilization, beliefs and values remained constant across generations. An extremely slow rate of cultural change was our historical norm. Any definition of a good life could apply to every generation.</p><p>When the rate of change is barely discernible, values can become deeply embedded in specific cultural contexts. Stories and myths convey precisely how values should be embodied. Specific rituals provide scripts for exactly how certain lives should be lived. There is almost zero room to deviate from whatever &#8220;good life&#8221; is assigned to you at birth.</p><p>This method worked for almost all of human history. It&#8217;s exactly what Aristotle outlined in his study of virtue ethics, and some version of it has been constant across all historic cultures.
Of course these lives could be fragile to disruption, but as long as cultures remained stable, rich definitions of good lives could be sustained across generations.&nbsp;</p><h3>Stage 2: Multi-generational</h3><p>No culture is ever perfectly frozen in time. Even the most conservative societies are experiencing a constant rate of change. This is almost always a good thing&#8212;moral progress depends on culture changing enough to loosen its hold on the imagination of what&#8217;s possible.</p><p>Typically change isn&#8217;t significant enough to disrupt a society. But wars, politics, and religion always have the potential to change culture enough to create friction across generations. The best ethical frameworks can overcome this friction by anticipating some degree of change and incorporating it into the wisdom of the next generations.</p><p>Elders play a crucial role in this process, serving as a bridge between the past and present. They can help younger generations understand the core principles behind traditional values, while guiding them in adapting these principles to new contexts. This approach allows cultural norms to gradually evolve across generations without completely severing ties to the past.&nbsp;</p><h3>Stage 3: Inter-generational</h3><p>Sometimes change is so abrupt that a <strong>contextual chasm</strong> can open between generations. Contexts can shift so abruptly that traditional ideas of a good life no longer apply&#8212;not because they are no longer true, but because the context in which those lives were possible no longer exists.&nbsp;</p><p>When contexts change this quickly, transferring values across generations becomes almost impossible. The next generation sees the wisdom of previous generations as something to rebel against, not something to venerate.</p><p>This is how cultures can transform in a single generation, often accompanied by social violence and political upheaval. 
The results are entirely new definitions of what a good life should be. Recent examples include Europe during the First World War, America in the 1960s, and China during the Cultural Revolution.</p><p>It&#8217;s also happening today, in the generational divide of the <strong>digital revolution</strong>. Grandparents today arguably have more in common with grandparents from hundreds of years ago than they do with their own grandchildren. Whatever a parent today thinks might have helped them navigate adolescence would have almost no relevance to their own children.&nbsp;</p><p>If this digital divide is any indication, then our practices for transferring wisdom across generations may no longer be sufficient for the current pace of technological disruption.</p><h2>The new normal</h2><p>There&#8217;s a final stage for how fast contexts can change: <strong>Intra-generational change.&nbsp;</strong></p><p>This is when change happens so quickly that it doesn&#8217;t just create friction between generations, <strong>it creates friction during a single lifetime</strong>.&nbsp; Values that might have been relevant at one point in your life may not apply at others. Lessons you were taught in childhood may no longer make sense. Just when you thought you had figured out some hard-earned wisdom, that entire environment changes on you.</p><p>With sufficient technological disruption, a chasm can also open up between those who adopt certain technologies and those who don&#8217;t. You can see this already developing with AI, where some teens are admitting to <a href="https://www.theverge.com/2024/5/4/24144763/ai-chatbot-friends-character-teens">becoming addicted to interacting with AI characters</a>. Soon the first generation of &#8220;AI natives&#8221; will be growing up interacting directly with AI agents.
They&#8217;ll feel closer to their AI nannies than to anyone in their family, just as many digital natives feel like their &#8220;true self&#8221; only when they are online.</p><p>When disruptive technology moves faster than any single generation&#8217;s ability to adapt to it, any generational transfer of values becomes impossible. It will only get harder and harder to define what constitutes a good life when the possibility of that life is changing before our very eyes.&nbsp;</p><p>Will intra-generational change be the new norm? It&#8217;s possible that digital represents a one-time event, and somehow we&#8217;ll return to some equilibrium of stable cultural change. But this would be a radical inversion of today&#8217;s technological trends. Digital itself continues to evolve, and we&#8217;re just starting on similar trajectories of disruption with AI, robotics, bio-engineering, and a whole host of other technologies that will radically reshape our world.</p><p>Soon we may be confronting a reality where the only constant will be change itself.&nbsp;</p><h2>A new ethics of change</h2><p>If accelerating change is our new normal, then we&#8217;ll need to fundamentally rethink our approach to ethics, wisdom, and the pursuit of a good life. But how?</p><p>It starts by accepting the reality of change itself. Change is not something we must fear by definition. Adapting to increasing rates of change is something we have evolved to excel at. After all, we&#8217;re currently living through rates of change that somehow feel &#8220;normal&#8221;, yet would appear <strong>utterly frightening to anyone living two hundred years ago. </strong>Tomorrow&#8217;s rate of change will feel similarly frightening to us, yet we will continue to adapt.</p><p>Once we accept accelerating change, we can then consider new ethical frameworks that can be more resilient to rapid contextual shifts.
The following are some approaches that might inform these new frameworks:</p><ol><li><p><strong>Abstracting core values</strong>: Instead of focusing on values and behaviors that are anchored in specific contexts, we need to identify and cultivate more universal human values that can adapt to changing circumstances.</p></li><li><p><strong>Venerating adaptability</strong>: The ability to navigate change itself should be recognized as a crucial virtue in our rapidly evolving world.</p></li><li><p><strong>Developing ethical flexibility</strong>: We must teach the skills of ethical reasoning and decision-making from first principles, rather than relying solely on fixed moral rules and lessons.</p></li><li><p><strong>Embracing technological awareness</strong>: Understanding the essence of technology and its impact on the human condition must become a core component of moral education.</p></li><li><p><strong>Fostering intergenerational dialogue</strong>: We need to restore meaningful exchange between generations, allowing for mutual learning that can transcend context.</p></li></ol><p>Reimagining what it means to live a good life doesn't have to mean abandoning all traditional values or succumbing to moral relativism. But it will demand that we uncover the core principles that have guided human flourishing throughout history, and that we develop new practices for applying these principles in rapidly changing contexts. Our ideas about wisdom, moral education, and role models will need to become much more dynamic.</p><p>For the thirteen-year-old girl seeking advice on how to live a good digital life, our answer might sound something like this:&nbsp;</p><p><em>Cultivate the wisdom to discern what truly matters amidst the noise of constant innovation. Develop the flexibility to adapt your values to new contexts without losing sight of your core principles. 
And above all, recognize that the pursuit of a good life is not about achieving some fixed ideal, but about being true to your ideals at each step of your journey&#8212;whatever that journey may look like.</em></p><div><hr></div><p><em>This post is the second in a series exploring our new digital reality. <a href="https://www.techforlife.com/p/the-question-concerning-digital-technology">The first post</a> explored the philosopher Martin Heidegger and his approach to understanding the essence of technology. </em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.techforlife.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Tech For Life is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Life is Special Enough]]></title><description><![CDATA[Humanity's &#8220;secret sauce&#8221; in the face of advanced technology]]></description><link>https://www.techforlife.com/p/life-is-special-enough</link><guid isPermaLink="false">https://www.techforlife.com/p/life-is-special-enough</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Wed, 29 May 2024 18:49:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3kH5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3kH5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3kH5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!3kH5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!3kH5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!3kH5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3kH5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp" width="1456" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:644650,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3kH5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!3kH5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!3kH5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!3kH5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffbf12925-3d8d-4eb2-afb6-d0398eeb6606_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">the secret sauce is in there somewhere</figcaption></figure></div><p>As a species, we humans seem to think quite highly of ourselves. We have no problems coming up with reasons to justify how special (we think) we are. 
Other species could rightly accuse us of an inflated sense of self-importance.</p><p>Historical justifications for our exceptional nature make quite a list:&nbsp;</p><ul><li><p>Only humans are <em>the distinct creations of a loving God</em>.&nbsp;</p></li><li><p>Only humans <em>have been endowed with a soul</em>.&nbsp;</p></li><li><p>Only humans are <em>at the center of the universe</em>.&nbsp;</p></li><li><p>Only humans are <em>rationally intelligent</em>.&nbsp;</p></li><li><p>Only humans have <em>a self-reflective consciousness</em>.&nbsp;</p></li><li><p>Only humans are <em>masters of language</em>.&nbsp;</p></li><li><p>Only humans have <em>a free will</em>.&nbsp;</p></li><li><p>Only humans are <em>meaning-making and purpose-driven beings</em>.</p></li></ul><p>There are obvious advantages for humanity to try to carve out a special place in the cosmos. Feeling unique and exceptional fosters in-group cohesion and helps create a shared identity. It justifies dominance over nature and other species. It reduces some of the existential terror that comes with our impermanence and mortality.</p><p>Yet the story of humanity is in some ways the story of science and technology systematically dismantling the entire idea that we are uniquely special. Things we once thought were eternally sacred have been reduced to mundane scientific explanations. Abilities we thought were irreducibly human have been perfected and automated by technological innovations.</p><p>Even today, we desperately want to think that there&#8217;s something special about being human.
As advanced technology promises to match and exceed the remaining human capacities that might define us as special, we wonder &#8220;what&#8217;s left?&#8221;</p><p>Soon we&#8217;ll have increasing powers to merge with machines, to genetically alter our own species, and to create intelligences that vastly exceed our own.&nbsp;This acceleration confronts us with some uncomfortable questions: Is there anything that humans can do that can&#8217;t be perfected or automated by a machine?&nbsp; Is there anything special about the human condition that is &#8220;technology-proof&#8221;?</p><p>Or to put it more concisely: <strong>In a future with advanced technology, what is the point of the human being?</strong></p><h2>The &#8220;secret sauce&#8221; of humanity</h2><p>If we could locate some essential quality of human nature, then perhaps it <em>could</em> help us define a unique role for humanity in our technological future. But this is much tougher than it sounds. Not only would it need to be compatible with scientific consensus, it would also need to be impervious to the relentless march of technological progress. And it would need to be something that resonates with all of humanity, regardless of our cultural differences.</p><p>Do any justifications of human specialness hold up? A quick survey of the list above doesn&#8217;t offer much hope. The notion of Earth being the center of the universe was swiftly discarded once technology enabled astronomical observations. Similarly, the idea of humans as distinct creations of a divine being has lost much of its persuasive force in the face of evolutionary evidence.</p><p>Along with these empirical attacks, science has also mounted theoretical challenges.
Phenomena like consciousness and free will, once thought to be hallmarks of human exceptionalism, have been reframed as mere byproducts of our neural architecture or delusions of subjective experience.</p><p>As technology advances, even more supposed human specialties will be called into question. Our intelligence and our mastery of language are increasingly being matched&#8212;and in some cases surpassed&#8212;by artificial systems.</p><p>So what is left? Is there something about human beings that can&#8217;t be reduced to mechanistic functions? That isn&#8217;t explainable by science or evolution? That could never be reproduced or perfected by future technology? Is there something <strong>irreducibly human,</strong> something inherent in the human condition that defies explainability, reduction, or reproduction?</p><p>This is often called the &#8220;<strong>secret sauce</strong>&#8221; argument&#8212;the positing of something mysterious or magical that will forever set humanity apart from everything else in the universe.</p><p>For many, this question comes down to religious or spiritual beliefs. We are the creations of a loving God, endowed with moral agency and a divine spirit that can never be replicated in soulless machines. Yet, such arguments must also grapple with the darker aspects of human nature. Moreover, what if artificial agents could be designed for greater moral agency, compassion, and environmental stewardship than anything humans have been capable of?</p><p>The more scientifically inclined cling to the hope that consciousness or intelligence will forever defy technological replication. Perhaps the brain operates on quantum principles or analog processes that can never be fully digitized. Or perhaps consciousness is only possible in carbon-based biological organisms. But as of yet, no conclusive evidence has emerged to fully justify these claims.</p><p>Others make philosophical arguments about human agency and free will. 
We can always choose <em>not</em> to act like machines and to defy predetermined outcomes. These arguments might be theoretically appealing, but they offer little practical help in the face of increasingly autonomous and self-directed artificial systems.</p><p>Perhaps the problem is more fundamental. What each of these &#8220;secret sauce&#8221; arguments is trying to locate is some <strong>essential</strong> quality that defines humanity&#8217;s unique nature. But what if no such essential quality exists?</p><p>I say fine. The sooner we give up on the idea of some essential quality, the sooner we can stop being threatened whenever technology appears to improve or replace it, and the sooner we can embrace a different approach&#8212;one that stands a greater chance of informing a sustainable relationship between a flourishing humanity and advanced technology.</p><h2>The contingency argument</h2><p>Instead of arguing for some <em>essential</em> reason, I am proposing a <em>contingent</em> one.&nbsp;</p><p>We can call it the <strong>contingency argument</strong>: that what makes humans special isn&#8217;t some single quality. What makes us special is the simple fact that we&#8217;re here. We exist. We are the contingent result of life itself, and life is special enough.&nbsp;</p><p>By contingent, I mean that there&#8217;s nothing essential that could possibly explain life in general or your life in particular. Life could just as easily have not happened, but here we are. We don&#8217;t know <em>why</em> or <em>how</em> life happened; we just know that it <em>did</em> happen. </p><p>Maybe it was some God(s), maybe it was some universal consciousness, or maybe it was the law of large numbers churning through physical laws long enough to find the magic formula. Regardless, contingency will always be a part of humanity&#8217;s origins.
The reason we are here and the dinosaurs are not comes down not to some essential character of the universe but to pure chance.&nbsp;</p><p>Yet this contingency is <strong>utterly special</strong>. As far as we know, life has only happened once. This makes life itself the most precious thing in the entire universe. It also makes Earth the most interesting <em>planet</em> in the universe. Everything outside of Earth&#8217;s ambit&#8212;for all its near-infinite scale in matter and energy&#8212;is completely and utterly without life. It follows the determined path of physical law and nothing more.&nbsp;</p><p>Somehow Earth alone has harbored the conditions necessary for life to evolve and flourish, culminating in the most advanced form of life that has ever existed: us, human beings. We are the only species with the cognitive capacity to reflect on our own evolution. Part of that capacity includes a conscious self-awareness, which we use to inject meaning into the universe itself. The history of the universe becomes <em>our</em> history, the history of humankind. The cosmos becomes the playground of our purpose.</p><p>This contingency means that everything we think is special about humanity&#8212;our creativity, our intelligence, our rationality&#8212;is utterly contingent on what was adaptive for our evolutionary environment. Nothing about our capacities is perfect or ideal. They were selected to be precisely attuned to Earth&#8217;s conditions and nothing more. </p><p>The most remarkable thing about human capabilities is how suboptimal they are. We learn best by failing, over and over again. We embrace errors and mistakes, transforming them into inspiration and serendipity. Long walks, showers, and naps play key productive functions in the history of our intellectual achievements. Our irrationality is one of the deepest sources of our imaginative powers.
It is this <strong>sub</strong>optimality that makes our capabilities so <em>optimally human</em>.</p><p>Perhaps the most potent aspect of our contingent nature is our reality as finite beings. We experience suffering, and each of us will die. Our experience of life always falls short of our aspirations. Yet this very finitude is what fuels everything that makes us human: our drive for transcendence, our quest for meaning, our ceaseless push to expand the boundaries of possibility. </p><p>Our most contingent values are the ones that we value most: our judgments, aesthetics, and moral values. These can only result from the clash of finitude and transcendence that we each experience at the heart of the human condition. They are not individual attributes that can be detached from the wholeness of life. They can never be optimized for the deterministic machinations of digital artificialization. </p><p>Life is not a machine&#8212;it&#8217;s too ambiguous, too mysterious, too indeterminate. Nor is life a computation or an algorithm. Life is so much bigger than these things. There is an integrated wholeness to life that imparts an intentionality onto every human capability that can never be captured by that capability alone.</p><p>So if humanity has any &#8220;secret sauce&#8221;, it is this: we are here. As far as we know, we are the most advanced form of life that exists anywhere in the universe. There&#8217;s no need to identify some essential quality of humanity to prop up our ego in the face of technological advancement. All of that chance and paradox and failure has led to a form of life that is uniquely capable of judgment, purpose, and meaning. We are at the leading edge of life&#8217;s possibilities, and that alone is worthy of veneration. Life is special enough.</p><p>It&#8217;s only when we reject our lineage to life that we become tempted to remake the human as determined, quantified, and completely self-made.
When we forget our contingency we forget our humanity. That&#8217;s when we are most likely to elevate machines as the sole means of comparison for all human achievement. That&#8217;s when we cede our agency to that which can never generate human values or make human judgments.</p><p>The path forward, then, is to forge a relationship with advanced technologies that recognizes the contingent nature of life itself, while empowering us to expand the possibilities of what life can become. It is a path of both conserving life&#8217;s essence and redefining its limits.&nbsp; A path of continual becoming, fueled by the inexhaustible drive of imagination and wonder at the heart of the human condition.</p>]]></content:encoded></item><item><title><![CDATA[The Question Concerning (Digital) Technology ]]></title><description><![CDATA[An attempt to understand our digital world - part one]]></description><link>https://www.techforlife.com/p/the-question-concerning-digital-technology</link><guid isPermaLink="false">https://www.techforlife.com/p/the-question-concerning-digital-technology</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Fri, 10 May 2024 16:48:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wiOd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wiOd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wiOd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wiOd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wiOd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wiOd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wiOd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg" width="347" height="522" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:522,&quot;width&quot;:347,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:38789,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wiOd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wiOd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 848w, https://substackcdn.com/image/fetch/$s_!wiOd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wiOd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd90d872a-9428-4234-b06e-a04983e3c13d_347x522.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To understand technology we need to understand how it affects <strong>us</strong>&#8212;the human beings that both create technology and are created by it. If we forget this symbiotic relationship between technology and humanity, then our understanding will always be incomplete.</p><p>The entire point of such understanding is to ensure that we can have <strong>a free relationship to technology</strong>. To act freely with technology means that we must understand it.</p><p>But this is much harder than it sounds. Today, technology moves so quickly that it feels like it&#8217;s changing faster than our ability to analyze it. 
We&#8217;re diving into technologies that have more potential than ever to disrupt this symbiotic relationship, and we&#8217;re doing so with very few tools and methodologies that can adapt to an accelerating rate of change.</p><p>So we somehow need analysis that can get to the core of the human condition, but in ways that aren&#8217;t frozen in specific moments of technological time. We need to account for the broad patterns of technology while remaining invariant to certain degrees of technological progress. Even better, we want this analysis to be predictive about where our human condition might be going.</p><p>One thinker who might help us is the German philosopher <strong><a href="https://en.wikipedia.org/wiki/Martin_Heidegger">Martin Heidegger</a></strong>. Heidegger towers over 20<sup>th</sup> century philosophy. His work <em><a href="https://en.wikipedia.org/wiki/Being_and_Time">Being and Time</a></em> is in the pantheon of humanity&#8217;s greatest philosophical achievements.</p><p>But wow, does he come with complications. His work is notoriously dense and difficult to process. Unless you can read German, you are depending on a translation to understand the material. Best of all, Heidegger seems to take a sadistic pleasure in inventing new words to define difficult concepts, and then invents <em>more</em> new words to explain those original new words. Oh, and then there&#8217;s the whole Nazi business<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p>In 1954 Heidegger published the essay translated as <em><a href="https://www.amazon.com/Question-Concerning-Technology-Perennial-Thought/dp/0062290703">The Question Concerning Technology</a></em>. 
I recently spent a week studying it as part of <a href="https://cosmosinstitute.substack.com/p/reading-group-recovering-the-intellectual">a reading group</a> seeking to recover the intellectual origins of technology. So that&#8217;s my excuse for <em>reading</em> Heidegger. </p><p>But what I found in trying to <em>understand</em> Heidegger was a <strong>methodology</strong> for analyzing technology that Heidegger used to generate powerful insights into the nature of humanity itself. The question that struck me upon finishing the reading was: <strong>would the same approach generate equally powerful insights today?</strong></p><p>This series will be an attempt to answer that question.&nbsp;</p><p>But before we can address today&#8217;s technology, we need to understand Heidegger&#8217;s approach. This will be my humble attempt<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> to provide a legible summary of <em>The Question Concerning Technology</em>.</p><h2>It starts with truth</h2><p>The key to understanding <em>The Question Concerning Technology</em> is that it&#8217;s not ultimately about anything technological. To set the stage for where Heidegger is going, we first need to talk about <strong>truth</strong>.</p><p>For Heidegger, truth is not just about how much our beliefs might correspond with reality. It&#8217;s much weirder than that. For Heidegger, truth gets to the mystery of what it means to be a human being. Truth is what makes our experience of being in the world intelligible.</p><p>In this sense, truth is not just epistemic, it&#8217;s revelatory. You could even say it&#8217;s spiritual. It reveals to us the nature of our own being by unlocking a new awareness of what is possible.&nbsp;</p><p>But this also makes truth mysterious. It is always something revealed in history, in specific times and places. 
This revealing happens <em>through</em> us, through human activities like art, science, and yes, even technology. In this way, <strong>we are co-revealers of truth</strong>. We are active participants in the process.</p><p>This process of revealing truth also <em>defines</em> us. For Heidegger, <strong>the essence of man is to freely participate in this truth process</strong>&#8212;the revealing of truth and how it defines us. We can freely participate in it by constantly questioning it&#8212;by understanding it at its very <em>essence</em>. We are the beings that question our own being.</p><p>So when Heidegger wants to understand the essence of technology, he wants to understand how technology affects how truth is revealed, and how it affects our role in that process.</p><h2>Technology</h2><p>Keeping truth in mind, we can now approach Heidegger&#8217;s question concerning technology.</p><p>The goal of Heidegger&#8217;s project is simple: he wants to have a <em>free</em> relationship with technology. If we use technology without thinking&#8212;for example, if we simply assume that technology is neutral or that it has no potential to affect our nature&#8212;then we risk becoming blind to technology. To act freely with technology means that we must understand it.</p><p>The first thing that Heidegger makes clear is that he is not interested in any <em>particular</em> technology. He&#8217;s interested in the &#8220;essence&#8221; of technology&#8212;what defines <em>all</em> technology. We can only have a free relationship with technology if we understand its very essence.</p><p>So what is the <em>essence</em> of technology?&nbsp;</p><h2>Causality</h2><p>Heidegger starts with the obvious. <strong>Technology is a human activity that employs means towards ends. </strong>This is the common answer understood by everyone, and Heidegger agrees that it is correct.</p><p>But just because an answer is correct does not mean that it is exhaustive. There may be more to uncover. 
Heidegger wants to keep digging.&nbsp;</p><p>He latches onto this instrumental idea of <em>means</em> and <em>ends</em>. What are we really talking about when we employ means to pursue ends?&nbsp; We&#8217;re talking about <strong>causality</strong>. Heidegger defines causality as a process of <em>becoming</em>&#8212;of starting something new that becomes present in the world.</p><p>Heidegger uses an example of a silver chalice to help explain the essence of causality. Causality isn&#8217;t just about the <em>means</em>, or how the silversmith forges the silver into the form of the chalice. Causality also includes the <em>ends</em>&#8212;how the religious service that incorporates the chalice informs every aspect of the silversmith&#8217;s craft.</p><p>The chalice is caused by both means <em>and</em> ends&#8212;by <em>how</em> it&#8217;s made and what it's made <em>for</em>. The silversmith guides this causality by unifying both the means and ends. In doing so, he brings something new to be present in the world.</p><h2>Revealing</h2><p>This unified causality helps Heidegger make a distinction between earlier technology and modern technology.</p><p>For pre-modern technology, the connection between the means and ends is obvious. The trees are cut down to build the home. The field is farmed to provide the grain you eat. The river turns the windmill that grinds the grain into flour. The trees are seen in the home, the field in the food, the river in the flour.</p><p>The means and ends are also intimately interdependent, reciprocally revealing more truths about each other. By understanding the nature of silver more fully, the silversmith can more artfully reveal the chalice. By understanding the purpose of the chalice, the silversmith can reveal more of the silver.</p><p>What does this say about the essence of technology? That technology is much more than just means to an end. Technology is also a way of <strong>revealing</strong>&#8212;of unconcealing truth. 
The silversmith reveals more of the silver by bringing out more of its nature in the chalice.&nbsp;</p><p>Heidegger called this form of revealing <em><strong>poiesis</strong></em>, a Greek word meaning &#8220;bringing forth&#8221;. Poiesis is the process of bringing something new into existence.</p><h2>Standing-reserve&nbsp;</h2><p>This mode of revealing makes sense for pre-modern technologies. But what about modern technologies?&nbsp;</p><p>For Heidegger, there is a clear difference&#8212;the immediacy between means and ends is severed in modern technologies. The trees are converted into cellulose for paper. The field is unearthed for coal. The river is dammed up to create hydro-electricity. Nothing of the inherent complexities of the means is revealed in the ends. Instead, they are reduced, transformed, and standardized.</p><p>This is a new type of revealing, based on <strong>challenging</strong> nature. The soil of the field is challenged to be revealed as mineral deposits and stripped of its capacity to grow. The river is secured as energy, ready to be further ordered as power for the factory.&nbsp; The trees of the forest are transformed into the uniformity of cellulose. </p><p>This is no longer a &#8220;bringing forth&#8221;, but a &#8220;challenging forth&#8221;.&nbsp; When nature is challenged in this way, a part of its essence can no longer be &#8220;brought forth&#8221;.</p><p>Even more, the coal and electricity and cellulose <strong>are never ends in themselves</strong>. They are always means for much <em>bigger</em> processes of technological assemblies. They are ordered and standardized into <em>reserves</em>, on call and waiting to be used and consumed by ever larger processes of ordering. </p><p>Heidegger calls the result of this form of revealing &#8220;standing-reserve&#8221;.&nbsp;</p><h2>Enframing</h2><p>This mode of revealing&#8212;this &#8220;challenging forth&#8221;&#8212;doesn&#8217;t just apply to nature. It also applies to us.
Whenever we view the world through the eyes of &#8220;securing and ordering&#8221;, we&#8217;re being challenged to see objects as what Heidegger calls &#8220;objectlessness&#8221;. We see complexity and reduce it into standardized metrics of uniformity.</p><p>We analyze the forest by how effectively it can be converted into a maximum yield at minimum expense. HR evaluates human beings by how effectively they can conform to a scripted role of productive behavior. We even rank and rate the river by how effectively it can be ordered into a beautiful tourist getaway or the backdrop of a carefully composed selfie (ready to be further ordered into a constructed digital narrative).</p><p>Modern technology only rewards standing-reserve. If it can be ordered and standardized, it can be put into productive use. As we become more entangled in these larger forces of technical production, we become more compelled to reveal the world through this lens.</p><p>What happens if this compulsion to order becomes increasingly irresistible? We begin to reveal <em>everything</em> as standing-reserve. For Heidegger, this is the essence of technology: the challenge we feel to view everything through the lens of standing-reserve.&nbsp;</p><p>Heidegger calls this irresistible compulsion &#8220;enframing&#8221;.</p><h2>The extreme danger</h2><p>But Heidegger doesn&#8217;t stop there. He wants to go further. He wants to again question if there&#8217;s something more to the essence of technology.&nbsp;</p><p>What type of thing is this enframing anyway?&nbsp; And what does it say about Heidegger&#8217;s original concerns around truth, and our role in revealing it?&nbsp;</p><p>Heidegger seeks to understand the essence of enframing by contrasting it with poiesis, the &#8220;bringing forth&#8221; that Heidegger associated with pre-modern technology.</p><p>With poiesis, we are <em>proactive</em> participants. We are actively investigating the world to more artfully bring it into being. 
We seek to co-reveal both the means and the ends more fully. We are the initiators of intentional revealing.</p><p>With enframing, we are <em>reactive</em> subjects. We are compelled to view <em>everything</em> as standing-reserve, because that is the currency our modern technological world runs on. Only objects that have been standardized and ordered can be valued. Standing-reserve becomes the terms of engagement, until those are the only terms we know.</p><p>When our world becomes dominated by enframing, our role becomes nothing more than to passively order the standing-reserve into a mass of &#8220;objectlessness&#8221;. We are no longer using technology to reveal more of the world in terms of its particular complexity. <strong>Instead, technology is using </strong><em><strong>us</strong></em><strong> to reveal more of the world on its terms of standard uniformity.</strong> This is how man himself becomes standing-reserve.&nbsp;</p><p>But there&#8217;s a danger even greater than this. Enframing doesn&#8217;t just conceal all other ways of revealing truth. Enframing can conceal the act of revealing itself, and the active role we play in it. This is what Heidegger calls the &#8220;extreme danger&#8221;.</p><p>What if enframing becomes so automatic that we forget the role we play in revealing it? What if we lose our capacity to question truth and how it is revealed? If we can no longer question technology down to its very essence, how would it then be possible to have a free relationship to technology? How would it be possible to have a free relationship to anything?</p><h2>The saving power</h2><p>But even in this extreme danger, Heidegger also sees a glimmer of hope.</p><p>Yes, enframing can appear irresistible in how it conceals both all other modes of revealing <em>and</em> our active role in the process.&nbsp;</p><p>But enframing also reveals modern technology&#8217;s utter dependence on us.
The world of modern technology shows us how completely the world can change when we reveal truths in different ways. The power of enframing is also in some sense the power of man as co-revealer, and the essential role we play in the process.</p><p>By understanding this essential role, we can come to understand what Heidegger calls our ultimate dignity: keeping a faithful watch over the unconcealment of truth. In this sense, the dignity of man lies in our role as<strong> custodians of truth</strong>, and of how truth is revealed to the world, whether through enframing or poiesis or other forms entirely.</p><p>This essence contains the possibility of a free relationship to technology. We become truly free in relation to technology when we become, as Heidegger says, &#8220;the ones who listen and hear, and not just the ones who are simply compelled to obey&#8221;.</p><h2>The power of the poetic</h2><p>Is this really possible? Is there anything that can take this &#8220;saving power&#8221; and make it more real?&nbsp;</p><p>Heidegger isn&#8217;t sure. He again goes back to the ancient Greeks, to investigate <em><strong>techne</strong></em>, the Greek term for technology.&nbsp;</p><p>Techne isn&#8217;t simply a noun that defines pre-modern technologies. For the Greeks, techne was more like a verb that defines a revealing, a &#8220;bringing forth&#8221; into the world of those things that cannot bring themselves forth. In the same way that the chalice can only be revealed through the silversmith, the poem, the basket, and the sculpture can only be revealed <em><strong>through</strong></em> the creative powers of man.</p><p>Heidegger wants us to consider techne&#8217;s role in &#8220;bringing-forth&#8221; more fully:</p><blockquote><p>&#8220;There was a time when it was not technology alone that bore the name techne. Once there was a time when <strong>the bringing-forth of the true into the beautiful</strong> was called techne.
And the poiesis of the<strong> fine arts</strong> also was called techne.&#8221;</p></blockquote><p>By art, Heidegger is not just talking about aesthetics or artistic representations. Again, his definition is much weirder than that. A true work of art doesn't just depict <em>what is</em>, but <em>what can be</em>. It reveals new ways for beings to be present in the world.&nbsp;</p><p>In another work, Heidegger uses the example of a Greek temple&#8212;it doesn't just artistically represent the spiritual, but articulates an entirely new world of meanings, values, and understandings. The temple reveals a world where gods, rituals, and humans can emerge into &#8220;unconcealment&#8221;. Entirely new ways of being are made possible. This is art as the <em>poetical</em>, as that which participates in <em>poiesis</em>, in the &#8220;bringing-forth&#8221;.</p><p>If modern technology can no longer &#8220;bring forth&#8221; in the sense of <em>poiesis</em>, other forms of <em>techne</em> still can. Heidegger sees this possibility in art, in the <em>essence</em> of the poetical. And in this essence he sees the potential for a free relationship to technology:</p><blockquote><p>&#8220;Because the essence of technology is nothing technological, essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it. <strong>Such a realm is art.</strong>&#8221;</p></blockquote><p>Can art remind us of our essential dignity? Can art restore in us our rightful role as custodians of truth? Can art reveal a more primal truth than enframing, in a way that enables us to have a free relationship to technology?</p><div><hr></div><p>So Heidegger has taken us on a journey of questioning technology&#8212;questioning it over and over again&#8212;until we arrive at its very essence. 
And with this <em>essential</em> understanding, we can now hope to have a free relationship to it.</p><p>The question is: do we have to follow Heidegger all the way to this destination of being and art and revealing? Or is there something valuable in the journey itself? Perhaps something worth recovering that can help us have a free relationship with technology <em>today</em>? </p><p>I&#8217;ll pick these questions up in the next part of the series by applying this methodology to our current digital landscape.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Which I find so tiresome I&#8217;m not even going to link to an exhaustive treatment.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Some caveats: I only have a broad understanding of Heidegger beyond this reading, and try to minimize reference to the broader Heideggerian canon.
I do not read German. I put very little effort into understanding the nuances of the translation. I try to avoid using Heidegger&#8217;s invented words, but include a few to make it easier to reference the work. I use some of his other terms for convenience, like &#8220;man&#8221; for &#8220;humanity&#8221;.</p></div></div>]]></content:encoded></item><item><title><![CDATA[A Constraint Theory of Technology]]></title><description><![CDATA[Or how to get the technological future we want]]></description><link>https://www.techforlife.com/p/a-constraint-theory-of-technology</link><guid isPermaLink="false">https://www.techforlife.com/p/a-constraint-theory-of-technology</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Thu, 25 Apr 2024 15:09:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WEFv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WEFv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WEFv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 424w, https://substackcdn.com/image/fetch/$s_!WEFv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 848w, 
https://substackcdn.com/image/fetch/$s_!WEFv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 1272w, https://substackcdn.com/image/fetch/$s_!WEFv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WEFv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp" width="1456" height="817" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:817,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2532596,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WEFv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 424w, https://substackcdn.com/image/fetch/$s_!WEFv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 848w, 
https://substackcdn.com/image/fetch/$s_!WEFv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 1272w, https://substackcdn.com/image/fetch/$s_!WEFv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ef7ecdf-24fb-4f8f-a564-073cc967a1c6_1986x1114.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>How do you think about our technological future? Do you look forward to that future with a sense of hope? Or dread?
Or some mix of both?&nbsp;</p><p>What&#8217;s the difference between a techno-<em><strong>optimist</strong></em> and a techno-<em><strong>pessimist</strong></em> anyway?</p><p>Oddly enough, the difference doesn&#8217;t seem to be about specific technologies. If you read enough <a href="https://a16z.com/the-techno-optimist-manifesto/">manifestos</a> and <a href="https://medium.com/@newalbrecht/the-pessimism-of-techno-optimism-e5c80ddd3930">opinions</a> you&#8217;ll discover that specific technologies are rarely mentioned. And the difference isn&#8217;t about wanting to see a future with more or less technology in it. Even the pessimist will recognize the potential of technology to solve real problems and improve the quality of our lives.&nbsp;</p><p>In fact, the difference doesn&#8217;t seem to be about technology at all. At least not directly. The difference seems to be about <em>how</em> we determine which technologies are adopted by society, and <em>how</em> that adoption happens. In other words, the difference seems to be about what <strong>constrains </strong>technology.</p><p>By constraints, I mean all the things that both <em>limit</em> and <em>enable</em> technological innovation. Constraints are what impact and guide the social adoption of technology. They include incentives, regulations, norms, policies, guidelines, resources, status, and even existing technologies. They come from all aspects of society&#8212;the market, the state, culture, philosophy, religion, the planet, and our individual agency.&nbsp;</p><p>What do <strong>pessimists</strong> see when they look at our current system of constraints? They see privacy-eroding surveillance systems, social scoring networks, and disinformation campaigns. They see biodiversity loss, ocean acidification, and species extinction. They see isolation, depression, and anxiety that seems to correlate with our new online existence at population scales. 
In other words, they don&#8217;t see much compatibility with human flourishing. </p><p>Unless these constraints change, why would pessimists expect the future to be any better? Wouldn&#8217;t any advanced technology that came out of our current constraints only make things <em>worse</em>?&nbsp;</p><p>So the pessimists are not pessimistic about <em>technology per se</em>. They are pessimistic about our ability to <strong>steward</strong> technology, particularly advanced technologies, in ways that clearly align with human flourishing and other values they care about. It&#8217;s an inherent lack of trust in our constraints that leads to the general sense of dread that many pessimists have about the future.</p><p>The <strong>optimist</strong>, on the other hand, tends to misunderstand constraints altogether. They think that a constraint is just a <em>limit</em>, and anything that limits technology is necessarily bad. They see the free market as the best way to accelerate innovation, so anything that prevents the market from maximizing this acceleration should be removed.</p><p>But this is a mistake. <strong>Constraints are much more than limits</strong>. Constraints also <em>enable</em>. Imagine if there were no limits on the market. Technologies that emerged from an unconstrained market would still encounter limits; they would just be downstream of the market. They would be <em>reactive</em> instead of <em>proactive</em>. Limits like public backlash or legal challenges or political regulation are going to be much harsher precisely because they are reactive. So by wanting to remove all constraints, the optimists are actually removing the only effective means of acceleration.</p><p>But what if both are correct, in their own way? What if the pessimist is right in that something about our current constraints seems to be responsible for all the crappy technological outcomes we sense?
And what if the optimist is right in that the key to unlocking innovation is more about constraints and less about technology?</p><p>This is why I am proposing a <strong>constraint theory of technology</strong>. It offers a new perspective on how to responsibly guide our future with advanced technology. Instead of focusing on specific technological outcomes or utopian/dystopian scenarios, we need to focus on implementing the right <strong>system of constraints</strong> that can enable a broad spectrum of technological outcomes compatible with human flourishing.</p><p>To understand this theory, we first need to understand exactly what a constraint is. The job of a constraint is not just to <em>limit</em>. The proper job of the constraint is to find the <em>right</em> limit that maximizes <em>possibility</em>. That may sound paradoxical, but freedom is paradoxical, and proper constraints are about <em>enabling</em> freedom. So let&#8217;s start there.</p><h2>The paradox of freedom&nbsp;</h2><blockquote><p><em>&#8220;We might fancy some children playing on the flat grassy top of some tall island in the sea. So long as there was a wall round the cliff&#8217;s edge they could fling themselves into every frantic game and make the place the noisiest of nurseries. But the walls were knocked down, leaving the naked peril of the precipice. They did not fall over; but when their friends returned to them they were all huddled in terror in the center of the island; and their song had ceased.&#8221;</em></p><p><em>- G.K. Chesterton</em></p></blockquote><p>The paradox of freedom is that it can only flourish through constraint, like Chesterton's playground at the edge of a cliff.&nbsp;</p><p>A fence along the cliff edge does not <em>restrict</em> freedom, it <em>enables</em> freedom. It removes the possibility of falling over the edge from the child's consciousness, so they can play without fear and hence with maximum freedom. 
The fence is the <strong>constraint</strong> that, by limiting one negative possibility, enables a much larger space of positive possibilities. It removes one freedom to enable others.</p><p>Or consider <strong>art</strong>. The constraint of any artistic medium sets the boundaries that define creativity. A haiku imposes severe limits on the poetic form, but these limits are precisely what can push creative expression into the sublime. The process of art itself is an enabling constraint. All art starts with some vision that you attempt to make real. That vision changes the very first instant you begin to actualize it. The first dab of paint becomes a constraint that defines every subsequent brush stroke. This is the process that transforms the work from vision into art. Art is always pushing against the very limits of its own constraints, often transgressing them to define new forms of possibility.</p><p>Or consider <strong>evolution</strong>. Evolution does not pursue every possible variation. It&#8217;s not allowed to, because evolution has evolved to conserve what works, and to enforce constraints that ruthlessly protect these features.&nbsp; Up to 5% of human DNA has remained <a href="https://en.wikipedia.org/wiki/Human_genome#:~:text=Comparative%20genomics%20studies%20of%20mammalian,the%20vast%20majority%20of%20genes.">unchanged for 200 million years</a> and is responsible for constraining functional genetic expression.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Yet these are the exact constraints that enable variations in the remaining 95% to be adaptive.</p><p>In each example, the constraint is acting as the limit that <em>maximizes</em> possibility. The possibility space here is defined not by quantity, but by quality. Constraints narrow quantity to make more quality possible.
This is how a limit becomes <em>enabling</em>.&nbsp;</p><p>This tension between limit and possibility means that every constraint is a balancing act. On the one hand, the limit may not exclude enough negative possibility to enable the positive possibility to actualize. On the other hand, in that effort to exclude the negative, the limit may go too far and restrict too much of the possibility space.</p><p>This is especially true with <strong>technology</strong>. Without the right balance, you can get the pessimist&#8217;s nightmare of sub-optimal technological outcomes. You can also get the optimist&#8217;s fear of denying humanity all the benefits that would come from the innovations that aren&#8217;t happening.</p><p>Or in our case, you can get both.</p><h2>A brief history of constraints</h2><p>To understand how constraints define our technological future, we need to understand where constraints come from and how they work.&nbsp;</p><h4>Foundational Constraints</h4><p>Some constraints are <em>foundational</em>. They are relatively stable and provide a broad consensus. They are external to technology and are big enough to judge, guide and evaluate technology on their own terms.&nbsp;</p><p><strong>Religion</strong> has traditionally played a powerful constraining role. Technology in ancient China was seen as a <em>qi</em>, or a means of mediating engagement with the cosmos.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>  The constraint of putting technology in service of divine honor drove much of the innovation in architecture, materials, and art of the Middle Ages, as seen in its cathedrals and artworks. It can still be a powerful constraint today, as seen in the complex adoption rituals of certain religious communities like the Amish.&nbsp;</p><p><strong>Philosophy</strong> is also capable of establishing shared principles that can act as a judge and evaluator of technological progress.
In Ancient Greece, technology was seen more as an art form, and any technology that wasn&#8217;t in service of virtue was regarded as something less noble, to be pursued only when necessary. Yet like religion, philosophy seems less likely to be a productive constraint at scale in a multipolar world. We seem to have given up on ideas of natural law or the moral philosophy of what C.S. Lewis called &#8220;The Dao&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>&#8212;shared beliefs strong enough to constrain technology on sheer principle.&nbsp;</p><p>The <strong>planet</strong> is the most fundamental constraint, the limit of last resort. To exceed planetary limits is to invite disaster. The carrying capacity of our planet is real. Technologies that encroach upon the planetary require planetary-scale constraints, much like how international geopolitics was entirely recast to constrain nuclear technologies. Like all limits, it also has the potential to enable innovation, as seen in the exponential growth of battery and renewable energy capacities.</p><h4>Situational Constraints</h4><p>Other constraints are <em>situational</em>, responding to technological change and societal forces. These influences provide important constraints, though their authority and power will be more contextual and diffuse.</p><p>The <strong>state</strong> can uniquely enable technological innovation through huge government programs like the Manhattan Project, the Apollo program, or Operation Warp Speed. Constraints like FDA trials, while certainly flawed, provide enabling limits for sensitive technologies like drug discovery and therapeutics. Defense spending and DARPA have played significant historical roles in enabling disruptive innovations.</p><p><strong>Culture</strong> provides grounding norms that steer innovation in accordance with a society's deepest beliefs and ideals.
Yet in pluralistic, fragmented societies, culture may struggle to impose anything more than vague or superficial values. Or worse, technologies can fall prey to &#8220;culture wars&#8221;, where a weaponization of values grinds technological progress to a halt.</p><p><strong>Ethics</strong> provide frameworks for assessing the impact of technology on individuals, communities, and the environment. Failure to address ethical concerns can lead to backlash and public distrust at the state and cultural levels, so integrating ethical principles into technological development processes is essential for enabling innovation.</p><p>Finally, there are <strong>personal</strong> constraints. We each set up guidelines about what an appropriate relationship to technology should look like. Yet as these collective technological forces become more powerful, more economically embedded, and more inscrutable, we each will increasingly find ourselves with less agency to assert any kind of meaningful technological sovereignty.</p><h2>The market as an innovation idiot savant</h2><p>And then there&#8217;s the <strong>market</strong>, the biggest constraint of them all.&nbsp;</p><p>The market is both foundational <em>and</em> situational, a category all its own. It far and away plays the biggest role in both limiting and enabling the possibility space of future technologies. One of the best ways to predict the technologies of tomorrow is to study the market signals of today.&nbsp;</p><p>The most remarkable aspect of the market is that it doesn&#8217;t really care about technology, at least not directly. Technology just happens to be the best way to give the market what it <em>does</em> care about: more ways to meet customer demands cheaper, faster, and more efficiently. 
Innovation is a side effect.</p><p>In this way, the market is like the <strong>idiot-savant</strong> of technology constraints&#8212;a giant, unplanned incubator of innovation; prone to waste and redundancy and remarkable inefficiency; unwilling to cede to any values beyond profit and loss; externalizing any costs to society and the environment that it can get away with; blind to any second-order effects that exceed its immediate time horizon; all driven by the madness of advertising and the need to stimulate demand.&nbsp;</p><p>And yet this idiocy is somehow responsible for the vast majority of our technological progress. From a certain angle, it can appear nothing short of miraculous. The free market, with its &#8220;invisible hand&#8221; of decentralized coordination, funnels the productive forces of millions of innovators into a socially positive feedback loop.&nbsp;</p><p>The profit incentive creates a simple mechanism for ensuring that technologies become well adopted&#8212;those that provide value to the customer are rewarded, while those that harm the customer are not. The collective wisdom of the market is the closest thing we have to an objective arbiter of technology.&nbsp;</p><p>The fact that the market has no need for values or religion or philosophy is a feature, not a bug. We can skip all the political debates, the religious uncertainty, and the cultural confusion. The price signal cuts through them all, showing us which technologies are possible and how we can make them real. We just need to convert them into profit.</p><p>We put up with the market&#8217;s idiocy because it has become such an innovation savant. The sheer success of the market has allowed it to drown out all other constraints. 
Other sources that have traditionally played the role of balancing the market&#8212;of productively guiding its impulses and checking its excesses&#8212;no longer seem capable of doing so.&nbsp;</p><p>So our technological future is left largely in the hands of the market. Yet how many of us look at the market and take comfort in its ability to constrain advanced technology in ways that are compatible with human flourishing?&nbsp;</p><p>Exactly.</p><h2>The market is not big enough</h2><p>If we are seeking a system of constraints that can combine advanced technology with human flourishing, then the market is necessary but not sufficient.&nbsp;</p><p>Advanced technology both exposes the weaknesses inherent to the market and demands constraints beyond what the market can bear. A few simple examples make it clear that the market, particularly in its current form, is simply not up to the job.</p><h4>1. Advanced technology breaks trial and error</h4><p>The market depends on a <strong>trial-and-error</strong> process that is extremely effective when trials are iterative, errors are immediate, and there is a market signal to reverse them. Otherwise it turns tragic. Leaded gasoline persisted for over half a century before it was finally addressed, and we&#8217;re still dealing with its toxic aftermath. What is the modern-day equivalent? We&#8217;re very early in discovering all the ways that microplastics are impacting both our ecosystems and our internal chemistries.</p><p>It&#8217;s not just about external environmental costs. We&#8217;re just now beginning to understand the effects of teenagers mediating their entire social life through digital technologies. The cost of this error may be a generation of lost youth. What kind of trial might have prevented this? Not one that the market would have any interest in running.</p><p>Advanced technologies can&#8217;t rely on simple trial-and-error iteration under market constraints. 
The timelines are too long, the risks are too catastrophic, and the second-order effects may not reveal themselves until it&#8217;s too late.</p><h4>2. The market is not accountable to anything outside of itself</h4><p>The market also fails to account for any values that cannot be made legible to its standards of profit and growth. Because the market is accountable to nothing outside of itself, it can only respond to questions of <em>value</em> when other forces&#8212;like public outrage, regulation, or political sanctions&#8212;turn them into overwhelming market signals.</p><p>Is digital technology making us lazier? More atomized? More fractured? Is it commodifying core experiences of what it means to be human? Unless it&#8217;s impacting near-term profit or growth, the market does not (and cannot) care.</p><p>As advanced technology encroaches further on the human condition, how will its impacts be converted to a pricing mechanism? They can&#8217;t. How do you put a price on human flourishing, sentient rights, or the moral weight of an artificial agent? You don&#8217;t. The market has no capacity to incorporate larger values unless something bigger than the market demands that it does so.</p><h4>3. The market monopolizes vital decisions</h4><p>The future of most advanced technologies currently rests in the hands of a small handful of actors. The trajectory of AI is largely controlled by the leadership of a few big tech companies and AI labs. Why would we allow such vital decisions to be monopolized by the market?</p><p>Part of the reason is that the market has so few mechanisms for incorporating external signals. What frameworks or tools do we have to precisely articulate the values that should constrain innovation? Our ability as a society to enforce our democratic values onto the technological landscape is almost non-existent.&nbsp;</p><p>The market also excels at ignoring outside constraints. 
Any external influences must be able to play and win on the market&#8217;s own terms. This means overcoming the dynamics of game theory, first-mover advantage, and regulatory capture. The history of the market suggests that only legal requirements can overcome these dynamics, and often much too late.</p><h4>4. The market forecloses too much of the possibility space</h4><p>Think of all the possible technologies that could exist but don&#8217;t simply because the market could never make them profitable at scale. Entire domains of technological possibility are foreclosed because they cannot be converted into viable business models or revenue streams.</p><p>This is particularly true for technologies that could directly <strong>promote human flourishing</strong>, which embody values often at odds with quantification and profit. The market is not big enough for all the technologies that a flourishing future demands.</p><div><hr></div><p>None of these are <em>actual</em> problems with the market. They only become problems when we become so enamored with the market&#8217;s power to drive innovation that we allow it to take over the entire burden of technological constraint.</p><p>In other words, it becomes a problem when we place no constraints on the market itself, even as we rely on it to constrain technology.</p><h2>Constraint-first futures</h2><p>So where do we go from here? If the market is not sufficient to steward advanced technology, what is? What would a viable system of constraints look like?</p><p>We need to think about our technological future <strong>less</strong> as a collection of technologies and <strong>more</strong> as a system of constraints. We will never have the capacity to plan and implement a future around <em>specific</em> technologies that will guarantee some measure of human flourishing. 
But we can plan and implement <strong>systems</strong> that enable a broad spectrum of technological possibilities within the bounds of flourishing.&nbsp;</p><p>We need to be thinking &#8220;constraint-first&#8221; and start enabling a viable system of technological constraints. The following are a few steps to start with.</p><h4>Empower the commons to expand the possibility space</h4><p>We need to open up the possibility space of all the technologies that the market ignores yet are crucial to human flourishing. While the state sometimes plays this role, the commons is a more appropriate container for stewarding technologies that directly impact human well-being.</p><p>Free from market pressures, a &#8220;digital commons&#8221; could provide enabling constraints to unlock technologies that elevate civic discourse, protect privacy and identity, manage reputation and social graphs, establish and report on public knowledge repositories, coordinate public deliberation, and deliver other public goods that the market would never touch.</p><p>Such a commons could still leverage the best features of the market by translating community values into price signals that incentivize competition to ensure quality and efficiency.</p><h4>Initiate a new field of constraint design</h4><p>Constraints are themselves a technology. Every constraint can be radically limited and enabled by the constraints it is embedded in. 
The field of &#8220;constraint design&#8221; should be established to explore and develop best practices for creating and managing the most productive constraints.</p><p>For example, rather than constraints being purely external, limiting forces, we should explore models where the process of constraining technology itself becomes participatory and empowering for stakeholders.</p><p>This could take the form of decentralized governance protocols for managing advanced AI systems&#8217; objective functions, or stakeholder voting to adjust and calibrate constraint parameters on limit versus enablement, or open-sourcing constraint protocols for public auditing and remixing.&nbsp;</p><h4>Empower an individual right of constraint</h4><p>Powerful yet user-friendly tools enabling "constraint customization" at the individual level could help mitigate the failure of higher-level constraints. Technology could become a flexible service respecting our diverse values, not a binary take-it-or-leave-it imposition.</p><p>For example, imagine social media where you can implement different algorithms from a public trust, tweak them with simple tools, or reproduce settings from those you trust. Imagine new settings to route content based on values you care about. Imagine ignoring comments that exceed a polarization threshold, getting alerts on how usage is affecting your attention, or helping amplify constructive threads for others.</p><p>If the user has more control to moderate their feed, there&#8217;d be less need to impose top-down moderation or draconian speech restrictions, and fewer opportunities for governments to corrupt moderation processes.</p><h4>Incentivize constraint entrepreneurship</h4><p>While constraints are often positioned as barriers to entrepreneurship, we could flip this framing. There are vast economic opportunities in developing core constraint capabilities that enable advanced technologies to bloom sustainably. 
Innovators who unlock the most enabling constraints should be richly rewarded.</p><p>Imagine whole new industries devoted to tools for better trial-and-error, innovation prediction markets, ethics auditing, or security mindsharing between firms. Or decentralized markets for trading and dynamically pricing risk estimates and "allowances" on transformative R&amp;D initiatives.</p><p>By putting incentives and investors behind vital constraint infrastructure, we cultivate an entire entrepreneurial ecosystem devoted to responsibly unleashing technological progress. This is how constraints can turn into an innovation superpower.</p><h4>Establish &#8220;separation of technology and control&#8221;</h4><p>Much like the separation of church and state, or the partition of powers into branches of government, we may need enforced checks and balances when it comes to transformative technologies and the entities that control them.</p><p>This could take the form of imposing functional separations on research and commercialization, or keeping core protocols in the commons, or dividing development pipelines into isolated modules working without full context.&nbsp;</p><p>The concern is <em>technological apotheosis</em>&#8212;when the owners of advanced technologies achieve such centralized omnipotence and convergence that they become an autonomous power beyond the control of any human institutions. Separation of technology and control prevents any one actor from ever achieving such dominance.</p><h4>Convince the market that constraints are a good thing</h4><p>What markets don&#8217;t realize is that constraints are in the market&#8217;s best interest. The more constraints the market can provide, the less need there is for outside constraints to intervene. 
Advanced technologies that emerge from the market will increasingly become targets of culture wars, virtue signaling, and political regulation<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. The goal of the market should be ensuring that new technologies never reach that point.</p><p>Sometimes the market recognizes this. You can see the AI industry navigating this with its incorporation of <a href="https://openai.com/blog/red-teaming-network">red teaming</a>. This is a small step in upgrading trial and error. Other viable options exist to improve beta testing and iterative trials, like <a href="https://cynefin.io/wiki/Safe_to_fail_probes">safe-to-fail probes</a> and broader-spectrum observation. This may require better epistemological tools to properly analyze relevant data beyond the obvious first-order effects, but these are investments the market should be willing to make.</p><h4>Enshrine &#8220;off-ramps&#8221; into critical systems</h4><p>For certain advanced technologies, can we codify protocols that enforce discontinuation &#8220;off-ramps&#8221; or quarantine measures when clear tripwires are triggered? Pre-agreed decision engines, immune response plans, and &#8220;kill switches&#8221; should be built into the technological infrastructure itself in anticipation of any worst-case scenarios.</p><p>This isn&#8217;t a regressive principle, but a simple recognition of our inability to reliably forecast technological outcomes. The more confident we can be in rolling back a technology in worst-case scenarios, the more confident we can be in developing it.</p><h4>Recognize fundamental &#8220;limits&#8221;</h4><p>As powerful as any system of constraints will be, we must accept that certain technologies may defy the limits of any constraint that we could devise. 
Whether it be recursively self-improving AI, molecular bio-nanotechnology, or merging our consciousness with the machine&#8212;there may be hard limits on what we can "constrain" in any traditional sense.&nbsp;</p><p>In such cases, what if the only viable constraint is the courage to simply not go there? To demarcate intrinsically human boundaries and honor the mystery. No amount of technological capability necessarily obligates us to transgress all limits. Accepting that we don&#8217;t need to explore every possible future may be the key to ensuring that we have a future at all.</p><h2>Constraining our way to a future of human flourishing</h2><p>In summary, the constraint theory of technology offers a new perspective on how to responsibly steer our future with advanced technology towards outcomes that align with human flourishing.&nbsp;</p><p>Rather than focusing on specific technological goals or dystopian/utopian scenarios, we need to focus on developing the right system of constraints that can enable a broad spectrum of possible futures that remain compatible with our deepest values.</p><p>This requires rethinking our relationship to constraints. Instead of seeing them merely as limits, we need to recognize their enabling role in maximizing the possibilities that we can explore. 
By embracing an ethos of "constraint-first" technological development, we increase our chances of realizing a future where technology and human flourishing can advance as co-evolving forces.&nbsp;</p><p>Ultimately, the path forward demands nothing less than renegotiating our relationship to technological power itself.</p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>John Smart calls this <a href="https://foresightguide.com/the-95to5-rule-most-change-looks-evolutionary/">the 95/5 rule</a> of evolutionary development.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>See Yuk Hui&#8217;s <a href="https://www.amazon.com/Question-Concerning-Technology-China-Cosmotechnics/dp/0995455007">The Question Concerning Technology in China</a> for a fascinating (if dense) investigation into Chinese technological history and development. 
Qi here is &#22120;, a standard Chinese word meaning container, vessel or instrument, which Hui places in a Dao-Qi duality.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>See <a href="https://www.amazon.com/Abolition-Man-C-S-Lewis/dp/0060652942">The Abolition of Man</a> for Lewis&#8217; prediction of what happens when man abandons traditional moral realism. It does not go well.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>A future where the government mandates FDA-like clinical trials for all advanced technologies is not an impossibility.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Towards a Philosophy of Technology]]></title><description><![CDATA[Some early thoughts]]></description><link>https://www.techforlife.com/p/towards-a-philosophy-of-technology</link><guid isPermaLink="false">https://www.techforlife.com/p/towards-a-philosophy-of-technology</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Tue, 19 Mar 2024 23:53:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bf7f2e17-d09b-448a-8c73-1e1b391a1ade_612x410.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Much of this Substack has been the result of exploring various aspects of technology and philosophy, all in the hopes of pushing the philosophy of technology forward. </p><p>The following are some working notes that have both informed previous posts and will inspire future ones. 
</p><p>Consider this a version of <a href="https://www.robinsloan.com/lab/new-avenues/?utm_source=Robin_Sloan_sent_me#garage">working with the garage door open</a>, in the hopes of implanting some mind viruses in others and being similarly infected by anyone else so inspired. </p><h3>10 early thoughts from exploring philosophy and technology</h3><p>1. Technology is philosophy made real. Every innovator defines a version of &#8220;the good&#8221; and uses technology to make that definition real.&nbsp;</p><p>2. Every technological problem is becoming a planetary problem as technology and ecology have become two sides of the same planetary face.</p><p>3. The limiting factor in solving planetary challenges is not technological limitation but our incapacity to coordinate.</p><p>4. Democracy in its current forms seems increasingly less capable of generating something beyond capitalism that is big enough to become technology's guide and judge.</p><p>5. Evolution as a paradigm of adaptation loses salience as trial-and-error becomes increasingly less viable as a mechanism for adapting advanced technology. We need new paradigms.</p><p>6. Our sci-fi authors can't seem to imagine a viable future that combines advanced tech with a flourishing humanity. What if such a future is unimaginable because it is in fact impossible?&nbsp;</p><p>7. The accelerating rate of technological change defies emergence by exceeding the carrying capacity of previous hierarchical substrates and preventing new thermodynamically stable equilibria from forming.</p><p>8. We need a new myth of technology that rightly situates the human between nature and technology and puts all three in service of life itself. Increasing the universe's capacity to convert free energy into entropy <a href="https://beff.substack.com/p/notes-on-eacc-principles-and-tenets">is not sufficient</a>.</p><p>9. Upon reaching a minimally advanced threshold, all technological questions converge on the religious. e.g. 
What, if anything, is sacred about the human? Should anything constrain the technological pursuit of immortality? What is technology even for?</p><p>10. Only technology can save us. The dead gaze of the machine staring back at us will become the forcing function we need to collectively resolve <a href="https://en.wikipedia.org/wiki/God_is_dead">the death of god</a>.</p><div><hr></div><p>Ok, stopping at 10 before this becomes another 3,000 word post. </p><p>Which of these do you find the most compelling? </p>]]></content:encoded></item><item><title><![CDATA[Our Future with Cognitive Enhancement]]></title><description><![CDATA[Some philosophical implications]]></description><link>https://www.techforlife.com/p/whats-the-deal-with-cognitive-augmentation</link><guid isPermaLink="false">https://www.techforlife.com/p/whats-the-deal-with-cognitive-augmentation</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Tue, 05 Mar 2024 15:08:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/559bae42-f029-47bf-9164-830565e71a59_1080x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is the first in a series exploring <strong>the philosophical implications</strong> of specific technologies, using plain language and zero jargon.</em></p><p><em>The goal of this series is to remind ourselves that the greatest impact from any technology will be how that technology changes <strong>us</strong>: the humans that use them.&nbsp;</em></p><p><em>Special thanks to <a href="https://substack.com/@corpusnoeticum">Sean McFadden</a> from<a href="https://corpusnoeticum.substack.com/"> Deep Noetics</a> for assistance in thinking through these topics.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Wlg8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Wlg8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!Wlg8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!Wlg8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!Wlg8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Wlg8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:784354,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Wlg8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!Wlg8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!Wlg8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!Wlg8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d8e15bc-d064-41bc-91d6-2539a5350c2e_1792x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>No technology has greater potential for transforming the human condition than <strong>cognitive enhancement</strong>.&nbsp;</p><p>Enhancing our cognition means <strong>changing our actual minds</strong>. Given how much our identities are defined by our thoughts, it also means changing our <em>selves</em>. 
And given how <em>far-reaching</em> these changes could be, it could even mean changing our very <em>species</em>.</p><p>And yet there is very little scientific consensus about how our cognition actually works. We struggle to define the nature of consciousness, creativity, or intelligence. We don&#8217;t know exactly how memory, emotions, or the subconscious affect our thinking.&nbsp;It&#8217;s surprising how little we <em>do</em> know.</p><p>This combination of <strong>immense power</strong> and <strong>profound uncertainty</strong> is why it&#8217;s so critical to consider all the potential futures that cognitive enhancement may bring. This is where philosophy comes in. One way that <a href="https://www.techforlife.com/p/how-philosophy-makes-technology-better">philosophy benefits technology</a> is by considering the potential implications that go beyond technical feasibility or the obvious ethical concerns. </p><p>In other words, philosophy can help us ensure that our future with cognitive enhancement is one that we will want to live in.</p><h3>What do we mean by cognitive enhancement?</h3><p>To help understand what we mean by cognitive enhancement, the following are some examples in order from active development to the completely speculative.</p><ul><li><p><strong>Wearable Tech for Enhanced Learning</strong>: Devices like smart glasses or earpieces that provide real-time information and learning assistance, similar to a more advanced version of current smart devices.</p></li><li><p><strong>Nootropics for Improved Cognitive Function</strong>: The use of drugs or supplements to enhance memory, creativity, or other cognitive functions.&nbsp;</p></li><li><p><strong>Advanced Language Translation Implants</strong>: Tiny implants that allow for real-time translation of foreign languages directly in the ear, enhancing communication capabilities without extensive language learning.</p></li><li><p><strong>Memory Enhancement Devices</strong>: Implants or wearables that aid 
in memory recall or storage, perhaps by syncing with digital databases, for those with memory disorders or for general use.</p></li><li><p><strong>Brain-Computer Interfaces (BCIs)</strong>: Non-invasive BCIs that allow users to control computers or machinery with their thoughts, extending human capabilities in work and daily life.</p></li><li><p><strong>Emotion and Mood Regulation Implants</strong>: Devices that can regulate or alter an individual&#8217;s emotional state or mental well-being, a more invasive approach to managing psychological health.</p></li><li><p><strong>Neural Lace for Enhanced Brain Connectivity</strong>: A thin mesh that lies on the brain and connects it more directly with digital devices, allowing for faster processing and data access, akin to upgrading the brain&#8217;s hardware.</p></li><li><p><strong>Full Neural Integration with AI Assistants</strong>: A deeper integration where AI not only assists with tasks but also becomes an integral part of decision-making processes, blurring the line between human thought and artificial intelligence.</p></li><li><p><strong>Direct Brain-to-Brain Communication</strong>: Enabling direct, non-verbal and non-physical communication between individuals, creating a form of collective consciousness or hive mind.</p></li><li><p><strong>Total Merge with Machine</strong>: The ultimate fusion where human consciousness is fully integrated into a machine, allowing for potentially eternal life, limitless cognitive capabilities, and a complete departure from biological limitations.</p></li></ul><p>The following are 10 philosophical implications of cognitive enhancement that are worth considering.</p><h3>1. How to compress a thought</h3><p><em><strong>What is a unit of thought?</strong></em></p><p>Will there ever be a cognitive equivalent of a .gif or .mp3 file? 
Will some <strong>digital unit of thought</strong> emerge to compress everything that a thought contains&#8212;all the complexity of emotions, memories, and intuition&#8212;into mere ones and zeroes?</p><p>This is the challenge of <strong>compression</strong>: converting the essence of human cognition into the smallest possible signal. This process will necessarily be <em>lossy</em>. Something inherent to human thought will always be left out.</p><p><strong>Language</strong> is the primary way that we compress thoughts today. Yet words often fail to cover the gap left by compression, despite our emojis and non-verbal gesticulations. We become frustrated when our attempts to communicate don&#8217;t capture the depth of an emotion or the subtlety of an idea.</p><p>The promise of BCIs is to far exceed the limitations of human language, enabling much faster and denser communication between brains and machines. Only the parts of our thoughts that are essential to supporting these goals will be included. How can we be certain that what is left out will not be something critical?</p><p>It&#8217;s impossible to predict how compression might affect thinking itself. We know the brain is incredibly plastic.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> If our thinking begins to conform around a standardized unit of thought, what happens to the aspects of cognition that are neglected? Could they atrophy away as our neural pathways adapt to a new paradigm?</p><p>Or, like a new style of painting, might we reveal new ways to understand and experience thought itself?</p><h3>2. Trying on a new self</h3><p><em><strong>What happens to our &#8220;self&#8221; when we can think up a new one?</strong></em></p><p>How we think about our &#8220;self&#8221; very much depends on the environment we construct it in.&nbsp;</p><p>The internet has made this obvious. 
When we are online, we can try on identities as easily as new outfits. Our <strong>digital selves</strong> can be freed from how we look, where we are from, or what we&#8217;ve done in the past. We can find communities to engage with the most particular aspects of our identities while ignoring the rest. We can act anonymously and hide the self entirely.</p><p>While the digital landscape can be liberating, it also poses significant challenges. Our identity gets divided into fragments mapping to different online contexts. We&#8217;re less certain about which of our identities is the &#8220;authentic&#8221; one. We have fewer opportunities to engage with our &#8220;whole&#8221; self. We might behave very differently when we&#8217;re anonymous than when we&#8217;re not.&nbsp;</p><p>Cognitive enhancements will amplify these challenges. While technology has always informed our identity, there&#8217;s been a clear boundary between our <em>inner</em> selves and our <em>external</em> tools. What happens when they blur together? The traditional divide between the internal construction of the self and its external manifestation may vanish entirely.</p><p>Constructing our identities could become an active process of playing with various cognitive enhancements. Who we are will become the memories we alter, the information we download, the collective intelligences we join. It could be <strong>the self as pure artifice</strong>, an aggregate of choices we present in the moment.</p><p>As we entrust more facets of our cognition to internal tools, we risk eroding the very foundations that make us <strong>truly unique</strong>. Yet there still exists the potential for profound self-discovery. By confronting the construction of our identities, we may uncover new insights into what it means to be a self.&nbsp;</p><h3>3. 
Cognitive groupthink</h3><p><em><strong>How can we ensure the diversity of our thinking?&nbsp;</strong></em></p><p>You might think that connecting our brains to computation would naturally increase the diversity of our thinking. But recent technology might suggest otherwise. The internet started as a diverse collection of quirky blogs and websites. Now we all use the same few platforms, posting content that gets rewarded by the same algorithms, following whatever meme is deemed &#8220;the current thing&#8221;.&nbsp;&nbsp;</p><p><a href="https://www.understandingai.org/p/large-language-models-explained-with">LLMs</a> are another example. The diversity of human content gets statistically normalized to provide the most probable answers. The most unique inputs get averaged away. <a href="https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback">RLHF</a> further ensures that answers conform to conventional norms. These same outputs become tomorrow&#8217;s training data, perpetuating a homogenizing feedback loop.</p><p>In both cases, the diversity of our content is sacrificed for other goals and values. Could the same happen with our thoughts? What happens when we&#8217;re all accessing the same corpus of data, our thoughts being motivated by the same rewards, our unique individual differences getting filtered out by the same technological limitations? Will we find ourselves trapped in a stifling monoculture of ideas?</p><p>Yet if diversity is prioritized, it&#8217;s easy to imagine BCIs that can reward novelty, inject randomness<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>, and draw unusual connections between disparate ideas, all leading to a dramatic expansion in the breadth of our thinking.</p><h3>4. 
Unlocking the hive mind</h3><p><em><strong>What if it takes a lot more than intelligence?</strong></em></p><p>Imagine being part of a collective intelligence.&nbsp;Your mind connects with a stream of thoughts to collaborate on problems too large for a single brain to fathom. You send a new idea back to the hive where it becomes instantly available, sparking a cascade of further cognition.&nbsp;</p><p>Unlocking collective intelligence would represent cognitive enhancement&#8217;s greatest achievement. Every aspect of human interaction could be transformed. We could run social simulations on entire populations. We could co-create new collaborative games and art forms. We could redefine democracy around dynamic participation in real-time decision-making.</p><p>Yet the term &#8220;collective intelligence&#8221; itself is something of a paradox. Unlocking the power of the hive mind will require much more than increasing intelligence alone. That intelligence must be aligned around the pursuit of a common goal, grounded in the same beliefs and values.&nbsp;</p><p>Changing our beliefs and values rarely happens because we are exposed to a better argument or more facts. It happens when reality forces us to confront a dissonance between what we believe and what we actually experience. Cognitive enhancement can&#8217;t align our values around intelligence alone.</p><p>Nor can intelligence define goals. Goals are based on values, not facts. In order to guide our collective intelligence, we&#8217;ll need to develop our <strong>collective wisdom</strong>. Much like with AI, we face an alignment problem, only in this case it&#8217;s with our <em>own</em> collective intelligence. </p><p>Will <em><strong>moral</strong></em><strong> enhancements</strong> be required to ensure that we&#8217;re using our <em>cognitive</em> enhancements for good? 
Perhaps we will need to increase our ethical capacity to keep pace with our increasing cognitive capacity.</p><p>Yet the same technologies that <em>empower</em> a hive mind (such as brain-to-brain interfaces, neuro-surveillance, and centralized thought repositories) could also <em>oppress</em> an individual mind. Deeper concerns over security, freedom, and individual autonomy may prevent us from fully embracing the dynamics of collective intelligence.</p><p>Unlocking the power of the hive mind could enable us to tackle our <a href="https://www.techforlife.com/p/our-planetary-predicament">biggest planetary challenges</a>, but only if our collective wisdom can match our collective intelligence.</p><h3>5. Bridging the cognitive divide</h3><p><em><strong>What if some become enhanced while others do not?&nbsp;</strong></em></p><p>It&#8217;s possible that cognitive enhancements will inherently lead to new inequalities, where the smart get smarter and the truly creative see exponential returns. But if everyone is accessing the same enhanced memory, creative algorithms, and instant data, it&#8217;s just as likely that we all end up with the same basic intelligence. </p><p>The bigger risk for a new cognitive divide will be between those that embrace enhancements and those that do not. This divide has the potential to disrupt our economies, societies, and even humanity itself.</p><p>Imagine that cognitive enhancements directly lead to economic benefits. This will create enormous pressure for everyone to enhance, even if principled reasons exist not to do so. The &#8220;enhanced&#8221; could accuse the &#8220;normies&#8221; of preventing economic growth or increasing the drain on government welfare. If you are enhanced, why would you agree to taxes that redistribute your income to the normies that contribute nothing? The stage could be set for new class politics of the worst sort.</p><p>Social disruption could be even more profound. 
Rather than the <strong>us vs. them</strong> dynamics of AI fear-mongering, society could split along <strong>us vs. us</strong> dynamics between those that embrace enhancements and those that don&#8217;t. </p><p>Over a longer timescale, cognitive enhancements could amplify the factors that affect human values. The enhanced and non-enhanced could soon find their values <strong>drifting apart</strong>, to the point where they could no longer be reconciled.&nbsp;</p><p>Even worse, the enhanced could eventually adopt forms of communication, understanding, and even ways of being that are no longer compatible with the non-enhanced. The divergence could eventually become so great as to represent a <strong>speciation event</strong>. </p><p>The enhanced would appear as a new, superior species of hominid&#8212;<em>homo cognito</em>. As history has shown, when two similar species vie for the same ecological niche, there can only be one winner.</p><p>To prevent this dystopia, we must proactively consider how we can preserve the fundamental values that bind us together as a species, regardless of the choice that each of us might make to embrace enhancements or not.</p><h3>6. Living at the speed of thought</h3><p><em><strong>What happens when our very thoughts speed up?</strong></em></p><p>BCIs promise to remove any friction in interfacing with computation. No more typing, clicking, or swiping&#8212;manipulating data will happen at the <strong>speed of thought</strong>. Likewise, communicating with other brains will finally be liberated from the painfully slow need to convert the photons and sound waves from our eyes and ears into language.</p><p>Yes, certain types of communication will certainly be more efficient, but are there limits to how fast our cognition can process information? Our brains evolved to be precisely attuned to the rhythms and pace of our physical world. 
What happens when our mental <strong>clock rate</strong> begins to conform to the hyper speed of these new technologies?&nbsp;</p><p>Many of the mental heuristics that drive our decision-making have evolved to favor time over information. Any complex decision can always benefit from more data, but our bias is to <em><a href="https://en.wikipedia.org/wiki/Satisficing">satisfice</a></em>&#8212;to quickly make decisions that are &#8220;good enough&#8221;, guided by emotion and instinct over rational deliberation. If these two aspects of decision-making get out of sync, which will be sacrificed?</p><p>The same evolutionary clock rate defines our attention. Our consciousness has evolved to navigate the limitations of our senses<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> in order to determine what should receive our focus. Will our conscious attention resist a new input that operates at hyper speeds? Or will we find our attention becoming so attuned to this new clock rate that anything slower is too excruciating to engage with? </p><p>Engaging in <em><strong>meat talk</strong></em> with anyone that isn&#8217;t enhanced would feel unbearable. Every lecture, speech, and sermon would be transmitted over the new preferred protocols of thought.&nbsp;</p><p>This new clock rate could also affect our willingness to pursue long-term projects, or physical projects of any kind. Why operate at the speed of atoms when you can operate at the speed of thought? A hyper clock rate may make a ten-year physical project feel like an eternity.</p><p>This will place an entirely new premium on <strong>patience as a virtue</strong>.</p><h3>7. The complexity of intelligence</h3><p><em><strong>What are the trade-offs of enhancing intelligence?</strong></em></p><p>What do we mean by increasing our &#8220;intelligence&#8221;? Do we mean more creativity? Or better abstract reasoning? 
What about spatial awareness, problem-solving, or attention? Are all of these things intelligence? </p><p>&#8220;Intelligence&#8221; is a term we use to generalize many things our brains do. Rather than a single physical quality, intelligence is more like a dynamic process between many aspects of the brain. There isn&#8217;t a single setting that can increase all of these aspects at once.&nbsp;</p><p>Take creativity, an elusive capability that appears to be deeply intertwined with intuition and the subconscious. Breakthroughs often emerge not by adding more logic or reasoning, but by giving the subconscious room to operate. Think of the countless stories of innovators who obsess over a problem only to finally unlock it when they go for a walk or take a nap. Increasing creativity isn&#8217;t as simple as increasing intelligence.</p><p>Emphasizing one aspect of intelligence may come at the expense of others. For example, abstract reasoning generalizes away the messy details of reality, while empathy must embrace them. There is no guarantee that we can simply &#8220;enhance intelligence&#8221; and achieve all desired outcomes simultaneously. There will invariably be trade-offs.</p><p>If we focus on any single aspect of intelligence, we simply have no idea how the rest of our cognition will be affected. Without a comprehensive understanding of these dynamics, the consequences of cognitive enhancement remain radically uncertain.</p><p>On the other hand, these efforts may help provide the experimental frameworks necessary to unlock the multifaceted dynamics of our cognition. This will only happen if we move beyond simplistic notions of &#8220;increasing intelligence&#8221; and acknowledge all the complexities of the human mind.</p><h3>8. 
The cost of connection</h3><p><em><strong>Will enhancing our brains enhance how we relate with each other?</strong></em></p><p>It&#8217;s not hard to imagine how cognitive enhancement could disrupt human relations. We&#8217;ve already pointed out different risks from diverging clock rates, drifting values, class politics, and even a speciation event. There&#8217;s also the obvious risk of alienating ourselves further from <em><strong>embodied</strong></em><strong> relations</strong>, the richest form of human connection we have.&nbsp;</p><p>And yet we shouldn&#8217;t focus only on the risks. Cognitive enhancement could also help us overcome challenges that perennially limit our ability to relate with each other. It could open up new ways of understanding our different perspectives and navigating our cultural differences.&nbsp;</p><p>Imagine tools designed to enhance our <strong>empathy</strong>. VR might help us experience another&#8217;s perspective through their <em>eyes</em>, but BCIs could allow us to experience it through their <em><strong>thoughts</strong></em>. We could understand what it&#8217;s like to <em>feel</em> what they are feeling, to sense the same <em>emotions</em> that are driving their world view. This would give conflict-resolution entirely new possibilities.</p><p>It could also help us see <strong>political polarization</strong> as something <em>positive</em>. By revealing the psychological roots of political differences, cognitive enhancement could help us understand how personality traits and cognitive dispositions impact our political leanings. This may help us realize that political differences <strong>evolved for a reason</strong>, and that these differences could be leveraged to help solve our biggest problems, rather than just contribute to culture wars.</p><p>Finally, it could help us navigate <strong>cultural differences</strong>. Imagine real-time assistance that helps us understand the depth and history of cultural contexts. 
These types of tools could encourage more radical diversity at the cultural level by enabling broader connections at the human level.</p><p>The future of human relationships is ours to shape. By prioritizing technology that fosters empathy, understanding, and shared experiences, we can ensure that cognitive enhancement serves as a force for human connection, not alienation.&nbsp;</p><h3>9. Man or machine</h3><p><em><strong>Is human cognition worth preserving?</strong></em></p><p>Cognitive enhancement blurs the boundaries between machines and humans. The intimacy of this combination is the key to unlocking its promise. The closer the machine (with its boundless scale and computational power) can get to the human (with our collective choice and agency), the more powerful BCIs will become.</p><p>This close proximity will highlight the difference between human and machine cognition. Human cognition is enigmatic, messy, and limited. Machine cognition is legible, precise, and boundless. If history is any guide, we will seek to replace the messy with the legible, the limited with the boundless, at every opportunity we get.&nbsp;</p><p>And yet, human cognition excels precisely because of its limitations. It&#8217;s the illegibility of <strong>paradox</strong>&#8212;between logic and emotion, reason and intuition, the conscious and subconscious&#8212;that seems to give rise to everything we value about human cognition and the human experience.&nbsp;</p><p>Before we let the relentless pursuit of efficiency and power dictate the balance between humans and machines, we need to ensure that we understand precisely what is being lost. In the human brain, evolution has crafted the most elegant and complex object the universe has ever seen. We should be very wary of presuming that we can do better than natural selection.</p><p>This makes it critical to prioritize systems that allow anyone to <strong>opt out of cognitive enhancement</strong>. 
Opting out of enhancement should <em>not</em> mean opting out of society. If the social or economic costs of retaining natural cognition become too great, then it won&#8217;t really be a choice at all&#8212;we will have ceded our agency to preserve human cognition to the logic of the market and the state.</p><p>Understanding and protecting the limited, messy, and paradoxical core of <em>human</em> cognition is what will give us the confidence to pursue the best of <em>machine</em> cognition.</p><h3>10. Context Matters</h3><p><em><strong>What is the point of cognitive enhancement?</strong></em></p><p>Finally, let&#8217;s confront the ultimate question: what is the point of cognitive enhancement? Will it be to increase our general intelligence? To become &#8220;better humans&#8221;? Or will it be to achieve specific goals, such as maximizing productivity, competing with AI, or even merging with machines?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>&nbsp;</p><p>Proponents of cognitive enhancement paint an optimistic picture. They highlight its potential to restore cognitive function, aid the paralyzed, and combat various mental diseases. Yet initial promises of health and well-being are often just opening moves in a larger strategy. As with every technology, adoption will be driven by major corporations seeking monopolies, regulatory capture, and profit maximization, for better and for worse.</p><p>These are the contexts that will determine whether we consider cognitive enhancement an &#8220;enhancement&#8221; or not. We won&#8217;t become &#8220;better&#8221; simply by becoming more intelligent, but by performing a certain task <em>better</em> or pursuing a specific goal <em>better</em>. It may be a goal that we choose, but it may also be one that is imposed on us. 
We will only judge an enhancement as &#8220;good&#8221; if we think that goal is worth pursuing.&nbsp;</p><p>Simply performing better on some task is not in itself an inherent good. We need to ask: what context is that performance good for? <strong>Who benefits</strong> from that context? </p><p>Even enhancements that you might consider inherently beneficial (like increased intelligence) may not be equally positive. They could easily benefit the corporation while harming the employee if, for example, they don&#8217;t include broader support to handle the increased stress, or unhappiness<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>, or other inadvertent side effects that could come with radically increased intelligence.</p><p>Ultimately, what will count as a cognitive &#8220;enhancement&#8221; will be entirely <strong>context dependent</strong>. The history of technology tells us that context can change rapidly, and this is more often dictated by the market and the state than anything inherent in the technology itself. </p><p>Given the risks and rewards that come with cognitive enhancement, we will need to think as carefully about <em>context</em> as we do about the technology itself.</p><div><hr></div><p>The journey towards cognitive enhancement is not just about enhancing our intelligence, but also about understanding and embracing what it means to be human in an increasingly technologically mediated world. It&#8217;s why considering the philosophical implications is so critical.</p><p>This is just one of what should be many attempts, by a maximally diverse group of thinkers considering every aspect of cognitive enhancement. We need philosophers and engineers to come together and define the possible. We need artists and authors to create speculative futures to help us envision different scenarios. 
We need politicians to consider these possibilities now, when we still have time to do something about them.</p><p>And this includes you. What did we miss? What would you like to see given more consideration? Please let us know in the comments so we can keep the conversation going.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For example, neurons devoted to sight will <a href="https://eye.hms.harvard.edu/news/brain-rewires-itself-enhance-other-senses-blind-people">rewire themselves to boost other senses</a> in the blind.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://www.randonautica.com/">Randonautica</a> is a delightful example of how randomness can be creatively leveraged.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" 
contenteditable="false" target="_self">3</a><div class="footnote-content"><p>You can only speed up a podcast so much.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This is the preferred solution of many technologists for aligning humanity with advanced technology. See <a href="https://blog.samaltman.com/the-merge">Sam Altman</a> and <a href="https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html">Vitalik Buterin</a>, for example.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>There is very little positive correlation between intelligence and happiness. <a href="https://pure.eur.nl/ws/files/47447529/f871061159132412.pdf">This report</a> found zero correlation at the individual level, but strong correlation at the group level, which is even more support for erasing the cognitive divide mentioned earlier.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Case for (Im)material Progress]]></title><description><![CDATA[A time-traveling thought experiment]]></description><link>https://www.techforlife.com/p/the-case-for-immaterial-progress</link><guid isPermaLink="false">https://www.techforlife.com/p/the-case-for-immaterial-progress</guid><dc:creator><![CDATA[R.B. 
Griggs]]></dc:creator><pubDate>Fri, 09 Feb 2024 17:03:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Are things better today than they were in the past?</h3><p>This question gets at the heart of what we mean by the term &#8220;progress&#8221;. Do we think things are getting better or worse? How would we even know? What differences between the past and the present would make us think so? </p><p><strong>Progress defenders</strong> think progress is obvious. They point to massive improvements in the material aspects of human lives as proof that things have gotten better. Indicators like child mortality, literacy, and longevity have all seen dramatic improvements, particularly in the last 200 years.</p><p><strong>Progress skeptics</strong> aren&#8217;t so sure. They generally don&#8217;t deny these improvements.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> But they also sense that some things have been lost along the way, like meaning and purpose. They see trade-offs, and they wonder if something about technology (and how we&#8217;ve adopted it) has contributed to those trade-offs.</p><p>In a sense, the disagreement is about two types of progress: <strong>material</strong> progress, like rising income and affordable housing; and <strong>immaterial</strong> progress, like your own sense of flourishing and fulfillment.&nbsp;</p><p><em>Skeptics</em> want to incorporate the immaterial aspects of human flourishing into a richer definition of progress, but don&#8217;t quite know how. 
<em>Defenders</em> are more likely to relegate immaterial progress to individual choices, or fold it into metrics like <em>standard of living</em>.</p><p>It doesn&#8217;t help that the <em>skeptics</em> and <em>defenders</em> are often speaking two different languages. Material progress is precise and measurable, based on things that matter equally to everyone. Immaterial progress is intangible and difficult to measure, based on things that are unique and internal.</p><p>How can we reconcile these two perspectives? Is it even possible? </p><p>Defenders of progress think that they have a way. With a simple question, they believe that they can both prove that progress is real and that your <em>true</em> preference is based on it. The question they pose is simple:</p><p><em><strong>When would you choose to live if you didn't know who you would be?</strong></em></p><p>This was the question that<a href="https://www.businessinsider.com/president-barack-obama-speech-goalkeepers-2017-9"> Barack Obama asked</a> to make his case that the world was getting better. To the <em>defenders</em>, the answer is obvious and unequivocal: the best time to be alive is <strong>today</strong>. As Obama put it: &#8220;This is the time you'd wanna be showing up on this planet.&#8221;</p><p>In one sense, the question is quite effective in revealing just how much material progress we&#8217;ve made. It&#8217;s not difficult to generate <a href="https://humanprogress.org/is-this-the-best-time-to-be-alive/">a list of improvements</a> that quickly becomes overwhelming. You simply can&#8217;t deny these material improvements. Warren Buffett made the point another way&#8212;by suggesting that the life of today&#8217;s average American is much better than <a href="https://collabfund.com/blog/what-a-time-to-be-alive/">the richest men of the past</a>. </p><p>Backed by such evidence, the defender of progress <em>dares</em> the skeptic to choose any time other than today. 
Do they <em>really</em> want to live in a time of slavery, or rampant child mortality, or mass illiteracy? And if the skeptic does choose today, then they are revealing their <em>true</em> preference: that material progress <em>is</em> the most important thing.</p><p>But in another sense, the question reveals how difficult it is to consider immaterial progress. It posits material progress as the <em>most important thing</em> and projects it backwards in time to make the case for what constitutes &#8220;better&#8221; and &#8220;worse&#8221;. It fails to account for the actual values of any of these historical eras.&nbsp;</p><p>In fact, I think the entire question can be <strong>inverted</strong> to make an equally compelling case for the progress <em>skeptics</em>.</p><h3>Who from the historical past would choose to live in the present?</h3><p>Imagine that you are a progress <em>defender</em> and that you can <strong>time-travel</strong> (yes, this is a thought experiment). You believe the best time in human history is today, and now you have a way to prove it. You can pick any human from history and see if they would prefer to live in the present.</p><p>So you try it. You travel back to different historical eras and approach people at random with your offer:</p><p>&#8220;I come from the future, where life is much better. With science and technology we&#8217;ve figured out how to unlock human flourishing. We call it progress. We can confidently declare that our moment in the future is the best time to be alive in all of history. I now offer you the chance to leave this time behind and join me in this better future.&#8221;</p><p>They appear skeptical, so you build your case. &#8220;You&#8217;ll live much longer!&#8221; you eagerly announce. &#8220;Your children won&#8217;t die in childbirth! War is illegal!&#8221;&nbsp;</p><p>You show them your smartphone and all the knowledge it contains. You play videos of grocery stores, airplanes, and hospitals. 
You describe democracy, human rights, and equality. You try to convey the magic of Netflix, 2-day shipping, and YouTube. You pull out charts on income, poverty, and literacy.&nbsp;</p><p>For those not impressed, you try a different approach. &#8220;Everyone smells better. Pain can be alleviated. You could fix your teeth!&#8221;&nbsp;</p><p>You could go on and on, but you pause there, half-expecting them to start begging to join you in this future immediately&#8230;</p><p>Which is why their follow-up questions are so confusing:</p><p><strong>A Ming Dynasty Chinese Scholar</strong>: &#8220;We revere our ancestors and seek to honor them in all things. My role as a scholar is to bridge heaven and earth, aligning human affairs with the cosmic order, guided by the harmony of the Dao. Our schools teach moral excellence founded in our familial duties. What do your schools teach your children?&#8221;</p><p><strong>Aztec Priest</strong>: &#8220;Our calendars are a sacred guide that synchronizes our every action in alignment with the cosmos. We have mastered water to create floating cities and bountiful gardens that honor the nature of the gods. How do you honor time and nature?&#8221;</p><p><strong>19<sup>th</sup> century English woodworker</strong>: &#8220;I live to be master of my craft. I work with my hands to build wagons and tools that last for generations. I know every inch of these woods and what each tree provides. My community values my role and depends on my work. Tell me, does your work give you such purpose and joy?&#8221;</p><p><strong>Edo Period Japanese villager</strong>: &#8220;We seek only to live in harmony with nature&#8217;s rhythms. Our communities work, eat, and celebrate as the seasons guide us. You show me a world driven by a relentless pursuit of money and personal success.
How is that better than the simple joys of community and nature?"</p><p><strong>13<sup>th</sup> century European nun</strong>: &#8220;You seem to rush everywhere but to the chapel. All this information you show me just distracts you from contemplating the divine. Every moment of our lives is in complete service to glorifying an all-powerful Creator. What do you glorify?&#8221;</p><p>These are not the responses you were expecting. You stand there for a moment, silent. You cycle through your data to see if there is something you could offer, but none of it seems relevant. You have no charts that could speak to these values. It slowly dawns on you that you have nothing to say because these values simply don&#8217;t exist in our present world, and you're not even sure if such beliefs are possible anymore.</p><p>You go with the honest approach: &#8220;Look, you&#8217;ll be on your own with that kind of stuff. All these values <em>do</em> exist, just in different ways. Maybe even in <em>better</em> ways. All of our material progress means you can pursue your individual beliefs and values more fully. That&#8217;s what makes our world so great. You can define the good life in whatever way you want.&#8221;</p><p>Now the historical figures are even more confused. This notion of creating your own values seems bizarre and suspicious. They ask for specific examples of how their values would manifest in your society, but your attempts to draw parallels just confirm their suspicion that you understand nothing of their beliefs.</p><p>One by one they decline your offer, and decide to stay in their own historical period. Some are appalled at the suggestion of superiority. A few&nbsp;chuckle as they walk away, looking forward to telling others about this silly person from the future.</p><div><hr></div><p>Before we get too carried away, let&#8217;s recognize that these historical examples are obviously cherry-picked. The choices could easily be historically repulsive.
It&#8217;s quite possible that a random choice would be someone actively starving to death or dying from some hideous disease. A random choice from ancient Athens would likely be a slave. Most women would have no agency of any kind. Material progress means that the worst of our historical experiences have fewer and fewer modern parallels. This is indeed progress.</p><p>But the point of the thought experiment isn&#8217;t to show that material progress isn&#8217;t real. The point is to show that it&#8217;s incomplete. Throughout history, the immaterial values that defined what it meant to live a good life were just as important as any material values.&nbsp;</p><p>This also shows why it&#8217;s so difficult to compare material and immaterial progress. We can&#8217;t easily translate these historical values into our present day. And we cannot imagine what we would believe or value in any other historical setting.&nbsp;</p><h3>&#8220;Imagine you had a chance to become a vampire. Would you do it?&#8221;</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!b6Pe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!b6Pe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 424w, https://substackcdn.com/image/fetch/$s_!b6Pe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!b6Pe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!b6Pe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!b6Pe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg" width="221" height="346.43243243243245" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:522,&quot;width&quot;:333,&quot;resizeWidth&quot;:221,&quot;bytes&quot;:37629,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!b6Pe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 424w, https://substackcdn.com/image/fetch/$s_!b6Pe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!b6Pe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!b6Pe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3e5e48a-92b3-436a-997a-87986111367f_333x522.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This is the question that L.A.
Paul used to open her book <em><a href="https://www.amazon.com/Transformative-Experience-L-Paul/dp/0198777310">Transformative Experience</a></em>. You may think that this question sounds ridiculous, but it&#8217;s the exact same <em>type</em> of question that Barack Obama asked. They both have to do with the fact that some experiences are literally <em>life-changing</em>, and thus can&#8217;t be imagined.&nbsp;</p><p>Paul defines a &#8220;transformative experience&#8221; as one that fundamentally changes how we understand both ourselves and the world. After the experience, it can feel like we are a completely different person, with new beliefs and values. Paul argues that we can't ever <em>fully</em> anticipate our values on the other side of a transformative experience. In effect, not only are the outcomes unknown, they are <em>unknowable</em>.&nbsp;</p><p>Experiences that might qualify as transformative include becoming a parent, experiencing a life-threatening accident, or moving to a foreign country that speaks a different language. Or, you know, becoming a vampire.&nbsp;</p><p>We can now see how a question like <em>&#8220;When would you choose to live?&#8221;</em> is not that dissimilar to <em>&#8220;Would you become a vampire?&#8221;</em>&nbsp;</p><p>We think that living in a previous historical era would be &#8220;worse&#8221; in all sorts of ways, but that&#8217;s because we imagine experiencing it through time-travel, <strong>as if we&#8217;re bringing our modern values and beliefs with us into the past</strong>. </p><p>But this is not how history works.&nbsp;Much like becoming a vampire, we have no idea how we would experience the beliefs and values of a historical era.&nbsp;</p><p>In fact, even if you could implant your exact DNA into a historical embryo, that <strong>historical clone</strong> would still find your values and beliefs as incomprehensible as anyone else.
By being raised, educated, and socialized in that historical period, your clone would be a completely different person. And who knows, your historical clone&#8217;s life may be filled with incredible purpose and meaning, in ways that you couldn&#8217;t even fathom.</p><h3>The history of progress goes in both directions</h3><p>When we claim that the present day is the best day to be alive in history, we can only do so by projecting our values of &#8220;better&#8221; and &#8220;worse&#8221; back into history in a way that no one from that period would recognize.&nbsp;</p><p>We use our modern values to judge these historical ways of life as limited, violent, ignorant, racist, miserable, superstitious, and oppressive. And from our modern perspective, we are correct to do so. Yet however right we may be, we aren&#8217;t capturing the entire story.</p><p>As we saw above, any historical figure can just as easily project their values <em>forward</em> to judge our modern way of life. Yes, they would be amazed at our material progress, but they would be shocked to discover that everything they truly care about is nowhere to be found. And they would be appalled by most of what they would find being valorized instead.</p><p>They would see that for all of our material possessions, everything is disposable and meaningless. Everyone works endlessly at tasks they seem to hate, just so they can consume trivial entertainment in the few free hours they have left. They would find our consumerism and bureaucracy soulless and dehumanizing. They would see the lack of any real commitments, as everyone changes locations, jobs, roles, and spouses whenever it&#8217;s convenient. All their valued rituals have been reduced to frictionless commodities. Science has stripped away all traces of the mythological or symbolic, while life itself goes unexplained and free will is a delusion. 
Nothing is sacred, and nothing really matters.</p><p>This is how the history of progress goes in both directions. </p><p>It shows the danger in treating our own values as some kind of <strong>objective measuring stick</strong> we can use to judge all other historical eras. Imposing our modern values on history is just as likely to blind us to the flaws of these values as it is to reveal their superiority. It also closes us off from considering the vast spectrum of historical values that reveal the full range and vibrancy of the human condition.&nbsp;</p><p>Even worse, it forces us to recognize that most of these values now seem lost to us, with no possibility of returning.</p><h3>&#8220;After this, nothing happened&#8221;</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!s0mw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!s0mw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 424w, https://substackcdn.com/image/fetch/$s_!s0mw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 848w, https://substackcdn.com/image/fetch/$s_!s0mw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 1272w, 
https://substackcdn.com/image/fetch/$s_!s0mw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!s0mw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png" width="468" height="407.0388349514563" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1075,&quot;width&quot;:1236,&quot;resizeWidth&quot;:468,&quot;bytes&quot;:2627722,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!s0mw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 424w, https://substackcdn.com/image/fetch/$s_!s0mw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 848w, https://substackcdn.com/image/fetch/$s_!s0mw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 1272w, 
https://substackcdn.com/image/fetch/$s_!s0mw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e521c71-bfe5-4286-813c-ed81ae61787f_1236x1075.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Chief Plenty Coups cares not for your material progress</figcaption></figure></div><p>The clash between American Indians and European colonizers is too vast and tragic to be reduced to a thought-experiment.
But it is a unique historical encounter that can, perhaps, help us draw some distinctions between material and immaterial progress without the need for time-travel.</p><p>In his book <em><a href="https://www.amazon.com/Radical-Hope-Ethics-Cultural-Devastation/dp/0674027469">Radical Hope</a></em>, Jonathan Lear tells the story of <a href="https://en.wikipedia.org/wiki/Plenty_Coups">Plenty Coups</a>, the last great chief of the <a href="https://en.wikipedia.org/wiki/Crow_people#History">Crow Nation</a>. The Crow people faced the obliteration of their way of life due to the encroachment of white settlers and U.S. government policies of the late 19th century.&nbsp;</p><p>The loss of buffalo and the end of tribal warfare meant that everything the Crow understood about living a good life disappeared. The totality of this loss was revealed by Plenty Coups later in his life, after decades of successfully assimilating with modern American society:&nbsp;</p><blockquote><p>&#8220;When the Buffalo went away the hearts of my people fell to the ground, and they could not lift them up again. After this, nothing happened.&#8221;</p></blockquote><p><em>Nothing happened</em>. This is a remarkable statement. The Crow notion of courage was completely defined by their warrior culture and the rituals of the big hunt. Once it was shorn from all tribal context, courage was no longer a meaningful concept that could be enacted. The Crow as a subject was no longer capable of living a Crow life. From that point forward, it was as if Crow history had ended.</p><div><hr></div><p>The American Indians were certainly impressed by rifles and metal goods and other forms of European material progress, and they often incorporated technology when it <em>supported</em> their values. But no amount of material progress would have ever compelled them to <em>sacrifice</em> their values. 
In fact, they saw in European societies an <strong>immaterial regression</strong>.</p><p><em><a href="https://www.amazon.com/Dawn-Everything-New-History-Humanity/dp/0374157359">The Dawn of Everything</a></em> is a sprawling book that posits the &#8220;indigenous critique&#8221; as an important contributor to Enlightenment thinking. The authors quote <a href="https://en.wikipedia.org/wiki/Kondiaronk">Kondiaronk</a>, a Huron chief known for his eloquence, ruthlessly assessing the European society of his day:</p><blockquote><p>&#8220;I have spent 6 years reflecting on the state of European society and <strong>I still can&#8217;t think of a single way they act that is not inhuman</strong> and I generally think this can only be the case as long as you stick to your distinctions of &#8220;mine&#8221; and &#8220;thine.&#8221; I affirm that what you call &#8220;money&#8221; is the devil of devils, [...] the source of all evils, the bane of souls and slaughterhouse of the living. To imagine one can live in the country of money and preserve one&#8217;s soul is like imagining one can preserve one&#8217;s life at the bottom of a lake.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></blockquote><p>The same book quotes Benjamin Franklin, who observed these European values being constantly rejected by both American Indians and Europeans:</p><blockquote><p>&#8220;When an Indian child has been brought up among us, taught our language and habituated to our customs, yet if he goes to see his relations and make one Indian ramble with them, <strong>there is no persuading him ever to return</strong> [&#8230;] When white persons of either sex have been taken prisoners young by the Indians, and lived a while among them, tho&#8217; ransomed by their friends, and treated with all imaginable tenderness to prevail with them to stay among the English, yet in a short time <strong>they become disgusted
with our manner of life</strong>, and the care and pains that are necessary to support it, <strong>and take the first good opportunity of escaping</strong> again into the woods, from whence there is no reclaiming them.&#8221;</p></blockquote><p>Again, the point here isn&#8217;t to romanticize the American Indian way of life or make declarations of &#8220;better&#8221; or &#8220;worse&#8221; across cultures or history.&nbsp;</p><p>The point is to recognize how beliefs and values can be utterly contingent on history and culture. Of course, that&#8217;s precisely how moral progress happens&#8212;all historical change carries the possibility that different values will find different forms of expression. But it&#8217;s worth grappling with the idea that something might be lost when certain values are no longer possible.&nbsp;</p><p>The values of the American Indian not only conflicted with the European way of life, they had no possibility of existing within it. Is that progress? Perhaps, but perhaps not.</p><h3>The case for (im)material progress</h3><p>What would <strong>immaterial progress</strong> even mean? If I had to offer a working definition, I&#8217;d go with:</p><p><em>The capacity of a society to explore and engage with the broadest range of ultimate concerns, both individually and collectively, in the pursuit of human flourishing.</em></p><p>Immaterial progress happens when more members of the society have more resources to discover and engage with the things that matter most to them.</p><p>This kind of progress shouldn&#8217;t come at the expense of material progress, or vice versa. They would be recognized as complementary dynamics. Without the immaterial, material progress can foreclose possible values. Without material progress, the immaterial can&#8217;t fully actualize. Instead of <strong>either/or</strong>, it needs to be <strong>both/and</strong>, coming together to inform a broader and richer definition of progress.
</p><p>In effect, we&#8217;re making the case to join both types together&#8212;for <strong>(im)material</strong> progress.</p><p>What would a society look like that focused on (im)material progress? For many this might look like more religious or communal participation. But it need not be strictly conservative or traditional. It could be akin to Borgmann&#8217;s <a href="https://www.amazon.com/Technology-Character-Contemporary-Life-Philosophical/dp/0226066290">focal practices</a>, Aristotle&#8217;s <a href="https://en.wikipedia.org/wiki/Eudaimonia">Eudaimonia</a>, or the Japanese concept of <a href="https://en.wikipedia.org/wiki/Ikigai">Ikigai</a>. For some it might be working to further material progress.&nbsp;</p><p>Such a society would incentivize innovation that promotes engagement with the immaterial. It would recognize that although technology alone can&#8217;t create meaning or purpose, it does play a vital role in shaping the constraints and possibilities that each of us has to explore and define our own deep engagements with life, purpose, and meaning.&nbsp;It would carefully try to incorporate more of the immaterial into the market and economy.</p><p>It would encourage more social acceptance for groups and collectives to live according to strong internal values. It would support efforts to maintain a diversity of values in the face of a globalist, homogenizing, and increasingly connected world.</p><p>Ultimately, it would encourage each of us to deeply engage with what <a href="https://en.wikipedia.org/wiki/Paul_Tillich">Paul Tillich</a> called our <em><strong>ultimate concerns</strong></em>. Tillich argued that an ultimate concern is what gives meaning to life and should be the focus of our entire being. It becomes the criterion by which we judge and prioritize all other aspects of our life. This is not limited to religious belief.
It can encompass any deeply held value or priority that shapes our existence.</p><p>Quite simply, (im)material progress would be a mark of an advanced civilization. To put more material progress in service of maximizing our immaterial engagement would be like an ascension up Maslow&#8217;s hierarchy of needs, at a societal level.&nbsp;</p><p>(Im)material progress is the type of progress we can all get behind.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Although some do, pointing to material consequences like environmental devastation, colonization, species extinction, etc.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>There is some <a href="https://en.wikipedia.org/wiki/Kondiaronk#Oratory">controversy</a> about the legitimacy of critiques and how much European critics embellished them for their own purposes, but the frequency and nature of
the critique is generally accepted.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Our Planetary Predicament]]></title><description><![CDATA[The final boss mode of human coordination]]></description><link>https://www.techforlife.com/p/our-planetary-predicament</link><guid isPermaLink="false">https://www.techforlife.com/p/our-planetary-predicament</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Thu, 25 Jan 2024 17:04:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ToKf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ToKf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ToKf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ToKf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ToKf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ToKf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ToKf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png" width="432" height="432" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:432,&quot;bytes&quot;:2313701,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ToKf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ToKf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ToKf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ToKf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d77c64e-c614-4083-85bf-33853a1aedcc_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It can be fun to imagine history as a series of upgrades to our <strong>social operating system</strong>.&nbsp;</p><p>Each upgrade represents a major advance in human coordination. Better software changes how information is generated and organized. New features unlock new ways to organize society around greater degrees of complexity. 
Communication upgrades expand the reach and powers of cooperation. In this way, a new OS can change everything. And in the process, the users change as well.</p><p>According to this metaphor, new epochs are written into the history books one upgrade at a time. The first stable OS was <em>agriculture</em>, launching the early great civilizations. The interface was primitive, but soon <em>organized religion</em> upgrades helped scale the user base. The improved networking capacity of the <em>printing press</em> ushered in the <em>Reformation</em> and the <em>Enlightenment</em>. Then a new algorithm called <em>capitalism</em> combined with powerful microprocessors to drive the <em>Industrial Revolution</em>, working so well that our circuit boards started overheating&#8230;</p><p>Ok, this is getting too cute for its own good, but you get the idea.&nbsp;</p><p>What I like about the OS metaphor is that it accounts for <em>both</em> humans and technology as the primary movers of history.&nbsp; Starting around 500 years ago, these forces have become increasingly intertwined as historical agents of change. Since then, history can be seen as a story of human societies <em>co-evolving</em> with technology, all for the purpose of managing coordination challenges.</p><p>Of course not every upgrade goes smoothly. Sometimes coordination can go backwards. And any feature that allows for new modes of coordination will invariably bring new problems with it. In this way, the scale of our problems will always keep pace with our capacity to solve them. Whichever one happens to be ahead at any given moment is what determines our felt sense of <em>progress</em>. </p><p>It&#8217;s easy to look around and think that our current operating system&#8212;globalization, free-market capitalism, even the nation-state&#8212;is past due for an upgrade. Bugs and viruses seem rampant. It&#8217;s running on legacy software that seems increasingly out of date. 
A few users have hacked the system to hog all the memory. And customer support seems to be getting progressively worse for the rest of us.</p><p>In other words, more of us are sensing that our problems are exceeding our capacity to solve them, and in many cases to an alarming degree. Our problems and our coordination are getting so far out of balance that some of us fear that an upgrade may not be enough.</p><p>Sometimes, you may need to rewrite the entire source code.</p><h3>Our new planetary predicament</h3><p>By now you&#8217;ve heard of the <strong>Anthropocene</strong>, the proposed label to <a href="https://education.nationalgeographic.org/resource/anthropocene/">define the current geological era</a> of our planet.&nbsp;</p><p>While previous eras were marked by geophysical causes like <a href="https://en.wikipedia.org/wiki/Cretaceous">asteroid impacts</a> or <a href="https://en.wikipedia.org/wiki/Mesozoic">plate tectonics</a>, those forces no longer qualify as the greatest agent of geological change. That title now belongs to us, human beings.&nbsp;</p><p>Humanity now represents a force beyond anything this planet has experienced in its 4.5 billion years of history. We change the chemistry of the oceans, drive species to extinction, and alter the composition of the atmosphere. We have remade the face of the Earth in our image. Our livestock outweigh wild mammals by a factor of 15 and our farms take up half of all habitable land. The weight of our own production now <a href="https://www.nature.com/articles/s41586-020-3010-5">exceeds all living biomass</a>.&nbsp;</p><p>Much of this has happened in the <a href="https://en.wikipedia.org/wiki/Great_Acceleration">last 70 years</a>. We&#8217;ve only really started noticing it in the last 30. From a planetary perspective, this is instantaneous.&nbsp;</p><p>Not only is this dynamic new for the planet, it&#8217;s also new for <em>us</em>. What does the Anthropocene mean for what it means to be human?
How does it affect the human condition to be in contact with forces so much bigger than we are?&nbsp;How do we think of human history when our future becomes so contingent on the past?</p><p>Philosophers, historians, and anthropologists who grapple with these questions have a name for this new perspective: <strong>the planetary</strong>.</p><p>The planetary tells a story that is much bigger than humanity. It is grounded in <a href="https://en.wikipedia.org/wiki/Earth_system_science">Earth systems science</a>, which seeks to understand Earth through a holistic view of all the dynamic forces that affect it. It recognizes humanity as a vital but small fraction of the life that exists on Earth, all in radical interdependence with each other. It re-centers the planet beyond the narrow boundaries that we&#8217;ve artificially imposed on it.</p><p>We can better understand the <em>planetary</em> by contrasting it with the <em>global</em> (as in the modern concept of <em>globalization</em>).</p><p><em>Globalization</em> is a 500-year-old story with humans at its center. The role of Earth is seen as an unlimited resource, one that is uniquely ours and whose rightful place is in service to our relentless march of progress. The future of the Earth depends on its potential to sustain our human lives and continue our human projects.&nbsp;</p><p>The <em>planetary</em> is a billion-year-old story with all of life at its center. Humans are completely dependent on the Earth and its ecosystems, just like every other life form. The future of the Earth depends on its potential to be habitable for <em>all</em> life, not just human life.</p><p>Of course, the <em>globe</em> and the <em>planet</em> are not mutually exclusive. Technology connects them together in an increasingly intimate relationship.
In this way, the <em>planet</em> recognizes the agency of the <em>globe</em>, but it also redefines it beyond anything that humanity will ever completely understand or control.</p><p>The planetary confronts our technological progress with a tragic irony. On the one hand, our elevation to a planetary force is undeniably a spectacular achievement. In just a few millennia a puny, upright hominid somehow transformed into a geophysical force. On the other hand, those same forces now threaten to bring that evolution to a halt.&nbsp;</p><p>In that sense, 1784 marks the paradoxical origin of two fateful paths. This was the year that carbon from industrial steam engines began to settle into the Earth&#8217;s strata and join our planet&#8217;s permanent record. The Industrial Revolution thus marked the beginning of both material progress <em>and</em> existential threat. These paths followed similar trajectories of exponential growth&#8212;one up, the other down&#8212;that may yet converge to zero.</p><p>In all these ways, the planetary confronts the human project with forces and timescales beyond anything we&#8217;ve ever had to consider. It&#8217;s as if we&#8217;ve entered a new reality, one that our OS was not designed for. The <em>planetary</em> simply overwhelms our <em>global</em> operating system.&nbsp;</p><p>This is our planetary predicament.&nbsp;</p><h3>Final boss mode</h3><p>Planetary problems are what I call &#8220;Ostrom Complete&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. They require solutions that have zero room for collective action problems, tragedies of the commons, or any other failures of coordination. If we somehow gain the ability to solve problems at the planetary scale, then we will have achieved peak human coordination.
At that point, no problem could exist where a potential solution would be limited by our ability to coordinate.</p><p>To appreciate this, we must first understand exactly how the planetary challenges our existing capacity to coordinate.</p><p><strong>The planetary demands a single voice</strong>. The planet interfaces with humanity as a single and unified species. It does not care about nations, ethnicities, or genders. There is only one &#8220;carbon budget&#8221; for humanity to work with. Global warming imposes the same timelines on all of us. The planetary speaks to &#8220;us&#8221;, but there is no &#8220;us&#8221; to respond, and history tells us that there never has been. What in our current operating system can enable us to speak as a single species?</p><p><strong>Yet the planetary magnifies our differences.</strong> Humans are different in fundamental ways. Politics exists to navigate those differences, but it cannot erase them. There is no confrontation with the planetary that does not split us in response: between the rich and the poor, the north and the south, the developed and the developing. Any pursuit of climate justice that prevents us from uniting is inherently self-defeating, yet these differences can&#8217;t be erased simply by the need to unite. This is a new, unresolvable dimension of our human condition.</p><p><strong>The planetary breaks our notions of causality.</strong> How could a skeptic be convinced that global warming is real? When did it start? How can we prove it? The planetary is too big for our simple notions of causality. You can&#8217;t point at the planetary directly. The climate cannot be reduced to a number; global warming is only revealed through aggregate statistics, computational models, and big data. The planetary defies cause-and-effect and makes trial-and-error impossible.
How do you run a trial on the entire climate?&nbsp;</p><p>Even worse, every planetary response is a lagging one&#8212;it&#8217;s already here, its impact already begun, its causality obscured in an infinite chain of actions, reactions, and counter-reactions. We discover microplastics when they show up in our bloodstream. We notice the ozone layer by the hole we created in it. We realize glaciers are melting when sea levels start rising, decades after the process first started.</p><p><strong>The planetary distorts our sense of time</strong>. We wonder with a sense of dread if a climate catastrophe is already inevitable&#8212;if something in our past triggered a future that is beyond our present to mediate. Never before have we been forced to contend with so many different timelines simultaneously, as if geological time merged with historical time to define our experiential time. We must consider the long-term impacts of our actions at timescales far beyond those of our own lifetime. The effects of a few centuries of fossil fuels will be felt by our planet <a href="https://www.amazon.com/Long-Thaw-Changing-Climate-Princeton/dp/0691169063">for millennia</a>.</p><p><strong>The planetary expands our sense of place</strong>. Our geographic responsibility expands beyond our home, our local community, and our nation to now include the world as a whole. Meanwhile, microbes, plants, and animals respond to the planetary with zero regard for our own arbitrary borders. Trying to mitigate planetary disruptions in one place will unleash unintended consequences on another. Each planetary problem will have different impacts on different places, making it even more challenging to unite as a single voice.</p><p><strong>The planetary confronts us with our radical interdependence</strong>. The planetary shifts our biological status from atop nature&#8217;s hierarchy into a radical interdependence with all of Earth&#8217;s life forms.
We now know that microbes are the majority form of life on this planet, and that each of us is made up of equal parts human cells, bacteria, and viruses. Our gut alone harbors up to 100 trillion bacteria. Covid showed us that microbes are happy to use us for their own projects of globalization.</p><p>This interdependence must be the foundation of our ethical concerns, shifting our collective responsibility towards the welfare of entire ecosystems. Non-human life must be properly valued as essential to planetary health. Yet our current OS has few means for this life to participate in our political projects.</p><p><strong>The planetary destabilizes our foundations</strong>. Pandemics show how easily our human plans can be disrupted. Wildfires, water rights, and heat waves turn the politics of human rights into the politics of survival. The weather is no longer the stable background structuring the rhythm of our lives. Natural disasters have always been exceptions against cycles of normalcy; today we wonder if disasters are the new normalcy. Governance becomes overwhelmed by an expansion of concerns it is not equipped to meet. Coordination becomes even more challenging when we can no longer depend on the foundations we assumed were stable.&nbsp;</p><div><hr></div><p>Nothing in our history has prepared us for the planetary.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> It&#8217;s as if all of the hard-won coordination tricks we&#8217;ve mastered through evolutionary trial-and-error no longer apply.
Just as we&#8217;ve started to confront our new challenges at the global level, we&#8217;ve found ourselves thrust into the final level of coordination challenges, with no new coordination tricks up our sleeve.</p><p>The planetary is the final boss mode of coordination, and we&#8217;re in it.</p><h3>Welcome to our new normal</h3><p>The idea of the <em>planetary</em> was born in ecology and made salient by global warming. But the dynamics that <em>define</em> the planetary are not limited to climate change. They will define more and more of the major challenges we face, including problems outside of ecology.&nbsp;</p><p>All of our advanced technologies like AI, cognitive augmentation, and genetic engineering are also creating challenges on a planetary scale. Much like global warming, they will demand a single human voice while amplifying our political differences. They will have vast ecological consequences, both intentional and not. They will defy our traditional notions of time, space, and causality.&nbsp;</p><p>In fact, my contention is that <strong>the planetary is fundamentally a technological phenomenon</strong>. Technology has created such an intimate relationship between the <em>globe</em> and the <em>planet</em> that technology and ecology have become two sides of the same planetary face.&nbsp;</p><p>Global warming makes this clear. Technology first initiated it, then revealed it, and now completely dictates any potential responses we may have to it. Projects like reforestation or wilderness preservation depend on technology continuing to make our productive lands even more productive. We&#8217;re in a race to make clean energy cheap enough that economic self-interest becomes our only necessary coordination strategy. Even the most radical de-growth agenda would utterly depend on technology.
After all, minimizing technology requires technology to coordinate policies, enforce reductions, and confirm impacts.&nbsp;</p><p>This turns human-induced climate change from an exception into a portent, the first instance of a new class of planetary problems entangling ecology with technology. Even if global warming were resolved tomorrow, at some point an ice age will certainly recur, due to nothing more than natural geophysical forces. Does anyone think we will stand idly by while nature blithely converts 50% of the northern hemisphere into glaciers? No. We will fashion ourselves as climate custodians in service to all of the Earth&#8217;s life forms threatened by ice, and geo-engineer our way back to the balmy climate of the Holocene.</p><p>Likewise, technology is becoming more entangled with ecology. We&#8217;ve already wired the globe many times over and will continue to do so, turning Earth itself into the ultimate connected device that contains all connected devices. Our most advanced technologies&#8212;server farms, robotics, semiconductor fabrication&#8212;are transforming ever more energy, water, and natural resources into their preferred environments, ones that are inorganic, cold, and sterile&#8212;environments utterly hostile to life itself.</p><p>We increasingly pursue technologies that engage directly with the planet and its life. We genetically sterilize mosquitoes, we mine deep into the Earth&#8217;s crust to unlock geothermal heat, we engineer pathogens for science. We barely pretend to understand the intended consequences of actions like these, much less the unintended ones. What will the microplastics of the future be?</p><p>Beyond the ecological entanglement, the challenges of advanced technologies will also require us to somehow resolve our differences to act as one voice. Genetic engineering and cognitive augmentation could create distinctions between winners and losers great enough to define different species.
How will it work to have a Chinese AI, a European AI, and an American AI, if an exponential take-off invariably leads to only one winner? These are challenges we can only navigate as one species.</p><p>More of our problems are reaching a planetary scale, regardless of where they land on the spectrum between technology and ecology. There is no escaping our new &#8220;planetary age&#8221;. Welcome to our new normal.</p><h3>Our planetary OS</h3><p>How do we feel about our current OS rising to the challenge of the planetary age?</p><p>Do we really think we&#8217;re just an upgrade away? That all it will take is a minor patch to increase our capacity to coordinate? Or should we expect something fundamentally different?&nbsp;</p><p>Two examples from recent articles can help clarify the dilemma.</p><p>First, consider the nation-state. <a href="https://www.noemamag.com/governing-in-the-planetary-age/">This article</a> argues that nation-states alone cannot effectively manage the complexities of the planetary age. As we&#8217;ve seen, they aren&#8217;t big enough to unite as a single voice. But they also aren&#8217;t small enough to handle the impacts that will be unique to every locality. In terms of climate response, Minnesota may have more in common with Moscow than it does with Miami.</p><p>Second, most suggestions for global coordination seem to be minor upgrades to our current OS. Yet you can see these suggestions strain against reality. <a href="https://www.foreignaffairs.com/world/artificial-intelligence-power-paradox">Consider this article</a> from an AI founder and international expert advocating for global AI governance. It includes caveats like the following (emphasis mine):</p><blockquote><p>AI governance must also be as <strong>impermeable</strong> as possible. [&#8230;] <strong>a single breakout</strong> algorithm could cause untold damage.
[&#8230;] it must be <strong>watertight</strong> everywhere [&#8230;] <strong>A single loophole</strong>, weak link, or rogue defector will open the door to widespread leakage, bad actors, or a regulatory race to the bottom.&nbsp;</p><p>In addition to covering the entire globe, AI governance must cover the <strong>entire</strong> supply chain[...], <strong>every</strong> node of the AI value chain, from AI chip production to data collection, model training to end use, and across the <strong>entire</strong> stack of technologies used in a given application.</p></blockquote><p>Quick, name any system that could ever be characterized as &#8220;watertight&#8221; or that could comprehensively cover an entire <em>anything</em>. These proposals amount to little more than &#8220;do a much better version of what we do now, but with zero margin for error&#8221;. And then, perhaps, hoping for the best.</p><p>This is what it sounds like to push an OS beyond its breaking point. It&#8217;s attempting to solve problems our current features were never designed for. I&#8217;m not suggesting there isn&#8217;t valuable thinking here, but any viable planetary response must start with recognizing the mismatch between the caliber of problems and our capacity to solve them.</p><div><hr></div><p>Of course it&#8217;s trivial to diagnose all the issues with our current OS. It&#8217;s something else entirely to propose a viable alternative.&nbsp;</p><p>So what would a viable OS&#8212;one that can reconcile planetary problems with human flourishing&#8212;even look like?&nbsp; Can we even imagine it?&nbsp;</p><p>If you look at recent science fiction, you might think that it&#8217;s not even possible. A common complaint is that SciFi has become <a href="https://www.wired.com/2014/08/stop-writing-dystopian-sci-fiits-making-us-all-fear-technology/">increasingly dystopian</a> in recent decades. 
Another common observation is that SciFi plots seem to take place in the <em>present day</em> or in the <em>far-future</em>, but rarely in the <em>near-future</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> It&#8217;s as if the near-future is avoided because it&#8217;s too hard to imagine. We can&#8217;t see our current OS surviving much longer, yet we can&#8217;t imagine a future OS capable of addressing our planetary predicament.&nbsp;</p><p>What does that say about our odds of implementing a future that our greatest SciFi minds can&#8217;t even imagine? Is there something that explains this?</p><p>A grim possibility is that perhaps our imagination is running up against the limits of our future reality. This would be a point in favor of the &#8220;great filter&#8221; explanation for the <a href="https://en.wikipedia.org/wiki/Fermi_paradox">Fermi Paradox</a>. The reason we see zero other signs of life in our near-infinite universe is that advanced technology acts as a &#8220;filter&#8221; that no civilization can move beyond. And maybe that also explains the gap in our science fiction: we can&#8217;t imagine what isn&#8217;t possible. Yikes.&nbsp;</p><p>The better answer is to understand that <a href="https://fs.blog/karl-popper-mistake-of-historicism/">historical determinism is a fallacy</a>, and that none of this should be taken as some kind of destiny. We have no way of predicting where <a href="https://www.youtube.com/watch?v=SVgGYQ_5ID8">the growth of knowledge</a> will take us, and how the space of solutions might open up accordingly.
Imaginative thinkers are working on adding to our knowledge of the planetary as we speak, and new ideas are beginning to form.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>We should appreciate the challenge of our planetary predicament, and use it as inspiration to devote more of our talent and resources towards improving the one skill that is the hallmark of our species and critical to our future: our capacity to coordinate.&nbsp;</p><p>Whatever our next OS looks like, it will need to unlock powers of coordination that we have very few precedents for. Yet any chance we have to flourish in this new planetary age will depend on it.</p><p>We better get to work.</p><div><hr></div><p><em>This is the second article in a series exploring <strong>coordination</strong> and technology. 
The first article explored <a href="https://techforlife.substack.com/p/our-coordination-paradox">our coordination paradox</a>.</em></p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This is analogous to <a href="https://simple.wikipedia.org/wiki/Turing_complete">Turing Complete</a>, the computer science notion of universal computability. Any computational machine that is Turing Complete is capable of computing anything which is computable. So any collective that is Ostrom Complete is capable of solving any problem which is solvable, regardless of the coordination challenges. The name, of course, swaps Turing for <a href="https://www.amazon.com/Governing-Commons-Evolution-Institutions-Collective-dp-1107569788/dp/1107569788/">Elinor Ostrom</a> (the queen of coordination). For example, is your family Ostrom Complete? My kids definitely aren&#8217;t!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Well, perhaps the atomic bomb? It doesn&#8217;t feel quite analogous. 
After WWII, the coordination challenges were isolated to two Cold War superpowers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>As in a few decades or centuries from now.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>I've found <a href="https://www.amazon.com/gp/product/022673286X">Dipesh Chakrabarty</a>, <a href="https://www.noemamag.com/planetary-sapience/">Benjamin Bratton</a>, and <a href="https://www.amazon.com/Hyperobjects-Philosophy-Ecology-after-Posthumanities/dp/0816689237">Timothy Morton</a> particularly insightful in thinking about the planetary.</p></div></div>]]></content:encoded></item><item><title><![CDATA[A Neo-Romantic Rebellion]]></title><description><![CDATA[A very weird AI prediction]]></description><link>https://www.techforlife.com/p/our-neo-romantic-rebellion</link><guid isPermaLink="false">https://www.techforlife.com/p/our-neo-romantic-rebellion</guid><dc:creator><![CDATA[R.B. Griggs]]></dc:creator><pubDate>Thu, 11 Jan 2024 19:59:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/841f391a-6a0b-42ee-8be8-88c828d207bf_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most technology predictions about AI are boring. They are themselves predictable because they focus on the wrong half of the equation. They predict what is going to change about <em>technologies</em>, rather than what is going to change about <em>humans</em>. More predictions should recognize that humans will always be weirder than technology. &nbsp;</p><p>Fortunately, we have an incredible resource of weird human responses at our disposal: <em>history</em>. 
What is history, if not the human story of weird responses to novel stimuli?</p><p>We often use history to find parallels for modern technology. So is there a historical parallel that might help us understand weird possible responses to AI? I believe that there is such a movement, and that it can provide valuable insights.</p><p>It&#8217;s called Romanticism.</p><h3>Wait, what is Romanticism anyway?</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XKd-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XKd-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XKd-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XKd-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XKd-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XKd-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg" 
width="274" height="375.6993865030675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:894,&quot;width&quot;:652,&quot;resizeWidth&quot;:274,&quot;bytes&quot;:134324,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XKd-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XKd-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XKd-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XKd-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F005de0d0-cd03-4f5e-b708-71ffbc2cd37d_652x894.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>When most people think of Romanticism, they think of poets like Wordsworth and Coleridge pioneering new poetic forms to explore the sublime depths of nature and emotion. Their &#8220;lyrical ballads&#8221; are seen as part of an artistic crusade, a spiritual and moral response to the grim march of the Industrial Revolution.</p><p>But Romanticism was much more than English poets, and it had a much bigger target than the Industrial Revolution. It was reacting to a new vision of the human condition that was at the very center of the Enlightenment project.</p><p>It&#8217;s easy to forget just how much the Enlightenment turned the human condition upside down. It started with Descartes and Newton, who rewrote the rules of the universe with reason as their only guide. Newton replaced divine intervention with mathematical certainty. Descartes put the self at the heart of modern philosophy by doubting everything but his own doubt.
Locke and Voltaire recast society on secular foundations. Kant placed rational thought at the center of moral law.</p><p>Reason became exalted above tradition, religion, and history. Knowledge became the ultimate virtue, and the pursuit of universal truths became the driving aspiration. Surely every field could apply Newton&#8217;s methods to discover laws with similar power and precision. Reason became the master key that could unlock all the mysteries of the universe.</p><p>Well, maybe not <em>all</em> the mysteries. History soon provided a harsh reality check in the form of the French Revolution. After many heads were dispatched from their bodies, what started with revolutionary ideals ended with the rise of Napoleon and even greater autocratic rule. While Enlightenment proponents saw only a perversion of rational principles, the damage was done. Critics found new resolve to question such unbridled faith in reason.</p><p>Rebellions began to form. It was in Germany, not England, that the most influential response took root. During a time of relative isolation, German intellectuals were outsiders looking in on a transformed Europe. They were not encouraged by what they saw, even beyond their inherent dislike of all things French. This atmosphere set the stage for a new intellectual movement, one primed to be skeptical of the age of reason.</p><p>How did these &#8220;Romantics&#8221; respond to the Enlightenment vision of man? They rejected all of it.</p><p>They didn&#8217;t see reason and knowledge as keys to universal truths; they saw them as <em>prisons</em>. Truth didn&#8217;t come from reason; it came from <em>beauty</em>, and it was the job of the artist to find it. Art wasn&#8217;t about revealing ideal perfections hidden in nature.
The point of art was <em>creation</em>&#8212;to bring something new into the world, to will your very <em>spirit</em> into existence.&nbsp;</p><p>For the Romantics, the only virtues that mattered were freedom and authenticity, and they only felt free when they were aligned with the self-creation of the universe. They didn&#8217;t care about structure or logic or ideals. To be human was to express your deepest beliefs, whatever the consequences. Compromise was cowardice. The Romantics turned fanaticism into a virtue; anything was justified as long as it was authentic.</p><p>Yes, this freedom took on odd forms. The Romantics were known for scandalous love affairs, graveyard visits at night, drugs, flamboyant lifestyles, seances, and extended trips into the untamed wilderness. Not all of it was healthy. But in the face of the Enlightenment, it at least made sense. Freedom was everything.&nbsp;</p><p>The legacy of Romanticism is still with us. <a href="https://www.amazon.com/Roots-Romanticism-Isaiah-Berlin/dp/0691086621">Isaiah Berlin</a> saw in Romanticism &#8220;the whole notion of plurality, of inexhaustibility, of the imperfection of all human answers and arrangements; the notion that no single answer which claims to be perfect and true, whether in art or life, can in principle be perfect and true.&#8221; It&#8217;s an idea of liberalism based not just in universal human rights, but in recognizing our unavoidable differences, and learning to tolerate them with decency.&nbsp;</p><h3>How does this relate to AI?</h3><p>Here&#8217;s where the parallels to AI get interesting. AI isn&#8217;t just a technology. Much like the Enlightenment, it is a confrontation with a new idea of what it means to be human. While the Romantics responded to a vision of humanity reduced to reason, AI is confronting us with a vision of humanity reduced to data.</p><p>Consider the human condition from the AI&#8217;s perspective. The human is defined only by what is legible to the machine. 
There is nothing sublime about humans that can&#8217;t be absorbed by a language model. There is nothing sacred that can&#8217;t be recreated by an algorithm. Human culture is a commodity. Shakespeare and the <a href="https://twitter.com/tqbf/status/1598513757805858820">King James Bible</a> are just affectations to apply with a prompt, like an actress trying on accents. Art is just training data for machines that will make it better.</p><p>What&#8217;s left after the machines have digitized everything they can of humanity? Yes, this may be exaggerated for effect, but admit it&#8212;there&#8217;s something in you that wants to reject this, some instinct that revolts at the human spirit being reduced to mere data.</p><p>It doesn&#8217;t take much of a leap to imagine this feeling becoming the spark for something bigger, something similar to the Romantic movement. What would such a modern-day version of Romanticism look like? How would we seek to reassert the sacred and sublime beyond the machine? What new art forms would be needed to express the inexpressible?</p><p>Let&#8217;s imagine our Neo-Romantic rebellion.</p><h3>Welcome to our Neo-Romantic future</h3><p>Imagine: it is 2025, and the world is barely recognizable.</p><p>The AI revolution of 2024 turned out to be a precursor to an even more profound upheaval&#8212;a revolution in the human condition itself. Of course it began with language.</p><p>It started in a Munich beer hall when a group of linguists had an idea: could human communication be made completely illegible to machines? They began hacking on a new language to find out. The prototype was called <em>singslang</em>, and it <em>worked</em>. The language wasn&#8217;t just verbal; it combined clicks, undulations, and hand signals with words that were chanted or sung. Engineers could not translate it. No model could be trained on it.</p><p>But there was a catch. You could only use it in person.
It required <em>embodied coordination</em>, where participants synced on a <em>relational</em> key that defined how the different parts of the syntax affected the meaning. It made each communication unique, and without the sync it was just weird-sounding nonsense.</p><p>It was tricky to pick up at first, but the syntax was forgiving and the emphasis on semantics made it easy to grasp the basics. More than anything, it was <em>fun</em>. The linguists knew they were on to something when the engineers started playing with it in their free time. Soon the language began to take on a life of its own, and its name was quickly shortened to <em>slang</em>.</p><p>The artists caught on immediately. You looked pretty silly when you <em>slang</em>, but that was the point. It released something primal. <em>Slang</em> became a new kind of poetry. Recorded <em>slang</em> couldn&#8217;t be decrypted, so key stories had to be memorized and repeated. New oral traditions formed, along with storytellers to carry them.</p><p>It all stayed local at first. Learning a new language limited how fast it could spread. Yet that was part of the appeal. It demanded effort. It felt like joining a secret guild. Soon hubs were forming in Mexico City, Vancouver, Lagos, Bangkok. Rumors began to spread and authorities became concerned.</p><p>Philosophers followed the artists. They did the philosopher thing and tried to analyze it to death. All of a sudden everyone was a McLuhan expert. But they helped define an ethos that quickly moved beyond language. This was when academics began to notice the parallels and first labeled the movement <em>Neo-Romanticism</em>.</p><p>That&#8217;s when the hackers flooded in. They took the ethos and weaponized it. A digital commons formed to create new protocols for ephemeral communication. Fashion started subverting bio-markers with masks and holograms. LANyards became the new hot device to sync <em>slang</em> over local area networks.
Even crypto found use cases, with data coalitions licensing creative commons for AI training.</p><p>The religions were next. New ones started, but the oldest religions drew the most attention. The ancient rituals and mystical practices of Kabbalah, Eastern Orthodoxy, and Sufism saw a resurgence. Religious texts were reinterpreted in <em>slang</em>. Some went even further back to our animistic roots, seeing everything as alive, relational, and dependent on a shared ecosystem.</p><p>Like all movements, the Neo-Romantics splintered into factions. The youth took it too far, trying to one-up each other in irrationality, but mainly just running naked through the streets. The <em>Illegibles</em> waged a war on surveillance in all forms. The <em>Heteros</em> sought cultural sanctuaries isolated from digital uniformity. A few boomers even resurrected their long-lost hippie dreams of off-grid communes.</p><p>Now, in 2025, the Neo-Romantic rebellion is fully here. <em>Slang</em> released something in us, challenging our most tightly held beliefs. Maybe the empirical world isn&#8217;t the only world worth knowing. Maybe the true isn&#8217;t the rational. Maybe there is something sublime in the inexplicable.</p><p>We&#8217;re starting to find the right balance with our machines. We&#8217;re happy to give the machines logic and calculation. In return, we will reclaim the paradoxes at the heart of the human journey: between reason and emotion, the conscious and subconscious, the animal and the divine.</p><p>This is the Neo-Romantic rebellion. We seek the sacred. We tell new stories. We thank the machines for showing us what we lost.</p><div><hr></div><p>History shows us that humans respond to most disruptions in very weird and unpredictable ways. AI will challenge some of our most deeply held beliefs about the human condition.
It seems natural to expect all sorts of wacky movements that will try to define and reassert the uniqueness of the human spirit.</p><p>Will the Neo-Romantic rebellion happen? Of course not. But maybe some version of it will, at least in spirit. And if it does, it will be weird predictions like this one that help us prepare for our future with AI by making us think better about today.</p><p>Regardless of whether our future is Neo-Romantic or not, I hope it includes more emotions, more paradoxes, and yes, even more irrationalities. In other words, more of the things that machines may augment but can never replicate.</p><p>And besides, compared to a paperclip apocalypse, is a Neo-Romantic future really such a bad outcome?</p>]]></content:encoded></item></channel></rss>