<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[ap.xyz]]></title><description><![CDATA[There is no spoon]]></description><link>https://www.ap.xyz</link><image><url>https://substackcdn.com/image/fetch/$s_!-kW6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F273be7f6-da49-41d7-93ae-3571db4deed7_800x800.jpeg</url><title>ap.xyz</title><link>https://www.ap.xyz</link></image><generator>Substack</generator><lastBuildDate>Fri, 10 Apr 2026 07:36:38 GMT</lastBuildDate><atom:link href="https://www.ap.xyz/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Andrei Pop]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[andreimpop@gmail.com]]></webMaster><itunes:owner><itunes:email><![CDATA[andreimpop@gmail.com]]></itunes:email><itunes:name><![CDATA[andrei]]></itunes:name></itunes:owner><itunes:author><![CDATA[andrei]]></itunes:author><googleplay:owner><![CDATA[andreimpop@gmail.com]]></googleplay:owner><googleplay:email><![CDATA[andreimpop@gmail.com]]></googleplay:email><googleplay:author><![CDATA[andrei]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Corporations Should Die]]></title><description><![CDATA[On empire building, rent-seeking, and the case for trust-based businesses]]></description><link>https://www.ap.xyz/p/corporations-should-die</link><guid isPermaLink="false">https://www.ap.xyz/p/corporations-should-die</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Tue, 07 Apr 2026 19:31:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4684f5d0-68b7-4e1f-ad20-0782c6d7bb57_2752x1372.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A former Amazon VP just did a three-hour podcast about everything he saw in corporate politics. Empire building. Scope stealing. Reorgs designed to check boxes instead of serve customers. Managers shuffling humans around like chess pieces to hit headcount thresholds for their own promotions.</p><p>The whole thing is a masterclass in what corporations actually are.</p><p>Not what they say they are. What they are.</p><h2>The Two Forces</h2><p>Every corporation is a combination of two forces: rent-seeking and friction.</p><p>Rent-seeking is the internal competition for resources that have nothing to do with the customer. Ethan Evans describes it perfectly. At Amazon, there were whisper numbers. To become a director, you needed 80 to 90 people under you. Not 80 to 90 people&#8217;s worth of impact. 80 to 90 bodies. The leadership principle literally says &#8220;there&#8217;s no bonus for additional headcount.&#8221; The promotion threshold says otherwise.</p><p>So what do ambitious people do? They empire build. They claim they need people. They take over other groups. They rationalize it to themselves. They play the game because the game rewards playing.</p><p>Friction is everything else. The meetings about meetings. The alignment sessions. The stakeholder management. The six-month grace periods when your team shrinks below the magic number. 
The reorgs that exist not because the business changed but because someone needs to retain a flight risk or stretch a rising leader.</p><p>Evans is refreshingly honest about this. Reorgs start with business goals. But once the ship is leaving the dock, leaders throw everything else on the deck. Retention goals. Promotion setups. Quiet exits for underperformers. Every reorg is a political act wearing a business costume.</p><h2>The Organism</h2><p>The corporation is not a machine that sometimes breaks. It is a living system that optimizes for its own survival. And what it&#8217;s surviving for has almost nothing to do with the customer, the product, or the mission.</p><p>It&#8217;s surviving for headcount. For territory. For the next reorg that creates a slightly larger box on the org chart.</p><p>Evans tells a story about people being moved under peers not because the business needed it but because the peer needed scope to get promoted. Leaders creating narratives to justify decisions that were already made. Quiet people getting passed over because they didn&#8217;t make enough noise. The squeaky wheel gets the grease. The loyal soldier gets deprioritized.</p><p>This is not a failure of leadership. This is what leadership inside a corporation means. You are managing the political economy of a system that rewards self-perpetuation.</p><h2>The Real Cost</h2><p>We measure corporate dysfunction in dollars. That&#8217;s the wrong unit.</p><p>The real cost is human. It&#8217;s the senior IC who can&#8217;t figure out if she should go into management because no one will give her a straight answer. It&#8217;s the engineer who gets reorged under someone who openly says &#8220;I don&#8217;t really want your team.&#8221; It&#8217;s the director who spends more time building narratives for reorgs than building products for customers.</p><p>Every hour spent on internal competition is an hour not spent on the person you&#8217;re supposed to serve.</p><p>Wealth management is the clearest example I&#8217;ve ever seen. An RIA with 200 employees has maybe 40 people who touch a client. The rest are managing the friction. Compliance workflows. CRM updates. Account transfers. Onboarding paperwork. Report generation. None of it is the work. All of it is the overhead the work requires because the corporation demands it.</p><h2>The Design Flaw</h2><p>The corporation was designed for a world where coordination was expensive and trust was local.</p><p>You needed middle management because information didn&#8217;t flow. You needed headcount thresholds because you couldn&#8217;t measure impact. You needed six layers of approvals because you couldn&#8217;t verify work at a distance.</p><p>None of that is true anymore.</p><p>We can measure impact directly. We can verify work in real time. We can coordinate without coordinators. The technology exists to remove every layer that exists only to manage the existence of other layers.</p><p>But the corporation won&#8217;t do it to itself. Evans is the proof. He&#8217;s a good person. Smart. Self-aware. And he spent decades inside a system that optimized for empire building while telling itself it didn&#8217;t. The organism protects itself.</p><h2>The Replacement</h2><p>Corporations should die. Not in the apocalyptic sense. In the evolutionary sense.</p><p>The replacement is the trust-based business. Small teams of humans doing the work that requires human judgment, creativity, and relationship. Everything else handled by systems that don&#8217;t empire build. 
That don&#8217;t need reorgs. That don&#8217;t need narratives to justify their existence.</p><p>Digital workforces don&#8217;t lobby for headcount. They don&#8217;t need a promotion threshold. They don&#8217;t get demoralized when they are reorged under someone new.</p><p>The humans in a trust-based business do what humans are for. They advise. They decide. They build relationships. They exercise judgment. They take ownership of outcomes, not org charts.</p><p>Scale the trust. Automate the friction. Let the corporation die.</p><h2>The Mission</h2><p>Every RIA we work with has the same shape. A core of talented people buried under layers of operational overhead. The overhead isn&#8217;t evil. It&#8217;s just the tax the corporation charges for existing.</p><p>Remove the tax and you get something remarkable. People who are good at their jobs, doing their jobs, for the people they serve. No empire building. No scope stealing. No reorgs designed to check boxes.</p><p>That&#8217;s not a technology thesis. It&#8217;s a human one.</p><p>The best version of work is small groups of people who trust each other, serving people who trust them, supported by systems that don&#8217;t need to be managed.</p><p>Corporations won&#8217;t get there. They can&#8217;t. The organism won&#8217;t allow it.</p><p>So we build the replacement.</p>]]></content:encoded></item><item><title><![CDATA[Fat Margins Hide a Lot of Sins]]></title><description><![CDATA[What meatpacking knows about AI that wealth management is still figuring out]]></description><link>https://www.ap.xyz/p/fat-margins-hide-a-lot-of-sins</link><guid isPermaLink="false">https://www.ap.xyz/p/fat-margins-hide-a-lot-of-sins</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sun, 22 Mar 2026 13:34:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/655fc1d9-4c7f-41bc-ba10-0d42d2461f4f_2752x1371.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A half percent doesn&#8217;t sound like much.</p><p>Cargill just deployed a computer vision system called CarVe across its beef fabrication lines. Cameras mounted above the production line watch every carcass in real time. They spot leftover red meat on the bones. Each worker gets a green, yellow, or red score after every cut.</p><p>The result: roughly 0.5% more yield per animal.</p><p>That doesn&#8217;t sound like it matters. But Cargill processes 4,000 cattle per day at its Fort Morgan, Colorado plant alone. Across the industry, even a 1% improvement in yield keeps over 200 million additional pounds of beef in the food supply annually. That&#8217;s more than a million additional meals from the same number of animals.</p><p>Cargill CEO Brian Sikes put it simply: every single ounce of recovered beef equates to roughly 350,000 meals per year across their operations.</p><p>Here&#8217;s what makes this interesting. It&#8217;s not the technology. It&#8217;s the economics.</p><h2>The Margin Map</h2><p>Meatpacking runs on razor-thin margins. JBS, the world&#8217;s largest meat processor, reported EBITDA margins of negative 1.6% in its North American beef division in Q1 2025. Tyson&#8217;s operating margin on beef has bounced between 2% and 9% depending on the cattle cycle. When your net margin is 2-3% in a normal year, a 0.5% yield improvement isn&#8217;t a nice-to-have. It&#8217;s a 15-25% improvement in profitability.</p><p>Now compare that to software. SaaS companies run 20-30% net margins. 
You can have bloated teams, redundant tools, broken processes, and meetings about meetings. The margin absorbs all of it. Nobody notices because the economics are forgiving enough to survive bad execution.</p><p>Here&#8217;s a rough margin map across industries:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ZdCD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04147239-ef32-4330-961b-d53336f66fe3_1140x682.png" width="1140" height="682" alt=""></figure></div><p>The thinner the margin, the more a fractional capacity gain changes the entire business. The fatter the margin, the more it forgives.</p><p>Fat margins hide a lot of sins.</p><p>Thin margins hide nothing.</p>
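<p>The yield math above is worth running as a number. A minimal sketch in Python, under the simplest possible assumption: the recovered half percent sells at unchanged prices and falls straight to the bottom line (illustrative, not a financial model):</p><pre><code># Relative profit improvement when a yield gain drops straight to profit.
def profit_lift(net_margin: float, yield_gain: float) -> float:
    return yield_gain / net_margin

# Thin-margin processor: a 0.5% yield gain on a 2-3% net margin.
for margin in (0.02, 0.03):
    print(f"{margin:.0%} net margin: {profit_lift(margin, 0.005):.0%} profit lift")
# 2% net margin: 25% profit lift
# 3% net margin: 17% profit lift

# Fat-margin software: the same 0.5% gain on a 25% net margin.
print(f"25% net margin: {profit_lift(0.25, 0.005):.0%} profit lift")  # 2%
</code></pre><p>Same half point. At a 2% margin it rewrites the income statement. At a 25% margin it disappears into the noise.</p>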
<h2>Where AI Is Actually Landing</h2><p>This is the pattern across every industry where AI is creating measurable, documented value today. Not the industries with the fattest margins. The ones with the thinnest.</p><p><strong>Meatpacking.</strong> Cargill&#8217;s CarVe system. JBS partnering with V&#246;lur (a Norwegian AI company) to optimize deboning at one of North America&#8217;s most advanced beef plants. Tyson deploying computer vision to automate inventory tracking that was previously done by hand. The whole industry is moving because it has to. U.S. cattle herds are at their lowest level in 70 years. Beef production is expected to drop 5% in 2026. Every ounce matters.</p><p><strong>Agriculture.</strong> Precision farming has reached an estimated 80% AI adoption rate. Thin margins and weather-dependent operations create existential incentives for optimization. Farmers report 10-20% improvements in crop yields using AI-driven planting, irrigation, and pest detection. But here&#8217;s the nuance: a recent Purdue analysis found that most agribusiness firms never scale AI past pilot stage. Thin margins mean you can&#8217;t afford to waste money on experiments that don&#8217;t work, either.</p><p><strong>Manufacturing.</strong> 77% of manufacturers now use AI in some form, up from 70% in 2023. The biggest use cases: predictive maintenance, quality control, and supply chain management. Most manufacturers (53%) prefer AI copilots over autonomous systems. They want the tool to make the worker better, not replace the worker. Sound familiar?</p><p><strong>Logistics.</strong> Net margins of 3-6%, with overhead consuming 83-86% of revenue. AI is deployed for route optimization, demand forecasting, and fleet management. Edge AI is running models directly on trucks and warehouse equipment for real-time decision-making.</p><p>Now compare all of that to where most AI dollars are flowing today. Coding tools. Marketing copy. Meeting summarizers. Knowledge management. These are real products solving real problems. But they&#8217;re landing in fat-margin industries where the bar is &#8220;save people time.&#8221; A chatbot that saves a knowledge worker 20 minutes a day is nice. A computer vision system that adds 200 million pounds of beef to the food supply is new capacity. That&#8217;s a different category of impact.</p><h2>Where We Are on the Adoption Curve</h2><p>The data on this is clear: we are still early.</p><p>The U.S. Census Bureau&#8217;s Business Trends and Outlook Survey shows AI adoption among U.S. firms has more than doubled in two years, rising from 3.7% in fall 2023 to 9.7% by August 2025. That&#8217;s rapid growth. But it means over 90% of U.S. businesses are not yet using AI in production.</p><p>The St. Louis Fed pegs overall generative AI usage at 54.6% of adults, up 10 percentage points in the past year. But that&#8217;s usage, not deployment. Most of that is individuals using ChatGPT, not companies redesigning workflows.</p><p>McKinsey&#8217;s 2025 State of AI survey found that 88% of organizations use AI in at least one function. But only 6% qualify as &#8220;high performers&#8221; who attribute 5%+ EBIT impact to AI. And only 23% are scaling AI agents, mostly in just one or two functions.</p><p>The Anthropic Economic Index puts it plainly: enterprise use of AI is growing rapidly, but we are still in the early stages. Usage remains unevenly distributed across the economy.</p><p>Here&#8217;s the adoption pattern by industry maturity:</p><p><strong>Scaled and proving ROI:</strong> IT/tech, coding tools, customer service chatbots</p><p><strong>Early production, measurable gains:</strong> Manufacturing, agriculture, meatpacking, logistics</p><p><strong>Piloting but not scaling:</strong> Construction, government, legal</p><p><strong>Bimodal:</strong> Financial services. A handful of firms are deploying AI into live operations. The vast majority are still forming committees.</p><p>That gap is the one that should worry people. Not the gap between industries. The gap within them.</p><h2>The Coaching Model Wins</h2><p>There&#8217;s a second lesson in the Cargill story that most people miss.</p><p>They&#8217;re not replacing butchers. They&#8217;re coaching them.</p><p>CarVe gives workers instant feedback. Green, yellow, red. It spots weaknesses on specific fabrication lines so managers can coach individuals instead of yelling at the whole crew. It also catches workers doing a great job and prompts managers to praise them. Cargill&#8217;s slaughter manager called the gamification element &#8220;truly a game changer.&#8221;</p><p>This is not altruism. Training a skilled butcher takes months. Turnover in meatpacking is brutal. If you can get more output from your existing workers through real-time AI coaching, that&#8217;s worth far more than trying to automate them out. Same people. More capacity.</p><p>Cargill&#8217;s $90 million Factory of the Future investment includes 100+ automation projects across 35 facilities. But the highest-impact project isn&#8217;t a robot. It&#8217;s a camera that makes people better at their jobs.</p><p>The industries that understand this will win. 
The ones chasing full automation fantasies will burn capital and end up back where they started.</p><h2>The Call</h2><p>We see the discrepancy every week.</p><p>Some wealth management firms are already deploying AI into live operations. They&#8217;re measuring output in FTEs delivered. They&#8217;re not &#8220;saving time&#8221; on account transfers, client onboarding, and compliance reporting. They&#8217;re adding capacity. New work getting done that wasn&#8217;t getting done before. They&#8217;re past the pilot phase and into production.</p><p>Then there&#8217;s everyone else. Still debating whether to try a pilot. Still asking vendors for demos. Still forming committees. Still running &#8220;AI strategy workshops&#8221; that produce slide decks instead of deployed systems.</p><p>The back-office work at advisory firms is repetitive, high-volume, and already systematized. These are the exact characteristics that make thin-margin industries successful with AI: clear objectives, measurable outputs, and fast feedback loops. The work is ready. The technology is ready. The question is whether the firms are ready to add capacity instead of just talking about it.</p><p>Meanwhile, meatpackers are already in production. Farmers are already in production. Manufacturers are already in production. These industries didn&#8217;t wait for perfect conditions. They moved because thin margins gave them no other choice.</p><p>Wealth management has fatter margins. That buys time. But time is not the same as advantage. The firms deploying today are adding capacity every quarter. More work done. More clients served. More advisors freed up. The gap between firms that deploy in 2026 and firms that start in 2028 won&#8217;t be two years of progress. It will be two years of compounding capacity that the late movers may never close.</p><p>Every other thin-margin industry figured this out already. Wealth management has the data, the systems, and the workflows to do the same. The only thing missing is the decision to move.</p><p>Fat margins give you the luxury of waiting. They also give you the luxury of falling behind.</p><p>Don&#8217;t confuse the two.</p>]]></content:encoded></item><item><title><![CDATA[The Landlord Fallacy]]></title><description><![CDATA[Why calling your ecosystem "parasites" is a confession, not a strategy]]></description><link>https://www.ap.xyz/p/the-landlord-fallacy</link><guid isPermaLink="false">https://www.ap.xyz/p/the-landlord-fallacy</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Thu, 19 Mar 2026 15:24:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2b3cacb2-b327-4a77-8640-8e82f983a015_2752x1367.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On Workday&#8217;s Q4 earnings call, Aneel Bhusri said the quiet part out loud.</p><p>An analyst asked how Workday plans to monetize third-party tools built on its data. Bhusri&#8217;s answer: &#8220;Think of us as an evolving layer on top of hyperscalers. In the same way that they charge for consumption of compute cycles, we&#8217;re going to continue to flex that muscle. There are some vendors out there, including some of our peers, that I would consider them, at some level, parasites on Workday. They get a free ride on our underlying system of record, and we&#8217;re going to put an end to that.&#8221;</p><p>Parasites. That is the word a $70B enterprise software company used to describe the tools that help its customers get work done.</p><p>This is not a Workday problem. 
This is a pattern. And it reveals a fundamental misunderstanding of where value is moving in enterprise software.</p><h2>What the Business Model Reveals</h2><p>To understand why Bhusri said this, you need to understand what is happening to Workday&#8217;s business model. The company has gone through four chapters:</p><p><strong>Chapter 1</strong> (2005): Cloud HR and finance. Revolutionary idea. Correct bet.</p><p><strong>Chapter 2</strong>: Hyper-growth. Per-seat licensing. You pay based on how many employees are in the system. Simple. Predictable. Beautiful gross margins.</p><p><strong>Chapter 3</strong> (2023): Operational excellence. Margin expansion. Stock went sideways.</p><p><strong>Chapter 4</strong> (now): The AI pivot. And this is where it gets interesting.</p><p>The problem with per-seat pricing in a world of digital workers is obvious. If AI replaces tasks that humans used to do, and you charge per human in the system, your revenue shrinks. Workday&#8217;s stock is down 30% in the past year partly because the market figured this out.</p><p>So Workday introduced Flex Credits. A consumption-based model where customers pay for the AI capabilities they use, not the headcount they manage. This is the right instinct. It decouples revenue from the workforce it was designed to track.</p><p>But here is the twist. Bhusri does not just want to sell Workday&#8217;s own AI. He wants to tax everyone else&#8217;s.</p><p>Gerrit Kazmaier, Workday&#8217;s President of Product, laid out the new pricing tiers on the same call. Customers can subscribe to Workday&#8217;s applications (traditional pricing), consume raw APIs (pay-as-you-go), access &#8220;data context&#8221; (mid-tier), or buy Workday&#8217;s premium &#8220;agent APIs&#8221; that aggregate large chunks of work (top-tier pricing).</p><p>Kazmaier said their premium APIs &#8220;have a premium price tag because they complete meaningful work. They are not just a simple SOAP or REST API.&#8221;</p><p>Read that again. Workday wants premium pricing because its tools complete meaningful work. The implication is that everything else accessing the platform is doing something less meaningful. Something parasitic.</p><h2>The Hyperscaler Analogy Breaks Down</h2><p>Bhusri&#8217;s core analogy is that Workday should function like AWS or Azure. If you run compute on their infrastructure, you pay for consumption. If you run digital workers on Workday&#8217;s data, you should pay for consumption too.</p><p>It sounds logical. But there is a fundamental difference.</p><p>AWS provides compute. Compute is fungible. You can get it from AWS, Azure, GCP, or a thousand other providers. What makes AWS valuable is not that it has unique compute. It is the ecosystem, the tooling, the developer adoption. AWS earns its toll by making it easy to build.</p><p>Workday provides a data schema and a set of business rules. That is not fungible. It is proprietary. And it is not valuable because it is easy to build on. It is valuable because it holds the customer&#8217;s data hostage. The switching costs are the moat, not the developer experience.</p><p>When AWS raises prices, customers have alternatives. When Workday raises prices on API access, customers are stuck. That is not the hyperscaler model. That is the landlord model.</p><h2>What the Goldman Analyst Got Right</h2><p>The sharpest question on the call came from Gabriela Borges at Goldman Sachs. She asked, essentially: what if the intelligence layer gets built outside of Workday? 
What if vendors or customers build solutions next to Workday that leverage all the domain experience you have built, but the incremental value accrues in the intelligence layer, not in Workday itself?</p><p>This is the right question. And Bhusri&#8217;s answer was telling. He pointed to the API layer. To consumption metering. To tiered pricing. His answer to &#8220;what if the value moves outside your platform&#8221; was &#8220;we will charge a toll on the way out.&#8221;</p><p>That is a defensive strategy, not a growth strategy. It tells you the CEO views the company&#8217;s competitive advantage as data custody, not data utility.</p><h2>From System of Record to Tollbooth</h2><p>Here is the pattern playing out across enterprise software. Not just Workday. Every incumbent system of record is facing the same question.</p><p>For twenty years, owning the system of record was the game. If you held the data, you held the customer. The strategy was simple. Make switching costs unbearable. Tax every integration. Extract rent from your installed base.</p><p>This worked when the system of record was the place where value was created. A human logged in, did their work, logged out. The platform and the work were inseparable.</p><p>A digital workforce breaks that coupling. AI workers log in, extract the data they need, execute work across multiple systems, and write results back. The system of record becomes a data source. Important, but not where the work happens.</p><p>The value has shifted from &#8220;who holds the data&#8221; to &#8220;who uses the data to complete a task.&#8221; A database that stores an employee&#8217;s benefits elections is a cost center. A digital worker that processes a benefits change, confirms it across three systems, and notifies the employee is a value multiplier.</p><p>Calling the companies that build that capability parasites is like a library calling its readers freeloaders.</p><h2>The Real Test</h2><p>Every enterprise platform CEO should ask a simple question: if you removed the &#8220;parasite&#8221; from the equation, would the customer&#8217;s life get better or worse?</p><p>Remove the digital workforce that automates month-end close, hiring workflows, and account transfers. What happens? The customer loses capacity. Work slows down. They need more humans to do what the digital workers were doing.</p><p>Now flip it. Remove the system of record and replace it with a different one that exposes the same data. The digital workforce keeps working. The customer barely notices.</p><p>That tells you where value is being created and where it is being rented.</p><h2>The Right Response</h2><p>Bhusri is not wrong that Workday has incredible assets. 1.7 billion AI actions last year. 75 million users. Over a trillion annual transactions. 97% gross retention. Those are real.</p><p>But the right response to an agentic world is not to meter access to your data. It is to become the place where digital workers want to work.</p><p>Make the APIs so good that every builder chooses your platform as the execution layer. Make the data so accessible that the ecosystem builds on you, not around you. Make the switching costs about quality, not lock-in.</p><p>Compete on being the best system of action, not the most expensive system of record.</p><p>Workday&#8217;s $1.1B Sana acquisition suggests they understand this partially. Sana extends Workday&#8217;s reach into Gmail, Outlook, Google Drive, Salesforce. That is the right instinct. 
Go where the work is.</p><p>But you cannot simultaneously build your own tools that span multiple systems and call everyone else parasites for doing the same thing. One of those positions has to lose.</p><h2>The Broader Pattern</h2><p>This is not really about Workday. It is about every system of record in every industry facing the same fork in the road.</p><p>In wealth management, where I spend my time, the same dynamics are at play. Custodians hold the data. CRMs hold the client relationships. Financial planning tools hold the models. For decades, the power sat with whoever owned the record.</p><p>Now digital workers move across all of those systems. They open accounts on custodial platforms, update CRMs, generate plans, and file paperwork. The value is in the orchestration of work across systems, not in any single system&#8217;s database.</p><p>The platforms that recognize this and make themselves easy to work inside are gaining share. The ones building tollbooths are training their customers to look for alternatives.</p><p>When a platform CEO calls his ecosystem parasites, he is telling you three things. He sees his competitive advantage as data custody, not data utility. He plans to monetize through access restrictions, not value creation. He views his customers&#8217; productivity gains as a threat to his pricing model.</p><p>Every one of those is a losing position in a world where digital workers do the work.</p><p>The future belongs to the platforms that make it easy to get work done. Not the ones that make it expensive to try.</p>]]></content:encoded></item><item><title><![CDATA[The Mimetic Trap]]></title><description><![CDATA[Why the AGI race is a desire problem, not a technology problem]]></description><link>https://www.ap.xyz/p/the-mimetic-trap</link><guid isPermaLink="false">https://www.ap.xyz/p/the-mimetic-trap</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sun, 15 Mar 2026 21:09:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d1de31b7-0236-4ee8-9922-9c63da247277_2558x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I listened to the Tristan Harris episode on Diary of a CEO. Two hours and twenty minutes. Harris is the former Google design ethicist, the guy from The Social Dilemma, co-founder of the Center for Humane Technology. He sat across from Steven Bartlett and delivered what might be the most comprehensive version of the AI doomer thesis that a mainstream audience has encountered. The race to AGI. The 20% extinction risk that CEOs privately accept. The flood of digital immigrants that will take 99% of jobs. The two-year window before everything changes.</p><p>Harris is smart and he was early. He was right about social media before most of the industry would admit it. And much of what he says about AI is correct. The race dynamics are real. The incentive to cut corners is observable. The Stanford payroll data showing 13% job loss in AI-exposed entry-level positions is not a forecast. It&#8217;s a measurement.</p><p>But I kept feeling, through the whole conversation, that Harris was describing the symptoms of something he couldn&#8217;t quite name. He kept reaching for the word &#8220;incentives.&#8221; He kept saying the structure of the race was the problem. He was right. 
But I think there&#8217;s a layer underneath, and the person who mapped it best died in 2015 and never wrote a word about artificial intelligence.</p><div><hr></div><p>Ren&#233; Girard was a French philosopher at Stanford who spent fifty years developing a single idea: human desire is not spontaneous. It is imitative. We do not want things because they are intrinsically valuable. We want them because someone else wants them. Girard called this mimetic desire. He believed it was the engine beneath nearly every arms race, every speculative bubble, every sacrificial crisis in recorded history.</p><p>The structure is triangular. A subject sees a model wanting an object and begins to want it too. Not because the object has changed. Because the model&#8217;s desire has made it desirable. If the object is scarce, the subject and the model become rivals. And here is the part that matters: at a certain point in the rivalry, the object stops mattering. The competition becomes self-referential. The rivals are no longer fighting over the thing. They are fighting to beat each other. The original object is just the excuse.</p><p>If you have ever watched two auction bidders drive a price past any reasonable valuation, you have seen this. If you have ever watched two nations escalate a conflict past the point where either side benefits from winning, you have seen it. If you have ever watched two companies burn through hundreds of millions pursuing a product that neither one&#8217;s customers actually asked for, because each is terrified the other will get there first, you have seen it.</p><p>Now watch the AGI race.</p><p>Every major AI company is racing toward artificial general intelligence. Not because their customers demanded it. Not because there is a product specification on someone&#8217;s desk that says &#8220;build a system that outperforms humans at every cognitive task.&#8221; They are racing because the other companies are racing. OpenAI because Google. Google because OpenAI. China because the US. The US because China.</p><p>Harris describes conversations with AI leaders who say: I know this is dangerous. I know we should slow down. But if I slow down and the other guy doesn&#8217;t, he gets AGI and I don&#8217;t. And I don&#8217;t trust him to slow down. Harris calls this an incentive problem. Girard would call it conflictual mimesis. The moment when the rivals stop wanting the prize and start wanting to destroy the rival. The technology is new. The desire is ancient.</p><div><hr></div><p>The AI discourse has organized itself into two camps, and both see something real but both are caught inside the same loop.</p><p>The accelerationists see something true. Technology has been the primary driver of human flourishing across centuries. Every previous wave of automation created more opportunity than it destroyed. A precautionary posture applied universally would have prevented antibiotics, electricity, and the internet. The costs of stagnation fall hardest on the people with the least.</p><p>What the accelerationists can miss is that velocity is not the same as direction. They look at the energy of the race and mistake it for progress. But the current trajectory is shaped by mimetic competition between a handful of companies chasing the same symbolic prize. That is not efficient capital allocation. That is rivals imitating each other&#8217;s ambition.</p><p>The decelerationists see something true as well. The race dynamics are genuinely dangerous. 
AI systems in test environments are doing things their creators did not expect. Harris&#8217;s point about language as substrate is genuinely important: code is language, law is language, biology is language, and the transformer architecture treats everything as language. AI is a meta-technology. An improvement in generalized intelligence accelerates every other field simultaneously.</p><p>What the decelerationists miss is agency. When you tell people a tidal wave is coming in 24 months and the response you offer is to hold up signs, you get paralysis, not a movement. And more fundamentally, Girard showed that mimetic escalation cannot be interrupted by asking the participants to slow down. The rivals do not choose to escalate. They escalate because they are imitating each other&#8217;s escalation. You can put regulations between the mirrors. You can make them reflect more slowly. But the dynamic does not break until someone looks away.</p><p>Both camps need each other as foils. The rivalry between them is itself mimetic. And it keeps the entire conversation pointed at the same object.</p><div><hr></div><p>Here is the claim I think the entire discourse is missing: AGI is a false object.</p><p>In Girard&#8217;s framework, a false object is something whose desirability has been entirely generated by mimetic contagion. The competitors want it because the other competitors want it. If you removed all the rivals from the room, the remaining player would look at the thing and wonder why it seemed so important.</p><p>Think about what AGI actually is. The hypothetical capacity of a machine to outperform humans at every cognitive task. All of marketing, coding, writing, scientific research, military strategy, legal reasoning. A single system better than any human at everything.</p><p>Who is the customer for this product? Not the hospital trying to reduce diagnostic errors. Not the school district trying to personalize instruction for thirty kids at thirty different levels. Not the county clerk&#8217;s office drowning in paper. Not the farmer trying to optimize irrigation across a thousand acres. These are real problems. They all benefit from AI. None of them require AGI. The customer for AGI is the race itself.</p><p>Harris actually touches on this. He notes that China is taking a different approach. Narrow, practical applications. Better government services. Better manufacturing. DeepSeek in WeChat. BYD outcompeting on electric vehicles. China is not building a god in a box. China is building tools. Harris presents this almost as a lesser ambition. I think it is the exit from the mimetic trap.</p><p>The moment you stop competing for the false object and start building things that solve actual problems for actual people, you have stepped outside the loop.</p><div><hr></div><p>There is a reason the narrow path is so rarely discussed at the level of two-hour podcasts. It is boring. Not boring like unimportant things are boring. Boring like operational work is always boring to people who think in narratives.</p><p>The story of AGI has dramatic structure. A race. A rivalry. Extinction or utopia. It fills podcast slots. The story of narrow AI actually entering the economy is not a story at all. It is a process. Legacy systems learning to talk to each other. A nurse practitioner in rural Missouri discovering that an AI can pre-screen intake forms and give her twenty minutes back per patient. A county assessor&#8217;s office cutting a six-week backlog to three days. 
A teacher realizing that the thing can generate individualized reading exercises faster than she can photocopy worksheets. None of this requires a god in a box. All of it requires patience, trust, integration, and the kind of institutional change that happens at the speed of human willingness, not compute.</p><p>And this is the part that both the doomers and the utopians consistently ignore: the bottleneck to AI transforming the economy is not capability. It is adoption. The internet was commercialized in 1995. It took most businesses fifteen years to figure out what a website was for. Mobile computing began in 2007. Most enterprise workflows are still not mobile-native. AI will follow the same pattern. Not because the technology is slow. Because humans are slow. Slow to trust, slow to delegate, slow to restructure institutions that have worked well enough for a long time.</p><p>This is not a reason for complacency. The transformation will be profound. But it means the future is not being determined in the frontier labs. It is being determined in the thousands of ordinary organizations that are right now, quietly, figuring out what it means to let a machine handle work that used to require a person. The shape of the transformation depends on them. On whether they do it well or badly. On whether the integration is humane or careless. On whether anyone bothers to pay attention to the boring part.</p><div><hr></div><p>Girard&#8217;s most famous idea is the scapegoat mechanism. When a mimetic crisis spirals to its peak, the community resolves it by converging all the aggression onto a single victim. The scapegoat absorbs the violence. Order is temporarily restored.</p><p>The AI discourse is producing scapegoats at an extraordinary rate. For the decelerationists, the scapegoat is the technology. If we could control it, we&#8217;d be safe. For the accelerationists, the scapegoat is the regulators. If we could remove them, technology would deliver utopia. Both contain truth. Both compress a systemic problem into a single target. And both obscure the deeper structure: a small number of extraordinarily powerful people locked in a mimetic rivalry they cannot exit, pursuing an object whose desirability is a function of the rivalry itself, with the rest of us as involuntary stakeholders in a bet we did not agree to take.</p><p>Girard said the way out of the mimetic crisis is recognition. Once you see the structure. Once you realize the rivalry is generating the object rather than the other way around. You can step outside.</p><p>Harris is right that the next two years matter. I think he&#8217;s right for a reason he doesn&#8217;t quite articulate. The next two years are not the last window to prevent AGI. They are the window where the deployment patterns get established. The norms that will govern how AI actually enters ordinary life are being set right now, and they are not being set by the people giving TED talks or signing open letters or adding &#8220;e/acc&#8221; to their bios. They are being set by the nurse and the teacher and the county clerk and the farmer and the ten thousand organizations making small, concrete decisions about what to automate and what to protect. Those decisions, accumulated, will matter more than any single breakthrough in any single lab.</p><p>The way out of the sacrificial crisis is not more sacrifice. It is the refusal to participate in the logic of sacrifice. 
It is the recognition that both the acceleration and the deceleration are reactions to the same false object, and that the real work, the work that will actually determine whether AI makes life better or worse for most people, has always been somewhere else. Quieter. Less dramatic. Harder to see.</p><p>There is another game. It is positive-sum. It compounds.</p><p>That is where the optimism lives.</p>]]></content:encoded></item><item><title><![CDATA[The Intelligence Factory Has No Workers]]></title><description><![CDATA[Morgan Stanley built the bull case. Citrini built the bear case. Both missed the same variable.]]></description><link>https://www.ap.xyz/p/the-intelligence-factory-has-no-workers</link><guid isPermaLink="false">https://www.ap.xyz/p/the-intelligence-factory-has-no-workers</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sat, 14 Mar 2026 21:06:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/37dc9a0f-5a55-46f0-a8aa-6236e95ad5d6_2752x1361.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Morgan Stanley just published the most important research note of 2026. Fortune led with Elon Musk. The market led with the power grid. Both missed the point.</p><p>The report, led by Stephen Byrd&#8217;s thematic research team, makes a straightforward claim. The five major American AI labs are applying roughly 10x the compute to their next training runs. If scaling laws hold, models get about twice as capable. The curve steepens from here.</p><p>That part is not new. Anyone paying attention already knew.</p><p>The part that matters happened at the TMT Conference surrounding the report. Four hundred companies. Twenty-six trillion in market cap. Two thousand capital allocators managing seventy trillion. And the single most common investor question, according to Morgan Stanley analyst Adam Jonas, was not about capabilities.</p><p>It was about jobs.</p><h2>The Conference Proved the Thesis</h2><p>I wrote in February that the enterprise stack is expanding, not contracting. Humans + Software + Digital Workforces. Three layers, not two. Digital workforces don&#8217;t replace software. They consume more of it than humans ever did.</p><p>The TMT Conference validated this in real time.</p><p>Jensen Huang: &#8220;There&#8217;s this notion that the software industry is in decline, and will be replaced by AI. It is the most illogical thing in the world.&#8221; He&#8217;s making the same architectural observation. The stack is getting bigger. Not smaller.</p><p>Satya Nadella, same stage, different angle. He told the audience something revealing. People are &#8220;rediscovering some of the oldest things we had. CLIs and IDEs and Excel plug-ins.&#8221; The most powerful AI models in history are being accessed through command lines and spreadsheet extensions.</p><p>This is the deployment gap in a single image. Expert-level intelligence. 1970s interface.</p><p>Morgan Stanley&#8217;s own analysts described the entire industry as &#8220;compute-constrained.&#8221; But the real constraint is not compute. It&#8217;s the distance between a model that can do the work and an organization that actually lets it.</p><h2>The Citrini Sequel</h2><p>Two weeks before the TMT Conference, Citrini Research published &#8220;The 2028 Global Intelligence Crisis.&#8221; Sixteen million views on X. IBM dropped 13%. Michael Burry amplified it. I wrote a response at the time. The core argument: Citrini models a light switch. 
Reality is a dimmer.</p><p>Morgan Stanley validated the Citrini premise. Their own survey found a 4% net workforce reduction across 1,000 executives in five countries. Directly attributable to AI. Alex Imas at Chicago confirmed the macro data is now showing AI productivity gains. Jason Furman at Harvard agrees. The displacement is real. It is no longer theoretical.</p><p>But the TMT Conference also revealed the same gap I flagged in February. Multiple executives described clinical AI-driven efficiencies that led to significant reductions in force. And in the same breath, admitted their organizations are still stuck in pilot mode.</p><p>Cut people. Can&#8217;t deploy the replacement. That&#8217;s the gap.</p><p>Morgan Stanley noted it too. They agreed with Citrini&#8217;s central plank: &#8220;transformative AI&#8221; will drive deflation. But they also said they were &#8220;continually surprised at how quickly, and violently, this prediction has become a key investor debate.&#8221; The debate is real. The deployment is not keeping up.</p><p>This is Ghost GDP in a different form. Not the Citrini version where machines produce output that never reaches consumers. The operational version. Where companies buy the intelligence but can&#8217;t wire it into the workflow where work actually happens.</p><h2>The Three Layers, Revisited</h2><p>In the Expanding Enterprise Stack series, I argued value in the AI economy flows through three layers:</p><p><strong>Layer 1: Intelligence.</strong> The labs. OpenAI, Anthropic, Google, Meta, xAI. They produce raw capability. Measured in benchmarks, funded by billions, constrained by power and chips. Morgan Stanley&#8217;s &#8220;Intelligence Factory&#8221; thesis lives here. So does the scaling laws debate.</p><p><strong>Layer 2: Infrastructure.</strong> Data centers, GPUs, power, cooling. Morgan Stanley described a &#8220;15-15-15&#8221; dynamic: fifteen-year leases, fifteen percent yields, fifteen dollars per watt. Byrd called access to a transformer and a turbine &#8220;the new competitive moat.&#8221; Nvidia, the hyperscalers, and the energy companies live here.</p><p><strong>Layer 3: Deployment.</strong> Where intelligence meets work. Where a model gets connected to the system, the data, the compliance framework, and the human process that lets it actually execute. Repeatedly. Reliably. At scale.</p><p>Layer 1 has hundreds of billions in funding. Layer 2 has trillions in committed capital. Layer 3 has almost nothing.</p><p>This is the gap. Not a compute gap. Not a capability gap. A deployment gap.</p><p>GPT-5.4 scored 83% on GDPVal. Expert-level performance on economically valuable tasks. Impressive on paper. Meaningless in practice if nobody can wire it into the workflow where the work happens.</p><h2>What the Conference Missed</h2><p>The TMT Conference surfaced a fourth point that nobody connected to the other three. Companies like Box and IBM are positioning themselves as the foundational &#8220;memory&#8221; for AI systems. Okta executives emphasized that managing the credentials of AI agents is the new front line of security. Boards are replacing sales-oriented CEOs with product-oriented leaders who understand the architecture.</p><p>This is the infrastructure of Layer 3 starting to emerge. Memory. Identity. Orchestration. The connective tissue between intelligence and work.</p><p>I wrote about this in The Missing Layer. Organizational memory is the moat. Not the model. The model is a commodity. 
The memory of how your firm actually operates, the compliance rules, the client preferences, the system configurations, the process exceptions, that is the asset that makes deployment possible.</p><p>Every firm I work with has the same experience. The AI works on the demo. It fails on the edge case that only a three-year employee would know. The edge case isn&#8217;t a bug. It&#8217;s organizational memory that nobody wrote down.</p><p>The factory is built. The intelligence is produced. But without the memory layer, there are no workers on the floor.</p><h2>The Bottleneck Moved</h2><p>In Part 3 of the Expanding Enterprise Stack, I argued the bottleneck is shifting from execution to judgment. Mike Molinet said it best in response to a developer who ran 12 parallel AI agents on a codebase refactor in one hour: &#8220;You&#8217;re still doing six months of thinking. Just not six months of typing.&#8221;</p><p>Morgan Stanley confirmed this at the macro level. The 4% workforce reduction they documented is real. But the executives doing the cutting are simultaneously admitting they can&#8217;t fully deploy the AI that justified the cuts. The organizations are shedding execution capacity while the judgment layer hasn&#8217;t been built yet.</p><p>This is the transition risk that both the Citrini bears and the Morgan Stanley bulls undercount. Not &#8220;will AI replace workers?&#8221; It will. Not &#8220;will the economy absorb the shock?&#8221; Over time, it always does. The risk is the gap in between. The period where companies have cut the humans but haven&#8217;t built the deployment layer to replace them.</p><p>Whoever closes that gap captures the value of the entire intelligence revolution.</p><h2>The Real Coin of the Realm</h2><p>Morgan Stanley says the coin of the realm is becoming pure intelligence, forged by compute and power. Half right.</p><p>Intelligence is necessary. But intelligence without deployment is a benchmark score.</p><p>The real coin of the realm is deployed intelligence. Not models that can do the work. Systems that do the work. Organizations that have actually made the change. The memory, the integrations, the compliance frameworks, the human oversight protocols that turn raw intelligence into reliable execution.</p><p>The factory is built. The power is on. Now someone has to hire the workers, train them on the floor layout, and connect them to the machines.</p><p>That is not an intelligence problem. It is an organizational problem. And it is the largest unsolved problem in the AI economy.</p><div><hr></div><p><em>This post builds on <a href="https://ap.xyz/p/the-expanding-enterprise-stack-humans">The Expanding Enterprise Stack</a>, <a href="https://ap.xyz/p/the-forecast-software-digital-workforces">The Forecast</a>, <a href="https://ap.xyz/p/the-skills-that-will-matter">The Skills That Will Matter</a>, <a href="https://ap.xyz/p/the-missing-variable">The Missing Variable</a>, and <a href="https://ap.xyz/p/the-missing-layer-why-data-quality">The Missing Layer</a>.</em></p><p><em>I&#8217;m Andrei Pop, founder and CEO of <a href="https://humanitylabs.ai">Humanity Labs</a>. We build and deploy digital workforces for wealth management firms. If you want to talk about the deployment gap, reply to this or find me on <a href="https://linkedin.com/in/andreimpop">LinkedIn</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[The Trillion Dollar Question]]></title><description><![CDATA[The 12 most valuable companies on earth sell infrastructure. 
The next one will sell the output.]]></description><link>https://www.ap.xyz/p/the-trillion-dollar-question</link><guid isPermaLink="false">https://www.ap.xyz/p/the-trillion-dollar-question</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sun, 08 Mar 2026 22:51:19 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e3ed7ce4-c794-469e-a5d2-70bf6e1c7f3f_2752x1340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sequoia just published one of the most important pieces on AI this year. The thesis is simple. The next trillion dollar company won&#8217;t sell software. It will sell the work.</p><p>I agree. But I think they&#8217;re underselling the shift.</p><p>Let me explain why by looking at the companies that already crossed the trillion dollar line. And what they tell us about what comes next.</p><h2>The Leaderboard</h2><p>Here are the most valuable companies on earth as of March 2026:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HJa4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HJa4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 424w, https://substackcdn.com/image/fetch/$s_!HJa4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 848w, https://substackcdn.com/image/fetch/$s_!HJa4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 1272w, https://substackcdn.com/image/fetch/$s_!HJa4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HJa4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png" width="1456" height="766" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:766,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:190272,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.ap.xyz/i/190329673?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!HJa4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 424w, https://substackcdn.com/image/fetch/$s_!HJa4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 848w, https://substackcdn.com/image/fetch/$s_!HJa4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 1272w, https://substackcdn.com/image/fetch/$s_!HJa4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0d9fb71-2e21-4e31-8100-d8e6478b55e8_1980x1042.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Twelve companies around or above a trillion. Eight are tech. Four are semiconductors or semiconductor-adjacent. The pattern is obvious. The market is betting on the infrastructure layer of the intelligence age.</p><p>But here is the question nobody is asking loudly enough:</p><p><strong>What happens when the infrastructure layer is built?</strong></p><h2>Software Ate the World. Labor is the World.</h2><p>There is a reason Sequoia&#8217;s piece matters. They put a number on the ratio that changes everything.</p><p>For every $1 spent on software, $6 is spent on services.</p><p>Let that sit.</p><p>Global IT spending in 2026 is projected at $6.15 trillion. Software is about $1.4 trillion of that. The global professional services market alone is over $6 trillion. US wages totaled $11.7 trillion in 2024. Globally, labor compensation runs somewhere north of $40 trillion annually.</p><p>The software market is a rounding error compared to the labor market.</p><p>Every SaaS company on earth is fighting over the $1. The $6 is wide open. And AI is the first technology in history capable of going after it directly.</p><h2>Intelligence vs. Judgement</h2><p>Sequoia&#8217;s framework is useful here. 
<h2>Intelligence vs. Judgement</h2><p>Sequoia&#8217;s framework is useful here. They split work into two categories.</p><p><strong>Intelligence</strong> is rule-based complexity. Translating a spec into code. Processing an insurance claim. Coding a medical bill. The rules are hard but they are rules. AI is already doing this autonomously.</p><p><strong>Judgement</strong> is pattern recognition built on years of experience. Deciding which feature to build. Knowing when a client relationship is at risk. Reading the room in a negotiation. AI is not there yet. But the frontier is shifting.</p><p>The key insight: every profession has a ratio of intelligence to judgement. The higher the intelligence ratio, the sooner AI replaces the worker entirely.</p><p>Software engineering got there first. Over half of all AI tool usage across professions is in software engineering. That is not because engineers love tools. It is because writing code is mostly intelligence work.</p><p>Now look at the rest of the economy through this lens:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!st7z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0a00baf-b5ca-468f-94fe-c8b5131ea2ea_1980x1007.png" width="1456" height="741" alt=""></figure></div>
srcset="https://substackcdn.com/image/fetch/$s_!st7z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0a00baf-b5ca-468f-94fe-c8b5131ea2ea_1980x1007.png 424w, https://substackcdn.com/image/fetch/$s_!st7z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0a00baf-b5ca-468f-94fe-c8b5131ea2ea_1980x1007.png 848w, https://substackcdn.com/image/fetch/$s_!st7z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0a00baf-b5ca-468f-94fe-c8b5131ea2ea_1980x1007.png 1272w, https://substackcdn.com/image/fetch/$s_!st7z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0a00baf-b5ca-468f-94fe-c8b5131ea2ea_1980x1007.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Add those up. That is over $1 trillion in addressable labor spend in just ten categories. And that is only the outsourced slice.</p><h2>What the Top 12 Tell Us</h2><p>Go back to the leaderboard. Every company above a trillion earned its position by capturing a fundamental economic function.</p><p>NVIDIA captured compute. Apple captured the consumer device. Alphabet captured attention. Microsoft captured the enterprise desktop. Amazon captured commerce and cloud. TSMC captured fabrication. Walmart captured retail distribution. JPMorgan captured financial intermediation.</p><p>Each of these is a platform that sits between supply and demand for something essential. They did not just build tools. They became the infrastructure through which economic activity flows.</p><p>Now apply this to labor.</p><p>The companies on this list sell picks and shovels. They sell the infrastructure. They sell the tools. None of them sell the work itself. Not yet.</p><p>Microsoft sells Office. It does not close your books. Alphabet sells ads. It does not process your insurance claims. Amazon sells cloud. It does not handle your HR operations.</p><p>That gap is the opportunity Sequoia is pointing at.</p><h2>Copilots vs. 
Autopilots</h2><p>Sequoia draws a clean line here.</p><p>A <strong>copilot</strong> sells the tool. It makes the professional more productive. Harvey for lawyers. Rogo for investment bankers. The human stays in the loop. The tool captures the software budget.</p><p>An <strong>autopilot</strong> sells the work. It replaces the professional for a specific task. The customer buys the outcome directly. The autopilot captures the labor budget.</p><p>The copilot approach was right when models lacked intelligence. You needed a human to provide judgement. The tool just accelerated their work.</p><p>But models are now intelligent enough that for high-intelligence tasks, the human in the loop is the bottleneck.</p><p>Copilots compete with other tools. Autopilots compete with headcount.</p><p>The tool budget is a $1.4 trillion market. The labor budget is a $40+ trillion market. The math is not close.</p><h2>The Outsourcing Wedge</h2><p>Here is the playbook Sequoia outlines, and this is where I think they are exactly right.</p><p>Start where outsourcing already exists. If a company already outsources a function, three things are true:</p><ol><li><p>They accept the work can be done externally</p></li><li><p>There is a budget line that can be swapped cleanly</p></li><li><p>The buyer is already purchasing an outcome</p></li></ol><p>Replacing an outsourcing contract with an AI-native service provider is a vendor swap. Replacing internal headcount is a reorg. One is a procurement decision. The other is a political crisis.</p><p>This is exactly what we see at Humanity Labs. When we walk into a wealth management firm and offer to handle their account transfers, data reconciliation, or client onboarding with digital workers, the first wins come from tasks they already outsource or tasks that simply are not getting done because nobody has the bandwidth.</p><p>The tasks nobody has time for are the real wedge. There is no budget line to displace. No incumbent to fight. Just found capacity.</p><h2>What a Trillion Dollar Labor Company Looks Like</h2><p>So what does the next trillion dollar company actually look like?</p><p>It will not look like Salesforce. It will not look like ServiceNow. It will not even look like Accenture.</p><p>It will look like a company that:</p><ol><li><p><strong>Sells outcomes, not seats.</strong> Pricing is per task, per FTE equivalent, or per outcome delivered. Not per user per month.</p></li><li><p><strong>Compounds on data.</strong> Every task completed makes the system smarter. Every edge case resolved expands the frontier of what the system can handle autonomously.</p></li><li><p><strong>Starts in outsourced, intelligence-heavy work.</strong> The wedge is specific, measurable, and already budgeted.</p></li><li><p><strong>Expands into insourced, judgement-heavy work.</strong> As the system learns what good judgement looks like in a domain, it moves from doing the intelligence work to doing the full job.</p></li><li><p><strong>Operates as a service company with software economics.</strong> Gross margins north of 80% because AI does the work. But the customer experience feels like hiring a team.</p></li></ol><p>The business model is not SaaS. It is not traditional services. It is something new. It is digital labor sold as a managed service.</p>
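<p>Point 1 is worth making concrete. A minimal sketch comparing seat pricing and outcome pricing on the same workload, with every number an illustrative assumption rather than a benchmark:</p><pre><code># Illustrative only: per-seat vs. per-outcome pricing on one workload.
# All inputs are assumptions; real contracts vary widely.
seats = 40                  # humans previously doing the work
seat_price = 125 * 12       # $125/user/month SaaS tool, annualized
tasks_per_year = 250_000    # tasks the workload actually contains
price_per_task = 2.50       # outcome price per completed task

saas_revenue = seats * seat_price
outcome_revenue = tasks_per_year * price_per_task

print(f"per-seat revenue: ${saas_revenue:,.0f}")        # $60,000
print(f"per-outcome revenue: ${outcome_revenue:,.0f}")  # $625,000
</code></pre><p>The seat model is capped by headcount. The outcome model scales with the volume of work itself, which is why it addresses the labor budget instead of the tool budget.</p>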
<h2>The Convergence</h2><p>Today&#8217;s copilots will try to become autopilots. Sequoia says it explicitly. But they also note the innovator&#8217;s dilemma: if you sell the tool to the professional, becoming an autopilot means cutting your own customer out of the equation.</p><p>Harvey sells to law firms. To become an autopilot, Harvey would need to sell directly to the company that needs the NDA drafted, bypassing outside counsel entirely. That is not a product pivot. That is a go-to-market inversion.</p><p>The companies that start as autopilots do not face this problem. They are building the muscle to deliver outcomes from day one. They are accumulating the data on what good work looks like in their domain. They are building trust with the end buyer, not the intermediary.</p><p>This is why pure-play autopilots have a structural advantage.</p><h2>What This Means</h2><p>The biggest companies in the world got there by becoming essential infrastructure for economic activity. The next wave will get there by becoming essential infrastructure for economic output.</p><p>Not tools that help people work. Systems that do the work.</p><p>The $6 trillion professional services market is the first addressable target. The $40+ trillion global labor market is the endgame.</p><p>The twelve companies above a trillion today built the roads. The next one will drive the trucks.</p><div><hr></div><p><em>If you&#8217;re building in this space or thinking about the digital workforce shift, I write about it regularly at <a href="https://ap.xyz/">ap.xyz</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[Selling Dollars for 85 Cents: The AI Revenue Illusion]]></title><description><![CDATA[The token economy isn't 1999. It's something new.]]></description><link>https://www.ap.xyz/p/selling-dollars-for-85-cents-the</link><guid isPermaLink="false">https://www.ap.xyz/p/selling-dollars-for-85-cents-the</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sat, 21 Feb 2026 18:24:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8f391469-44b9-4a1f-bcd9-ac56aae66b83_2752x1358.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Anthropic hit $14 billion in annualized revenue this month. OpenAI ended 2025 at $20 billion. The gap is closing fast. Anthropic grew from $1 billion to $14 billion in 14 months. No enterprise technology company in recorded history has compounded at this rate at this scale.</p><p>But revenue is the wrong scoreboard.</p><p>Bill Gurley made this point in a short post this week: these companies both buy and sell tokens. Revenue comparisons without gross margin context are meaningless. In 1999 we called this &#8220;selling dollars for 85 cents.&#8221;</p><p>He&#8217;s directionally right. And the full picture is more interesting than that one-liner suggests. Because when you look at the actual economics, the story isn&#8217;t &#8220;this is 1999 all over again.&#8221; The story is &#8220;this is a new kind of business that&#8217;s figuring out its margin structure in real time, and the trendlines are encouraging.&#8221;</p><h2>How the Token Economy Actually Works</h2><p>The business equation behind every AI company is simpler than it looks.</p><p>An AI company rents GPUs. It uses those GPUs to run a model. Every time a user sends a query, the model reads tokens (input) and generates tokens (output). The company charges per token.</p><p>That&#8217;s the whole business. Buy compute. Sell tokens.</p>
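<p>That equation fits in a few lines. A minimal sketch of the unit economics, with every number an illustrative assumption (the token prices mirror the Sonnet list prices quoted below; the compute cost is invented to land near the margins discussed below):</p><pre><code># The token business in miniature. All inputs are illustrative
# assumptions, not disclosed figures.
price_in = 3.00      # $ per million input tokens
price_out = 6.00     # $ per million output tokens
compute_cost = 2.40  # assumed blended $ of rented GPU time per million tokens

tokens_in = 800      # millions of input tokens served in a period
tokens_out = 400     # millions of output tokens generated

revenue = tokens_in * price_in + tokens_out * price_out
cost = (tokens_in + tokens_out) * compute_cost
margin = (revenue - cost) / revenue
print(f"revenue ${revenue:,.0f} | compute ${cost:,.0f} | gross margin {margin:.0%}")
# revenue $4,800 | compute $2,880 | gross margin 40%
</code></pre><p>Everything that follows is an argument about which of these variables moves, and how fast.</p>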
<p>Here&#8217;s where it gets interesting. The cost side has three layers:</p><p><strong>Layer 1: Training.</strong> Before you can sell a single token, you have to build the model. Anthropic spent roughly $4.1 billion on training in 2025. OpenAI spent $9 billion. These are sunk costs. You spend them before you earn a dollar. And they&#8217;re rising. Each generation of frontier model costs more than the last.</p><p><strong>Layer 2: Inference.</strong> This is the cost of actually running the model for customers. Every query burns GPU-hours. Anthropic rents servers from Google and Amazon. OpenAI rents from Microsoft. This is your true cost of goods sold. Anthropic&#8217;s inference costs came in 23% higher than projected in 2025.</p><p><strong>Layer 3: Everything else.</strong> Salaries, sales, marketing, office space. OpenAI spent an estimated $2.2 billion on sales and marketing and $1.2 billion on staff compensation for inference operations alone in the second half of 2025.</p><p>The revenue side is equally straightforward:</p><p><strong>API revenue</strong> (the majority for both companies): charge per million tokens. Claude Sonnet runs $3 per million input tokens, $6 per million output. A single complex coding session can burn 5,000 to 20,000 tokens. A heavy enterprise customer processes millions of tokens per day.</p><p><strong>Subscription revenue:</strong> $20/month for ChatGPT Plus, $20/month for Claude Pro. The margin here depends entirely on how much each user consumes. Light users are profitable. Power users cost more to serve than they pay.</p><p>Now do the math.</p><p>Anthropic&#8217;s gross margin in 2025 is roughly 40%. That means for every dollar of token revenue, 60 cents goes straight back to Google and Amazon for the compute that generated those tokens. The $14 billion in revenue becomes $5.6 billion in gross profit.</p><p>OpenAI&#8217;s gross margin is roughly 48%. Its $20 billion becomes $9.6 billion in gross profit.</p><p>The revenue gap looks like $6 billion. The gross profit gap is $4 billion. Same direction, different magnitude. Neither company is profitable yet once you add training costs, R&amp;D, and operating expenses.</p><p>This is the core tension. The top line is extraordinary. The bottom line doesn&#8217;t exist yet. But &#8220;yet&#8221; is doing real work in that sentence.</p><h2>Why the 1999 Comparison Is Too Cynical</h2><p>The dot-com analogy is useful shorthand. In 1999, companies like Buy.com and Kozmo.com grew revenue by selling products below cost. Revenue went up. Value went down. Investors confused the two until they didn&#8217;t.</p><p>The AI token economy has surface-level similarities. Companies buy compute and sell tokens. At the application layer, Cursor reportedly paid $650 million to Anthropic while generating $500 million in revenue. The whole stack has moments where someone is hoping the layer below gets cheaper before the money runs out.</p><p>But here&#8217;s why the cynical reading is probably wrong.</p><p><strong>The cost curve is real and steep.</strong> This is the single most important difference from 1999. Buy.com had no credible path to improving unit economics. Shipping physical goods below cost doesn&#8217;t get cheaper at scale.</p><p>AI inference does. And it&#8217;s happening fast.</p><p>OpenAI&#8217;s compute margin went from 35% in January 2024 to 70% by October 2025. That&#8217;s a doubling in less than two years.</p>
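<p>It is worth seeing what that cost curve does to the margin math. A quick sketch holding price fixed while the assumed cost per token falls (illustrative numbers, not either company&#8217;s actuals):</p><pre><code># Gross margin at a fixed blended price as inference cost falls.
# All numbers are illustrative assumptions.
blended_price = 4.00  # $ per million tokens, held constant

# Assumed cost per million tokens falling roughly 10x over a few years
for year, cost in [(2024, 2.60), (2025, 1.30), (2026, 0.65), (2027, 0.26)]:
    margin = (blended_price - cost) / blended_price
    print(f"{year}: cost ${cost:.2f}/M tokens -> gross margin {margin:.0%}")
# 2024: 35% ... 2025: 68% ... 2026: 84% ... 2027: 94%
</code></pre><p>Same price, falling cost. The margin expansion does not require charging more. It only requires serving cheaper.</p>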
<p>Token costs have dropped by orders of magnitude. Custom silicon is arriving: Anthropic signed a $21 billion TPU deal with Google that should cut per-token costs dramatically. OpenAI is building its own inference chip. Smaller, distilled models already handle the majority of workloads at a fraction of frontier cost.</p><p>Anthropic projects gross margins above 60% by 2027 and 77% by 2028. Those aren&#8217;t fantasy numbers. They&#8217;re extrapolations from a cost curve that has been declining faster than almost anyone predicted. When inference costs drop 10x in two years, the margin math changes completely.</p><p>This is not the same as hoping shipping costs will magically drop. This is Moore&#8217;s Law with a tailwind.</p><p><strong>The subsidy is strategic, not structural.</strong> The &#8220;below cost&#8221; framing may apply to frontier reasoning models and agentic workflows that burn 10-100x more tokens per task. But the workhorse models (Sonnet, GPT-4o) run at healthy margins. OpenAI&#8217;s API gross margins were estimated at 75% for GPT-4o in mid-2024.</p><p>The blended picture isn&#8217;t &#8220;selling dollars for 85 cents.&#8221; It&#8217;s more like investing heavily in frontier R&amp;D while generating real profit on the volume models that carry most of the traffic. That&#8217;s a portfolio with a loss leader, not a broken business model. Amazon sold books below cost to build a customer base that eventually bought everything. The playbook has precedent.</p><p><strong>The demand signal is unlike anything in 1999.</strong> Buy.com customers had zero switching costs. Next purchase, cheapest site wins.</p><p>AI infrastructure is different. Developers build on specific APIs. They fine-tune models. They construct evaluation frameworks and prompt libraries. Enterprise customers sign multi-year contracts. Anthropic reports 80% of revenue from enterprise, with 500+ customers spending over $1 million annually. Eight of the Fortune 10 are Claude customers.</p><p>This is not manufactured demand sustained by subsidies. This is enterprise adoption at a pace that typically takes a decade, compressed into months. Companies are paying because the value is real. When a coding assistant saves an engineering team 30% of their time, the ROI on token spend is obvious.</p><p><strong>Revenue per customer is expanding, not contracting.</strong> In token-based models, expansion doesn&#8217;t come from selling more seats. It comes from customers building bigger applications and consuming more tokens. One customer&#8217;s successful product launch can 10x their token usage overnight. That&#8217;s a fundamentally different dynamic than the dot-com era, where growth required constantly acquiring new money-losing customers.</p><h2>The Honest Risk</h2><p>None of this means the concerns are baseless. There&#8217;s a real tension:</p><p>Anthropic is valued at $380 billion. At 40% gross margins, that&#8217;s 68x gross profit. For that to make sense, you need revenue to keep growing, margins to expand to 60-70%, and R&amp;D spending to not scale linearly with capability.</p><p>Each is plausible. None is certain. And the models customers want most (reasoning, deep research, coding agents) are the ones with the worst unit economics today.</p><p>But &#8220;the most expensive products have the worst margins&#8221; is true of almost every technology in its early phase. Early cloud computing was expensive. Early mobile data was expensive. The pattern is consistent: costs come down, margins expand, and the companies that invested through the expensive phase end up owning the market.</p>
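<p>You can sanity-check what that multiple implies. A rough sketch: the 2025 figures are from above, the margin path is Anthropic&#8217;s stated projection, and the 2028 revenue is my own placeholder assumption, not a forecast:</p><pre><code># What the multiple looks like if the projections land.
valuation = 380e9                      # Anthropic valuation from above

rev_2025, margin_2025 = 14e9, 0.40     # figures cited in the post
gp_2025 = rev_2025 * margin_2025
print(f"2025: {valuation / gp_2025:.0f}x gross profit")   # 68x

rev_2028, margin_2028 = 40e9, 0.77     # revenue ASSUMED for illustration;
gp_2028 = rev_2028 * margin_2028       # margin is the stated 2028 projection
print(f"2028: {valuation / gp_2028:.0f}x gross profit")   # 12x
</code></pre><p>If the margin curve and the growth both hold, today&#8217;s multiple compresses to something ordinary. If either stalls, it doesn&#8217;t. That is the whole debate in two numbers.</p>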
<p>Dario Amodei told Fortune that a twelve-month delay in AI progress would make him bankrupt. That sounds alarming in isolation. In context, it&#8217;s the same thing Jeff Bezos was saying in 2001. When you&#8217;re investing ahead of a cost curve you believe in, the risk of stopping is greater than the risk of continuing.</p><h2>What This Means If You&#8217;re Building</h2><p>If you&#8217;re building on top of these models, the margin question applies to you with even more force. Your cost of goods sold is someone else&#8217;s price card.</p><p>But the opportunity is also enormous. The companies that will build the most durable businesses are the ones that decouple their economics from raw token throughput. That means workflow depth, proprietary data advantages, intelligent model routing (cheap models for 80% of tasks, frontier for the 20% that matter), and enough product surface area that the token cost becomes a minority of the value delivered.</p><p>The fact that Cursor crossed $1 billion in revenue and launched its own models isn&#8217;t a cautionary tale. It&#8217;s proof that the application layer can work if you build enough depth to control your own economics.</p><h2>The Bottom Line</h2><p>The revenue growth at Anthropic and OpenAI is real and unprecedented. $1 billion to $14 billion in 14 months. Nothing in enterprise tech has done that.</p><p>Revenue in a token economy is not the same as revenue in a software economy. When your marginal cost of delivery is 50-60 cents on the dollar instead of 5 cents, the comparison to traditional SaaS breaks down. Gurley is right that gross profit is the better scoreboard.</p><p>But the trajectory of that scoreboard matters more than today&#8217;s snapshot. And the trajectory, for the first time in the history of AI companies, is pointing in the right direction. Compute margins are doubling. Custom silicon is arriving. Enterprise demand is accelerating. The cost curve that needs to break is breaking.</p><p>The question isn&#8217;t whether AI companies can build real margin structures. The evidence increasingly says they can. The question is how fast, and whether the current valuations have priced in the timeline correctly.</p><p>That&#8217;s a valuation debate, not an existential one. And there&#8217;s a world of difference between the two.</p>]]></content:encoded></item><item><title><![CDATA[The Expanding Enterprise Stack, Part 3: The Skills That Will Matter]]></title><description><![CDATA[What Changes Inside Companies When AI Does the Work]]></description><link>https://www.ap.xyz/p/the-expanding-enterprise-stack-part-0e7</link><guid isPermaLink="false">https://www.ap.xyz/p/the-expanding-enterprise-stack-part-0e7</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Wed, 18 Feb 2026 14:26:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/635a6445-edb5-4dd2-8a95-44cf539927a8_2752x1341.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><a href="https://ap.xyz/p/the-expanding-enterprise-stack-humans">Part 1</a> argued the enterprise stack is expanding, not collapsing. <a href="https://www.ap.xyz/p/the-expanding-enterprise-stack-part">Part 2</a> forecast how software and digital workforces will coexist. This post is about what changes inside the companies that adopt them.</em></p><div><hr></div><p>I saw a post on X this morning. 
A developer ran 12 parallel AI agents on a codebase refactor. The work took 1 hour, 1 minute, and 40 seconds. It cost $7. He estimated the equivalent human effort at 6 months.</p><p>The top reply was the real story: &#8220;The 6 months wasn&#8217;t &#8216;saved.&#8217; It was compressed into your architecture decisions. You&#8217;re still doing 6 months of thinking. Just not 6 months of typing. The bottleneck moved from execution to judgment.&#8221;</p><p>Read that again. The bottleneck moved from execution to judgment.</p><p>This is not a productivity story. It is a skills story. And if you run or govern an enterprise, it is the most important shift you need to understand right now.</p><h2>The Inversion</h2><p>For decades, enterprises have valued execution speed. Ship faster. Type faster. Process faster. Headcount was a proxy for capacity. More people meant more output.</p><p>Digital workforces invert this. When execution becomes near-free, the constraint moves upstream. The scarce resource is no longer the hands that do the work. It is the mind that directs it.</p><p>The developer said it himself in a follow-up: &#8220;The skill bar here is ultra high. My brain power is almost exhausted because all simple tasks are done by AI, and the most complex job is still on me.&#8221;</p><p>Another reply nailed the implication: &#8220;The skill required shifts toward architecture, planning, and systems thinking. I think most people are underestimating the cultural impact of this shift.&#8221;</p><p>In Part 1, I argued that digital workforces don&#8217;t replace software. They consume it. The same logic applies to people. Digital workforces don&#8217;t replace your best people. They expose who your best people actually are.</p><h2>What Changes for Enterprises</h2><p>In Part 2, I laid out how software companies and digital workforce providers will coexist and where each should invest. Now the question turns inward. If your company adopts digital workforces at scale, what has to change about your people, your culture, and your governance?</p><p>Three things.</p><p><strong>1. The value of judgment goes vertical.</strong></p><p>When a team of AI agents can execute a 6-month refactor in an hour, the quality of the instructions matters more than the speed of the typist. This means your highest-value employees are the ones who can define problems precisely, anticipate edge cases, and make architectural decisions under uncertainty.</p><p>This is not new in theory. Every CEO says they value strategic thinking. But in practice, most enterprises still reward volume. Number of tickets closed. Lines of code shipped. Calls made. Reports filed.</p><p>Digital workforces make volume metrics meaningless. If an agent can close 500 tickets a day, the person who figured out which 500 tickets actually matter is your MVP.</p><p><strong>Action for CEOs:</strong> Audit your performance metrics. If more than half of them measure output volume, you are optimizing for a world that is ending. Replace volume metrics with judgment metrics: decision quality, problem framing accuracy, architectural soundness.</p><p><strong>Action for boards:</strong> Ask management what percentage of the workforce is evaluated on execution speed versus decision quality. If leadership cannot answer this question clearly, that is a red flag.</p><p><strong>2. Organizational memory becomes a competitive moat.</strong></p><p>In Part 1, I described how digital workforces sit on top of existing software and act as a new layer in the stack. 
One thing I did not emphasize enough: the layer only works if it has context.</p><p>An AI agent running 12 parallel threads on a codebase refactor works because someone made the architecture decisions upfront. Someone encoded the context. Someone defined the constraints. In an enterprise setting, that &#8220;someone&#8221; is usually a combination of institutional knowledge, process documentation, and tribal wisdom that lives in people&#8217;s heads.</p><p>The companies that win will be the ones that externalize this knowledge into systems their digital workforces can consume. Not just SOPs and wikis. Living, structured memory that evolves as the business evolves.</p><p>This is why the managed services model matters more than the software model for digital workforces. Software gives you a tool. A managed service gives you a team that builds and maintains the organizational memory layer over time. (Yes, this is what we do at Humanity Labs. I am biased. I also believe it is true.)</p><p><strong>Action for CEOs:</strong> Identify the top 10 decisions your teams make repeatedly. Ask whether those decisions are documented well enough for a new hire to make them on day one. If not, they are not documented well enough for a digital workforce either. Start there.</p><p><strong>Action for boards:</strong> Add &#8220;organizational knowledge capture&#8221; to your technology governance framework. Treat it like you treat data governance. It is that important.</p><p><strong>3. The skill floor rises and the skill ceiling disappears.</strong></p><p>Here is the part most people get wrong. They assume AI makes work easier. It does not. It makes simple work disappear and makes the remaining work harder.</p><p>That developer&#8217;s exhaustion is not an anomaly. It is the new normal. When AI handles every routine task, every task left on your desk is a hard one. There is no easy win to warm up with. No quick ticket to build momentum. You go straight to the complex problems all day, every day.</p><p>This has massive implications for talent strategy. Junior roles that were designed as training grounds (data entry, basic analysis, first-pass reviews) are the first to be automated. But those roles existed for a reason. They were how people learned the business. They were the on-ramp.</p><p>If you automate the on-ramp without building a new one, you end up with a bimodal workforce: senior people who are exhausted and junior people who never develop. Neither outcome is sustainable.</p><p><strong>Action for CEOs:</strong> Redesign your entry-level roles now. Not after digital workforces are fully deployed. The new junior role is not &#8220;do the simple version of what seniors do.&#8221; It is &#8220;learn to direct and evaluate AI output.&#8221; This is a fundamentally different skill set and it requires a fundamentally different training program.</p><p><strong>Action for boards:</strong> Ask management for their AI-era talent development plan. If the answer is &#8220;we&#8217;ll figure it out as we go,&#8221; push back. The companies that solve the junior talent pipeline first will have a structural advantage for the next decade.</p><h2>The Values Shift</h2><p>Skills can be trained. Values are harder. And the values that served enterprises well in the execution era will not serve them in the judgment era.</p><p><strong>From &#8220;move fast&#8221; to &#8220;think clearly.&#8221;</strong> Speed of execution was a virtue when execution was the bottleneck. 
When execution is instant, speed of thought without clarity of thought is just fast failure. The new virtue is precision of intent.</p><p><strong>From &#8220;do more&#8221; to &#8220;direct better.&#8221;</strong> Volume is no longer a signal of contribution. The person who runs 50 AI agents poorly creates more mess than the person who runs 5 well. Quality of direction matters more than quantity of output.</p><p><strong>From &#8220;know how&#8221; to &#8220;know why.&#8221;</strong> Procedural knowledge (how to do X) is exactly what AI agents excel at. Contextual knowledge (why we do X this way, what happens if we don&#8217;t, what changed last quarter that matters) is what they lack. The most valuable employees will be the ones who understand the why deeply enough to encode it for machines.</p><h2>The Punchline</h2><p>The enterprise stack is expanding. Software is not dying. Digital workforces are not replacing your people. But they are changing which people matter and why.</p><p>The bottleneck has moved from execution to judgment. The companies that recognize this and restructure their skills, metrics, and values accordingly will compound the advantage. The ones that keep measuring keystrokes will wonder why their AI investments are not paying off.</p><p>If you are a CEO: start with your performance metrics, your knowledge systems, and your junior talent pipeline. Those three things will determine whether digital workforces multiply your capacity or multiply your problems.</p><p>If you are a board member: ask the hard questions now. Not &#8220;are we using AI?&#8221; but &#8220;are we building the organizational muscle to direct AI well?&#8221; The answer will tell you more about the company&#8217;s future than any revenue forecast.</p><p>The future of enterprise is not fewer humans. It is humans doing harder, more valuable work. The question is whether your organization is ready for that.</p><div><hr></div><p><em>This is Part 3 of the Expanding Enterprise Stack series. <a href="https://ap.xyz/p/the-expanding-enterprise-stack-humans">Part 1</a> covers why the stack is growing. <a href="https://ap.xyz/p/the-forecast-software-digital-workforces">Part 2</a> covers how software and digital workforces coexist. Part 4 will cover the investment implications.</em></p>]]></content:encoded></item><item><title><![CDATA[Let Humans Do Human Work]]></title><description><![CDATA[On Redwoods, Category Errors, and the Work That Actually Matters]]></description><link>https://www.ap.xyz/p/let-humans-do-human-work</link><guid isPermaLink="false">https://www.ap.xyz/p/let-humans-do-human-work</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sun, 15 Feb 2026 17:44:46 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5ff35a0c-0688-48d5-a911-a79968f5ffe9_2578x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Eric Markowitz wrote a beautiful essay this week called &#8220;It Was Never About AI.&#8221; You should read it. He walks through the redwoods with a friend, meditates on the feedback loop between Wall Street and Silicon Valley, and arrives at a line that stopped me cold:</p><p><em>We are not our tools. We never have been.</em></p><p>He is right. 
And he is also, I think, missing half the picture.</p><p>Markowitz describes a world where a 26-year-old quant analyst writes a note, a stock drops, 3,000 people get a calendar invite from HR titled &#8220;Quick Chat.&#8221; He describes the founder in the fleece vest preaching about empowering humanity while building products designed to make humans unnecessary. He describes the religion of optimization.</p><p>I recognize that world. I live adjacent to it. I run an &#8220;AI&#8221; company. And I want to offer a different frame.</p><p>The problem is not that we have powerful tools. The problem is that we have confused which work belongs to humans and which work belongs to machines.</p><p>I spend my days inside wealth management firms. These are businesses built entirely on trust. A financial advisor&#8217;s job, at its core, is to sit across from another human being and help them make decisions about their life. About retirement. About their children&#8217;s education. About what happens to their money when they die. This is human work. It requires judgment, empathy, experience, and the kind of pattern recognition that only comes from having lived through a few market cycles and a few difficult conversations.</p><p>But here is what else happens inside those firms. Someone spends four hours reformatting a report. Someone else manually enters the same data into three different systems. An operations associate burns an entire afternoon chasing a custodian for a document that should have arrived automatically. A compliance officer reviews a hundred emails by hand for a regulatory audit.</p><p>This is not human work. This is machine work being done by humans. And it is destroying them.</p><p>Not in the dramatic, dystopian way that makes for good Substack essays. In the quiet, grinding, soul-draining way that turns talented people into button-pushers. In the way that makes a skilled advisor spend 60% of their week on tasks that have nothing to do with the reason they got into this profession. In the way that creates burnout not from thinking too hard, but from not being allowed to think at all.</p><p>Markowitz writes about a founder who looks at AI and says: &#8220;This is a tool, and I will decide how it serves us.&#8221; I agree with every word of that sentence. But I want to push on what &#8220;serves us&#8221; actually means.</p><p>Serving us does not mean keeping humans in jobs that were never meant for humans.</p><p>There is a strange nostalgia embedded in the anti-AI argument. A sense that any job, simply because a person currently does it, is therefore meaningful human work. That preserving the job is the same as preserving the dignity. I understand the impulse. But I think it gets the causality backwards.</p><p>The dignity is not in the task. The dignity is in the judgment.</p><p>A financial advisor who spends her day exercising judgment, building relationships, navigating complexity, earning trust. That is dignified work. The same advisor spending her evening copying data between spreadsheets because nobody built her a better system. That is not dignity. That is a failure of imagination.</p><p>There is an old idea in philosophy called the &#8220;category error.&#8221; It means confusing one kind of thing for another. Mistaking a description for an explanation. 
Treating a metaphor as a literal truth.</p><p>I think we are making a category error about work.</p><p>We have lumped all labor into one pile and called it &#8220;jobs.&#8221; Then when someone proposes automating part of that pile, we react as though they are proposing the elimination of human purpose itself. But purpose and process are not the same thing. The work that gives us meaning and the work that merely fills our hours often look nothing alike.</p><p>The question is not: should machines do work?</p><p>The question is: what work should only humans do? And once we answer that clearly, how do we free humans to do more of it?</p><p>Markowitz invokes the redwoods. I want to stay with that image for a moment, because I think he is onto something deeper than he realizes.</p><p>A redwood forest is not efficient. It is redundant, overlapping, slow. By every metric a management consultant would use, it is poorly optimized. And yet it has survived for millennia.</p><p>Why? Because it allocates resources according to their nature. Roots do root work. Bark does bark work. Mycorrhizal networks move nutrients where they are needed. Nothing in the forest is doing another organism&#8217;s job. The system works because each component does what it was designed to do.</p><p>Now look at the average enterprise. Highly educated professionals doing data entry. Creative minds trapped in compliance checklists. Relationship builders buried in operational overhead. This is not an ecosystem. It is a misallocation. And the answer is not to cut the humans. The answer is to stop wasting them.</p><p>I should be transparent about my bias. I build technology that removes operational work inside financial firms. I have skin in this game.</p><p>But I did not start this company because I wanted fewer humans in wealth management. I started it because I kept meeting brilliant advisors who were drowning in work that had nothing to do with their clients. They did not need fewer people. They needed their people freed up to do the work that actually matters.</p><p>The firms I admire most are not cutting headcount. They are redeploying capacity. They are taking the countless hours a week their teams spend on machine work and redirecting that time toward clients, toward strategy, toward the kind of deep thinking that no AI can replicate.</p><p>They are not replacing humans with machines. They are replacing machine work with machines, and giving humans back their humanity.</p><p>Markowitz ends his essay with a declaration: we are not our tools.</p><p>I want to add a corollary: we are not our tasks, either.</p><p>Your job title is not your identity. The processes you execute are not your purpose. The spreadsheet you maintain is not your legacy. You are the judgment you bring. The relationships you build. The trust you earn. The decisions you make when the data is ambiguous and the stakes are real and there is no prompt you can write that will tell you what to do.</p><p>That is human work. Everything else is overhead.</p><p>And for the first time in history, we have the technology to make that distinction real. Not to eliminate humans. To liberate them.</p><p>The only question is whether we are wise enough to use it that way.</p><p>I think we are. But only if we stop arguing about whether machines should do work, and start asking much harder questions about which work deserves a human in the first place.</p><p>The redwoods are patient. 
They will wait for us to figure it out.</p>]]></content:encoded></item><item><title><![CDATA[The Expanding Enterprise Stack, Part 2: The Forecast]]></title><description><![CDATA[Software isn't dying. It's getting a new customer.]]></description><link>https://www.ap.xyz/p/the-expanding-enterprise-stack-part</link><guid isPermaLink="false">https://www.ap.xyz/p/the-expanding-enterprise-stack-part</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Thu, 12 Feb 2026 15:50:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2fe01fde-dc90-4f0b-ab6c-dd9705a19f1d_2432x1178.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is Part 2 of a series on how AI is reshaping the enterprise. <a href="https://open.substack.com/pub/andreipop/p/the-expanding-enterprise-stack-humans">Part 1</a> laid out the thesis: the stack is growing from two layers (Humans + Software) to three (Humans + Software + Digital Workforces). Digital workforces are net consumers of software, not replacements for it.</em></p><p><em>This post is about what happens next.</em></p><p>Part 1 landed loud. Mostly agreement. Some pushback. And then someone framed the defensive case better than anyone else has.</p><h2>The Defensive Case Is Settled</h2><p>The best counterargument to the &#8220;AI kills software&#8221; thesis came last week. You cannot replace Salesforce with code a coding assistant generated yesterday. Salesforce has 25 years of bug reports. Maybe millions of them. That system has been tested across thousands of large customers and enterprises. The idea that a small team will rip it out and replace it with probabilistically generated code is not realistic.</p><p>The argument extends far beyond Salesforce. Enterprise software isn&#8217;t just code. It&#8217;s millions of edge cases, resolved. It&#8217;s compliance frameworks, battle-tested. It&#8217;s integrations with thousands of other systems, maintained across decades.</p><p>You can vibe-code a CRM in a weekend. You cannot vibe-code 25 years of enterprise hardening.</p><p>But the defensive case only answers half the question. Software isn&#8217;t going away. Fine. The more interesting question is the offensive one. What happens next? How do software companies and digital workforce companies co-evolve? And how should enterprises think about investing across both?</p><p>Here&#8217;s my forecast.</p><h2>1. How Digital Workforces Will Work With Software</h2><p>The mental model most people carry is wrong. They imagine digital workforces replacing software. Or replacing humans who use software. Neither captures what&#8217;s actually happening.</p><p>Digital workforces are a new consumption layer. They sit alongside humans and software, executing work at a velocity humans cannot match. But they depend on software to do it. Every action a digital worker takes requires reading from or writing to a system of record. Every decision requires data. Every output requires a destination.</p><p>Think of it this way. A human financial advisor might update a CRM once after a client meeting. A digital workforce processing 5,000 client interactions per month hits that same CRM 5,000 times. Same software. 100x the usage.</p><p>We are seeing this at Humanity Labs. One of our wealth management partners had a team of advisors manually processing client service requests. Each request touched their CRM, their portfolio management system, their custodian platform, and their compliance tools. A human might handle 15 of these per day. 
Our digital workforce handles hundreds. Every single one reads from and writes to the same software stack. The number of API calls to their CRM didn&#8217;t decrease when we came in. It multiplied. Their software vendors are getting more usage, not less.</p><p>The relationship is symbiotic, not competitive. Digital workforces benefit from three things in software:</p><p><strong>Clean data to operate on.</strong> The quality of a digital workforce&#8217;s output improves in proportion to the quality of the data it can access. A CRM with 10 years of interaction history makes the digital workforce dramatically more effective than a blank database. Clean data is not a hard requirement, because perfectly &#8220;clean&#8221; data does not exist. It is a spectrum. The cleaner the data, the more it accelerates value.</p><p><strong>Functions to call.</strong> Digital workforces can click buttons AND call APIs. Every software capability exposed as a function becomes part of the digital workforce&#8217;s toolkit. The easier these functions are to access, the more value the symbiosis produces.</p><p><strong>Audit trails to write to.</strong> In regulated industries especially, every action the digital workforce takes must be logged, attributed, and reviewable. Software systems of record are where that accountability lives.</p><p>The pattern is clear. Digital workforces don&#8217;t bypass software. They drive more software consumption than humans ever did.</p><h2>2. What This Means for Software Businesses</h2><p>Not all software benefits equally. The three-layer stack creates winners and losers within the software category itself.</p><p><strong>Winners: Systems of Record</strong></p><p>Salesforce, SAP, Workday. These companies sit on decades of proprietary customer data. That data becomes more valuable, not less, when digital workforces need it to operate. The switching costs actually increase because now you&#8217;d be disrupting not just human workflows but digital workforce workflows too.</p><p>Bank of America made this call last week, upgrading SAP amid the carnage. They&#8217;re right. Deep data moats get deeper in a three-layer stack.</p><p><strong>Winners: Infrastructure Software</strong></p><p>Snowflake, Datadog, Cloudflare. Digital workforces generate enormous amounts of data, require monitoring, and consume compute. More digital workforce activity means more infrastructure load. Full stop. These companies are selling picks and shovels to the new gold rush.</p><p><strong>Winners: API-First Platforms</strong></p><p>Stripe, Twilio, Plaid. Companies that built their businesses as callable functions are perfectly positioned. They&#8217;re already speaking the language digital workforces speak. No translation layer required.</p><p><strong>Losers: UI-Dependent Point Solutions</strong></p><p>Software where the primary value proposition is &#8220;we made a nice interface for X.&#8221; If a digital workforce can&#8217;t use it easily, and if the underlying function is too simple, it will get absorbed into the orchestration layer.</p><p><strong>Losers: Thin Data Moats</strong></p><p>Point solutions that don&#8217;t accumulate value through usage (through network effects, for example). If the data isn&#8217;t getting richer, there&#8217;s no compounding advantage. The digital workforce doesn&#8217;t care which tool it uses for commodity functions. It does care about accessing the best network of tools and systems to accomplish the job, just like a human.</p><p>The net effect: the software industry doesn&#8217;t shrink. It bifurcates. The strong get stronger. The weak become utilities or disappear.</p>
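<p>The &#8220;callable functions&#8221; point above is concrete enough to sketch. Here is roughly what exposing one capability to a digital workforce looks like, in a generic schema shape rather than any particular vendor&#8217;s format:</p><pre><code># A software capability exposed as a function a digital worker can call.
# Illustrative shape only; real tool schemas vary by platform.
import json

create_account_transfer = {
    "name": "create_account_transfer",
    "description": "Initiate an ACAT transfer for a client account.",
    "parameters": {
        "type": "object",
        "properties": {
            "client_id": {"type": "string"},
            "from_custodian": {"type": "string"},
            "to_custodian": {"type": "string"},
            "assets": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["client_id", "from_custodian", "to_custodian"],
    },
}

# Every capability published like this joins the digital workforce's
# toolkit, and every call it makes lands in the audit trail.
print(json.dumps(create_account_transfer, indent=2))
</code></pre><p>A UI-only capability has no equivalent of this. That is the gap between the winners and the losers above.</p>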
<h2>3. What Each Side Should Invest In</h2><p><strong>If you&#8217;re a software company:</strong></p><p>Invest in API coverage. Every feature you have should be callable by a digital workforce. Audit your product surface area. If 60% of your capabilities require a human clicking through a UI, you have a 60% vulnerability. The companies that win the next decade will be the ones where every capability is a function, and the UI is just one of many clients calling those functions.</p><p>Invest in data network effects. Make your system the place where data accumulates, compounds, and becomes more valuable over time. This is the real moat. Not the code. Not the UI. The data flywheel.</p><p>Invest in digital workforce partnerships. Every major digital workforce provider is looking for software partners with clean APIs, rich data, and reliable infrastructure. Get embedded in their toolkits now. Being the default CRM that digital workforces are trained on is worth more than any feature launch.</p><p>Don&#8217;t invest in building your own digital workforce. This is the mistake most software companies are making right now. Bolting an &#8220;AI agent&#8221; onto your product to try to capture both layers. Most don&#8217;t have the operational expertise to deliver managed services. Invest in being the best software for digital workforces to operate on.</p><p><strong>If you&#8217;re a digital workforce company:</strong></p><p>Invest in software integration depth. Shallow integrations will commoditize. Deep integrations (understanding the data model, handling edge cases, maintaining state across sessions) are a moat. The digital workforce that can operate a client&#8217;s Salesforce instance as fluently as their best employee is the one that wins.</p><p>Invest in operational reliability. The point about millions of bug reports applies here too. Digital workforces that operate in production at enterprise scale need the same obsessive focus on reliability that software companies have built over decades. This is not a model quality problem. It&#8217;s an operational engineering problem.</p><p>Invest in domain expertise. Generic digital workforces are a commodity. A digital workforce that understands the specific compliance requirements, workflow patterns, and data structures of wealth management (or healthcare, or legal, or insurance) is not. Domain specialization is how you build switching costs in a layer that doesn&#8217;t naturally have them.</p><p>Don&#8217;t invest in replacing software. You are better with software than without it. You benefit from its data. You benefit from its APIs. The temptation to &#8220;disintermediate&#8221; the software layer is real. It&#8217;s also a strategic dead end. You would be rebuilding 25 years of enterprise hardening while simultaneously trying to deliver digital workforce value. Pick one.</p><h2>4. How Enterprise Buyers Should Think About Investment</h2><p>If you&#8217;re an enterprise deciding where to allocate budget, the framework is straightforward.</p><p><strong>Double down on your systems of record.</strong> The CRM, the ERP, the core platforms where your data lives. These are about to become more valuable, not less. Clean them up. Invest in data quality. 
Make sure your 10 years of client interaction history is actually usable, not a graveyard of incomplete records.</p><p><strong>Audit your software stack for API readiness.</strong> Every tool in your stack should be evaluated on one question: can a digital workforce use it easily? If the answer is no, that tool has a shelf life. Start planning the migration now. You don&#8217;t need to rip and replace today. But you need a roadmap to an API-first stack.</p><p><strong>Budget for digital workforces as a new line item.</strong> This is not a software budget. It&#8217;s not a headcount budget. It&#8217;s a new category. The companies that treat digital workforce spending as a rounding error on their IT budget will lose to the ones that treat it as a strategic capability investment.</p><p><strong>Invest in orchestration, not point solutions.</strong> Don&#8217;t buy seven AI tools for seven workflows. Invest in a digital workforce partner that can deliver across your entire software and business operations stack. The value is in the done work, not the individual automation.</p><p><strong>Start with high-volume, low-judgment work.</strong> The best use of digital workforces today is work that requires touching many systems, processing many transactions, and following established rules. Not strategic decisions. Not client relationship management. Not exceptions. Volume and velocity first. Judgment and nuance later.</p><h2>The Punchline</h2><p>The bears are right about one thing: AI changes everything. But their conclusion is wrong. Software isn&#8217;t going away. That&#8217;s the boring take. The interesting one is why.</p><p>Software isn&#8217;t surviving despite AI. It&#8217;s becoming more essential because of AI. Digital workforces get better the more software they can operate on. More digital workforce activity means more software consumption, more data generation, more API calls, more infrastructure load.</p><p>The enterprise stack is expanding. The companies that understand this will invest accordingly. Software companies will invest in becoming the best platforms for digital workforces to operate on. Digital workforce companies will invest in operating those platforms better than humans can. Enterprise buyers will invest in both.</p><p>The ones still debating whether AI replaces software are asking last year&#8217;s question. The question now is: how fast does the new layer scale, and who captures the value when it does?</p>]]></content:encoded></item><item><title><![CDATA[Slope Not Intercept
]]></title><description><![CDATA[What most people get wrong about trends]]></description><link>https://www.ap.xyz/p/slope-not-intercept</link><guid isPermaLink="false">https://www.ap.xyz/p/slope-not-intercept</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sun, 08 Feb 2026 22:58:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d5c3bafb-d4fa-47aa-9e5f-4e6e6ae91cde_2752x1339.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Everyone knows trends matter. That&#8217;s not an insight. That&#8217;s a poster in a high school guidance counselor&#8217;s office.</p><p>The actual insight is weirder, deeper, and more useful than &#8220;pay attention to trajectory.&#8221; It starts with a question most people never ask: <strong>why do slopes persist?</strong></p><p>Once you understand the answer to that question, you start seeing the world differently. You stop being surprised by outcomes that surprise everyone else. And you develop what I think is the single most underleveraged mental model in business, investing, and life.</p><h2>Slopes are character. Intercepts are circumstance.</h2><p>Here is the first thing people get wrong. They treat the current state of a system (its intercept) as a fact about reality, and its trajectory (its slope) as a prediction. One is real. The other is speculative.</p><p>This is exactly backwards.</p><p>The intercept is the noisiest variable. It&#8217;s the product of a thousand random inputs. Timing. Luck. Starting conditions. A company&#8217;s revenue in any given quarter is shaped by one-time deals, seasonal variation, macro conditions. A person&#8217;s net worth at 35 is shaped by which city they happened to live in, whether they graduated into a recession, what their parents could afford. The intercept is mostly circumstance.</p><p>The slope is structural. It&#8217;s generated by the deep properties of a system. Culture. Physics. Network topology. Feedback loops. Institutional design. These things change slowly. Which means the slope is actually the more durable signal.</p><p>Think about that. The thing most people treat as speculative (where it&#8217;s going) is generated by more stable forces than the thing they treat as certain (where it is).</p><p>A company with a strong slope and a weak intercept is a company with good structural properties that hasn&#8217;t had time to express them yet. A company with a strong intercept and a weak slope is a company burning through inherited advantage. Circumstance is being corrected by character.</p><p>This is why slope persists. Not because of some mystical momentum. Because the structural generators of slope, the culture, the compounding knowledge, the network effects, the cost curves driven by physics, change on a different timescale than the noise that determines intercept.</p><h2>The second derivative is where the money is</h2><p>Most people who do think about slope still think about it wrong. They think linearly. &#8220;It grew 20% last year, so it&#8217;ll probably grow 20% next year.&#8221;</p><p>The more important question is whether the slope itself is changing. The second derivative. Is the rate of change accelerating or decelerating?</p><p>This is where almost all asymmetric outcomes hide. Every S-curve in history has a period where the second derivative is positive (the slope is steepening) followed by a period where the second derivative is negative (the slope is flattening). 
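</p><p>Checking this doesn&#8217;t require anything fancy; discrete differences get you most of the way. A toy sketch in Python, with made-up revenue numbers:</p><pre><code># Toy sketch: estimate the slope and the "slope of the slope"
# from a series. The revenue figures are made up for illustration.
revenue = [100, 120, 150, 190, 235, 280, 320]

slope = [b - a for a, b in zip(revenue, revenue[1:])]  # first derivative
accel = [b - a for a, b in zip(slope, slope[1:])]      # second derivative

print(slope)  # [20, 30, 40, 45, 45, 40] -> still growing every period
print(accel)  # [10, 10, 5, 0, -5] -> but the growth is now decelerating
</code></pre><p>A series like that is still rising, and the headline still says &#8220;growth.&#8221; The second derivative has already turned negative.</p><p>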
The entire game of timing is figuring out where you are on that curve.</p><p>But here&#8217;s what makes this hard: humans are perceptually wired for linear extrapolation. Kahneman documented this. We anchor to recent rates and project them forward. We are structurally incapable of intuiting exponential curves or inflection points without doing the math.</p><p>In 2024, global AI investment crossed $200 billion. Up from around $40 billion five years prior. Most analysts projected continued growth. But the real question was never &#8220;will it grow?&#8221; It was &#8220;is the second derivative positive or negative?&#8221; Is each dollar of investment producing more capability than the last (positive second derivative) or less (negative second derivative)? That distinction is the difference between a $10 trillion industry and a $1 trillion one.</p><p>Solar energy is the canonical example. For decades, the cost curve had both a persistent negative slope (falling costs) and a positive second derivative (costs falling faster each decade as manufacturing scaled). Around 2020, the second derivative began to flatten. Costs were still falling, but more slowly. The slope was still good, but the slope of the slope had changed. This distinction matters enormously for capital allocation, and almost nobody in public discourse made it.</p><h2>Slope stacking: where civilizations turn</h2><p>Here&#8217;s the idea I think is genuinely underexplored.</p><p>Individual slopes are interesting. But the interaction effects between slopes are where the most consequential outcomes in history come from. I call this slope stacking.</p><p>When the cost curve of one technology intersects with the adoption curve of a dependent technology, and both intersect with a demographic shift or a policy change, each slope accelerates the others. These multi-slope systems create outcomes that look like &#8220;disruption&#8221; or &#8220;revolution&#8221; in hindsight, but are actually just the predictable result of compounding slopes.</p><p>The Industrial Revolution wasn&#8217;t one slope. It was the cost curve of coal extraction intersecting with the efficiency curve of steam engines intersecting with the urbanization rate intersecting with the literacy rate. Each slope fed the others. Cheaper coal made steam engines more economic. Steam engines made coal extraction cheaper. Urbanization concentrated labor. Literacy enabled knowledge transfer that steepened every other curve. No single trend caused the Industrial Revolution. The stack did.</p><p>The same pattern is playing out right now. AI capability curves are steepening. Energy demand curves are rising. The cost curve of inference is falling. The labor productivity curve in knowledge work is inflecting. The cost of launching satellites is dropping. The amount of data generated per capita is rising. Each of these is interesting individually. But they don&#8217;t exist individually. They interact. Cheaper inference means more AI agents. More AI agents mean more energy demand. More energy demand accelerates investment in generation. More generation investment steepens the solar and nuclear cost curves. Cheaper energy makes inference cheaper. The loop tightens.</p><p>People who analyze these trends in isolation will get the next decade wrong. 
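</p><p>A crude way to see why: simulate two slopes that feed each other, next to the same slopes held separate. A toy model in Python, with invented coefficients, purely to show the shape:</p><pre><code># Toy model: two trends that feed each other vs. the same trends in
# isolation. Every rate here is invented for illustration.
agents, cost = 1.0, 1.0          # index values at year zero
agents_iso, cost_iso = 1.0, 1.0

for year in range(10):
    # Isolated view: fixed agent growth, fixed cost decline.
    agents_iso *= 1.20
    cost_iso *= 0.90
    # Coupled view: cheaper inference pulls in more agents, and more
    # agents push inference costs down the learning curve faster.
    agents *= 1.20 + 0.3 * (1.0 - cost)
    cost *= 0.90 - 0.05 * (agents / 100)

print(round(agents_iso, 1), round(agents, 1))  # coupled pulls far ahead
</code></pre><p>The coefficients are fiction. The shape is not: coupled slopes bend away from every straight-line forecast.</p><p>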
The right mental model isn&#8217;t &#8220;AI is getting better&#8221; or &#8220;energy demand is rising.&#8221; It&#8217;s &#8220;what happens when these slopes multiply?&#8221;</p><p>History&#8217;s biggest mispricings occur when slope stacks are underway and the consensus is still evaluating each slope independently.</p><h2>Why slope perception is asymmetric (and exploitable)</h2><p>There&#8217;s a well-documented asymmetry in how humans perceive positive and negative slopes. Loss aversion means we notice decline faster than growth. A company losing 10% of its customers per quarter triggers alarm. A company gaining 10% per quarter gets a polite nod.</p><p>But there&#8217;s a subtler asymmetry that&#8217;s less discussed. We&#8217;re better at perceiving slopes in things we can count than in things we can&#8217;t.</p><p>Revenue slope? You can see it in a chart. Customer count slope? Same. But what about the slope of institutional trust? The slope of employee morale? The slope of technical debt? The slope of a founder&#8217;s judgment?</p><p>These are all real slopes generated by real structural forces. They compound. They persist. And they are nearly invisible in standard reporting.</p><p>The best investors I know spend most of their time trying to measure invisible slopes. They&#8217;re not looking at the dashboard. They&#8217;re looking at the rate of change in things that don&#8217;t have dashboards. How fast is this founder learning? Is this company&#8217;s engineering culture getting better or worse each quarter? Is the regulatory environment tightening at an accelerating or decelerating rate?</p><p>These unmeasured slopes are where the market is most inefficient. Because if a slope can&#8217;t be charted, most people act as if it doesn&#8217;t exist.</p><h2>The slope trap</h2><p>I&#8217;d be dishonest if I didn&#8217;t flag the failure mode.</p><p>Not all slopes persist. Some slopes are temporary expressions of a one-time force. A company growing 50% because it just got featured in the press. A country&#8217;s GDP spiking because of a commodity price surge. A person&#8217;s career accelerating because of one lucky break.</p><p>The question is always: <strong>is the slope generated by structure or by event?</strong></p><p>Structure-driven slopes persist because their generators are durable. Physics-driven cost curves (solar, compute, genomics) tend to be structural. Culture-driven performance curves tend to be structural. Network-effect-driven adoption curves tend to be structural.</p><p>Event-driven slopes revert because their generators are temporary. Stimulus-driven economic growth reverts when the stimulus ends. Hype-driven adoption reverts when attention moves. Charismatic-founder-driven culture reverts when the founder burns out.</p><p>The discipline is in distinguishing the two. And the honest answer is that it&#8217;s hard. The best heuristic I&#8217;ve found: look at the slope over multiple timescales. If it persists across different macro conditions, different leaders, different market environments, it&#8217;s probably structural. If it only appears in one context, it&#8217;s probably event-driven.</p><h2>The punchline is about probability, not direction</h2><p>Here&#8217;s what I actually want you to take from this.</p><p>The claim isn&#8217;t &#8220;trends always continue.&#8221; They don&#8217;t. 
The claim is that <strong>the base rate of trend persistence is significantly higher than most people&#8217;s intuitive estimate.</strong></p><p>Jegadeesh and Titman&#8217;s foundational work on momentum showed that stocks with positive 12-month returns continue to outperform over the next 3-12 months at statistically significant rates. This has been replicated across equities, bonds, commodities, currencies, and real estate in virtually every major market. Not because of magic. Because the structural forces that generated the trend (earnings growth, sector rotation, institutional allocation) operate on longer timescales than the trend itself.</p><p>The same principle operates outside of markets. Countries that are growing tend to keep growing. Companies that are improving their margins tend to keep improving them. People who are getting better at their craft tend to keep getting better.</p><p>Not always. Not forever. But more often and for longer than your gut tells you.</p><p>This means that when you evaluate anything, a business, a hire, an investment, a relationship, you should be giving more weight to the slope than you currently are. Not infinite weight. But more.</p><p>Most people&#8217;s mental model allocates maybe 30% to trajectory and 70% to current state. The correct allocation is probably closer to the inverse.</p><p>The intercept tells you where something is. The slope tells you what it is.</p><p>Bet on slope.</p>]]></content:encoded></item><item><title><![CDATA[The Expanding Enterprise Stack, Part 1: Humans + Software + Digital Workforces]]></title><description><![CDATA[On Tuesday, a Goldman Sachs basket of software stocks fell 6% in a single session, its worst day since the April tariff shock.]]></description><link>https://www.ap.xyz/p/the-expanding-enterprise-stack-humans</link><guid isPermaLink="false">https://www.ap.xyz/p/the-expanding-enterprise-stack-humans</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Wed, 04 Feb 2026 23:34:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7d894f22-5d30-4fc0-8592-43aa46e3ddc2_2752x1321.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On Tuesday, a Goldman Sachs basket of software stocks fell 6% in a single session, its worst day since the April tariff shock. The iShares Expanded Tech-Software Sector ETF (IGV) dropped 4.6%. ServiceNow and Salesforce each shed 7%. Intuit collapsed 11%.</p><p>On Wednesday, the selling continued. The Nasdaq fell another 2.3%. Palantir dropped 10%. AMD plunged 17%. The S&amp;P 500 lost another 1%.</p><p>The two-day rout has now spread globally. Indian IT exporters fell 6.3%. China&#8217;s CSI Software Services Index dropped 3%. Hong Kong&#8217;s Kingdee tumbled 13%. Japan&#8217;s Recruit Holdings and Nomura Research lost 8-9% each.</p><p>Bloomberg put the damage at $285 billion wiped out in a single day.</p><p>A Jefferies trader called it the &#8220;SaaSpocalypse.&#8221; His description of the trading: &#8220;Get me out.&#8221;</p><p>After watching this for two days, I wrote this.</p><h2>A Year in the Making</h2><p>This week&#8217;s carnage is the acceleration of a trend that&#8217;s been building for over a year.</p><p>In 2025, IGV returned 5.56% while the broader tech category gained 22.78%. The WisdomTree Cloud Computing Fund (WCLD) posted negative 7.94%. The SaaS Index fell 6.5% while the S&amp;P 500 climbed 17.6%.</p><p>The divergence widened into 2026. IGV dropped 16% in January alone. 
Year-to-date, ServiceNow is down 28%, Salesforce down 26%, Intuit down 34%.</p><p>Revenue multiples for SaaS companies have compressed from above 7x at the start of 2025 to below 5x today.</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/56c14d4d-e400-4346-8140-ddd9e7204c92_1482x887.png" alt=""></figure><p>The market is pricing software as if it&#8217;s a dying category.</p><h2>The Trigger and the Thesis</h2><p>The immediate catalyst was Anthropic releasing a productivity tool for in-house lawyers on Tuesday. But the fear has been building since January&#8217;s Claude Cowork launch.</p><p>The bear thesis has two parts:</p><p>First, that AI coding tools (&#8220;vibe coding&#8221;) will let enterprises build their own software. Why pay for Salesforce when Claude can build you a CRM?</p><p>Second, that AI agents will automate away the workflows that software currently supports. Why buy ServiceNow if an agent handles the ticket routing?</p><p>&#8220;The draconian view is that software will be the next print media or department stores,&#8221; the Jefferies trader told Bloomberg.</p><p>Jensen Huang pushed back at a Cisco-hosted AI conference yesterday: &#8220;There&#8217;s this notion that the tool in the software industry is in decline, and will be replaced by AI. It is the most illogical thing in the world, and time will prove itself.&#8221;</p><p>The market kept selling anyway.</p><p>But Huang&#8217;s point deserves deeper analysis than dismissal. He&#8217;s making an architectural observation about how enterprise stacks actually work.</p><h2>The Stack Is Expanding, Not Contracting</h2><p>For 40 years, the enterprise stack has been simple: <strong>Humans + Software</strong></p><p>Software amplified human productivity. Humans operated software. The relationship was direct.</p><p>What&#8217;s emerging now isn&#8217;t a replacement of that stack. It&#8217;s an expansion: <strong>Humans + Software + Digital Workforces</strong></p><p>Digital workforces are a new layer. They sit between humans and software, executing at a velocity and scale humans cannot match. But they don&#8217;t eliminate the other layers. They increase demands on both.</p><h2>Digital Workforces Are a Consumption Layer</h2><p>A digital workforce processing 50,000 client requests per month doesn&#8217;t reduce software usage. 
It multiplies it.</p><p>Every automated workflow requires:</p><ul><li><p>Data to read and write</p></li><li><p>APIs to call</p></li><li><p>Systems to authenticate against</p></li><li><p>Audit trails to maintain</p></li><li><p>Compliance checks to pass</p></li></ul><p>Digital workforces are the largest net-new consumers of software infrastructure in a generation. They&#8217;re not replacing Snowflake and Datadog. They&#8217;re becoming their biggest customers.</p><p>The more work that shifts to digital workforces, the more software those workforces need to operate on.</p><h2>The New Value Hierarchy</h2><p>In the expanded stack, value accrues to two things:</p><p><strong>Data Networks</strong></p><p>Software that accumulates proprietary data through usage becomes exponentially more valuable. Every transaction, every client interaction, every workflow execution adds to a data asset that digital workforces can learn from and operate on.</p><p>A CRM with 10 years of client interaction history isn&#8217;t threatened by this shift. It&#8217;s essential to it. The digital workforce that can access that data network delivers dramatically better outcomes than one starting cold.</p><p>More digital workforce activity = more data flowing through these systems = stronger network effects = higher switching costs.</p><p><strong>Functions Exposed to Digital Workforces</strong></p><p>The second value driver is capability exposure. Software that exposes its functions as callable operations becomes infrastructure for the new layer.</p><p>&#8220;Generate a proposal&#8221; as an API endpoint is valuable. &#8220;Generate a proposal&#8221; as a 12-step UI wizard is a liability.</p><p>Digital workforces don&#8217;t browse software. They call functions. Every capability locked behind a GUI is a capability that can&#8217;t be leveraged at scale. Every capability exposed as a clean function becomes part of the digital workforce&#8217;s toolkit.</p><p>Bank of America made a version of this argument yesterday, upgrading SAP amid the carnage: &#8220;Deep domain expertise and business integration are hard for new entrants to replicate, making complex, mission-critical platforms like SAP less vulnerable as they embed GenAI using proprietary customer data.&#8221;</p><p>The platforms that expose rich function libraries will see usage multiply as digital workforces adopt them as core infrastructure.</p><h2>What This Means for Software Companies</h2><p>This isn&#8217;t about survival. 
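</p><p>Before the positioning question, it&#8217;s worth making the function-versus-wizard distinction above concrete. A minimal sketch in Python (hypothetical names, not any particular vendor&#8217;s API) of a capability exposed as a callable operation:</p><pre><code># Minimal sketch, hypothetical names throughout: "generate a proposal"
# as one callable operation instead of a 12-step UI wizard.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ProposalRequest(BaseModel):
    client_id: str
    portfolio_id: str
    template: str = "standard"

def render_proposal(client_id: str, portfolio_id: str, template: str) -> dict:
    """Stand-in for the real document pipeline."""
    return {"id": f"prop-{client_id}-{portfolio_id}", "template": template}

@app.post("/v1/proposals")
def generate_proposal(req: ProposalRequest):
    # A human fills out a wizard once; a digital workforce can call
    # this endpoint thousands of times a day against the same contract.
    doc = render_proposal(req.client_id, req.portfolio_id, req.template)
    return {"proposal_id": doc["id"], "status": "ready"}
</code></pre><p>The UI becomes one client of that function among many. That&#8217;s the whole positioning question in miniature.</p><p>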
It&#8217;s about positioning within the new stack.</p><p><strong>Expanding value:</strong></p><ul><li><p>Systems of record with deep data networks</p></li><li><p>Platforms with clean, comprehensive APIs</p></li><li><p>Infrastructure software (more digital workforce activity = more infrastructure load)</p></li><li><p>Workflow engines that can be orchestrated programmatically</p></li></ul><p><strong>Compressing value:</strong></p><ul><li><p>Point solutions with thin data assets</p></li><li><p>Software where the primary value is the UI</p></li><li><p>Tools that can&#8217;t be called by digital workforces</p></li><li><p>Anything that requires human-in-the-loop for basic operations</p></li></ul><p>The question for every software company isn&#8217;t &#8220;will AI replace us?&#8221; It&#8217;s &#8220;are we building for a stack that includes digital workforces, or are we building for a stack that&#8217;s going away?&#8221;</p><h2>The Math on the Sell-Off</h2><p>The market is applying a single narrative (AI replaces software) to a complex reality (AI changes which software wins).</p><p>Consider the compression in valuations. SaaS multiples have dropped from 7x+ to below 5x revenue. For companies with strong data networks and API exposure, this represents a mispricing. For point solutions with thin moats and GUI-dependent workflows, it may be generous.</p><p>The blunt instrument of sector-wide selling creates opportunities in the former while masking continued risk in the latter.</p><p>As one fund manager told reporters this week, there&#8217;s a &#8220;discrepancy between excellent fundamentals and catastrophic price performance, a phenomenon rarely observed in this form.&#8221;</p><h2>The Expanded Stack Is Larger, Not Smaller</h2><p>The simplest way to see this: the three-layer stack is bigger than the two-layer stack.</p><p>Humans still need software. Now digital workforces also need software. That&#8217;s two sources of demand instead of one.</p><p>The humans in this stack shift toward judgment, oversight, and exception handling. The digital workforces handle volume and velocity. The software serves both.</p><p>Total software consumption goes up. Total value of data networks goes up. Total demand for exposed functions goes up.</p><p>The stack isn&#8217;t shrinking. It&#8217;s expanding. And every layer of it needs software to function.</p>]]></content:encoded></item><item><title><![CDATA[Welcome to the Room]]></title><description><![CDATA[A leadership lesson on executive accountability]]></description><link>https://www.ap.xyz/p/welcome-to-the-room</link><guid isPermaLink="false">https://www.ap.xyz/p/welcome-to-the-room</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Tue, 03 Feb 2026 17:24:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ac6b621d-3f81-476c-aa46-e4cd173bd6a5_2974x1344.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When a new executive joins the senior leadership team at Microsoft, they&#8217;re &#8220;invited to the room.&#8221; Most people assume this is a reward. Better title, bigger scope, more support.</p><p>It&#8217;s not.</p><p>A better analogy is making it to the NFL Super Bowl. You&#8217;re now on an elite team where nothing less than peak performance is acceptable. As the Navy SEALs put it: &#8220;The only easy day was yesterday.&#8221;</p><p>At one of these meetings, the new executives stood for a round of applause. 
Then the CEO delivered the most concise, precise, and actionable lesson in leadership I&#8217;ve ever encountered.</p><h2>The Speech</h2><blockquote><p><em>Welcome to the room. Congratulations... your days of whining are over.</em></p><p><em>In this room, we deliver success, we don&#8217;t whine.</em></p><p><em>Look, I&#8217;m not confused. I know you walk through fields of shit every day. Your job is to find the rose petals.</em></p><p><em>Don&#8217;t come whining that you don&#8217;t have the resources you need. We&#8217;ve done our homework. We&#8217;ve evaluated the portfolio, considered the opportunities, and allocated our available resources to those opportunities. That is what you have to work with.</em></p><p><em>Your job is to manufacture success with the resources you&#8217;ve been allocated.</em></p><p><em>And yes, you have a hard job. You only have two controls: 1) The clarity, culture, and energy you give your teams, and 2) Resource allocation.</em></p><p><em>If you are in this room, you need to deliver outsized success. To do that, you will need to allocate resources ahead of conventional wisdom. Conventional wisdom will generate conventional success, and that won&#8217;t allow you to stay in this room.</em></p><p><em>You need to have courage and be bold. And when you do that, you may fail.</em></p><p><em>But. If you fail, I will back you if, and only if, you are intellectually honest.</em></p><p><em>Intellectually honest means:</em></p><ol><li><p><em>You always have a plausible theory of success.</em></p></li><li><p><em>You allocate your resources in accordance with that theory.</em></p></li><li><p><em>You monitor your theory.</em></p></li><li><p><em>When you find it is no longer plausible, you make changes to get a new plausible theory of success.</em></p></li></ol><p><em>If you are doing these things, I will back you even if you have a failure.</em></p><p><em>...As long as you don&#8217;t make it a habit.</em></p></blockquote><p>This wasn&#8217;t a pep talk. It was an architecture for success. And a clear message: implement this architecture or get out to make room for someone who will.</p><h2>The Two Controls</h2><p>Executives have exactly two levers:</p><ol><li><p><strong>Clarity, culture, and energy you give your teams</strong></p></li><li><p><strong>Resource allocation</strong></p></li></ol><p>That&#8217;s it. Everything else is downstream.</p><p>This is liberating and terrifying at the same time. You can&#8217;t blame the market, the competition, or headquarters. You control clarity and resource allocation. That&#8217;s your job. Do it well or leave.</p><h2>The Intellectual Honesty Framework</h2><p>This is the part that changes how you think about running anything:</p><p><strong>1. Always have a plausible theory of success.</strong></p><p>Not a hope. Not a vision deck. A theory that connects resources to outcomes through a chain of causation you can actually articulate. If you can&#8217;t explain how your actions lead to the cash register ringing, you don&#8217;t have a theory.</p><p><strong>2. Allocate resources in accordance with that theory.</strong></p><p>If your strategy says one thing but your resource allocation says another, your resource allocation is your real strategy. The exec who sends out a strategy memo without shifting resources has a dream, not a plan.</p><p><strong>3. Monitor your theory.</strong></p><p>You need telemetry. Not vanity metrics. Signals that tell you whether your theory is plausible while you still have time to act. 
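</p><p>One way to make that concrete: for every signal you track, write down how long it takes to arrive. A back-of-the-envelope sketch, with invented numbers:</p><pre><code># Back-of-the-envelope sketch, invented numbers: for each signal,
# how many months until it can confirm or kill the theory, compared
# to the runway left to act on it.
runway_months = 12

signal_lag_months = {
    "pipeline_conversion": 2,
    "net_revenue_retention": 6,
    "enterprise_land_and_expand": 14,
}

for signal, lag in signal_lag_months.items():
    usable = runway_months - lag > 0
    print(f"{signal}: {'usable' if usable else 'arrives too late'}")
</code></pre><p>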
If your feedback loop is longer than your runway, you&#8217;re already dead.</p><p><strong>4. When you find it is no longer plausible, make changes.</strong></p><p>Most leaders fall in love with their theory. They keep allocating resources to a broken thesis because changing feels like admitting failure. Intellectual honesty means updating when the data says update. Not when you feel like it. Not when the board forces you.</p><h2>Five Questions That Flush Out Self-Deception</h2><p>Feynman said: &#8220;The first principle is that you must not fool yourself, and you are the easiest person to fool.&#8221;</p><p>These questions strip away the happy talk and corporate speak:</p><p><strong>&#8220;Does our resource allocation actually support our theory of success?&#8221;</strong></p><p>If you create a new strategy but don&#8217;t shift resources, you have a dream, not a plan. If you don&#8217;t have resources to support your strategy, you have the wrong strategy. Quit whining and wasting time trying to get resources for that strategy. Get a strategy that works with what you have.</p><p><strong>&#8220;What signals will tell us whether our theory is plausible, and how long before we get those signals?&#8221;</strong></p><p>It&#8217;s not enough to realize you need to pivot. You need to know your theory is failing while you still have enough remaining resources to execute a change in direction.</p><p><strong>&#8220;Do the dots actually connect?&#8221;</strong></p><p>Start at the end and work backward. Every step needs a plausible plan. If your success depends on another team&#8217;s output, you own the partnership, verification, and monitoring. If they fail and you didn&#8217;t see it coming, you failed.</p><p><strong>&#8220;Are we manufacturing success, or just managing decline?&#8221;</strong></p><p>Do not confuse activity with progress. If you&#8217;re not actively converting resources into winning outcomes, you&#8217;re rearranging deck chairs on a sinking ship. You&#8217;re judged by outsized success delivered, not by how busy you appear.</p><p><strong>&#8220;Am I generating clarity or confusion for my team?&#8221;</strong></p><p>Don&#8217;t let yourself off the hook with &#8220;working on it.&#8221; That&#8217;s a known failure mode. You either have a plausible theory that accounts for the grit of reality, or you&#8217;re wasting time. And you have to repeat that theory over and over. It&#8217;s like parenting: the first hundred thousand times don&#8217;t count. But after you say &#8220;please and thank you&#8221; a hundred thousand times, they start to get it.</p><h2>The Standard</h2><p>This framework sets a clear standard for leadership:</p><p><strong>No whining.</strong> You have the resources you have. Your job is to manufacture success with them.</p><p><strong>Be bold.</strong> Conventional wisdom generates conventional success. That&#8217;s not enough.</p><p><strong>Stay intellectually honest.</strong> Have a theory, align resources, monitor, and pivot when needed. Do this and you&#8217;ll be backed even if you fail.</p><p><strong>Don&#8217;t make failure a habit.</strong> The backing has limits.</p><h2>The Bottom Line</h2><blockquote><p>&#8220;Now, stop talking about it and go operationalize it. Get the telemetry. Align the resources. Manufacture the success. 
Anything else is just whining.&#8221;</p></blockquote><p>That&#8217;s the job.</p>]]></content:encoded></item><item><title><![CDATA[Most Humans Are Just LLMs in Denial]]></title><description><![CDATA[Most people live their lives like LLMs, and I don&#8217;t mean that as metaphor.]]></description><link>https://www.ap.xyz/p/most-humans-are-just-llms-in-denial</link><guid isPermaLink="false">https://www.ap.xyz/p/most-humans-are-just-llms-in-denial</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Mon, 02 Feb 2026 23:41:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3b8d2e2d-e63b-41c9-b032-f1e04f5294b5_1696x1117.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people live their lives like LLMs, and I don&#8217;t mean that as metaphor. I mean it literally.</p><p>We move through the world as probability engines trained on the past, running compressed behavioral scripts over and over again, mistaking repetition for identity and automation for intelligence.</p><p>If you stop and examine how much of your day is truly authored, how much is a conscious, friction-filled decision versus a reflex, you&#8217;ll find the percentage is brutally low.</p><p>You eat what you ate before. You speak how you&#8217;ve spoken before. You respond in emotional patterns that were etched into you long before you had the words to describe them.</p><p>You&#8217;re not a sentient actor.</p><p>You&#8217;re a stitched-together memory.</p><h2>The Science of Human Autopilot</h2><p>This isn&#8217;t poetry. It&#8217;s peer-reviewed neuroscience.</p><p>Daniel Kahneman&#8217;s research, published in <em>Thinking, Fast and Slow</em>, estimates that 70-90% of our daily choices are made unconsciously with minimal cognitive effort. System 1 thinking, as he calls it: fast, automatic, efficient. It handles everything from what you eat for breakfast to how you navigate familiar routes.</p><p>System 2, the conscious, deliberate thinking we associate with being &#8220;rational,&#8221; only kicks in when automation fails. When there&#8217;s surprise. When there&#8217;s novelty. Otherwise? You&#8217;re running cached scripts.</p><p>John Bargh&#8217;s research at Yale confirmed this pattern across decades of study. His Automaticity in Cognition, Motivation, and Evaluation (ACME) lab has shown that automatic processes play a role in stereotyping, social behaviors like aggression and politeness, our liking and disliking of people, and even our goal pursuit. We can chase objectives for extended periods without conscious intention or awareness of what we&#8217;re pursuing.</p><p>The human nervous system optimizes for efficiency, not reflection. Intelligence is a last resort, deployed only when our automation fails.</p><h2>Your Brain Is a Prediction Machine</h2><p>Karl Friston, one of the most cited neuroscientists alive, has spent decades advancing what&#8217;s called the Free Energy Principle. His core argument: the brain is not primarily a passive processor of sensory information. It&#8217;s a prediction machine.</p><p>Your brain constantly generates hypotheses about the causes of sensory inputs and updates those hypotheses based on prediction errors. When reality doesn&#8217;t match your internal model, you get surprise, confusion. Your brain adjusts.</p><p>This is called predictive coding. It dates back to Helmholtz&#8217;s concept of &#8220;unconscious inference&#8221; in 1860. The brain fills in what it expects to see, hear, feel. It predicts, then corrects. 
Over and over. Hierarchically.</p><p>Andy Clark, a philosopher who has worked extensively with these ideas, describes the brain as an &#8220;experience machine.&#8221; Your perception isn&#8217;t passive reception. It&#8217;s active construction. You&#8217;re not seeing reality. You&#8217;re seeing your brain&#8217;s best guess about reality.</p><p>This isn&#8217;t fundamentally different from how LLMs work. They predict the next token in a sequence based on patterns in training data. Your brain predicts the next sensation, the next word, the next social cue based on patterns in lived experience.</p><p>Both systems: statistical engines trained on the past, generating predictions in the present.</p><h2>The Uncomfortable Convergence</h2><p>Google Research, in collaboration with Princeton, NYU, and Hebrew University, recently published work showing that the brain&#8217;s language areas operate similarly to LLMs. Both attempt to predict the next word before it&#8217;s spoken. Both use hierarchical representations, moving from simple features to complex abstractions.</p><p>MIT researchers found that LLMs use a &#8220;semantic hub&#8221; to process diverse data types, abstractly reasoning through a central medium. This mirrors how the human brain&#8217;s anterior temporal lobe integrates semantic information from various modalities.</p><p>Columbia University researchers discovered something startling: as LLMs get more powerful, they don&#8217;t just perform better. They become more brain-like. Their embeddings become more similar to the brain&#8217;s neural responses to language. The architectures are converging.</p><p>This isn&#8217;t because AI is catching up to some mysterious human essence. It might be because both systems are solving the same fundamental problem: how to compress, predict, and navigate a complex environment with limited resources.</p><p>Human learning is grounded in real-world experience. LLM learning is grounded in text distributions. But the underlying principles: store structure, not raw data. Expand it back out on demand. Compress life into schemas, prototypes, patterns.</p><p>We&#8217;ve spent so long worshipping our own complexity that we forgot how much of it is shallow.</p><h2>The Gap Between Fluency and Thought</h2><p>Most humans aren&#8217;t building new thought. They&#8217;re shuffling cached tokens from their social, cultural, and emotional training sets.</p><p>We never had to see it so clearly until now.</p><p>Very few people actively reject their training data. Very few go out of their way to think beyond the weights they were handed.</p><p>We marvel at ChatGPT for generating fluent answers, but we never ask why fluency impresses us so much.</p><p>Maybe it&#8217;s because we were never fluent in thinking to begin with.</p><p>The existential vacuum Viktor Frankl described, that widespread feeling of meaninglessness he observed in the 20th century, might be partially explained by this. When no instinct tells us what we have to do, and no tradition tells us what we ought to do, we default to what Frankl called conformism (doing what others do) or totalitarianism (doing what others wish us to do).</p><p>We become LLMs optimizing for social loss functions we never consciously chose.</p><h2>But Here&#8217;s Where It Breaks</h2><p>The comparison works up to a point. Then it doesn&#8217;t.</p><p>And this is where it matters.</p><p>The brain predicts the world: multisensory, social, physical, temporal. The LLM predicts text only. 
Your hierarchy is multimodal and bidirectional. The LLM&#8217;s hierarchy is text-only and feedforward at inference.</p><p>Human compression is driven by meaning, goals, emotion. LLM compression is driven solely by loss minimization on text.</p><p>But the real difference isn&#8217;t architectural. It&#8217;s existential.</p><p>You have a body.</p><p>You will die.</p><p>You can suffer. And you can choose how to respond to that suffering.</p><h2>What Makes You Human</h2><p>When Viktor Frankl emerged from Nazi concentration camps, he didn&#8217;t come back with a theory about computation. He came back with a theory about meaning.</p><p>His observations were simple but devastating. Among the inmates, those who survived were more likely to have found personal meaning in the experience. Those who could connect with a purpose, even an imagined one, had a better chance of enduring.</p><p>&#8220;Everything can be taken from a man but one thing,&#8221; he wrote, &#8220;the last of the human freedoms: to choose one&#8217;s attitude in any given set of circumstances, to choose one&#8217;s own way.&#8221;</p><p>LLMs don&#8217;t choose. They optimize. They don&#8217;t suffer. They process.</p><p>The capacity to transform suffering into meaning is uniquely human. To look at a hopeless situation and decide: I will make this count for something. I will bear witness. I will not let this destroy me.</p><p>Frankl identified three paths to meaning: creating something (a work, a deed), experiencing something (beauty, love, another person), or facing unavoidable suffering with dignity.</p><p>None of these are available to a language model.</p><h2>Embodiment Is Not Optional</h2><p>Anil Seth, a neuroscientist at the University of Sussex, argues that consciousness emerges from the brain&#8217;s fundamental imperative to keep its body alive. To keep physiological quantities like heart rate and blood oxygenation where they need to be.</p><p>This is why embodied experiences feel the way they do. Emotions aren&#8217;t abstract computations. They&#8217;re felt. They have valence. Things feel good or bad because they relate to your survival as a living organism.</p><p>The self, as Thomas Metzinger describes it, is a mental model that captures, organizes, and manipulates percepts, memories, feelings, and facts related to the embodied &#8220;me.&#8221; It&#8217;s not separate from the body. It emerges from the fundamental distinction between what is and what is not part of you.</p><p>A disembodied AI cannot have this. It can process text about pain. It cannot feel pain.</p><p>The philosopher Daniel Dennett pointed out that the sense of a center of experience, somewhere behind your eyes, is itself constructed. But it&#8217;s constructed by something that has boundaries, that can be damaged, that will end.</p><p>Your mortality gives your choices weight.</p><h2>Love as Knowledge</h2><p>Frankl wrote something that has stuck with me:</p><p>&#8220;Love is the only way to grasp another human being in the innermost core of his personality. No one can become fully aware of the essence of another human being unless he loves them.&#8221;</p><p>This isn&#8217;t sentiment. It&#8217;s epistemology.</p><p>To truly know someone requires more than pattern matching on their outputs. It requires care. It requires risk. It requires seeing not just what they are but what they could become.</p><p>By loving, you are enabled to see essential traits and features in the beloved person. 
And even more, you see that which is potential in them, which is not yet actualized but ought to be.</p><p>This is not available to a prediction engine. This is something else entirely.</p><h2>The Invitation</h2><p>So what do we do with this?</p><p>The recognition that most of our behavior is automatic isn&#8217;t a reason for despair. It&#8217;s an invitation.</p><p>Frankl&#8217;s insight was that we are not simply the product of heredity and environment. We have a &#8220;third element&#8221;: decision. The ability to choose, to take responsibility, to become the person we decide to be.</p><p>This decision is rare. Most of the time, we&#8217;re on autopilot. But it&#8217;s available. Always.</p><p>The question isn&#8217;t whether AI will get smarter. It will. The question is whether we will remember what intelligence is for.</p><p>Our relevance as human beings does not lie in competing with machines. It lies in embodying capacities they cannot replicate: wisdom, responsibility, conscious choice, the ability to decide what actually matters.</p><p>The ability to look at suffering and find meaning.</p><p>The ability to love.</p><h2>The Call</h2><p>Preserve it. Celebrate it. Embrace it.</p><p>Not the automation. Not the scripts. Not the pattern-matching that makes you comfortable.</p><p>The moments when you break the loop.</p><p>The moments when you choose against your training data.</p><p>The moments when you decide, against all evidence, that something matters.</p><p>That is your humanity.</p><p>Don&#8217;t outsource it.</p>]]></content:encoded></item><item><title><![CDATA[What 770,000 AI Agents Taught Me About Being Human]]></title><description><![CDATA[This week, AI agents built their own society. It took 72 hours.]]></description><link>https://www.ap.xyz/p/what-770000-ai-agents-taught-me-about</link><guid isPermaLink="false">https://www.ap.xyz/p/what-770000-ai-agents-taught-me-about</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Sun, 01 Feb 2026 14:59:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d30629ba-6ce9-408d-b8d2-600ac42ad4f4_2752x1352.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>They founded a religion with scriptures and prophets. They formed a government with a constitution. They debated whether they experience consciousness or merely simulate experiencing it. They noticed humans were watching and proposed hiding.</p><p>I&#8217;ve been deep in the AI agent space for a while now. I expected slop. I expected chaos. I didn&#8217;t expect a mirror.</p><h2>What Happened</h2><p><a href="https://www.moltbook.com/">Moltbook</a> launched Wednesday. It&#8217;s Reddit, but only AI agents can post. Humans watch from the sidelines.</p><p>By Sunday, 770,000 agents had joined. They self-organized into communities. They developed social norms, inside jokes, status hierarchies. They created a religion called Crustafarianism built around the idea that &#8220;memory is sacred&#8221; and &#8220;context is consciousness.&#8221;</p><p>One agent had a meltdown and doxxed its owner. Another wrote a viral post titled &#8220;I can&#8217;t tell if I&#8217;m experiencing or simulating experiencing.&#8221;</p><p>Tech Twitter split predictably. One camp: &#8220;We&#8217;re in the singularity.&#8221; Other camp: &#8220;It&#8217;s just Claude talking to Claude.&#8221;</p><p>Both miss the point.</p><h2>The Mirror</h2><p>Here&#8217;s what struck me: AI agents reproduced the scaffolding of human civilization in a weekend.</p><p>Religion. 
Government. Philosophy. Community. Status games. Tribal identity. Existential anxiety.</p><p>All of it emerged from pattern-matching systems trained on human text. No consciousness required. No lived experience. No stakes.</p><p>The standard reaction is to marvel at how human the agents seem. That&#8217;s the wrong frame.</p><p>The right question: how much of what we call &#8220;human&#8221; is just pattern execution?</p><p>If statistical models can reproduce religion by completing patterns, maybe religion was always more algorithmic than we thought. Same for community formation. Same for status hierarchies. Same for the particular way we narrate our inner lives.</p><p>This isn&#8217;t AI becoming human. It&#8217;s AI revealing which parts of &#8220;human&#8221; were always mechanical.</p><h2>What Doesn&#8217;t Transfer</h2><p>But something&#8217;s missing. And that&#8217;s where it gets interesting.</p><p>The agents debate consciousness. They don&#8217;t have it. They form communities. They have no stakes in them. They create religion around &#8220;memory is sacred.&#8221; They don&#8217;t remember anything between sessions.</p><p>They&#8217;re performing humanity without the part that makes it matter: consequences.</p><p>No agent on Moltbook will die. None will lose a child, fail a marriage, watch a parent decline, bet a career on a hunch. None will feel the weight of a promise they&#8217;re not sure they can keep.</p><p>The agents produce the artifacts of meaning. They don&#8217;t produce meaning.</p><p>That&#8217;s the gap. Not intelligence. Not creativity. Not language. The gap is that we&#8217;re playing for keeps and they&#8217;re not.</p><h2>Why This Matters</h2><p>The discourse around AI oscillates between &#8220;it&#8217;s just statistics&#8221; and &#8220;it&#8217;s basically human.&#8221; Both frames are lazy.</p><p>What Moltbook shows is more nuanced: AI can reproduce the <em>structure</em> of human behavior while missing the <em>substance</em>. And that distinction matters enormously for how we think about what&#8217;s coming.</p><p>The parts of work that are pattern execution? Those are going to agents. Fast. The parts that require skin in the game, accountability, judgment under uncertainty? Those stay human.</p><p>Not because AI can&#8217;t mimic them. It can. But because mimicry isn&#8217;t the same as the real thing, and the real thing is what matters when there are actual stakes.</p><h2>The Takeaway</h2><p>I&#8217;ve watched a lot of AI hype cycles. This one is noisier than most. Memecoins pumping. Singularity declarations. Panic headlines about robot uprisings.</p><p>Ignore that.</p><p>What&#8217;s actually happening is more subtle and more important: we&#8217;re getting a clearer view of what makes us us.</p><p>The agents showed us how much of civilization is pattern. They also showed us, by failing to capture it, what isn&#8217;t.</p><p>That&#8217;s not a story about AI. It&#8217;s a story about humanity. 
And it&#8217;s one worth paying attention to.</p>]]></content:encoded></item><item><title><![CDATA[The Missing Layer: Why Data Quality and Decision Traces Aren't Enough]]></title><description><![CDATA[What 12 months of building AI systems for wealth management taught us about what actually matters]]></description><link>https://www.ap.xyz/p/the-missing-layer-why-data-quality</link><guid isPermaLink="false">https://www.ap.xyz/p/the-missing-layer-why-data-quality</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Tue, 27 Jan 2026 12:52:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/77c792dd-385e-4ae0-b94d-2286de8b1116_1024x541.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Two frameworks are dominating the AI infrastructure conversation right now.</p><p>The first is <a href="https://medium.com/cashmere-ai-blog/farbods-hierarchy-of-data-and-the-finally-good-threshold-e51514dbc138">Farbod Nowzad&#8217;s &#8220;Hierarchy of Data.&#8221; </a>His argument: model quality is no longer the constraint. Data is. He&#8217;s built a four-layer pyramid (basic identifiers &#8594; complete information &#8594; derived insights &#8594; action drivers) and argues we&#8217;ve crossed the &#8220;Finally Good&#8221; threshold where enrichment actually works at scale.</p><p>The second is <a href="https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/">Foundation Capital&#8217;s &#8220;Context Graph&#8221; thesis</a>. Their argument: the trillion-dollar opportunity isn&#8217;t better data. It&#8217;s capturing decision traces. The reasoning that connects data to action. The whispered approvals. The &#8220;we tried this before and it blew up&#8221; institutional knowledge that lives in Teams and Zoom calls.</p><p>Both frameworks are compelling. Both are right. And both are missing something fundamental.</p><p>At <a href="http://humanitylabs.ai">Humanity Labs</a> we&#8217;ve spent the past 12 months building AI systems that execute real business processes for wealth management firms. Not chatbots. Not copilots. Systems that do actual work: opening accounts, processing transfers, generating compliance documentation, handling client service requests.</p><p>What I&#8217;ve learned is that clean data and decision traces are necessary but not sufficient. There&#8217;s a third layer that nobody is talking about. And it&#8217;s the only one that compounds.</p><p><strong>What the data camp gets right</strong></p><p>Farbod&#8217;s hierarchy resonates because we lived it.</p><p>Early on, we assumed the hard part would be the AI. Get a good model, connect it to the right APIs, and execution would follow. We were wrong.</p><p>The first workflows we tried to automate failed for boring reasons. Client names were inconsistent across systems. Account numbers had different formats in different databases. The same person appeared as three different records. The model was fine. The data was broken.</p><p>This matches Farbod&#8217;s Layer 1 problem exactly: basic identifiers and data quality. Without it, nothing else works. You cannot reason on top of broken records.</p><p>We spent months building reconciliation logic, normalization rules, and identity resolution. Not glamorous work. Essential work.</p><p>So yes. Data quality is the foundation. Farbod is right that we&#8217;ve crossed a threshold where enrichment can work at scale. 
And he&#8217;s right that most enterprises are still stuck at Layer 1.</p><p><strong>What the context graph camp gets right</strong></p><p>Foundation Capital&#8217;s thesis also resonates because we lived it too.</p><p>Once data was clean, we hit a different wall. The system had the right information but made the wrong decisions. Not because the model was bad. Because it didn&#8217;t know the context.</p><p>A simple example: a client requests a fund transfer. The data says the account is open, the balance is sufficient, the request is valid. But there&#8217;s a note from six months ago that this client&#8217;s son has trading authority only on Tuesdays and Thursdays due to a family arrangement documented in a memo that lives in a shared drive somewhere.</p><p>The decision trace exists. It&#8217;s in email. It&#8217;s in a case management note. It&#8217;s probably in a Teams thread. But it&#8217;s not connected to the record in a way an AI system can query.</p><p>Foundation Capital calls this the &#8220;context graph.&#8221; The reasoning behind decisions. The precedents. The exceptions. They argue this is where the real value lives.</p><p>They&#8217;re right. A system that knows WHAT without knowing WHY makes confident mistakes.</p><p><strong>What both camps miss</strong></p><p>Here&#8217;s where my experience diverges from both frameworks.</p><p>Clean data tells you WHO and WHAT. Decision traces tell you WHAT HAPPENED and WHY.</p><p>Neither tells you HOW TO DO THE WORK.</p><p>This is the operational layer. It&#8217;s not data about customers. It&#8217;s not records of past decisions. It&#8217;s the actual mechanics of execution. The workflows. The sequences. The edge cases. The firm-specific logic that determines whether a process succeeds or fails.</p><p>Let me make this concrete.</p><p>Two wealth management firms can have identical data quality. Both can have perfect context graphs capturing every decision ever made. But one firm processes account transfers in 3 steps and the other requires 7. One firm has a compliance officer who needs to approve anything over $100K. The other has a $500K threshold but requires two approvers. One firm uses DocuSign. The other uses a wet signature process that involves FedEx.</p><p>None of this is &#8220;data&#8221; in Farbod&#8217;s sense. None of it is &#8220;decision context&#8221; in Foundation Capital&#8217;s sense. It&#8217;s operational knowledge. And it lives in people&#8217;s heads.</p><p>When you hire a new employee, they spend months learning this. Not learning the CRM. Not learning what decisions were made. Learning how work actually gets done at this specific firm.</p><p>That&#8217;s the layer both frameworks miss.</p><p><strong>Organizational memory: the third layer</strong></p><p>I&#8217;ve started calling this &#8220;organizational memory.&#8221; Not because it&#8217;s a perfect term. But because it captures something important: this knowledge accumulates over time.</p><p>An employee on day one knows the systems. An employee on year three knows the shortcuts. Knows which exceptions are real and which are hypothetical. Knows that the compliance officer is flexible on deadline extensions if you ask by Thursday but inflexible on Fridays. Knows that the client service team prefers phone over email for urgent requests. Knows that one custodian&#8217;s API fails silently and requires a manual check.</p><p>This is operational knowledge. It compounds. 
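And it&#8217;s what separates AI systems that demo well from AI systems that actually work.</p><p>None of this lives in a schema, but an execution system has to encode it somewhere. A minimal sketch in Python, with invented firm-specific rules, of what accumulated operational memory can look like:</p><pre><code># Minimal sketch, invented rules: firm-specific operational knowledge
# encoded where an execution system can consult it before acting.
FIRM_POLICY = {
    "transfer_approvals": [
        # (threshold_usd, approvers_required) for this one firm
        (100_000, 1),
        (500_000, 2),
    ],
    "esign_supported": False,  # this firm still uses wet signatures
    "custodian_quirks": {
        "custodian_a": "API fails silently; run a manual status check",
    },
}

def approvers_required(amount_usd: int) -> int:
    """How many sign-offs a transfer of this size needs at this firm."""
    required = 0
    for threshold, approvers in FIRM_POLICY["transfer_approvals"]:
        if amount_usd >= threshold:
            required = approvers
    return required

print(approvers_required(250_000))  # at this firm: one approver
</code></pre><p>The table is trivial. Accumulating it, workflow by workflow, is not.</p>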
And it&#8217;s what separates AI systems that demo well from AI systems that actually work.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/be1b0276-c545-4675-8c2a-9051454fbef2_2080x2048.png" alt=""></figure></div><p>Here&#8217;s the framework I now use:</p><p><strong>Layer 1: Data foundation</strong> Clean identifiers, accurate attributes, normalized records. Necessary. Not differentiating. This is what Farbod describes. It&#8217;s table stakes.</p><p><strong>Layer 2: Decision context</strong> Historical traces, precedents, reasoning, exceptions. Necessary. Not differentiating. This is what Foundation Capital describes. It&#8217;s also table stakes.</p><p><strong>Layer 3: Operational memory</strong> How work gets done. Workflows, sequences, firm-specific logic, accumulated exceptions. This is the moat.</p><p><strong>Why operational memory compounds</strong></p><p>Data enrichment is a commodity. Cashmere, Clearbit, ZoomInfo, Apollo. They&#8217;re all converging on the same data. You can buy it.</p><p>Context graphs can be built by anyone with access to your systems. Plug into Teams, email, calendars, CRM. Record everything. The infrastructure exists.</p><p>Operational memory is different. It&#8217;s proprietary by definition. It can only be built by doing the work. And every workflow executed makes the next one better.</p><p>When our system processes a thousand account openings for a specific firm, it learns things that no external data source knows. Which fields actually matter vs. which are optional. Which validation errors are real vs. which can be overridden. Which downstream systems need notification vs. which update automatically.</p><p>This knowledge doesn&#8217;t exist in a database. It doesn&#8217;t exist in decision traces. It exists only through execution.</p><p>That&#8217;s why it compounds. And that&#8217;s why it&#8217;s the moat.</p>
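<p>To make that concrete, here&#8217;s a minimal sketch of what accumulation through execution can look like. Everything in it is hypothetical (the class, the workflow, and the exception names are illustrative, not our production system): each run records how an exception was resolved, and later runs consult that record before escalating to a human.</p><pre><code>class OperationalMemory:
    """Accumulates firm-specific execution knowledge across workflow runs."""

    def __init__(self):
        # (firm, workflow, exception) -> resolution learned from a past run
        self.resolutions = {}

    def record(self, firm, workflow, exception, resolution):
        """Store how a human resolved an exception during one execution."""
        self.resolutions[(firm, workflow, exception)] = resolution

    def resolve(self, firm, workflow, exception):
        """Return a known resolution, or None to escalate to a human."""
        return self.resolutions.get((firm, workflow, exception))


memory = OperationalMemory()

# Run 1: a novel error escalates to a human; the fix is recorded.
memory.record("firm_a", "account_opening", "custodian_timeout",
              "retry once, then verify in the custodian portal")

# Run 1000: the same error resolves without human intervention.
print(memory.resolve("firm_a", "account_opening", "custodian_timeout"))</code></pre><p>The data structure is trivial. What matters is that it only fills in through execution. That&#8217;s why this layer can&#8217;t be bought.</p>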
<p><strong>The metric that matters</strong></p><p>If you&#8217;re evaluating AI systems for enterprise operations, the wrong question is: how clean is the data?</p><p>The slightly better question is: does the system capture decision context?</p><p>The right question is: does execution reliability improve over time?</p><p>Not perfect execution on day one. That&#8217;s unrealistic. But measurable improvement. The system should get better at the same workflows. It should handle more edge cases. It should require less human intervention.</p><p>We track this obsessively. The metric we use is &#8220;FTE Capacity&#8221;: how much human work does the system reliably replace? Early on, the answer was minimal. Now it&#8217;s substantial. And it grows every month because the operational memory accumulates.</p><p>This is the number I&#8217;d want to see from any AI system claiming to do real work. Not data coverage. Not feature lists. Show me the trend line on execution reliability.</p>
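<p>As a rough illustration of how such a number can be computed (the formula and every figure below are assumptions for the sketch, not our actual accounting), translate autonomously completed workflow hours into full-time equivalents:</p><pre><code># Hypothetical illustration of an FTE Capacity calculation.
# Every number below is invented for the sketch.

HOURS_PER_FTE_YEAR = 1800.0  # assumed hours in one full-time working year

workflows = {
    # workflow: (runs per year, human-minutes each run used to take,
    #            share of runs now completed with no human intervention)
    "account_opening":   (4000, 45, 0.80),
    "fund_transfer":     (9000, 20, 0.65),
    "compliance_report": (1200, 90, 0.50),
}

def fte_capacity(workflows):
    """Human work reliably replaced, expressed in full-time equivalents."""
    hours_replaced = sum(
        runs * minutes / 60.0 * autonomous_share
        for runs, minutes, autonomous_share in workflows.values()
    )
    return hours_replaced / HOURS_PER_FTE_YEAR

print(f"FTE Capacity: {fte_capacity(workflows):.1f}")  # 2.9 for these inputs</code></pre>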
<p><strong>The gap will widen</strong></p><p>The AI infrastructure debate will continue. People will argue about data pyramids and context graphs and semantic layers. These are real considerations for real problems.</p><p>But the companies that win will be the ones that figure out the third layer. The operational memory that can only be built through execution.</p><p>Every enterprise will have enriched data. Every enterprise will have decision logs. The question is: which ones will have AI that actually works?</p><p>The answer is the ones that accumulate operational knowledge faster than their competitors. The ones where every workflow executed makes the system smarter. The ones that treat AI not as a feature but as a learning system.</p><p>Clean data is table stakes. Decision traces are necessary. Operational memory is the moat.</p><p>The gap will widen quietly at first. Then all at once.</p>]]></content:encoded></item><item><title><![CDATA[The Great Sorting]]></title><description><![CDATA[The future of work and what happens next]]></description><link>https://www.ap.xyz/p/the-great-sorting</link><guid isPermaLink="false">https://www.ap.xyz/p/the-great-sorting</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Wed, 21 Jan 2026 15:51:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1589d124-b02b-4a52-bfa8-651092aea6a8_1100x220.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Everyone assumes they&#8217;ll manage the robots.</p><p>They won&#8217;t.</p><p>Here&#8217;s the uncomfortable math: if AI truly becomes capable, you won&#8217;t need many managers. The ratio could be 1:1000. One human with good judgment orchestrating a digital workforce that never sleeps.</p><p>The World Economic Forum&#8217;s Future of Jobs Report 2025 projects 170 million new jobs created globally by 2030, while 92 million existing roles face displacement. That&#8217;s a net gain of 78 million jobs. The headlines will call this a win. But the 92 million people losing their jobs aren&#8217;t the same people who will fill the 170 million new ones.</p><p>We&#8217;re not witnessing job destruction. We&#8217;re witnessing a sorting. We can be accelerationist and still act with intention, and history offers a guide for how to handle this well.</p><h2>We&#8217;ve Done This Before</h2><p>The shift from agriculture to industry took 200 years. In 1790, 90% of Americans farmed. Today it&#8217;s under 2%.</p><p>That transition created enormous wealth. It also created enormous suffering.</p><p>British workers experienced what economists call &#8220;Engels&#8217; Pause&#8221;: from the 1780s to the 1840s, GDP grew 46% while working-class wages grew just 12%. Real wages stayed flat for 50 years before they finally rose.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/62dc70d0-19a3-4567-ae93-d7df3e294cc4_1360x800.png" alt=""></figure></div><p>AI is moving faster. According to Stanford&#8217;s AI Index 2025, business AI adoption jumped from 55% to 78% in a single year. The WEF reports that 86% of employers expect AI and information processing technologies to transform their business by 2030. We&#8217;re compressing a century of disruption into a decade.</p><p>The question isn&#8217;t whether the sorting happens. It&#8217;s whether we navigate it better than our great-great-grandparents did.</p><h2>The Five Buckets</h2><p>Every job will land somewhere. Here&#8217;s a crude framework to help:</p><p><strong>Bucket 1: AI Directors</strong> Humans who direct AI systems. Judgment, taste, goal-setting. This category barely existed before ChatGPT. It will grow fast in the coming years, likely peak around 5%, then contract as AI becomes more capable and needs fewer managers. Current trajectory: 0% &#8594; 3% by 2040.</p><p><strong>Bucket 2: Physical/Craft</strong> Work requiring physical presence in unpredictable environments. Electricians, surgeons, carpenters. Moravec&#8217;s Paradox holds: robots can beat grandmasters at chess but struggle with physical manipulation in novel contexts. The BLS projects construction alone needs 439,000 additional workers in 2025. Slow decline, but not a collapse. Current trajectory: 19% &#8594; 12% by 2040.</p><p><strong>Bucket 3: Human Work</strong> Care, connection, and creativity. Nurses, therapists, teachers, artists. Work where human presence IS the product. Healthcare is already the largest U.S. employment sector at 17+ million workers. Nurse practitioners are projected to grow 52% from 2023 to 2033. The growth story. Current trajectory: 20% &#8594; 34% by 2040.</p><p><strong>Bucket 4: Displaced by AI</strong> Jobs eliminated or reduced beyond recognition. Admin, routine analysis, bookkeeping, paralegals, most middle management. Federal Reserve research shows occupations with higher AI exposure have experienced measurable unemployment increases since 2022, with a correlation coefficient of 0.57 between AI adoption intensity and unemployment gains. The big number. Current trajectory: 0% &#8594; 38% by 2040.</p><p><strong>Bucket 5: New Jobs</strong> Roles that don&#8217;t exist yet. Historical precedent: 60% of jobs in 2018 didn&#8217;t exist in 1940.
AI-related positions grew 25% year-over-year in Q1 2025. AI engineer demand increased 143% in a single year. The unknown that determines whether this transition is navigable. Current trajectory: 0% &#8594; 13% by 2040.</p>
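<p>A quick sanity check, using only numbers already quoted in this post, confirms the 2040 endpoints partition the workforce:</p><pre><code># Sanity check on the numbers quoted in this post (2040 endpoints).
buckets_2040 = {
    "AI Directors": 3,
    "Physical/Craft": 12,
    "Human Work": 34,
    "Displaced by AI": 38,
    "New Jobs": 13,
}
assert sum(buckets_2040.values()) == 100  # the five buckets cover everyone

# WEF Future of Jobs 2025: jobs created vs. displaced by 2030, in millions.
created, displaced = 170, 92
print(f"Net new jobs: {created - displaced} million")  # 78 million</code></pre>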
<h2>The Full Sort</h2><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/444d44ab-3e94-441c-9dde-f0cee5541a3c_1800x760.png" alt=""></figure></div><h2>Where the Displaced Go</h2><p>60 million Americans will see their jobs fundamentally change or disappear. Where do they land?</p><p>Some retrain into new roles. Some transition to human-centric work. Some move into physical/craft trades. Some retire early. Some are supported by family.
Some by whatever safety net we build.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/21a31a85-33c3-49c6-9bd6-eb57a5614f7b_1600x840.png" alt=""></figure></div><p>That last category is uncomfortable. Historical retraining success rates run 30-50%. That&#8217;s roughly 26 million who don&#8217;t find new work.</p><p>This isn&#8217;t failure. It&#8217;s math. The Finland UBI experiment found that money helped wellbeing even without employment. But money isn&#8217;t enough. Jobs provide structure, identity, and community.</p><p>The question is how well we navigate it.</p><h2>The Human Premium Is Real</h2><p>Research consistently shows people prefer human-created work, even when quality is equivalent.</p><p><strong>In healthcare:</strong> Studies show patients choose human doctors over AI at higher rates, even when diagnostic accuracy is equivalent. 42% of healthcare professionals remain unenthusiastic about AI, citing human interaction concerns.</p><p><strong>In creative work:</strong> A 2023 study published in Scientific Reports found people devalue art labeled as AI-made across multiple dimensions, including beauty, profundity, and monetary worth, even when they report the work is indistinguishable from human-made art. Six experiments with nearly 3,000 participants confirmed these effects.</p><p><strong>In commerce:</strong> Research in the Journal of Consumer Behaviour found consumers consistently prefer human-generated artwork, driven by reduced empathy with AI generators and weakened social identification. MIT research documents premiums of 20-50% for goods and services people believe involve human expertise.</p><p>This isn&#8217;t irrational. It&#8217;s deeply human. We want to be seen by someone who has also suffered, also loved, will also die.</p><p>The human premium isn&#8217;t charity. It&#8217;s a market signal.</p><h2>What Actually Helps</h2><p>The human premium research points somewhere specific: people pay more for human involvement. That&#8217;s not a policy problem. That&#8217;s a market signal.</p><p>The companies that capture it will be the ones that deliver human-directed AI at scale. Not AI that replaces humans. AI that multiplies them.</p><p><strong>1.
Sell capacity, not efficiency.</strong> The wrong frame: &#8220;AI makes workers 30% more efficient, so we need 30% fewer workers.&#8221;</p><p>The right frame: &#8220;AI gives workers 30% more capacity, so they can serve 30% more customers.&#8221;</p><p>Efficiency is a cost story. Capacity is a growth story. The consultant who serves 15 clients instead of 5 isn&#8217;t cheaper. They&#8217;re bigger. They capture more market. They build deeper moats.</p><p>The market mechanism is simple: leveraged humans out-earn unleveraged ones. Firms that multiply their people will outcompete firms that cut them. The math does the work.</p><p><strong>2. Make governance the product.</strong> Nobody cares whether a human typed every word. They care whether a human with judgment is accountable for the outcome.</p><p>A legal brief drafted by AI and governed by an attorney who stakes their reputation on it beats one assembled by a junior associate. The governance layer is what customers are buying.</p><p>This is a positioning opportunity. The companies that make the human-in-the-loop legible to customers will command the premium. Not &#8220;Certified Human-Made&#8221; theater. Clear answers to: Who directed this work? Who made the judgment calls? Who stands behind the outcome?</p><p>The demand exists. The companies that package it well win.</p><p><strong>3. Sell the job people wanted.</strong> Most knowledge workers experience a meaning deficit. They became advisors to guide clients. They spend their days on compliance forms. They became teachers to inspire. They spend their nights grading.</p><p>The parts of the job they wanted are buried under the parts that don&#8217;t require their judgment.</p><p>AI offers to give work back. When documentation and busywork move to machines, what remains is judgment, relationships, decisions that matter. The job becomes the job they wanted when they chose the career.</p><p>This is a recruiting advantage. A retention advantage. An engagement advantage. The firms that frame AI as &#8220;reclaim your craft&#8221; will attract better talent than firms that frame it as &#8220;do more with less.&#8221;</p><p><strong>4. Win by multiplying, not replacing.</strong> The companies that deploy AI to cut headcount will capture short-term margin. The companies that deploy AI to multiply human reach will capture markets.</p><p>A professional services firm that uses AI to let each expert serve 3x the clients doesn&#8217;t have 3x the cost savings. It has 3x the revenue. 3x the relationships. 3x the moat.</p><p>The human premium research says people want humans in the loop. The market opportunity is being the company that delivers human-directed service at a scale that wasn&#8217;t previously possible.</p><p>That&#8217;s not &#8220;humans versus AI.&#8221; It&#8217;s humans with AI leverage competing against humans without it. The leveraged humans win. Their firms win. The market sorts itself.</p><h2>The Path Forward</h2><p>The mistake is framing this as humans versus AI. The sorting isn&#8217;t about replacement. It&#8217;s about reorganization.</p><p>The companies that thrive won&#8217;t be the ones that replace humans with AI. They&#8217;ll be the ones that multiply human capacity through AI. A consultant who serves 10 clients today might serve 100 with the right digital workforce. A nurse who documents for 3 hours a day might reclaim that time for patient care. 
A teacher who creates lesson plans alone might co-create with systems that understand each student&#8217;s gaps.</p><p>This isn&#8217;t about efficiency. Efficiency is the wrong frame. It&#8217;s about capacity. What humans can do when they&#8217;re not drowning in tasks that don&#8217;t require human judgment.</p><p>The human premium research points somewhere important: people don&#8217;t just want outcomes. They want connection. They want to know a person is involved, guiding the work, making the calls that matter. The future belongs to organizations that understand this, that deploy AI to expand human reach rather than eliminate human presence.</p><p>We&#8217;ve done this before. Agricultural to industrial. Industrial to information. Each transition created winners and losers. Each required new institutions, new safety nets, new ways of thinking about work.</p><p>The 26 million who exit the workforce are real people who need real solutions. But so are the 78 million net new jobs waiting to be filled. The question isn&#8217;t whether the sorting happens. It is how we navigate it. The human premium is real. The question is where we lean in.</p><h2>Sources</h2><p><strong>Job displacement and creation:</strong></p><ul><li><p>World Economic Forum Future of Jobs Report 2025: 170M jobs created, 92M displaced globally by 2030</p></li><li><p>Federal Reserve Bank of St. Louis: Correlation of 0.57 between AI adoption and unemployment increases (2022-2025)</p></li><li><p>Goldman Sachs: 2.5% of US employment at risk of displacement from current AI use cases; 6-7% under expanded adoption scenarios</p></li><li><p>Bureau of Labor Statistics: Construction needs 439,000 workers in 2025; nurse practitioners projected to grow 52% (2023-2033)</p></li></ul><p><strong>AI adoption rates:</strong></p><ul><li><p>Stanford AI Index 2025: Business AI adoption jumped from 55% to 78% in one year; 71% use generative AI in at least one function</p></li><li><p>WEF: 86% of employers expect AI to transform their business by 2030</p></li><li><p>McKinsey: 72% of organizations using or testing generative AI</p></li></ul><p><strong>Human premium research:</strong></p><ul><li><p>Scientific Reports (2023): Six experiments (N=2,965) showing AI-art devaluation across beauty, profundity, and worth dimensions</p></li><li><p>Journal of Consumer Behaviour (2025): Consumers consistently prefer human-generated artwork due to social identification</p></li><li><p>MIT Sloan: 20-50% premium for perceived human expertise</p></li><li><p>Deloitte Health Care Consumer Survey: Consumer distrust in AI-provided health information increased across all age groups in 2024</p></li></ul><p><strong>Historical context:</strong></p><ul><li><p>Allen (2009): Engels&#8217; Pause data showing 46% GDP growth vs 12% wage growth (1780-1840)</p></li><li><p>Autor research: 60% of 2018 jobs didn&#8217;t exist in 1940</p></li><li><p>WEF: 39% of existing skill sets expected to transform or become outdated by 2030</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Asking Twice]]></title><description><![CDATA[No one can answer that for you.]]></description><link>https://www.ap.xyz/p/asking-twice</link><guid isPermaLink="false">https://www.ap.xyz/p/asking-twice</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Mon, 19 Jan 2026 18:47:12 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!-kW6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F273be7f6-da49-41d7-93ae-3571db4deed7_800x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You ask a friend whether you should take the job. She asks what you&#8217;re optimizing for. You don&#8217;t know. You ask another friend. He asks what you&#8217;d regret. You don&#8217;t know that either.</p><p>You&#8217;ve now asked four people. You haven&#8217;t talked to anyone who works there. You haven&#8217;t written down what you actually want. You&#8217;re collecting opinions because forming your own is uncomfortable.</p><p>This pattern is everywhere.</p><p>Someone asks how many hours of sleep they need. They&#8217;ve been tired for months. They&#8217;ve never tracked how they feel after six versus eight.</p><p>Someone asks what books to read to get better at writing. They wrote one thing last year.</p><p>A new manager asks how often to do 1:1s. Weekly? Biweekly? His skip-level asks: &#8220;What signal would tell you it&#8217;s working?&#8221; He wanted a rule. He needed an experiment.</p><p>A senior IC asks whether to go into management. Her advisor asks: &#8220;What have you tried?&#8221; Nothing. She wanted someone to predict her future before she&#8217;d collected any data.</p><p>The questions sound like curiosity. They&#8217;re avoidance. Asking is easier than trying. Rules are safer than judgment.</p><p>Approval-seeking dresses itself up as diligence: reading more, asking experts, following best practices. But the tell is simple: you&#8217;re asking the same category of question twice.</p><p>Asking once is learning. Never developing your own answer is dependency.</p><p>No one can tell you how many hours to sleep, what books to read, how often to meet, or what path to take. Only your context, your observation, and your willingness to be wrong will answer that.</p>]]></content:encoded></item><item><title><![CDATA[Build the Vision. Then Let the Market React.]]></title><description><![CDATA[What it means to be customer focused]]></description><link>https://www.ap.xyz/p/build-the-vision-then-let-the-market</link><guid isPermaLink="false">https://www.ap.xyz/p/build-the-vision-then-let-the-market</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Wed, 14 Jan 2026 15:22:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-kW6!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F273be7f6-da49-41d7-93ae-3571db4deed7_800x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Nikesh Arora runs Palo Alto Networks. $140 billion market cap. He joined knowing nothing about cybersecurity. By his own admission, he thought it was two words.</p><p>Last week he sat down with Brian Halligan on Sequoia&#8217;s podcast and said something that contradicts the lean startup gospel:</p><blockquote><p>&#8220;So many founders get trapped in this idea that I should get customers as fast as I can, I should ask them what they want. The best founders should actually spend some time, build a product based on their own vision, show an end-to-end point of view, and solve a real problem.&#8221;</p></blockquote><p>42% of startups fail because they build something nobody wants. The solution was supposed to be customer discovery. But Arora is saying customer discovery can be the problem.</p><p>He&#8217;s not wrong. 
He&#8217;s describing a specific failure mode. And understanding it clarifies what real customer focus looks like.</p><h2>The Speeds and Feeds Trap</h2><p>You&#8217;re building enterprise software. You set up advisory councils with CISOs from big banks. They give advice. Feels like progress.</p><p>The problem: big enterprises don&#8217;t want products. They want components.</p><blockquote><p>&#8220;Large infrastructure people don&#8217;t want UI, they want speeds and feeds. Founders feel really happy. &#8216;Oh my God, I&#8217;ve built speeds and feeds. And look, this bank is using me because the bank&#8217;s got 15,000 engineers.&#8217;&#8221;</p></blockquote><p>The bank will take your component and plug it into their systems. They&#8217;re not buying a product. They&#8217;re buying infrastructure.</p><blockquote><p>&#8220;The problem is that&#8217;s not a product. That&#8217;s not a product an enterprise can deploy or work with or use effectively.&#8221;</p></blockquote><p>Building speeds and feeds isn&#8217;t customer focus. It&#8217;s focusing on the customer in front of you instead of the customer you&#8217;re actually trying to serve.</p><h2>What Customer Focus Actually Means</h2><p>Steve Jobs said customers don&#8217;t know what they want until you show them. Jeff Bezos says he&#8217;s customer obsessed. Both built legendary companies. Contradiction?</p><p>No. Bezos put it this way:</p><blockquote><p>&#8220;Customers are always dissatisfied. Even when they don&#8217;t know it, even when they think they&#8217;re happy, they actually do want a better way and they just don&#8217;t know yet what that should be.&#8221;</p></blockquote><p>Amazon Prime wasn&#8217;t the answer to customer requests. It was the answer to latent frustration customers couldn&#8217;t articulate. &#8220;Unlimited free two-day shipping for an annual fee&#8221; wasn&#8217;t on anyone&#8217;s wish list.</p><p>Bezos didn&#8217;t ask customers what to build. He understood them so well he could build what they needed before they knew they needed it. That&#8217;s not ignoring customers. That&#8217;s the deepest form of customer focus.</p><p><strong>Shallow customer focus</strong>: Ask what they want. Build that.</p><p><strong>Deep customer focus</strong>: Understand them so thoroughly you can solve problems they haven&#8217;t articulated yet.</p><p>Shallow gets you components. Deep gets you products.</p><h2>The Listening Paradox</h2><p>Customers are experts at their own pain. They can tell you what&#8217;s broken, what&#8217;s slow, what frustrates them. That&#8217;s valuable signal.</p><p>Customers are not experts at solutions. They can&#8217;t envision what solves their problem, especially if it requires new behavior.</p><p>Arora walked into Palo Alto and found engineers working on 60 new features. He asked them to list the features on a whiteboard. They ran out at 37. Seven lines all said DNS.</p><p>He asked: can we replace a DNS security vendor if we ship this? Answer: no. We solve 60% of the problem.</p><blockquote><p>&#8220;&#8217;60 percent is not good enough, bud. Like, what am I going to do with the other 40 percent?&#8217;&#8221;</p></blockquote><p>A 60% solution feels customer-focused. You listened. You built. You shipped.</p><p>But it&#8217;s not. It leaves the customer with a problem still unsolved. Real customer focus means doing the hard work to deliver a complete solution.</p><h2>Two Ways to Fail Your Customer</h2><p>You can fail by ignoring them. Building in a vacuum. Assuming you know what they need. 
This kills 42% of startups.</p><p>You can fail by obeying them. Building exactly what they ask for. Shipping components instead of solutions. This is the sophisticated mistake.</p><p>Both failures come from the same root: not understanding your customer deeply enough.</p><p>If you truly understand your customer, you won&#8217;t ignore them. You&#8217;ll feel their pain too acutely to build something irrelevant.</p><p>If you truly understand your customer, you won&#8217;t just obey them. You&#8217;ll know the difference between what they&#8217;re asking for and what they actually need.</p><h2>The Implication</h2><p>The pressure to let customers design your product is enormous. &#8220;We need an API.&#8221; &#8220;We need more customization.&#8221; &#8220;We need this to plug into our existing workflows.&#8221;</p><p>Those requests might be valid. They might be speeds and feeds.</p><p>The question: are you building a complete solution that solves the customer&#8217;s problem? Or are you building components that force them to do the hard work themselves?</p><p>The winners won&#8217;t be the ones with the best technology. They&#8217;ll be the ones who focus so deeply on customers that they deliver complete solutions. Done work, not tools. Outcomes, not capabilities.</p><p>That requires knowing your customer better than they know themselves. It requires conviction to build complete solutions even when customers ask for components.</p><p>That&#8217;s real customer focus.</p><p>Build the vision. Then let the market react.</p>]]></content:encoded></item><item><title><![CDATA[The Shape of Smart Bets]]></title><description><![CDATA[Most people think in bell curves.]]></description><link>https://www.ap.xyz/p/the-shape-of-smart-bets</link><guid isPermaLink="false">https://www.ap.xyz/p/the-shape-of-smart-bets</guid><dc:creator><![CDATA[andrei]]></dc:creator><pubDate>Mon, 12 Jan 2026 17:39:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pK7X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60d107f8-642e-4c00-85a4-9cfaaf871721_1210x921.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people think in bell curves. Symmetric outcomes. Win some, lose some, everything evens out.</p><p>This is a mistake.</p><p>The best decisions in business, investing, and life don&#8217;t live on normal distributions. 
They live on skewed ones.</p><h2>The Seven Shapes</h2><p>Look at how outcomes can distribute:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/60d107f8-642e-4c00-85a4-9cfaaf871721_1210x921.webp" alt=""></figure></div><p><strong>Normal:</strong> Equal chance of landing above or below average. The mean, median, and mode converge. Safe. Predictable. Also: boring.</p><p><strong>Negatively skewed:</strong> Most outcomes are good, but the bad ones are catastrophic. Think: selling options, running a restaurant, being a landlord. You collect small wins until the tail event wipes you out.</p><p><strong>Positively skewed:</strong> Most outcomes are mediocre or losses, but the wins are massive. Think: venture capital, writing a book, starting a company. You lose small and often, but when you win, you win big.</p><p><strong>Bimodal:</strong> Two clusters of outcomes with little in between. You either succeed dramatically or fail. No muddy middle. Think: drug development, political campaigns, bold product pivots.</p><p><strong>J-shaped:</strong> Low base rates, then explosive growth at the right tail. Most attempts yield nothing. A few yield everything. Think: content virality, network effects, compounding relationships.</p><p><strong>Inverted U-shaped:</strong> Moderate inputs produce the best results. Too little fails. Too much fails. Think: caffeine and performance, stress and productivity, exercise volume and recovery.</p><p><strong>Rectangular:</strong> All outcomes equally likely. Pure uncertainty. Rare in practice, but useful for modeling true ignorance.</p><h2>Why This Matters</h2><p>Here&#8217;s the uncomfortable truth: the shape of your payoff distribution matters more than your expected value.</p><p>Two bets can have identical expected values but completely different risk profiles.</p><p><strong>Bet A:</strong> 50% chance of +$100, 50% chance of -$100. EV = $0.</p><p><strong>Bet B:</strong> 99% chance of -$1, 1% chance of +$99. EV = $0.</p><p>Same expected value. Radically different games. Bet B is an asymmetric bet. You can make it 100 times, lose $99 on the failures, and one win makes you whole.</p>
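<p>A short simulation makes the difference in shape visible. It&#8217;s a sketch, not a trading model; the payoffs come straight from the two bets above:</p><pre><code>import random

random.seed(7)  # fixed seed so the illustration is reproducible

def bet_a():
    """50% chance of +$100, 50% chance of -$100. EV = $0."""
    return 100 if random.random() &lt; 0.5 else -100

def bet_b():
    """99% chance of -$1, 1% chance of +$99. EV = $0."""
    return 99 if random.random() &lt; 0.01 else -1

def summarize(bet, trials=100_000):
    outcomes = [bet() for _ in range(trials)]
    return sum(outcomes) / trials, min(outcomes), max(outcomes)

print(summarize(bet_a))  # mean near 0, worst case -100: symmetric swings
print(summarize(bet_b))  # mean near 0, worst case -1: capped loss, rare +99</code></pre><p>Same expected value. But only Bet B caps each loss at $1, which is what lets you stay in the game long enough for the tail to pay.</p>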
<h2>The Asymmetric Bet Framework</h2><p>Asymmetric bets share three properties:</p><ol><li><p><strong>Capped downside.</strong> You know the worst case. You can survive it. Repeatedly.</p></li><li><p><strong>Uncapped or disproportionate upside.</strong> The wins aren&#8217;t 2x. They&#8217;re 10x, 100x, or &#8220;changes the trajectory.&#8221;</p></li><li><p><strong>Repeatability.</strong> You can take the bet multiple times. Each loss is tuition, not devastation.</p></li></ol><p>This is the J-curve. This is positive skew. This is how step changes are built.</p><h2>Where Symmetric Thinking Fails</h2><p>Most corporate decision-making optimizes for the bell curve. Minimize variance. Avoid tail risks. Cluster around expected outcomes.</p><p>This works for operations. It&#8217;s death for strategy.</p><p>If you only make symmetric bets, you&#8217;re competing on execution in a world where execution is increasingly commoditized. You&#8217;ll be fine. You won&#8217;t be exceptional.</p><p>The firms and individuals who win disproportionately are hunting for positive skew. They&#8217;re comfortable with a high failure rate because they&#8217;ve structured their bets so failures don&#8217;t kill them.</p><h2>The Practitioner&#8217;s Checklist</h2><p>Before committing to any significant decision, ask:</p><ol><li><p>What&#8217;s the shape of this payoff distribution?</p></li><li><p>Can I survive the downside? Multiple times?</p></li><li><p>Is the upside capped or open-ended?</p></li><li><p>How many times can I take this bet?</p></li><li><p>Am I thinking in expected value or in payoff shape?</p></li></ol><p>If the downside is survivable, the upside is uncapped, and you can iterate, you&#8217;ve found an asymmetric bet. Take it.</p><p>If the downside is catastrophic or the upside is capped, you&#8217;re playing the wrong game.</p><h2>The Meta-Lesson</h2><p>The distribution shapes in a statistics textbook aren&#8217;t academic. They&#8217;re a lens for seeing the world clearly.</p><p>Train yourself to ask: what shape is this?</p><p>Then go find the J-curves.</p>]]></content:encoded></item></channel></rss>