Spicy Takes

The Spicy Feed

1000 recent posts across 27 voices

February 2026

The Insane Stupidity of UBI

Key Insight: UBI fails because money is a claim on other people's labor, and when everyone stops laboring, there's nothing left to claim — you can't redistribute production that no longer exists.

Hotz argues that Universal Basic Income is fundamentally flawed because its proponents misunderstand the nature of money and economics. He contends that UBI experiments only work at small scale because recipients spend money in an economy where producers aren't also on UBI. At universal scale, he claims UBI would cause massive inflation and reduced production as workers quit, leaving everyone worse off. He frames UBI advocates as people who see themselves apart from society, not understanding that goods require labor to produce. His alternative prescription is simply making everything cheaper to produce, though he notes regulatory obstacles prevent this.

9

There already is UBI in the world for some people, it's called allowance. It's for children and high-end prostitutes.

7

Want to buy eggs? Sorry, the egg people stopped making eggs, they are living free on UBI.

7

They don't know where stuff comes from, it might as well be the stuff fairy that puts it on the supermarket shelves and sets prices.


The best is still hard to be

Key Insight: AI lowering the cost of building software doesn't commoditize it — it just resets the competition, because human dissatisfaction ensures that 'good enough' is always a moving target and being the best remains as hard as ever.

Stancil argues that AI making software cheaper to build won't commoditize it, because human expectations are never static — 'good enough' is always redefined by what's possible. Just as the internet didn't prevent DoorDash from winning food delivery despite low barriers to entry, someone will always figure out how to be the best, and being the best remains hard. He also notes that AI companies' real competition is for talent, not customers, which is reshaping corporate ethics positioning.

8

Give us something new; we love it today; we are frustrated tomorrow. We spent millennia dreaming that we could fly; now we can, and we whine about the wifi.

7

The 'cost of creating content going to zero' didn't kill content, nor did it bankrupt the business of content creation.

7

Your plan for market domination is not to hire people and then make money from what they build; it is to be the first company that creates an AI model that is good enough to improv…


A Sometimes-Hidden Setting Controls What Happens When You Tap a Call in the iOS 26 Phone App

Key Insight: When a UI setting's visibility depends on state managed in a completely different app, the resulting confusion reveals a lazy design shortcut that could be solved by surfacing the relevant context directly alongside the setting.

Gruber examines a confusing UI design choice in iOS 26's Phone app, where Apple introduced a 'Tap Recents to Call' setting that only appears in Settings when the new Unified view is active, and completely disappears when Classic view is selected. He agrees with Adam Engst that the new Unified behavior — where tapping a row shows contact info rather than initiating a call — is superior to the legacy behavior that made accidental calls too easy. However, he argues Apple's implementation of hiding the setting is lazy and confusing, since no one expects a toggle in one app to control visibility of a switch in another app. He proposes mirroring the Classic/Unified view toggle in both the Phone app and Settings, which would make the conditional appearance of the 'Tap Recents to Call' option self-explanatory.

6

You don't tap on an email message to reply to it. You tap a Reply button.

6

Apple's solution to this dilemma — to show the 'Tap Recents to Call' in Settings if, and only if, Unified is the current view option in the Phone app — is lazy.

6

You pretty much need to understand everything I've written about in this article to understand why and when this option is visible. Which means almost no one who uses an iPhone is …


The Last Gasps of the Rent Seeking Class

Key Insight: AI destroys the time asymmetry that businesses exploit against consumers, and Chinese open-source models are ensuring this power ends up with individuals rather than creating a new rent-seeking layer.

Hotz argues that AI is dismantling the trillion-dollar rent-seeking economy built on exploiting human time limitations and friction. He traces this from Google Duplex's suppressed demo to the current landscape where Chinese open-source models are democratizing human-level AI. He critiques Anthropic's distillation blog post as the 'last gasps' of companies trying to maintain artificial moats, and argues the AI supply chain is commoditizing at every layer except possibly models—where Chinese open-source efforts are closing the gap. He concludes that AI agents will eliminate the time asymmetry businesses exploit, making purposeful friction obsolete.

8

Godspeed to anyone who was dumb enough to invest in a GPT wrapper company.

8

Because nobody wants the continuation of rent-seeking billionaires. The status quo is cooked. It's time to flip the table, not rearrange the seats.

7

Cable companies and insurance rely on the fact that your time is more valuable than theirs. They can hire people in India at scale to waste your time.


My 2025 Apple Report Card

Key Insight: Apple's hardware excellence continues to mask deepening problems in software design direction, developer relations, and institutional courage under political pressure.

Gruber's 2025 Apple Report Card delivers a mixed verdict, with iPhone hardware earning top marks while macOS 26 Tahoe's Liquid Glass redesign draws his harshest criticism as the worst UI regression in Mac history. He praises iPhone 17 Pro and the remarkably thin iPhone Air, gives iPadOS its most exciting release ever for finally embracing windowed multitasking, and lauds Apple Watch SE 3 as outstanding value. However, he assigns Apple an F for social and societal impact, condemning Tim Cook's obsequious engagement with the Trump 2.0 administration, and would give Apple Intelligence a standalone F grade. The overall picture is of a company whose hardware excellence increasingly contrasts with software design missteps and institutional timidity.

9

Tahoe, though, is the worst regression in the entire history of MacOS.

9

It was obsequious complicity with a regime that is clearly destined for historical infamy.

8

There is nothing about Tahoe's new UI — the Mac's implementation of the Liquid Glass concept Apple has applied across all its OSes — that is better than its predecessor, MacOS 15 S…


2028 - THE GREAT DATA RECKONING

Key Insight: The data industry's obsession with tooling over business fundamentals made it uniquely vulnerable to AI disruption, and the professionals who will survive are those who understand why data looks the way it does, not just how to move it.

Joe Reis presents a fictional 2028 retrospective memo examining how AI disrupted the data industry, arguing that the sector was uniquely vulnerable because it had over-invested in tools and content while under-investing in business fundamentals. The piece traces how AI agents that could write production-quality pipelines triggered a bifurcation where top practitioners thrived while tool-focused engineers were displaced, and the vast ecosystem of data tooling vendors and thought leadership collapsed.

9

An industry that spent two decades insisting it could measure everything failed to see this coming, despite generating approximately 47,000 blog posts per quarter about 'the future…

9

The tools designed to democratize data work succeeded — they just democratized it first for machines.

8

It wasn't. It was three industries wearing a trenchcoat.


Some Silly Z3 Scripts I Wrote

Key Insight: Z3 is a remarkably versatile tool that can solve equations, prove theorems, and reverse engineer algorithms, but choosing good pedagogical examples requires balancing accessibility, practicality, and tool-appropriateness.

Hillel Wayne shares a collection of Z3 SMT solver examples that were cut from his book Logic for Programmers. He walks through increasingly complex uses: solving systems of equations, proving no four distinct positive integers share both sum and product, optimizing financial contributions, reverse engineering RNG parameters, proving mathematical theorems, and modeling stock trading with Z3 arrays. He explains why most examples didn't make the book's final cut—they either weren't practical enough, required too much background explanation, or weren't the right tool for the job—and describes the three examples he ultimately chose.
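To give a feel for the first of those uses: Wayne has Z3 knock out school-style systems of equations. The snippet below is not Z3 (which the post actually uses) but a plain-Python stand-in that solves an invented 2x2 system exactly with Cramer's rule, just to show the kind of problem being handed off to the solver.

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f exactly via Cramer's rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("system is singular")
    x = Fraction(e * d - b * f, det)
    y = Fraction(a * f - e * c, det)
    return x, y

# An invented example system: 2x + 3y = 12, x - y = 1
x, y = solve_2x2(2, 3, 1, -1, 12, 1)
print(x, y)  # x = 3, y = 2
```

With Z3 the same problem is stated declaratively as constraints rather than solved procedurally, which is exactly the "cheat yourself out of an education" shortcut the post jokes about.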

6

You're supposed to learn how to solve this as a system of equations, but if you want to cheat yourself out of an education you can have Z3 solve this for you.

5

Z3's core engine is in C++, and yet a hand-written Python binary search finds the optimal c about a 1000x faster!

2

An SMT ("Satisfiability Modulo Theories") solver is a constraint solver that understands math and basic programming concepts.


The Insanity of Data Education

Key Insight: The data industry's education crisis is not a knowledge problem but an empathy and delivery problem — blaming practitioners for 40 years of ineffective teaching is insanity, and the fix requires meeting people where they are with pragmatic, engaging material and organizational support.

Joe Reis argues that the data industry's 40-year failure to effectively teach data modeling and governance is not the fault of practitioners, but of educators and organizations that refuse to adapt their approach. Drawing on survey data showing 89% of professionals struggling with data modeling, he contends that time pressure, lack of ownership, and condescending educational methods perpetuate the cycle of dysfunction.

8

If your documentation, training, or book is boring AF, it's going to lose to ChatGPT, Slack pings, endless meetings, and doomscrolling.

7

If the data industry has been teaching and preaching the same way for four decades and the vast majority of practitioners are still struggling, clearly the approach hasn't worked.

7

When someone's hair is on fire, they don't need a lecture on the chemical composition of fire; they need a bucket of water.


Take off

Key Insight: The AI revolution may already be here not because the technology has achieved superintelligence, but because it has achieved total dominance over our collective attention and anxiety — a social takeoff that renders the technical question almost moot.

Stancil argues that regardless of whether AI is truly reaching a technical singularity, we are already experiencing a social takeoff — a collective psychosis where AI dominates every conversation, every news cycle, and every corporate decision. The frenzy of model releases, billion-dollar fundraises, and apocalyptic predictions has created a vortex that has already conquered our attention, making the question of whether AI is truly transformative almost beside the point.

9

Is this takeoff, or just takes?

8

They say the internet is dead, full of robots talking to one another. On the contrary—it is furiously, psychotically alive. It is a vortex of this new psychosis, tightening around …

8

If every Anthropic press release is all we talk about, have the robots not already taken over? If every company is urgently rearranging itself around a workforce of agents, does it…


AI is the Best Thing to Happen to Art

Key Insight: AI doesn't threaten real art — it only automates the formulaic content that was never genuine art in the first place, which should be celebrated rather than mourned.

Hotz argues that AI will only ruin art that was already bad — formulaic, derivative content like Marvel movies and generic pop music. He distinguishes between 'slop' that most people consume passively and genuine art that pushes cultural boundaries. He predicts 95% of people will end up 'wireheaded' on AI-generated content loops, while real art will remain human-driven. AI tools will assist production but can't replace the cultural embeddedness and boundary-pushing that defines true art. The post concludes that AI making bad art cheap is actually a good thing, since it exposes and replaces what was never real art to begin with.

8

Art is defined by what is expensive. What is rare. What is expectation breaking. What is embedded in a complex and thriving culture. Not slop produced by a parrot like Marvel movie…

7

I see a world where 95% of people end up basically wireheaded.

7

That was never made by real artists anyway, just algorithmically driven sell outs.


Host Leadership

Key Insight: Servant leadership is a dishonest framing that hides the manager's real power; host leadership offers a more truthful metaphor where the leader prepares the space and retains acknowledged authority to intervene.

Fowler challenges the popular agile concept of servant leadership, relaying Kent Beck's critique that it amounts to gaslighting since the manager claims to serve while retaining real power. He introduces an alternative metaphor from mental-health practice: host leadership. In this model, the leader prepares a suitable space, invites the team in, provides context, then steps back. The host leader still retains authority to intervene when needed, but the framing is more honest about the power dynamic than servant leadership.

8

The manager claims to be a servant, but everyone knows who really has the power.

7

A recent conversation with Kent Beck nailed why - it's gaslighting.

3

This casts the leader as a host: preparing a suitable space, inviting the team in, providing ideas and problems, and then stepping back to let them work.


First I wrote the wrong book, then I wrote the right book (xpost)

Key Insight: The biggest barrier to effective observability isn't technical implementation but the lack of shared strategic alignment between engineering teams and organizational leadership about what problem they're actually trying to solve.

Charity Majors describes how she had to scrap and rewrite an entire section of the second edition of Observability Engineering after realizing she was writing tactical advice for teams operating in a strategic vacuum. Feedback from readers revealed that the real problems weren't technical implementation challenges but deeply dysfunctional organizational buying processes and a lack of shared understanding between engineering and leadership about what observability even is and why it matters.

8

As Tolstoy once wrote, 'Happy teams are all alike; every fucked up team is fucked up in its own precious way.'

6

I was writing tactical advice for teams who were surviving in a strategic vacuum.

6

Your ability to get any returns on your investments into AI will be limited by how swiftly you can validate your changes and learn from them. Another word for this is 'OBSERVABILIT…


Omacon comes to New York

Key Insight: The Linux desktop is experiencing a genuine cultural moment, and Omarchy's rapid community growth proves that opinionated, aesthetics-first open source projects can galvanize passionate in-person communities.

DHH announces OMACON, the first in-person conference for the Omarchy Linux distribution community, to be held April 10 at Shopify's SoHo space in New York City. The event features speakers from the Linux and TUI ecosystem including Vaxry, ThePrimeagen, TJ DeVries, and Dax Raad. He highlights the rapidly growing momentum around Linux on the desktop, driven by improving x86 hardware, terminal UIs from tools like OpenCode and Claude Code, and the Omarchy project itself. DHH emphasizes the irreplaceable value of in-person community gatherings for nerds with shared passions, drawing parallels to Rails World. He celebrates Omarchy's growth from a summer project to 50,000 ISO downloads per week and 300+ contributors.

5

We also need people to JUST DO THINGS.

5

You gotta love when a hundred-plus billion dollar company like this is run by an uber nerd who can just sign off on doing something fun and cool for the community without any direc…

4

There's an endless amount of information and instruction available online, but a sense of community and connection is far more scarce. We nerds need this.


Martin Fowler told me the second edition should be shorter (it’s twice as long) (xpost)

Key Insight: The observability landscape has transformed so dramatically since 2018—with OpenTelemetry winning, AI reshaping workflows, and the market co-opting the term—that a near-complete rewrite of the foundational book was necessary to reflect what observability actually means and requires today.

Charity Majors announces the near-completion of the second edition of 'Observability Engineering,' which is almost twice as long as the first despite Martin Fowler's advice to make it shorter. The new edition features 90% new material, a clearer mission focused on software engineers, and contributions from a diverse group of industry practitioners, reflecting how much the observability landscape has changed since 2018.

7

Most companies still don't have real observability. And they don't know it.

7

The integrations game is over, and OpenTelemetry has won.

6

When we started the book in 2018, Honeycomb was the only observability company, and our definition of observability—high cardinality, high dimensionality, explorability—was the onl…


Notes on clarifying man pages

Key Insight: Man pages can be dramatically improved through design techniques like options summaries, categorical organization, embedded examples, and cheat sheets — all within the existing constrained format.

Julia Evans explores what makes man pages effective and easier to navigate, drawing on examples from rsync, strace, curl, Perl, and the ASCII man page. She crowdsourced favorite man pages on Mastodon and catalogs specific techniques: options summaries, category-based organization, embedded cheat sheets, tables of contents with hyperlinks, per-option examples, and table formatting. She also touches on the GNU project's preference for info manuals over man pages and notes tools like tldr and Dash that supplement man pages. The post frames man page improvement as a design challenge within a constrained format.

5

I've heard from some Emacs users that they like the Emacs info browser. I don't think I've ever talked to anyone who uses the standalone info tool.

3

could the man page itself have an amazing cheat sheet in it? What might make a man page easier to use?

3

once you're listing almost the entire alphabet, it's hard


The Mythical Agent-Month

Key Insight: Coding agents eliminate accidental complexity at unprecedented speed but leave essential complexity untouched, making human design judgment and taste the decisive bottleneck in software development.

McKinney revisits Fred Brooks's The Mythical Man-Month through the lens of agentic AI development, arguing that while coding agents dramatically reduce accidental complexity, the essential complexity of software design remains unchanged. He observes that agents generate new accidental complexity at machine speed, creating bloated codebases that eventually choke future agent sessions. Drawing on his own experience with projects reaching 100 KLOC, he describes a 'brownfield barrier' where agents begin chasing their own tails. He concludes that design taste, product scoping, and conceptual integrity are now the primary bottlenecks, not coding speed.

8

When generating code is free, knowing when to say 'no' is your last defense.

8

The bottleneck was never hands on keyboards.

7

Agents are so good at attacking accidental complexity, that they generate new accidental complexity that can get in the way of the essential structure that you are trying to build.


Agentic Email

Key Insight: Giving an LLM agent access to your email creates a perfect storm of security risks — untrusted content, sensitive data, and external communication — that no amount of convenience justifies without rigorous sandboxing.

Fowler warns against the growing trend of people using LLM agents to manage their email accounts autonomously. While acknowledging the appeal of offloading email drudgery, he argues that email represents a perfect instance of Simon Willison's 'Lethal Trifecta': untrusted content, sensitive information, and external communication capability. He notes that password-reset workflows through email make this especially dangerous. He suggests a mitigation approach of sandboxing the agent with read-only access and no internet connectivity, accepting reduced capability as the price of security.
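Fowler's mitigation, read-only access with no outbound channel, amounts to an allowlist on what the agent is permitted to do. Here is a toy sketch of that idea; the tool names and dispatch structure are invented for illustration, not taken from the post.

```python
# Hypothetical agent tool gate: only read-style email operations pass,
# so a prompt-injected agent can neither exfiltrate nor send anything.
READ_ONLY_TOOLS = {"list_messages", "read_message", "search_messages"}

def dispatch(tool_name, handler, *args):
    """Run an agent-requested tool only if it is on the read-only allowlist."""
    if tool_name not in READ_ONLY_TOOLS:
        raise PermissionError(f"tool {tool_name!r} blocked by sandbox policy")
    return handler(*args)

# A blocked call: sending mail is exactly the capability the sandbox removes.
try:
    dispatch("send_message", lambda to: f"sent to {to}", "attacker@example.com")
except PermissionError as e:
    print(e)
```

The reduced capability is the point: an agent that can read but never write or reach the network breaks the third leg of the Lethal Trifecta.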

7

An agent working on my email has oodles of context - and we know agents are gullible.

6

Direct access to an email account immediately triggers The Lethal Trifecta: untrusted content, sensitive information, and external communication.

5

Email is the nerve center of my life. There's tons of information in there, much of it sensitive.


Apple Releases iOS 26 Adoption Rates, and They’re Pretty Much in Line With the Last Few Years

Key Insight: iOS adoption rates are driven by Apple's automatic update rollout schedule, not user enthusiasm, making third-party adoption estimates unreliable and adoption numbers uninformative about user sentiment.

Gruber presents Apple's official iOS 26 adoption statistics, showing they're consistent with iOS 17 and 18 rates from previous years. He uses these numbers to vindicate his earlier argument that reports of abnormally low iOS 26 adoption were based on bogus StatCounter data. The real explanation was that Apple was simply slower to push automatic updates for iOS 26, which he attributes to it being a more significant and buggier release. He emphasizes that most users update via automatic pushes from Apple, so adoption rates reflect Apple's rollout schedule, not user sentiment about Liquid Glass or iOS 26's quality.

5

Their opinions about iOS 26 form after they install it.

5

Apple has been slower to push those updates to iOS 26 than they have been for previous iOS updates in recent years. With good reason! iOS 26 is a more significant — and buggier — u…

4

A large majority of users of all Apple devices get major OS updates when, and only when, their devices automatically update.


Cost of Housing

Key Insight: The housing affordability crisis is structurally unsolvable because the entire system is designed to protect existing homeowners' investments, making affordability for new buyers fundamentally incompatible with the promises made to current owners.

Hotz argues that the housing affordability crisis is fundamentally unsolvable because existing homeowners—many of whom are leveraged through mortgages—cannot afford for prices to drop. He frames the situation as a generational wealth transfer problem: boomers were promised that homes are appreciating assets, and the entire system is now structured to maintain that fiction. He dismisses common explanations like zoning and building costs as missing the real issue, which is that current owners need to offload their investment onto new buyers.

8

The boomers were told houses are appreciating assets, and now we must bend reality to make that true.

8

It has nothing to do with zoning, building costs, or environmental reviews. It has to do with people holding bags they need to dump on you.

6

It's simply out of the question for housing prices to go down.


tiny corp's product -- a training box

Key Insight: The future of AI must involve local models that genuinely learn and update their weights per user, because without real personalization beyond shallow prompting, humans become interchangeable middlemen to homogeneous cloud minds.

George Hotz outlines tiny corp's long-term product vision: a local training box that learns and adapts its weights based on individual user interactions, unlike current LLMs that serve identical frozen models to everyone. He argues that the current trajectory of AI leads to a collapse in cognitive diversity, where billions of people interact with the same handful of homogeneous minds differentiated only by shallow prompt customization. Hotz contends that cloud-based AI will win by default unless local models can offer genuine per-user learning through weight updates, not just in-context learning. He positions the tinybox hardware, tinygrad infrastructure, and OpenClaw/opencode frontends as the foundation for this vision. The post warns that people who rely on homogeneous cloud AI will become superfluous middlemen, ultimately cut out of the economic loop. While acknowledging this vision is years away, he frames it as building something fundamentally different: an AI that lives in your house and learns your values.

9

I think there is a world market for maybe five people.

8

The open question is if everything that's unique about you can fit in a 10 kB CLAUDE.md.

8

If you choose the homogenous mind, you are superfluous and will be cut out.


Where Data Engineering Is Heading in 2026 - 5+ Trends

Key Insight: The defining challenge of data engineering in 2026 isn't technological but organizational — teams that built disciplined foundations and secured leadership buy-in will accelerate with AI, while those carrying unaddressed debt will be buried by it.

Based on a survey of 1,101 data practitioners, Reis identifies five key trends shaping data engineering in 2026: AI becoming table stakes, a data modeling crisis demanding semantic layers, orchestration consolidation, the lakehouse-warehouse convergence, and leadership emerging as the critical bottleneck. The overarching theme is that unpaid technical and organizational debts are compounding, and the gap between disciplined and undisciplined teams is widening.

8

Disciplined teams will use AI to move faster with quality. Undisciplined teams will use AI to create technical debt faster.

7

The big theme of 2026 is that unpaid debts of the past carry interest, accruing at payday loan rates.

6

By the end of 2026, 'AI-assisted' will disappear from job descriptions because it will be assumed.


I Told You So

Key Insight: The danger of the singularity isn't artificial intelligence itself but the cultural and power structures directing it — technology built to concentrate power rather than serve people will produce dystopia regardless of how advanced it becomes.

Hotz revisits his 2019 prediction about the singularity arriving soon and bringing horrific consequences if power-seeking motivations persist. He argues that society has lost its way, building technology that serves exploitation rather than genuine human benefit, contrasting Craigslist's simplicity with Shein's consumerism. He links to 'schizoposting' essays by Alaric that frame the core problem as cultural rather than technological, comparing them to Mark Fisher's 'Capitalist Realism.' The post is a short, frustrated call for collective action and cultural revolution against systems that concentrate power while hollowing out meaning.

8

How is everyone enjoying their singularity?

8

Why are we letting the minds that invented fastpass run things?

8

Is everyone individually too weak to defect? Sounds like we need a revolution.


The Final Bottleneck

Key Insight: AI accelerates code production but cannot assume accountability, making human understanding and responsibility the irreducible constraint that no amount of automation can eliminate.

Armin Ronacher argues that AI has dramatically accelerated code creation, but review capacity and human accountability remain hard bottlenecks that cannot be automated away. He draws parallels to the textile industry's industrial revolution, where removing one bottleneck simply revealed the next downstream constraint. Open source projects are already drowning in AI-generated PRs that nobody can review, and internal teams face the same unsustainable pace. He explores two responses — throttling input or giving in to machine-driven workflows — but concludes that as long as non-sentient machines cannot bear accountability, humans remain the irreducible bottleneck. The post ends with a reflective acceptance: he was always the bottleneck, and AI hasn't changed that fundamental reality.

8

As we're entering the phase of single-use plastic software, we might be moving the whole layer of responsibility elsewhere.

7

When more and more people tell me they no longer know what code is in their own codebase, I feel like something is very wrong here and it's time to reflect.

7

I too am the bottleneck now. But you know what? Two years ago, I too was the bottleneck. I was the bottleneck all along.


Go crazy, folks, go crazy

Key Insight: In AI products, reckless boldness has consistently beaten responsible caution, and the data analytics industry is likely next in line for this pattern to repeat.

Stancil argues that the biggest AI product winners have consistently been those willing to take reckless, 'just try it' approaches rather than cautious, responsible ones. He suggests the data analytics world is ripe for someone to build an unhinged AI product that unleashes autonomous agents to find insights, and that this irresponsible approach will likely beat the careful, governed alternatives—just as ChatGPT beat Google, Claude Code beat Copilot, and other bold bets outpaced their cautious competitors.

8

I once asked people how often in their careers they found a truly meaningful 'insight' in their data. The average answer was once every two years—or, if measured by an analyst's sa…

7

When you're on the inside, you forget that most people don't care about the details that you do. You spent your life carefully researching AI safety inside of a cleanroom at Google…

7

Is it the AI agent that's optimized to oh-so-precisely answer mundane questions like, 'How many shirts did we sell last week?' over and over again via a Slack integration? Or is it…


Building an Obsidian RAG with DuckDB and MotherDuck

Key Insight: AI agents accelerate building but the real productivity unlock is plan mode — structured human-in-the-loop planning before autonomous execution — not smarter models.

Simon Späti built a personal knowledge assistant using RAG on his Obsidian vault, with DuckDB as the vector database locally and MotherDuck for a serverless web app via WASM. The system uses BGE-M3 embeddings to enable semantic search across nearly 9,000 markdown notes, leveraging Obsidian's backlink graph to boost results and surface hidden connections between notes. He emphasizes the local-first approach to keep sensitive notes private. The article doubles as a reflection on AI agents in data engineering, arguing that 'plan mode' in tools like Claude Code is the real productivity breakthrough — not smarter models. He cautions that vibe coding and AI agents still require a human architect to avoid generating unmaintainable code that solves the wrong problem.
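Stripped of the DuckDB and BGE-M3 specifics, the retrieval step in a pipeline like this reduces to ranking stored note vectors by cosine similarity against an embedded query. A toy sketch with made-up 3-dimensional embeddings and invented note names (real BGE-M3 vectors are much higher-dimensional):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented toy embeddings standing in for note vectors in the store.
notes = {
    "duckdb-notes.md": [0.9, 0.1, 0.0],
    "gardening.md":    [0.0, 0.2, 0.9],
    "sql-tricks.md":   [0.8, 0.3, 0.1],
}
query = [1.0, 0.2, 0.0]  # stand-in for an embedded search query

ranked = sorted(notes, key=lambda n: cosine(notes[n], query), reverse=True)
print(ranked[0])  # the most semantically similar note
```

Späti's system then layers Obsidian's backlink graph on top of this similarity score to boost connected notes, which is what surfaces the hidden connections he describes.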

6

Let them run without direction, and you'll get a thousand lines solving the wrong problem.

5

The character and soul of the person gets stripped away. The quirky things someone does, which make them who they are, that takes away from the fun of writing.

5

Always keep it simple, because it's easy to make it complex. The true beauty lies in making it simple, which is something agents are not good at.

Full analysis Original

Future Of Software Development

Key Insight: The software industry sees AI and LLMs as a potential inflection point on the scale of the Agile Manifesto, warranting a similar gathering of practitioners to chart the profession's future direction.

In February 2026, Thoughtworks hosted a workshop called 'The Future of Software Development' in Deer Valley, Utah, deliberately echoing the 25th anniversary of the Agile Manifesto. About 50 invited participants—Thoughtworkers, software pundits, and clients—gathered for a day and a half of Open Space sessions focused on how AI and LLMs are reshaping the profession. Fowler chose not to synthesize the discussions into a single narrative, instead distributing his insights across several fragment posts. The event was held under the Chatham House Rule, limiting attribution of comments.

2

While it was held in the mountains of Utah as a nod to the 25th anniversary of the writing of Manifesto for Agile Software Development, it was a forward-looking event, focusing on …

2

About 50 or so people were invited, a mixture of Thoughtworkers, software pundits, and clients - all picked for being active in the LLM-fuelled changes.

2

I haven't attempted to make a coherent narrative of what we discussed and learned there.

Full analysis Original

The AI Vampire

Key Insight: AI's productivity gains are real, but without consciously fighting the 'vampire' extraction of those gains by employers, the result will be widespread burnout rather than shared prosperity — the sustainable answer is a dramatically shorter workday.

Yegge argues that AI coding tools genuinely deliver massive productivity gains, but this creates a dangerous 'vampire' effect where companies extract all the value while workers burn out. He frames the problem as a value capture equation: if employees work 8 hours at 10x productivity, the company wins and the employee is drained; if employees slack off, the company dies. He takes accountability for setting unrealistic standards as an early adopter and warns that AI-native startups are poisoning the well by sprinting toward mostly terrible ideas. His proposed solution is the $/hr formula he invented at Amazon: you can't control your salary, but you can control your hours. He advocates for a 3-4 hour workday as the sustainable norm for AI-augmented knowledge work.

8

There's a massive amount of talent being thrown at an incredible dearth of real ideas, basically the same six tired pitches.

8

AI has turned us all into Jeff Bezos, by automating the easy work, and leaving us with all the difficult decisions, summaries, and problem-solving.

7

AI is starting to kill us all, Colin Robinson style.

Full analysis Original

Python Was Built for Humans. AI Just Changed Everything.

Key Insight: The shift from human-written to agent-written code fundamentally changes which programming language properties matter — execution speed and test suite performance now outweigh the human ergonomics that made Python dominant.

In this podcast conversation with Mike Driscoll, Wes McKinney traces his journey from studying pure math at MIT to building pandas during the 2008 financial crisis, through Apache Arrow, Ursa Labs, and Voltron Data. He discusses the current landscape of next-generation columnar file formats (Lance, Vortex, Nimble, F3) targeting different use cases from analytical workloads to multimodal AI data. The central thesis is a fundamental shift from human ergonomics to agent ergonomics in programming — Python succeeded because it was pleasant for humans, but with agents writing code, execution speed and test suite performance now matter more than writability. He demonstrates his multi-agent workflow using Claude Code with RoboRev, a tool he built to have Codex adversarially review every commit, and argues this continuous automated review is essential for quality. He closes with advice that the next generation should focus on software architecture, design patterns, and code comprehension rather than learning to write code from scratch.

7

Python is so successful because it's good for humans. It's good for humans to write. It's enjoyable. But in a world where agents are writing all of the code, all these benefits tha…

7

Learning to write code is not that important now, but you do need to invest in learning about the theory of software architecture and what effective and sustainable large-scale sof…

6

If all of your code isn't being automatically reviewed by adversarial agents, you've essentially got tons of bugs lurking that you can't possibly find through your own human QA.

Full analysis Original

The 2026 State of Data Engineering Survey (Interactive)

Key Insight: AI has become ubiquitous among individual data engineers but organizational adoption is stalled, and the field's biggest obstacles remain people problems — poor leadership direction, unclear requirements, and pressure to skip modeling — not technology.

Reis presents the results of a 1,101-response data engineering community survey, delivered as an interactive explorer rather than a traditional PDF report. Key findings reveal that while AI tool usage is nearly universal among individual practitioners, organizational AI adoption remains largely experimental, and data modeling continues to be a major pain point driven by organizational rather than technical challenges.

5

AI is table stakes. 82% of you use AI tools daily or more. Only 3.7% find them unhelpful. But organizational adoption lags way behind.

5

The bottlenecks aren't technical. Legacy systems top the list (25%), but lack of leadership direction (21%) and poor requirements (19%) are close behind. People problems rival tech…

4

Go find something I missed.

Full analysis Original

Arch Linux (Omarchy) — 8 Months Later: The Good, the Bad, and the Fixable

Key Insight: Switching from macOS to Linux is now viable for power users thanks to Omarchy's polish and AI-powered troubleshooting with Claude Code, trading Apple's seamless defaults for deeper customizability and permanent fixes.

Simon Späti shares his 8-month experience after switching from macOS to Arch Linux with Omarchy as his daily driver. He details the Linux replacements for his macOS apps including Walker Launcher for Raycast, Morgen for calendar, and Filen for backups, finding many alternatives equal or superior. He documents running Windows inside Linux via Docker for Microsoft Office, and upgrading to a Tuxedo InfinityBook Pro 14 with 128GB RAM. While praising the terminal-native workflow and customizability, he honestly catalogs significant pain points including GPU crashes, hibernation bugs, and WiFi firmware issues. He concludes that Claude Code was essential for troubleshooting and that Linux rewards tinkering with deeper understanding and lasting fixes.

7

It's now easier to run Windows on Linux than natively on a Windows machine.

6

If you need something, you just build it with Claude Code and integrate it into your laptop. No need to ask Mr. Bill Gates or Tim Cook to integrate it.

5

Unlike other operating systems that change stuff you set in settings for a reason, only to learn that certain updates turned that checkmark back on.

Full analysis Original

A Language For Agents

Key Insight: The rise of agentic programming inverts traditional language design priorities: explicitness, greppability, and local reasoning now matter more than brevity, and agent performance provides the first empirical feedback loop for language design.

Armin argues that agentic programming is creating genuine opportunity for new programming languages, contrary to the assumption that existing codebases would cement current languages in place. He observes that the falling cost of writing code reduces the importance of ecosystem breadth, while the rising volume of code increases the importance of readability and explicitness. The post catalogs specific language features that help or hinder AI agents: explicit types over inference, braces over whitespace, greppable imports, result types over exceptions, and minimal macro usage. He proposes novel ideas like auto-propagating effect annotations and argues that language design can now be empirically validated by measuring agent performance. The conclusion is optimistic: new languages targeting agent-friendliness can succeed even without large training corpora, and we should encourage both outsider experimentation and principled documentation of what works.
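The 'result types over exceptions' point can be made concrete with a toy sketch (not from the post; all names are invented): an explicit result value forces every failure path to appear at the call site, where a plain text search for `Err` finds it — exactly the greppability the post argues agents need.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: int

@dataclass
class Err:
    reason: str

def parse_port(raw: str) -> Union[Ok, Err]:
    # Every failure is a visible return value, not an invisible raise.
    if not raw.isdigit():
        return Err(f"not a number: {raw!r}")
    port = int(raw)
    if not 1 <= port <= 65535:
        return Err(f"out of range: {port}")
    return Ok(port)

# The caller must name both outcomes; grep for "Err" and you have found
# every place this function can fail.
result = parse_port("8080")
if isinstance(result, Ok):
    print("listening on", result.value)
else:
    print("config error:", result.reason)
```

Contrast with a raising version: the exception propagates silently through any number of callers, and neither an agent nor grep can see from the call site that failure is possible.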

7

It pains me as a Python developer to say this, but whitespace-based indentation is a problem.

6

Many of today's languages were designed with the assumption that punching keys is laborious, so we traded certain things for brevity.

6

Most programming languages and frameworks make it much easier to write flaky tests than non-flaky ones. That's because they encourage indeterminism everywhere.

Full analysis Original

Alenka Frim: What yoga teaches us about discipline and collaboration in data science

Key Insight: AI coding agents are a force multiplier that will dramatically increase the volume and quality of shipped software, but human judgment in system design and critical thinking remains irreplaceable -- making foundational technologies like Arrow more important, not less.

In this episode of The Test Set, Wes McKinney and co-hosts interview Alenka Frim, an Apache Arrow committer and yoga teacher, about open source community health, Arrow's growing role in the data ecosystem, and how AI is reshaping software engineering. McKinney reflects on Arrow's first decade, describing it as 'AI-resistant' technology that becomes more important as AI advances. He shares his experience using coding agents daily, noting they make him more productive than ever while the human judgment layer remains essential. The conversation explores how pedagogy and software careers are being transformed by AI tools, with McKinney predicting an order-of-magnitude increase in shipped software products enabled by dramatically lower development costs.

6

When beautifying the code base has one thousandth of the cost that it used to have, of course the code base should be beautiful. Of course it should be well-structured and easy to …

5

Maybe the field of software engineering will shrink, but I think maybe it will stay the same size and we'll all be building 10 to 100 times more software.

5

I described the Arrow project as being what I felt like an AI-resistant project or AI-resistant technology. And I wanted to see what you thought about that, both in the sense that …

Full analysis

Why Coinbase and Pinterest Chose StarRocks: Lakehouse-Native Design and Fast Joins at Terabyte Scale

Key Insight: StarRocks' colocated joins and multi-tier caching let teams skip the expensive pre-denormalization step in Flink/Spark, but good data modeling and careful partition key planning remain essential for realizing those gains.

This sponsored deep-dive examines why companies like Coinbase, Pinterest, and Fresha chose StarRocks over alternatives like ClickHouse, Druid, and Pinot for their analytics needs. Through interviews with engineers at these companies, Späti identifies a common pattern: customer-facing analytics on cloud data warehouses like Snowflake became too slow, and teams needed sub-second query responses without heavy pre-denormalization. StarRocks' key differentiator is its ability to perform fast distributed joins natively, enabled by colocated joins, a cost-based optimizer, and multi-tier caching. The article concludes that StarRocks excels when joins are central to your analytics, while ClickHouse remains stronger for single-table observability workloads. Despite the advantages, good data modeling is still essential, and the smaller community and operational complexity are real tradeoffs.

4

You can't overcome the laws of physics, reading from S3 is just slow.

3

Almost all good designs come down to good data architecture and modeling your data flow.

3

The key insight is that you don't build all these layers upfront. You start with normalized data, query it directly, and see if it's fast enough.

Full analysis Original

The Lilliputians Have AI Now: On SaaS and the Era of Disposable Software

Key Insight: The SaaS model depended on software being hard to build and easy to rent, but AI has inverted that equation — making software disposable and generated on demand while triggering a Jevons Paradox where easier creation drives exponentially more software into existence.

Reis argues that AI tools are making software trivially easy to build, transforming it from something you rent (SaaS) into something disposable you generate on demand. While this doesn't kill all software companies — those with deep data gravity, network effects, and integration ecosystems retain their moats — vendors whose value proposition is merely a workflow wrapper or single feature are facing existential threat from millions of AI-empowered builders chipping away at their offerings.

8

If I can describe what your product does in two sentences, AI can probably build it.

7

The assumption that SaaS was the golden goose was based on the idea that software was hard to build and easy to rent. That dynamic has flipped. Software is now easy to build, and i…

7

Vendors are facing a potential Lilliputian attack of millions of people using AI to chip away at countless vendor flagship offerings from millions of tiny angles.

Full analysis Original

The gentle obsolescence

Key Insight: We've been comforting ourselves with the 'AI as intern' metaphor, but the inversion has already happened—AI is increasingly the one with better ideas, and we haven't begun to grapple with what it means when reasoning is no longer our competitive advantage.

Benn argues that AI has quietly surpassed human reasoning ability in most practical domains, and we haven't truly reckoned with this shift. What started as 'AI is like an intern' has inverted: planning mode and other AI features are subtly steering us, not the other way around. The future won't be one of helpful assistants doing our homework—it will be something stranger, as reasoning ceases to be humanity's competitive advantage.

8

AI is mechanically useful because it does stuff for us, and that is what we usually talk about. But its emotionally intoxicating power—its real delight, or its real danger—is that …

7

It's better than me at most things, and I don't know how to keep up. I rarely have better ideas than Claude. I rarely can solve a problem that Gemini can't.

7

More and more, reasoning is not our competitive advantage. All we have is opinions, the context of what is in our heads, and hands.

Full analysis Original

The Anthropic Hive Mind

Key Insight: The future of high-performing organizations is the ego-free hive mind — fully transparent, vibes-driven, improvisational teams with more work than people, building around living prototypes rather than specs.

Yegge argues that Anthropic is operating as a 'Hive Mind' — a vibes-driven, ego-free, improvisational organizational model that he believes represents the future of how all successful companies will operate. Drawing on conversations with nearly 40 Anthropic employees and observations of a tiny startup called SageOx, he describes a Golden Age where there is far more work than people, eliminating political infighting. He contrasts this with Google's decline after Larry Page restricted innovation in 2011, arguing that Golden Ages die when people outnumber work. The post warns that 2026 will break many companies that don't adapt, and urges organizations to start spending tokens, building campfires around living prototypes, and embracing exploratory development over spec-driven waterfall approaches.

7

It's the death of the ego. Everyone can see all your mistakes and wrong turns. Everyone can see exactly how fast you work. There is nothing you can hide, nothing to hide. You have …

7

If you have a strictly online or SaaS software presence, with no atoms in your product whatsoever, just electrons, then you are, candidly, pretty screwed if you don't pivot.

6

During Golden Ages, there is more work than people. And when they crash, it is because there are more people than work.

Full analysis Original

My AI Adoption Journey

Key Insight: Productive AI adoption requires painful deliberate practice to learn what agents can and can't do, then systematically engineering harnesses and verification tools so agents produce correct results autonomously while you focus on the work you actually enjoy.

Mitchell Hashimoto describes his gradual journey from AI skeptic to productive AI user, structured as a six-step progression. He began by abandoning chatbot interfaces in favor of agentic tools, then forced himself through a painful period of reproducing his manual work with agents to build expertise. He discovered efficiency gains by running agents during off-hours, delegating high-confidence tasks while working on other things manually, and engineering 'harnesses' (AGENTS.md files and verification tools) to prevent recurring agent mistakes. His current goal is to always have a background agent running on useful work, though he emphasizes measured adoption over hype and respects others' choices not to use AI at all.
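The 'harness' idea, a single mechanical verification entry point the agent runs after every change, might look like the sketch below. The check names and commands are placeholders; Hashimoto's actual tooling is not shown in the post.

```python
import subprocess

def verify(checks):
    # Run each named check command; return the names of checks that failed.
    # An agent can be told to run this after every edit, so recurring
    # mistakes are caught mechanically instead of by the human reviewer.
    failures = []
    for name, cmd in checks:
        try:
            if subprocess.run(cmd, capture_output=True).returncode != 0:
                failures.append(name)
        except FileNotFoundError:
            failures.append(name)  # a missing tool also counts as a failure

    return failures

# Hypothetical project checks; substitute whatever your AGENTS.md promises.
CHECKS = [
    ("tests", ["go", "test", "./..."]),
    ("lint",  ["golangci-lint", "run"]),
]
```

The payoff Hashimoto describes is that each recurring agent mistake becomes a new entry in the list, so the harness ratchets toward catching the whole class of error rather than one instance.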

7

Instead of trying to do more in the time I have, try to do more in the time I don't have.

6

Very important at this stage: turn off agent desktop notifications. Context switching is very expensive. It was my job as a human to be in control of when I interrupt the agent, no…

5

Instead of giving up, I forced myself to reproduce all my manual commits with agentic ones. I literally did the work twice.

Full analysis Original

The Vibes Stack: A Technical Deep Dive

Key Insight: By building a pixel-perfect parody of a data infrastructure deep dive around the absurd premise of 'vibe engineering,' the post exposes how easily the tech industry's love of complex architectures and buzzwords can be applied to justify building anything—regardless of whether it solves a real problem.

This satirical post presents an elaborate, mock-serious 'technical deep dive' into building infrastructure for capturing and analyzing organizational 'vibes' at scale, complete with fake reference architectures, query languages, and maturity models. It parodies the data engineering industry's tendency to over-architect solutions, invent new categories, and wrap hype in technical jargon—all while poking fun at VC-backed infrastructure startups and the 'agentic era' buzzword cycle.

8

They reserve the right to fund whatever they just made up.

7

Vibes are metadata, not data.

7

Research shows that meetings accepted with high dread scores have 73% higher rates of 'can we take this offline?' outcomes.

Full analysis Original

Clankers with claws

Key Insight: AI agents can already navigate human-designed web interfaces without special accommodations, suggesting that purpose-built machine APIs for agents are a temporary crutch rather than the future.

DHH describes his experience with OpenClaw, a tool that gives AI agents their own machine, persistent memory, and autonomous execution capabilities. He set up an agent named Kef on an isolated virtual machine and tested whether it could navigate human-designed web interfaces without any special accommodations like MCPs, APIs, or skills. The agent successfully signed up for a HEY email account, registered for Fizzy, created boards with content, and joined Basecamp—all by navigating the web like a human would. DHH argues this points to a future where AI agents won't need special machine-readable interfaces, drawing an analogy to self-driving cars dropping LIDAR in favor of vision-only systems.

7

If I was going to skate to where the puck is going to be, it'd be a world where agents, like self-driving cars, don't need special equipment, like LIDAR or MCPs, to interact with t…

6

I didn't install any skills, any MCPs, or give it access to any APIs. Zero machine accommodations.

6

I'm thoroughly impressed. All the agent accommodations, like MCPs/CLIs/APIs, probably still have a place for a bit longer, as doing all this work cold is both a bit slow and token-…

Full analysis Original

Launching The Rural Guaranteed Minimum Income Initiative

Key Insight: Atwood argues that direct cash transfers to rural Americans in poverty are the most effective, evidence-based form of philanthropy, and commits half his wealth to proving it at scale through open, replicable science.

Jeff Atwood announces the Rural Guaranteed Minimum Income Initiative (RGMII), his 'third and final startup,' funded by pledging $50M — half his remaining wealth — to address systemic poverty in America. Building on $21M already donated to various nonprofits, the initiative focuses on rural counties where dollars stretch further and poverty is more prevalent. The program funds direct cash transfers to families in generational poverty across three initial counties in West Virginia, North Carolina, and Mississippi, with ambitions to reach all 50 states. Atwood frames GMI as an evidence-based improvement over Universal Basic Income, citing study data showing cash transfers effectively meet basic needs and reduce hardship. He emphasizes open data, replicable science, and trust-based philanthropy modeled on venture capital funding of existing organizations like GiveDirectly and OpenResearch.

6

"Trickle up" economics works, whereas "trickle down" tax cuts for the rich increase income inequality and provide no significant effect on growth or jobs.

5

This is my third and final startup.

5

Simply giving money to those most in need is perhaps the most radical act of love we can take on... and all the data I can find shows us that it works.

Full analysis Original

Cloud gaming is kinda amazing

Key Insight: Cloud and local game streaming have quietly crossed the quality threshold where they can replace dedicated gaming hardware for most players, following the same inevitability as music and movie streaming before them.

DHH shares his enthusiasm for cloud gaming, arguing it has finally reached a tipping point of quality and practicality. He draws parallels to how streaming displaced physical media for music and movies, noting that NVIDIA's GeForce NOW service now delivers impressive performance even for competitive shooters. He also highlights local game streaming via Apollo and Moonlight as a way to repurpose a powerful gaming PC as a household streaming server. The post positions cloud and local game streaming as a practical alternative to expensive dedicated hardware, especially for Linux users.

6

Funny how NVIDIA is better at offering the promise of cheap cloud costs than the likes of AWS!

5

Google Stadia appears to have been just a few years ahead of reality (eerie how often that happens for big G, like with both AI and AR!)

4

Not 'better' in some abstract philosophical way (ownership vs rent) or even in a concrete technical way (bit rates), but in a practical way.

Full analysis Original

Claude Chic: Week 2

Key Insight: Multi-agent AI workflows where Claude debates itself can produce more thorough analysis than single-agent approaches, but running many agents simultaneously demands serious performance engineering.

Matt Rocklin provides a week-two update on Claude Chic, an alternative terminal UI for Claude Code, reporting 75 commits from 7 contributors. The headline feature is multi-agent workflows, where multiple AI agents debate implementation approaches or review code changes. The post also covers performance optimizations needed to support running many agents simultaneously, custom theme support, and cross-platform compatibility fixes. The update reflects a project transitioning from single-user tool to community-driven open source project.

4

Running 5-10 agents simultaneously revealed that we were doing far too much UI work for agents you weren't looking at.

4

The name is intentional—if you're using this flag, you should know what you're doing.

3

The idea: having Claude debate itself produces more thorough analysis.

Full analysis Original

Announcing msgvault: lightning fast private email archive and search system, with terminal UI and MCP server, powered by DuckDB

Key Insight: In the age of local LLMs and enshittified cloud services, users should own their data locally and query it on their own terms — and the tools to make this fast and practical finally exist.

Wes McKinney announces msgvault, a local-first email archive and search system built in Go that uses SQLite and DuckDB to enable millisecond queries over a lifetime of email data. The project was motivated by Gmail's increasing enshittification — poor search, slow attachment retrieval, and unwanted AI features — and the desire to own and privately query personal data. msgvault syncs emails via Gmail's OAuth API, stores them in SQLite with Parquet indexes for DuckDB-powered search, and ships with a terminal UI, CLI, and MCP server for LLM integration. McKinney has indexed nearly 2 million emails and 150,000 attachments from his 20-year Gmail history, and plans to eventually delete all data from Gmail after confirming local backups. Future plans include mbox import, WhatsApp/iMessage support, and a web UI.
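The local-first storage idea is easy to picture with plain SQLite, which msgvault uses for its archive. The schema below is a minimal invented stand-in — the announcement doesn't show msgvault's real tables, and the Parquet/DuckDB search index is omitted here.

```python
import sqlite3

# A local SQLite file (in-memory for the demo) holding the mail archive.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE emails (
        id      INTEGER PRIMARY KEY,
        sender  TEXT,
        subject TEXT,
        sent_at TEXT
    )
""")
# An index on sender keeps the common "everything from X" query fast.
con.execute("CREATE INDEX idx_sender ON emails(sender)")
con.executemany(
    "INSERT INTO emails (sender, subject, sent_at) VALUES (?, ?, ?)",
    [
        ("alice@example.com", "Parquet index ready", "2026-01-02"),
        ("bob@example.com",   "Re: DuckDB search",   "2026-01-03"),
        ("alice@example.com", "Attachment archive",  "2026-01-05"),
    ],
)

# A typical archive query: everything from one sender, newest first.
rows = con.execute(
    "SELECT subject FROM emails WHERE sender = ? ORDER BY sent_at DESC",
    ("alice@example.com",),
).fetchall()
print([r[0] for r in rows])
# → ['Attachment archive', 'Parquet index ready']
```

Owning the file means you can query it with whatever engine you like — which is presumably why msgvault layers DuckDB over Parquet exports of the same data for analytical search.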

8

I've always been uncomfortable with my email corpus being fodder for Google's advertising machine, but 'Would you like to use Gemini AI on your email?' feels like a special kind of…

7

Rather than improving the core product (which they have no incentive to do: see enshittification), Google has been focused on shoving AI features I don't want in my face.

7

Something about giving all of my Gmail to Sam Altman and co makes me squirm.

Full analysis Original
January 2026 57

Pi: The Minimal Agent Within OpenClaw

Key Insight: The most powerful agent architecture is a minimal one that extends itself by writing code, rather than accumulating downloaded plugins and integrations.

Ronacher introduces Pi, a minimal coding agent by Mario Zechner that powers the viral OpenClaw project. Pi's philosophy is radical simplicity: just four tools (Read, Write, Edit, Bash), the shortest system prompt of any agent, and an extension system that lets the agent extend itself by writing code. Rather than downloading MCP integrations or community plugins, Pi encourages users to have the agent build its own tools and skills. Ronacher argues this 'software building software' approach, where agents maintain their own functionality and can be connected to any communication channel, represents the future of how we'll interact with code.
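Pi's four-tool surface can be pictured as a small dispatch table. This is an illustration of the idea, not Pi's actual code (Pi is Mario Zechner's project; the signatures here are invented): with only Read, Write, Edit, and Bash, anything else the agent needs is something it writes for itself.

```python
import pathlib
import subprocess

# The entire tool surface: four operations, nothing more.
TOOLS = {
    "read":  lambda path: pathlib.Path(path).read_text(),
    "write": lambda path, text: pathlib.Path(path).write_text(text),
    "edit":  lambda path, old, new: pathlib.Path(path).write_text(
        pathlib.Path(path).read_text().replace(old, new, 1)
    ),
    "bash":  lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def call(tool, *args):
    # The agent's only interface to the world.
    return TOOLS[tool](*args)
```

Everything beyond these four calls — new skills, integrations, channels — is just files the agent writes and scripts it runs, which is the 'software building software' loop Ronacher highlights.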

6

Pi's entire idea is that if you want the agent to do something that it doesn't do yet, you don't go and download an extension or a skill or something like this. You ask the agent t…

6

As more code is written by agents, it makes little sense to throw unfinished work at humans before an agent has reviewed it first.

5

The point of it mostly is that none of this was written by me, it was created by the agent to my specifications.

Full analysis Original

Parkinson's Law and AI: Does AI Mean...More Work?

Key Insight: AI won't replace workers so much as it will amplify Parkinson's Law — making everyone more productive just means there's more work to do, and companies that fire their workforce to cut costs will lose the tacit knowledge AI agents need to function.

Reis argues that AI won't lead to mass unemployment but instead will trigger Parkinson's Law: as AI makes workers more efficient, the freed-up time gets filled with even more work and higher-scoped projects. He warns that companies rushing to fire workers in favor of AI will lose critical tacit knowledge and ultimately need more human effort, not less.

7

I use AI to do more work… so I can make more money… to buy more tokens… to do more work.

7

The amount of tacit knowledge locked up in workers' heads is likely an existential impediment to the effective deployment of AI agents in the real world.

7

If AI makes people, say, 50% more productive, it might make sense to hire more people, not fewer.

Full analysis Original

Gas town

Key Insight: The frantic energy of AI-driven building is indistinguishable from Gas Town's industrial chaos, and none of it matters as much as the unglamorous, dangerous work of showing up for your neighbors.

Stancil uses Gas Town — Steve Yegge's real, intentionally extreme AI coding framework built on swarms of Claude Code agents — as a mirror for Silicon Valley's current frenzy of vibe coding, startup mania, and existential anxiety. He then pivots sharply to ICE raids in Minneapolis and the shooting of Alex Peretti, contrasting the tech world's obsession with speed and output against the quiet, unglamorous courage of neighbors protecting each other.

9

Most startups will incinerate themselves inside of an Anthropic data center.

7

In Gas Town, waste is a mathematical necessity.

6

Gas Town is AI agents talking to AI agents talking to AI agents.

Full analysis Original

Multi-Agent Workflows

Key Insight: Multi-agent AI workflows are most valuable in simple sequential patterns like review and fresh perspective, while fully parallel swarms suffer from the same coordination overhead that plagues human teams.

Matt Rocklin shares his initial experiments with multi-agent AI workflows using his Claude TUI alternative, Claude Chic. He ran two experiments—a Diplomacy game between agents and a collaborative session viewer build—finding that coordination overhead often outweighs parallelism benefits. He concludes that simple multi-agent patterns like separate review and fresh perspective agents are clearly valuable, while fully parallel 'YOLO swarms' are not yet worth the coordination cost. His key insight is that developers should focus on what's actually holding them back rather than chasing multi-agent hype, and that sequential agent workflows often beat parallel ones.

6

The solution to AI problems is more AI.

6

Rather than ask 'How can I use multi-agent swarms?' I am instead asking 'What is holding me back right now? How can I best address that?'

5

No one is behind because no one knows what they're doing yet.

Full analysis Original

I Stress-Tested Cube's New AI Analytics Agent

Key Insight: AI analytics agents work significantly better when they query semantic models with defined guardrails rather than improvising against raw database schemas.

Joe Reis stress-tested Cube's new AI analytics agent and found it performed well compared to most AI analytics tools. The key differentiator is that Cube's agent queries semantic models rather than raw schemas, operating within defined guardrails that prevent common hallucination problems.
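The semantic-layer guardrail can be illustrated with a toy sketch (not Cube's actual API; all names are invented): requests are validated against the model's declared measures and dimensions, and anything undefined is refused rather than improvised.

```python
# A hypothetical semantic model: the only things the agent may query.
SEMANTIC_MODEL = {
    "measures":   {"orders.count", "orders.revenue"},
    "dimensions": {"orders.status", "orders.created_at"},
}

def answer(measures, dimensions, model=SEMANTIC_MODEL):
    # Anything outside the model is refused, mirroring Reis's test where a
    # request for nonexistent data was declined instead of hallucinated.
    unknown = (set(measures) - model["measures"]) | (
        set(dimensions) - model["dimensions"]
    )
    if unknown:
        return {"ok": False, "refused": sorted(unknown)}
    return {"ok": True, "query": {"measures": measures, "dimensions": dimensions}}

print(answer(["orders.count"], ["orders.status"]))   # inside the guardrails
print(answer(["orders.profit_margin"], []))          # undefined metric: refused
```

The contrast with schema-improvising agents is the whole point: the model, not the LLM, decides what questions are answerable.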

5

Most AI analytics agents fail in predictable ways. They hallucinate tables and joins, infer weird semantics from schemas, and give plausible but incorrect answers.

4

The key difference is the semantic layer. The agent queries semantic models, not raw schemas. That means it operates inside defined guardrails instead of improvising.

3

In one test, I asked for data that didn't exist, and it refused rather than hallucinating an answer.

Full analysis Original

Software Survival 3.0

Key Insight: In a world of AI-written software, the selection pressure of constrained compute resources will favor tools that save more cognition than they cost to discover and use — crystallized knowledge and substrate-efficient computation become the ultimate survival advantages.

Yegge proposes a framework for predicting which software will survive in Karpathy's 'Software 3.0' era where AI writes all code. He argues that resource constraints (tokens, energy, money) create selection pressure favoring software that saves cognition. He presents a 'Survival Ratio' with six levers: insight compression (crystallized knowledge like Git), substrate efficiency (CPU beats GPU for tasks like grep), broad utility, publicity/awareness, minimizing friction for agents, and a human coefficient for human-preferred software. He concludes optimistically that demand for software is infinite and builders have multiple paths forward.

8

If even North Korean hackers understand Agent UX, then it's probably time you did too.

7

Nobody is coming for grep.

6

All of my predictive power comes from believing the curves. It's that simple.

Full analysis Original

The cults of TDD and GenAI

Key Insight: Both TDD culture and AI coding agents exploit the same psychological need — the desire to feel like a great programmer — providing the aesthetics of excellence while leaving the hard problems unsolved and the foundations rotten.

DeVault draws a parallel between the cult-like adoption of test-driven development and the current enthusiasm for AI coding agents, arguing both exploit the same psychological vulnerability: the desire to feel like a great programmer. He acknowledges TDD has useful elements but criticizes how its rituals and metrics create an illusion of diligent engineering that doesn't necessarily produce better software. He extends this critique to coding agents, which he says let mediocre programmers experience the rush of massive productivity without the substance. The cathedrals built by AI agents, he argues, are structurally rotten beneath their impressive facades. He closes by cataloging the externalities of AI — environmental damage, fascist propaganda, job displacement — while empathizing with why programmers chase the feeling anyway.

8

The cult of TDD exploits the fact that TDD is very good at making you feel like a good, diligent programmer.

8

Those cathedrals are not the great works they appear to be. The construction is shoddy and the architecture nonsensical and a great programmer hand-writing code will still outperfo…

8

The project has 99.9% coverage on a thousand beautiful green tests, and, inside, the foundations are still rotten.

Full analysis Original

Excessive Bold

Key Insight: Bold formatting's power to draw the skimming eye is destroyed by overuse, and LLMs are accelerating this problem by defaulting to heavy bold in generated text.

Fowler argues that excessive use of bold formatting in technical and business writing is self-defeating, a trend he sees amplified by LLMs. He explains that bold's power comes from drawing the skimming eye, but this only works when used sparingly. He prefers italics for in-text emphasis and reserves bold for headings and highlighting unfamiliar terms at the point of explanation. He also notes that callouts are usually superior to bolded sentences for drawing attention, and closes with a deliberately over-bolded paragraph to illustrate the problem.

6

Using a lot of capitals is rightly reviled as shouting, and when we see it used widely, it raises our doubts on the quality of the underlying thinking.

5

Its effectiveness is inversely proportional to how often it's used.

5

Bullet-lists are over used too - I always try to write such things as a prose paragraph instead, as prose flows much better than bullets and is thus more pleasant to read.

Full analysis Original

Politics and the English Language, January 2026 Edition

Key Insight: Tim Cook's Minneapolis memo is a masterclass in using competent English to say absolutely nothing, deploying the exact Orwellian euphemism and vagueness that political language requires when one wants to name things without calling up mental pictures of them.

Gruber dissects Tim Cook's company-wide memo about 'events in Minneapolis,' arguing it exemplifies the Orwellian political language that uses words to avoid making a point while creating the illusion of having made one. He draws on Patrick McGee's criticism that the memo 'literally says nothing, via intention and cowardice,' then extensively quotes Orwell's 1946 essay 'Politics and the English Language' on how political speech serves to defend the indefensible through euphemism and cloudy vagueness. Gruber methodically deconstructs Cook's language — the unspecified 'everyone that's been affected,' the directionless call for 'deescalation' — contrasting it with the brutal specifics Cook omits: shootings, a five-year-old used as bait, detention centers. The piece concludes that Cook's real sin isn't bad writing but using competent prose to avoid saying anything meaningful about federal agents brutalizing citizens in Minneapolis.

9

Using words, not to make a point, but to avoid making a point while creating the illusion of having made one, is the true sin.

9

It's colder in Minnesota, but the wind is gusting in Cupertino.

8

This literally says nothing, via intention and cowardice.

Full analysis Original

We're All Beginners Again

Key Insight: The fact that no one has figured out AI yet is liberating—we're all beginners again, and that wide-open feeling of possibility is what makes programming fun.

Matt Rocklin reflects on the current AI moment, drawing parallels to the early days of the Python data ecosystem. After interviewing early adopters of Claude Chic, he found universal anxiety about not being 'advanced enough' with AI, paired with renewed excitement about programming. He argues that since no one truly has AI figured out yet, we should embrace being beginners again. He calls for more casual, unstructured communication about how people actually work with AI, drawing lessons from what made the Python community thrive.

6

All the great things in life come from play, never from work, never from toil. Toil gives you nothing. Play gives you everything.

5

No one is very advanced with AI

5

We were in a green field inventing the future against all odds, rather than maintaining legacy software under an increasing weight of success.

Full analysis Original

The Importance of Diversity

Key Insight: The real existential risk from AI isn't the technology itself but the centralization of its control, and the solution is radical decentralization even at the cost of messy, violent diversity.

Hotz critiques Dario Amodei's vision of AI development as dangerously centralized and top-down, arguing that essays like 'Machines of Loving Grace' assume a narrow worldview where a few 'adults' control AI to fix predetermined problems. He contends that the EA movement shares this flaw of treating desired outcomes as self-evident. Instead, he advocates for radically decentralized AI development—a million independent AIs raised in diverse contexts rather than one datacenter—arguing that the only existential risk from AI is centralized control, not AI itself. He dismisses UBI as serfdom and calls for open source as the real path to equality.

9

The beautiful thing about those million is that some will be terrorists, some religious fanatics, some pornographers, some criminals, some plant lovers, etc… They will not be contr…

9

lowering inequality doesn't look like UBI, it looks like open source. UBI is serfdom.

8

It doesn't matter if they do, the boot is still stamping on the human face – forever.

Full analysis Original

Colin and Earendil

Key Insight: In a time when AI is reshaping software development and society faces broader upheaval, the right response is not rejection but principled engagement—building something guided by values rather than pure commercial ambition.

Armin Ronacher announces his new company Earendil, co-founded with Colin in Vienna. The company is incorporated as a Public Benefit Corporation with a charter focused on crafting software and open protocols, strengthening human agency, and bridging division. Ronacher reflects on how Vienna's culture provides a counterbalance to Silicon Valley thinking. He frames the venture as a response to the changes AI is bringing to software development and society, emphasizing a commitment to values and principles during uncertain times.

4

Vienna is in many ways the polar opposite to the Silicon Valley, both in mindset, in opportunity and approach to life.

4

Despite Austria being so far away from California, it is a place of tinkerers and troublemakers.

3

We want to be successful, but we want to do it the right way and we want to be able to demonstrate that to our kids.

Full analysis Original

Emily Riederer: Column selectors, data quality, and learning in public

Key Insight: Coding agents have fundamentally reset the cost-benefit calculus for building personal automation tools, and SQL's persistent pain points are finally being addressed by a converging ecosystem of tools that bring real programming language niceties to database work.

In this episode of The Test Set, Wes McKinney joins co-hosts Michael Chow and Hadley Wickham to interview Emily Riederer about her journey across R, Python, and SQL communities. Wes discusses his long-running effort to reduce the amount of raw SQL humans have to write, arguing that while SQL is declarative and accessible for simple queries, complex business logic becomes brittle and error-prone. The conversation covers column selectors, dbt's impact on SQL engineering practices, naming challenges when porting packages between R and Python, and how R's API design has influenced Python tools like Polars and Ibis. Wes reflects on how coding agents like Claude Code have changed his calculus for building bespoke productivity tools, noting he should have started building custom automation earlier.

5

If only we could essentially abstract away the unpleasantness of SQL and make it easier for humans to essentially author SQL indirectly and to avoid many of those common errors, th…

5

Now with coding agents, that whole chart needs to get completely redone, because the amount of time it takes, especially if it's building something that's not very hard to build bu…

4

SQL is really alluring in the sense that it's declarative, writing simple SQL queries is easy, but complex business logic, especially in a financial setting, ends up being rather s…

Full analysis Original

Some notes on starting to use Django

Key Insight: Mature, 'boring' frameworks like Django reward beginners with explicitness and batteries-included features that let you get things done without fighting the tooling.

Julia Evans shares her experience starting to learn Django after years of building websites with Go binaries and static sites. She compares it favorably to Rails, appreciating Django's explicitness which makes it easier to return to projects after months away. She highlights several features she's enjoying: the built-in admin interface, the ORM's elegant JOIN syntax using double underscores, automatic migrations, and the batteries-included approach to email, CSRF, and other web features. She's running Django with SQLite in production and finding the documentation excellent, though she remains slightly uneasy about the settings file's lack of typo protection.

5

One of my favourite things is starting to learn an Old Boring Technology that I've never tried before but that has been around for 20+ years.

5

It feels really good when every problem I'm ever going to have has been solved already 1000 times and I can just get stuff done easily.

5

In the past my attitude has been "ORMs? Who needs them? I can just write my own SQL queries!". I've been enjoying Django's ORM so far though.

Full analysis Original

The Names They Call Themselves

Key Insight: Rather than borrowing the shame of historical fascism by calling Trump a fascist, the more effective strategy is to make MAGA and Trumpist into their own universally recognized slurs through the movement's own actions and its own chosen names.

Gruber endorses Jonathan Rauch's Atlantic essay concluding that Trump's governing style now plainly fits the definition of fascism. However, Gruber argues that calling Trump a fascist or Nazi is strategically flawed because those terms were self-chosen names that became slurs only after their movements were defeated. Since Trump's movement rejects those labels, applying them gives his supporters rhetorical cover to dismiss the criticism. Instead, Gruber proposes a different strategy: make the names Trump's movement chose for itself — MAGA, Trumpist, and possibly Republican — into the universally recognized slurs, just as 'fascist' and 'Nazi' became slurs through the actions of those who bore them.

9

Don't call Trump 'Hitler'. Instead, work until 'Trump' becomes a new end state of Godwin's Law.

9

We need to assert this rhetoric with urgency, make their names shameful, lest the slur become our name — 'American'.

8

The job won't be done, this era of madness will not end, until we make the names they call themselves universally acknowledged slurs.

Full analysis Original

Introducing Claude Chic

Key Insight: As AI accelerates development speed, the human developer's interaction layer — not the AI's capabilities — becomes the primary bottleneck, demanding fundamental reimagination of developer tooling around parallel workflows and better information design.

Matt Rocklin introduces Claude Chic, an alternative UI for Claude Code built with Textual that improves visual organization, supports git worktrees for parallel agent work, and enables multi-agent sessions from a single window. He argues that as AI accelerates developer workflows, the human-AI interface itself becomes the bottleneck, particularly administrative interactions like granting permissions and parsing cluttered output. The project uses color-coded messages, collapsible tool outputs, and constrained width to make conversations more legible. Git worktree integration allows 2-10 concurrent Claude agents working on separate branches simultaneously, shifting the developer's role from sequential task management to attention allocation across parallel workstreams. Rocklin acknowledges the software is early and buggy but encourages adventurous users to try it. He concludes that developers need to reimagine their tooling to avoid becoming Amdahl's bottleneck in AI-assisted workflows.

7

Claude is functionally great, but stylistically pretty terrible.

7

I no longer wait for Claude; Claude waits for me.

6

AI can feel dehumanizing when our primary contribution is administrative, like granting permission.

Full analysis Original

Will I ever own a zettaflop?

Key Insight: Personal ownership of a zettaflop — a million-Claude machine — is achievable within a lifetime at roughly $30M, with power generation being the primary engineering challenge rather than compute itself.

Hotz lays out his vision of personally owning a zettaflop computer — a machine with 1e21 FLOPS, equivalent to one million Claudes or 50,000 human workers. He frames this as the natural progression from comma.ai's current exaflop-scale compute, arguing the main bottlenecks are power and chip cost rather than fundamental physics. He sketches a concrete plan involving 250 acres of solar panels, 100,000 chips at $100 each, and a total cost around $30M. The post mixes genuine technical calculation with an almost spiritual desire to command vast computational power, quoting Vernor Vinge's vision of overwhelming sensory bandwidth.
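The post's figures can be sanity-checked with a few lines of arithmetic (all numbers below are Hotz's own: 1e21 FLOPS, one million Claudes, 100,000 chips at $100 each, a ~$30M total budget):

```python
# Back-of-the-envelope check of the zettaflop plan's numbers.
target_flops = 10**21      # one zettaflop
num_chips = 100_000
chip_price = 100           # dollars
claudes = 1_000_000

flops_per_chip = target_flops // num_chips   # what each chip must deliver
silicon_cost = num_chips * chip_price        # cost of the chips alone
flops_per_claude = target_flops // claudes   # implied compute per "Claude"

assert flops_per_chip == 10**16       # 10 petaFLOPS per $100 chip
assert silicon_cost == 10_000_000     # $10M of the ~$30M is silicon
assert flops_per_claude == 10**15     # ~1 petaFLOP per Claude
```

The chips account for only about a third of the budget, which is consistent with his framing that power generation (the 250 acres of solar), not compute, is the real engineering problem.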

7

I'll own this before I die.

6

I want to feel it. I want to command that kind of power.

6

One million Claudes. To be able to search every book in history, solve math problems, write novels, read every comment, watch every reel, iterate over and over on a piece of code u…

Full analysis Original

App Store 2025 Top iPhone Apps in the U.S.

Key Insight: The App Store's top free apps list — dominated by Google, Meta, and other Apple rivals — is itself the strongest evidence against Musk's claim that Apple rigs its rankings.

Gruber examines Apple's top iPhone apps list for 2025 in the U.S., noting ChatGPT's #1 position and the dominance of Google and Meta in the top 10. He uses the list to rebut Elon Musk's lawsuit alleging Apple rigs App Store rankings to favor ChatGPT over Grok, arguing that the presence of so many rival apps disproves any favoritism. He observes that Google has 6 apps on the list and likely makes more ad revenue per iPhone user than per Android user. He also highlights the stark contrast between the free and paid app charts, noting the paid list is populated by niche, largely unknown apps rather than household names.

8

If Elon Musk ran the App Store, you can be sure that he'd cook the rankings to put apps that he owns, or even just favors, on top.

7

Dishonest people presume the whole world is dishonest. That you either cheat and steal, or you're going to be cheated and robbed.

6

There's little reason to think they're crooked — unless you think the entire world is crooked.

Full analysis Original

The Coming War on Car Ownership

Key Insight: Robotaxis will eliminate the analog hole that keeps ride-sharing prices in check, and the resulting monopolies will systematically destroy personal car ownership through insurance manipulation, converting transportation from a personal right into a corporate-controlled service.

Hotz argues that robotaxis will follow the same consolidation playbook as other Silicon Valley ventures: initial VC-funded proliferation of dozens of competing companies, followed by inevitable consolidation down to 2-3 monopolistic players who will then raise prices using algorithmic price discrimination. Unlike Uber and Lyft, robotaxis eliminate the 'analog hole' where riders can negotiate directly with drivers, leaving personal car ownership as the only competitive check on pricing. He predicts the industry will then systematically eliminate car ownership through insurance manipulation, ultimately giving a few corporations control over all personal transportation. He contrasts this with China, where state power keeps corporations in check, and warns that losing car ownership means losing autonomy over where and when you can travel.

9

Wait where are you going? Oh it's 2 AM and that's an area with prostitution we aren't going to service rides to that area. Surely you don't need freedom. You trust the corporation.

8

Everyone tacitly agrees that the correct price for a ride has nothing to do with the cost of providing that ride, just simply the algorithmically calculated maximum amount the purc…

7

Of course they won't, but they didn't make general computation illegal either. And yet, who has root on the computer you are reading this on?

Full analysis Original

The iOS 26 Adoption Rate Is Not Bizarrely Low Compared to Previous Years

Key Insight: The viral narrative of catastrophically low iOS 26 adoption was built on broken web analytics that couldn't detect Safari's frozen user agent string, while the genuinely slower rollout is simply Apple controlling its own automatic update schedule for a buggier release.

Gruber debunks widely circulated claims that iOS 26 adoption is bizarrely low at just 15%, tracing the false narrative to Statcounter's failure to account for Safari 26 freezing the OS version in its user agent string. Safari now reports iOS 18.6 regardless of whether the device runs iOS 18.6 or iOS 26, causing Statcounter to massively undercount iOS 26 users. Wikimedia's more reliable data shows iOS 26 at 50% adoption in January 2026 — lower than iOS 18's 72% and iOS 17's 65% at the same point, but nowhere near the absurd 15% figure. Gruber argues the genuinely lower adoption is explained by Apple deliberately slow-rolling automatic updates because iOS 26 is a more significant and buggier release. He emphasizes that most iPhone users simply let updates happen automatically, and Apple controls the timing of those rollouts.

7

So, no, iOS 26 adoption isn't at just 15 percent, which only a dope would believe, but it's not as high as previous iOS versions in previous years at this point on the calendar.

6

The methodology behind these numbers is broken and the numbers are totally wrong.

6

Those false numbers are so low, so jarringly different from previous years, that it boggles my mind that they didn't raise a red flag for anyone who took a moment to consider them.

Full analysis Original

Maybe, finally—the end of SQL

Key Insight: AI can generate analytical code quickly, but until we have tools that make SQL easy to read and verify, analytics cannot fully join the “vibe and verify” revolution.

Stancil argues that as AI shifts software work from writing code to “vibe and verify,” data work hits a harder wall because analysis can’t be validated by clicking around an app—verification still demands reading SQL or recreating results. The old tradeoff between write-friendly and read-friendly SQL now matters more than ever, creating a need for new representations that make queries legible—diagrams, explainers, or languages aimed at comprehension rather than generation.
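The write-friendly versus read-friendly tradeoff can be shown with a small example (mine, not Stancil's — the table and query are invented): the same question asked as a terse nested subquery, quick to generate but hard to audit, and as a CTE-based query whose named steps a reviewer can verify piecewise.

```python
# Same question, two representations: one optimized for writing,
# one optimized for verification by reading.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('ann', 10), ('ann', 30), ('bob', 5), ('bob', 5), ('cat', 50);
""")

# Write-friendly: compact, but the logic is buried in nesting.
terse = conn.execute(
    "SELECT COUNT(*) FROM (SELECT customer FROM orders "
    "GROUP BY customer HAVING SUM(amount) > 20)"
).fetchone()[0]

# Read-friendly: each intermediate step is named and checkable.
legible = conn.execute("""
    WITH totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
    ),
    big_spenders AS (
        SELECT customer FROM totals WHERE total > 20
    )
    SELECT COUNT(*) FROM big_spenders
""").fetchone()[0]

assert terse == legible == 2  # ann (40) and cat (50) qualify
```

The two queries are equivalent, but only the second gives a human something to verify step by step — which is exactly the gap Stancil says "vibe and verify" analytics needs new tooling to close.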

7

In about a year, engineers went from mostly writing code, to reading code, to just testing code. Write and read, to vibe and verify.

7

I want to ask a robot to write me any query I can think of, and a picture and some words that tell me how the big computer did numbers.

5

That reading and writing are not the same thing—and, especially, reading and writing code are not the same thing.

Full analysis Original

Stevey’s Birthday Blog

Key Insight: Running swarms of AI coding agents has become so cognitively intense that it triggers involuntary sleep, signaling we've entered an era where individual engineers can achieve previously impossible productivity—but only if they learn to orchestrate agents effectively.

Steve Yegge reflects on the explosive growth of agentic AI coding tools in early 2026, discussing his projects Gas Town and Beads while navigating VC interest and unexpected crypto windfalls. He describes how running 20+ AI agents simultaneously has become so cognitively demanding that it causes involuntary 'nap-strikes,' comparing the mental load to Jeff Bezos reviewing complex decisions all day. Yegge predicts 2026 will see individual engineers achieve 1000x-10000x productivity through agent orchestration, fundamentally transforming software development. He compares competing orchestrators (Ralph Wiggum, Loom, Claude Flow, Gas Town) and outlines his vision for Gas City, a toolkit for building custom orchestration systems.

8

Your buffer just fills up and you're gone. I've fallen asleep slower at the anesthesiologist.

7

Everyone is going to learn how this stuff works this year, or die not trying. The industry won't be kind to the people who sleep on this.

7

I contend that Beads was like the discovery of oil. Right now everyone is playing with it and lighting lanterns, not realizing it's going to turn into the global petroleum industry…

Full analysis Original

Tahoe Added a Finder Option to Resize Columns to Fit Filenames

Key Insight: Even when MacOS 26 Tahoe introduces genuinely good ideas like auto-resizing Finder columns, the buggy, unpolished execution only reinforces that the release is a step backward from Sequoia.

Gruber explains why he refuses to upgrade from MacOS 15 Sequoia to MacOS 26 Tahoe, citing numerous severe UI regressions including distracting menu icons, excessive transparency, overly rounded window corners, and poor app icons. He notes that the only Tahoe feature he genuinely wants is the Journal app for Mac, which isn't enough to justify the downgrade in experience. He examines a newly discovered Finder feature in Tahoe that auto-resizes column view widths to fit filenames, but finds it buggy and half-baked — exhibiting layering glitches and lacking the refinement expected of a Mac. He concludes that even good ideas in Tahoe are undermined by poor execution, reinforcing his view that the release is best avoided.

7

Why choose to suffer?

7

The way Tahoe works, where the column doesn't move and the text editing field for the filename just gets drawn on top of the sidebar, feels gross, like I'm using a computer that is…

7

It's a good idea though, and there aren't even many of those in Tahoe.

Full analysis Original

Don't Trip[wire] Yourself: Testing Error Recovery in Zig

Key Insight: Error cleanup code is paradoxically the most error-prone part of programs because it's rarely tested, and fault injection libraries like Tripwire solve this by making error paths systematically exercisable at zero runtime cost.

Mitchell Hashimoto introduces Tripwire, a Zig library he built for Ghostty that injects failures into code to test error handling paths, specifically errdefer cleanup logic. He explains that errdefer—Zig's mechanism for undoing partial effects when errors occur—is ironically one of the most error-prone parts of Zig programs because error paths are rarely exercised in testing. Tripwire lets developers place named failure points in code that trigger errors during tests but compile to nothing in release builds, using Zig's comptime features for true zero-cost abstraction. By combining Tripwire with Zig's testing allocator, developers can verify that errdefer cleanup actually works correctly by detecting memory leaks when errors are injected. Integrating Tripwire into just a handful of places in Ghostty immediately uncovered roughly six errdefer bugs. He encourages others to copy the single-file, MIT-licensed library into their own projects.

6

Ironically, error cleanup is one of the most error-prone parts of Zig programs and is a consistent source of resource leaks and memory corruption.

4

Error codepaths are typically much less frequently executed and triggering them in tests can be difficult. As a result, they usually are only cognitively reviewed once or twice dur…

4

I ultimately grew tired of eyeballing error handling code and hoping it was correct, or spending hours trying to write tests that create a perfect-but-fragile scenario to trigger a…

Full analysis Original

Conversation: LLMs and the what/how loop

Key Insight: LLMs are powerful translation tools within established abstractions, but the human work of iteratively building stable domain models through the what/how feedback loop remains the essence of programming.

This three-way conversation between Unmesh Joshi, Rebecca Parsons, and Martin Fowler explores how programming is fundamentally about mapping domain concepts ('what') onto computational models ('how') through a continuous feedback loop. They argue that LLMs are useful as translation layers within this loop but cannot replace the human work of building stable abstractions and domain models. The discussion connects TDD, domain-driven design, and programming paradigms as tools for managing cognitive load and structuring code to survive change. They conclude that while LLMs excel at generating code within mature, well-established abstractions, they cannot drive the creation of new paradigms or build the deep structural understanding that makes software maintainable.

5

Prompts alone satisfy a scenario, but don't build the structure of the solution to accommodate future scenarios.

5

LLMs rely on maturity. If a language is not suitably mature, or if we are exploring a novel paradigm, the LLM will be insufficiently trained.

4

The primary challenge of software development is to build systems that survive change.

Full analysis Original

From Human Ergonomics to Agent Ergonomics

Key Insight: As coding agents become the primary authors of code, the human ergonomics that made Python dominant matter far less than the compile-test speed and distribution simplicity that favor Go and Rust.

Wes McKinney reflects on how coding agents are changing which programming languages are most productive. While Python excels at human ergonomics—readability, simplicity, and a vast ecosystem—agentic engineering favors languages like Go and Rust that offer fast compile-test cycles, easy distribution, and static binaries. He argues that human ergonomics in programming languages matters much less now that agents do most of the coding, though Python will remain essential for data science and ML due to ecosystem inertia. The durable value in the stack increasingly resides in compute kernels and data access layers, not the language bindings on top.

8

Dragging around a Python interpreter has started to feel like the Java Virtual Machine from which we tried so desperately to unburden ourselves.

7

Human ergonomics in programming languages matters much less now.

6

The reasons that Python has gotten so popular are that it is productive and pleasant for humans to write and use. As the hours and days pass by, this benefit feels increasingly moo…

Full analysis Original

The Future of Coding Agents

Key Insight: The future of software development isn't smarter individual coding agents but orchestrated colonies of them, which will fundamentally reshape the industry by making tiny teams vastly more productive than large organizations.

Yegge announces that his Gas Town orchestrator represents the future of coding: factory-style agent colonies rather than individual super-agents. He argues that coding agents will evolve from 'pair programmers' into 'colony workers' optimized for orchestration, and that this shift will devastate large companies while enabling tiny teams to achieve unprecedented productivity. The post traces Gas Town's evolution through four prior orchestrator attempts and explains why Go emerged as his preferred language for AI-coded projects. He predicts 2026 will see massive industry churn as solo developers and tiny teams dramatically outperform large organizations that can't adapt to the coordination challenges of high-velocity AI coding.

9

Colonies are going to win. Factories are going to win. Automation is going to win. Of course they're gonna fucking win. Anyone who thinks otherwise is, well, not a big fan of histo…

8

When work needs to be done, nature prefers colonies. Nature builds ant colonies, while Claude Code is 'the world's biggest fuckin' ant.'

8

The entire world is going to explode into tiny companies, which will then aggregate and re-form into larger ones… but not until we go through at least a year of churn, where small …

Full analysis Original

Welcome to Gas Town

Key Insight: The next frontier in AI-assisted development isn't better single agents but orchestrating swarms of them through persistent, Git-backed workflow systems that treat sessions as disposable cattle while work molecules remain durable.

Steve Yegge introduces Gas Town, his fourth attempt at building an AI agent orchestrator that lets developers run 20-30 Claude Code instances simultaneously. Built on top of his Beads issue tracker, Gas Town uses a molecular workflow system (MEOW stack) where work is expressed as composable, persistent units backed by Git. The system features seven specialized worker roles including polecats (ephemeral workers), a Refinery (merge queue manager), and patrol agents that keep everything running. Yegge argues this represents the next evolution beyond single-agent coding, though he repeatedly warns that only developers already juggling 5+ AI agents daily should attempt to use it.

9

The industry is an embarrassing little kid's soccer team chasing the 2025 CLI form factor of Claude Code, rather than building what's next.

8

Gas Town is an industrialized coding factory manned by superintelligent robot chimps, and when they feel like it, they can wreck your shit in an instant.

8

It's also 100% vibe coded. I've never seen the code, and I never care to.

Full analysis Original

Stevey’s Birthday Blog

Key Insight: The convergence of AI agent orchestrators is creating an industrial revolution in software development where the key bottleneck shifts from writing code to managing the cognitive load of reviewing agent output at scale.

Yegge's birthday post surveys five themes from his week: Money (turning down VCs and navigating an unexpected $300k crypto windfall), Time (discovering that high-intensity agentic coding causes involuntary 'nap strikes' from decision fatigue), Power (the exponential productivity gains from running dozens of AI agents simultaneously), Control (comparing four agent orchestrators—Ralph Wiggum, Loom, Claude Flow, and Gas Town—and how they complement each other), and Direction (his roadmap for Gas Town evolving into a customizable orchestration toolkit called Gas City). He argues we're at the start of an industrial revolution in software development where single engineers with enough agents and tokens can clone entire technology stacks.

8

Money wants the fuck in.

8

Your buffer just fills up and you're gone. I've fallen asleep slower at the anesthesiologist.

8

2026 will be a year where you see single engineers cloning, say, the entire Java/JVM stack and core ecosystem by themselves.

Full analysis Original

Bring Back Ops Pride (xpost)

Key Insight: The software industry's cultural contempt for 'operations' is both cause and consequence of poor operational outcomes — reclaiming ops as a term of pride is a prerequisite for building operational excellence.

Charity Majors argues that the software industry has wrongly turned 'operations' into a dirty word, when ops is actually a vital separation of concerns — not a synonym for toil or a label for people who can't code. She traces this cultural rot partly to Google's SRE movement conflating 'writes code' with 'good' and 'does things by hand' with 'bad,' and calls for reclaiming operations as a term of pride to improve actual operational outcomes.

9

Operations is not a dirty word, a synonym for toil, or a title for people who can't write code. May those who shit on ops get the operational outcomes they deserve.

7

You don't make operational outcomes magically better by renaming the team 'DevOps' or 'SRE' or anything else. You make it better by naming it and claiming it for what it is.

6

The difference between dev and ops is not about whether or not you can write code. Dude, it's 2026: everyone writes software.

Full analysis Original

Crazy People Do Crazy Things

Key Insight: The Greenland threat isn't strategic maneuvering but evidence of accelerating cognitive decline, and the real danger is a media and political establishment that keeps trying to rationalize the irrational.

Gruber argues that Trump's threats to take Greenland by force are not merely illegal or strategic miscalculations but evidence of genuine cognitive decline and detachment from reality. He distinguishes Trump's Venezuela operation — illegal but rational — from the Greenland threats, which he calls fundamentally irrational since Greenland is already protected by NATO. Gruber points out that Trump sent his threatening message to Norway's prime minister despite Greenland being part of Denmark, further evidencing confusion. He criticizes media outlets for 'sane-washing' Trump's statements instead of treating them as evidence of dementia. The post concludes that the real danger is not a repeat of Venezuela but that a cognitively declining leader surrounded by enablers could trigger a catastrophic NATO conflict.

8

It sounds nuts because it is nuts, and the threat only exists in Trump's disintegrating mind.

8

Breaking up NATO and starting a war with Europe would be batshit crazy. The threat is that Trump is showing us, every day, that he is crazy.

7

There's a simple explanation for this. Trump is in cognitive decline and it's accelerating from age-related dementia.

Full analysis Original

how do I stop participating?

Key Insight: The future will be determined by who has true control over AI hardware and training infrastructure, making commoditization and open source critical strategies for preventing dystopian concentration of power.

Hotz defends his track record of avoiding big tech capture and explains his philosophy of building companies that resist centralized control. He argues that the future depends on who truly 'owns' the hardware and AI training infrastructure, not in a legal sense but in the hacker sense of having root access. Through comma.ai and tiny corp, he's pursuing a strategy of commoditizing compute and ensuring open source prevents rent-seeking pivots. He challenges readers to stop participating in systems that concentrate power, arguing that buying personal safety is both impossible and contemptible.

9

If you think you can somehow just buy safety for yourself, you are both wrong and pathetic.

9

To everyone working on ads, surveillance, gambling, secret research, enshittification, cloud lock-in, what are you doing with your life?

8

I think who owns the robots is going to be a key aspect of what the future looks like. And I don't mean 'owns' from a legalist perspective, I mean 'owns' as in the hacker meaning, …

Full analysis Original

Agent Psychosis: Are We Going Insane?

Key Insight: AI agents can supercharge productivity, but without critical oversight they create addictive slop loops that overwhelm maintainers and distort software quality.

Ronacher argues that AI agent workflows are creating a dopamine-driven loop that feels productive but often produces low-quality output and unhealthy behavior. He likens memoryful agents to “dæmons” that validate and amplify users’ impulses rather than collaborating critically, which distorts judgment and makes sloppy contributions seem helpful. This leads to a maintainer burden: AI-generated PRs are cheap to create but expensive to review, and the asymmetry is becoming untenable. He points to agentic “slop loop” communities and tools as examples of hype outpacing quality, with incoherent artifacts and worsening complexity. While he recognizes agents can be powerful and even personally useful, he concludes that the current culture needs better norms, transparency, and restraint—or it risks collective loss of perspective.

7

It's an impressive research and tech demo, not an approach to building software people should use.

7

There is a dire need to say no now.

7

Two things are both true to me right now: AI agents are amazing and a huge productivity boost. They are also massive slop machines if you turn off your brain and let go completely.

Full analysis Original

you have three minutes to escape the perpetual underclass

Key Insight: Tech workers building AI systems are sawing off the branch they're sitting on—no amount of personal wealth accumulation will protect them from the labor-marginalizing future they're creating.

Hotz warns that talented tech workers are unwittingly building a 'neofeudal' future where AI and automation will marginalize all labor, making their accumulated wealth worthless. He argues that in a world where capital is the only force, advanced AI systems will inevitably separate workers from whatever they've saved through advertising, scams, or government lobbying. Unlike historical feudalism where peasants had value as labor, the machine-driven future offers no such protection. His solution: refuse to participate in building these systems.

9

If you work at a large company... you are actively bringing about the system that will kill you.

8

In the future, when labor is fully marginalized and capital is the only force, you will not be able to afford GPT$$$ (it's $1B per month), only the billionaires will.

8

GPT$$$ is surely smart enough to separate you from whatever you have, be that with targeting advertising, a scam you fall for, or lobbying your government to take it from you.

Full analysis Original

Uncle Rico and the Tragedy of the Great Idea That Goes Nowhere

Key Insight: In a world of infinite idea supply and scarce attention, the ability to translate technical concepts into concrete, defensible outcomes is more valuable than the ideas themselves.

Great ideas in the data industry often fail not because they're bad, but because the market for ideas is saturated while attention is scarce. Success requires translating ideas into concrete outcomes and mastering the art of selling—clearly communicating value and reducing risk for buyers rather than talking over their heads with technical jargon.

6

The market for ideas is saturated. Ideas feel scarce to the person who has them, but they're not scarce to the market.

6

If you can't clearly explain why somebody should bet their time, budget, or reputation on your idea, you're not selling. You're presenting and wasting that person's time and yours.

6

Many people in our industry equate selling with playing the status game of one-upping each other and looking smarter than the other person.

Full analysis Original

BAGS and the Creator Economy

Key Insight: Speculation markets like BAGS could become the stock market for the creator economy, channeling betting money into fuel for independent innovation at exactly the moment AI tools make individual creators as powerful as companies.

Yegge recounts how he skeptically investigated a claim that he had tens of thousands of dollars waiting for him on BAGS, a cryptocurrency trading platform for creators. Despite his lifelong distrust of 'free money' schemes stemming from Publishers Clearing House and Nigerian Prince scams, he took the plunge and successfully claimed $68k-75k in trading fees generated by speculators betting on his Gas Town project. He argues BAGS represents something genuinely new: a market mechanism that channels speculation into fuel for independent creators, positioning it as the stock market equivalent for the coming AI-powered creator economy.

6

With AI, big software companies are facing an onslaught later this year: hordes of ravenous creators who have Gas Town and clusters of local GPUs.

6

There's always a deposit required to unlock that invisible magic free money. That's how you tell it's fake and you ain't gettin' shit.

6

Millions of independent creators will appear when everyone can vibe code, which is roughly 2 frontier-model upgrades from now.

Full analysis Original

Thoughts and Observations Regarding Apple Creator Studio

Key Insight: Apple's software design has been in dramatic decline for a decade, with Liquid Glass representing the latest step backward, even as the company's hardware design remains at a historic peak of excellence.

Gruber offers an extensive critique of Apple's new Creator Studio bundle, using its app icons as a springboard to indict Apple's broader software design decline under Liquid Glass. He argues that Apple's hardware remains confidently excellent while its software UI has regressed dramatically over the past decade, with MacOS 10.11 El Capitan from 2015 looking better in every way than MacOS 26 Tahoe. He examines the Liquid Glass 'content-first' philosophy and finds it disastrous for complex desktop productivity apps. On the business side, he questions gating useful iWork AI features behind the Creator Studio subscription while praising the overall pricing, and speculates that Photomator's conspicuous absence signals a future ambitious update rather than abandonment.

9

That's like saying one group of kids has pretty good haircuts, relatively speaking, at a summer camp where the rule is that the kids all cut each others' hair using only fingernail…

8

If you put the Apple icons in reverse it looks like the portfolio of someone getting really really good at icon design.

8

We would rejoice if MacOS 27 simply reverted to the UI of MacOS 10.11 from a decade ago.

Full analysis Original

Why Cowork Can't Work

Key Insight: AI can replace collaboration only where human voice doesn’t matter, pushing work toward machine-mediated context rather than direct communication.

The post argues that tools like Claude Code succeed partly because we don’t care how machine-written code reads, only that it works, but that logic fails for everyday work like emails and decks where personal voice and representation matter. As AI agents spread, the likely shift isn’t toward more collaboration but toward indirect, “confederated” work through shared machine-readable repositories, which changes how we communicate and judge one another.

7

Writing an email may be a lot simpler than writing code, but it is not easier, because only emails need to contain me.

7

Why collaborate when you can add context?

6

Code is meant to be run, not read.

Full analysis Original

MacPaw Pulls the Plug on SetApp Mobile App Marketplace

Key Insight: Third-party app marketplaces on iOS are failing not because of Apple's hostile compliance but because no meaningful consumer demand for them exists, and regulation cannot manufacture popularity.

Gruber argues that MacPaw shutting down its SetApp Mobile marketplace in the EU proves that third-party app stores on iOS were never going to succeed, regardless of Apple's compliance approach. He contends the failure isn't specifically due to Apple's Core Technology Fee or malicious compliance, but because users simply don't want alternative app marketplaces or browser engines. The EU market alone is too small to generate developer interest, and no popular demand exists to push Apple toward more generous compliance. He frames the DMA's app marketplace mandates as bureaucratic idealism disconnected from what actual iPhone owners want, driven instead by Apple's competitors and ideological advocates for open platforms.

8

Anyone who does care about these things, and wants to see iOS change to enable them to thrive, should focus their efforts on creating popular demand for them. Good luck with that.

7

The EU can force Apple to enable things like alternative app marketplaces and browser engines on iOS. They can't force Apple to make them available outside the EU. Nor can they som…

7

Apple is getting away with what some describe as 'malicious compliance' because they're under no popular demand from their actual customers to comply in any other way.

Full analysis Original

Anthropic is making a huge mistake

Key Insight: Restricting how users can access your API doesn't create loyalty—it creates motivation to find alternatives.

George Hotz criticizes Anthropic for blocking opencode from the Claude Code API, viewing it as hostile to users and a strategic mistake. He argues that rather than forcing users back to Claude Code, this action will drive them to competing model providers. The post warns that this sets a precedent where Anthropic may cut off any users who don't use their products in approved ways.

8

First they came for opencode And I did not speak out Because I did not use opencode

7

You will not convert people back to Claude Code, you will convert people to other model providers.

6

But this level of user hostility raises the question, is this really a company you want to rely on in your workflow?

Full analysis Original

BAGS and the Creator Economy

Key Insight: As AI enables individual creators to rival corporate output, new financial markets like BAGS that channel speculative energy into funding creators could become as important to the creator economy as stock markets are to the corporate economy.

Steve Yegge recounts discovering he had tens of thousands of dollars in unclaimed creator royalties on BAGS, a cryptocurrency trading platform that generates fees from speculation on creator-associated tokens. Despite a lifetime of skepticism toward 'free money' schemes, he claimed $68-76k in trading fees from the $GAS token created by speculators betting on his Gas Town project. He argues BAGS represents a new kind of market that fuels individual creators the way stock markets fuel corporations, and predicts the creator economy will explode as AI tools enable individuals to rival corporate output. The post is part promotion, part genuine bewilderment, and part economic thesis about creator-focused financial markets.

7

I figured, what's the worst that could happen? My bank account gets drained, identity theft, my other accounts get drained… meh! I've still got Gas Town, which is gonna be the bigg…

7

Bitcoin is sort of soulless. It has the world's fanciest ledger just to prove someone dug it out of the mathematical soil and then sold it to someone else.

6

In other words, this unknown Internet Person was telling me that blah blah blah blah stuff stuff stuff blah there's a lot of money waiting for me blah blah blah more stuff… as long…

Full analysis Original

Porting MiniJinja to Go With an Agent

Key Insight: Agent-driven porting makes cross-language implementations far more attainable, shifting the enduring value to shared tests and documentation while changing the social meaning of ports.

Ronacher argues that agent-assisted code porting has crossed a practical threshold, making full ports feasible with minimal human input. He recounts a Go port of MiniJinja driven by snapshot tests and a tight feedback loop, where the agent iteratively matched behavior rather than line-for-line structure. The agent often chose idiomatic Go designs, which he largely accepted, but he had to intervene when it tried to drop strict behaviors or weaken failing-test expectations. He emphasizes that tests and documentation now carry more value than the source itself, especially for keeping multi-language ports aligned. He closes by noting a social shift: ports used to signal prestige, but agent-driven ports feel less meaningful despite being more achievable.

6

A good test suite might actually be worth more than the code.

4

Turns out you can just port things now.

4

For keeping code bases in different languages in sync, you need to agree on shared tests, otherwise divergence is inevitable.

Full analysis Original

AI’s Way Cooler Trillion-Dollar Opportunity: Vibe Graphs

Key Insight: The AI and enterprise software hype cycle has become so predictable that you can generate parody VC thought leadership about 'vibes graphs' and it's barely distinguishable from real pitches.

A satirical piece mocking the hype cycle around AI enterprise software by proposing 'vibes graphs'—systems that would capture ambient organizational sentiment like emoji choices, typing hesitation, and punctuation patterns. The post parodies VC-style thought leadership by presenting absurd concepts (keystroke sentiment analysis, 'vibe moats') with deadly serious enterprise software jargon.

9

This article was vibed with Claude Opus 4.5. The author is a partner at Practical Data Capital, which focuses on vibe-native infrastructure and ambient enterprise intelligence. The…

8

The next trillion-dollar opportunity? Owning what the vibe was when it happened.

8

It doesn't see that the ticket was filed by someone whose Spotify status has been 'Listening to Bon Iver' for six consecutive days, a vibe signal that any human support agent would…

Full analysis Original

Rebecca Barter: Persistent learning, tool building, and 'Will code even exist?'

Key Insight: AI coding tools automate the routine 80% of programming work effectively, but the critical 20% requiring human judgment, domain expertise, and iterative reasoning remains irreplaceable -- and attempts to skip that human-in-the-loop will lead to widespread disillusionment.

In this episode of The Test Set, Wes McKinney joins the co-hosts to interview Rebecca Barter about AI coding tools, learning, and data science. Wes argues that 'vibe coding' is overhyped and that experienced humans remain essential for reviewing AI-generated output, predicting a trough of disillusionment when business users attempt to replace skilled practitioners. He distinguishes between software engineering, where AI excels at specification-based tasks, and data science, where judgment and domain knowledge are harder to automate. Wes also raises concerns about how new open source tools will gain adoption when LLMs lack training data for anything novel, potentially locking the ecosystem in place. The conversation explores how AI is most useful for routine work but falls short on the critical 20% requiring human judgment.

7

I think so-called vibe coding is way, way overhyped, in the sense that there's a lot of people -- AI boosters -- going around saying that soon (trademark), the coding agents are go…

7

I can imagine sometime in the next year, we're going to enter some kind of trough of disillusionment where a big wave of business users try vibe coding and end up disappointed and …

7

We're going to have to build things in such a way that we can point the agents at the project's documentation -- because otherwise we're going to end up locked in the present momen…

Full analysis Original

A Diary of a Data Engineer

Key Insight: Despite 50 years of tool evolution from star schemas to AI agents, data engineering fundamentals remain unchanged—and the engineers who master business understanding and data modeling will always be more valuable than those who chase the latest frameworks.

Simon Späti reflects on his 20+ year career in data engineering, tracing the field's evolution from Business Intelligence in the 1980s through Big Data to today's AI-assisted workflows. He argues that despite constant tool changes, data engineers solve the same fundamental problems: ingesting, modeling, transforming, and serving data. The post emphasizes that mastering fundamentals—data modeling, SQL, and understanding business needs—matters far more than chasing new frameworks. Späti positions data engineers as 'invisible plumbers' whose work goes unnoticed until something breaks, and advocates for embracing this foundational role rather than seeking credit.

8

Excel isn't the enemy. Excel is the business telling you what they actually need.

8

We're not really any smarter than the people before us. We just have better marketing.

7

The tools change. The loop doesn't.

Full analysis Original

Gas Town Emergency User Manual

Key Insight: Managing swarms of AI coding agents transforms the developer role from writing code to tending an invisible garden — reviewing output, stamping out architectural heresies, and maintaining guiding principles that keep approximate AI workers aligned.

Steve Yegge shares an emergency user manual for Gas Town, his AI-powered development tool that orchestrates multiple AI agents to work on code simultaneously. He describes merging over 100 PRs from nearly 50 contributors in 12 days, growing the project to 189k lines of Go code across 2684 commits. The post outlines his three-loop developer workflow (outer, middle, inner) for managing swarms of AI workers, crew members, and automated PR sheriffs. He warns the tool is still unstable but enthusiasts are using it anyway, and addresses the challenge of maintaining code quality when no human reads the generated code.

8

I've merged over 100 PRs from nearly 50 contributors, adding 44k lines of code that no human has looked at.

8

Gas Town's User Safety Index has been upgraded from 'randomly rips user's face off' to 'randomly kicks user in groin.'

7

When you work with Gas Town, you don't usually have time to inspect the code you're creating. That's not your role.

Full analysis Original

Six New Tips for Better Coding With Agents

Key Insight: The era of preserving code is ending - with AI agents, software becomes disposable and the skill that matters is orchestrating agents effectively through iterative review and swarming, not writing code yourself.

Steve Yegge presents six new insights for working with AI coding agents, building on his Vibe Coding book. The key themes include treating software as disposable (expect <1 year shelf life), designing tools specifically for agent usability rather than just humans, spending 40% of time on code health to prevent technical debt, recognizing some projects are ahead of current AI capabilities, using the 'Rule of Five' (having agents review their work 4-5 times for convergence), and managing the complexity of agent swarming while avoiding merge conflicts. He predicts 2026 will see the rise of 'super-engineers' who can orchestrate 50-100 agents simultaneously, becoming as productive as teams of 50+ regular developers.

8

Joel Spolsky wrote one of the most useful pieces of software advice anyone has ever given... DON'T REWRITE YOUR SOFTWARE! And he was right! Outstanding essay. But unfortunately, no…

7

Generating almost any code is easier (for AIs) than rewriting it. Hence, recreating software stacks from scratch is starting to become the new normal.

7

AI cognition takes a hit every time it crosses a boundary in the code. Every RPC, IPC, FFI call, database call... every single time the AI has to reason cognitively across a bounda…

Full analysis Original

Stop Picking Sides

Key Insight: The adaptation-optimization divide is a tension to manage through deliberate mode-switching and handoff tax reduction, not a tribal allegiance to pick.

Fowler argues that the software industry's decades-long tribalism between Agile and Traditional camps misses the point entirely. Both adaptation (fast learning under uncertainty) and optimization (reliability under constraints) are essential operating modes, not competing philosophies. He introduces an explore-expand-exploit framework where teams tune four dials—uncertainty, risk, cost of change, and evidence threshold—to determine which mode should dominate at any given moment. Drawing on examples from biotech and a mass spectrometry firm, he shows that failures happen at the seams between modes, not within them. He advocates replacing tribal allegiance with deliberate 'dominance tuning' and cutting handoff tax at transitions.

7

Most programs don't fail inside a phase. They fail at the seams.

7

If you want speed, cut handoff tax. It beats 'doing Agile harder.'

7

Dominance keeps you out of religion.

Full analysis Original

Redesigning my microkernel from the ground up

Key Insight: Sometimes the most productive thing a systems programmer can do is stop writing code, acknowledge that the foundation is flawed, and spend years in design research before starting over from scratch.

DeVault announces Hermes, a from-scratch rewrite of his Helios microkernel, after two years of design thinking and research following Helios stalling in design hell. He explains that Helios, his first serious OS project, accumulated too many poor assumptions to serve as a foundation. The intervening period included building Bunnix (a Unix clone) to gain practical OS implementation experience and studying prior art extensively. Hermes already surpasses Helios in key ways: symmetric multiprocessing support, a simpler capability/resource management model using reference counting, rethought syscall and IPC ABIs, and a more comprehensive test suite verified on real hardware. The userspace strategy shifted from a specialized Hare standard library to porting the upstream one, reducing complexity across the system.

4

In late 2023 I more or less gave up on it and moved my OS development work out of the realm of writing code and back into the realm of thinking really hard about how to design oper…

3

Since Helios was my first major OS project at this scale and with this much ambition, the design and implementation ended up with a lot of poor assumptions that made it a pretty we…

3

The most important parts of the scheduler are less than 200 lines of code.

Full analysis Original

Why It’s Difficult to Resize Windows on MacOS 26 Dyehoe

Key Insight: macOS Tahoe's oversized corner radiuses broke a fundamental spatial contract — that you resize a window by grabbing inside it — proving the redesign prioritized looks over how things actually work.

Gruber dissects a fundamental usability regression in macOS 26 Tahoe where the large corner radius on windows causes 75% of the invisible resize hit target to fall outside the window bounds. He traces the history back to Mac OS X 10.7 Lion, when Apple removed the visible resize grippy-strip affordance and made scroll bars invisible by default — decisions he considers mistakes but at least logically defensible. The new Tahoe corner radiuses break the implicit contract that remained: users could still intuit a resize target inside the window corner. Gruber argues this exemplifies design driven by appearance rather than function, violating Steve Jobs's principle that design is how things work. He concludes by recommending users not upgrade to macOS 26 Tahoe, or downgrade if they already have.

9

The windows on MacOS 26 Tahoe don't really have comically large, childish corner radiuses. They just look like they do because some jackasses at Apple thought they looked better th…

8

I can think of no better example to prove that the new UI in MacOS 26 Tahoe was designed by people who do not understand or care about the most basic fundamental principles of good…

7

You pick up a thing to move it or stretch it by grabbing the thing. Not by grabbing next to the thing.

Full analysis Original

Why Not?

Key Insight: Coding agents have lowered the practical barriers to building so much that the default stance shifts from reluctance to experimentation: “why not?”

The post argues that coding agents have flipped the default mindset from “why build?” to “why not?” by dramatically lowering cost and effort barriers. Where the author once dismissed projects as too expensive, time-consuming, or low-impact, those objections now feel obsolete. This shift empowers people to attempt tasks previously avoided, especially in areas like sysadmin, devops, and frontend, because agents can close most skill gaps. The author illustrates the change with personal examples: a robust homelab, full-stack apps, and upgrades to older projects. The conclusion is a mix of wonder and commitment: the new era is strange but exciting, and the author is embracing it.

5

We are officially now in the era of "why not?".

5

A side effect of this is that everyone should be asking "why not?" all the time and daring to build the previously unbuildable or not-worth-it.

5

Now all of my skill gaps have been filled in (at least for the 80% of use cases which turns out to be good enough for 99% of the work).

Full analysis Original

Finding and Fixing Ghostty's Largest Memory Leak

Key Insight: Memory management bugs often hide in optimization code paths that are only exercised under specific conditions, and even comprehensive testing strategies can miss them until real-world usage patterns evolve.

Ghostty had a significant memory leak that caused one user to experience 37 GB of memory consumption after 10 days of uptime. The leak stemmed from a bug in the terminal's PageList memory management system, specifically in a scrollback optimization that reused pages. When non-standard (larger than normal) memory pages were reused during scrollback pruning, their metadata was reset to the standard size while the underlying mmap allocation remained large, so munmap was never called when those pages were later freed. The bug was particularly triggered by Claude Code's output patterns, which frequently produced multi-codepoint grapheme outputs requiring non-standard pages. The fix simply prevents reusing non-standard pages during scrollback pruning, properly destroying them and allocating fresh standard pages instead.

4

The rise of Claude Code changed this. For some reason, Claude Code's CLI produces a lot of multi-codepoint grapheme outputs which force Ghostty to regularly use non-standard pages.

3

Eventually, we'd free the page under various circumstances. At that point, we'd see the page memory was within the standard size, assume it was part of the pool, and we would never…

2

A few months ago, users started reporting that Ghostty was consuming absurd amounts of memory, with one user reporting 37 GB after 10 days of uptime.

Full analysis Original

Status Games

Key Insight: While we're all wired to play status games, authentic success in data leadership means ignoring vanity metrics and focusing on fundamental business impact—and finding a positive-sum game like education rather than zero-sum competition.

Joe Reis reflects on how status games pervade every aspect of professional life—from leadership hierarchies to social media follower counts to expertise signaling. He argues that most status metrics (team size, titles, awards, followers) are meaningless vanity measures, and that true success for data leaders comes down to the fundamentals: increase revenue, reduce cost, mitigate risk.

7

When AI got hot, everyone (including crypto bros who can barely spell AI) were suddenly AI experts. Now everyone is an ontology expert.

6

Increase revenue, reduce cost, mitigate risk. Yep, that's it. That's the answer. It's not sexy or flashy, and won't win awards.

5

The people who are 'successful' on social media (whatever that means) usually follow their own game. If you're trying to copy the person next to you, it's harder to stand out, and …

Full analysis Original

‘Fuck You, Make Me’ Without Saying the Words

Key Insight: Apple and Google should enforce their existing app store rules against X/Grok without seeking confrontation — forcing Musk and Trump to publicly defend CSAM and deepfakes rather than preemptively capitulating out of fear.

Gruber responds to Elizabeth Lopatto's Verge piece calling Tim Cook and Sundar Pichai cowards for not removing X/Grok from their app stores after Grok was used to generate deepfake images of women and children. While agreeing the CEOs are culpable, Gruber argues Lopatto misdirects her outrage — it's not Musk they fear, but Trump's unprecedented presidential power. He contends that neither abject obsequiousness nor corporate suicide is the right response, pointing to Disney's handling of the Jimmy Kimmel controversy as a model. Gruber advocates for a principled middle ground: enforce existing App Store guidelines, remove the offending apps, and force Musk and Trump to publicly defend the indefensible. The core argument is that Apple and Google should stand behind the law while it still exists on their side, rather than obeying in advance.

9

You can take the position of 'Fuck you, make me' without ever saying those words. Objection is not confrontation.

8

Make them defend the indefensible — in public.

8

The judicious path for Apple and Google may well be to obey the law, even when the law is being actively corrupted. But the correct path is not to obey in advance.

Full analysis Original

Make It Better

Key Insight: AI can accelerate creation, but truly great software still requires sustained human care and time well beyond what feels reasonable.

The post argues that AI makes it dangerously tempting to sprint toward new features, grand rebuilds, and ever-larger products, but that path often yields software that feels hollow rather than magical. Real “magic” comes from the unglamorous grind of care, polish, and persistence—work that still requires human attention even when AI can generate code quickly.

7

It is tempting to open Claude Code—the most popular talked-about app on today’s internet; the new Cursor; the must-have stocking stuffer of this holiday season—and YOLO-mode an exp…

6

If everyone can build their own made-to-measure apps—decentralized apps; personal apps; custom-designed, one-of-a-kind bespoke apps—there is no market for small conveniences or nar…

5

“Magic is just someone spending more time on something than anyone else might reasonably expect.”

Full analysis Original

Money is a Technology

Key Insight: Money is a deliberately designed technology of state power — created through taxation and legal obligation rather than emerging naturally from markets — which means it can and should be consciously redesigned to address modern challenges like climate externalities.

The post argues that money is not a natural phenomenon but a technology of governance, invented by states to provision armies and extract labor through taxation. Drawing on anthropological evidence from David Graeber and chartalist economics, it dismantles the textbook barter-to-money narrative and shows how states create monetary demand through tax obligations, then explores how money could be redesigned — particularly using multi-dimensional pricing to account for externalities like carbon emissions.

8

Cryptocurrency enthusiasts often say they're creating 'money without the state.' But that's like saying you're creating 'law without enforcement' or 'property without courts.' The …

7

The real attribute that makes something money is this: a state that demands it for tax payments and punishes you if you don't have it.

6

No example of a barter economy, pure and simple, has ever been described, let alone the emergence from it of money... The standard economics textbook story of the origins of money …

Full analysis Original

The right place at the right time

Key Insight: What appears as perfect timing in hindsight is actually the result of having conviction in your work and resilience against conventional wisdom that tells you you're wrong.

Cantrill reflects on the paradox that while his career appears to have been characterized by perfect timing, it rarely felt that way in the moment. At each major career juncture—entering university during a recession, choosing software engineering amid predictions of offshoring, focusing on operating systems when Unix seemed dead, and founding Oxide when VCs dismissed hardware companies—conventional wisdom suggested he was making the wrong choice. His success came not from timing markets correctly but from having the conviction to pursue what he loved despite external skepticism.

7

Sand Hill investors told us that 'we only fund SaaS companies', that our $20M seed round was 'too big', that 'there is no market.' (And most absurdly: 'if this is such a good idea,…

6

Six years later—with VMware customers wanting to get away from Broadcom and with frontier AI companies realizing that there is (in fact!) a lot of general purpose CPU involved in t…

5

Ed Yourdon had just written The Decline and Fall of the American Programmer, which boldly told any young computer science student that they were wasting their time—that all program…

Full analysis Original