Helping front office teams grow better

How software re-eats the world - #431

When Marc Andreessen said that "software is eating the world," we all nodded along. It was (and is!) true that 'businesses are being run on software' and 'delivered as online services.' In the longer view, what software has eaten is less the business or industry and more their information flows. Software is a whole lot better at bytes than atoms. Want proof? Just look at what my almost-self-driving car does on snowy roads: its systems shut down because their sensors can no longer identify the lane. (As a downhill-skiing addict, I have my minivan on snowy roads more than you'd advise.) The software is great until there's no data.

From social to the metaverse to NFTs to crypto itself, we've seen new technologies start their hype cycle at 'eat the world' only to end up at 'niche' or 'deprecated' or 'net harmful' or 'banned in schools.' I think that's why the original LLMs and their use cases, while neat, seemed thin when they emerged 2.5 years ago and can still seem not-too-impressive today. If the best use of an AI is to generate some text when given a prompt, it's not doing much aside from adding to already-too-loud digital noise. Between prompt engineering and editing, most LLM usage along these lines takes about as long as the "old way." As I wrote in that late-2024 post, real utility comes from retrieval augmented generation or from LLMs meaningfully interacting with other software: real data and real tools.
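That "real data and real tools" point is easier to see in code. Below is a minimal, hypothetical sketch of retrieval augmented generation: fetch the documents most relevant to a question, then hand them to the model alongside the prompt. The corpus, the word-overlap retriever, and the llm() stub are all stand-ins for what a real system would do with a vector store and a model API.

```python
# Toy sketch of retrieval augmented generation (RAG). The corpus, the
# word-overlap scoring, and the llm() stub are hypothetical stand-ins
# for a real vector store and a real model API.

def words(text):
    """Lowercase tokens with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query."""
    q = words(query)
    return sorted(corpus, key=lambda doc: len(q & words(doc)), reverse=True)[:k]

def answer(query, corpus, llm):
    """Ground the model's reply in the retrieved documents."""
    context = "\n".join(retrieve(query, corpus))
    return llm(f"Context:\n{context}\n\nQuestion: {query}")

corpus = [
    "Our return policy allows refunds within 30 days.",
    "The cafeteria serves lunch from noon to two.",
]
# A stand-in "model" that just echoes its grounded prompt.
print(answer("What is the return policy?", corpus, llm=lambda p: p))
```

Real systems swap the word-overlap scoring for embedding similarity, but the shape is the same: the model answers from retrieved data instead of generating unmoored text.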

On the one hand, AI's evolutionary leaps forward, like Model Context Protocol, are very promising: from a tooling perspective, we're moving quickly from neat to necessary. On the other hand, it's limited: this software isn't eating a new part of the world; it's more efficiently eating (digesting?) the part of the world it already ate. We've got a wildly impressive new way to interact with software and data, or to make new software and data. It's a big jump in our ability to make software and data, but we haven't yet seen this jump induce new sectors to join the software/data revolution. You could put that into personal tech terms: AI makes phone internet searches better, and Gemini can recognize what widget you need at Home Depot better and faster. But to do something truly new, you have to wear a dystopian always-on microphone and camera, feeding everything you experience as data into an AI. So few people will do this that it's fair to say no one does.

When asked what AI's impact on the US economic growth rate will be, a lot of our futurists claim 20 or 30% growth. To be clear: that's not anchored in anything we've ever seen. From a baseline of 2% growth, Anthropic co-founder Jack Clark (you've seen their economic impact index linked from me before) said he expected growth in the 3 to 5% range, which is dramatic, but more on the order of previous revolutions. His reasoning is worth quoting at length:

The reason that my numbers are more conservative is, I think that we will enter into a world where there will be an incredibly fast-moving, high-growth part of the economy, but it is a relatively small part of the economy. It may be growing its share over time, but it's growing from a small base.

I think that the things that would make me wrong are if AI systems could meaningfully unlock productive capacity in the physical world at a really surprisingly high compounding growth rate, automating and building factories and things like this.

Even then, I'm skeptical because every time the AI community has tried to cross the chasm from the digital world to the real world, they've run into 10,000 problems that they thought were paper cuts but, in sum, add up to you losing all the blood in your body. I think we've seen this with self-driving cars, where very, very promising growth rate, and then an incredibly grinding slow pace at getting it to scale.

Clark is skeptical that AI will change the physical world enough to produce dramatic economic impact. His bullish case is still pretty wild, but you get the sense he views that as a low-probability outcome. This from an AI company co-founder!

AI isn't yet making the jump from data, the easiest thing to digitize, to atoms. Andreessen said that software was eating the world. It may still be eating our attention (and without attention, our world decays), but what software is eating now is what it's already eaten, just much more efficiently. I'm not that bearish, nor an anti-AI luddite, nor interested in dystopian hot takes (aside from skepticism about always-on personal cameras). Working in software, you can't be. There's tremendous ground to be gained by applying that "much more efficiently" to the parts of the world software has already eaten. AI agents, Model Context Protocol, and similar promise to be career-making innovations for early adopters. We need to focus on real problems that, when solved, will help people achieve meaningful goals. It'd be nice if the end of some software deployment weren't yet another digital form someone has to fill out to do their job, or to show some system that they have done their job. That's the sort of revolution I can get behind.

For the reading this week, I have a well-argued essay that LLM-generated text isn't something we should read (and, implicitly, isn't something we should create). And some more optimistic reading about the latest and greatest in AI approaches to software and data. If you're still mystified by how it all works, the must-read is the first third of the Slobodan Stojanović piece below (last link). It could be the most understandable summary of what AI does and how it works. Enjoy the reading!


Reading

The items below are essays or articles that I found interesting, either in topic or in the quality of the writing. The titles and subheads are from the originals. When possible, I give free or gift links to paywalled content.

I'd rather read the prompt

I have never seen any form of creative generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. If it's not worth writing, it's not worth reading.

claytonwramsey.com

A Deep Dive Into MCP and the Future of AI Tooling

It's clear that there needs to be a standard interface for execution, data fetching, and tool calling. APIs were the internet's first great unifier, creating a shared language for software to communicate, but AI models lacked an equivalent until Model Context Protocol (MCP).

a16z.com

How AI Agents work and how to build them

Despite their complexity, AI Agents can be thought of as LLMs with added capabilities, often organized in a structured way to improve interaction. Put another way: AI Agents sound very complicated, but they are actually like LLM "while loops" with tools.

slobodan.me
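That "while loop with tools" framing from the last piece is concrete enough to sketch. In this hypothetical Python sketch, a scripted stand-in model either requests a tool call or returns a final answer, and the loop keeps running tools until it finishes; a real agent would replace fake_llm with an actual LLM API call and a richer tool registry.

```python
# Minimal agent loop: the "LLM" (stubbed below) either requests a tool
# call or returns a final answer; the loop runs tools until it's done.
def agent(task, llm, tools, max_steps=5):
    history = [task]
    for _ in range(max_steps):
        action = llm(history)          # the model decides the next step
        if action["type"] == "final":
            return action["content"]
        tool = tools[action["tool"]]   # look up the requested tool
        result = tool(*action["args"])
        history.append(f"{action['tool']} -> {result}")
    return "gave up"

# A scripted stand-in model: call the calculator once, then answer.
def fake_llm(history):
    if len(history) == 1:
        return {"type": "tool", "tool": "add", "args": (2, 3)}
    return {"type": "final", "content": history[-1]}

tools = {"add": lambda a, b: a + b}
print(agent("What is 2 + 3?", fake_llm, tools))  # prints "add -> 5"
```

Everything interesting in a production agent (planning, retries, memory) layers onto this same loop; the tools are where the "real data and real tools" come in.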