Between October and January, I'm working in a new way. Instead of architecting solutions for any HubSpot-related technical challenge, I'm working as a forward-deployed architect for our AI features. In practice, this means configuring a specific set of our software's tools so that our customers can get more out of them. The work is narrower, much more hands-on, and thus deeper than my usual work. I'm about six weeks in, and am just arriving at my first few practical lessons. Among them: it's far easier to recommend how to configure the software than it is to actually configure the software in a way that's genuinely useful. That may not be surprising, but for someone who has spent eight years (and counting!) in an advisory position, it's been a notable lesson to learn.

More specifically, I'm in the trenches of go-to-market AI: what makes it useful, why people don't use it, what it seems to be good for, and where its limits are (and what actually pushes those limits further versus what is just a shiny new thing). My top lesson thus far is that context is scarce, and lacking it is what causes most AI initiatives to fail. I arrived at this after a few early wins and losses, and after thinking about why they went the way they did. The maxim isn't mine, though; it's Tyler Cowen's: "Context is that which is scarce." Here's Cowen explaining what he means by it. I take it to mean that you need to deeply understand something before you can develop a really useful idea to make it better. Your first idea or first impression, when not backed by any depth, is more likely to be wrong, in the sense of being useless, than novel. In AI software land, we're too often over-confident in a specific software feature because what it does is novel, while we mostly lack the deep understanding it takes to make that feature really useful. We assume we know how a process or system or person works, and we base our ideas for improvement on those shallow assumptions.
While this flaw might be endemic to AI, I think it's wider than that. It's the downside of the portion of the world that software has eaten. Take the top link below, from The Atlantic, where our fearless correspondent wades into the experience of trying to get something done with giant companies' customer support teams. His misadventure through car manufacturer warranties is exactly the nightmare you'd expect. But it's not a function of failures to optimize, or a lack of systems thinking, or anything else: our author argues that companies make it tough to interact with customer service in order to minimize costs. The harder it is to get a full refund, within the bounds of whatever's not a P.R. nightmare or outright illegal, the fewer refunds companies will have to hand out. You could put the same concept a bit more positively: the easier it is for a profit-seeking company to lose money, the more money it will lose and the less profit it will make. The customer service team may be wise to keep some of the "sludge" in place to ensure that too much money doesn't just walk out the door. That is the context the "it's too hard to get a refund" complaint often misses.
Back to AI. LLMs make text. They're even pretty good at slide decks and websites and uncanny-valley podcasts. In the corporate world, it has never been easier to create long-form emails, memos, and bullet points. I'm not sure anyone, though, was asking for more of that sort of thing. If we weren't able to keep up with this stuff before it was written by robots, how do we expect to keep up with it now? I'm sure everyone has been tasked with reading and replying to something the sender didn't bother to read themselves; I'm sure everyone has seen the AI assistant's "output" pasted into an email, without the sender bothering to remove the "here's that email you asked for" topper. The wag's reply is pretty easy (and not un-useful): we can just use AI to summarize all of this stuff. Efficient, sure, but to be productive in knowledge work is to actually understand, make decisions, and communicate clearly. HBR, in the second link below, has a point: AI-generated "workslop" isn't helping us do any of that. Instead of making us productive, it makes us less so. Why? Because the context of what we're actually up to, and what we actually need, is missing.
I'm going to keep noodling on this one. And I'm going to keep using AI to create mid-century magazine ad versions of this message for my LinkedIn feed.
Endless wait times and excessive procedural fuss—it’s all part of a tactic called “sludge.”
A confusing contradiction is unfolding in companies embracing generative AI tools: while workers are largely following mandates to embrace the technology, few are seeing it create real value.