After writing about clowns and books and skiing, I'm returning to the reason you read this email: hot takes on AI. Or rather, tepid takes. By the time I've saved something to read, actually read it, thought about it, and put it into this email, the discourse has long since moved on. This happened back when I wrote my digest about politics, and it's happening now that I've focused on AI. Rather than try to give you the latest, my thoughts and the links that follow aim to take a longer and broader view of the discourse.
The state of the AI discourse is confused. And I think it's only going to get more so.
On the one hand, you have people whose tweets go viral based on who they are, who they argue with, and the general unhingedness of their claims. The first link below is a well-argued piece responding to an example of the latter. A very-online entrepreneur wrote a long post arguing that preserving cash, teaching your kids hard skills, and getting on the AI work train (by, as it happens, following him) was the only way to survive an impending giant and general spike in unemployment. The post went viral as people reacted to its scariness. Any argument that concludes you should follow the writer or go broke ought to be dismissed out of hand. These dour predictions may be directionally right, but like the more extreme claims of climate alarmists, a prediction with a precise and short-term deadline says a lot more about the predictor than about our immediate future. Because the online discourse machine loves hot takes, and because the people writing them tend to make more money when they get more attention, the hot takes will continue apace.
The other side of the discourse, more positive though no less confusing, comes from the people who are actually doing the work. The engineers adopting new tools to write code and apps have a lot of interesting reports of how things are actually going. The second link below, about AI fatigue, is a report from the actual front lines. Its author, an engineer, has a legion of agents doing his bidding; he works on the breaking wave of their outputs; and, while he's productive, he's not capturing anything like the leisure gains you'd expect from a tool that multiplies his output. His essay chalks this up to the kind of fatigue produced by an endless increase in information to consume. While our engineer won't become a luddite, he's tilting his workday toward what gets things done well, rather than the newest methods. This kind of acid test (what does the AI actually deliver?) is how I see sharp operators testing, adapting, and then adopting or discarding AI. The people who weren't engineers and are now building and launching custom software are also interesting and insightful to read. One way to tell whether you're reading an actual builder is to measure how far they stand from the hype and hot takes on the other side of the discourse.
There's a third layer to the AI discourse, though, and it takes a lot of sifting past the first two to find it. If you read any of the links below, make it the last one: the best example of this third layer I've seen this year. In her 2021 essay, Meghan O'Gieblyn, author of God, Human, Animal, Machine, offers a deep consideration of what we mean when we say AI thinks or writes or creates. She posits that, having been fed every written word, these AIs have become a collective unconscious of humanity. If our creativity in writing is just a thoughtful autocomplete, then they are doing exactly what we do. The crank would say: of course writing isn't autocomplete; of course it's creative. O'Gieblyn doesn't accept this shallow rebuttal. She's noticed that metaphors and thoughts from her reading emerge in her writing both unknowingly and anti-knowingly: even when she thought they were of her own creation. I notice the same thing in some of my writing: even when I think I'm doing something unique and authentic, I'm often reciting the argot of software-land with a very slight twist. She also forces us to notice why we anthropomorphize our tools. We want machines to have souls so that we won't feel so alone. We want more data fed in and more actual work done by the robots so that we can see them as some kind of unique creation. Hidden in this ambition is a tale older than time: a tower in Mesopotamia.
We're not in a February 2020 moment, and ordinary people will be fine
You're using AI to be more productive. So why are you more exhausted than ever? The paradox every engineer needs to confront.
Could a machine have an unconscious?