A few months ago I had all my arguments lined up for why AI is absolutely not “the next crypto”. Use cases abound! Don’t take it on faith — ask it anything and you’ll be impressed at the answer! Look at it make a website from a napkin sketch!
And to be fair, when all of these legitimately impressive examples started coming out, there was a period when the reality was outpacing the hype. But that golden period of opportunity (where opportunity is defined as the delta between reality and expectations) lasted for maybe six weeks. Now, well…
So now my ears are perking up. This is starting to feel… uh… a little bit like crypto two years ago.
Now I am NOT SAYING that AI is just as overhyped and vacuous as crypto ended up being. What I AM SAYING is that I feel a strong personal and professional need to have made that qualification in the previous sentence, lest you think I am a dummy luddite who doesn’t get it. That sort of emotional CYA impulse gets me very interested.
Do you remember when the Overton window on crypto discourse spanned “this is going to remake every aspect of the financial system and beyond” on the bull end and “there’s a lot of scammy activity here but the foundational technology is very promising” on the most skeptical end? People in the technology field felt very uncomfortable saying things like “I really don’t get it” and “this seems entirely fake” out in public. It felt like doing something like that would incur legitimately damaging career risk. (I regrettably wussed out in my anti-crypto article in April 2022 by singling out “crypto scams” when really what I was too afraid to say was “basically the entire industry.”)
Let’s compare to AI today. Just yesterday I spotted this on Rihard Jarc’s blog post on Generative AI:
That’s interesting, I don’t see a citation there. Not to pick on Rihard, whose content I enjoy, but what is this? A non-falsifiable claim? An article of faith? A defensive qualifier, à la “a decentralized blockchain is very compelling and will have huge impacts on every enterprise?”
It’s important to note that Rihard is not an AI expert, he’s a generalist technology/growth investor. What was he writing about two years ago? Not AI…
So we can see that perhaps some trend surfing is at play here. Which is fine — legitimately important things like the Internet have their own hype/trend cycles too. But let’s return to the Overton window concept. Like “despite the scams, web3 will have an enormous impact”, Rihard’s claim is the current bear case for AI. The bull case is that AI kills all humans!
When “nearly every company can benefit” is the bear case and “humanity will die” is the bull case, are you even allowed to say something like “GPT-4 is just glorified auto-complete and the economic impact of this innovation will amount to barely a rounding error?” Feels like a risky thing to write, right?? Even though I don’t really believe it, I’m struck by how far outside of consensus it is, how much it codes as a “dangerous idea”.
To me that screams “bubble”.
Other bubbly things going on in AI right now
Here’s some other wild stuff I am seeing left and right:
Private companies raising tons of money / public company multiples skyrocketing after they pivot their pitches to AI. Ninety percent of the inbound I’ve gotten this quarter has been AI-branded.
Companies and investors explaining away such a quick pivot (faster than anybody could have been reasonably expected to actually implement the technology) with a “well we’ve actually always been working on AI” talk track:
Use cases getting fuzzier over time… remember like three months ago when everyone was sharing GPT prompts and responses on Twitter? Doesn’t it seem like a lot more people are now talking about AI rather than using it themselves and showing their work?
A new public face of the movement. How much multimedia of Jensen Huang and Sam Altman have you seen in the past month?
A Time Magazine cover:
Questions about AI from my mother-in-law who definitely does not work in the technology industry. Attorneys getting in major trouble for using ChatGPT in the courtroom and completely misunderstanding the limits of the technology.
Non-expert generalists seem the most enthusiastic…
… while true domain experts seem to mostly be sitting out the debate. How would we rank the enthusiasm of actual machine learning academics… cautiously optimistic at best?
Predictions of catastrophic job losses (this is your periodic reminder that U.S. unemployment hasn’t been this low since the 1960s.)
Widespread anthropomorphizing of technological or financial phenomena, with a particular emphasis on good/evil dichotomies. Is “the AI wants to kill all humans” really that different from “the short-selling hedgies want to crush the individual retail trader?” Is AI democratizing machine learning and sharing computational power broadly across humanity, just like the blockchain democratized censorship-free financial freedom?
What to do in a bubble
Remember that a bubble doesn’t mean that something isn’t real, it just means that the narrative and market pricing have become self-reinforcing (funds have to buy NVDA because it’s the one thing that reliably goes up) and disconnected from the reality on the ground. If AI doubles economic productivity but the market was pricing in a triple or quadruple, that’s still a bubble even though nothing at all was fake about the technology.
One thing I definitely would not advise is to short a bubble. I have no idea how long this mania will last or how intense it will get. Part of me thinks that we’re in the first inning of a long rally that will culminate in an absolutely spectacular blowup in 3-4 years. Another part thinks we’ll have a moderate cooling-off between now and the end of the year as pundits and commentators gain some self-awareness at just how breathless the discourse has become.
Something I can share from the world of private venture investing: I think the window in which companies are allowed to talk about their AI plans without actually doing anything in practice is beginning to close. In March/April every VC was hosting roundtable discussions where they quizzed portfolio companies on their AI strategy, and everybody from investors to founders to executives nodded their heads dutifully about how this was going to change everything and you either needed to get onboard or go out of business. Now, three months later, the companies that have come up with good AI ideas are busy building those, and the ones who haven’t have mostly stopped pretending. This makes me optimistic for a “soft landing” in AI that allows space for reality to catch up to expectations without the usual bubble-popping apocalypse. But we shall see.
You hope that the companies that are AI-curious will be pushed to move, and some will be... But one of the wars being fought right now in the courts is over the training data used for models. The companies with large data moats are some of the worst offenders in terms of claiming to be AI companies while never actually doing any AI: they make it extremely expensive for anyone else to do AI with their data. If restrictions around ingesting public web data into AI models get more extreme, the incentive to actually build AI will be reduced, I think.