16 Comments
Brian Villanueva

I do know something about AI, having spent about 10 years in the IT field and played around with developing basic neural networks for fun.

This article is spot on. AGI is likely a pipe dream. I believe that for both technical and philosophical reasons. For postliberals who actually want to try and improve the world, the AI-related low-hanging fruit is erotic chatbots and deepfake video.

Erotic AI chatbots can already produce a Choose-Your-Own-Adventure style narrative, but the story never ends and can be adapted instantly to any fantasy you have. It is INCREDIBLY addictive. I understand if that sounds like a minor problem, but with a birthrate of 1.6 already, what percentage of men will choose this Matrix-like world, particularly as it "improves"? Right now it's text with a few AI-generated pictures; we're maybe 3 years away from being able to do the same with real-time, interactive video. You think today's porn and video games are bad? You have no idea what's coming. If you believe in virtue at all, this requires your attention.

The same AI video technology can already produce CCTV-level deepfakes that are indistinguishable from the real thing. Within that same 3-year timeframe, it will be able to produce broadcast-quality deepfakes as well. A picture will no longer be worth 1000 words; in terms of evidence of truth, it will be worth nothing: in newspapers, on the web, or in court. We're going back to an 1800s information ecosystem, where the only way you knew whether something actually happened was the past veracity of the person who told you about it (be it a friend or a reporter).

There is nothing we can do about AGI. If it's possible, someone will do it, and we'll all have to deal. But there are lots of things that could be done on these two fronts that could improve the world today, right now, and the AGI debate is a distraction from doing them.

Connor Lundrigan

Fantastic write up, captures precisely how I have felt about the technology in a much more elegant way than I could have written

Paul Schopis

Quite a good piece, and I do/did have a technology background (I am retired). In regards to AGI, I am a little surprised that no one mentions the Turing test, which was developed by Alan Turing, as a meaningful benchmark for assessing AGI. It is straightforwardly simple. One communicates with an entity that is not seen or heard; it is behind a door, so to speak. The tester queries the entity, carries on a conversation, and tries to determine if it is another human or an AI. When one can't tell the difference, the AI has passed the test. In my own experience with ChatGPT, there are always questions that easily trip it up. As the piece points out, the expertise window is rather narrow.

An additional issue is power consumption. The requirements are enormous. It takes a certain amount of mass to produce the unit of energy to compute a byte. David Foster Wallace cited an individual who calculated that if the entire mass of the planet were devoted to computational power, it would top out at 2.54 x 10^192 concurrent one-byte computations, if I am recollecting correctly. In other words, the number of "smart AI brains" that can be operated, and at what cost, are highly relevant and bounded questions.

Luke Lea

"A community determined to demonstrate how great its models are should be much more eager to create a much broader set of benchmarks simulating real-world challenges and scenarios, and demonstrating to the world how the models perform."

For instance, could an AGI ever come up with the idea of factories in the countryside that run on part-time jobs as a potential solution to a myriad of social problems: retirement, technological unemployment, housing, childcare, marriage stability, the slowdown in total factor productivity, etc.?

My guess is only after reading about it in a book written by an actual human being: https://www.amazon.com/dp/B00U0C9HKW

As to the more general question, never bet against the law of diminishing returns.

Concerned Conservative

Does the release of DeepSeek R1 and Deep Research change anything in this article?

John Fabian

Without a doubt, the most cogent and sobering description of AI that I have read thus far. I remember my first dot.com clients in 1997 (I was a database marketing consultant), who told me they were going to put all the "bricks and mortar" retailers out of business.

Didn't quite happen that way. Don't get me wrong... dot.commerce is a fantastic thing (who doesn't order from Amazon?), but it complements physical stores rather than replacing them.

stanley goldstein

Far from having any contrary reaction, I liked the sobriety of your analysis. Yes, AI will produce a lot, but it will be a lot along the parallel lines you have drawn. The p/e ratios may drop from 50 to 35, but that would not be a bad thing.

Rebecca Frankel

There's a difference from the dot-com boom: Al Gore did not invent AI. Ok, just kidding. Obviously, he didn't invent the internet either. But he successfully lobbied for its professional design and specification. This was done early, well before it supported the killer applications over which the market frothed.

There might have been a similar early effort for the professional design and specification of the infrastructure underneath AI. In the late nineties and early 2000s, there was work at MIT gearing up for such an effort. But Bush was elected instead of Gore. He appointed a mini-Musk type to DARPA, who similarly ripped apart the agency. They fired the career program managers who knew how to oversee long-term infrastructure investments. As a result, we are seeing the first round of infrastructure build-out that Al Gore did not invent.

When discussing the relevance of Krugman's famous mistake, one should keep this history in mind. In particular, asking whether the impact of "AI" is comparable to that of "the internet" is comparing apples to oranges. The internet is infrastructure. It should be compared to Nvidia's GPUs and CUDA. "AI" is an application running on top of that infrastructure, so it should be compared to browsers and HTML.

One might note that Krugman's comment, if it was meant to refer to the basic HTML that powered the excitement over Pets.com, would have been correct. The reason he was wrong is that "the internet" enabled so much more than merely HTML. But that's because of the professional design that Al Gore supported. It meant the platform was designed by people like David Clark, who famously championed generality and neutrality.

For instance, there was a later boom of rich internet applications, which mitigated the damage of the dot.com crash. The internet platform was general enough to support new applications which weren't part of the original conception, thanks to the passion of web architects like Tim Berners-Lee, enabled by the neutrality championed by Clark. The consequence of their professionalism is the central reason Krugman was proved wrong.

But Jensen Huang is not like those early web architects. He is not similarly passionate about neutrality and extensibility. So there might be reason to doubt that past performance predicts future results.

Engineer Guy

Solid article about the real potential (good and bad) of AI. FYI a thoughtful book about AI for the layman is "Taming Silicon Valley" by Gary Marcus.

Tidewater Lord

You should check out this video. A good supplement to the arguments you make here.

https://youtu.be/EOS3JkKmjm8?si=CobArznd8XsvFP2l

Roger Abbiss

Here’s a concern I have. What if you applied those three tests to the development of the atomic bomb? I dare say they would not have helped us to see what was coming.

Here are three tests I would suggest we consider (with the splitting of the atom in the back of my mind):

What is the worst case scenario if the claims turned out to be true?

Why are some of the brightest minds in the field issuing grave warnings about its potentially devastating effects?

How much capital and energy is being applied to the development of this new technology?

Graphically, the splitting-of-the-atom scenario would likely be represented by some kind of S curve as well, but it would be a dangerous S curve indeed. Although the basic principles still apply, nuclear weapons are now 1000 times more powerful than the bombs dropped on Hiroshima and Nagasaki. As regards the development of nuclear weapons, TTID certainly applied. Could an AI convince a rogue world leader that some new delivery system would allow them to win a nuclear war by launching a first strike? I’d be afraid to even ask it that question.

Apocalyptic scenarios aside, artificial intelligence may already be quietly wreaking havoc as the technology evolves, making it difficult (and eventually impossible) to distinguish that which is real from that which is not. As a result, trust in our systems and institutions is crumbling. As trust erodes, civilization is at risk of a complete breakdown.

Perhaps not that interesting, but I fear that AI may be much more than a parlour trick and it seems to me potentially dangerous to view it that way.

T.L. Parker

Want to go on a deep dive on this subject…see Frank Wright’s post today…Elon Musk - digital Bonaparte

T.L. Parker

To answer the title of this post: Yes.

Artificial sweeteners, artificial flowers, artificial butter, artificial diamonds, artificial limbs, and artificial intelligence: each is a Pandora’s Box. Hope is what is being sold, with little or no mention of the problems that might arise from the ramifications of each potentially life-altering contrivance.

“Tiger got to hunt, bird got to fly

Man got to sit and wonder why, why, why.

Tiger got to sleep, bird got to land.

Man got to tell himself he understand.”

…..and so it goes

Would we not be better served cultivating our innate intelligence without artificial interference?

PLG

In general, I'm with you that the LLMs thus far are more of a parlor trick than transformative technology, but my fears are twofold:

1) People are spending biblical amounts of money to develop them - that's certainly not proof that the technology will be worth the investment, but it's at least evidence that many people with the credibility to generate multi-billion dollar investments are choosing this approach

2) These same people seem to throw out casual statements like "there's a 20-25% chance these technologies will take over and enslave humanity", and I don't have much trust in our leadership class to have thoughtful approaches to how to minimize this risk.

Not sure there's anything we can do as ordinary people, so I don't let it bother me too much, but it's kind of crazy how little debate there's been about whether we should go down this path.

Jeff Herrmann

In the 90’s there was a lot of premature hype about the internet. It happened in a big way, but it took longer than people (and stock prices) thought it would. I expect the same is true with AI. We are currently in the hype cycle.

Richard

AI probably won't destroy humanity but something will even if we have to wait for the Big Bang to implode. The Sun will probably supernova first or genocidal space aliens will arrive.

In the meantime, AI is wonderful for compilation of enemy lists or collapsing the electric grid. I can imagine some positive uses like medical diagnosis or engineering design that will make the process faster, cheaper and more accurate. For now the biggest impact on the private sector seems to be in HR which is interesting since there is so much natural stupidity there.
