5 Comments
Bob Huskey

A pretty solid article. Still, there's a more basic reason to dismiss AI chatbots regarding personal issues. Fundamentally, AI is not intelligent. It is not human and not even animal. It is electronics: zeros and ones, bits and bytes in silicon. It cannot have feelings and it has zero awareness. It literally doesn't know what it is saying. It doesn't know anything, because it is incapable of knowing. It isn't guided by experience because it can't have experiences. It's a fancy search engine. That's all it is. Useful for researching facts, though it requires checking and verification, and maybe eventually for doing minor administrative tasks. But anything related to personal emotion is completely outside its useful domain. It's truly bizarre to me that people don't recognize this.

But the fact that people don't is, by itself, a very compelling reason to shut down AI chatbots or restrict them to fact-based tasks. Altman et al. are either sociopaths or willfully ignorant, which amounts to the same thing, in foisting this new, untested, and unchallenged tech on humanity. We have seen how badly the tech already foisted on us has gone in the form of unregulated social media.

We need Freedom From in this case. Big Tech is enormously wealthy and powerful. We've seen unelected tech moguls illegally, carelessly, and idiotically destroy parts of our government because they are wealthy enough to supersede our laws with their whims. These people are utterly disconnected from the lives lived by ordinary citizens. They must be cut off at the knees. They can have one vote like the rest of us. They must be excised from political power, and that power restored to voters, free from the undue influence of Big Money interests.

I'm extremely pro-science and pro-technology. I'm not a Luddite. I'm also pro-human. The profit motive as the sole driver of the direction of technology is profoundly misguided. We need a new approach to rewarding innovation based on its actual utility to humanity, not on whether it can be monetized for profit. Our culture of monomaniacal profit motive excludes so many possibilities that it's morally criminal. I'm not proposing an alternative at this point; there are many options. Some look a lot like what we have and some are altogether different. I encourage everyone to imagine another way and talk about it.

Bill Pieper

Many good points. But here's a further thought. A lot of the current and potential mischief (or worse) is caused by the fact that we've collectively decided to let these bots disguise themselves as people and use the vocabulary of personal agency when they have no shred of it. Fix this and you fix a whole lot else. The norm should be that bots always be fully identified as bots and communicate such that every utterance makes that clear.

SubstaqueJacque

Great post, but I'm not sure that therapy is really about advice anyway. "That must have felt awful" is the kind of thing a supportive friend would say, and "break up with her" is the kind of "advice" that no one outside the relationship has a right to give. I'm pretty sure that therapy is about giving clients nuanced ways to think and act in difficult situations, and using sessions to go over how those therapeutic methods were implemented in the past, will be used in the future, etc. Does AI have the brains to manage a discussion like that? (Oh, wait, it doesn't have any brains....) I remember those Magic 8 Balls we used to get as birthday presents; I wouldn't pay $265 an hour for advice like that....

Jordan Nuttall

Hello there, Brad. I've been a quiet observer of your posts, which are always interesting. Thank you.

Happy new year!

I thought you might enjoy this article:

https://open.substack.com/pub/jordannuttall/p/laws-of-thought-before-ai?r=4f55i2&utm_medium=ios

Richard

Flesh-and-blood therapists are capable of the same, as are families and friends. For that matter, so is just about any form of social media. The WaPo got thousands of people to supply their feeds, and when they reverse-engineered the algorithm, they found that mental health topics were more persistent than politics, cats, or even Taylor Swift.
