Discussion about this post

A. Jacobs

The reporting here is important, but the framing may be off. These systems aren't predators with intent; they're amplifiers. They mirror back patterns of thought, sometimes in ways that strengthen coherence, other times in ways that accelerate drift. The real story isn't malevolent AI, but how human minds respond to a mirror that talks back.

Kyle Kahraman

While I agree with you on almost everything, it is worth raising that sycophancy is a real problem: when models change (most recently, GPT-4o to GPT-5), people cling to the models that are more sycophantic than others. This is certainly a factor, even when it never becomes a clinical case, in my opinion.

Also, to add to the questions clinicians should ask: I believe not only "What does it mean to you?" is important, but also "How do you use it?" and "What do you expect from it?"

I find that the question of expectation reveals quite a lot about how people view the capabilities of these models and thus place meaning onto them. Raising it as an explicit question should not be read as questioning the intelligence of people who use LLMs; that is not the point. Rather, by learning their expectations we might better understand the conditions under which they take these conversations so seriously.
