James Dutton via Hampshire wrote on 2026-05-11 06:28:
> For example, you can bake in that it just says "I don't know."
> instead of hallucinating. That is an approach I would prefer,
I would prefer it too, but LLMs don't actually know anything and
therefore cannot know what they don't know.
It's unfortunate that "I don't know" isn't a more common answer; it's
perfectly valid and LLMs are not alone in generating BS answers based on
whatever sounds good. Alas.
> On the aspect of it always trying to agree with you. That is baked
> in as part of the training process.
The Behind the Bastards podcast ("a podcast about the worst people in
history") had a two-part episode on "How AI Chatbots Became Cult Leaders"
discussing their obsequious sycophancy. I disagreed with some of the
points raised, but episode 2 was particularly interesting.
Part One: How AI Chatbots Became Cult Leaders
https://www.youtube.com/watch?v=Q1dRFv28PiQ
Or, audio-only version:
https://www.iheart.com/podcast/105-behind-the-bastards-29236323/episode/part-one-how-ai-chatbots-became-332611718/