Hi,
There have been various mentions of AI / LLMs in the efforts to
solve the original problem with the Dovecot setup.
I have done some analysis of LLMs, and there is actually a way,
perhaps a little expensive, to determine whether an LLM is
hallucinating or not.
You ask the LLM exactly the same question 10 times.
If it comes back with the same answer all 10 times, it is unlikely to
be hallucinating.
If it comes back with different answers most of the time, i.e. the 10
answers disagree with each other, it is probably hallucinating.
So LLMs are not consistent in their hallucinations: it is not the
same hallucination every time, and one can use that to detect them.
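If it is any use, here is a rough sketch of that repeat-and-compare
check in Python. It assumes the OpenAI Python client purely as an
example; the model name, the question, and the crude exact-string
comparison are placeholders, and any other model or API would do just
as well.

    # Rough sketch: ask the same question N times and measure agreement.
    # Assumes the OpenAI Python client (pip install openai) and an
    # OPENAI_API_KEY in the environment; the model name is a placeholder.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def ask(question):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # keep sampling on so hallucinations can vary
        )
        return resp.choices[0].message.content.strip().lower()

    def consistency(question, n=10):
        answers = [ask(question) for _ in range(n)]
        # Fraction of runs that returned the single most common answer.
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / n

    score = consistency("What does Dovecot's mail_location setting do?")
    print("%.0f%% of the answers agreed" % (score * 100))

In practice exact string matching is too strict for free-text answers,
so a real check would want some fuzzier notion of "the same answer",
but the idea is the same.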
I found this out while doing some other research, but I thought it
might be helpful to others.
On the point about it always trying to agree with you: that is baked
in as part of the training process.
It is possible to bake in other approaches, but they are not so
popular currently.
For example, you can bake in that it just says "I don't know" instead
of hallucinating. That is the approach I would prefer, but it seems
the big companies would not get as much revenue if they took that
approach when training their LLMs.
Kind Regards
James
--
Please post to: Hampshire@???
Manage subscription:
https://mailman.lug.org.uk/mailman/listinfo/hampshire
LUG website:
http://www.hantslug.org.uk
--------------------------------------------------------------