On Monday, 11 May, 2026, Gordon wrote:
> On 10/05/2026 22:12, Nick Chalk via Hampshire wrote:
> > Interesting. I have been reading similar positive
> > feedback recently. It seems the LLMs have reached
> > a point of being generally useful on technical
> > subjects.
>
> I've found similar, but the hard and fast rule is
> always to check/test what AI tells us.
>
> Some is helpful/useful, some is complete nonsense!
I watched a (company internal) presentation a
couple of weeks ago. It was delivered by a
mathematician who specialises in geometry, but
mainly works on optimising physical simulations.
He described a common optimisation problem: find
the minimum distance between a point and any part
of a triangular surface in 3D space. His initial
C++ implementation took 20ns; with a little work,
he reduced it to 18ns.
As an experiment, he described the problem to two
LLMs, and asked them to provide a time-optimised
solution. They returned near-identical algorithms,
which initially looked promising - 4.5ns run time.
However, he also believed in checking their
output.
After working through the code manually, he found
that both solutions gave completely wrong answers
in corner cases - such as when the point lies on
the surface. Since the two solutions were so
close, he surmised that both LLMs had just
regurgitated someone's incorrect code that had
been posted online!
(He then went on to show an implementation that
ran in 2.3ns, but that required hand-written SIMD
assembler.)
> I also like to fully understand what the code
> actually does, in preference to what AI tells me
> it does. I think one can't say AI lies, but it
> certainly makes things up if it doesn't actually
> find a proper answer.
I dislike using the term "AI" as there's no
intelligence. They are clever statistical models,
no more - and the old adage "garbage in, garbage
out" still applies.
Now, if someone comes up with a C/C++ focussed LLM
which has been trained on the works of Kernighan,
Stroustrup, Meyers, et al. - with proper permission -
then I might be interested.
> Agree entirely ... developers are usually too
> close and they know (or should!) how the software
> works and what it should do, and tend not to
> consider the "dumb" questions that non-
> programmers quite reasonably have.
It's also too easy to become blinkered, thinking
that you know how a feature will be used. Then a
user surprises you with a completely different
application.
> My one frustration about so many technical
> writers, though, is that I'll usually give them
> a techy-draft so that they can get started, only
> to find that they've lifted bits and rewritten
> them with poor grammar.
Perhaps I've been lucky, or my present employer
has an effective interview procedure for authors.
I think the only disagreement I've had with that
team is the nature of the audience. A guide to
installing our product on z/OS, for example, does
not need to go into detail about JCL, DD cards, or
REXX.
Nick.
--
Nick Chalk ................. once a Radio Designer
Confidence is failing to understand the problem.
--
Please post to: Hampshire@???
Manage subscription:
https://mailman.lug.org.uk/mailman/listinfo/hampshire
LUG website:
http://www.hantslug.org.uk
--------------------------------------------------------------