Good thoughts here, Gordon, thanks.
With respect to the solution I got working... I can see that I had to
make changes to 6 of the Dovecot config files in /etc/dovecot/conf.d/,
plus the overall /etc/dovecot.conf file. I also had to make changes to
main.cf and master.cf for Postfix, so that's two more files.
I also added the DNS records - so far that's MX and A records, plus the
TXT records for SPF and DMARC - I've not looked at DKIM yet.
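For anyone following along, those records look roughly like this in
zone-file syntax - example.com and the addresses here are illustrative
stand-ins, not my real values:

```
; hypothetical zone snippets - substitute your own domain and host
example.com.        IN MX  10 mail.example.com.
mail.example.com.   IN A   203.0.113.25
example.com.        IN TXT "v=spf1 mx -all"
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The SPF policy and DMARC "p=" value are choices, not requirements -
"-all" rejects non-matching senders outright, and "p=none" is a gentler
starting point than "p=quarantine" while you watch the reports.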
There were so many iterations on some of the files during last night's
Claude-aided fix that I'm not sure I could sensibly turn that into a
"correct sequence of actions"... but I need to document all the critical
changes, which I can do with the help of "diff" and write up in a guide.
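The diff step is simple enough if pristine copies of the files were
saved before editing - something along these lines (the .orig names are
just my convention, assuming backups were made first):

```shell
# unified diff of a single changed file against its backup
diff -u /etc/dovecot/dovecot.conf.orig /etc/dovecot/dovecot.conf

# or compare a whole backed-up tree in one recursive pass
diff -ru /etc/dovecot.orig/ /etc/dovecot/
```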
Since there aren't many Dovecot 2.4 guides out there yet, and as long as
it is permitted on this mailing list, I'm happy to reply with a copy of
the document once it's done. No idea how long that will take - life has
a habit of getting in the way.
I thought Gordon's comments about AI support for coding were spot on...
A couple of months ago I added the Github Copilot plugin to my local
instance of Visual Studio Code. After a couple of hours of use I found
something interesting... I would come to write a new method for an
object and once I got clear in my mind what the inputs and outputs
needed to be, I would then create an empty method and populate it with a
series of English-language comment statements that summarised the code I
needed to write.
You can probably imagine the sort of thing - to validate a user's
credentials I might have something like:-
Query the user table searching for a record with a UserID that matches
the provided value
If no records are returned, set the authentication_failed flag and return
If more than one record is returned, dump all relevant values to the log
file, then perform an HTTP redirect to the major_error script
If one record is returned, retrieve the password_salt value from the
record, append the user-provided password, then compute a hash on the
combined string...
... and so on...
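To make that concrete, the steps above might sketch out like this - in
Python rather than the PHP I actually wrote, and with a made-up users
table and SHA-256 standing in for whatever the real schema and hashing
were:

```python
import hashlib
import sqlite3

def authenticate(conn, user_id, password):
    """Follow the pseudo-code: look up the user, then check
    salt + supplied password against the stored hash."""
    rows = conn.execute(
        "SELECT password_salt, password_hash FROM users WHERE user_id = ?",
        (user_id,),
    ).fetchall()
    if not rows:
        return False  # the authentication_failed case
    if len(rows) > 1:
        # the real code dumped the rows to a log and redirected to an
        # error page; a duplicate user_id means the data is corrupt
        raise RuntimeError("duplicate user_id %r" % user_id)
    salt, stored_hash = rows[0]
    computed = hashlib.sha256((salt + password).encode()).hexdigest()
    return computed == stored_hash

# tiny in-memory demo of the happy and unhappy paths
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT, password_salt TEXT, password_hash TEXT)")
salt = "s0d1um"
pw_hash = hashlib.sha256((salt + "hunter2").encode()).hexdigest()
conn.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", salt, pw_hash))
print(authenticate(conn, "alice", "hunter2"))  # True
print(authenticate(conn, "alice", "wrong"))    # False
```

(In production you'd want a slow KDF like bcrypt or argon2 rather than
a single SHA-256 pass, but the structure mirrors the comments above.)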
What I found was that after I'd converted the first line or two of that
pseudo-code into actual PHP, Copilot would finish the job for me,
without waiting to be asked. The results were sometimes perfect and
always at least "very close" - a couple of minor edits would get me at
least 90% of the way there for each line. But... if I missed out a
critical step, it would almost always fail to prompt me to add it in.
Curiously, I've used both ChatGPT and Claude much more in my
professional life and there I used them to help me write a comprehensive
set of cyber and technology controls based on a synthesis of e.g. ISO
27k, NIST SP800-53, ISACA COBIT, ISF Standard of Good Practice, even ITIL4.
That turned out to be a task where the results were hugely dependent
upon very carefully crafting prompts and on having reduced the source
material to sufficiently granular statements. But with an hour of
trial-and-error and maybe a dozen "ranging shots" to get the prompt
tuned, I was then able to crank through person-years worth of effort in
a couple of days of elapsed time. Exceptional quality - and quite
shocking to see it do such a good job so quickly. But. There is a
*world* of difference between asking a model to write a business-English
summary of some dry technical documents and asking it to write
functional, syntactically-correct code. The former task is much more
forgiving of minor grammatical variations.
And as for *completely* different tasks - like impersonating my voice
after being given a 5-minute recording of me reading a few pages of
text, or having it compose and perform a song about raindrops, sung by
Freddie Mercury and performed like a number from Queen - the models are
getting scarily good - and I understand why artists are uncomfortable.
So - two main factors from what limited experience I've gained: they're
better at some things than others... and they still exhibit the most
basic principle of technology anywhere: garbage in... garbage out...
On 11/05/2026 09:58, Gordon Scott via Hampshire wrote:
> Hi all,
>
> I've 'lurked' on this because I have some, but not enough, of this
> setup ... Postfix and DKIM on a public-facing Raspberry Pi, and an
> internal Postfix Dovecot 2.3 on kubuntu; no MariaDB. It seems I was
> probably right to hold back.
>
> I'm also glad to hear you got it sorted.
>
>
> On 10/05/2026 22:12, Nick Chalk via Hampshire wrote:
>> Interesting. I have been reading similar positive
>> feedback recently. It seems the LLMs have reached
>> a point of being generally useful on technical
>> subjects.
>
>
> I've found similar, but the hard and fast rule is always to check/test
> what AI tells us.
>
> Some is helpful/useful, some is complete nonsense!
>
>
>> I read an article a few weeks ago, written by a
>> proponent of using LLMs in development, who stated
>> that they amplify a Software Engineer's deviation
>> from the mean.
>>
>> If you are an average programmer, the LLMs will
>> not help much. If you are a bad programmer, they
>> will feed you bug-ridden code and you will not
>> notice. However, if you are a good programmer then
>> they will take away the drudgery, leaving you more
>> time to concentrate on the heart of the problem.
>
>
> It seems AI is generally designed to 'flatter' the questioner a bit.
> It will agree with what one says, whether or not it's correct, which
> helps someone who knows the subject well, but can easily mislead
> someone who doesn't. I wonder about the weighting of answers
> that are fed back into the LLM.
>
>
>> I haven't touched them myself. I am concerned that
>> the copyright problem has not been resolved, so
>> I'm wary about shipping code that I'm not fully
>> responsible for.
>
>
> I also like to fully understand what the code actually does, in
> preference to what AI tells me it does. I think one can't say AI
> lies, but it certainly makes things up if it doesn't actually find a
> proper answer.
>
>>
>> I have come to the conclusion that the worst
>> people to document a piece of software are its
>> developers. One holds far too many unconscious
>> assumptions to be able to explain its use to
>> others.
>>
>> I highly respect the Technical Authors who can
>> take the obscure descriptions I throw at them, and
>> turn it into instructions that a complete novice
>> can follow.
>
>
> Agree entirely ... developers are usually too close and they know (or
> should!) how the software works and what it should do, and tend not to
> consider the "dumb" questions that non-programmers quite reasonably have.
>
>
> My one frustration about so many technical writers, though, is that
> I'll usually give them a techy-draft so that they can get started,
> only to find that they've lifted bits and rewritten them with poor
> grammar. One of the arguments for a tech author should be the old saw
> of "engineers don't really do English", so how come I so often have to
> ask ... politely and diplomatically ... to make the grammar sensible?
>
>
> Ho Hum.
>
>
> Gordon.
>
>
--
Please post to: Hampshire@???
Manage subscription:
https://mailman.lug.org.uk/mailman/listinfo/hampshire
LUG website:
http://www.hantslug.org.uk
--------------------------------------------------------------