[Openid-aiim] Generative AI should never be an agent without real-time evaluation
Tom Jones
thomasclinganjones at gmail.com
Tue Nov 4 20:55:26 UTC 2025
Generative AI can never be trusted as an agent on its own. In the brain, we see that generative network output must pass through an evaluation network before it is acted upon. This tension cuts to the heart of both AI governance and cognitive science: some process other than the generative one should evaluate the result.
Nearly everyone with a computing device now uses generative AI every day to create the text and video that support our daily digital lives. A significant fraction of that output, by intention or by accident, is anti-social, or worse. So how could it ever be possible to let a generative AI act on behalf of a human, as their agent?
The Dialectic
*Thesis:* *Generative AI can never be trusted as an agent.*

- Generative models (LLMs, diffusion models, etc.) are *stochastic pattern completers*, not intentional actors.
- They lack grounding, goals, or evaluative conscience.
- Left ungoverned, they will happily produce unsafe, incoherent, or manipulative outputs.
- Therefore, they cannot be trusted to act autonomously in the world.
*Antithesis:* *In the human brain, generative network output must pass through an evaluation network before action.*

- The *Default Mode Network (DMN)* of the human brain generates ideas, scenarios, and narratives.
- The *Executive Control Network (ECN)* and *Salience Network* evaluate, inhibit, or authorize those ideas.
- Action occurs only when the evaluative systems green-light generative proposals.
- This layered control is why humans can imagine wild things without acting on them.
Synthesis: Toward AI Control Architectures
The dialectic suggests a design principle:

- *Generative AI should never be an agent in itself.*
- Instead, it should be embedded in a *two-system architecture* (a minimal sketch follows this list):
  - *Generator*: produces options, plans, or hypotheses.
  - *Evaluator/Governor*: applies rules, risk rubrics, and provenance checks against human-created policy, with human-in-the-loop escalation before execution when necessary.
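
To make the principle concrete, here is a minimal sketch of that generator/evaluator loop in Python. Every name in it (Proposal, Verdict, risk_policy, the stubbed generator) is a hypothetical illustration, not an existing API; the point is only that the generator proposes while a separate evaluator, applying human-authored policy, decides whether anything executes or is held for human review.

# Minimal sketch of the two-system architecture described above.
# All names here are hypothetical illustrations, not a real library.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Verdict(Enum):
    APPROVE = auto()
    REJECT = auto()
    ESCALATE = auto()  # requires human-in-the-loop review


@dataclass
class Proposal:
    action: str
    rationale: str


def generator(goal: str) -> list[Proposal]:
    """Stochastic pattern completer: proposes options, never acts."""
    # In practice this would wrap a generative model call; stubbed here.
    return [Proposal(action=f"draft reply about {goal}", rationale="best match")]


def risk_policy(p: Proposal) -> Verdict:
    """Toy human-created rubric: block the forbidden, escalate the unclear."""
    if "delete" in p.action:
        return Verdict.REJECT
    if "payment" in p.action:
        return Verdict.ESCALATE
    return Verdict.APPROVE


def evaluator(p: Proposal, policy: Callable[[Proposal], Verdict]) -> Verdict:
    """Separate process: only this gate can authorize execution."""
    return policy(p)


def run_agent(goal: str) -> None:
    # Generate, then evaluate; nothing executes without a green light.
    for proposal in generator(goal):
        verdict = evaluator(proposal, risk_policy)
        if verdict is Verdict.APPROVE:
            print(f"EXECUTE: {proposal.action}")
        elif verdict is Verdict.ESCALATE:
            print(f"HOLD FOR HUMAN REVIEW: {proposal.action}")
        else:
            print(f"BLOCKED: {proposal.action}")


if __name__ == "__main__":
    run_agent("customer email")

Note that the authority to execute lives entirely in the evaluator: tightening the policy or inserting a human reviewer requires no change to the generative side, which is exactly the separation the dialectic calls for.
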
This mirrors the brain's "dream and doubt" cycle: imagination without inhibition is chaos; inhibition without imagination is paralysis. Safety comes from the oscillation.
"To say that all human thinking is essentially of two kinds — reasoning on the one hand, and narrative, descriptive, contemplative thinking on the other — is to say only what every reader's experience will corroborate." —William James
References

Menon, V. (2023). 20 years of the default mode network: A review and synthesis. Neuron.
<https://www.med.stanford.edu/content/dam/sm/scsnl/documents/Neuron_2023_Menon_20_years.pdf>

https://www.linkedin.com/posts/rptomj_generative-ai-should-never-be-an-agent-in-activity-7388346211600125953-pPPO
Peace ..tom jones