[Openid-aiim] IAM needs for Agentic AI and Path Forward

Richard Bird rbird at singulr.ai
Tue Jul 22 17:43:49 UTC 2025


Something I've been working on with several people at the conversational
level, including David Lee, Eric Olden, Aubrey Turner, Ian Glazer, and an
Identiverse bar conversation with Heather Flanagan and Andi Hindle.

As we begin or progress the conversation about identity ascription for AI
features, services, and agents, is there something we've not considered
about an unresolved problem within the identity domain?

That problem? We do an awful job of entity definition and classification,
both for old and new entities. Case in point - the vast majority of
conversations I've seen and been a part of (particularly with companies
already in the IAM/identity security solution space) are rushing to treat
agentic AI like a workforce identity. It's the classic "I have a hammer and
everything looks like a nail" response. We're so bad at entity definition
and classification that the brave new world of NHI (non-human identity) suggests we never had
the problem of contractor, machine, functional, and third-party identities
until around 2.5 years ago, rather than the actual 30-plus years that these
and other identity types have existed. NHI is a tacit recognition that
there are many different types of identities associated with many kinds of
entities - anyone who has ever tried to manage or solve for CIAM use cases
immediately recognizes this problem and reality. A consumer identity does
not fit in almost any identity stack or solution construct we've ever made.

So - AI agents, services, and features. AI is not human. AI is the
virtualization of a human function or set of human functions. Suggesting
otherwise is like saying one human blood cell or organ is human. But, AI
agents, services, and features are also NOT service accounts. Which leads
to the obvious question. Is AI a new type of entity? And if we define and
understand the entity, the identity ascription, rights, privileges, and
delegation become easier to define and understand in the context of that
entity type. And frankly, does entity type classification as a formal
structure within identity allow us to improve our capabilities to manage,
control, and secure all identity types correctly?
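
To make "entity type as a formal structure" concrete, here is a minimal Python sketch; the type names and policy fields are illustrative assumptions for discussion, not a proposed standard:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative entity types -- the names are assumptions, not a standard.
class EntityType(Enum):
    HUMAN_WORKFORCE = auto()
    HUMAN_CONSUMER = auto()
    CONTRACTOR = auto()
    SERVICE_ACCOUNT = auto()
    AI_AGENT = auto()  # neither a workforce identity nor a service account

@dataclass(frozen=True)
class EntityPolicy:
    requires_responsible_party: bool  # must a human/org be accountable for it?
    may_delegate: bool                # can it delegate authority onward?
    credential_lifetime_days: int     # how long before credentials rotate

# Rights and controls hang off the entity type, not the individual identity.
POLICY = {
    EntityType.HUMAN_WORKFORCE: EntityPolicy(False, True, 365),
    EntityType.HUMAN_CONSUMER:  EntityPolicy(False, False, 730),
    EntityType.CONTRACTOR:      EntityPolicy(True, False, 90),
    EntityType.SERVICE_ACCOUNT: EntityPolicy(True, False, 30),
    EntityType.AI_AGENT:        EntityPolicy(True, True, 1),
}

def policy_for(entity_type: EntityType) -> EntityPolicy:
    return POLICY[entity_type]
```

The point of the sketch is that ascription, delegation rights, and credential lifetime follow from the classification, so defining the entity type first makes the rest easier to define.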

For those who don't know me, I apologize for my contrarian stance. It's who
I am - by birth and by experience within identity and cybersecurity.
Hopefully, a few of the folks here who are old friends and colleagues will
confirm that I'm not a problem child.

Thanks for your patience and future kindness...

Rb



On Tue, Jul 22, 2025 at 11:00 AM Ayesha Dissanayaka via Openid-aiim <
openid-aiim at lists.openid.net> wrote:

> Hi everyone, Thank you for your comments and suggestions. Please keep them
> coming. I see that there have been some interesting discussions on the
> thread while I was away.
> The purpose of this brainstorming document is exactly to identify and
> commonly agree on what areas we should be focusing on from an identity and
> access management point of view when it comes to agentic AI, how to adopt
> and apply existing concepts and standards, and where we need extensions or
> innovations.
>
> I recently came across this nice article about many faces of agentic
> identities <https://cyata.ai/blog/many-faces-of-agentic-identities>,
> which discusses a behavioral classification of agents with real-world
> examples.
>
>    1. Agents acting as human counterparts
>    2. Agents acting on behalf of users
>    3. Agents acting as a hybrid of the two types above
>
> I agree that non-deterministic and completely autonomous agents add a lot
> of complexity to the existing infrastructure, but I don't think we can stop
> agents from getting there, given the pace of technological advancement,
> short of strict legal enforcement around building and deploying such
> agents.
> However, I do believe there needs to be a responsible party (a human/an
> organization or some legally bound entity) who is ultimately responsible
> for the agent, its actions, and side-effects. The party that the agent acts
> on behalf of can be different from the party who employs the agent.
>
> As some of you mentioned above, an agent's identity could be a service
> account, an application, a workload, something else, or something
> completely new. I don't want to ground agent identity in any one of these
> yet, but I believe agents should have some identity to uniquely identify
> them, credential(s) to prove that identity, and a responsible party.
>
> As Eve suggested, I completely agree that we could start with agents
> acting on a human delegator's command, which is the most common case today.
>
> Also, as Jeff suggested, I'm happy to improve this brainstorming document
> with community feedback and collaboration so that we have a commonly
> agreed base for what problems we should be solving in agent IAM, and
> what directions we should take.
> And of course, we can start a new document on agent assurance levels,
> which can be one of the many recommendations/guidelines that come out of
> AIIM-CG.
>
>
> On Tue, Jul 22, 2025 at 5:45 PM Richard Bird via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
>> Operational issues and considerations:
>>
>> 1. If an agent isn't fully autonomous, does it have agency, or is it
>> only a policy-bound entity? Is partial agency agency at all?
>>
>> 2. How long after broad acceptance of agentic AI until "human in the loop"
>> is an unsustainable transactional control model? (Several people in the
>> industry, Rock Lambros for example, believe the answer is almost
>> instantly; that human in the loop is fundamentally unachievable except,
>> possibly, for the highest-risk transactions.)
>>
>> 3. A log isn’t a control mechanism, it is a post-incident investigative
>> tool. Logs are lagging indicators, not leading indicators.
>>
>> 4. What control mechanisms can or will be used when agents begin to
>> delegate to other agents? Authorization mechanisms have never been
>> controlled by identity; DevOps has associated authorization calls with
>> apps and app functions, not with identities and roles, whether human or
>> agentic.
>>
>> 5. An immutable record of agentic actions, delegations, and changes could
>> be created using blockchain. But even then, the aberrant or errant behavior
>> of an agent would need to generate an action or trigger as a response or
>> mitigation to that behavior; otherwise the immutable ledger is just another
>> tool for post-event/incident research.
>>
>> 6. Ninety-plus percent of all AI exposure is external to our organizations
>> (and it seems that this percentage will remain well above 50% for the
>> foreseeable future). What do key assignment, identity ascription, and
>> identity control look like for a huge class of entities that we do not
>> directly control? This issue is why most OEM software companies are
>> assigning service accounts to their agents instead of identities: it
>> completely eliminates the overhead associated with identity management and
>> control, and it allows them to bypass T&C notification requirements for
>> code changes (i.e., retraining).
>>
>> Sorry for the ad hoc drive-by on this thread. I wasn't entirely sure of
>> the best way to engage, but I have found this thread very interesting. I'm
>> not now, nor have I ever been, a standards type; I'm an old grubby operator
>> by experience. But obviously this topic is hugely important for the future,
>> and I've been eyeballs deep in operationalized AI for a few years now.
>>
>> Kindest,
>>
>> Richard Bird
>>
>> Rb
>>
>> On Mon, Jul 21, 2025 at 11:52 PM Sachin Mamoru via Openid-aiim <
>> openid-aiim at lists.openid.net> wrote:
>>
>>> Hi Eleanor and Tom,
>>>
>>> Can't we use AI to govern the AI? I mean, one aspect of governing could
>>> be to track the agent's behaviour through logs. Periodic scans of these
>>> logs using AI can help us understand when the agent malfunctions (i.e.,
>>> when it does not perform the intended behaviour), and then rectify it.
>>> Full autonomy in agents will be difficult to achieve, especially since
>>> IAM is ultimately built on zero trust. In the chain of command, there
>>> should always be at least one human being to oversee this.
>>>
>>> Sachin.
>>>
>>> On Mon, 21 Jul 2025 at 10:49, Tom Jones via Openid-aiim <
>>> openid-aiim at lists.openid.net> wrote:
>>>
>>>> Interesting question: is there ever any reason whatsoever for a fully
>>>> autonomous agent? I hope the short-term answer is no.
>>>>
>>>> What is important is that a message sent from the agent must be clear.
>>>> Let's try to start with the assumption that any message from any agent
>>>> must include at least one delegation of authority if an action is to be
>>>> undertaken.
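
That assumption can be sketched as a simple gate; the field names ("action", "delegations", "delegator", "scope") are hypothetical, not a wire format:

```python
# Reject any action-bearing agent message that carries no delegation of
# authority. Field names here are illustrative assumptions only.
def assert_delegated(message: dict) -> None:
    if "action" not in message:
        return  # purely informational messages need no delegation
    delegations = message.get("delegations") or []
    if not delegations:
        raise PermissionError("action message carries no delegation of authority")
    for d in delegations:
        if not d.get("delegator") or not d.get("scope"):
            raise PermissionError("delegation missing delegator or scope")
```

A message like `{"action": "order", "delegations": [{"delegator": "user:alice", "scope": "purchasing"}]}` passes the gate; an action with no delegation is rejected before any work is undertaken.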
>>>>
>>>> Peace ..tom jones
>>>>
>>>>
>>>> On Sun, Jul 20, 2025 at 7:26 PM Eleanor Meritt <ehmeritt at gmail.com>
>>>> wrote:
>>>>
>>>>> At the most fundamental level we need to agree on what autonomy means
>>>>> for AI agents. Does that mean there is no logging of their behaviors? No
>>>>> monitoring? No failure handling? No intervention if “something goes wrong”?
>>>>> My gut feeling is that AI agents should always be monitored by humans as -
>>>>> and Ayesha said it - there is no guarantee that they will behave in the
>>>>> same way twice for the same requests.
>>>>>
>>>>> Then - getting philosophical - can we agree that every AI agent should
>>>>> always have an ultimately responsible human owner?
>>>>>
>>>>> Until we agree on fundamentals like this one, we won’t get very far on
>>>>> defining AIIM standards.
>>>>>
>>>>> Eleanor.
>>>>>
>>>>> On Sun, Jul 20, 2025 at 1:44 PM Lombardo, Jeff via Openid-aiim <
>>>>> openid-aiim at lists.openid.net> wrote:
>>>>>
>>>>>> I think we can thank Ayesha for putting forward the idea of a base that
>>>>>> can define the relations between a human and an agent, and between an
>>>>>> agent and a resource.
>>>>>>
>>>>>>
>>>>>>
>>>>>> There is certainly room for improvement in this first draft; Ayesha
>>>>>> candidly opened her text and requested feedback from this group.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Maybe the best approach is to propose a new formulation of the mental
>>>>>> model and its textual description, keeping in mind that this Community
>>>>>> Group is here to expose and document the current state and what needs to
>>>>>> be done to reach the best state, with whatever exists today or needs to
>>>>>> be created tomorrow.
>>>>>>
>>>>>>
>>>>>>
>>>>>> In this vein (pun intended), I think we should:
>>>>>> - comment wherever needed on Ayesha document to make it more robust
>>>>>> - start a new document on Agentic Assurance Levels
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Jean-François “Jeff” Lombardo* | Amazon Web Services
>>>>>>
>>>>>>
>>>>>>
>>>>>> Principal Solution Architect, Security Specialist
>>>>>> Montréal, Canada
>>>>>>
>>>>>> +1 514 778 5565
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *Thoughts on our interaction? Provide feedback **here*
>>>>>> <https://urldefense.com/v3/__https:/feedback.aws.amazon.com/?ea=jeffsec&fn=Jean*20Francois&ln=Lombardo__;JQ!!Pe07N362zA!0k9CkAV8Djpw_8EfIAKrbhP3TQrJr0oMnznlUgBJ3V3NoEk6hihx7dNHnQuejn6SSH2CP8Iow3G-tTzppHeg$>
>>>>>> *.*
>>>>>>
>>>>>>
>>>>>>
>>>>>> *From:* Openid-aiim <openid-aiim-bounces at lists.openid.net> *On
>>>>>> Behalf Of *Tom Jones via Openid-aiim
>>>>>> *Sent:* July 20, 2025 10:29 PM
>>>>>> *To:* Eve Maler <eve at vennfactory.com>
>>>>>> *Cc:* Tom Jones <thomasclinganjones at gmail.com>; peace at acm.org;
>>>>>> openid-aiim at lists.openid.net
>>>>>> *Subject:* RE: [EXT] [Openid-aiim] IAM needs for Agentic AI and Path
>>>>>> Forward
>>>>>>
>>>>>>
>>>>>>
>>>>>> *CAUTION*: This email originated from outside of the organization.
>>>>>> Do not click links or open attachments unless you can confirm the sender
>>>>>> and know the content is safe.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Those ideas are completely broken.
>>>>>>
>>>>>> If an agent, on behalf of a legal person, is allowed to order and pay
>>>>>> for goods, then a legal contract was created and satisfied.
>>>>>>
>>>>>> Anything else is not agency.
>>>>>>
>>>>>> So the question is, do we have an agent or not?
>>>>>>
>>>>>> https://www.law.cornell.edu/wex/agent
>>>>>>
>>>>>>
>>>>>>
>>>>>> Peace ..tom jones
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Jul 20, 2025 at 9:56 AM Eve Maler <eve at vennfactory.com>
>>>>>> wrote:
>>>>>>
>>>>>> Feeling philosophical today: Is there room to square this circle?
>>>>>>
>>>>>>
>>>>>>
>>>>>> There’s an emerging field of relational AI (vs. transactional —
>>>>>> behaviors vs. actions). I’ve been talking to the developer
>>>>>> <https://kaystoner.substack.com> of a number of custom GPTs that are
>>>>>> aligned with very precisely drawn personas — and, yes, have also been
>>>>>> playing with some of them. The outputs are indeed variable but the
>>>>>> behaviors are designed to provide certain kinds of interactive support.
>>>>>> Their design also includes some guardrails and some level of transparency.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Maybe what needs to come first, before we can trust a
>>>>>> high-autonomy-level transactional agent, is measurable behavioral alignment
>>>>>> with their human delegator (Agentic Assurance Level? :-) ). Perhaps only
>>>>>> then can we start to assess the alignment of any actions that agent takes.
>>>>>>
>>>>>>
>>>>>>
>>>>>> (Human delegates are not immune to misalignment with their delegator,
>>>>>> of course, which is why agency law and the concept of fiduciary duty exist.
>>>>>> I doubt AI agents will win humanlike legal status any time soon, but if
>>>>>> they are ever to get anywhere near it, they’ll need to solve these sorts of
>>>>>> issues.)
>>>>>>
>>>>>>
>>>>>>
>>>>>> Eve
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Eve Maler, president and founder
>>>>>>
>>>>>> Cell and Signal +1 (425) 345-6756 <+1-425-345-6756>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Jul 19, 2025, at 12:33 PM, Tom Jones via Openid-aiim <
>>>>>> openid-aiim at lists.openid.net> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> non-deterministic agents do present serious challenges to *trust*,
>>>>>> *security*, and *governance*. In domains like digital identity, law,
>>>>>> finance, and public infrastructure, *unpredictability* isn't just
>>>>>> inconvenient—it’s potentially *unacceptable*. Let’s break down why:
>>>>>> ⚠️ *Why Non-Determinism Breeds Unacceptability*
>>>>>>
>>>>>>    - *Inconsistent behavior*: Agents that act differently under the
>>>>>>    same conditions can’t be reliably audited or certified.
>>>>>>    - *Untraceable outputs*: It becomes hard to pinpoint cause,
>>>>>>    responsibility, or compliance status.
>>>>>>    - *Vulnerability to manipulation*: Adversaries can exploit
>>>>>>    probabilistic logic to induce unwanted outcomes.
>>>>>>    - *Loss of control*: Especially in systems involving user consent
>>>>>>    or legal transactions, determinism enables meaningful boundaries.
>>>>>>
>>>>>> The above is what a Bing bot thinks of this idea. I agree with it.
>>>>>>
>>>>>> Peace ..tom jones
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Jul 19, 2025 at 10:19 AM Ayesha Dissanayaka <
>>>>>> ayshsandu at gmail.com> wrote:
>>>>>>
>>>>>> Hi Tom,
>>>>>>
>>>>>>
>>>>>> Thank you for your input. Of course, defining an agent is a top
>>>>>> priority when considering IAM. It's the very first term in the taxonomy
>>>>>> document
>>>>>> <https://github.com/openid/cg-ai-identity-management/blob/main/deliverable/taxonomy.md> that
>>>>>> the CG is constructing. 😃
>>>>>>
>>>>>>
>>>>>>
>>>>>> Major AI framework providers have their own definitions for AI agents, as
>>>>>> I tried to summarize here
>>>>>> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.1iyru8xdjt9u>.
>>>>>> We can draw some inspiration from them when constructing a definition for
>>>>>> AI agents in the context of IAM for agents.
>>>>>>
>>>>>>
>>>>>> On your suggestion for the agent definition, the term "consistent
>>>>>> behavior" might not sit well with an agent, as agents are, by
>>>>>> design, non-deterministic and dynamic. If you ask an agent to do the same
>>>>>> thing twice, there is a fair chance that it will do the task differently,
>>>>>> unlike a traditional application or workload.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Jul 19, 2025 at 12:19 AM Tom Jones <
>>>>>> thomasclinganjones at gmail.com> wrote:
>>>>>>
>>>>>> You talk about giving AI agents an ID, but there appears to be no
>>>>>> definition of what an agent must be to deserve an ID.
>>>>>>
>>>>>> Let's do that. How about this:
>>>>>>
>>>>>>
>>>>>>
>>>>>> An agent is a persistent collection of software and language models,
>>>>>> together in a workload, with a consistent behavior (identity) for the
>>>>>> duration of the validity of an assigned Identifier.
>>>>>> An agent can be delegated authority by Entities, that is, by named
>>>>>> objects.
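
This definition can be rendered as data, under the assumption that "consistent behavior" is pinned by a digest of the software-plus-model bundle; the field names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    # "Consistent behavior (identity)" is approximated by pinning a digest
    # of the software + model bundle; if retraining or recomposition changes
    # the digest, the assigned identifier is no longer valid.
    identifier: str
    workload_digest: str
    valid_from: datetime
    valid_until: datetime
    responsible_party: str            # the entity that delegated authority
    delegations: list = field(default_factory=list)

    def is_valid(self, now: datetime, current_digest: str) -> bool:
        return (self.valid_from <= now < self.valid_until
                and current_digest == self.workload_digest)
```

The design choice worth debating is the digest: it ties the identifier's validity to the agent's composition, so a retrained model is, by construction, a new agent.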
>>>>>>
>>>>>>
>>>>>>
>>>>>> Peace ..tom jones
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jul 18, 2025 at 10:49 AM Ayesha Dissanayaka via Openid-aiim <
>>>>>> openid-aiim at lists.openid.net> wrote:
>>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>>
>>>>>> Thanks, everyone, for your comments and thoughts on the doc. I had a
>>>>>> great time discussing this during the CG meeting yesterday. Following up
>>>>>> on our discussion in the last CG meeting
>>>>>> <https://github.com/openid/cg-ai-identity-management/wiki/20250717-%E2%80%90-Meeting-notes:-July-17,-2025#ayeshas-agent-identity-discussion-iam-need-for-agentic-ai---brainstorming>,
>>>>>> I am moving this conversation to email so that it's easier to comment
>>>>>> and gather thoughts from everyone. Please refer to this document
>>>>>> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.0>
>>>>>> for detailed information.
>>>>>>
>>>>>> The complexity of AI-native applications, when considering GenAI,
>>>>>> has progressed through successive stages of complexity:
>>>>>>
>>>>>>    1. *Task-Specific AI:* Simple applications using LLMs for
>>>>>>    specific tasks like text generation.
>>>>>>
>>>>>>
>>>>>>    2. *RAG-Enabled AI:* Applications that can access and synthesize
>>>>>>    external knowledge bases.
>>>>>>
>>>>>>
>>>>>>    3. *Apps that include Agents:* Applications where agents can make
>>>>>>    decisions and execute tasks on a user's behalf.
>>>>>>
>>>>>>
>>>>>>    4. *Agent Teammates:* The current frontier, where agents act on
>>>>>>    their own accord and collaborate with humans in shared workflows.
>>>>>>
>>>>>> This evolution presents exciting opportunities, but it also brings a
>>>>>> new set of challenges, particularly in how we manage identity and access.
>>>>>> To ensure we build a secure and trustworthy ecosystem for these agents, we
>>>>>> need to establish a robust set of IAM best practices.
>>>>>>
>>>>>> Here are some of the key requirements that we should be thinking
>>>>>> about:
>>>>>>
>>>>>>    - *Seamless Integration:* Agents need to interact with existing
>>>>>>    systems, like those using OAuth, with minimal disruption.
>>>>>>
>>>>>>
>>>>>>    - *Flexible Action:* Agents should be able to act on their own or
>>>>>>    securely on behalf of a user or another entity.
>>>>>>
>>>>>>
>>>>>>    - *Just-in-Time Permissions:* To mitigate risks from the
>>>>>>    non-deterministic nature of agents, we need mechanisms for granting
>>>>>>    just-enough access, precisely when it's needed.
>>>>>>
>>>>>>
>>>>>>    - *Clear Accountability:* There must be a designated responsible
>>>>>>    party for an agent's actions.
>>>>>>
>>>>>>
>>>>>>    - *Auditable Traceability:* All agent actions should be traceable
>>>>>>    back to their identity and the delegating authority.
>>>>>>
>>>>>>
>>>>>>    - *Agent-Specific Controls:* Resource servers may need to
>>>>>>    identify and apply specific controls for actions initiated by agents.
>>>>>>
>>>>>>
>>>>>>    - *Lifecycle Management:* We need clear governance for the entire
>>>>>>    lifecycle of an agent, from onboarding to decommissioning.
>>>>>>
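
For the just-in-time, on-behalf-of requirements above, one plausible building block is OAuth 2.0 Token Exchange (RFC 8693), where the delegating user's token is the subject and the agent's own token is the actor. A minimal sketch of the request parameters (the scope value is a placeholder; the grant-type and token-type URNs are the ones the RFC defines):

```python
# Sketch of a just-in-time, on-behalf-of grant via OAuth 2.0 Token Exchange
# (RFC 8693). The returned dict is the form body an agent's client would
# post to the authorization server's token endpoint.
def build_token_exchange_request(user_token: str, agent_token: str,
                                 scope: str) -> dict:
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_token,   # the delegating user
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "actor_token": agent_token,    # the agent acting on the user's behalf
        "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": scope,  # just-enough access, requested when it's needed
    }
```

The authorization server can then issue a short-lived token whose actor ("act") claim records the agent, which gives resource servers both the agent-specific control point and the auditable trace back to the delegating authority.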
>>>>>> This is a pivotal moment for us to lead the way in defining the
>>>>>> standards and best practices that will shape the future of agentic AI. To
>>>>>> get the ball rolling, let's consider a few key questions:
>>>>>>
>>>>>>    1. Where can we apply * existing standards and best practices*?
>>>>>>
>>>>>>
>>>>>>    2. What are the *novel problems* that existing solutions can't
>>>>>>    address?
>>>>>>
>>>>>>
>>>>>>    3. Where do we need to *extend current standards or innovate*?
>>>>>>
>>>>>>
>>>>>>    4. How should an *agent's identity* be defined and structured?
>>>>>>
>>>>>>
>>>>>>    5. How do we develop a shared vocabulary for scenarios, actors, and
>>>>>>    challenges?
>>>>>>
>>>>>>
>>>>>>    - Happening at
>>>>>>       https://github.com/openid/cg-ai-identity-management/blob/main/deliverable/taxonomy.md
>>>>>>       as initiated at AIIM-CG
>>>>>>
>>>>>> Please share your thoughts, any references, and any ideas you might
>>>>>> have on the above.
>>>>>>
>>>>>> Looking forward to continuing the discussion.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Jul 9, 2025 at 10:04 PM Ayesha Dissanayaka <
>>>>>> ayshsandu at gmail.com> wrote:
>>>>>>
>>>>>> Thanks, Alex, for the comments.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Jul 7, 2025 at 8:41 PM Alex Babeanu <
>>>>>> alex.babeanu at indykite.com> wrote:
>>>>>>
>>>>>> Added some comments to the doc, thanks for sharing Ayesha. This could
>>>>>> serve as a starting point for discussion...
>>>>>>
>>>>>> A side question: could we use a common shared drive for such docs and
>>>>>> material?
>>>>>>
>>>>>> Sure, if the CG has such a shared space, I can move the doc there.
>>>>>>
>>>>>> Athul <atul at sgnl.ai>, do we have such a space for the AIIM CG?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>>
>>>>>>
>>>>>> ./\.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Jul 3, 2025 at 10:56 AM Ayesha Dissanayaka <
>>>>>> ayshsandu at gmail.com> wrote:
>>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>>
>>>>>>
>>>>>> It's great to be part of this exciting community to discuss IAM for
>>>>>> the Agentic Era.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Bubbling up a discussion in the Slack channel, I'm sharing this
>>>>>> analysis on emerging IAM challenges from Agentic AI
>>>>>> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.0#heading=h.secnaj745bir>
>>>>>> systems that now function as autonomous workforce members, and how we can
>>>>>> approach addressing them.
>>>>>>
>>>>>> I'd love to hear the working group's thoughts on this, and to
>>>>>> collaborate on extending this work to commonly identify the IAM problems
>>>>>> we need to solve for agentic AI systems, and how.
>>>>>>
>>>>>> I'm happy to discuss these findings at an upcoming meeting. Till
>>>>>> then, let's collaborate on the mailing list and in the doc
>>>>>> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.0#heading=h.secnaj745bir>
>>>>>> itself.
>>>>>>
>>>>>> Cheers!
>>>>>>
>>>>>> - Ayesha
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Openid-aiim mailing list
>>>>>> Openid-aiim at lists.openid.net
>>>>>> https://lists.openid.net/mailman/listinfo/openid-aiim
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>>
>>>>>> * Alex Babeanu*
>>>>>> Lead Product Manager, AI Control Suite
>>>>>>
>>>>>> t. +1 604 728 8130
>>>>>> e. alex.babeanu at indykite.com
>>>>>> w. www.indykite.com
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Sachin Mamoru
>>> Senior Software Engineer, WSO2
>>> +94771292681
>>> | sachinmamoru.me  <https://sachinmamoru.me>
>>> sachinmamoru at gmail.com  <sachinmamoru at gmail.com>
>>> <https://www.linkedin.com/in/sachin-mamoru/>
>>> <https://twitter.com/MamoruSachin>
>>>
>>>
>>
>