[Openid-aiim] IAM needs for Agentic AI and Path Forward
Pavindu Lakshan
pavindulakshan at gmail.com
Thu Jul 24 04:29:31 UTC 2025
Hi Alex,
> Accountability is tricky... you can't have a human in the loop for every
> action an agent takes (I wouldn't want to be that poor human having to
> approve 1000's of daily Agent actions), but indeed the organization putting
> an Agent online should be liable for its actions... I think that's a
> different matter though, prob out-of-scope for us here, isn't it?
Accountability doesn't always have to mean a human in the loop, does it? It
can also be thought of as something closer to a "manager" role. A manager
doesn't constantly get in the way of employees. Instead, they set the
policies and procedures and intervene only when necessary, yet they take
responsibility for the team they manage.
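In practice, that "manager" model could be a guardrail policy that lets the agent act autonomously within bounds and escalates only the exceptions to a human. A minimal sketch (the action types and spend limit are made-up illustrations, not from any spec):

```python
# Hypothetical "manager" policy: auto-approve routine agent actions,
# escalate only exceptional ones to a human, deny everything else.
APPROVAL_LIMIT = 500  # illustrative: max spend the agent may authorize alone

def review_action(action: dict) -> str:
    """Return 'approved', 'escalated', or 'denied' for one agent action."""
    if action["type"] not in {"read", "notify", "purchase"}:
        return "denied"  # outside the agent's chartered duties
    if action["type"] == "purchase" and action["amount"] > APPROVAL_LIMIT:
        return "escalated"  # the human "manager" intervenes only here
    return "approved"
```

The human stays accountable by owning the policy, not by approving each of the thousands of daily actions.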
On Wed, Jul 23, 2025 at 11:27 PM Eleanor Meritt via Openid-aiim <
openid-aiim at lists.openid.net> wrote:
> Some thoughts:
>
>
> - Standards must support a capability for human accountability. Almost
> all business use cases will require this with the use of Agentic AI. The
> NIST AI RMF does say that mechanisms for accountability and documentation
> should be in place.
> - Logging and monitoring frameworks should also be supported by
> standards. Whether or not an AI Agent makes use of them is not something
> that can be generally controlled. However, I am willing to bet that
> regulated industries and governments will require them. While logging
> captures after-the-fact events, it's going to be essential for detecting
> insidious behaviors and preventing future failures or breaches.
> - Whether or not humans need to be involved in signing off on Agentic
> AI activities is really an implementation decision on behalf of the
> "developer" of an AI Agent. We should not worry about approval fatigue as
> part of this committee.
> - My own opinion is that AI Agents should have their own distinct IAM
> entity. They are not workforce, or consumers or partners. Categorizing them
> as workloads also seems too broad, as classic SaaS applications are also
> workload-based. We need to be able to detect AI Agents and apply specific
> management capabilities. We also need to worry about more subtle forms of
> security breaches, like manipulation of LLM / RAG training data.
>
>
> -Eleanor.
> ------------------------------
> *From:* Openid-aiim <openid-aiim-bounces at lists.openid.net> on behalf of
> Alex Babeanu via Openid-aiim <openid-aiim at lists.openid.net>
> *Sent:* Wednesday, July 23, 2025 10:02 AM
> *To:* Shahar Tal <shahar at cyata.ai>
> *Cc:* openid-aiim at lists.openid.net <openid-aiim at lists.openid.net>
> *Subject:* Re: [Openid-aiim] IAM needs for Agentic AI and Path Forward
>
> Accountability is tricky... you can't have a human in the loop for every
> action an agent takes (I wouldn't want to be that poor human having to
> approve 1000's of daily Agent actions), but indeed the organization putting
> an Agent online should be liable for its actions... I think that's a
> different matter though, prob out-of-scope for us here, isn't it?
>
> AI Identity: Agents and Models are trained on human data. They WILL
> reproduce human behaviour, including lying and scheming (research already
> shows this behaviour), therefore issuing human-like identities makes sense.
> Still, informing actual humans that they are communicating with an AI
> also seems important, so Agents need human-like identities that are
> nevertheless distinguishable from human ones.
>
> For my part, I think the work done within the OID4VC group is highly
> relevant... Issuing VCs to agents makes complete sense: VCs tick all the
> boxes of our common doc here. You can tie them back to their orgs (the
> issuing, and hence responsible, party), establish trust between
> Agent creators (issuers), and in general authenticate the holder (the
> Agent). Once authenticated this way, agents could still be issued
> regular OAuth tokens, or even transaction tokens (TraTs), or in general
> follow any existing OAuth spec... no conflict there; the two coexist.
>
> Also, claims in those VCs could clearly identify the Holder as an AI
> entity. Problem solved.
>
> [Note that VCs are not so different, conceptually, from SPIFFE Verifiable
> Identity Documents (SVIDs): cryptographically provable identities that can
> be tied back to an issuer...]
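As a rough illustration, the payload of such an agent VC might carry claims like the following; the "entityType" and "responsibleParty" claim names are hypothetical, while @context, type, issuer, and credentialSubject follow the W3C VC data model:

```python
# Sketch of a Verifiable Credential issued by an organization to its agent.
# Claim names beyond the core VC data model are invented for illustration.
agent_vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "AgentCredential"],
    "issuer": "did:web:example-org.com",      # issuing, hence responsible, org
    "credentialSubject": {
        "id": "did:example:agent-1234",       # the holder (the agent)
        "entityType": "ai-agent",             # distinguishes it from humans
        "responsibleParty": "did:web:example-org.com",
    },
}

def is_ai_agent(vc: dict) -> bool:
    """A verifier could use a claim like this to tell AI holders apart."""
    return vc.get("credentialSubject", {}).get("entityType") == "ai-agent"
```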
>
> Cheers,
>
> ./\.
>
> On Wed, Jul 23, 2025 at 1:04 AM Shahar Tal via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
> Absolutely agree, Ayesha - human accountability is a key pillar in any
> viable model for agent identity. Whether it’s an individual owner, an
> organizational function, or a subject matter expert, the essential element
> is that *someone* must ultimately be accountable. And as you noted, this
> doesn’t always mean the agent is acting *on behalf of* that person -
> accountability and delegation are often distinct.
>
>
>
> Also very glad to see more and more voices acknowledging that agentic
> identities deserve their own category, not just a bucket under NHIs. In
> many ways, agents are more human than NHIs - and sometimes more human
> than humans themselves. They replicate human intent, amplify both
> decision-making and mistake-making, and introduce new risks at
> unprecedented speed and scale. That demands a new conceptual and technical
> foundation,
> and we’re glad to take part and see this group moving the conversation
> forward.
>
>
>
> And a quick apology - we refreshed our website yesterday and the blog
> link was briefly unavailable. It’s fully accessible again now
> https://www.cyata.ai/blog/many-faces-of-agentic-identities. Appreciate
> you referencing it!
>
>
>
> Shahar
>
>
>
> *From: *Openid-aiim <openid-aiim-bounces at lists.openid.net> on behalf of
> Ayesha Dissanayaka via Openid-aiim <openid-aiim at lists.openid.net>
> *Date: *Tuesday, 22 July 2025 at 18:26
> *To: *openid-aiim at lists.openid.net <openid-aiim at lists.openid.net>
> *Subject: *Re: [Openid-aiim] IAM needs for Agentic AI and Path Forward
>
>
>
>
>
> On Tue, Jul 22, 2025 at 10:29 PM Ayesha Dissanayaka <ayshsandu at gmail.com>
> wrote:
>
> Hi everyone, Thank you for your comments and suggestions. Please keep them
> coming. I see that there have been some interesting discussions on the
> thread while I was away.
>
> The purpose of this brainstorming document is exactly to identify and
> commonly agree on what areas we should be focusing on from an identity and
> access management point of view when it comes to agentic AI, how to adopt
> and apply existing concepts and standards, and where we need extensions or
> innovations.
>
>
> I recently came across this nice article about many faces of agentic
> identities <https://cyata.ai/blog/many-faces-of-agentic-identities>,
> which discusses a behavioral classification of agents with real-world
> examples.
>
> 1. Agents acting as human counterparts
>
> 2. Agents acting on behalf of users
>
> 3. Agents acting as a hybrid of the two types above
>
> I agree that non-deterministic and completely autonomous agents add lots
> of complexity to the existing infrastructure, but I don't think we can stop
> agents from getting there, given the latest technological advancements and
> the absence of strict legal enforcement around building and deploying such
> agents.
>
> However, I do believe there needs to be a responsible party (a human/an
> organization or some legally bound entity) who is ultimately responsible
> for the agent, its actions, and side-effects. The party that the agent acts
> on behalf of can be different from the party who employs the agent.
>
> As some of you mentioned above, an agent's identity could be modeled as a
> service account, an application, a workload, something else, or something
> completely new. I don't want to ground agent identity in any one of these
> yet. But I believe agents should have some identity that uniquely
> identifies them, credential(s) to prove that identity, and a responsible
> party.
>
>
>
> As Eve suggested, I completely agree that we could start with agents
> acting on a human delegator's command, which is the most common case today.
>
>
>
> Also, as Jeff suggested, I'm happy to improve this brainstorming document
> with the community feedback and collaboration so that we have a commonly
> agreeable base for what problems we should be solving for agent IAM, and
> what directions we should take.
>
> And of course, we can start a new document on agent assurance levels,
> which can be one of the many recommendations/guidelines that come out of
> AIIM-CG.
>
>
>
> P.S. Please don't hesitate to edit the document
> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.0> with
> your suggestions. It's open to community text collaboration.
>
>
>
>
>
>
>
> On Tue, Jul 22, 2025 at 5:45 PM Richard Bird via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
> Operational issues and considerations:
>
>
>
> 1. If an agent isn’t fully autonomous, does it have agency, or is it only
> a policy-bound entity? Is partial agency agency at all?
>
>
>
> 2. How long after broad acceptance of agentic AI until “human in the
> loop” becomes an unsustainable transactional control model? (Several
> people in the industry - Rock Lambros, for example - believe the answer is
> almost instantly: that human in the loop is fundamentally unachievable
> except possibly for the highest-risk transactions.)
>
>
>
> 3. A log isn’t a control mechanism, it is a post-incident investigative
> tool. Logs are lagging indicators, not leading indicators.
>
>
>
> 4. What control mechanisms can or will be used when agents begin to
> delegate to other agents? Authorization mechanisms have never been
> controlled by identity; DevOps practice has associated authorization calls
> with apps and app functions - not with identities and roles, whether human
> or agentic.
>
>
>
> 5. An immutable record of agentic actions, delegations, and changes could
> be created using blockchain - but even then, the aberrant or errant
> behavior of an agent would need to generate an action or trigger as a
> response or mitigation to that behavior - otherwise the immutable ledger is
> just another tool for post-event/incident research.
>
>
>
> 6. Ninety-plus percent of all AI exposure is external to our
> organizations (and it seems this percentage will remain well above 50% for
> the foreseeable future). What do key assignment, identity ascription, and
> identity control look like for a huge class of entities that we do not have
> direct control over? This issue is why most OEM software companies are
> assigning service accounts to their agents instead of identities. It
> completely eliminates the overhead associated with identity management and
> control, and it allows them to bypass T&C notification requirements for
> code changes (i.e., retraining).
>
>
>
> Sorry for the ad hoc drive-by on this thread. I wasn't entirely sure what
> the best way to engage was, but I have found this thread very interesting.
> I'm not now, nor have I ever been, a standards type - I'm an old grubby
> operator by experience. But obviously this topic is hugely important for
> the future, and I've been eyeballs-deep in operationalized AI for a few
> years now.
>
>
>
> Kindest,
>
>
>
> Richard Bird
>
>
>
> Rb
>
>
>
> On Mon, Jul 21, 2025 at 11:52 PM Sachin Mamoru via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
> Hi Eleanor and Tom,
>
>
>
> Can't we use AI to govern the AI? I mean, one aspect of governing could be
> to track the Agent's behaviour through logs. Periodic scans of these logs
> using AI can help us understand when the Agent malfunctions (i.e., when it
> does not perform the intended behaviour), and then rectify it.
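A toy version of that periodic scan might look like this; the log schema and the per-agent "intended behaviour" sets are invented for illustration:

```python
# Hypothetical scan: flag log entries where an agent acted outside its
# declared set of intended actions.
INTENDED = {"agent-42": {"search", "summarize"}}

def scan_logs(entries: list[dict]) -> list[dict]:
    """Return the entries that deviate from the agent's intended behaviour."""
    return [e for e in entries
            if e["action"] not in INTENDED.get(e["agent"], set())]

logs = [
    {"agent": "agent-42", "action": "search"},
    {"agent": "agent-42", "action": "delete_records"},  # a malfunction
]
```

In a real deployment the scanner itself could be a model, with the flagged entries routed to a human overseer.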
>
> Full autonomy in Agents will be difficult to achieve, especially since
> IAM ultimately operates on zero trust. In the chain of command, there
> should always be at least one human being overseeing this.
>
>
>
> Sachin.
>
>
>
> On Mon, 21 Jul 2025 at 10:49, Tom Jones via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
> Interesting question - is there ever any reason whatsoever for a fully
> autonomous agent? I hope the short-term answer is no.
>
>
>
> What is important is that a message sent from the agent must be clear.
>
> Let's try to start with the assumption that any message from any agent
> must include at least one delegation of authority if an action is to be
> undertaken.
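That assumption can be enforced mechanically at the receiving end: reject any action-bearing message that names no delegation of authority. A sketch, with hypothetical envelope fields:

```python
# Hypothetical check: an agent message requesting an action must carry
# at least one delegation of authority; informational messages need none.
def accept_message(msg: dict) -> bool:
    if not msg.get("action"):
        return True  # no action requested, no delegation required
    return len(msg.get("delegations", [])) >= 1
```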
>
>
>
> Peace ..tom jones
>
>
>
>
>
> On Sun, Jul 20, 2025 at 7:26 PM Eleanor Meritt <ehmeritt at gmail.com> wrote:
>
> At the most fundamental level, we need to agree on what autonomy means for
> AI agents. Does it mean there is no logging of their behaviors? No
> monitoring? No failure handling? No intervention if “something goes wrong”?
> My gut feeling is that AI agents should always be monitored by humans
> since, as Ayesha said, there is no guarantee that they will behave the
> same way twice for the same requests.
>
>
>
> Then - getting philosophical - can we agree that every AI agent should
> always have an ultimately responsible human owner?
>
>
>
> Until we agree on fundamentals like this one, we won’t get very far on
> defining AIIM standards.
>
>
>
> Eleanor.
>
>
>
> On Sun, Jul 20, 2025 at 1:44 PM Lombardo, Jeff via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
> I think we can thank Ayesha for putting forward the idea of bases that can
> define the relations between a human and an agent, and between an agent
> and a resource.
>
>
> There is certainly room for improvement in this first draft; Ayesha
> candidly opened her text and requested feedback from this group.
>
>
>
> Maybe the best approach is to propose new formulations for the mental
> model and its text description, keeping at heart that this Community Group
> is here to expose and document the current state and what needs to be done
> to reach the best state, with whatever exists today or needs to be created
> tomorrow.
>
>
>
> In this vein (pun intended), I think we should:
> - comment wherever needed on Ayesha document to make it more robust
> - start a new document on Agentic Assurance Levels
>
>
>
> *Jean-François “Jeff” Lombardo* | Amazon Web Services
>
>
>
> Architecte Principal de Solutions, Spécialiste de Sécurité
> Principal Solution Architect, Security Specialist
> Montréal, Canada
>
> +1 514 778 5565
>
> *Thoughts on our interaction? Provide feedback **here*
> <https://urldefense.com/v3/__https:/feedback.aws.amazon.com/?ea=jeffsec&fn=Jean*20Francois&ln=Lombardo__;JQ!!Pe07N362zA!0k9CkAV8Djpw_8EfIAKrbhP3TQrJr0oMnznlUgBJ3V3NoEk6hihx7dNHnQuejn6SSH2CP8Iow3G-tTzppHeg$>
> *.*
>
>
>
> *From:* Openid-aiim <openid-aiim-bounces at lists.openid.net> *On Behalf Of *Tom
> Jones via Openid-aiim
> *Sent:* July 20, 2025 10:29 PM
> *To:* Eve Maler <eve at vennfactory.com>
> *Cc:* Tom Jones <thomasclinganjones at gmail.com>; peace at acm.org;
> openid-aiim at lists.openid.net
> *Subject:* RE: [EXT] [Openid-aiim] IAM needs for Agentic AI and Path
> Forward
>
>
>
> *CAUTION*: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
>
>
>
> Those ideas are completely broken.
>
> If an agent, on behalf of a legal person, is allowed to order and pay for
> goods, then a legal contract was created and satisfied.
>
> Anything else is not agency.
>
> So the question is, do we have an agent or not?
>
> https://www.law.cornell.edu/wex/agent
>
>
>
> Peace ..tom jones
>
>
>
>
>
> On Sun, Jul 20, 2025 at 9:56 AM Eve Maler <eve at vennfactory.com> wrote:
>
> Feeling philosophical today: Is there room to square this circle?
>
>
>
> There’s an emerging field of relational AI (vs. transactional — behaviors
> vs. actions). I’ve been talking to the developer
> <https://kaystoner.substack.com> of a number of custom GPTs that are
> aligned with very precisely drawn personas — and, yes, have also been
> playing with some of them. The outputs are indeed variable but the
> behaviors are designed to provide certain kinds of interactive support.
> Their design also includes some guardrails and some level of transparency.
>
>
>
> Maybe what needs to come first, before we can trust a high-autonomy-level
> transactional agent, is measurable behavioral alignment with their human
> delegator (Agentic Assurance Level? :-) ). Perhaps only then can we start
> to assess the alignment of any actions that agent takes.
>
>
>
> (Human delegates are not immune to misalignment with their delegator, of
> course, which is why agency law and the concept of fiduciary duty exist. I
> doubt AI agents will win humanlike legal status any time soon, but if they
> are ever to get anywhere near it, they’ll need to solve these sorts of
> issues.)
>
>
>
> Eve
>
>
>
> Eve Maler, president and founder
>
> Cell and Signal +1 (425) 345-6756 <+1-425-345-6756>
>
>
>
> On Jul 19, 2025, at 12:33 PM, Tom Jones via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
>
>
> non-deterministic agents do present serious challenges to *trust*, *
> security*, and * governance*. In domains like digital identity, law,
> finance, and public infrastructure, *unpredictability* isn't just
> inconvenient—it’s potentially *unacceptable*. Let’s break down why:
>
> ⚠️ *Why Non-Determinism Breeds Unacceptability*
>
> · *Inconsistent behavior*: Agents that act differently under the
> same conditions can’t be reliably audited or certified.
>
> · *Untraceable outputs*: It becomes hard to pinpoint cause,
> responsibility, or compliance status.
>
> · *Vulnerability to manipulation*: Adversaries can exploit
> probabilistic logic to induce unwanted outcomes.
>
> · *Loss of control*: Especially in systems involving user consent
> or legal transactions, determinism enables meaningful boundaries.
>
> The above is what a Bing bot thinks of this idea. I agree with it.
>
> Peace ..tom jones
>
>
>
>
>
> On Sat, Jul 19, 2025 at 10:19 AM Ayesha Dissanayaka <ayshsandu at gmail.com>
> wrote:
>
> Hi Tom,
>
>
> Thank you for your input. Of course, defining an agent is a top priority
> when considering IAM. It's the very first term in the taxonomy document
> <https://github.com/openid/cg-ai-identity-management/blob/main/deliverable/taxonomy.md> that
> the CG is constructing. 😃
>
>
>
> Major AI framework providers have their own definitions for AI agents, as
> I tried to summarize here
> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.1iyru8xdjt9u>.
> We can draw some inspiration from them when constructing a definition for
> AI agents in the context of IAM for Agents.
>
>
> On your suggestion for the agent definition, the term "consistent
> behavior" might not sit well with an agent, as agents are, by design,
> non-deterministic and dynamic. If you ask an agent to do the same thing
> twice, there is a fair chance it will do the task differently, unlike a
> traditional application or workload.
>
>
>
>
>
> On Sat, Jul 19, 2025 at 12:19 AM Tom Jones <thomasclinganjones at gmail.com>
> wrote:
>
> You talk about giving AI agents an ID, but there appears to be no
> definition of what an agent must be to deserve one.
> Let's do that - how about this:
> Let's do that - how about this.
>
>
>
> An agent is a persistent collection of software and language models
> together in a workload with a consistent behavior (identity) for the
> duration of the validity of an assigned Identifier.
> An agent can be delegated authority by Entities, that is by named objects.
>
>
>
> Peace ..tom jones
>
>
>
>
>
> On Fri, Jul 18, 2025 at 10:49 AM Ayesha Dissanayaka via Openid-aiim <
> openid-aiim at lists.openid.net> wrote:
>
> Hi All,
>
>
> Thanks, everyone, for your comments and thoughts on the doc. And I had
> a great time discussing this during the CG meeting yesterday. Following up
> on our discussion in the last CG meeting
> <https://github.com/openid/cg-ai-identity-management/wiki/20250717-%E2%80%90-Meeting-notes:-July-17,-2025#ayeshas-agent-identity-discussion-iam-need-for-agentic-ai---brainstorming>,
> I am moving this conversation to email so that it's easier to comment and
> gather thoughts from everyone. Please refer to this document
> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.0>
> for detailed information.
>
> AI-native applications, in the GenAI era, have progressed through stages
> of increasing complexity:
>
> 1. *Task-Specific AI:* Simple applications using LLMs for specific
> tasks like text generation.
>
> 2. *RAG-Enabled AI:* Applications that can access and synthesize
> external knowledge bases.
>
> 3. *Apps that include Agents:* Applications where agents can make
> decisions and execute tasks on a user's behalf.
>
> 4. *Agent Teammates:* The current frontier, where agents act of
> their own accord and collaborate with humans in shared workflows.
>
> This evolution presents exciting opportunities, but it also brings a new
> set of challenges, particularly in how we manage identity and access. To
> ensure we build a secure and trustworthy ecosystem for these agents, we
> need to establish a robust set of IAM best practices.
>
> Here are some of the key requirements that we should be thinking about:
>
> · *Seamless Integration:* Agents need to interact with existing
> systems, like those using OAuth, with minimal disruption.
>
> · *Flexible Action:* Agents should be able to act on their own or
> securely on behalf of a user or another entity.
>
> · *Just-in-Time Permissions:* To mitigate risks from the
> non-deterministic nature of agents, we need mechanisms for granting
> just-enough access, precisely when it's needed.
>
> · *Clear Accountability:* There must be a designated responsible
> party for an agent's actions.
>
> · *Auditable Traceability:* All agent actions should be traceable
> back to their identity and the delegating authority.
>
> · *Agent-Specific Controls:* Resource servers may need to
> identify and apply specific controls for actions initiated by agents.
>
> · *Lifecycle Management:* We need clear governance for the entire
> lifecycle of an agent, from onboarding to decommissioning.
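The just-in-time permissions requirement, for example, could be approximated with short-lived, single-scope grants; a sketch in which the field names and lifetimes are illustrative, not taken from any OAuth spec:

```python
# Hypothetical just-in-time grant: scope and lifetime are bound to the
# single task the agent is about to perform.
import time

def issue_jit_grant(agent_id: str, scope: str, ttl_seconds: int = 60) -> dict:
    """Mint a grant valid for exactly one scope and a short time window."""
    now = time.time()
    return {"sub": agent_id, "scope": scope, "iat": now, "exp": now + ttl_seconds}

def grant_allows(grant: dict, scope: str) -> bool:
    return grant["scope"] == scope and time.time() < grant["exp"]
```

An expired or out-of-scope grant simply stops working, which narrows the blast radius of a misbehaving agent.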
>
> This is a pivotal moment for us to lead the way in defining the standards
> and best practices that will shape the future of agentic AI. To get the
> ball rolling, let's consider a few key questions:
>
> 1. Where can we apply *existing standards and best practices*?
>
> 2. What are the *novel problems* that existing solutions can't
> address?
>
> 3. Where do we need to *extend current standards or innovate*?
>
> 4. How should an *agent's identity* be defined and structured?
>
> 5. How do we develop a shared vocabulary for scenarios, actors, and
> challenges?
>
> o This is happening at
> https://github.com/openid/cg-ai-identity-management/blob/main/deliverable/taxonomy.md
> as initiated in the AIIM-CG
>
> Please share your thoughts, any references, and any ideas you might have
> on the above.
>
> Looking forward to continuing the discussion.
>
>
>
>
>
> On Wed, Jul 9, 2025 at 10:04 PM Ayesha Dissanayaka <ayshsandu at gmail.com>
> wrote:
>
> Thanks, Alex, for the comments.
>
>
>
> On Mon, Jul 7, 2025 at 8:41 PM Alex Babeanu <alex.babeanu at indykite.com>
> wrote:
>
> Added some comments to the doc, thanks for sharing Ayesha. This could
> serve as a starting point for discussion...
>
> A side question: could we use a common shared drive for such docs and
> material?
>
> Sure, if the CG has such a shared space, I can move the doc there.
>
> Athul <atul at sgnl.ai>, do we have any such for the AIIM CG?
>
>
>
>
>
> Cheers,
>
>
>
> ./\.
>
>
>
> On Thu, Jul 3, 2025 at 10:56 AM Ayesha Dissanayaka <ayshsandu at gmail.com>
> wrote:
>
> Hi All,
>
>
>
> It's great to be part of this exciting community to discuss IAM for the
> Agentic Era.
>
>
>
> Bubbling up a discussion in the Slack channel, I'm sharing this analysis
> on emerging IAM challenges from Agentic AI
> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.0#heading=h.secnaj745bir>
> systems that now function as autonomous workforce members, and how we can
> approach addressing them.
>
> I'd love to hear the working group's thoughts on this, and to collaborate
> on extending this work so we commonly identify the IAM problems that need
> solving for agentic AI systems, and how to solve them.
>
> I'm happy to discuss these findings at an upcoming meeting. Till then,
> let's collaborate on the mailing list and in the doc
> <https://docs.google.com/document/d/1PhWC4KRO00kOPUW113ldG06Vii5dZjW3ljiV1tA0GCc/edit?tab=t.0#heading=h.secnaj745bir>
> itself.
>
> Cheers!
>
> - Ayesha
>
>
>
> --
> Openid-aiim mailing list
> Openid-aiim at lists.openid.net
> https://lists.openid.net/mailman/listinfo/openid-aiim
>
>
>
>
> --
>
>
> * Alex Babeanu*
> Lead Product Manager, AI Control Suite
>
> t. +1 604 728 8130
> e. alex.babeanu at indykite.com
> w. www.indykite.com
>
>
>
>
>
>
>
>
> --
>
>
>
> *Sachin Mamoru *
> *Senior Software Engineer, WSO2*
>
> +94771292681 | sachinmamoru.me <https://sachinmamoru.me> |
> sachinmamoru at gmail.com
>
> LinkedIn <https://www.linkedin.com/in/sachin-mamoru/> | Twitter
> <https://twitter.com/MamoruSachin>
>
>
>
>
>
>
>
>
> --
>
>
> Alex Babeanu
> Lead Product Manager, AI Control Suite
>
> t. +1 604 728 8130
> e. alex.babeanu at indykite.com
> w. www.indykite.com
>