3 Oct 2025

The OpenID Foundation (OIDF), a pioneer in open identity standards, has released a comprehensive whitepaper addressing the mounting authentication, authorisation, and identity management challenges posed by the rapid rise of AI agents.

This critical whitepaper, “Identity Management for Agentic AI: The new frontier of authorisation, authentication, and security for an AI agent world,” which is STRICTLY EMBARGOED until 9 am ET on Tuesday 7 October, was researched and compiled by the OpenID Foundation’s Artificial Intelligence Identity Management Community Group (AIIM CG), a team of global AI and identity experts collaborating to address rising identity management challenges in AI systems.

Impending identity challenges

The whitepaper provides solid guidance for those working at the intersection of AI agents and access management: developers, architects, standards bodies and enterprises. It also gives those stakeholders strategic direction for addressing impending identity challenges.

AI agents, as discussed in the paper, are AI systems that can autonomously take actions and make decisions to achieve goals, adapting to new situations through reasoning rather than following fixed rules.

The whitepaper reveals that while current security frameworks can handle simple AI agent scenarios, such as company agents accessing internal tools, they break down when AI agents need to work across different companies, act independently, or handle complex permission sharing between multiple users. This has created major security gaps.

Several critical future challenges

The research has also uncovered several critical future challenges that require immediate attention from developers, standards bodies, and enterprises:

  • Agent identity fragmentation. Companies are creating separate identity systems instead of common standards, making development harder and less secure.
  • User impersonation vs delegated authority. AI agents look like regular users, making it impossible to tell who actually did what. Clear "acting on behalf of" systems are needed (see the sketch after this list).
  • Scalability problems in human oversight. Users will face thousands of permission requests and likely approve everything, creating security risks.
  • Recursive delegation risks. When agents create other agents or delegate tasks, it creates complex permission chains without clear limits.
  • Multi-user agent limitations. Current systems work for individuals, not agents serving multiple users with different permissions in shared spaces.
  • Automated verification gaps. Computer systems are needed to automatically verify agent actions without constant human supervision.
  • Browser and computer use agent challenges. Agents controlling screens and browsers bypass normal security checks, potentially forcing internet lockdowns.
  • Multi-faceted agent identity. Agents can switch between acting independently and acting for users, but current systems can't handle this dual nature or track which mode the agent is operating in.
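
To make the delegated-authority gap above concrete, here is a minimal sketch assuming an OAuth 2.0 Token Exchange (RFC 8693) deployment; the endpoint URL, client credentials, scope, and claim values shown are hypothetical, and the whitepaper does not prescribe this exact flow. The point is that the issued token names the user as the subject and the agent as the actor, so downstream systems can tell who actually did what.

```python
# Minimal sketch: an agent exchanges a user's token for a delegated token via
# OAuth 2.0 Token Exchange (RFC 8693). All URLs, IDs, and scopes are illustrative.
import requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # hypothetical authorisation server


def exchange_for_delegated_token(user_token: str, agent_client_id: str,
                                 agent_client_secret: str) -> dict:
    """Swap the user's token for one that records the agent as the actor."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_token,  # the delegating user's token
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "scope": "calendar.read",     # narrow, task-specific permission
        },
        auth=(agent_client_id, agent_client_secret),  # the agent authenticates as itself
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


# A resource server decoding the resulting JWT would then see, for example:
#   "sub": "user-123"                        -> whose authority is being exercised
#   "act": {"sub": "agent-travel-assistant"} -> which agent actually made the call
```

Because the actor is carried in the token itself, audit logs and access decisions can distinguish the agent's actions from the user's own, rather than treating the agent as an impersonated user.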

Constant human supervision

Tobin South, Head of AI Agents at WorkOS, Research Fellow with Stanford’s Loyal Agents Initiative, and Co-Chair of the OpenID Foundation’s AIIM CG, said: “AI agents are outpacing our security systems. Without industry collaboration on common standards, we risk a fragmented future where agents can't work securely across different platforms and companies.”

Atul Tulshibagwale, CTO of SGNL and Co-Chair of the OpenID Foundation’s AIIM CG, said: “This whitepaper is an important industry milestone, which captures all aspects of the intersection of AI and identity and access management.”

Triage specification requirements

Gail Hodges, Executive Director of the OpenID Foundation, said: “We know AI and Identity experts alike are trying to unlock Agentic AI use cases while security and identity experts are trying to ensure safeguards for security, privacy, and interoperability are incorporated.”

“This whitepaper offers a primer for how we can approach this daunting challenge. Beyond the paper, the AI and Identity Management CG will continue to triage specification requirements, assess priorities, and collaborate with standards body peers to accelerate work on the most pressing requirements.”

A call for industry-wide collaboration

The OpenID Foundation's whitepaper issues a clear call for industry-wide collaboration to securely advance the future of AI. For today's AI agents, particularly those in simpler, single-company scenarios, the paper recommends immediate action using proven security frameworks. 

Organisations should implement robust standards, such as OAuth 2.0, and adopt standard interfaces, like the Model Context Protocol (MCP), for connecting AI to external tools using recommended security measures.

Instead of building custom solutions, companies are urged to use dedicated authorisation servers and integrate agents into existing enterprise login and governance systems, ensuring every agent has a clear owner and is subject to rigorous security policies.
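
As a concrete illustration of that baseline, the sketch below assumes a dedicated authorisation server issuing short-lived tokens via the OAuth 2.0 client credentials grant, and an MCP server invoked through its JSON-RPC "tools/call" method; the URLs, client registration, scope names, and tool names are all hypothetical, and real MCP deployments may differ in transport details.

```python
# Minimal sketch: an agent obtains a short-lived token from a dedicated
# authorisation server (OAuth 2.0 client credentials), then presents it when
# calling an MCP tool. All URLs, credentials, scopes, and tool names are illustrative.
import requests

AUTH_SERVER = "https://auth.example.com/oauth2/token"  # dedicated authorisation server (hypothetical)
MCP_SERVER = "https://tools.example.com/mcp"           # MCP server exposing approved tools (hypothetical)


def get_agent_token(client_id: str, client_secret: str) -> str:
    """Fetch a short-lived access token scoped to the tools this agent may use."""
    resp = requests.post(
        AUTH_SERVER,
        data={"grant_type": "client_credentials", "scope": "tools.search"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def call_mcp_tool(token: str, tool_name: str, arguments: dict) -> dict:
    """Invoke a tool through MCP's JSON-RPC interface, authorising with the bearer token."""
    resp = requests.post(
        MCP_SERVER,
        headers={"Authorization": f"Bearer {token}"},
        json={
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/call",
            "params": {"name": tool_name, "arguments": arguments},
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Keeping token issuance in a dedicated authorisation server, rather than baking credentials into each agent, means scopes can be narrowed, rotated, and revoked centrally.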

Rigorous security policies

However, these immediate technical steps are only the beginning. The report stresses that the larger, more complex challenges of securing a future with highly autonomous, interconnected agents cannot be solved in isolation.

Moving from basic security to a world of trustworthy, verifiable agent identities requires a fundamental evolution in how organisations manage delegation, authority, and accountability online, as well as how they provision and de-provision agents in the enterprise.

Providing specific recommendations

The whitepaper concludes with an urgent appeal for the entire industry to work together on open, interoperable standards. It provides specific recommendations for each key stakeholder:

  • Developers and architects: Build on existing secure standards, while designing for flexibility in delegated authority and agent-native identity. Align with enterprise profiles like IPSIE to ensure security, interoperability, and enterprise readiness.
  • Standards bodies: Accelerate protocol development that formalises these concepts, creating an interoperable foundation rather than fragmented proprietary systems.
  • Enterprises: Treat agents as first-class citizens in IAM infrastructure. Establish robust lifecycle management, from provisioning to de-provisioning, with clear governance policies and accountability (a minimal sketch follows this list).
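
As a purely illustrative sketch of that last recommendation, the snippet below models an agent as a first-class identity record with a named owner, explicit entitlements, and a lifecycle from provisioning to de-provisioning; the data model and field names are assumptions for illustration, not something the whitepaper specifies.

```python
# Illustrative only: an agent treated as a first-class identity in the IAM
# directory, with an accountable owner, explicit entitlements, and a lifecycle
# that ends in de-provisioning so no permissions linger after retirement.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class LifecycleState(Enum):
    PROVISIONED = "provisioned"
    ACTIVE = "active"
    SUSPENDED = "suspended"
    DEPROVISIONED = "deprovisioned"


@dataclass
class AgentIdentity:
    agent_id: str                  # stable identifier in the IAM directory
    owner: str                     # the accountable human or team
    entitlements: list[str] = field(default_factory=list)  # explicitly granted scopes
    state: LifecycleState = LifecycleState.PROVISIONED
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def deprovision(self) -> None:
        """Revoke every entitlement when the agent is retired."""
        self.entitlements.clear()
        self.state = LifecycleState.DEPROVISIONED


# Example: register an agent with a clear owner, then retire it cleanly.
agent = AgentIdentity(
    agent_id="agent-expenses-bot",
    owner="finance-platform-team",
    entitlements=["expenses.read", "expenses.submit"],
)
agent.deprovision()
```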

Without this unified effort, the ecosystem risks fracturing into a collection of proprietary, incompatible identity silos, hindering innovation and creating significant security gaps.