The OpenID Foundation (OIDF), a pioneer in open identity standards, has released a comprehensive whitepaper addressing the mounting authentication, authorisation, and identity management challenges posed by the rapid rise of AI agents. The whitepaper, “Identity Management for Agentic AI: The new frontier of authorisation, authentication, and security for an AI agent world”, released on Tuesday 7 October, was researched and compiled by the OpenID Foundation’s Artificial Intelligence Identity Management Community Group (AIIM CG), a team of global AI and identity experts collaborating to address rising identity management challenges in AI systems.

Impending future identity challenges

The whitepaper provides solid guidance for those working at the intersection of AI agents and access management: developers, architects, standards bodies, and enterprises. It also gives those stakeholders strategic direction for addressing impending identity challenges.

AI agents, as discussed in the paper, are AI systems that can autonomously take actions and make decisions to achieve goals, adapting to new situations through reasoning rather than following fixed rules. The whitepaper reveals that while current security frameworks can handle simple AI agent scenarios, such as company agents accessing internal tools, they break down when AI agents need to work across different companies, act independently, or handle complex permission sharing between multiple users. This has created major security gaps.

Several critical future challenges

The research also uncovered several critical challenges that require immediate attention from developers, standards bodies, and enterprises:

- Agent identity fragmentation: Companies are creating separate identity systems instead of adopting common standards, making development harder and less secure.
- User impersonation vs delegated authority: AI agents look like regular users, making it impossible to tell who actually did what. Clear “acting on behalf of” mechanisms are needed.
- Scalability problems in human oversight: Users will face thousands of permission requests and are likely to approve everything, creating security risks.
- Recursive delegation risks: When agents create other agents or delegate tasks, complex permission chains form without clear limits.
- Multi-user agent limitations: Current systems work for individuals, not for agents serving multiple users with different permissions in shared spaces.
- Automated verification gaps: Computer systems are needed to verify agent actions automatically, without constant human supervision.
- Browser and computer-use agent challenges: Agents controlling screens and browsers bypass normal security checks, potentially forcing internet lockdowns.
- Multi-faceted agent identity: Agents can switch between acting independently and acting for users, but current systems cannot handle this dual nature or track which mode an agent is operating in.

Constant human supervision

Tobin South, Head of AI Agents at WorkOS, Research Fellow with Stanford’s Loyal Agents Initiative, and Co-Chair of the OpenID Foundation’s AIIM CG, said: “AI agents are outpacing our security systems.
Without industry collaboration on common standards, we risk a fragmented future where agents can't work securely across different platforms and companies.”

Atul Tulshibagwale, CTO of SGNL and Co-Chair of the OpenID Foundation’s AIIM CG, said: “This whitepaper is an important industry milestone, which captures all aspects of the intersection of AI and identity and access management.”

Triage specification requirements

Gail Hodges, Executive Director of the OpenID Foundation, said: “We know AI and Identity experts alike are trying to unlock Agentic AI use cases while security and identity experts are trying to ensure safeguards for security, privacy, and interoperability are incorporated.”

“This whitepaper offers a primer for how we can approach this daunting challenge. Beyond the paper, the AI and Identity Management CG will continue to triage specification requirements, assess priorities, and collaborate with standards-body peers to accelerate work on the most pressing requirements.”

A call for industry-wide collaboration

The OpenID Foundation's whitepaper issues a clear call for industry-wide collaboration to securely advance the future of AI. For today's AI agents, particularly those in simpler, single-company scenarios, the paper recommends immediate action using proven security frameworks. Organisations should implement robust standards, such as OAuth 2.0, and adopt standard interfaces, like the Model Context Protocol (MCP), for connecting AI to external tools using recommended security measures. Instead of building custom solutions, companies are urged to use dedicated authorisation servers and to integrate agents into existing enterprise login and governance systems, ensuring every agent has a clear owner and is subject to rigorous security policies.

Rigorous security policies

However, these immediate technical steps are only the beginning.
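As a concrete illustration of those immediate steps, the sketch below builds the token request an agent with its own client identity would send to a dedicated authorisation server under the OAuth 2.0 client-credentials grant (RFC 6749, section 4.4). The endpoint, client ID, and scope are hypothetical placeholders, not values from the whitepaper.

```python
from urllib.parse import urlencode

# Hypothetical authorisation server endpoint, for illustration only.
TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"


def client_credentials_body(client_id: str, client_secret: str, scope: str) -> str:
    """Form-encoded body for an OAuth 2.0 client-credentials grant
    (RFC 6749 §4.4): the agent authenticates as itself, with its own
    credentials, rather than borrowing a user's session."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })


# The agent would POST this body to TOKEN_ENDPOINT with
# Content-Type: application/x-www-form-urlencoded.
body = client_credentials_body("agent-42", "s3cret", "tools:read")
```

Giving each agent a distinct client registration, rather than a shared or user-borrowed credential, is what makes per-agent auditing and revocation possible later.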
The report stresses that the larger, more complex challenges of securing a future with highly autonomous, interconnected agents cannot be solved in isolation. Moving from basic security to a world of trustworthy, verifiable agent identities requires a fundamental evolution in how organisations manage delegation, authority, and accountability online, as well as the provisioning and de-provisioning of agents in the enterprise.

Providing specific recommendations

The whitepaper concludes with an urgent appeal for the entire industry to work together on open, interoperable standards. It provides specific recommendations for each key stakeholder:

- Developers and architects: Build on existing secure standards, while designing for flexibility in delegated authority and agent-native identity. Align with enterprise profiles like IPSIE to ensure security, interoperability, and enterprise readiness.
- Standards bodies: Accelerate protocol development that formalises these concepts, creating an interoperable foundation rather than fragmented proprietary systems.
- Enterprises: Treat agents as first-class citizens in IAM infrastructure. Establish robust lifecycle management, from provisioning to de-provisioning, with clear governance policies and accountability.

Without this unified effort, the ecosystem risks fracturing into a collection of proprietary, incompatible identity silos, hindering innovation and creating significant security gaps.
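The enterprise recommendation, treating agents as first-class IAM citizens with a clear owner and a provision/de-provision lifecycle, can be pictured with a minimal registry sketch. The API below is illustrative only, not a proposed standard or an interface from the whitepaper.

```python
class AgentRegistry:
    """Minimal sketch of enterprise agent lifecycle management: every
    agent is provisioned with a named human owner and can be
    de-provisioned, leaving an auditable record. Illustrative only."""

    def __init__(self) -> None:
        self._agents: dict[str, dict] = {}

    def provision(self, agent_id: str, owner: str) -> None:
        """Register an agent; refusing duplicates keeps identity unambiguous."""
        if agent_id in self._agents:
            raise ValueError(f"agent {agent_id!r} already provisioned")
        self._agents[agent_id] = {"owner": owner, "active": True}

    def deprovision(self, agent_id: str) -> None:
        """Deactivate rather than delete, so the audit trail survives."""
        self._agents[agent_id]["active"] = False

    def is_active(self, agent_id: str) -> bool:
        entry = self._agents.get(agent_id)
        return bool(entry and entry["active"])

    def owner_of(self, agent_id: str) -> str:
        return self._agents[agent_id]["owner"]


registry = AgentRegistry()
registry.provision("report-bot", owner="alice@example.com")
```

In a real deployment this role would likely be filled by existing provisioning standards such as SCIM rather than a bespoke registry; the point is that agents get the same lifecycle rigour as human accounts.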
The OpenID Foundation (OIDF), a pioneer in open identity standards and the body behind identity protocols used by billions of people worldwide, announced the formation of the Artificial Intelligence Identity Management (AIIM) Community Group. This new initiative addresses a critical gap in the advancing artificial intelligence (AI) landscape. Bringing together recognised pioneers and security experts from across the AI and identity ecosystems, its goal is to bridge the growing disconnect between advancing AI systems and established identity management practices.

Specific needs of AI agents

As AI systems expand across social interactions, digital commerce, financial services, and the broader digital ecosystem, the separation between AI and identity development presents significant challenges around privacy, security, and interoperability. Current standards only partially address the specific requirements of AI agents, particularly around delegated authority, agent authentication, authorisation propagation between agents, and agent discovery and governance.

Why this matters now

The AIIM Community Group has arrived at a critical stage, as AI integration spans multiple dimensions of digital infrastructure. Without appropriate identity management frameworks, AI deployments face substantial security vulnerabilities and interoperability challenges that could undermine confidence in AI systems.

"The pace of AI development is accelerating rapidly, but we must ensure that security and privacy remain foundational," said Atul Tulshibagwale, co-chair of the AIIM Community Group and CTO at SGNL. "This Community Group provides the essential forum where AI innovators and identity experts can collaborate to establish secure, trusted frameworks for AI deployment. We're here to enable responsible innovation at scale."
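The delegated-authority and authorisation-propagation gaps described above can be pictured with a minimal sketch: each grant records who an agent acts for and which grant it derives from, so a policy can bound recursive delegation. The data model and the depth limit are illustrative assumptions, not taken from any OpenID specification.

```python
from dataclasses import dataclass
from typing import Optional

MAX_DELEGATION_DEPTH = 3  # hypothetical policy limit, for illustration


@dataclass(frozen=True)
class DelegationGrant:
    """One link in a delegation chain: `actor` acts on behalf of `subject`."""
    subject: str                                # who the action is attributed to
    actor: str                                  # the agent (or sub-agent) acting
    parent: Optional["DelegationGrant"] = None  # grant this one derives from


def chain_depth(grant: DelegationGrant) -> int:
    """Count the delegation hops back to the original grant."""
    depth = 1
    while grant.parent is not None:
        grant = grant.parent
        depth += 1
    return depth


def authorize(grant: DelegationGrant) -> bool:
    """Reject chains deeper than the policy limit, so recursive
    sub-agent delegation cannot grow without bound."""
    return chain_depth(grant) <= MAX_DELEGATION_DEPTH


# A user delegates to an agent, which spawns a sub-agent:
root = DelegationGrant(subject="alice", actor="travel-agent")
sub = DelegationGrant(subject="alice", actor="booking-sub-agent", parent=root)
```

Because every grant keeps its `subject`, the ultimate user stays attributable however many agents sit in between, which is the "acting on behalf of" property the whitepaper argues current systems lack.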
Identity management frameworks

Co-chair Tobin South, AI Agents Lead at WorkOS and Research Fellow at Stanford University, emphasised the significance: "We're seeing AI agents making decisions and taking actions on behalf of users at unprecedented scale. Without proper identity management frameworks, we risk building systems without solid foundations. The AIIM Community Group will help establish the robust infrastructure that AI systems require to operate securely and interoperably."

AI and identity communities

Co-Chair Jeff Lombardo, Principal Identity Specialist at AWS, highlighted the collaborative approach: "What distinguishes this initiative is our commitment to creating an open forum where conversations currently happening in isolation can converge. Our goal is to develop a comprehensive taxonomy, mental model, and roadmap that advance through multiple standards bodies and forums, leveraging the collective expertise of both AI and identity communities."

Proven leadership in identity standards

The OpenID Foundation's leadership in this space builds upon its established track record of creating interoperable, secure identity standards that scale globally. With OpenID Connect now used by billions of people across millions of applications, and FAPI securing open banking and open data in hundreds of implementations worldwide, the foundation brings proven expertise in bridging complex technical communities. The OpenID Foundation is also at the forefront of standards work in verifiable credentials (OpenID for Verifiable Presentations and OpenID for Verifiable Credential Issuance), adopted by the European Commission, the UK government, the Swiss government, the State of California, and many open-source projects.
OpenID Foundation’s existing and emerging standards

The OpenID Foundation’s existing and emerging standards could help underpin emerging AI deployments, supporting and accelerating their ability to deliver high-quality, robust use cases.

"The OpenID Foundation has spent nearly two decades developing the standards that enable secure, interoperable identity at internet scale through millions of public and private sector implementations," said Gail Hodges, Executive Director of the OpenID Foundation.

Partner with the AI community

Hodges added: "Our community can partner with the AI community to identify where existing specifications can help, where gaps need to be addressed, and the most expeditious way to fill those gaps without sacrificing privacy and security. Regardless of whether the gaps are closed within the OpenID Foundation or in other expert groups, we hope to establish a shared roadmap so that AI systems and use cases can leverage and enhance identity infrastructure for years to come."

Key objectives and deliverables

The AIIM Community Group will focus on five strategic areas:

- Gap identification: mapping areas not currently addressed by existing standards that require focused attention from the identity community.
- Terminology consensus: establishing a shared vocabulary to enable clear communication across the AI and identity domains.
- Industry engagement: facilitating dialogue with major platform vendors and stakeholders.
- Use case definition: developing agentic AI champion use cases that organisations can reference and implement.
- Regulatory monitoring: tracking government AI regulations that impact identity management.
Open participation and community building

The AIIM Community Group operates as a forum for open discussion protected by intellectual property agreements, removing barriers to participation while ensuring that ideas and work products remain freely available to the global community. The group operates under core principles of respect, privacy through consent, and interoperability, with no participation fees, to encourage broad engagement from AI and identity experts worldwide.

Organisations and individuals interested in participating can visit the Artificial Intelligence Identity Management Community Group page on the OpenID Foundation website, where they can sign the Participation Agreement to take part.