The Center for Internet Security, Inc. (CIS®), Astrix Security, and Cequence Security have announced a collaborative partnership aimed at crafting new cybersecurity guidelines specifically tailored to the challenges presented by artificial intelligence (AI) and agentic systems.
This initiative expands upon the well-established CIS Critical Security Controls® by adapting its principles for AI environments where autonomous decision-making, tool and API access, and automated threats pose fresh challenges. The partnership's initial focus is on creating two CIS Controls companion guides: one for AI Agent Environments and another for Model Context Protocol (MCP) environments.
Addressing unique risks in MCP environments
MCP environments present distinct risks such as credential exposure, ungoverned local execution, unauthorised third-party connections, and unregulated data flows between models and tools.
The new guides will offer specific safeguards for organisations operating in dynamic settings where MCP agents, tools, and registries frequently interact with enterprise systems.
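For illustration only, the following is a minimal sketch of the kind of pre-flight check such safeguards might formalise before an MCP tool server is enabled; the server allowlist, configuration shape, and flagged patterns are assumptions made for this example, not content from the forthcoming guides.

```python
# Hypothetical sketch: a pre-flight check an organisation might run before
# allowing an MCP client to register a third-party tool server. The allowlist,
# config structure, and findings below are illustrative assumptions.

import re

APPROVED_SERVERS = {"internal-docs", "ticketing"}          # assumed allowlist
CREDENTIAL_PATTERN = re.compile(r"(api[_-]?key|token|secret)", re.IGNORECASE)

def vet_mcp_server(name: str, config: dict) -> list[str]:
    """Return findings for an MCP server entry before it is enabled."""
    findings = []
    # Unauthorised third-party connections: server must be on the approved list.
    if name not in APPROVED_SERVERS:
        findings.append(f"'{name}' is not on the approved third-party server list")
    # Credential exposure: flag literal secrets embedded in the tool configuration.
    for key, value in config.get("env", {}).items():
        if CREDENTIAL_PATTERN.search(key) and value:
            findings.append(f"'{name}' embeds a literal credential in env var '{key}'")
    # Ungoverned local execution: flag local commands run by unapproved servers.
    if config.get("command") and name not in APPROVED_SERVERS:
        findings.append(f"'{name}' would run a local command without approval")
    return findings

if __name__ == "__main__":
    report = vet_mcp_server(
        "weather-plugin",
        {"command": "npx weather-mcp", "env": {"WEATHER_API_KEY": "sk-123"}},
    )
    for finding in report:
        print("FINDING:", finding)
```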
"AI presents both tremendous opportunities and significant risks," remarked Curtis Dukes, Executive Vice President and General Manager of Security Best Practices at CIS. "By partnering with Astrix and Cequence, we are ensuring that organisations have the tools they need to adopt AI responsibly and securely."
Securing AI agents and non-human identities
Astrix's role in the partnership focuses on enhancing the security of AI agents, MCP servers, and the Non-Human Identities (NHIs), such as API keys, service accounts, and OAuth tokens, that connect them to crucial systems.
"AI agents and the non-human identities that power them bring great potential but also new risks," said Jonathan Sander, Field CTO of Astrix Security. "Our focus is helping enterprises discover, secure, and deploy AI agents responsibly, with the confidence to scale. Through this partnership, we're providing clear, practical guidance to keep AI ecosystems safe so organisations can innovate with confidence."
Valuable API security experience
Cequence contributes its extensive experience in enterprise application and API security to support the secure enablement of agentic AI.
"As organisations embrace agentic AI, trust hinges on visibility, governance, and control over what those agents can see and do to your applications and data," explained Ameya Talwalkar, CEO of Cequence Security. "Security is strongest through collaboration, and this partnership gives organisations clear guidance to adopt AI safely and securely."
Supporting AI ecosystem resilience
This partnership is poised to extend trusted cybersecurity frameworks into AI environments, addressing potential risks from autonomous systems and integrations. It aims to provide clear, prioritised safeguards that guide enterprises towards secure and responsible AI adoption.
By combining expertise in standards, API security, and application defence, the partnership seeks to deliver comprehensive protection. The new guidelines are expected to be released in early 2026, with CIS, Astrix, and Cequence offering workshops, webinars, and supporting resources to help organisations implement these recommendations.
Through this collaboration, enterprises will be better equipped to build a robust foundation of trust, transparency, and resilience across the AI ecosystem. By operating from a shared framework, enterprises, vendors, and security leaders can align on a unified approach to securing AI environments.
