Two years ago, who would have thought that Artificial Intelligence (AI) would transform how we interact with information? With the latest OpenAI and Anthropic models breaking limits with every new release, agents are inevitable. Last month, Anthropic released "computer use," a capability that lets its model control a computer to perform actions based on user prompts. This raises a serious question: How does access control work?

Imagine you have an army of agents, each specializing in a particular area. Would you be comfortable delegating all your access to them? Before going further, let's define what an AI agent is. AWS defines an agent as "a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals."

A Few Facts About Agents

Agents are “smart” computing units capable of performing tasks, but here are a few things to note:

  • Humans are responsible for setting goals, and an agent determines the tasks needed to accomplish those goals.
  • Agents are connected to tools, and this gives them the ability to perform useful tasks.
  • Agents use LLMs as their brains to convert a goal into a set of tasks.

The ubiquitous chatbots around us today are nothing more than agents. A Retrieval-Augmented Generation (RAG) chatbot, for instance, is an LLM agent connected to data sources, and in some cases the Internet.
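
To make that concrete, here is a minimal RAG loop as a sketch. The `retrieve` and `llm` callables are hypothetical placeholders standing in for a real vector index and model API, not any specific vendor's SDK:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Doc:
    text: str

def rag_answer(
    question: str,
    retrieve: Callable[[str], List[Doc]],  # data-source connector (placeholder)
    llm: Callable[[str], str],             # the agent's "brain" (placeholder)
) -> str:
    """Minimal RAG loop: retrieve context, augment the prompt, generate."""
    context = "\n\n".join(d.text for d in retrieve(question))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)
```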

The access control problem is non-existent when data sources are public: if your agent only reads the open Internet, there is little to protect. However, as AI adoption rises in enterprises, agents' usefulness will come from connecting to internal tools and data sources. Multiple agentic companies are building solutions for enterprises and consumers. Those internal resources are access-restricted, and more often than not, agents are connected to them using "administrative" credentials. If you delegate access this way, how do you verify and monitor that agents are doing only what they are supposed to do? Enterprises use single sign-on tools like Okta to manage access control for human users; with agents, traditional authentication doesn't work.

Limitations of Current Authentication Mechanisms

The AI industry is focused on developing more capabilities for these agents, but little attention is paid to access control. Existing authentication frameworks fall short because:

  • Static and Persistent: Today's credentials are static. Human users have multi-factor authentication, but it isn't suitable for agents; non-human identities rely on service accounts or API keys instead. That works today primarily because current (non-AI) services are deterministic, so persistent access can be managed through manual reviews. AI agents are stochastic by design; granting them persistent access invites unauthorized access.
  • Fine-grained Access Control: Previously, an identity mapped cleanly to a department or role, which allowed simplification through Role-Based Access Control (RBAC). AI agents routinely cross those boundaries, creating a many-to-many mapping between users and agents that makes existing RBAC unusable.
  • Credentials Management: Agents are given delegated access via hardcoded credentials. Even with OAuth 2.0 or OpenID Connect (OIDC), agents hold access to the underlying resources at all times, and the access scope is only as granular as the underlying service supports. If the authorization server offers only coarse-grained scopes, agents are forced to be overprovisioned (see the sketch after this list).
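
As a sketch of that last point: suppose an administrator wants to let an email agent read mail from one sender only. The Gmail read-only scope below is a real OAuth scope; the desired policy object is hypothetical, illustrating what the authorization server cannot express:

```python
# What the administrator actually wants to delegate (hypothetical policy):
desired_policy = {
    "resource": "gmail",
    "action": "read",
    "constraint": {"from": "billing@example.com"},  # sender-level restriction
}

# The finest-grained read scope the authorization server actually offers:
granted_scope = "https://www.googleapis.com/auth/gmail.readonly"

# The agent ends up holding a token that can read *every* message in the
# mailbox: the sender-level constraint is inexpressible, so the agent is
# overprovisioned by design.
```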

If you want your email agent to read only the emails from a particular sender, there is no way to enforce that with existing mechanisms. On top of these access control challenges, shared credentials between agents, i.e., one set of service credentials delegated to multiple agents, break audit trails: there is no way to tell which agent did what. The sketch below illustrates the attribution gap.
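
This is a minimal sketch of that gap, assuming a hypothetical service whose audit log records only the credential used:

```python
SHARED_API_KEY = "svc-mail-key-01"  # one credential delegated to many agents

def send_email(to: str) -> dict:
    # The service logs the credential, not the agent behind it.
    return {"principal": SHARED_API_KEY, "action": "send_email", "to": to}

# Two different agents act, but the log entries are indistinguishable:
audit_log = [
    send_email("boss@example.com"),       # summarizer agent
    send_email("stranger@evil.example"),  # compromised scheduler agent
]
for entry in audit_log:
    print(entry)  # same principal on both lines; attribution is impossible
```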

Requirements for the Agentic Future

Agents become useful when they can perform tasks without human intervention. But in the 1% of cases where an agent takes a sensitive action, we want explicit approval. For example, when a human shares a document outside their organization, the service typically warns them of that action; agents operating at the API level either never see such warnings or ignore them. A minimal approval gate might look like the sketch below.
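
In this sketch, the `is_sensitive`, `ask_human`, and `perform` callables are hypothetical stand-ins for a sensitivity policy, an approval channel, and the underlying tool call; they are not any product's API:

```python
from typing import Callable

def gated_execute(
    action: dict,
    is_sensitive: Callable[[dict], bool],  # sensitivity policy (placeholder)
    ask_human: Callable[[str], bool],      # approval channel (placeholder)
    perform: Callable[[dict], str],        # underlying tool call (placeholder)
) -> str:
    """Run an agent action, pausing for explicit approval when sensitive."""
    if is_sensitive(action) and not ask_human(f"Approve sensitive action {action}?"):
        return "denied"
    return perform(action)

# Example: sharing a document outside the organization trips the gate.
result = gated_execute(
    {"action": "share_document", "audience": "external"},
    is_sensitive=lambda a: a.get("audience") == "external",
    ask_human=lambda msg: input(msg + " [y/N] ").strip().lower() == "y",
    perform=lambda a: "shared",
)
```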

The complexity starts with defining sensitivity, and why that is hard: human employees go through security training that teaches them what a sensitive action is, while agents have no built-in understanding of these policies. As access control becomes more granular, customizability becomes essential. Here are the top requirements for an agentic AAA (authentication, authorization, and accounting) framework:

  • Enforcement needs to be dynamic rather than static. Agents operate on unstructured data, so understanding information flow on the fly and restricting an agent's access accordingly is central to agentic access control.
  • Five to ten years ago, User and Entity Behavior Analytics (UEBA) showed how corporate entities can be monitored across their chain of actions. An agentic AAA framework must likewise chain an agent's actions to provide proper detection and response. A single action of sending an email might look innocuous, but combined with access to a classified file it might violate policy.
  • Most importantly, agents can't have persistent access to underlying resources. Each access request must be evaluated against a policy, with an ephemeral token issued to perform that one action. Unrestricted agent access to underlying resources is not compatible with fine-grained access control (see the sketch after this list).
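
The sketch below combines the last two requirements: each request is checked against a per-agent policy and, if allowed, answered with a short-lived token, while a toy chain rule flags a sequence of individually innocuous actions. The policy shape, token format, and chain rule are all assumptions for illustration:

```python
import secrets
import time

# Per-agent maximum permissions (hypothetical policy shape).
POLICY = {"email-summarizer": {("gmail", "read"), ("gmail", "send")}}

def issue_token(agent: str, resource: str, action: str, ttl_s: int = 60) -> dict:
    """Evaluate one request against policy and mint a short-lived token."""
    if (resource, action) not in POLICY.get(agent, set()):
        raise PermissionError(f"{agent} may not {action} {resource}")
    return {
        "token": secrets.token_urlsafe(16),
        "agent": agent,                     # agent-scoped: audit trail stays intact
        "scope": (resource, action),
        "expires_at": time.time() + ttl_s,  # ephemeral: no persistent access
    }

def violates_chain(history: list[tuple[str, str]]) -> bool:
    """Toy chain rule: a classified-file read followed by an email send."""
    return ("files", "read_classified") in history and ("gmail", "send") in history
```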

Beyond OAuth and OIDC

It’s clear that we need new authentication and authorization frameworks for agents. Non-human identities (NHIs) are an emerging focus in cybersecurity, but they cover service accounts and other system credentials; merging agent authentication into that category would not be prudent. Agents are more "sentient" than our existing automation, so permissions become dynamic and contextual. An agent that has just summarized a sensitive email should not be able to write to a public document in the same session.

At a high level, the authentication and authorization flow in the agentic world works as follows. Agents have no access by default. A separate auth server verifies each agent action before granting an access token. An agent's access control is defined by the maximum access it can ever hold, and each request within that boundary is granted on a need-to-know basis. The sketch below walks through this default-deny flow.
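
In this sketch, all names are illustrative: the agent holds no credentials of its own, and a separate auth server checks each proposed action against the agent's maximum access and the session context before issuing a one-shot token:

```python
# Upper bound of what each agent may ever be granted (illustrative).
MAX_ACCESS = {"email-summarizer": {("gmail", "read"), ("docs", "write")}}

class AuthServer:
    def authorize(self, agent: str, resource: str, action: str, ctx: dict) -> dict:
        """Default-deny: grant a one-shot token only inside the agent's
        maximum access, and only when the session context allows it."""
        if (resource, action) not in MAX_ACCESS.get(agent, set()):
            raise PermissionError("outside the agent's maximum access")
        # Contextual rule from above: after touching sensitive data in this
        # session, the agent may not write to shared or public resources.
        if ctx.get("touched_sensitive_data") and action == "write":
            raise PermissionError("write blocked after a sensitive read")
        return {"agent": agent, "scope": (resource, action), "one_shot": True}

# Usage: the first request succeeds; the second is denied by session context.
auth = AuthServer()
token = auth.authorize("email-summarizer", "gmail", "read", ctx={})
try:
    auth.authorize("email-summarizer", "docs", "write",
                   ctx={"touched_sensitive_data": True})
except PermissionError as exc:
    print(exc)
```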

This is only possible by extending existing frameworks and NHIs into customizable domains. Administrators need a way to easily define these policies based on an agent's proposed plan of action, enabling agent-specific policy enforcement. Methodologies like this will emerge as the platform of tomorrow, on top of which agents will be built.

This platformized approach to securing agentic interaction with tools and services will make development faster for agent builders and make security easier for administrators.

About the Author: Akash Mukherjee is an experienced security leader with a decade of experience defending large organizations from Advanced Persistent Threats (APTs) and nation-state actors. He is the Head of Engineering at Realm Labs, an AI security startup building the AAA framework for AI agents and chatbots. Previously, Akash led security at Apple AIML and Google Chrome. He was one of the builders of the supply chain security framework SLSA, led the development of the Google Cybersecurity Certificate, and is the author of the book Defense in Depth.

Akash Mukherjee — Head of Engineering at Realm Labs