Lately, some of the most interesting coffee chats have been leading to the same question: when software starts acting on our behalf, what does it mean to trust it?
This question keeps coming up because AI agents are no longer just responding to prompts. They are beginning to take actions. They book things. They make purchases. They move data. They trigger workflows.
Over the last decade, I've built and operated identity and onboarding systems used by millions of people across highly regulated markets, so naturally, when I hear about the rise of agents, my mind goes to scale, safety, and regulation. But before we get into that, let's start with the basics.
Agents and Identity
What is an agent? One definition I find useful comes from Andrew Ng, who describes an agent as "software that can observe, reason, and act toward a goal with a degree of autonomy." What matters to me most in that definition is not intelligence but the ability to act.
The moment software can act, it needs a relationship to someone or something in the real world.
In my view, an agent should ALWAYS represent a human or a business. If it does not, I believe we need to pause and ask why.
How Does Identity Impact Agents?
In our everyday lives, delegation is normal. We authorize people, systems, and services to act on our behalf all the time. Employees sign contracts. Accountants file taxes. Applications access our data. All of this works because identity is clear, and authorization is clear.
I believe the same should be true for agents.
If an agent is acting on my behalf, I should be able to understand who it represents, what it is allowed to do, and where that authority comes from. I should also be able to take that authority away if needed.
Think zero-trust architecture. In zero-trust systems, nothing is assumed, every action is evaluated, and identity sits at the center.
I believe agents should be treated the same way.
Trust should not come from the fact that an agent exists, or that it presents a key, or that it lives inside a trusted platform. Trust should come from verified identity and clearly scoped delegation.
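That per-action evaluation can be made concrete. Here is a minimal sketch of the idea, assuming a delegation record with an ID, a verified principal, and explicit scopes — the field names and scope strings are my own illustrations, not an established standard:

```python
# Minimal sketch: every agent action is evaluated against a verified,
# scoped, revocable delegation -- nothing is trusted by default.
# All names and scope strings here are illustrative.

REVOKED = set()  # delegation IDs the principal has withdrawn

def authorize(delegation: dict, action: str) -> bool:
    """Allow an action only if the delegation is verified, in scope,
    and has not been revoked."""
    if delegation["id"] in REVOKED:
        return False                       # authority was taken away
    if not delegation["principal_verified"]:
        return False                       # no verified human or business behind it
    return action in delegation["scopes"]  # explicit, scoped authorization

delegation = {
    "id": "dlg-001",
    "principal": "acct:alice@example.com",  # who the agent represents
    "principal_verified": True,
    "scopes": {"calendar:book", "files:read"},
}

assert authorize(delegation, "calendar:book")       # explicitly granted
assert not authorize(delegation, "payments:send")   # never granted, so denied

REVOKED.add("dlg-001")                              # the principal revokes
assert not authorize(delegation, "calendar:book")   # authority withdrawn
```

Notice that the default answer is always no: an action passes only when it maps to an explicit, still-valid grant.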
But creating and using this verified identity has to be easy.
How Do You Solve for Agent Identity at Scale?
If identity becomes heavy or slow or manual, it will not scale. People will bypass it. Developers will find shortcuts. The system will lose the very trust it was meant to create.
That is why I think of agent identity as infrastructure.
At its core, you already have information about a user or a business. That information can be consumed in different ways. One way is through an API, where a system asks another system what is known and what has been verified.
Another way is through a portable certification token that can be delegated to an agent: a token that says this agent is acting on behalf of this verified human or business, with defined authorizations. Think: strong identity!
This matters because agents are not static. They operate across tools, vendors, and environments. Identity cannot be trapped inside a single database or platform. It has to move with the agent, while remaining verifiable and revocable.
In a nutshell, agent identity can be defined as a representation of a verified real-world entity and the agent acting for it, grounded in context. It encodes representation, agent and model identity, declared or self-attested capabilities, and operating context in a form other systems can verify and reason with.
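One way to picture that definition is as a record other systems can inspect. The field names below are hypothetical, not an established schema — the point is only which dimensions the record covers:

```python
from dataclasses import dataclass, field

# Illustrative sketch of an agent identity record: representation,
# agent and model identity, declared capabilities, and operating
# context, in a form other systems can inspect and reason with.

@dataclass(frozen=True)
class AgentIdentity:
    represents: str       # verified real-world entity behind the agent
    agent_id: str         # the agent itself
    model: str            # which model powers it
    capabilities: tuple   # declared or self-attested capabilities
    context: dict = field(default_factory=dict)  # where and how it operates

identity = AgentIdentity(
    represents="biz:acme-gmbh",
    agent_id="agent:invoice-clerk-7",
    model="claude-sonnet",  # illustrative model name
    capabilities=("invoices:read", "email:draft"),
    context={"cloud": "aws", "region": "eu-central-1"},
)
```

In practice such a record would be signed or attested rather than self-declared, but the structure stays the same.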
Takeaways
As agents become more capable, the question of trust shifts. It is no longer about whether an agent is intelligent, or whether it runs inside a trusted platform. It is about whether its actions are clearly tied to a real person or business, with authority that is explicit, verifiable, and revocable.
It means knowing who built the agent and whether that developer or company has been verified. It means knowing who the agent represents and what model powers it: Claude, GPT, or something open-source and fine-tuned. Different models carry different behaviours and different risk profiles. It means knowing where the agent runs: which cloud, which region, and on what infrastructure.
It also means understanding what the model is capable of: what tools it has access to, and whether it can read files, send messages, move money, or call other APIs. It is just as important to understand what it cannot do; these boundaries and prohibitions keep it scoped to authorized actions. And finally, how the agent handles data: whether it stores your inputs, for how long, and whether any of it feeds back into model training.
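Those boundaries can be written down as a capability manifest. This is a sketch under my own assumptions — the keys, action strings, and data-handling fields are illustrative, not a published format:

```python
# Sketch of a capability manifest: what the agent can do, what it is
# explicitly prohibited from doing, and how it handles data.
# All keys and values are illustrative.

MANIFEST = {
    "can": {"files:read", "messages:send"},
    "cannot": {"money:move", "files:delete"},  # explicit prohibitions
    "data": {
        "stores_inputs": True,
        "retention_days": 30,
        "used_for_training": False,
    },
}

def permitted(manifest: dict, action: str) -> bool:
    """Prohibitions win over grants; anything not granted is denied."""
    if action in manifest["cannot"]:
        return False
    return action in manifest["can"]

assert permitted(MANIFEST, "files:read")         # explicitly allowed
assert not permitted(MANIFEST, "money:move")     # explicitly prohibited
assert not permitted(MANIFEST, "payments:send")  # not granted, so denied
```

The design choice worth noting is that prohibitions always override grants, and silence means no: an action absent from both lists is still denied.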
If we get this right, agents can operate safely across systems and tools. Not to mention, many of the harder conversations around safety, accountability, and compliance start to feel more grounded.
If we get this wrong, scale becomes dangerous. Agent identity is not a nice-to-have; it is what makes delegation, trust, and autonomy possible.
That is how I think about agent identity. If you are exploring this area, I would love to connect and chat!