Glossary

AI Agent Governance

AI agent governance is the set of controls, policies, and audit mechanisms that keep deployed AI agents operating inside defined boundaries.

Agent governance covers several layers:
  • an explicit agent constitution (scope and rules)
  • a runtime policy engine that enforces those rules
  • retrieval grounding to keep responses accurate
  • tool-call permission controls
  • audit logging on every interaction
  • a change-control process for the constitution itself
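The runtime policy layer and tool-call permission controls can be sketched as follows. This is a minimal illustration, not a real framework: the names (`Constitution`, `Decision`, `evaluate_tool_call`) and the allow/deny rules are assumptions invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Constitution:
    """Illustrative stand-in for the agent's written scope and rules."""
    allowed_tools: set        # tools the agent is authorized to call
    forbidden_actions: set = field(default_factory=set)  # must always refuse

@dataclass
class Decision:
    """Result of a policy check, suitable for audit logging."""
    allowed: bool
    reason: str

def evaluate_tool_call(constitution: Constitution, tool: str, action: str) -> Decision:
    """Gate a single tool-call request against the constitution."""
    if action in constitution.forbidden_actions:
        return Decision(False, f"action '{action}' is forbidden by the constitution")
    if tool not in constitution.allowed_tools:
        return Decision(False, f"tool '{tool}' is outside the agent's authorized scope")
    return Decision(True, "permitted by constitution")

constitution = Constitution(
    allowed_tools={"search", "calculator"},
    forbidden_actions={"wire_transfer"},
)
print(evaluate_tool_call(constitution, "search", "lookup").allowed)   # True
print(evaluate_tool_call(constitution, "email", "send").allowed)      # False
```

The key design point is that the decision object carries a human-readable reason, so every allow or deny can flow directly into the audit log.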

In regulated industries — financial services, healthcare, legal — governance is not optional. Compliance teams must be able to audit what the agent said, what sources it cited, which policies were evaluated, and which humans reviewed the output.
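An audit record covering those compliance questions might look like the sketch below. The schema and field names are hypothetical, chosen only to mirror the four items above; a real deployment would use its own log format and storage.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One illustrative entry per agent interaction."""
    agent_output: str            # what the agent said
    cited_sources: list          # which sources it cited
    policies_evaluated: list     # which policies were evaluated
    human_reviewers: list        # which humans reviewed the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AuditRecord(
    agent_output="Your claim was approved.",
    cited_sources=["policy_manual.pdf#p12"],
    policies_evaluated=["scope_check", "pii_filter"],
    human_reviewers=["reviewer_042"],
)
# Serialize for append-only log storage.
print(json.dumps(asdict(record), indent=2))
```

Keeping the record serializable and append-only is what lets compliance teams reconstruct any interaction after the fact.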

Effective governance is designed into the deployment from day one, not retrofitted after launch. Retrofitting policy onto an agent that has been running "by vibes" is expensive and rarely complete.

See also
  • Agent Constitution: An agent constitution is the written policy that defines what an AI agent is authorized to do, what it must refuse, how it escalates, and how it speaks — enforced at runtime by a policy layer.
  • Constitutional AI: Constitutional AI is an approach to training and deploying AI systems in which model behavior is guided by an explicit written set of principles — a "constitution" — rather than only by reinforcement from human feedback.
  • Model Context Protocol (MCP): Model Context Protocol (MCP) is an open standard for connecting AI agents to the tools, data sources, and prompts they need, through a consistent client-server protocol.