The enterprise AI glossary.
Definitions for the terms that actually come up in production enterprise AI deployments — written to be useful, not to pad word count.
An agent constitution is the written policy that defines what an AI agent is authorized to do, what it must refuse, how it escalates, and how it speaks — enforced at runtime by a policy layer.
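A minimal sketch of how such a runtime policy layer might gate a proposed action — the rule categories (allow, refuse, escalate) come from the definition above, but the specific action names and the dictionary layout are illustrative assumptions, not a real product API:

```python
# Illustrative agent constitution: which actions are authorized, which must
# be refused outright, and which require human escalation. All action names
# here are hypothetical examples.
CONSTITUTION = {
    "allowed": {"send_email", "book_meeting"},
    "refuse": {"issue_refund"},
    "escalate": {"delete_account"},
}

def enforce(action: str) -> str:
    """Return the policy decision for an action the agent proposes to take."""
    if action in CONSTITUTION["refuse"]:
        return "refuse"
    if action in CONSTITUTION["escalate"]:
        return "escalate"
    if action in CONSTITUTION["allowed"]:
        return "allow"
    # Default-deny: any action the constitution does not authorize is refused.
    return "refuse"
```

The default-deny fallthrough is the important design choice: an agent should never be able to take an action simply because the policy author forgot to list it.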
An AI agent is a software system that uses a large language model to perceive its environment, reason about tasks, and take actions in external systems on behalf of a user.
AI agent governance is the set of controls, policies, and audit mechanisms that keep deployed AI agents operating inside defined boundaries.
An AI chatbot — more accurately, an AI chat agent — is an AI agent that interacts with users through text: web chat, WhatsApp, SMS, Slack, Teams, or social messaging.
An AI SDR is an AI agent that performs the work of a sales development representative — responding to inbound leads, qualifying them against an ideal customer profile, and booking qualified meetings onto account-executive calendars.
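The qualification step can be sketched as a simple check against an ideal customer profile — the ICP fields and thresholds below are invented for illustration; real qualification logic is richer and often model-driven:

```python
# Hypothetical ideal customer profile and a lead-qualification check.
# Field names and thresholds are illustrative assumptions.
ICP = {"min_employees": 100, "regions": {"NA", "EU"}}

def qualifies(lead: dict) -> bool:
    """True if the inbound lead matches the ideal customer profile."""
    return (
        lead["employees"] >= ICP["min_employees"]
        and lead["region"] in ICP["regions"]
    )
```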
An AI voice agent is an AI agent that interacts with users through voice — answering inbound calls, placing outbound calls, and conversing in real time.
Constitutional AI is an approach to training and deploying AI systems in which model behavior is guided by an explicit written set of principles — a "constitution" — rather than only by reinforcement from human feedback.
Enterprise AI is the application of artificial intelligence — particularly large language models and AI agents — inside organizations, with the governance, integration, and operational controls required for production business use.
Model Context Protocol (MCP) is an open standard for connecting AI agents to the tools, data sources, and prompts they need, through a consistent client-server protocol.
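MCP messages are JSON-RPC 2.0; the sketch below builds a `tools/call` request, the method an MCP client uses to invoke a server-side tool. The tool name and arguments are illustrative, not part of any real server:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request invoking an MCP tool by name."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only.
msg = mcp_tool_call(1, "search_docs", {"query": "quarterly revenue"})
```

In practice an MCP client first discovers what a server offers (e.g. via `tools/list`) and then issues calls like this over the negotiated transport.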
Multi-agent orchestration is the practice of coordinating multiple specialized AI agents to accomplish a task that is too complex or too broad for a single agent.
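A minimal sketch of the coordination pattern: an orchestrator splits work into subtasks and routes each to a specialist. The agent roles and the stub functions standing in for LLM-backed agents are assumptions for illustration:

```python
# Stub "agents" standing in for LLM-backed specialists.
def research_agent(task: str) -> str:
    return f"findings for: {task}"

def writer_agent(task: str) -> str:
    return f"draft for: {task}"

AGENTS = {"research": research_agent, "write": writer_agent}

def orchestrate(subtasks: list[tuple[str, str]]) -> list[str]:
    """Dispatch each (role, task) pair to the matching specialist agent."""
    return [AGENTS[role](task) for role, task in subtasks]
```

Real orchestrators add the parts this sketch omits: dynamic task decomposition, shared state between agents, and retries when a specialist fails.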
Retrieval-augmented generation (RAG) is an architecture that grounds a language model's responses in a specific knowledge base by retrieving relevant passages at inference time and conditioning the response on them.
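The two halves of the definition — retrieve at inference time, then condition the response — can be sketched as follows, with word-overlap scoring standing in for an embedding index and a prompt template standing in for the LLM call (both are simplifying assumptions):

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by how many words they share with the query (a toy
    stand-in for vector similarity search) and return the top k."""
    q = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Condition the model on the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Grounding the prompt in retrieved passages is what lets the model answer from a specific knowledge base rather than from its training data alone.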
Tool use — also called function calling — is the capability of a language model to emit structured calls to external tools, enabling an agent to take real actions in connected systems.
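The runtime side of tool use can be sketched in a few lines: the model emits a structured call (represented here as a JSON string), and the agent runtime parses it and dispatches to the real function. The tool name, its signature, and the call format are illustrative assumptions:

```python
import json

# A hypothetical tool the agent can invoke.
def get_weather(city: str) -> str:
    return f"22C and clear in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call and execute the named tool."""
    call = json.loads(model_output)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
```

The structured format is the point: because the call is machine-parseable rather than free text, the runtime can validate it against a schema (and against a policy layer) before anything executes.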