AI Glossary
A plain-English reference for terms you’ll see in our work together. If you don’t see a term here, email me and I’ll add it.
How to use this page: Scan by alphabet. Short definitions first, then when useful a quick example or “see also.”
A/B test
A simple experiment where you compare two versions (A vs. B) to see which performs better.
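If you're curious what the math looks like, here's a toy Python sketch (all numbers invented). A real test also needs enough visitors and a significance check before you trust the winner.

```python
# Toy A/B comparison: which version converted better?
def conversion_rate(conversions, visitors):
    return conversions / visitors

# Hypothetical results from a two-week test
rate_a = conversion_rate(24, 400)   # version A
rate_b = conversion_rate(39, 400)   # version B
winner = "A" if rate_a > rate_b else "B"
```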
Accordion
A web section that expands/collapses to show more details. Useful for long pages (like this glossary).
Agent
A multi-step AI workflow that can use tools (search, email, calendar, CRM) to complete a task.
Example: draft a reply, look up a customer in your CRM, add a follow-up reminder, and prepare a summary.
AI model
The underlying system that generates text, images, or audio based on patterns learned from data.
Alt text
A short description of an image for accessibility and SEO.
Example: “Douglas Hunter headshot, blue checkered shirt.”
API
A way for software to talk to other software. Lets us connect your systems to AI without manual copy-paste.
See also: webhook, integration.
Artifact
Any file or output an AI workflow produces (doc, email draft, brief, spreadsheet).
Automation
Rules that run tasks without manual effort. Often triggered by an event.
Example: when a form is submitted, draft a thank-you email in your voice.
Bias (model bias)
When an AI’s outputs lean unfairly toward certain people or ideas. We test and adjust to reduce this risk.
Brand voice
Your company’s tone, vocabulary, and style. We tune assistants to match it.
See also: style guide, few-shot example.
Chunking
Splitting long documents into smaller pieces so an AI can read and search them accurately.
See also: context window.
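Under the hood, chunking can be as simple as this Python sketch: split on words, with a little overlap so ideas that straddle a boundary aren't lost. (Sizes here are illustrative; real systems tune them.)

```python
# Split long text into overlapping word chunks so each piece
# fits comfortably in a model's context window.
def chunk_words(text, size=50, overlap=10):
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

pieces = chunk_words("word " * 120, size=50, overlap=10)
```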
Compliance
Following laws and company policies (privacy, security, industry rules). We align builds with your counsel/IT.
Consent (client consent)
Written OK to use specific data, tools, or testimonials in a project.
Context window
How much text the model can “hold in mind” at once. Longer windows let the model consider more material.
Creativity settings (frequency/presence penalties)
Settings that nudge the model away from repeating words it has already used (frequency) or toward introducing new ideas (presence).
CTA (call to action)
The next step you want a visitor to take (for example, “Book your free 15-minute discovery call”).
Custom GPT
A tailored assistant configured for your business, with your instructions, examples, and allowed tools.
Data governance
How an organization manages data access, quality, retention, and compliance. We align builds to your rules.
See also: data inventory, retention, compliance.
Data inventory
A short list of what data a project uses, where it lives, who can access it, and how long we keep it.
Data minimization
Only collecting/using the least amount of data needed to do the job.
See also: de-identification, redaction.
Data residency
Where data is stored geographically. Some companies require specific regions.
De-identification
Removing personal details so data can’t easily be tied back to a person.
See also: PII, PHI, redaction.
Deep-Dive Strategy
A focused working session to map workflows and build tailored assistants, followed by clear next steps.
See also: discovery call, prompt audit.
Discovery call
A short, free conversation to identify the biggest AI opportunity and fit.
See also: prompt audit, deep-dive strategy.
DPA (data processing addendum)
A simple contract add-on that sets privacy, retention, and subprocessors for a project.
See also: compliance.
Embedding
A numeric representation of text that helps the system find related ideas. Used for search and retrieval.
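Here's a toy Python sketch of how "related ideas" are found: compare vectors with cosine similarity. The three-number vectors are made up for illustration; real embeddings have hundreds of dimensions.

```python
import math

# Cosine similarity: how "close" two embedding vectors are.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up mini "embeddings" for illustration only
invoice = [0.9, 0.1, 0.0]
billing = [0.8, 0.2, 0.1]
weather = [0.0, 0.1, 0.9]
```

"Invoice" scores closer to "billing" than to "weather," which is exactly how retrieval finds related documents.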
Encryption (in transit / at rest)
Protecting data while it moves across the internet (in transit) and while stored on disk (at rest).
Eval set (evaluation set)
A small, representative set of examples we use to test an assistant’s accuracy and tone before launch.
See also: human-in-the-loop, guardrails.
Fallback / human handoff
When the assistant can’t proceed, it hands off to a person or a simpler workflow—so users aren’t stuck.
Few-shot example
A couple of short examples inside a prompt to teach the model your format and tone.
Example: “Here are 2 sample replies in our voice. Now write one for this new situation.”
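In code, a few-shot prompt is just careful string assembly. A minimal Python sketch (all text invented):

```python
# Assemble a few-shot prompt: two sample replies teach the
# format and tone before the new request.
examples = [
    ("Can I move my appointment?", "Of course! What day works best for you?"),
    ("Do you offer refunds?", "Yes, within 30 days. Want me to start one?"),
]
prompt = "Reply in our friendly, concise voice.\n\n"
for question, reply in examples:
    prompt += f"Customer: {question}\nReply: {reply}\n\n"
prompt += "Customer: Is there parking nearby?\nReply:"
```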
Fine-tuning
Training a model further on a small set of your examples to improve style or accuracy for your use case.
GA4 (Google Analytics 4)
Google’s analytics tool for measuring site traffic and conversions.
See also: UTM.
Grounding
Keeping answers tied to your approved sources. RAG is one way to do this.
See also: RAG, knowledge base.
Guardrails
Rules that restrict what a workflow can read or write, and how it behaves.
Example: allow-list data sources; block sending emails without human review.
Hallucination
When a model sounds confident but makes something up. We reduce this with retrieval, examples, and review.
Human-in-the-loop (HITL)
A person approves or edits AI output before it goes out. Default in my builds for client-facing messages.
Instruction (system instruction)
The standing guidance that sets tone, role, and boundaries for an assistant.
Integration
Connecting two tools so they share data or trigger actions (for example, AI ↔ CRM).
See also: API, webhook.
Jailbreak / prompt injection
Tricks that try to make a model ignore instructions or leak data. We test and add defenses before launch.
Knowledge base
The set of approved documents the assistant can reference.
See also: retrieval, RAG.
Latency
Delay between request and response. We keep it low so helpers feel snappy.
Least privilege
Giving only the minimum access necessary to complete the work.
See also: access and security, RBAC.
LLM (large language model)
A type of AI model that predicts likely next words to generate text.
Logging
Keeping a record of operations to help debug and audit. You can opt out of nonessential logs.
MFA (multi-factor authentication)
An extra sign-in step (such as a code) that protects accounts even if a password is stolen.
NDA (non-disclosure agreement)
A confidentiality agreement to protect shared information. Often used before deeper data access.
Onboarding
The steps to get you set up: access, data samples, style examples, and a quick success plan.
On-prem / private deployment
Running AI in a private environment so prompts/outputs aren’t retained by a public vendor.
PHI (protected health information)
Personal health data covered by regulations. Requires special handling—or we avoid using it.
PII (personally identifiable information)
Data that can identify a person (name, email, phone, etc.). Handle carefully.
See also: PHI, redaction.
Pilot
A limited trial with real users to validate a workflow before broader rollout.
See also: rollout, eval set.
Prompt
The instruction you give the model. Clear prompts get better results.
Prompt audit
A working session to improve prompts and identify quick wins; includes before/after prompts.
See also: discovery call, deep-dive strategy.
Prompt template
A reusable prompt with blanks for the details.
Example: “Write a friendly two-paragraph email in our voice to [name] about [topic]…”
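A template with blanks maps directly to code. A small Python sketch using the standard library (the name and topic are placeholders):

```python
from string import Template

# Reusable prompt with blanks for the details
template = Template(
    "Write a friendly two-paragraph email in our voice "
    "to $name about $topic. Close with our standard sign-off."
)
prompt = template.substitute(name="Jordan", topic="the new onboarding checklist")
```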
RAG (retrieval-augmented generation)
Have the AI read your approved docs before answering so it stays grounded in your content.
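At its simplest, retrieval means "find the most relevant approved doc, then answer from it." This Python sketch scores docs by keyword overlap; real RAG systems use embeddings, but the shape is the same. All document text is invented.

```python
# Minimal retrieval sketch: pick the approved doc that best
# matches the question, then build a grounded prompt from it.
docs = {
    "refund-policy": "Refunds are available within 30 days of purchase.",
    "hours": "We are open Monday through Friday, 9am to 5pm.",
}

def best_doc(question):
    q_words = set(question.lower().replace("?", "").split())
    return max(docs, key=lambda k: len(q_words & set(docs[k].lower().split())))

source = best_doc("Are refunds available?")
prompt = f"Answer using only this source:\n{docs[source]}\n\nQ: Are refunds available?"
```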
Rate limits
Caps vendors set on how many requests you can make per minute. We design around these.
RBAC (role-based access control)
Access granted by role (for example, “read-only finance”). Keeps permissions tidy.
See also: least privilege.
Reasoning
The intermediate steps a system takes to decide what to output. Some models expose traces; many don’t. We focus on reliable results, not hidden inner workings.
Redaction
Masking sensitive details before sharing or storing text.
Example: replace phone numbers with [REDACTED].
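Redaction is often a simple find-and-replace with a pattern. A Python sketch for the phone-number example above (the pattern covers common US formats; adjust for your data):

```python
import re

# Mask phone numbers before storing or sharing text
PHONE = re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}")

def redact_phones(text):
    return PHONE.sub("[REDACTED]", text)

clean = redact_phones("Call me at (808) 555-0123 tomorrow.")
```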
Retention (data retention)
How long data is stored. We default to short timelines and delete at project end unless you require otherwise.
See also: data minimization, compliance.
Role
Who the assistant is pretending to be (for example, “You are a helpful client-services assistant”).
Rollout
Turning on a workflow for more users in stages after the pilot.
Safety policy
Rules that prevent harmful or disallowed outputs. Applied at the vendor level and in our prompts.
Scope
The precise boundaries of what a helper will and won’t do. We keep scopes small and useful.
Session memory
Short-term context an assistant can remember during a conversation to stay consistent—then reset.
See also: context window.
SOC 2 / ISO 27001
Third-party security standards some vendors have. We prefer tools with these attestations when relevant.
SOP (standard operating procedure)
Plain-English steps so a task is done the same way every time. My SOPs are one-pagers in your voice.
SOW (statement of work)
A short document listing scope, deliverables, timeline, and responsibilities.
See also: scope, versioning.
Streaming
Sending output as it's generated so you see it appear right away, which is especially useful for longer drafts.
Style guide
A brief reference for tone, formatting, and dos/don’ts so outputs stay on-brand.
See also: brand voice, few-shot example.
System of record
The official place a piece of information lives (for example, your CRM for client notes).
See also: single source of truth.
Temperature / top-p
Settings that control creativity. Lower = safer/steadier; higher = more varied.
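For the curious, here's a small Python sketch of what temperature actually does to the model's word probabilities: low values sharpen toward the top choice, high values flatten the options. Scores are invented.

```python
import math

# Temperature reshapes the probability distribution over next words
def softmax(scores, temperature=1.0):
    scaled = [s / temperature for s in scores]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]
cool = softmax(scores, temperature=0.5)   # steadier: top option dominates
warm = softmax(scores, temperature=2.0)   # more varied: options closer together
```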
Token
A small chunk of text models count to measure length and cost. Roughly 3–4 characters on average.
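A quick back-of-envelope estimate in Python. Real tokenizers vary by model, so treat this as a rough guide, not a billing calculator.

```python
# Rough token estimate: ~4 characters per token is a common
# ballpark figure for English text.
def estimate_tokens(text, chars_per_token=4):
    return max(1, round(len(text) / chars_per_token))

short = estimate_tokens("Aloha!")
```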
Tool use
Letting an assistant call an external tool mid-task (search, calendar, database, email send).
Tracking link (UTM)
A link with tags (utm_source, utm_medium, etc.) so we can see which campaign brought a visitor.
See also: GA4.
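Building a tracking link is simple string work. A Python sketch using the standard library (campaign values are illustrative):

```python
from urllib.parse import urlencode

# Build a tracking link with UTM tags
def tracking_link(base_url, source, medium, campaign):
    tags = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return f"{base_url}?{tags}"

link = tracking_link("https://example.com/book", "newsletter", "email", "spring-promo")
```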
Training data
What a model learned from before you ever used it. We avoid tools with unclear or suspect training sources for IP-sensitive work.
Trigger / action
Trigger: the event that starts a workflow. Action: what happens next.
Vector database
A specialized store for embeddings that makes similarity search fast and accurate.
Versioning
Labeling iterations of prompts, workflows, and models so we know what’s running where.
Webhook
Automatic messages tools send each other when something happens (“a form was submitted”).
See also: API, integration.
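On the receiving end, a webhook is just a small message to parse and act on. A Python sketch with an invented payload; field names vary by tool.

```python
import json

# Sketch of handling an incoming webhook payload
def handle_webhook(raw_body):
    event = json.loads(raw_body)
    if event.get("type") == "form.submitted":
        return f"Draft thank-you email to {event['email']}"
    return "Ignore event"

result = handle_webhook('{"type": "form.submitted", "email": "lee@example.com"}')
```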
White-space opportunity
Useful improvements you’re not doing yet because no one connected the dots. Often small helpers with big payoff.
Zero-retention mode
A vendor setting where prompts and outputs aren’t stored or used for training.
Zero-shot / one-shot / few-shot
How many examples you give the model in the prompt. Zero-shot: none. One-shot: one example. Few-shot: a small set.
See also: few-shot example.
Last updated: August 12, 2025 (HST)