Glossary

Terms and concepts for product managers working with AI, agents, and customer intelligence. From Propane-specific features to the broader AI PM landscape.

A

Acceptance Criteria

Product

Conditions a feature or story must meet to be considered complete. Acceptance criteria make requirements testable and reduce ambiguity between product and engineering. One of the six dimensions scored in Canvas Eval.

Agent Memory

Agent

The ability of an agent to retain and recall information across separate sessions or conversations. Distinct from the context window, which only holds the current request. Memory allows agents to build a model of a user or workspace over time — personalizing responses, remembering past decisions, and maintaining continuity. It also introduces new governance considerations: what gets stored, who can access it, and how it's cleared.

Agentic Loop

Agent

The repeating cycle of perceive → reason → act that an AI agent runs through to complete a task. The loop continues until the agent reaches a stopping condition — a finished output, an error, or a human checkpoint. Understanding the loop helps PMs reason about latency, cost, and failure modes.
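
The loop can be sketched in a few lines. A minimal illustration, assuming a hypothetical tool registry and a toy reasoning policy, not any particular agent framework:

```python
def run_agent(goal, tools, reason, max_steps=10):
    """Repeat reason -> act -> perceive until a stopping condition is reached."""
    history = []
    for _ in range(max_steps):                    # step cap bounds latency and cost
        decision = reason(goal, history)          # reason over everything seen so far
        if decision["action"] == "finish":        # stopping condition: finished output
            return decision["output"]
        result = tools[decision["action"]](**decision["args"])  # act via a tool
        history.append(result)                    # perceived on the next iteration
    raise RuntimeError("step budget exhausted")   # stopping condition: error

# Toy policy: look the goal up once, then finish with whatever came back.
def toy_reason(goal, history):
    if not history:
        return {"action": "lookup", "args": {"query": goal}}
    return {"action": "finish", "output": history[-1]}

tools = {"lookup": lambda query: f"result for {query!r}"}
print(run_agent("churn drivers", tools, toy_reason))
```

The max_steps cap is one concrete answer to the latency, cost, and failure-mode questions the loop raises: without it, a confused agent can spin indefinitely.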

AI Agent

Agent

A software system that perceives its environment, makes decisions, and takes actions to achieve a goal — often across multiple steps without a human in the loop for each one. In product contexts, agents can draft documents, query data sources, run research sessions, and hand off work to other agents.

AI Interview

Product

An asynchronous research method where an AI agent conducts a structured conversation with a research participant — asking follow-up questions, probing on themes, and capturing responses. Enables PM teams to run continuous discovery at scale without scheduling live sessions.

AI-Native

AI

Designed from the ground up to be powered by AI — not a traditional product with AI bolted on. AI-native tools use models as core infrastructure rather than as an add-on feature. The distinction matters to PMs evaluating vendors: AI-native tools can redesign the workflow, not just automate it.

C

Canvas

Propane

The core workspace surface in Propane — sometimes called a document. Where teams collect signals, align on decisions, and prepare work to ship. A canvas can hold a PRD, GTM plan, technical spec, or any product artifact. Content is versioned and shareable.

Canvas Agent

Propane

The AI agent operating inside a canvas. Pulls from signals, connectors, files, and action agents. Queries sources, summarizes, scores priorities, and drafts content directly in the canvas surface — so the work stays in one place.

Canvas Eval

Propane

Built-in readiness assessment for a canvas. Scores six dimensions — Problem & Goal, Existing product context, Technical context, Customer evidence, Competitive context, and UX / acceptance criteria — as Strong, Partial, or Weak, producing an overall readiness percentage.

Coding Agent

Agent

An AI agent that writes, edits, reviews, or runs code — tools like Cursor, Lovable, and GitHub Copilot. In AI-native product development, coding agents receive aligned requirements from product (via Push) and implement them. PMs are increasingly responsible for producing agent-ready specs.

Compound Intelligence

Product

The compounding effect that emerges when customer signals, business context, and AI reasoning accumulate over time. Unlike a one-off research report, compound intelligence gets more valuable as more data connects — each new signal makes every existing signal richer.

Connectors

Propane

On-demand MCP connections the canvas agent uses to query external tools at run time. Unlike integrations, connectors are invoked by the agent when needed rather than running continuously in the background.

Context Engineering

AI

The systematic design of the information environment a model draws on when generating a response. While prompt engineering focuses on how instructions are written, context engineering focuses on what knowledge is available at inference time — which documents, data, or context get retrieved and injected. For PMs building AI products, it's an architecture decision: how does the system automatically surface the right context so every response is grounded and relevant?

Context Window (model)

AI

The maximum amount of text a language model can process in a single request — its effective working memory. Larger context windows allow agents to reason over longer documents and conversation histories. PMs designing agent workflows need to account for context limits when chaining tasks.
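
A rough way to reason about context limits is simple token budgeting. A sketch with illustrative numbers; the limit and output reserve are placeholders, not any specific model's:

```python
CONTEXT_LIMIT = 128_000          # tokens the model can hold per request (illustrative)
RESERVED_FOR_OUTPUT = 4_000      # leave room for the model's answer

def fits_in_context(system_prompt_tokens, history_tokens, document_tokens):
    """Check whether a request will fit before sending it."""
    used = system_prompt_tokens + history_tokens + document_tokens
    return used + RESERVED_FOR_OUTPUT <= CONTEXT_LIMIT

print(fits_in_context(2_000, 30_000, 80_000))   # True: 116k total, under budget
print(fits_in_context(2_000, 30_000, 100_000))  # False: over budget, must trim or chunk
```

When the check fails, the workflow design question becomes which context to drop, summarize, or retrieve on demand.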

Context Window (product)

Product

In product intelligence, the body of information — customer data, business goals, technical constraints — that an agent draws on when responding. Broader, richer product context leads to more accurate and relevant AI output. Propane's integrations expand this context automatically.

Continuous Discovery

Product

A product development practice where teams maintain an ongoing rhythm of customer conversations and signal collection — rather than discrete research sprints. AI agents and always-on integrations make continuous discovery tractable for small teams at scale.

Customer Evidence

Product

Quotes, interview excerpts, support tickets, survey responses, or usage data that support a product decision. One of the six dimensions scored by Canvas Eval. Strong customer evidence separates conviction built on real signal from conviction built on assumption.

E

Embeddings

AI

A technique that transforms text into numerical vectors capturing semantic meaning. This lets AI systems find related content even when exact words don't match — so a query about 'canceling a subscription' can surface results about 'terminating an account.' Embeddings are the engine behind semantic search and RAG. Understanding them helps PMs diagnose why search results feel off, or why the right documents aren't surfacing in an AI feature.
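
The "related content without matching words" behavior falls out of vector similarity. A toy sketch using made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, produced by a model, not written by hand):

```python
import math

def cosine_similarity(a, b):
    """Angle-based closeness of two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical embeddings for three pieces of text.
cancel_sub   = [0.9, 0.1, 0.2]   # "canceling a subscription"
terminate_ac = [0.8, 0.2, 0.3]   # "terminating an account"
pricing_page = [0.1, 0.9, 0.1]   # "pricing page layout"

print(cosine_similarity(cancel_sub, terminate_ac))  # high: related meaning
print(cosine_similarity(cancel_sub, pricing_page))  # low: unrelated topic
```

Semantic search ranks documents by exactly this kind of score, which is why a query can surface results that share no keywords with it.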

Evaluation (Evals)

AI

A systematic process for measuring how well an AI model or agent performs on a defined task. Evals replace intuition with measurement — critical for shipping AI features confidently. Good PMs define evals before they start building, not after something breaks in production.
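
At its smallest, an eval is a labeled dataset plus a score. A sketch where a keyword rule stands in for the model call so the example runs offline; the cases and labels are invented:

```python
def classify_sentiment(text):
    """Stand-in for a model call (a keyword rule keeps the sketch runnable)."""
    negative_markers = ("cancel", "broken", "slow")
    return "negative" if any(w in text.lower() for w in negative_markers) else "positive"

# A tiny labeled eval set: (input, expected label).
eval_set = [
    ("The export is broken again", "negative"),
    ("Love the new canvas view",   "positive"),
    ("Thinking about canceling",   "negative"),
]

correct = sum(classify_sentiment(text) == label for text, label in eval_set)
accuracy = correct / len(eval_set)
print(accuracy)   # track this number across every prompt or model change
```

The value is the baseline: once the score exists, any prompt tweak or model swap can be measured instead of eyeballed.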

F

Fine-Tuning

AI

Additional training applied to a pre-trained foundation model on a smaller, task-specific dataset. Fine-tuning improves performance on narrow tasks but requires labeled data and compute. Most product teams achieve their goals with prompt engineering or RAG before needing fine-tuning.

Foundation Model

AI

A large-scale AI model trained on broad data that can be adapted or fine-tuned for specific tasks. Foundation models are the base layer beneath most AI applications. Understanding the capability differences between providers matters when evaluating AI product vendors.

Full Stack Builder

Product

A PM who goes beyond writing requirements and works directly with AI systems — experimenting with prompts, calling APIs, and prototyping features hands-on. Full stack builders shorten feedback loops by validating ideas in hours rather than waiting for engineering cycles. The term reflects an emerging archetype in AI-native teams: product people who can explore feasibility directly and bring credible technical insight into planning conversations.

Function Calling

Agent

A structured method for language models to invoke external tools by outputting a function name and arguments in a defined format. The application then executes the function and returns the result to the model. Underpins most modern agent implementations.
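
The mechanics fit in a few lines. A sketch with a hypothetical tool name and a provider-agnostic JSON shape (each provider defines its own exact format):

```python
import json

# Hypothetical tool; a real app maps registered names to actual functions.
def get_ticket_count(status):
    counts = {"open": 42, "closed": 310}   # stand-in data
    return counts[status]

TOOLS = {"get_ticket_count": get_ticket_count}

# The model emits a function call as structured output rather than prose.
model_output = '{"name": "get_ticket_count", "arguments": {"status": "open"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])   # the application executes it
print(result)   # the result is then passed back to the model for its next turn
```

Note the division of labor: the model only names the function and its arguments; the application decides whether and how to actually run it.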

G

Grounding

Agent

Connecting a model's outputs to real, verifiable data sources rather than relying purely on its training. Grounded responses cite specific customer feedback, product data, or documents — which is why Propane surfaces signal citations alongside insights.

GTM Plan

Product

Go-to-Market Plan. A strategy document outlining how a product or feature will reach its target customers — covering positioning, pricing, launch channels, and success metrics. GTM plans are a common artifact type in Propane canvases.

Guardrails

AI

Constraints placed on model behavior to prevent unsafe, off-brand, or incorrect outputs. Guardrails are how teams make probabilistic AI feel reliable to users — filtering harmful content, enforcing tone, blocking out-of-scope responses, and catching hallucinations before they surface. For PMs shipping user-facing AI features, guardrails are a launch requirement, not an afterthought.

H

Hallucination

AI

When a language model generates plausible-sounding but factually incorrect output. A critical failure mode for AI in product work. Grounding, retrieval, and structured evaluation patterns (like Canvas Eval) are defenses against hallucination in high-stakes documents.

Human-in-the-Loop

Agent

A design pattern where a human reviews or approves an agent's output before it proceeds to the next step. Critical for high-stakes actions — publishing, sending, or pushing to production. PMs need to decide where in the agentic loop to insert human checkpoints.

I

Inference

AI

The act of a model generating a response — the moment it runs and produces output for a user. Inference latency is one of the most common AI UX failure modes: a feature that works perfectly in testing can still feel broken if it takes too long to respond. PMs need to understand the tradeoff between model quality and inference speed when making model selection decisions.

Insight

Product

A synthesized understanding derived from multiple signals — an observation about customer behavior, need, or pain that has strategic or tactical implications. Insights are what research produces; signals are the raw material. Propane surfaces insights from aggregated signal.

Integrations

Propane

SaaS connections that continuously sync signals into Propane — powering company, people, and signal data. Examples include CRM, support tools, and product analytics platforms. Integrations run in the background, keeping your intelligence layer current.

J

Job-to-be-Done

Product

A framework for understanding what customers are trying to accomplish — the underlying goal they hire a product to help with. Jobs-to-be-done shift focus from features to outcomes, and are a useful lens for writing prompts that extract meaningful signal from customer feedback.

L

LLM

AI

Large Language Model. A neural network trained on large text datasets that can generate, summarize, translate, and reason about language. GPT-4, Claude, and Gemini are LLMs. Most AI-native product tools — including Propane — are built on top of LLM APIs.

LLM-as-Judge

AI

A pattern where one language model is used to evaluate the outputs of another. Because manual review doesn't scale, LLM-as-judge is the main practical method for running automated quality evals across large volumes of AI output. PMs building AI features use it to measure consistency, catch regressions, and validate that changes to prompts or models improve — not degrade — output quality.

M

MCP (Model Context Protocol)

Agent

An open standard for connecting AI models to external data sources and tools. MCP defines how models discover, invoke, and receive results from integrations — making it easier to build interoperable agent systems. Propane Connectors are built on MCP.

Model Routing

Agent

Also called model orchestration. Logic that determines which AI model handles a given request, based on cost, speed, quality requirements, or task type. Not every query needs the most powerful model. PMs building AI features use model routing to balance output quality against per-query cost — routing complex, high-stakes tasks to more capable models while handling routine tasks with lighter, cheaper ones.
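
A routing layer can be as simple as a lookup keyed on task type and stakes. An illustrative sketch: the tiers, prices, and rules are all placeholders, not real model names or rates:

```python
COST_PER_1K_TOKENS = {"light": 0.0002, "capable": 0.0150}   # invented prices

def route(task_type, stakes):
    """Pick a model tier from the task type and the stakes of being wrong."""
    if stakes == "high" or task_type in {"planning", "analysis"}:
        return "capable"   # complex, high-stakes work gets the strong model
    return "light"         # routine tasks go to the fast, cheap model

tier = route("classification", "low")
print(tier, COST_PER_1K_TOKENS[tier])   # the light tier is 75x cheaper per token here
```

Even a crude rule like this can cut average per-query cost dramatically when most traffic is routine.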

Multi-Agent System

Agent

An architecture where multiple specialized AI agents collaborate to complete a task. One agent might orchestrate the workflow while others handle specific jobs — research, writing, code, evaluation. Propane uses this pattern internally to coordinate canvas actions.

N

Noise

Product

Data that doesn't carry meaningful information about customer needs or product performance. One of the core challenges in customer intelligence is separating signal from noise — which is where AI categorization and filtering are most useful.

O

Observability

Agent

The ability to monitor, trace, and understand what an AI agent did and why — not just whether it succeeded or failed, but the reasoning path that led there. In traditional software, logs tell you what happened. In agentic systems, observability tells you how the model decided to do it. For PMs, observability is a launch requirement for any agent running in production — without it, debugging failures is guesswork.

Opportunity Sizing

Product

Estimating the potential value of a market or feature investment — typically expressed as TAM, SAM, and SOM. Opportunity sizing grounds prioritization decisions in market reality, not just internal intuition. AI agents can accelerate early-stage sizing by synthesizing market signals.

Orchestrator Agent

Agent

The agent responsible for coordinating other agents in a multi-agent system. It receives a high-level goal, breaks it into subtasks, delegates to specialist agents, and assembles the results. PMs building with AI need to understand where orchestration happens to design useful human checkpoints.

P

Positioning

Product

How a product is framed relative to alternatives in a customer's mind — the combination of audience, problem, and differentiated value. Good positioning informs every layer of go-to-market, from copy to sales narrative. Competitive context in Canvas Eval directly surfaces positioning gaps.

PRD

Product

Product Requirements Document. A written specification that describes what a product or feature should do, why it exists, who it serves, and how success will be measured. In AI-native workflows, PRDs live in canvases — so the document and the intelligence behind it stay in one place.

Prioritization

Product

The process of deciding what to work on next, given limited time and resources. Common frameworks include RICE, ICE, and MoSCoW. AI-assisted prioritization can surface which customer signals most strongly support each candidate initiative.

Probabilistic Output

AI

AI systems don't guarantee the same output for the same input — they produce the most likely response given the context. This is the fundamental difference from traditional deterministic software, where input A always produces output B. The shift changes how PMs write acceptance criteria (you're testing distributions, not exact outputs), design error states, and communicate quality expectations to stakeholders.
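
Testing a distribution rather than an exact output looks like this in practice. A sketch where a seeded random function stands in for a model call; the task, answer, and 92% figure are invented:

```python
import random

def flaky_extractor(text):
    """Stand-in for a model call: correct most of the time, not always."""
    return "refund" if random.random() < 0.92 else "billing"

# A deterministic assertion (output == "refund") would fail intermittently.
# Instead, run many trials and assert a pass-rate threshold.
random.seed(0)
trials = 500
passes = sum(flaky_extractor("I want my money back") == "refund" for _ in range(trials))
pass_rate = passes / trials
print(pass_rate)                 # hovers around 0.92
assert pass_rate >= 0.85         # the acceptance criterion, written as a threshold
```

The acceptance criterion becomes "at least 85% of runs produce the intended label," which is something a test suite can actually enforce.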

Prompt Engineering

AI

The practice of crafting model inputs — system prompts, examples, instructions — to reliably produce desired outputs. A core skill for PMs shipping AI features. The quality of a prompt directly affects the quality of agent behavior, output structure, and failure rate.

Prompt Injection

Agent

A security attack where malicious instructions are hidden inside content an agent processes — a document, email, webpage, or data feed — causing it to take unintended actions. As AI agents read more external content, prompt injection becomes a real attack surface. PMs designing agents that interact with user-supplied or external data need to include injection risk in their threat model from the start.

Push

Propane

Hands off aligned canvas content to coding agents such as Cursor or Lovable. Enforces a clear workflow principle: humans align first, then agents build. Push is the handoff point between product thinking and implementation.

R

RAG (Retrieval-Augmented Generation)

AI

An architecture that combines a retrieval step — fetching relevant documents or data — with generation, so the model answers based on retrieved content rather than training data alone. Most AI product tools use RAG under the hood to keep responses grounded in your specific data.

Reasoning Model

AI

A language model variant that shows its thinking process before producing a final answer — often called chain-of-thought or extended thinking. Reasoning models tend to perform better on complex, multi-step problems like planning, analysis, and ambiguous tradeoffs. Increasingly relevant to AI-native PM tooling.

Research Agents

Propane

Structured research workflows outside the canvas. Templates define configurations; instances produce summaries, insight reports, and signal citations. Research sessions are individual participant conversations. Research insights are the AI-generated analysis produced from them.

RICE

Product

A prioritization framework that scores work by Reach, Impact, Confidence, and Effort. RICE produces a numeric score that makes tradeoffs between initiatives explicit and comparable. Useful for grounding AI-assisted prioritization in a consistent method.
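
The formula itself is one line. A sketch with made-up initiative numbers, using the common scales (impact 0.25-3, confidence 0-1, effort in person-months):

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Two hypothetical initiatives.
onboarding_revamp = rice_score(reach=4000, impact=2, confidence=0.8, effort=4)
dark_mode         = rice_score(reach=9000, impact=0.5, confidence=0.9, effort=2)

print(round(onboarding_revamp))  # 1600
print(round(dark_mode))          # 2025: the scores make the tradeoff explicit
```

The output is only as honest as the inputs; the framework's real value is forcing the team to write those four numbers down.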

S

Settings

Propane

Account-level configuration — workspace profile, team members, prompt shortcuts, context categories, connectors, integrations, and privacy controls. The source of truth for how Propane is wired to your stack.

Share & Collaborate

Propane

Share links for canvases and research reports. Teammates leave feedback and comments directly on the canvas — keeping discussion in context rather than scattered across Slack threads or email.

Signal

Product

A data point from a customer — a support ticket, NPS comment, interview excerpt, churn note, or product event — that reveals something meaningful about their experience or needs. Signals aggregate into insight; a single signal is evidence, not conclusion.

Signals

Propane

A dedicated view for exploring product signals. Surfaces categorized insights from connected sources. Queryable conversationally — so teams can interrogate their customer data without writing queries or waiting for a researcher.

Space

Propane

A container within an account that holds canvases. Each space has a name, optional description, and a canvas count. Canvases inside a space inherit its shared context — useful for organizing work by product area, team, or initiative.

System Prompt

AI

Instructions given to a model at the start of a conversation — before user input — that define its role, constraints, and behavior. System prompts are how AI products configure agent personas, enforce guardrails, and inject relevant context. Most of what you configure in Propane agent settings flows through system prompts.

T

Temperature

AI

A model setting that controls how much randomness is introduced when generating output. At 0, the model always picks the most probable next word — consistent but sometimes repetitive. Higher values introduce more variation, making responses feel more creative or human. PMs building AI features use temperature tuning to balance brand consistency with natural-feeling output. Small adjustments can meaningfully affect user trust.
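
Under the hood, temperature divides the model's raw scores before they become probabilities. A self-contained softmax sketch with three made-up candidate tokens (temperature must be above zero here; real APIs special-case 0 as "always pick the top token"):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into next-token probabilities."""
    scaled = [score / temperature for score in logits]   # temperature rescales first
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]   # invented raw scores for three candidate tokens

print(softmax_with_temperature(logits, 0.2))  # near-deterministic: top token dominates
print(softmax_with_temperature(logits, 1.5))  # flatter: sampling produces more variety
```

Low temperature sharpens the distribution toward the top choice; high temperature flattens it, which is where the "more creative" feel comes from.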

Token

AI

The basic unit of text a language model processes — roughly three to four characters, or about three-quarters of a word in English. AI costs, context limits, and speed are all measured in tokens. Relevant to PMs building with AI APIs who need to manage latency and budget.
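
Token math is simple arithmetic. A sketch with illustrative prices, not any provider's real rate card:

```python
PRICE_PER_1K_INPUT_TOKENS  = 0.003   # dollars (invented)
PRICE_PER_1K_OUTPUT_TOKENS = 0.015   # output typically costs more than input

def estimate_cost(input_tokens, output_tokens):
    """Rough dollar cost of one request."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS

# By the three-quarters rule of thumb, ~3,000 words of context is ~4,000 tokens.
print(round(estimate_cost(4000, 800), 4))   # 0.024
```

Multiply that per-request figure by expected daily volume and latency-sensitive retries, and the budget conversation gets concrete fast.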

Tool Use

Agent

The ability of an AI model to call external functions or APIs — such as searching the web, querying a database, writing a file, or sending a message. Tool use is what transforms a language model into an agent capable of taking real-world actions.

U

Unit Economics (AI)

Product

A cost-benefit framework applied at the level of a single AI interaction. For PMs, this means understanding what each model call costs, what user value it produces, and whether that ratio holds as usage scales. High-quality models deliver better outputs but at higher per-query cost. Good AI PMs use unit economics to decide where premium intelligence is justified and where a simpler, cheaper approach is sufficient.
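
The framework reduces to a margin calculation per interaction. A sketch with invented figures for two tiers:

```python
def margin_per_interaction(model_cost, infra_cost, revenue_attributed):
    """Value an interaction produces minus what it cost to serve."""
    return revenue_attributed - (model_cost + infra_cost)

# Hypothetical numbers: does the premium model earn its higher per-query cost?
premium = margin_per_interaction(model_cost=0.040, infra_cost=0.005, revenue_attributed=0.060)
light   = margin_per_interaction(model_cost=0.002, infra_cost=0.005, revenue_attributed=0.030)

print(round(premium, 3))  # 0.015
print(round(light, 3))    # 0.023: the cheaper model wins on this particular task
```

The interesting cases are where premium intelligence lifts the value side enough to flip the comparison, which is exactly the judgment this framework forces.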

User Story

Product

A short, human-centered description of a feature from the user's perspective: 'As a [role], I want [action] so that [outcome].' User stories keep teams focused on behavior and value rather than technical implementation.