Glossary
Terms and concepts for product managers working with AI, agents, and customer intelligence. From Propane-specific features to the broader AI PM landscape.
A
Acceptance Criteria
Product: Conditions a feature or story must meet to be considered complete. Acceptance criteria make requirements testable and reduce ambiguity between product and engineering. One of the six dimensions scored in Canvas Eval.
Agent Memory
Agent: The ability of an agent to retain and recall information across separate sessions or conversations. Distinct from the context window, which only holds the current request. Memory allows agents to build a model of a user or workspace over time — personalizing responses, remembering past decisions, and maintaining continuity. It also introduces new governance considerations: what gets stored, who can access it, and how it's cleared.
Agentic Loop
Agent: The repeating cycle of perceive → reason → act that an AI agent runs through to complete a task. The loop continues until the agent reaches a stopping condition — a finished output, an error, or a human checkpoint. Understanding the loop helps PMs reason about latency, cost, and failure modes.
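As an illustration, the loop reduces to a small control structure. This sketch is hypothetical (the `perceive`, `reason`, and `act` callables stand in for whatever a real agent framework provides), but it shows where latency, cost, and stopping conditions live:

```python
def run_agent_loop(goal, perceive, reason, act, max_steps=5):
    """Minimal perceive -> reason -> act loop with a step budget.

    `perceive`, `reason`, and `act` are caller-supplied callables; the
    loop stops when `reason` returns a terminal decision or the step
    budget runs out.
    """
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):
        observation = perceive(state)          # gather current context
        decision = reason(state, observation)  # decide next action or stop
        if decision.get("done"):
            return decision.get("output")
        result = act(decision)                 # execute the chosen action
        state["history"].append((decision, result))
    return None  # hit the step budget: a human checkpoint would go here
```

Each pass through the loop is typically a model call, so `max_steps` doubles as a cost ceiling and a natural place to insert a human checkpoint.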
AI Agent
Agent: A software system that perceives its environment, makes decisions, and takes actions to achieve a goal — often across multiple steps without a human in the loop for each one. In product contexts, agents can draft documents, query data sources, run research sessions, and hand off work to other agents.
AI Interview
Product: An asynchronous research method where an AI agent conducts a structured conversation with a research participant — asking follow-up questions, probing on themes, and capturing responses. Enables PM teams to run continuous discovery at scale without scheduling live sessions.
AI-Native
AI: Designed from the ground up to be powered by AI — not a traditional product with AI bolted on. AI-native tools use models as core infrastructure rather than as an add-on feature. The distinction matters to PMs evaluating vendors: AI-native tools can redesign the workflow, not just automate it.
C
Canvas
Propane: The core workspace surface in Propane — sometimes called a document. Where teams collect signals, align on decisions, and prepare work to ship. A canvas can hold a PRD, GTM plan, technical spec, or any product artifact. Content is versioned and shareable.
Canvas Agent
Propane: The AI agent operating inside a canvas. Pulls from signals, connectors, files, and action agents. Queries sources, summarizes, scores priorities, and drafts content directly in the canvas surface — so the work stays in one place.
Canvas Eval
Propane: Built-in readiness assessment for a canvas. Scores six dimensions — Problem & Goal, Existing product context, Technical context, Customer evidence, Competitive context, and UX / acceptance criteria — as Strong, Partial, or Weak, producing an overall readiness percentage.
Coding Agent
Agent: An AI agent that writes, edits, reviews, or runs code — tools like Cursor, Lovable, and GitHub Copilot. In AI-native product development, coding agents receive aligned requirements from product (via Push) and implement them. PMs are increasingly responsible for producing agent-ready specs.
Compound Intelligence
Product: The compounding effect that emerges when customer signals, business context, and AI reasoning accumulate over time. Unlike a one-off research report, compound intelligence gets more valuable as more data connects — each new signal makes every existing signal richer.
Connectors
Propane: On-demand MCP connections the canvas agent uses to query external tools at run time. Unlike integrations, connectors are invoked by the agent when needed rather than running continuously in the background.
Context Engineering
AI: The systematic design of the information environment a model draws on when generating a response. While prompt engineering focuses on how instructions are written, context engineering focuses on what knowledge is available at inference time — which documents, data, or context get retrieved and injected. For PMs building AI products, it's an architecture decision: how does the system automatically surface the right context so every response is grounded and relevant?
Context Window (model)
AI: The maximum amount of text a language model can process in a single request — its effective working memory. Larger context windows allow agents to reason over longer documents and conversation histories. PMs designing agent workflows need to account for context limits when chaining tasks.
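One practical consequence: when chaining tasks, conversation history has to be trimmed (or summarized) to fit the window. A minimal sketch, using the rough rule of thumb of about four characters per token for English text:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def fit_history(messages, budget_tokens):
    """Keep the most recent messages that fit within a token budget.

    A simple trimming strategy for illustration; production systems
    often summarize older turns instead of dropping them outright.
    """
    kept, used = [], 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order
```

The budget arithmetic is the same whatever the trimming strategy: every token spent on history is a token unavailable for instructions and output.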
Context Window (product)
Product: In product intelligence, the body of information — customer data, business goals, technical constraints — that an agent draws on when responding. Broader, richer product context leads to more accurate and relevant AI output. Propane's integrations expand this context automatically.
Continuous Discovery
Product: A product development practice where teams maintain an ongoing rhythm of customer conversations and signal collection — rather than discrete research sprints. AI agents and always-on integrations make continuous discovery tractable for small teams at scale.
Customer Evidence
Product: Quotes, interview excerpts, support tickets, survey responses, or usage data that support a product decision. One of the six dimensions scored by Canvas Eval. Strong customer evidence separates conviction built on real signal from conviction built on assumption.
E
Embeddings
AI: A technique that transforms text into numerical vectors capturing semantic meaning. This lets AI systems find related content even when exact words don't match — so a query about 'canceling a subscription' can surface results about 'terminating an account.' Embeddings are the engine behind semantic search and RAG. Understanding them helps PMs diagnose why search results feel off, or why the right documents aren't surfacing in an AI feature.
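The similarity math itself is simple: two texts are related when their vectors point in a similar direction. A toy sketch with made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented vectors for illustration: related phrases end up close together.
cancel_subscription = [0.9, 0.1, 0.2]
terminate_account = [0.85, 0.15, 0.25]
pizza_recipe = [0.05, 0.9, 0.1]
```

Here `cancel_subscription` scores much closer to `terminate_account` than to `pizza_recipe`, which is exactly how semantic search ranks results despite zero keyword overlap.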
Evaluation (Evals)
AI: A systematic process for measuring how well an AI model or agent performs on a defined task. Evals replace intuition with measurement — critical for shipping AI features confidently. Good PMs define evals before they start building, not after something breaks in production.
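The core of an eval harness is small: run the system over labeled cases and measure the pass rate. A minimal sketch (the case format here is an assumption, not a standard):

```python
def run_eval(system_under_test, cases):
    """Score a model or agent function against labeled cases.

    `cases` is a list of (input, check) pairs, where `check` is a
    predicate over the output; returns the fraction of cases passed.
    """
    passed = sum(1 for inp, check in cases if check(system_under_test(inp)))
    return passed / len(cases)
```

A PM-defined eval might check that summaries stay under a length limit or always cite a source; the predicate is where the acceptance criterion gets encoded.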
F
Fine-Tuning
AI: Additional training applied to a pre-trained foundation model on a smaller, task-specific dataset. Fine-tuning improves performance on narrow tasks but requires labeled data and compute. Most product teams achieve their goals with prompt engineering or RAG before needing fine-tuning.
Foundation Model
AI: A large-scale AI model trained on broad data that can be adapted or fine-tuned for specific tasks. Foundation models are the base layer beneath most AI applications. Understanding the capability differences between providers matters when evaluating AI product vendors.
Full Stack Builder
Product: A PM who goes beyond writing requirements and works directly with AI systems — experimenting with prompts, calling APIs, and prototyping features hands-on. Full stack builders shorten feedback loops by validating ideas in hours rather than waiting for engineering cycles. The term reflects an emerging archetype in AI-native teams: product people who can explore feasibility directly and bring credible technical insight into planning conversations.
Function Calling
Agent: A structured method for language models to invoke external tools by outputting a function name and arguments in a defined format. The application then executes the function and returns the result to the model. Underpins most modern agent implementations.
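A minimal sketch of the application side of the pattern. The `search_tickets` tool and the exact JSON shape are invented for illustration; real providers define the format through their tools or functions API parameters, usually with JSON Schema argument specs:

```python
import json

# Hypothetical tool registry mapping names to callables.
TOOLS = {
    "search_tickets": lambda query: f"3 tickets matching {query!r}",
}

def dispatch(model_output: str) -> str:
    """Execute the function call a model emitted as structured JSON.

    The model outputs {"name": ..., "arguments": {...}}; the application
    runs the function and returns the result, which is then fed back
    into the model's context for the next reasoning step.
    """
    call = json.loads(model_output)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])
```

The key design point: the model never executes anything itself. It only emits a structured request, and the application decides whether and how to run it.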
G
Grounding
Agent: Connecting a model's outputs to real, verifiable data sources rather than relying purely on its training. Grounded responses cite specific customer feedback, product data, or documents — which is why Propane surfaces signal citations alongside insights.
GTM Plan
Product: Go-to-Market Plan. A strategy document outlining how a product or feature will reach its target customers — covering positioning, pricing, launch channels, and success metrics. GTM plans are a common artifact type in Propane canvases.
Guardrails
AI: Constraints placed on model behavior to prevent unsafe, off-brand, or incorrect outputs. Guardrails are how teams make probabilistic AI feel reliable to users — filtering harmful content, enforcing tone, blocking out-of-scope responses, and catching hallucinations before they surface. For PMs shipping user-facing AI features, guardrails are a launch requirement, not an afterthought.
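At their simplest, guardrails are checks wrapped around every response before it reaches the user. The blocklist and scope rules below are invented for illustration; production stacks layer moderation models, classifiers, and citation checks on top of simple filters like these:

```python
import re

# Illustrative rules: a banned-claim pattern and an allowed-topic list.
BLOCKED_PATTERNS = [re.compile(r"\bguaranteed returns\b", re.IGNORECASE)]
IN_SCOPE_TOPICS = ("product", "roadmap", "pricing")

def apply_guardrails(response: str, topic: str) -> str:
    """Gate a model response: enforce scope, then filter banned claims."""
    if topic not in IN_SCOPE_TOPICS:
        return "I can only help with product questions."
    if any(p.search(response) for p in BLOCKED_PATTERNS):
        return "I can't make that claim."
    return response
```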
H
Hallucination
AI: When a language model generates plausible-sounding but factually incorrect output. A critical failure mode for AI in product work. Grounding, retrieval, and structured evaluation patterns (like Canvas Eval) are defenses against hallucination in high-stakes documents.
Human-in-the-Loop
Agent: A design pattern where a human reviews or approves an agent's output before it proceeds to the next step. Critical for high-stakes actions — publishing, sending, or pushing to production. PMs need to decide where in the agentic loop to insert human checkpoints.
I
Inference
AI: The act of a model generating a response — the moment it runs and produces output for a user. Inference latency is one of the most common AI UX failure modes: a feature that works perfectly in testing can still feel broken if it takes too long to respond. PMs need to understand the tradeoff between model quality and inference speed when making model selection decisions.
Insight
Product: A synthesized understanding derived from multiple signals — an observation about customer behavior, need, or pain that has strategic or tactical implications. Insights are what research produces; signals are the raw material. Propane surfaces insights from aggregated signal.
Integrations
Propane: SaaS connections that continuously sync signals into Propane — powering company, people, and signal data. Examples include CRM, support tools, and product analytics platforms. Integrations run in the background, keeping your intelligence layer current.
J
Job-to-be-Done
Product: A framework for understanding what customers are trying to accomplish — the underlying goal they hire a product to help with. Jobs-to-be-done shift focus from features to outcomes, and are a useful lens for writing prompts that extract meaningful signal from customer feedback.
L
LLM
AI: Large Language Model. A neural network trained on large text datasets that can generate, summarize, translate, and reason about language. GPT-4, Claude, and Gemini are LLMs. Most AI-native product tools — including Propane — are built on top of LLM APIs.
LLM-as-Judge
AI: A pattern where one language model is used to evaluate the outputs of another. Because manual review doesn't scale, LLM-as-judge is the main practical method for running automated quality evals across large volumes of AI output. PMs building AI features use it to measure consistency, catch regressions, and validate that changes to prompts or models improve — not degrade — output quality.
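The pattern has two halves: a judge prompt that states the rubric, and a parser for the judge's verdict. This sketch stubs out the actual model call and assumes a 'SCORE: n' reply format, which is a convention chosen here, not a standard:

```python
import re

def build_judge_prompt(task, output, rubric):
    """Assemble a grading prompt for a second 'judge' model."""
    return (
        f"You are grading an AI output.\nTask: {task}\n"
        f"Output: {output}\nRubric: {rubric}\n"
        "Reply with 'SCORE: <1-5>' and one sentence of justification."
    )

def parse_score(judge_reply: str) -> int:
    """Extract the 1-5 score from the judge's reply."""
    match = re.search(r"SCORE:\s*([1-5])", judge_reply)
    if not match:
        raise ValueError("judge reply did not contain a score")
    return int(match.group(1))
```

In practice the judge prompt and parser get run over hundreds of outputs per change, turning subjective quality into a number a PM can track across releases.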
M
MCP (Model Context Protocol)
Agent: An open standard for connecting AI models to external data sources and tools. MCP defines how models discover, invoke, and receive results from integrations — making it easier to build interoperable agent systems. Propane Connectors are built on MCP.
Model Routing
Agent: Also called model orchestration. Logic that determines which AI model handles a given request, based on cost, speed, quality requirements, or task type. Not every query needs the most powerful model. PMs building AI features use model routing to balance output quality against per-query cost — routing complex, high-stakes tasks to more capable models while handling routine tasks with lighter, cheaper ones.
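A router can be as simple as a lookup keyed on task type and stakes. The model names, tiers, and prices below are invented for illustration:

```python
# Hypothetical model tiers; names and per-token prices are placeholders.
MODELS = {
    "light": {"name": "small-fast-model", "cost_per_1k_tokens": 0.0005},
    "premium": {"name": "frontier-model", "cost_per_1k_tokens": 0.01},
}

def route(task_type: str, stakes: str) -> str:
    """Pick a model tier from task type and stakes.

    Routine, low-stakes work goes to the cheap tier; complex or
    high-stakes tasks get the more capable (and pricier) model.
    """
    if stakes == "high" or task_type in ("planning", "analysis"):
        return MODELS["premium"]["name"]
    return MODELS["light"]["name"]
```

Real routers also consider latency targets and fall back between providers, but the shape is the same: a policy function sitting in front of the model call.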
Multi-Agent System
Agent: An architecture where multiple specialized AI agents collaborate to complete a task. One agent might orchestrate the workflow while others handle specific jobs — research, writing, code, evaluation. Propane uses this pattern internally to coordinate canvas actions.
O
Observability
Agent: The ability to monitor, trace, and understand what an AI agent did and why — not just whether it succeeded or failed, but the reasoning path that led there. In traditional software, logs tell you what happened. In agentic systems, observability tells you how the model decided to do it. For PMs, observability is a launch requirement for any agent running in production — without it, debugging failures is guesswork.
Opportunity Sizing
Product: Estimating the potential value of a market or feature investment — typically expressed as TAM, SAM, and SOM. Opportunity sizing grounds prioritization decisions in market reality, not just internal intuition. AI agents can accelerate early-stage sizing by synthesizing market signals.
Orchestrator Agent
Agent: The agent responsible for coordinating other agents in a multi-agent system. It receives a high-level goal, breaks it into subtasks, delegates to specialist agents, and assembles the results. PMs building with AI need to understand where orchestration happens to design useful human checkpoints.
P
Positioning
Product: How a product is framed relative to alternatives in a customer's mind — the combination of audience, problem, and differentiated value. Good positioning informs every layer of go-to-market, from copy to sales narrative. Competitive context in Canvas Eval directly surfaces positioning gaps.
PRD
Product: Product Requirements Document. A written specification that describes what a product or feature should do, why it exists, who it serves, and how success will be measured. In AI-native workflows, PRDs live in canvases — so the document and the intelligence behind it stay in one place.
Prioritization
Product: The process of deciding what to work on next, given limited time and resources. Common frameworks include RICE, ICE, and MoSCoW. AI-assisted prioritization can surface which customer signals most strongly support each candidate initiative.
Probabilistic Output
AI: AI systems don't guarantee the same output for the same input — they produce the most likely response given the context. This is the fundamental difference from traditional deterministic software, where input A always produces output B. The shift changes how PMs write acceptance criteria (you're testing distributions, not exact outputs), design error states, and communicate quality expectations to stakeholders.
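In practice, 'testing distributions' means sampling the feature many times and asserting a pass rate rather than an exact string. A minimal sketch:

```python
def passes_acceptance(sample_outputs, check, min_pass_rate=0.95):
    """Acceptance test for a probabilistic feature.

    Instead of asserting one exact output, run the feature many times
    and require that a quality check holds on at least `min_pass_rate`
    of the samples.
    """
    rate = sum(1 for out in sample_outputs if check(out)) / len(sample_outputs)
    return rate >= min_pass_rate
```

An acceptance criterion then reads 'at least 95% of generated summaries include a citation' instead of specifying the output verbatim.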
Prompt Engineering
AI: The practice of crafting model inputs — system prompts, examples, instructions — to reliably produce desired outputs. A core skill for PMs shipping AI features. The quality of a prompt directly affects the quality of agent behavior, output structure, and failure rate.
Prompt Injection
Agent: A security attack where malicious instructions are hidden inside content an agent processes — a document, email, webpage, or data feed — causing it to take unintended actions. As AI agents read more external content, prompt injection becomes a real attack surface. PMs designing agents that interact with user-supplied or external data need to include injection risk in their threat model from the start.
Push
Propane: Hands aligned canvas content off to coding agents such as Cursor or Lovable. Enforces a clear workflow principle: humans align first, then agents build. Push is the handoff point between product thinking and implementation.
R
RAG (Retrieval-Augmented Generation)
AI: An architecture that combines a retrieval step — fetching relevant documents or data — with generation, so the model answers based on retrieved content rather than training data alone. Most AI product tools use RAG under the hood to keep responses grounded in your specific data.
Reasoning Model
AI: A language model variant that shows its thinking process before producing a final answer — often called chain-of-thought or extended thinking. Reasoning models tend to perform better on complex, multi-step problems like planning, analysis, and ambiguous tradeoffs. Increasingly relevant to AI-native PM tooling.
Research Agents
Propane: Structured research workflows outside the canvas. Templates define configurations; instances produce summaries, insight reports, and signal citations. Research sessions are individual participant conversations. Research insights are the AI-generated analysis produced from them.
RICE
Product: A prioritization framework that scores work by Reach, Impact, Confidence, and Effort. RICE produces a numeric score that makes tradeoffs between initiatives explicit and comparable. Useful for grounding AI-assisted prioritization in a consistent method.
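The formula itself is one line. The scales noted in the comments follow the common convention (Impact on a 0.25-3 scale, Confidence as a fraction), though teams vary:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    Reach: people affected per period; Impact: e.g. 0.25-3 scale;
    Confidence: 0-1; Effort: person-months.
    """
    return (reach * impact * confidence) / effort
```

For example, an initiative reaching 1,000 users per quarter with impact 2, confidence 0.8, and 4 person-months of effort scores (1000 × 2 × 0.8) / 4 = 400.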
S
Semantic Search
AI: Search that matches on meaning rather than exact keywords. Powered by embeddings, it returns conceptually relevant results even when the user's phrasing differs from the indexed content. A key building block for AI-native product tools — including how Propane surfaces signals that are relevant to a query without requiring users to know the right search terms.
Settings
Propane: Account-level configuration — workspace profile, team members, prompt shortcuts, context categories, connectors, integrations, and privacy controls. The source of truth for how Propane is wired to your stack.
Signal
Product: A data point from a customer — a support ticket, NPS comment, interview excerpt, churn note, or product event — that reveals something meaningful about their experience or needs. Signals aggregate into insight; a single signal is evidence, not conclusion.
Signals
Propane: A dedicated view for exploring product signals. Surfaces categorized insights from connected sources. Queryable conversationally — so teams can interrogate their customer data without writing queries or waiting for a researcher.
Space
Propane: A container within an account that holds canvases. Each space has a name, optional description, and a canvas count. Canvases inside a space inherit its shared context — useful for organizing work by product area, team, or initiative.
System Prompt
AI: Instructions given to a model at the start of a conversation — before user input — that define its role, constraints, and behavior. System prompts are how AI products configure agent personas, enforce guardrails, and inject relevant context. Most of what you configure in Propane agent settings flows through system prompts.
T
Temperature
AI: A model setting that controls how much randomness is introduced when generating output. At 0, the model always picks the most probable next word — consistent but sometimes repetitive. Higher values introduce more variation, making responses feel more creative or human. PMs building AI features use temperature tuning to balance brand consistency with natural-feeling output. Small adjustments can meaningfully affect user trust.
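Mechanically, temperature rescales the model's next-token scores (logits) before they become probabilities. A sketch of the math; real samplers add top-p and other filters on top:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert next-token logits to probabilities at a given temperature.

    Lower temperature sharpens the distribution toward the top choice;
    higher temperature flattens it, adding variety. (At exactly 0,
    samplers take the argmax rather than dividing.)
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits [2.0, 1.0, 0.5], temperature 0.5 concentrates most of the probability on the first token, while temperature 2.0 spreads it far more evenly.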
Token
AI: The basic unit of text a language model processes — roughly three to four characters, or about three-quarters of a word in English. AI costs, context limits, and speed are all measured in tokens. Relevant to PMs building with AI APIs who need to manage latency and budget.
Tool Use
Agent: The ability of an AI model to call external functions or APIs — such as searching the web, querying a database, writing a file, or sending a message. Tool use is what transforms a language model into an agent capable of taking real-world actions.
U
Unit Economics (AI)
Product: A cost-benefit framework applied at the level of a single AI interaction. For PMs, this means understanding what each model call costs, what user value it produces, and whether that ratio holds as usage scales. High-quality models deliver better outputs but at higher per-query cost. Good AI PMs use unit economics to decide where premium intelligence is justified and where a simpler, cheaper approach is sufficient.
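The arithmetic is straightforward once per-token prices are known; the prices in the example below are placeholders, not any provider's actual rates:

```python
def cost_per_interaction(input_tokens, output_tokens,
                         input_price_per_1k, output_price_per_1k):
    """Cost of one model call given token counts and per-1k-token prices."""
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

def gross_margin(value_per_interaction, cost):
    """Fraction of per-interaction value retained after model cost."""
    return (value_per_interaction - cost) / value_per_interaction
```

For example, at $0.003 per 1k input tokens and $0.015 per 1k output tokens, a call with 2,000 input and 500 output tokens costs about $0.0135 — which only makes sense if the interaction is worth meaningfully more than that at scale.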
User Story
Product: A short, human-centered description of a feature from the user's perspective: 'As a [role], I want [action] so that [outcome].' User stories keep teams focused on behavior and value rather than technical implementation.