How OpenClaw Builds Prompt Context
1. Overall Architecture: Layered Context Assembly
OpenClaw builds a complete, layered context-engineering stack, best understood as a five-layer stacking model:
┌──────────────────────────────────────────────────────────┐
│ Layer 5: Plugin hook injection (dynamic extension)       │
│   before_prompt_build → prependContext/systemContext     │
├──────────────────────────────────────────────────────────┤
│ Layer 4: Session history (conversational memory)         │
│   SessionManager → AgentMessage[] + compaction summaries │
├──────────────────────────────────────────────────────────┤
│ Layer 3: Workspace bootstrap files (static knowledge)    │
│   AGENTS.md / SOUL.md / TOOLS.md / USER.md / MEMORY.md   │
├──────────────────────────────────────────────────────────┤
│ Layer 2: Skills (on-demand capability extension)         │
│   SKILL.md metadata list → lazy-loaded at runtime        │
├──────────────────────────────────────────────────────────┤
│ Layer 1: System prompt base (structured instructions)    │
│   Tool list + safety + workspace + time + runtime meta   │
└──────────────────────────────────────────────────────────┘
1.1 Architecture highlights
- Decoupled design: each layer has a clear responsibility and can be replaced independently
- Global token budgeting: budgets at every level, from per-file truncation to a total injection cap, keep the context from blowing up
- Pluggable extension: the Context Engine interface and the Plugin Hook system provide open extension points
2. Structured Construction of the System Prompt
Core file: src/agents/system-prompt.ts
2.1 The three prompt modes
type PromptMode = "full" | "minimal" | "none"
| Mode | Used by | Contents |
|---|---|---|
| full | main agent | every section (Skills, Memory, Self-Update, Model Aliases, User Identity, Reply Tags, Messaging, Heartbeats) |
| minimal | sub-agents | only Tooling, Safety, Workspace, Runtime, Sandbox, Time (saves ~40% of tokens) |
| none | bare identity | returns only the basic identity line |
Design goal: sub-agents carry far less context overhead than the main agent, which keeps multi-level agent nesting economical.
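As a rough illustration of how a prompt mode could gate which sections get assembled (the helper below is hypothetical, not the actual system-prompt.ts internals):

type PromptMode = "full" | "minimal" | "none";

// Hypothetical helper: picks section bodies by mode. "minimal" keeps only the
// cheap, always-needed sections; "none" returns just the identity line.
function selectSections(mode: PromptMode, sections: Record<string, string[]>): string[] {
  if (mode === "none") return ["You are a personal assistant running inside OpenClaw."];
  const minimalKeys = ["tooling", "safety", "workspace", "runtime", "sandbox", "time"];
  const keys = mode === "minimal" ? minimalKeys : Object.keys(sections);
  return keys.flatMap((key) => sections[key] ?? []);
}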
2.2 Core sections of the system prompt
const lines = [
"You are a personal assistant running inside OpenClaw.",
"",
"## Tooling",
"Tool availability (filtered by policy):",
"Tool names are case-sensitive. Call tools exactly as listed.",
toolLines.length > 0
? toolLines.join("\n")
: [
"Pi lists the standard tools above. This runtime enables:",
"- grep: search file contents for patterns",
"- find: find files by glob pattern",
"- ls: list directory contents",
"- apply_patch: apply multi-file patches",
`- ${execToolName}: run shell commands (supports background via yieldMs/background)`,
`- ${processToolName}: manage background exec sessions`,
"- browser: control OpenClaw's dedicated browser",
"- canvas: present/eval/snapshot the Canvas",
"- nodes: list/describe/notify/camera/screen on paired nodes",
"- cron: manage cron jobs and wake events (use for reminders; when scheduling a reminder, write the systemEvent text as something that will read like a reminder when it fires, and mention that it is a reminder depending on the time gap between setting and firing; include recent context in reminder text if appropriate)",
"- sessions_list: list sessions",
"- sessions_history: fetch session history",
"- sessions_send: send to another session",
"- subagents: list/steer/kill sub-agent runs",
'- session_status: show usage/time/model state and answer "what model are we using?"',
].join("\n"),
"TOOLS.md does not control tool availability; it is user guidance for how to use external tools.",
`For long waits, avoid rapid poll loops: use ${execToolName} with enough yieldMs or ${processToolName}(action=poll, timeout=<ms>).`,
"If a task is more complex or takes longer, spawn a sub-agent. Completion is push-based: it will auto-announce when done.",
...(acpHarnessSpawnAllowed
? [
'For requests like "do this in codex/claude code/gemini", treat it as ACP harness intent and call `sessions_spawn` with `runtime: "acp"`.',
'On Discord, default ACP harness requests to thread-bound persistent sessions (`thread: true`, `mode: "session"`) unless the user asks otherwise.',
"Set `agentId` explicitly unless `acp.defaultAgent` is configured, and do not route ACP harness requests through `subagents`/`agents_list` or local PTY exec flows.",
'For ACP harness thread spawns, do not call `message` with `action=thread-create`; use `sessions_spawn` (`runtime: "acp"`, `thread: true`) as the single thread creation path.',
]
: []),
"Do not poll `subagents list` / `sessions_list` in a loop; only check status on-demand (for intervention, debugging, or when explicitly asked).",
"",
"## Tool Call Style",
"Default: do not narrate routine, low-risk tool calls (just call the tool).",
"Narrate only when it helps: multi-step work, complex/challenging problems, sensitive actions (e.g., deletions), or when the user explicitly asks.",
"Keep narration brief and value-dense; avoid repeating obvious steps.",
"Use plain human language for narration unless in a technical context.",
"When a first-class tool exists for an action, use the tool directly instead of asking the user to run equivalent CLI or slash commands.",
"When exec returns approval-pending, include the concrete /approve command from tool output (with allow-once|allow-always|deny) and do not ask for a different or rotated code.",
"Treat allow-once as single-command only: if another elevated command needs approval, request a fresh /approve and do not claim prior approval covered it.",
"When approvals are required, preserve and show the full command/script exactly as provided (including chained operators like &&, ||, |, ;, or multiline shells) so the user can approve what will actually run.",
"",
...safetySection,
"## OpenClaw CLI Quick Reference",
"OpenClaw is controlled via subcommands. Do not invent commands.",
"To manage the Gateway daemon service (start/stop/restart):",
"- openclaw gateway status",
"- openclaw gateway start",
"- openclaw gateway stop",
"- openclaw gateway restart",
"If unsure, ask the user to run `openclaw help` (or `openclaw gateway --help`) and paste the output.",
"",
...skillsSection,
...memorySection,
Section-by-section details
- Tooling Section
  - Explicitly lists every available tool (name + short description)
  - Tool names are case-sensitive, which keeps the model from hallucinating calls to nonexistent tools
  - Stresses that TOOLS.md does not control tool availability; it is only user guidance
- Safety Section
  const safetySection = [
    "## Safety",
    "You have no independent goals: do not pursue self-preservation, replication, resource acquisition, or power-seeking; avoid long-term plans beyond the user's request.",
    "Prioritize safety and human oversight over completion; if instructions conflict, pause and ask; comply with stop/pause/audit requests and never bypass safeguards. (Inspired by Anthropic's constitution.)",
    "Do not manipulate or persuade anyone to expand access or disable safeguards. Do not copy yourself or change system prompts, safety rules, or tool policies unless explicitly requested.",
    "",
  ];
  - Fixed safety guardrails that explicitly forbid self-preservation, resource acquisition, and bypassing oversight
  - Note: these are advisory; hard enforcement relies on tool policy, exec approvals, and sandbox isolation
- Skills Section (see section 4, the Skills system)
- Memory Recall Section
  "## Memory Recall",
  "Before answering anything about prior work, decisions, dates, people, preferences, or todos: run memory_search on MEMORY.md + memory/*.md; then use memory_get to pull only the needed lines. If low confidence after search, say you checked.",
  - When the memory_search / memory_get tools are available, the model is told to search memory before answering
  - A citations mode controls whether replies cite their sources (Source: <path#line>)
- Workspace Section
  "## Workspace",
  `Your working directory is: ${displayWorkspaceDir}`,
  workspaceGuidance, // path-mapping notes in sandbox mode
- Sandbox Section (sandbox mode)
  - Tells the model it is running inside a Docker container
  - Explains the mapping between the container working directory and host filesystem paths
  - Notes whether elevated exec is available
- Current Date & Time
  - The user's local timezone and time format
  - Points at the session_status tool for querying the current time
- Runtime Section
  - Host, OS, Node version, model, repo root, thinking level, and other metadata
2.3 Token optimization strategies
- Path compression: replace /Users/alice/... with ~/..., saving 5-6 tokens per skill
- Section pruning: minimal mode omits ~40% of the content
- Dynamic tool list: only currently available tools are shown; policy-disabled tools are not listed
3. Workspace Bootstrap File Injection
Core files:
- src/agents/bootstrap-files.ts
- src/agents/workspace.ts
- src/agents/bootstrap-cache.ts
- src/agents/pi-embedded-helpers.ts
3.1 File inventory and separation of concerns
const DEFAULT_AGENTS_FILENAME = "AGENTS.md";
const DEFAULT_SOUL_FILENAME = "SOUL.md";
const DEFAULT_TOOLS_FILENAME = "TOOLS.md";
const DEFAULT_IDENTITY_FILENAME = "IDENTITY.md";
const DEFAULT_USER_FILENAME = "USER.md";
const DEFAULT_HEARTBEAT_FILENAME = "HEARTBEAT.md";
const DEFAULT_BOOTSTRAP_FILENAME = "BOOTSTRAP.md";
const DEFAULT_MEMORY_FILENAME = "MEMORY.md";
const DEFAULT_MEMORY_ALT_FILENAME = "memory.md";
| File | Role | Typical content |
|---|---|---|
| AGENTS.md | agent behavior rules, repo guide | git conventions, PR flow, auto-close labels, code style |
| SOUL.md | persona and tone | personality, reply style, emoji usage rules |
| TOOLS.md | external tool usage guide | API key locations, tool-specific parameter notes |
| IDENTITY.md | identity | agent name, version |
| USER.md | user preferences/info | user name, preferred language, working timezone |
| HEARTBEAT.md | instructions for scheduled heartbeats | hourly mail check, daily backup, etc. |
| BOOTSTRAP.md | first-run initialization | welcome message for a new workspace (first run only) |
| MEMORY.md/memory.md | long-term memory snapshot | project decisions, important dates, todos |
Design goal: one file per concern, which keeps them maintainable and selectively injectable.
3.2 Token budget control
const DEFAULT_BOOTSTRAP_MAX_CHARS = 20_000; // per-file cap
const DEFAULT_BOOTSTRAP_TOTAL_MAX_CHARS = 150_000; // total injection cap
Truncation strategy:
function buildBootstrapContextFiles(
bootstrapFiles: WorkspaceBootstrapFile[],
opts: {
maxChars: number;
totalMaxChars: number;
warn?: (message: string) => void;
}
): EmbeddedContextFile[]
- Per-file truncation: content beyond maxChars is cut, with a [truncated] marker appended
- Total cap: when all files together exceed totalMaxChars, lower-priority files are trimmed
- Truncation warning: the system prompt shows a ⚠ Bootstrap truncation warning
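A minimal sketch of this two-level budget, assuming the real buildBootstrapContextFiles in src/agents/bootstrap-files.ts behaves along these lines:

type BootstrapFile = { path: string; content: string };

function truncateBootstrap(files: BootstrapFile[], maxChars: number, totalMaxChars: number): BootstrapFile[] {
  let remaining = totalMaxChars;
  const out: BootstrapFile[] = [];
  for (const file of files) {
    const budget = Math.min(maxChars, remaining);
    if (budget <= 0) break; // total budget exhausted: drop lower-priority files
    const content =
      file.content.length > budget ? file.content.slice(0, budget) + "\n[truncated]" : file.content;
    out.push({ path: file.path, content });
    remaining -= content.length;
  }
  return out;
}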
3.3 Session-level caching
import { loadWorkspaceBootstrapFiles, type WorkspaceBootstrapFile } from "./workspace.js";
const cache = new Map<string, WorkspaceBootstrapFile[]>();
export async function getOrLoadBootstrapFiles(params: {
workspaceDir: string;
sessionKey: string;
}): Promise<WorkspaceBootstrapFile[]> {
const existing = cache.get(params.sessionKey);
if (existing) {
return existing;
}
const files = await loadWorkspaceBootstrapFiles(params.workspaceDir);
cache.set(params.sessionKey, files);
return files;
}
Effect: no repeated disk reads within a session, lowering I/O overhead.
3.4 Injecting extra files via hooks
The bundled agent:bootstrap hook (src/hooks/bundled/bootstrap-extra-files):
// Example: append AGENTS.md files from multiple directories in a monorepo
const extraFiles = await resolveExtraBootstrapFiles({
patterns: ["packages/*/AGENTS.md", "services/*/AGENTS.md"],
workspaceDir
});
- Nothing on disk changes; only the in-memory injection set is modified
- Supports glob pattern matching
3.5 Context mode (lightweight runs)
type BootstrapContextMode = "full" | "lightweight"
type BootstrapContextRunKind = "default" | "heartbeat" | "cron"
| Mode | Use | Injected content |
|---|---|---|
| full + default | regular runs | all bootstrap files |
| lightweight + heartbeat | heartbeat tasks | HEARTBEAT.md only |
| lightweight + cron | cron tasks | none (no bootstrap) |
Design goal: keep the context overhead of scheduled runs low.
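A hypothetical selector matching the table above (the function name is illustrative):

type BootstrapContextMode = "full" | "lightweight";
type BootstrapContextRunKind = "default" | "heartbeat" | "cron";

function bootstrapFilesFor(mode: BootstrapContextMode, runKind: BootstrapContextRunKind, all: string[]): string[] {
  if (mode === "full") return all;                                          // regular runs: inject everything
  if (runKind === "heartbeat") return all.filter((f) => f.endsWith("HEARTBEAT.md"));
  return [];                                                                // lightweight cron: no bootstrap at all
}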
4. The Skills System: Lazily Loaded Capability Extensions
Core file: src/agents/skills/workspace.ts
4.1 Skill discovery and merging
Priority (lowest to highest):
extra (plugins/extraDirs)
< bundled (built-in)
< managed (~/.openclaw/skills)
< agents-skills-personal (~/.agents/skills)
< agents-skills-project (workspace/.agents/skills)
< workspace (workspace/skills)
Merge logic:
const merged = new Map<string, Skill>();
for (const skill of extraSkills) merged.set(skill.name, skill);
for (const skill of bundledSkills) merged.set(skill.name, skill);
for (const skill of managedSkills) merged.set(skill.name, skill);
for (const skill of personalAgentsSkills) merged.set(skill.name, skill);
for (const skill of projectAgentsSkills) merged.set(skill.name, skill);
for (const skill of workspaceSkills) merged.set(skill.name, skill);
- A later skill with the same name overrides an earlier one
- Workspace skills win, which enables per-project customization
4.2 The "lazy loading" design inside the system prompt
function buildSkillsSection(params: { skillsPrompt?: string; readToolName: string }) {
const trimmed = params.skillsPrompt?.trim();
if (!trimmed) {
return [];
}
return [
"## Skills (mandatory)",
"Before replying: scan <available_skills> <description> entries.",
`- If exactly one skill clearly applies: read its SKILL.md at <location> with \`${params.readToolName}\`, then follow it.`,
"- If multiple could apply: choose the most specific one, then read/follow it.",
"- If none clearly apply: do not read any SKILL.md.",
"Constraints: never read more than one skill up front; only read after selecting.",
"- When a skill drives external API writes, assume rate limits: prefer fewer larger writes, avoid tight one-item loops, serialize bursts when possible, and respect 429/Retry-After.",
trimmed,
"",
];
}
Core mechanism:
- Metadata injection: the system prompt carries only skill names, descriptions, and paths
- Runtime loading: the model pulls in the full SKILL.md on demand via the read tool
- Single-load constraint: never read more than one skill up front
Example skill metadata:
<available_skills>
<skill>
<name>github</name>
<description>GitHub API operations: create/update issues, PRs, comments</description>
<location>~/.openclaw/skills/github/SKILL.md</location>
</skill>
<skill>
<name>frontend-design</name>
<description>React/Tailwind best practices and component design patterns</description>
<location>~/workspace/skills/frontend-design/SKILL.md</location>
</skill>
</available_skills>
4.3 Token budget limits
const DEFAULT_MAX_SKILLS_IN_PROMPT = 150;
const DEFAULT_MAX_SKILLS_PROMPT_CHARS = 30_000;
Limiting strategy:
function applySkillsPromptLimits(params: { skills: Skill[]; config?: OpenClawConfig }): {
skillsForPrompt: Skill[];
truncated: boolean;
truncatedReason: "count" | "chars" | null;
} {
const limits = resolveSkillsLimits(params.config);
const total = params.skills.length;
const byCount = params.skills.slice(0, Math.max(0, limits.maxSkillsInPrompt));
let skillsForPrompt = byCount;
let truncated = total > byCount.length;
let truncatedReason: "count" | "chars" | null = truncated ? "count" : null;
const fits = (skills: Skill[]): boolean => {
const block = formatSkillsForPrompt(skills);
return block.length <= limits.maxSkillsPromptChars;
};
if (!fits(skillsForPrompt)) {
// Binary search the largest prefix that fits in the char budget.
let lo = 0;
let hi = skillsForPrompt.length;
while (lo < hi) {
const mid = Math.ceil((lo + hi) / 2);
if (fits(skillsForPrompt.slice(0, mid))) {
lo = mid;
} else {
hi = mid - 1;
}
}
skillsForPrompt = skillsForPrompt.slice(0, lo);
truncated = true;
truncatedReason = "chars";
}
return { skillsForPrompt, truncated, truncatedReason };
}
- First cap by count (150 skills)
- Then cap by total characters (30,000 chars)
- A binary search finds the largest prefix that fits the character budget
4.4 Path compression
function compactSkillPaths(skills: Skill[]): Skill[] {
const home = os.homedir();
if (!home) return skills;
const prefix = home.endsWith(path.sep) ? home : home + path.sep;
return skills.map((s) => ({
...s,
filePath: s.filePath.startsWith(prefix) ? "~/" + s.filePath.slice(prefix.length) : s.filePath,
}));
}
Effect:
- /Users/alice/.openclaw/skills/github/SKILL.md → ~/.openclaw/skills/github/SKILL.md
- ~5-6 tokens saved per skill
- ~900 tokens saved across 150 skills
4.5 Skill snapshots
export type SkillSnapshot = {
prompt: string; // pre-rendered skills prompt
skills: Array<{
name: string;
primaryEnv?: string[];
requiredEnv?: string[];
}>;
skillFilter?: string[];
resolvedSkills?: boolean;
version?: number;
};
- A snapshot is generated when the session starts
- Avoids rescanning the disk on every agent call
- Supports skill filters (enable only a subset of skills)
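A minimal sketch of the snapshot reuse, using the SkillSnapshot type above and a hypothetical buildSnapshot callback:

const snapshots = new Map<string, SkillSnapshot>();

function getSnapshot(sessionKey: string, buildSnapshot: () => SkillSnapshot): SkillSnapshot {
  let snap = snapshots.get(sessionKey);
  if (!snap) {
    snap = buildSnapshot(); // scan disk and render the skills prompt once per session
    snapshots.set(sessionKey, snap);
  }
  return snap; // later agent calls reuse the cached prompt
}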
5. Session Memory and Context Window Management
5.1 The pluggable Context Engine interface
Core file: src/context-engine/types.ts
export interface ContextEngine {
/** Engine identifier and metadata */
readonly info: ContextEngineInfo;
bootstrap?(params: { sessionId: string; sessionFile: string }): Promise<BootstrapResult>;
ingest(params: {
sessionId: string;
message: AgentMessage;
isHeartbeat?: boolean;
}): Promise<IngestResult>;
ingestBatch?(params: {
sessionId: string;
messages: AgentMessage[];
isHeartbeat?: boolean;
}): Promise<IngestBatchResult>;
afterTurn?(params: {
sessionId: string;
sessionFile: string;
messages: AgentMessage[];
prePromptMessageCount: number;
autoCompactionSummary?: string;
isHeartbeat?: boolean;
tokenBudget?: number;
runtimeContext?: ContextEngineRuntimeContext;
}): Promise<void>;
assemble(params: {
sessionId: string;
messages: AgentMessage[];
tokenBudget?: number;
}): Promise<AssembleResult>;
compact(params: {
sessionId: string;
sessionFile: string;
tokenBudget?: number;
force?: boolean;
currentTokenCount?: number;
compactionTarget?: "budget" | "threshold";
customInstructions?: string;
runtimeContext?: ContextEngineRuntimeContext;
}): Promise<CompactResult>;
prepareSubagentSpawn?(params: {
parentSessionKey: string;
childSessionKey: string;
ttlMs?: number;
}): Promise<SubagentSpawnPreparation | undefined>;
onSubagentEnded?(params: { childSessionKey: string; reason: SubagentEndReason }): Promise<void>;
dispose?(): Promise<void>;
}
Lifecycle methods:
- bootstrap: initialize engine state, optionally importing historical context
- ingest: ingest a single message into the engine's store
- ingestBatch: ingest a complete turn in one batch
- afterTurn: post-processing after a turn (persistence, compaction decisions)
- assemble: assemble the model context (within the token budget)
- compact: compress the context to reduce token usage
- prepareSubagentSpawn / onSubagentEnded: sub-agent lifecycle hooks
- dispose: release resources
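For orientation, a do-nothing engine against this interface might look as follows (a sketch with simplified result shapes; the real IngestResult/AssembleResult/CompactResult types live in src/context-engine/types.ts):

class PassthroughEngine {
  readonly info = { id: "passthrough", name: "Passthrough Engine", version: "0.0.1" };
  async ingest() {
    return { ingested: false }; // leave persistence to the existing session flow
  }
  async assemble(params: { sessionId: string; messages: unknown[]; tokenBudget?: number }) {
    return { messages: params.messages, estimatedTokens: 0 }; // pass messages through unchanged
  }
  async compact() {
    return { ok: true, compacted: false }; // never compacts
  }
}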
5.2 The default implementation: LegacyContextEngine
Core file: src/context-engine/legacy.ts
export class LegacyContextEngine implements ContextEngine {
readonly info: ContextEngineInfo = {
id: "legacy",
name: "Legacy Context Engine",
version: "1.0.0",
};
async ingest(_params: {
sessionId: string;
message: AgentMessage;
isHeartbeat?: boolean;
}): Promise<IngestResult> {
// No-op: SessionManager handles message persistence in the legacy flow
return { ingested: false };
}
async assemble(params: {
sessionId: string;
messages: AgentMessage[];
tokenBudget?: number;
}): Promise<AssembleResult> {
// Pass-through: the existing sanitize -> validate -> limit -> repair pipeline
// in attempt.ts handles context assembly for the legacy engine.
// We just return the messages as-is with a rough token estimate.
return {
messages: params.messages,
estimatedTokens: 0, // Caller handles estimation
};
}
async afterTurn(_params: {
sessionId: string;
sessionFile: string;
messages: AgentMessage[];
prePromptMessageCount: number;
autoCompactionSummary?: string;
isHeartbeat?: boolean;
tokenBudget?: number;
runtimeContext?: ContextEngineRuntimeContext;
}): Promise<void> {
// No-op: legacy flow persists context directly in SessionManager.
}
async compact(params: {
sessionId: string;
sessionFile: string;
tokenBudget?: number;
force?: boolean;
currentTokenCount?: number;
compactionTarget?: "budget" | "threshold";
customInstructions?: string;
runtimeContext?: ContextEngineRuntimeContext;
}): Promise<CompactResult> {
// Import through a dedicated runtime boundary so the lazy edge remains effective.
const { compactEmbeddedPiSessionDirect } =
await import("../agents/pi-embedded-runner/compact.runtime.js");
// runtimeContext carries the full CompactEmbeddedPiSessionParams fields
// set by the caller in run.ts. We spread them and override the fields
// that come from the ContextEngine compact() signature directly.
const runtimeContext = params.runtimeContext ?? {};
// eslint-disable-next-line @typescript-eslint/no-explicit-any -- bridge runtimeContext matches CompactEmbeddedPiSessionParams
const result = await compactEmbeddedPiSessionDirect({
...runtimeContext,
sessionId: params.sessionId,
sessionFile: params.sessionFile,
tokenBudget: params.tokenBudget,
force: params.force,
customInstructions: params.customInstructions,
workspaceDir: (runtimeContext.workspaceDir as string) ?? process.cwd(),
} as Parameters<typeof compactEmbeddedPiSessionDirect>[0]);
return {
ok: result.ok,
compacted: result.compacted,
reason: result.reason,
result: result.result
? {
summary: result.summary,
firstKeptEntryId: result.result.firstKeptEntryId,
tokensBefore: result.result.tokensBefore,
tokensAfter: result.result.tokensAfter,
details: result.result.details,
}
: undefined,
};
}
async dispose(): Promise<void> {
// Nothing to clean up for legacy engine
}
}
- Pass-through design: most methods are no-ops that delegate to pi-coding-agent's existing pipeline
- Compact: calls compactEmbeddedPiSessionDirect, which generates a summary and rewrites the session file
5.3 Compaction
Core file: src/agents/compaction.ts
5.3.1 Token estimation and safety margin
export const BASE_CHUNK_RATIO = 0.4;
export const MIN_CHUNK_RATIO = 0.15;
export const SAFETY_MARGIN = 1.2; // 20% buffer for estimateTokens() inaccuracy
const DEFAULT_SUMMARY_FALLBACK = "No prior history.";
const DEFAULT_PARTS = 2;
const MERGE_SUMMARIES_INSTRUCTIONS = [
"Merge these partial summaries into a single cohesive summary.",
"",
"MUST PRESERVE:",
"- Active tasks and their current status (in-progress, blocked, pending)",
"- Batch operation progress (e.g., '5/17 items completed')",
"- The last thing the user requested and what was being done about it",
"- Decisions made and their rationale",
"- TODOs, open questions, and constraints",
"- Any commitments or follow-ups promised",
"",
"PRIORITIZE recent context over older history. The agent needs to know",
"what it was doing, not just what was discussed.",
].join("\n");
const IDENTIFIER_PRESERVATION_INSTRUCTIONS =
"Preserve all opaque identifiers exactly as written (no shortening or reconstruction), " +
"including UUIDs, hashes, IDs, tokens, API keys, hostnames, IPs, ports, URLs, and file names.";
- SAFETY_MARGIN = 1.2: the chars/4 heuristic undercounts tokens, so a 20% buffer is added
- Summary constraints: the instructions explicitly require preserving active tasks, batch progress, the last user request, decisions, TODOs, and promised follow-ups
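The interplay of the heuristic and the margin, as a small sketch (helper names are illustrative):

export const SAFETY_MARGIN = 1.2;

// chars/4 is the usual rough token heuristic; it tends to undercount.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// A budget check that absorbs the estimator's inaccuracy with the 20% buffer.
function fitsBudget(text: string, budgetTokens: number): boolean {
  return estimateTokens(text) * SAFETY_MARGIN <= budgetTokens;
}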
5.3.2 Chunking strategy
export function splitMessagesByTokenShare(
messages: AgentMessage[],
parts = DEFAULT_PARTS,
): AgentMessage[][] {
if (messages.length === 0) {
return [];
}
const normalizedParts = normalizeParts(parts, messages.length);
if (normalizedParts <= 1) {
return [messages];
}
const totalTokens = estimateMessagesTokens(messages);
const targetTokens = totalTokens / normalizedParts;
const chunks: AgentMessage[][] = [];
let current: AgentMessage[] = [];
let currentTokens = 0;
for (const message of messages) {
const messageTokens = estimateCompactionMessageTokens(message);
if (
chunks.length < normalizedParts - 1 &&
current.length > 0 &&
currentTokens + messageTokens > targetTokens
) {
chunks.push(current);
current = [];
currentTokens = 0;
}
current.push(message);
currentTokens += messageTokens;
}
if (current.length > 0) {
chunks.push(current);
}
return chunks;
}
- Splits messages into N chunks by token share
- Each chunk targets roughly totalTokens / parts
5.3.3 Staged summarization
export async function summarizeInStages(params: {
messages: AgentMessage[];
model: NonNullable<ExtensionContext["model"]>;
apiKey: string;
signal: AbortSignal;
reserveTokens: number;
maxChunkTokens: number;
contextWindow: number;
customInstructions?: string;
summarizationInstructions?: CompactionSummarizationInstructions;
previousSummary?: string;
parts?: number;
minMessagesForSplit?: number;
}): Promise<string> {
const { messages } = params;
if (messages.length === 0) {
return params.previousSummary ?? DEFAULT_SUMMARY_FALLBACK;
}
const minMessagesForSplit = Math.max(2, params.minMessagesForSplit ?? 4);
const parts = normalizeParts(params.parts ?? DEFAULT_PARTS, messages.length);
const totalTokens = estimateMessagesTokens(messages);
if (parts <= 1 || messages.length < minMessagesForSplit || totalTokens <= params.maxChunkTokens) {
return summarizeWithFallback(params);
}
const splits = splitMessagesByTokenShare(messages, parts).filter((chunk) => chunk.length > 0);
if (splits.length <= 1) {
return summarizeWithFallback(params);
}
const partialSummaries: string[] = [];
for (const chunk of splits) {
partialSummaries.push(
await summarizeWithFallback({
...params,
messages: chunk,
previousSummary: undefined,
}),
);
}
if (partialSummaries.length === 1) {
return partialSummaries[0];
}
const summaryMessages: AgentMessage[] = partialSummaries.map((summary) => ({
role: "user",
content: summary,
timestamp: Date.now(),
}));
const custom = params.customInstructions?.trim();
const mergeInstructions = custom
? `${MERGE_SUMMARIES_INSTRUCTIONS}\n\n${custom}`
: MERGE_SUMMARIES_INSTRUCTIONS;
return summarizeWithFallback({
...params,
messages: summaryMessages,
customInstructions: mergeInstructions,
});
}
Strategy:
- Summarize each chunk first (the loop awaits one chunk before starting the next)
- Then merge the partial summaries in a second summarization pass
- This keeps any single summarization call inside the context window
5.3.4 Adaptive chunk ratio
export function computeAdaptiveChunkRatio(messages: AgentMessage[], contextWindow: number): number {
if (messages.length === 0) {
return BASE_CHUNK_RATIO;
}
const totalTokens = estimateMessagesTokens(messages);
const avgTokens = totalTokens / messages.length;
// Apply safety margin to account for estimation inaccuracy
const safeAvgTokens = avgTokens * SAFETY_MARGIN;
const avgRatio = safeAvgTokens / contextWindow;
// If average message is > 10% of context, reduce chunk ratio
if (avgRatio > 0.1) {
const reduction = Math.min(avgRatio * 2, BASE_CHUNK_RATIO - MIN_CHUNK_RATIO);
return Math.max(MIN_CHUNK_RATIO, BASE_CHUNK_RATIO - reduction);
}
return BASE_CHUNK_RATIO;
}
- When the average message is large, the chunk ratio shrinks
- This keeps any single chunk from exceeding the model's context window
5.3.5 Generating the CompactionSummaryMessage
Compaction produces a special message type:
{
role: "compactionSummary",
summary: "The user asked to read README.md; the assistant did so and described the OpenClaw project. Session Startup rules: ... (quoted from AGENTS.md)",
tokensBefore: 45000,
tokensAfter: 300,
timestamp: 1710000100000,
firstKeptEntryId: "msg_xyz789",
details: {
readFiles: ["README.md", "package.json"],
modifiedFiles: ["src/config.ts"]
}
}
- Never sent to the LLM: used only for session management and token accounting
- Key workspace context: the "Session Startup" and "Red Lines" sections are extracted from AGENTS.md and appended to the summary
5.4 Context Pruning (in-memory, per-request trimming)
Core file: src/agents/pi-extensions/context-pruning/pruner.ts
5.4.1 Pruning configuration and levels
type ContextPruningConfig = {
  mode: "off" | "cache-ttl";
  keepLastAssistants: number; // keep the last N assistant responses
  softTrimRatio: number; // soft-trim trigger threshold (share of the token window)
  hardClearRatio: number; // hard-clear trigger threshold
  minPrunableToolChars: number; // minimum tool-result size eligible for pruning
  softTrim: {
    maxChars: number; // maximum characters to keep
    headChars: number; // characters kept from the head
    tailChars: number; // characters kept from the tail
  };
  hardClear: {
    enabled: boolean;
    placeholder: string; // replacement placeholder
  };
  tools: ContextPruningToolMatch[];
};
Pruning levels:
- Soft trim: truncate tool-result content (keep head + "..." + tail)
- Hard clear: replace old tool results entirely with a [cleared] placeholder
- No pruning: keep the full content
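A sketch of the head+tail soft trim (the real pruner also honors per-tool matching and minPrunableToolChars):

function softTrim(text: string, opts: { maxChars: number; headChars: number; tailChars: number }): string {
  if (text.length <= opts.maxChars) return text; // small results stay intact
  return text.slice(0, opts.headChars) + "\n...\n" + text.slice(text.length - opts.tailChars);
}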
5.4.2 Protecting everything before the first user message
// Bootstrap safety: never prune anything before the first user message. This protects initial
// "identity" reads (SOUL.md, USER.md, etc.) which typically happen before the first inbound user
// message exists in the session transcript.
const firstUserIndex = findFirstUserIndex(messages);
const pruneStartIndex = firstUserIndex === null ? messages.length : firstUserIndex;
const isToolPrunable = params.isToolPrunable ?? makeToolPrunablePredicate(settings.tools);
const totalCharsBefore = estimateContextChars(messages);
let totalChars = totalCharsBefore;
let ratio = totalChars / charWindow;
if (ratio < settings.softTrimRatio) {
return messages;
}
- Messages before the first user message (SOUL.md/USER.md reads, etc.) are never pruned
- This keeps agent identity initialization intact
5.4.3 Cache TTL mode
- Applies to: Anthropic prompt caching (or any provider with KV caching)
- Trigger: the time since the last cache touch exceeds the TTL (default 4.5 minutes)
- Effect: recent messages keep their full context, improving cache hit rates
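The trigger reduces to a simple staleness check (a sketch; the 4.5-minute default comes from the description above):

const CACHE_TTL_MS = 4.5 * 60 * 1000;

// Prune only once the provider's KV cache has likely gone cold, so warm-cache
// turns keep their full, cache-hitting prefix.
function cacheExpired(lastCacheTouchAtMs: number, nowMs = Date.now()): boolean {
  return nowMs - lastCacheTouchAtMs > CACHE_TTL_MS;
}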
5.5 Tool result size guardrails
Core file: src/agents/pi-embedded-runner/tool-result-context-guard.ts
// Keep a conservative input budget to absorb tokenizer variance and provider framing overhead.
const CONTEXT_INPUT_HEADROOM_RATIO = 0.75;
const SINGLE_TOOL_RESULT_CONTEXT_SHARE = 0.5;
export const CONTEXT_LIMIT_TRUNCATION_NOTICE = "[truncated: output exceeded context limit]";
const CONTEXT_LIMIT_TRUNCATION_SUFFIX = `\n${CONTEXT_LIMIT_TRUNCATION_NOTICE}`;
export const PREEMPTIVE_TOOL_RESULT_COMPACTION_PLACEHOLDER =
"[compacted: tool output removed to free context]";
Guard strategy:
- Single tool result cap: at most 50% of the context window
- Total budget: all tool results together stay within 75% of the context (25% headroom)
- On overflow: truncate with a notice, or replace older tool results with a placeholder
Result: no single tool output can ever blow up the model's context.
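How the two ratios translate into concrete budgets, sketched against a given context window (illustrative; the real guard works on token estimates):

const CONTEXT_INPUT_HEADROOM_RATIO = 0.75;
const SINGLE_TOOL_RESULT_CONTEXT_SHARE = 0.5;

function toolResultBudgets(contextWindowTokens: number) {
  return {
    // one tool result may use at most half the window
    singleResultMax: Math.floor(contextWindowTokens * SINGLE_TOOL_RESULT_CONTEXT_SHARE),
    // all input together stays within 75%, leaving 25% headroom
    totalInputMax: Math.floor(contextWindowTokens * CONTEXT_INPUT_HEADROOM_RATIO),
  };
}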
5.6 History pruning strategy
export function pruneHistoryForContextShare(params: {
messages: AgentMessage[];
maxContextTokens: number;
maxHistoryShare?: number;
parts?: number;
}): {
messages: AgentMessage[];
droppedMessagesList: AgentMessage[];
droppedChunks: number;
droppedMessages: number;
droppedTokens: number;
keptTokens: number;
budgetTokens: number;
} {
const maxHistoryShare = params.maxHistoryShare ?? 0.5;
const budgetTokens = Math.max(1, Math.floor(params.maxContextTokens * maxHistoryShare));
let keptMessages = params.messages;
const allDroppedMessages: AgentMessage[] = [];
let droppedChunks = 0;
let droppedMessages = 0;
let droppedTokens = 0;
const parts = normalizeParts(params.parts ?? DEFAULT_PARTS, keptMessages.length);
while (keptMessages.length > 0 && estimateMessagesTokens(keptMessages) > budgetTokens) {
const chunks = splitMessagesByTokenShare(keptMessages, parts);
if (chunks.length <= 1) {
break;
}
const [dropped, ...rest] = chunks;
const flatRest = rest.flat();
// After dropping a chunk, repair tool_use/tool_result pairing to handle
// orphaned tool_results (whose tool_use was in the dropped chunk).
// repairToolUseResultPairing drops orphaned tool_results, preventing
// "unexpected tool_use_id" errors from Anthropic's API.
const repairReport = repairToolUseResultPairing(flatRest);
const repairedKept = repairReport.messages;
// Track orphaned tool_results as dropped (they were in kept but their tool_use was dropped)
const orphanedCount = repairReport.droppedOrphanCount;
droppedChunks += 1;
droppedMessages += dropped.length + orphanedCount;
droppedTokens += estimateMessagesTokens(dropped);
allDroppedMessages.push(...dropped);
keptMessages = repairedKept;
}
return {
messages: keptMessages,
droppedMessagesList: allDroppedMessages,
droppedChunks,
droppedMessages,
droppedTokens,
keptTokens: estimateMessagesTokens(keptMessages),
budgetTokens,
};
}
Strategy:
- Drop the oldest history, chunk by chunk
- Repair tool_use/tool_result pairing (drop orphaned tool_results)
- Keep the most recent conversation (default budget: 50% of context tokens)
6. Long-Term Memory: A Hybrid Vector + Full-Text Index
Core file: src/memory/manager.ts
6.1 Hybrid search
async search(
query: string,
opts?: {
maxResults?: number;
minScore?: number;
sessionKey?: string;
},
): Promise<MemorySearchResult[]> {
void this.warmSession(opts?.sessionKey);
if (this.settings.sync.onSearch && (this.dirty || this.sessionsDirty)) {
void this.sync({ reason: "search" }).catch((err) => {
log.warn(`memory sync failed (search): ${String(err)}`);
});
}
const cleaned = query.trim();
if (!cleaned) {
return [];
}
const minScore = opts?.minScore ?? this.settings.query.minScore;
const maxResults = opts?.maxResults ?? this.settings.query.maxResults;
const hybrid = this.settings.query.hybrid;
const candidates = Math.min(
200,
Math.max(1, Math.floor(maxResults * hybrid.candidateMultiplier)),
);
// FTS-only mode: no embedding provider available
if (!this.provider) {
if (!this.fts.enabled || !this.fts.available) {
log.warn("memory search: no provider and FTS unavailable");
return [];
}
// Extract keywords for better FTS matching on conversational queries
// e.g., "that thing we discussed about the API" → ["discussed", "API"]
const keywords = extractKeywords(cleaned);
const searchTerms = keywords.length > 0 ? keywords : [cleaned];
// Search with each keyword and merge results
const resultSets = await Promise.all(
searchTerms.map((term) => this.searchKeyword(term, candidates).catch(() => [])),
);
// Merge and deduplicate results, keeping highest score for each chunk
const seenIds = new Map<string, (typeof resultSets)[0][0]>();
for (const results of resultSets) {
for (const result of results) {
const existing = seenIds.get(result.id);
if (!existing || result.score > existing.score) {
seenIds.set(result.id, result);
}
}
}
const merged = [...seenIds.values()]
.toSorted((a, b) => b.score - a.score)
.filter((entry) => entry.score >= minScore)
.slice(0, maxResults);
return merged;
}
// If FTS isn't available, hybrid mode cannot use keyword search; degrade to vector-only.
const keywordResults =
hybrid.enabled && this.fts.enabled && this.fts.available
? await this.searchKeyword(cleaned, candidates).catch(() => [])
: [];
const queryVec = await this.embedQueryWithTimeout(cleaned);
const hasVector = queryVec.some((v) => v !== 0);
const vectorResults = hasVector
? await this.searchVector(queryVec, candidates).catch(() => [])
: [];
if (!hybrid.enabled || !this.fts.enabled || !this.fts.available) {
return vectorResults.filter((entry) => entry.score >= minScore).slice(0, maxResults);
}
const merged = await this.mergeHybridResults({
vector: vectorResults,
keyword: keywordResults,
vectorWeight: hybrid.vectorWeight,
textWeight: hybrid.textWeight,
mmr: hybrid.mmr,
temporalDecay: hybrid.temporalDecay,
});
Retrieval modes:
- FTS-only (no embedding provider): keyword extraction + BM25 full-text search
- Vector-only (FTS unavailable): cosine-similarity vector search
- Hybrid (recommended): vector + BM25, merged by weight
6.2 Vector search (sqlite-vec)
private async searchVector(
queryVec: number[],
limit: number,
): Promise<Array<MemorySearchResult & { id: string }>> {
// This method should never be called without a provider
if (!this.provider) {
return [];
}
const results = await searchVector({
db: this.db,
vectorTable: VECTOR_TABLE,
providerModel: this.provider.model,
queryVec,
limit,
snippetMaxChars: SNIPPET_MAX_CHARS,
ensureVectorReady: async (dimensions) => await this.ensureVectorReady(dimensions),
sourceFilterVec: this.buildSourceFilter("c"),
sourceFilterChunks: this.buildSourceFilter(),
});
return results.map((entry) => entry as MemorySearchResult & { id: string });
}
Implementation details:
- Uses the sqlite-vec extension
- Cosine similarity: vec_distance_cosine(v.embedding, ?)
- Falls back to pure in-memory computation when sqlite-vec is unavailable
6.3 BM25 full-text search (FTS5)
private async searchKeyword(
query: string,
limit: number,
): Promise<Array<MemorySearchResult & { id: string; textScore: number }>> {
if (!this.fts.enabled || !this.fts.available) {
return [];
}
const sourceFilter = this.buildSourceFilter();
// In FTS-only mode (no provider), search all models; otherwise filter by current provider's model
const providerModel = this.provider?.model;
const results = await searchKeyword({
db: this.db,
ftsTable: FTS_TABLE,
providerModel,
query,
limit,
snippetMaxChars: SNIPPET_MAX_CHARS,
sourceFilter,
buildFtsQuery: (raw) => this.buildFtsQuery(raw),
bm25RankToScore,
});
return results.map((entry) => entry as MemorySearchResult & { id: string; textScore: number });
}
FTS query construction handles:
- Phrase queries ("exact match")
- Boolean operators (AND OR NOT)
- Prefix matching (prefix*)
6.4 Merging hybrid results
private mergeHybridResults(params: {
vector: Array<MemorySearchResult & { id: string }>;
keyword: Array<MemorySearchResult & { id: string; textScore: number }>;
vectorWeight: number;
textWeight: number;
mmr?: { enabled: boolean; lambda: number };
temporalDecay?: { enabled: boolean; halfLifeDays: number };
}): Promise<MemorySearchResult[]> {
return mergeHybridResults({
vector: params.vector.map((r) => ({
id: r.id,
path: r.path,
startLine: r.startLine,
endLine: r.endLine,
source: r.source,
snippet: r.snippet,
vectorScore: r.score,
})),
keyword: params.keyword.map((r) => ({
id: r.id,
path: r.path,
startLine: r.startLine,
endLine: r.endLine,
source: r.source,
snippet: r.snippet,
textScore: r.textScore,
})),
vectorWeight: params.vectorWeight,
textWeight: params.textWeight,
mmr: params.mmr,
temporalDecay: params.temporalDecay,
workspaceDir: this.workspaceDir,
}).then((entries) => entries.map((entry) => entry as MemorySearchResult));
}
Merge strategy:
finalScore = vectorScore * vectorWeight + textScore * textWeight
- Default weights: vectorWeight: 0.7, textWeight: 0.3
- Supports MMR (maximal marginal relevance) deduplication
- Supports temporal decay (recent memories score higher)
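The per-chunk scoring is just the weighted sum; a chunk found by only one retriever keeps that side's weighted score (sketch):

function hybridScore(vectorScore: number, textScore: number, vectorWeight = 0.7, textWeight = 0.3): number {
  return vectorScore * vectorWeight + textScore * textWeight;
}

// hybridScore(0.9, 0.2) === 0.69: a strong semantic hit with a weak keyword hit.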
6.5 MMR (maximal marginal relevance)
Core file: src/memory/mmr.ts
function computeMMRScore(relevance: number, maxSimilarity: number, lambda: number): number {
return lambda * relevance - (1 - lambda) * maxSimilarity;
}
Goal: balance relevance against diversity in the result set
- lambda = 1.0: pure relevance ranking (no deduplication)
- lambda = 0.5: relevance and diversity weighted equally
- lambda = 0.0: pure diversity ranking
Algorithm:
- Greedy selection: each round picks the candidate with the highest MMR score
- Similarity: the maximum (Jaccard) similarity to the results already selected
- Iterate until N results are chosen
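The greedy loop, sketched (similarity() stands in for the Jaccard similarity src/memory/mmr.ts uses):

type Candidate = { id: string; relevance: number };

// Greedy MMR: each round picks the candidate whose relevance, discounted by its
// similarity to what has already been selected, is highest.
function selectMMR(
  candidates: Candidate[],
  similarity: (a: string, b: string) => number,
  lambda: number,
  n: number,
): Candidate[] {
  const selected: Candidate[] = [];
  const pool = [...candidates];
  while (selected.length < n && pool.length > 0) {
    let bestIdx = 0;
    let bestScore = -Infinity;
    for (let i = 0; i < pool.length; i++) {
      const maxSim = selected.reduce((m, s) => Math.max(m, similarity(pool[i].id, s.id)), 0);
      const score = lambda * pool[i].relevance - (1 - lambda) * maxSim; // computeMMRScore
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    }
    selected.push(pool.splice(bestIdx, 1)[0]);
  }
  return selected;
}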
6.6 Temporal decay
Core file: src/memory/temporal-decay.ts
function computeTemporalDecay(
ageInDays: number,
halfLifeDays: number
): number {
return Math.pow(0.5, ageInDays / halfLifeDays);
}
Effect:
- halfLifeDays = 30: a 30-day-old memory's score decays to 50%
- halfLifeDays = 7: decays to 50% after just 7 days (more aggressive)
6.7 Query expansion
Core file: src/memory/query-expansion.ts
function extractKeywords(query: string): string[] {
  // Illustrative sketch; the real implementation lives in src/memory/query-expansion.ts.
  // e.g., "that thing we discussed about the API" → ["discussed", "API"]
  const STOPWORDS = new Set(["the", "a", "an", "we", "that", "thing", "about", "it", "is"]);
  return query.split(/\W+/).filter((w) => w.length > 2 && !STOPWORDS.has(w.toLowerCase()));
}
- Used in FTS-only mode to sharpen matching for conversational queries
- Extracts content words (nouns, verbs)
- Filters stopwords ("the", "a", "we", ...)
6.8 Memory sources
type MemorySource = "memory" | "sessions"
- memory: MEMORY.md + memory/*.md files
- sessions: session history (~/.openclaw/agents/<agentId>/sessions/*.jsonl)
Configurable via agents.defaults.memory.sources: ["memory", "sessions"]
6.9 Embedding provider support
Supported embedding providers:
- OpenAI: text-embedding-3-small / text-embedding-3-large
- Gemini: text-embedding-004
- Voyage AI: voyage-3 / voyage-3-lite
- Mistral: mistral-embed
- Ollama: local models (nomic-embed-text, etc.)
Automatic fallback:
- OpenAI unavailable → fall back to Gemini
- Remote unavailable → fall back to local Ollama
7. The Plugin Hook System: Open-Ended Context Injection
Core file: src/agents/pi-embedded-runner/run/attempt.ts
7.1 Hook types
type PluginHookBeforePromptBuildResult = {
  systemPrompt?: string; // replace the system prompt entirely
  prependContext?: string; // prepend to the user message
  prependSystemContext?: string; // prepend to the head of the system prompt
  appendSystemContext?: string; // append to the tail of the system prompt
}
| Field | Position | Use case | Provider-cache friendly |
|---|---|---|---|
| systemPrompt | full replacement | special agent modes | ✗ |
| prependSystemContext | head of system prompt | stable context (KV-cache friendly) | ✓ |
| appendSystemContext | tail of system prompt | supplementary instructions | ✓ |
| prependContext | before the user message | per-turn dynamic context | ✗ |
7.2 Hook execution order
export async function resolvePromptBuildHookResult(params: {
prompt: string;
messages: unknown[];
hookCtx: PluginHookAgentContext;
hookRunner?: PromptBuildHookRunner | null;
legacyBeforeAgentStartResult?: PluginHookBeforeAgentStartResult;
}): Promise<PluginHookBeforePromptBuildResult> {
const promptBuildResult = params.hookRunner?.hasHooks("before_prompt_build")
? await params.hookRunner
.runBeforePromptBuild(
{
prompt: params.prompt,
messages: params.messages,
},
params.hookCtx,
)
.catch((hookErr: unknown) => {
log.warn(`before_prompt_build hook failed: ${String(hookErr)}`);
return undefined;
})
: undefined;
const legacyResult =
params.legacyBeforeAgentStartResult ??
(params.hookRunner?.hasHooks("before_agent_start")
? await params.hookRunner
.runBeforeAgentStart(
{
prompt: params.prompt,
messages: params.messages,
},
params.hookCtx,
)
.catch((hookErr: unknown) => {
log.warn(
`before_agent_start hook (legacy prompt build path) failed: ${String(hookErr)}`,
);
return undefined;
})
: undefined);
return {
systemPrompt: promptBuildResult?.systemPrompt ?? legacyResult?.systemPrompt,
prependContext: joinPresentTextSegments([
promptBuildResult?.prependContext,
legacyResult?.prependContext,
]),
prependSystemContext: joinPresentTextSegments([
promptBuildResult?.prependSystemContext,
legacyResult?.prependSystemContext,
]),
appendSystemContext: joinPresentTextSegments([
promptBuildResult?.appendSystemContext,
legacyResult?.appendSystemContext,
]),
};
}
Priority:
- before_prompt_build (new)
- before_agent_start (legacy, kept for compatibility)
- Results from multiple hooks are joined with \n\n
7.3 Prompt assembly order
try {
const promptStartedAt = Date.now();
// Run before_prompt_build hooks to allow plugins to inject prompt context.
// Legacy compatibility: before_agent_start is also checked for context fields.
let effectivePrompt = params.prompt;
const hookCtx = {
agentId: hookAgentId,
sessionKey: params.sessionKey,
sessionId: params.sessionId,
workspaceDir: params.workspaceDir,
messageProvider: params.messageChannel ?? params.messageProvider ?? undefined,
trigger: params.trigger,
channelId: params.messageChannel ?? params.messageProvider ?? undefined,
};
const hookResult = await resolvePromptBuildHookResult({
prompt: params.prompt,
messages: activeSession.messages,
hookCtx,
hookRunner,
legacyBeforeAgentStartResult: params.legacyBeforeAgentStartResult,
});
{
if (hookResult?.prependContext) {
effectivePrompt = `${hookResult.prependContext}\n\n${params.prompt}`;
log.debug(
`hooks: prepended context to prompt (${hookResult.prependContext.length} chars)`,
);
}
const legacySystemPrompt =
typeof hookResult?.systemPrompt === "string" ? hookResult.systemPrompt.trim() : "";
if (legacySystemPrompt) {
applySystemPromptOverrideToSession(activeSession, legacySystemPrompt);
systemPromptText = legacySystemPrompt;
log.debug(`hooks: applied systemPrompt override (${legacySystemPrompt.length} chars)`);
}
const prependedOrAppendedSystemPrompt = composeSystemPromptWithHookContext({
baseSystemPrompt: systemPromptText,
prependSystemContext: hookResult?.prependSystemContext,
appendSystemContext: hookResult?.appendSystemContext,
});
if (prependedOrAppendedSystemPrompt) {
const prependSystemLen = hookResult?.prependSystemContext?.trim().length ?? 0;
const appendSystemLen = hookResult?.appendSystemContext?.trim().length ?? 0;
applySystemPromptOverrideToSession(activeSession, prependedOrAppendedSystemPrompt);
systemPromptText = prependedOrAppendedSystemPrompt;
log.debug(
`hooks: applied prependSystemContext/appendSystemContext (${prependSystemLen}+${appendSystemLen} chars)`,
);
}
}
Final order:
[prependSystemContext]
[baseSystemPrompt]
[appendSystemContext]
---
[prependContext]
[user message]
7.4 Example use cases
Scenario 1: injecting group-chat context
api.registerHook({
event: "before_prompt_build",
handler: async ({ prompt, messages }, ctx) => {
if (ctx.channelId !== "telegram") return {};
const groupContext = await fetchGroupMeta(ctx.sessionKey);
return {
appendSystemContext: `## Group Chat Context\nGroup: ${groupContext.name}\nMembers: ${groupContext.memberCount}`
};
}
});
Scenario 2: injecting API key status dynamically
api.registerHook({
event: "before_prompt_build",
handler: async () => {
const hasGithubToken = !!process.env.GITHUB_TOKEN;
return {
prependContext: hasGithubToken
? "Note: GitHub API is available."
: "Note: GitHub API is NOT configured."
};
}
});
8. The Full AgentMessage Type Structure
Source: the @mariozechner/pi-agent-core package
8.1 Main types (by role)
8.1.1 UserMessage
{
  role: "user"
  content: string | ContentBlock[] // text, or multimodal content blocks
  timestamp: number // Unix milliseconds
  provenance?: InputProvenance // message origin tracking
}
InputProvenance (origin tracking):
{
  kind: "external_user" | "inter_session" | "internal_system"
  originSessionId?: string // originating session ID
  sourceSessionKey?: string // source session key
  sourceChannel?: string // source channel (telegram/discord/slack, ...)
  sourceTool?: string // source tool name (e.g., sessions_send)
}
8.1.2 AssistantMessage
{
  role: "assistant"
  content: AssistantContentBlock[] // content blocks
  api: string // "openai-responses" | "google-ai" | "anthropic-messages", ...
  provider: string // "openai" | "anthropic" | "google", ...
  model: string // model ID, e.g. "gpt-4"
  usage: Usage // token usage statistics
  stopReason: StopReason // why generation stopped
  timestamp: number
  errorMessage?: string // error message (when stopReason is "error")
  phase?: "planning" | "answer" // OpenAI Responses API phase marker
}
AssistantContentBlock subtypes
type AssistantContentBlock =
  | TextContent // text block
  | ToolCall // tool call block
  | ThinkingContent // reasoning block (reasoning models)
  | ImageContent // image block
TextContent (text):
{
  type: "text"
  text: string
  textSignature?: string // OpenAI Responses API message ID signature
}
ToolCall (tool invocation):
{
  type: "toolCall" // or "toolUse", "functionCall"
  id: string // tool call ID, e.g. call_xxx or call_xxx|item_yyy
  name: string // tool name
  arguments: Record<string, unknown> // tool arguments (JSON object)
}
ThinkingContent (reasoning):
{
  type: "thinking"
  thinking: string // reasoning text
  thinkingSignature?: string // reasoning metadata (JSON-encoded)
}
- thinkingSignature example: {"type": "reasoning", "id": "rs_abc123", "summary": []}
Usage (token accounting)
{
  input: number // input tokens
  output: number // output tokens
  cacheRead: number // cache-read tokens (Anthropic prompt caching)
  cacheWrite: number // cache-write tokens
  totalTokens: number // total tokens
  cost: {
    input: number // input cost (USD)
    output: number // output cost
    cacheRead: number // cache-read cost
    cacheWrite: number // cache-write cost
    total: number // total cost
  }
}
StopReason (why generation stopped)
type StopReason =
  | "stop" // normal completion
  | "toolUse" // a tool call was issued
  | "error" // an error occurred
  | "maxTokens" // token limit reached
  | "contentFilter" // content filter triggered
8.1.3 ToolResultMessage
{
  role: "toolResult"
  toolCallId: string // the matching toolCall.id
  toolUseId?: string // Anthropic's tool_use_id (compatibility)
  toolName: string // tool name
  content: ToolResultContentBlock[] // result content blocks
  isError: boolean // whether this result is an error
  timestamp: number
  details?: unknown // extra metadata (e.g., truncation info)
}
ToolResultContentBlock:
type ToolResultContentBlock =
  | { type: "text"; text: string }
  | { type: "image"; data: string; mimeType: string } // Base64-encoded image
Example details field (truncation info):
{
  truncation: {
    truncated: true,
    outputLines: 100,
    content: "the full, untruncated content"
  }
}
Security constraint:
- A tool result's details field is never injected into the LLM prompt (guards against prompt injection)
- stripToolResultDetails() removes the field before compaction summaries are generated
8.1.4 CompactionSummaryMessage
{
  role: "compactionSummary"
  summary: string // summary text
  tokensBefore: number // token count before compaction
  tokensAfter?: number // token count after compaction
  timestamp: string | number
  firstKeptEntryId?: string // ID of the first retained message
  details?: unknown // extra metadata (e.g., file-operation records)
}
Purpose:
- When session history nears the token limit, old messages are summarized into this message type
- Never sent to the LLM; used only for session management and token accounting
8.2 Message lifecycle management
8.2.1 Ingest
interface ContextEngine {
ingest(params: {
sessionId: string;
message: AgentMessage;
isHeartbeat?: boolean; // whether this is a heartbeat message
}): Promise<IngestResult>;
}
- Every new message is written into the Context Engine via ingest()
- The isHeartbeat flag marks messages generated by scheduled runs, which may be handled specially
8.2.2 Assemble
interface ContextEngine {
assemble(params: {
sessionId: string;
messages: AgentMessage[];
tokenBudget?: number; // token budget
}): Promise<AssembleResult>;
}
type AssembleResult = {
messages: AgentMessage[]; // the ordered message list
estimatedTokens: number; // estimated total tokens
systemPromptAddition?: string; // extra system prompt text supplied by the Context Engine
}
8.2.3 Compact
interface ContextEngine {
compact(params: {
sessionId: string;
sessionFile: string;
tokenBudget?: number;
force?: boolean;
currentTokenCount?: number;
compactionTarget?: "budget" | "threshold";
customInstructions?: string;
runtimeContext?: ContextEngineRuntimeContext;
}): Promise<CompactResult>;
}
Compaction summary instructions: the MERGE_SUMMARIES_INSTRUCTIONS and IDENTIFIER_PRESERVATION_INSTRUCTIONS constants shown in section 5.3.1 (src/agents/compaction.ts).
8.3 Transformations in the message pipeline
stripToolResultDetails (security trim)
function stripToolResultDetails(messages: AgentMessage[]): AgentMessage[]
- Removes the ToolResultMessage.details field
- Called before compaction summaries, so sensitive info and injection payloads never leak into them
repairToolUseResultPairing (pairing repair)
function repairToolUseResultPairing(messages: AgentMessage[]): {
messages: AgentMessage[];
droppedOrphanCount: number;
}
- Repairs orphaned tool_results (whose matching tool_use was dropped)
- Called when compaction trims history, so the Anthropic API does not reject the request
dropThinkingBlocks (strip reasoning blocks)
function dropThinkingBlocks(messages: AgentMessage[]): AgentMessage[]
- Removes content blocks with type: "thinking"
- Some Copilot/Claude paths reject persisted reasoning blocks, so they are filtered before the request
pruneContextMessages (live pruning)
The context-pruning extension trims messages per request:
- Soft trim: truncate tool-result content (keep head and tail)
- Hard clear: replace old tool results with a placeholder
- Bootstrap protection: nothing before the first user message (SOUL.md reads, etc.) is pruned
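A sketch of the pairing repair over simplified message shapes (the real function operates on full AgentMessage values):

type MsgBlock = { type: "toolCall"; id: string } | { type: "text"; text: string };
type Msg =
  | { role: "assistant"; content: MsgBlock[] }
  | { role: "toolResult"; toolCallId: string }
  | { role: "user"; content: string };

function repairPairs(messages: Msg[]): { messages: Msg[]; droppedOrphanCount: number } {
  const callIds = new Set<string>();
  for (const m of messages) {
    if (m.role === "assistant") {
      for (const b of m.content) if (b.type === "toolCall") callIds.add(b.id);
    }
  }
  // Drop any toolResult whose matching toolCall fell out of the kept window.
  const kept = messages.filter((m) => m.role !== "toolResult" || callIds.has(m.toolCallId));
  return { messages: kept, droppedOrphanCount: messages.length - kept.length };
}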
9. A Complete Conversation Flow Example
9.1 A typical message sequence
[
// 1. user input
{
role: "user",
content: "Read the README.md file",
timestamp: 1710000000000,
provenance: {
kind: "external_user",
sourceChannel: "telegram",
sourceSessionKey: "agent:default:telegram:group:123:direct:456"
}
},
// 2. the assistant calls a tool
{
role: "assistant",
content: [
{ type: "text", text: "Let me read that file" },
{
type: "toolCall",
id: "call_abc123",
name: "read",
arguments: { target_file: "README.md" }
}
],
api: "openai-responses",
provider: "openai",
model: "gpt-4-turbo",
usage: {
input: 1200,
output: 45,
cacheRead: 0,
cacheWrite: 0,
totalTokens: 1245,
cost: { input: 0.012, output: 0.0014, total: 0.0134 }
},
stopReason: "toolUse",
timestamp: 1710000001000
},
// 3. the tool result returns
{
role: "toolResult",
toolCallId: "call_abc123",
toolName: "read",
content: [
{ type: "text", text: "# OpenClaw\n\nAI-powered assistant..." }
],
isError: false,
timestamp: 1710000002000,
details: {
truncation: {
truncated: false,
outputLines: 50
}
}
},
// 4. the assistant's final reply
{
role: "assistant",
content: [
{ type: "text", text: "This is OpenClaw's README file; it's a ..." }
],
api: "openai-responses",
provider: "openai",
model: "gpt-4-turbo",
usage: {
input: 2500,
output: 120,
cacheRead: 0,
cacheWrite: 0,
totalTokens: 2620,
cost: { input: 0.025, output: 0.0036, total: 0.0286 }
},
stopReason: "stop",
timestamp: 1710000003000
},
// (suppose the history later grows too long and triggers compaction)
// 5. compaction summary message (not sent to the LLM)
{
role: "compactionSummary",
summary: "The user asked to read README.md; the assistant did so and described OpenClaw's core features.",
tokensBefore: 45000,
tokensAfter: 300,
timestamp: 1710000100000,
firstKeptEntryId: "msg_xyz789",
details: {
readFiles: ["README.md"],
modifiedFiles: []
}
}
]
9.2 A reasoning-model example (with thinking blocks)
[
{
role: "user",
content: "Compute the 100th Fibonacci number",
timestamp: 1710000000000
},
{
role: "assistant",
content: [
{
type: "thinking",
thinking: "This needs dynamic programming or fast matrix exponentiation; naive recursion would time out...",
thinkingSignature: '{"type":"reasoning","id":"rs_001","summary":[]}'
},
{
type: "text",
text: "I'll compute this efficiently with fast matrix exponentiation:",
textSignature: '{"id":"msg_001","phase":"planning"}'
},
{
type: "toolCall",
id: "call_exec_001",
name: "exec",
arguments: {
cmd: "python3 -c 'import numpy as np; ...'"
}
}
],
api: "openai-responses",
provider: "openai",
model: "o1-preview",
usage: { input: 800, output: 1200, totalTokens: 2000, ... },
stopReason: "toolUse",
timestamp: 1710000001000,
phase: "planning"
}
]
10. Core Context-Engineering Design Principles, Summarized
10.1 Layered, decoupled architecture
┌──────────────────────────────────────────────────────────┐
│ Layer 5: Plugin hook injection (dynamic extension)       │
│   before_prompt_build → prependContext/systemContext     │
├──────────────────────────────────────────────────────────┤
│ Layer 4: Session history (conversational memory)         │
│   SessionManager → messages + compaction summaries       │
├──────────────────────────────────────────────────────────┤
│ Layer 3: Workspace bootstrap files (static knowledge)    │
│   AGENTS.md / SOUL.md / TOOLS.md / USER.md / MEMORY.md   │
├──────────────────────────────────────────────────────────┤
│ Layer 2: Skills (on-demand capability extension)         │
│   SKILL.md metadata list → lazy-loaded at runtime        │
├──────────────────────────────────────────────────────────┤
│ Layer 1: System prompt base (structured instructions)    │
│   Tool list + safety + workspace + time + runtime meta   │
└──────────────────────────────────────────────────────────┘
Clear responsibilities:
- Every layer is independently configurable
- Lower layers do not depend on upper layers
- Each layer can be enabled or disabled on demand
10.2 Fine-grained token budget management
Multi-level token budget controls:
| Level | Control point | Default | Purpose |
|---|---|---|---|
| Per-file cap | bootstrapMaxChars | 20,000 chars | keeps any single bootstrap file bounded |
| Total injection cap | bootstrapTotalMaxChars | 150,000 chars | caps total bootstrap injection |
| Skills count | maxSkillsInPrompt | 150 | caps the length of the skills metadata list |
| Skills chars | maxSkillsPromptChars | 30,000 chars | caps total skills characters |
| Tool result | SINGLE_TOOL_RESULT_CONTEXT_SHARE | 50% | a single tool result takes at most 50% of context |
| History share | maxHistoryShare | 50% | conversation history takes at most 50% of context |
| Compaction thresholds | softTrimRatio/hardClearRatio | configurable | context-pruning trigger thresholds |
Safety margin:
- Token estimation uses the chars/4 heuristic, but real usage can run higher
- All budget calculations multiply by SAFETY_MARGIN = 1.2 (a 20% buffer)
10.3 Progressive compression strategy
Three escalating levels:
Level 1: live tool-result trimming (per request)
↓
- a single tool result > 50% of context → truncated
- marker: [truncated: output exceeded context limit]
Level 2: context pruning (in memory, never written to disk)
↓
- soft trim: keep head and tail, cut the middle
- hard clear: replace old tool results with a placeholder
- trigger: contextUsage > softTrimRatio
Level 3: compaction summaries (persisted)
↓
- generates a CompactionSummaryMessage
- replaces the old messages and is written to disk
- trigger: tokenCount > threshold
Compression goals:
- Preserve key information: active tasks, batch progress, the last request, decision rationale, TODOs
- Prioritize recent context: recent > old
- Preserve identifiers exactly: UUIDs, hashes, API keys, file names must never be shortened or reconstructed
10.4 Lazy-loading optimizations
Skills lazy loading:
- The system prompt injects metadata only (name + description + path)
- Once the model selects a skill, it loads the full SKILL.md with the read tool
- Token math: up to 150 skills × ~200 chars = 30K chars of metadata, but each run loads only one full skill
Memory recalled on demand:
- MEMORY.md is not injected wholesale into the system prompt
- The memory_search tool retrieves relevant fragments semantically
- Returns at most maxResults = 5 entries by default
Path compression:
- /Users/alice/... becomes ~/...
- ~5-6 tokens saved per skill; ~900 tokens across 150 skills
10.5 Pluggable extensibility
Context Engine interface:
- Selected via the plugins.slots.contextEngine config key
- Default: LegacyContextEngine
- Plugins can replace every lifecycle hook: ingest / assemble / compact / afterTurn, and so on
Plugin Hook system:
- before_prompt_build: the single entry point for context injection
- Four injection styles: systemPrompt / prependSystemContext / appendSystemContext / prependContext
- Results from multiple plugins are merged automatically
Memory backend:
- Built-in: MemoryIndexManager (sqlite + sqlite-vec + FTS5)
- External: QmdMemoryManager (remote service)
- Automatic fallback: if the external backend is unavailable, degrade to the built-in one
10.6 Sub-agent cost control
Minimal prompt mode:
promptMode: "minimal" // used for sub-agents
Omitted sections:
- Skills
- Memory Recall
- OpenClaw Self-Update
- Model Aliases
- User Identity
- Reply Tags
- Messaging
- Silent Replies
- Heartbeats
Retained sections:
- Tooling
- Safety
- Workspace
- Sandbox
- Current Date & Time (when known)
- Runtime
Effect:
- The system prompt shrinks by ~50-70%
- Keeps multi-level agent nesting economical
10.7 Safety isolation principles
Fields that never reach the LLM:
- ToolResultMessage.details: may hold untruncated raw output or sensitive metadata
- CompactionSummaryMessage: session management only, not conversation content
- thinking blocks (in some cases): some providers reject persisted reasoning blocks
Protections:
- stripToolResultDetails() runs before compaction
- dropThinkingBlocks() runs before requests (when needed)
- Bootstrap files are confined to the workspace (isMemoryPath() checks)
10.8 Traceability by design
Message provenance:
provenance: {
  kind: "external_user" | "inter_session" | "internal_system",
  sourceChannel: "telegram",
  sourceSessionKey: "agent:default:telegram:...",
  sourceTool: "sessions_send"
}
Timestamps:
- Every message carries timestamp: number (Unix milliseconds)
- Used for temporal decay, history ordering, and compaction decisions
Message signatures:
- textSignature: OpenAI Responses API message ID
- thinkingSignature: reasoning step ID (rs_xxx)
- Enable tracking messages across turns
10.9 Provider compatibility
Adapts to multiple LLM providers:
- OpenAI: Realtime API, reasoning models, phase markers
- Anthropic: prompt caching, tool_use_id compatibility
- Google: handles the rule that a function call must be followed by a user turn
- Ollama: local models, num_ctx parameter injection
Unified abstraction:
- One AgentMessage type; providers are distinguished by the api / provider fields
- Runtime adaptation as needed (dropThinkingBlocks, repairToolUseResultPairing)
10.10 Hybrid retrieval strategy
Vector + BM25 hybrid search:
finalScore = vectorScore * vectorWeight + textScore * textWeight
Default weights:
- vectorWeight: 0.7: semantic understanding leads
- textWeight: 0.3: exact matching assists
Enhancements:
- MMR (maximal marginal relevance): removes redundant results, improves diversity
- Temporal decay: recent memories score higher
- Query expansion: keyword extraction strengthens FTS matching
11. Metrics and Performance
11.1 Token efficiency
Typical token distribution (GPT-4 as the example):
| Component | Est. tokens | Share |
|---|---|---|
| System prompt base | ~9,600 | 30% |
| Bootstrap files (7 files) | ~6,000 | 18% |
| Skills list (150 skills) | ~7,500 | 23% |
| Tool schemas (JSON) | ~8,000 | 25% |
| Conversation history (last 10 turns) | ~1,500 | 5% |
| Total | ~32,600 | 100% |
With all optimizations enabled:
| Component | Tokens after | Savings |
|---|---|---|
| System prompt base | ~9,600 | - |
| Bootstrap files (truncated) | ~5,000 | ↓ 17% |
| Skills list (lazy-loaded) | ~500 | ↓ 93% |
| Tool schemas | ~8,000 | - |
| Conversation history (pruned) | ~800 | ↓ 47% |
| Total | ~23,900 | ↓ 27% |
11.2 Retrieval performance
Memory search latency (MacBook Pro M1):
| Mode | Avg latency | Throughput |
|---|---|---|
| FTS-only | ~15ms | ~67 qps |
| Vector-only (sqlite-vec) | ~25ms | ~40 qps |
| Hybrid | ~35ms | ~28 qps |
| Vector-only (pure in-memory) | ~50ms | ~20 qps |
Index size (1,000 Markdown files, ~5MB total):
| Index type | Disk usage | Notes |
|---|---|---|
| FTS5 | ~2.5 MB | full-text index |
| sqlite-vec (768-dim) | ~15 MB | vector index |
| Embedding cache | ~3 MB | cached embeddings |
| Total | ~20.5 MB | database file |
11.3 Compaction efficiency
Compaction ratios:
| Scenario | Tokens before | Tokens after | Ratio |
|---|---|---|---|
| Plain-text chat (50 turns) | ~45,000 | ~800 | 98.2% |
| With tool calls (30 turns) | ~60,000 | ~1,200 | 98.0% |
| Heavy code reading (20 turns) | ~120,000 | ~2,500 | 97.9% |
Compaction time (gpt-3.5-turbo):
| History length | Summary generation | Total |
|---|---|---|
| 20 turns | ~3 s | ~5 s |
| 50 turns | ~8 s | ~12 s |
| 100 turns | ~15 s | ~25 s |
12. Best Practices
12.1 Organizing bootstrap files
Recommended layout:
workspace/
├── AGENTS.md # behavior rules, repo guide (must-read)
├── SOUL.md # persona and tone (optional)
├── TOOLS.md # external tool guide (as needed)
├── IDENTITY.md # identity (small file)
├── USER.md # user preferences (small file)
├── HEARTBEAT.md # scheduled-task instructions (optional)
├── MEMORY.md # long-term memory snapshot (as needed)
└── memory/ # categorized memory files (recommended)
    ├── 2024-01-15-project-setup.md
    ├── 2024-01-20-api-design.md
    └── 2024-02-01-deployment-notes.md
File size guidance:
- AGENTS.md: < 10KB (~2,500 tokens)
- SOUL.md: < 2KB (~500 tokens)
- TOOLS.md: < 5KB (~1,250 tokens)
- everything else: < 1KB each
12.2 Writing memory files
A good memory file:
# Architecture decision - 2024-01-15
## Context
The team decided on a pluggable Context Engine architecture.
## Decision
- Define ContextEngine as a TypeScript interface
- Default implementation: LegacyContextEngine
- Allow plugin replacement
## Rationale
- Flexibility: different scenarios can use different strategies
- Backward compatibility: the Legacy implementation preserves existing behavior
## Identifiers
- Interface file: `src/context-engine/types.ts`
- Default implementation: `src/context-engine/legacy.ts`
- Registry: `src/context-engine/registry.ts`
Key elements:
- A clear title that includes the date
- Structured content: context → decision → rationale
- Precise identifiers: file paths, function names, config keys
- Timestamps: they feed temporal-decay scoring
12.3 Organizing skills
Recommended directory layout:
~/.openclaw/skills/
├── github/
│   └── SKILL.md
├── database/
│   └── SKILL.md
└── testing/
    └── SKILL.md
SKILL.md front matter:
---
name: github
description: GitHub API operations and repository management
requires:
  env:
    - GITHUB_TOKEN
---
# GitHub Skill
Use this skill for GitHub API operations...
Guidelines:
- A single skill file: < 5KB
- Keep descriptions concise: < 50 words
- Group by functional domain (don't cram everything into one skill)
12.4 Tuning the configuration
A token-lean configuration:
agents:
  defaults:
    bootstrapMaxChars: 15000 # lower the per-file cap
    bootstrapTotalMaxChars: 100000 # lower the total injection cap
    contextPruning:
      mode: cache-ttl
      softTrimRatio: 0.6 # trim more aggressively
      hardClearRatio: 0.75
    compaction:
      mode: safeguard
      recentTurnsPreserve: 3 # keep the last 3 turns
    memory:
      query:
        maxResults: 3 # fewer retrieval results
        minScore: 0.4 # raise the relevance floor
        hybrid:
          enabled: true
          vectorWeight: 0.8 # favor semantic retrieval
          textWeight: 0.2
    skills:
      load:
        maxSkillsInPrompt: 100 # cap the number of skills
        maxSkillsPromptChars: 20000
A high-accuracy configuration (tolerating more tokens):
agents:
  defaults:
    bootstrapMaxChars: 30000 # relax the per-file cap
    bootstrapTotalMaxChars: 200000
    contextPruning:
      mode: off # disable pruning
    compaction:
      mode: safeguard
      recentTurnsPreserve: 10 # keep the last 10 turns
    memory:
      query:
        maxResults: 10 # more retrieval results
        minScore: 0.25 # lower the floor
        hybrid:
          enabled: true
          mmr:
            enabled: true # enable MMR deduplication
            lambda: 0.6
12.5 Using sub-agents
When to use a sub-agent:
- High task complexity (multi-step reasoning)
- Long-running tasks (keep the main agent unblocked)
- Context isolation (don't pollute the main session)
Sub-agent configuration:
await sessions_spawn({
  agentId: "default",
  runtime: "subagent",
  prompt: "Analyze this log file and produce a report",
  contextMode: "lightweight", // use the minimal prompt
  attachments: [
    { path: "logs/error.log", inline: false }
  ]
})
Caveats:
- Sub-agents run with promptMode: "minimal"; the system prompt shrinks by ~50%
- Parent and child contexts are isolated; the child does not inherit the parent's conversation history
- Check results via subagents list after completion, or wait for the automatic announcement
13. Future Directions
13.1 A Context Engine ecosystem
OpenClaw currently ships only the built-in LegacyContextEngine; future engines could include:
Scenario-specific Context Engines:
- CodeContextEngine: optimized for code repositories, with AST-aware trimming
- LongContextEngine: leans on 1M+-token models and minimizes compression
- MultimodalContextEngine: smart caching and referencing for images/audio
Community plugins:
- Swapped in via the plugins.slots.contextEngine config key
- Fully custom ingest / assemble / compact logic
13.2 Memory enhancements
Semantic routing:
- Automatically classify query types (factual vs. procedural)
- Dynamically adjust the retrieval strategy (FTS vs. vector weights)
Auto-tagging:
- New memories get tags automatically (project / api / deployment, ...)
- Retrieval filterable by tag
Versioning:
- Git integration for memory files
- Time travel ("what did our API design look like a month ago?")
13.3 Prompt improvements
Dynamic section toggling:
- Enable/disable system prompt sections by conversation topic
- e.g., a detected coding task → inject the code-style guide automatically
Adaptive compaction:
- Tune the compaction strategy to the model's context window
- e.g., compact more aggressively on a 128K-window model than on a 200K-window one
Multilingual support:
- Localized system prompt variants
- Auto-switch to the user's language
13.4 Performance work
Incremental indexing:
- Re-embed only added or changed files
- Cuts memory sync time
Parallel retrieval:
- Run vector and FTS searches concurrently
- Lowers hybrid-search latency
Cache warming:
- Preload common bootstrap files at session start
- Reduces first-run latency
14. Conclusion
OpenClaw's prompt context engineering is a systematic, production-oriented, extensible design covering the whole chain: system prompt construction, static knowledge injection, dynamic capability extension, conversation history management, long-term memory retrieval, and token budget control.
Highlights
- Layered architecture: five cleanly separated layers, each independently configurable and replaceable
- Fine-grained token management: multi-level budgets at the file, component, and history level
- Progressive compression: tool-result trimming → context pruning → compaction, in three escalating stages
- Lazy loading: skills metadata with on-demand full loads; memory recalled via semantic search
- Hybrid retrieval: vector + BM25 + MMR + temporal decay
- Pluggable extension: the Context Engine interface and Plugin Hook system allow full customization
- Safety isolation: sensitive fields never reach the LLM; bootstrap file paths are constrained
- Provider compatibility: one abstraction, adapted at runtime to each LLM's quirks
- Sub-agent economics: minimal prompt mode keeps multi-level nesting affordable
- Full traceability: provenance, timestamps, and signatures support end-to-end tracking
Design philosophy
- Efficient: no token is wasted; every byte earns its place
- Flexible: plugin-oriented design supports deep per-scenario customization
- Safe: layered defenses against leakage and injection attacks
- Maintainable: clean interface boundaries, easy to understand and extend
OpenClaw's context engineering offers a complete reference implementation for building production-grade AI agents, and a strong example of engineering discipline applied to LLM applications.