Brief summary

OpenClaw's memory uses Markdown files in the workspace as the single source of truth, with an optional vector/hybrid index for semantic search. When a session approaches auto-compaction, the system triggers a "memory flush" that reminds the model to persist important information, and CLI tools are provided for manual indexing and search.

**1. Memory file structure (Markdown)**

- `memory/YYYY-MM-DD.md`: daily log, append-only; today's and yesterday's entries are read at session start.
- `MEMORY.md`: long-term memory, loaded only in the main/private session (group contexts do not load it).
- These files live under the workspace (`agents.defaults.workspace`, default `~/.openclaw/workspace`).

**2. When to write memory**

- Decisions, preferences, and durable facts go to `MEMORY.md`; day-to-day notes go to that day's `memory/YYYY-MM-DD.md`.
- If someone says "remember this", write it to a file immediately rather than keeping it only in working memory.

**3. Automatic memory flush (pre-compaction)**

- When a session approaches auto-compaction, the system triggers a silent agentic turn reminding the model to write durable memory.
- Controlled by `agents.defaults.compaction.memoryFlush`; supports a soft threshold, two prompts, and one flush per compaction cycle. Skipped when a sandboxed workspace is read-only.

**4. Vector/hybrid search and indexing**

- Enabled by default: a vector index over `MEMORY.md` and `memory/**/*.md`, supporting OpenAI/Gemini/Voyage/local embeddings with automatic selection of an available provider.
- Hybrid search combines vector similarity with BM25 keyword relevance; weights are configurable (default 0.7/0.3).
- The index is stored as per-agent SQLite (`~/.openclaw/memory/agentId.sqlite`); file changes are watched and synced asynchronously, and changes to embedding/model/chunking parameters trigger a full rebuild.
- Tools: `memory_search` returns snippets with sources; `memory_get` reads a file by path.

**5. CLI management**

- `openclaw memory status` shows status; `--deep` probes vector and embedding availability; `--index` rebuilds the index when dirty.
- `openclaw memory index` rebuilds the index manually; `--verbose` emits detailed logs.
- `openclaw memory search <query>` runs a semantic search.
- `--agent <id>` scopes any of these to a single agent.

**6. Configuration highlights**

- Memory-search configuration lives under `agents.defaults.memorySearch` (provider, model, fallback, extraPaths, hybrid weights, caching, and more).
- Experimental: session memory search requires `experimental.sessionMemory` and `sources: [memory, sessions]`.
- Optional backends: QMD (installed separately) or the built-in SQLite vector acceleration (sqlite-vec).

**7. FAQ**

- Memory persists on disk until manually deleted; session context is still bounded by the model's context window, which is why memory search exists.
- If the model "forgets", explicitly ask it to write to `MEMORY.md` or the day's file, and confirm the Gateway uses the same workspace.

Notes:

- Memory files are the single source of truth; the model only "remembers" what is written to disk.
- The memory plugin can be disabled with `plugins.slots.memory: none`.
- Chinese-speaking users can refer to `docs/zh-CN/concepts/memory.md` and `docs/zh-CN/cli/memory.md` for localized documentation.

Citations

**File: docs/concepts/memory.md (L11-27)**

OpenClaw memory is **plain Markdown in the agent workspace**. The files are the source of truth; the model only remembers what gets written to disk.
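Because memory is just files, the two layers are easy to sketch on disk. The snippet below is illustrative only: it uses a temp directory in place of the real `~/.openclaw/workspace` default, and the note text is made up.

```shell
# Sketch of the default memory layout described above.
# A temp dir stands in for the real default, ~/.openclaw/workspace.
WORKSPACE="$(mktemp -d)"
mkdir -p "$WORKSPACE/memory"

# Daily log: append-only running notes for today.
echo "- 14:02 decided to ship the release on Friday" >> "$WORKSPACE/memory/$(date +%F).md"

# Curated long-term memory: durable facts and preferences.
echo "- User prefers concise answers" >> "$WORKSPACE/MEMORY.md"

ls "$WORKSPACE"
```

At session start, the agent would read `MEMORY.md` plus today's and yesterday's daily logs back into context.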
Memory search tools are provided by the active memory plugin (default: `memory-core`). Disable memory plugins with `plugins.slots.memory: none`.

## Memory files (Markdown)

The default workspace layout uses two memory layers:

- `memory/YYYY-MM-DD.md`
  - Daily log (append-only).
  - Read today + yesterday at session start.
- `MEMORY.md` (optional)
  - Curated long-term memory.
  - **Only load in the main, private session** (never in group contexts).

**File: docs/concepts/memory.md (L28-29)**

These files live under the workspace (`agents.defaults.workspace`, default `~/.openclaw/workspace`). See [Agent workspace](/concepts/agent-workspace) for the full layout.

**File: docs/concepts/memory.md (L33-34)**

- Decisions, preferences, and durable facts go to `MEMORY.md`.
- Day-to-day notes and running context go to `memory/YYYY-MM-DD.md`.

**File: docs/concepts/memory.md (L35-37)**

- If someone says "remember this", write it down (do not keep it in RAM).
- This area is still evolving. It helps to remind the model to store memories; it will know what to do.
- If you want something to stick, **ask the bot to write it** into memory.

**File: docs/concepts/memory.md (L39-45)**

## Automatic memory flush (pre-compaction ping)

When a session is **close to auto-compaction**, OpenClaw triggers a **silent, agentic turn** that reminds the model to write durable memory **before** the context is compacted.

The default prompts explicitly say the model _may reply_, but usually NO_REPLY is the correct response so the user never sees this turn.

**File: docs/concepts/memory.md (L46-75)**

This is controlled by `agents.defaults.compaction.memoryFlush`:

```json5
{
  agents: {
    defaults: {
      compaction: {
        reserveTokensFloor: 20000,
        memoryFlush: {
          enabled: true,
          softThresholdTokens: 4000,
          systemPrompt: "Session nearing compaction. Store durable memories now.",
          prompt: "Write any lasting notes to memory/YYYY-MM-DD.md; reply with NO_REPLY if nothing to store.",
        },
      },
    },
  },
}
```

Details:

- Soft threshold: flush triggers when the session token estimate crosses `contextWindow - reserveTokensFloor - softThresholdTokens`.
- Silent by default: prompts include NO_REPLY so nothing is delivered.
- Two prompts: a user prompt plus a system-prompt append carry the reminder.
- One flush per compaction cycle (tracked in `sessions.json`).
- Workspace must be writable: if the session runs sandboxed with `workspaceAccess: ro` or `none`, the flush is skipped.

**File: docs/concepts/memory.md (L79-96)**

## Vector memory search

OpenClaw can build a small vector index over `MEMORY.md` and `memory/*.md` so semantic queries can find related notes even when wording differs.

Defaults:

- Enabled by default.
- Watches memory files for changes (debounced).
- Configure memory search under `agents.defaults.memorySearch` (not top-level `memorySearch`).
- Uses remote embeddings by default. If `memorySearch.provider` is not set, OpenClaw auto-selects:
  1. `local` if a `memorySearch.local.modelPath` is configured and the file exists.
  2. `openai` if an OpenAI key can be resolved.
  3. `gemini` if a Gemini key can be resolved.
  4. `voyage` if a Voyage key can be resolved.
  5. Otherwise memory search stays disabled until configured.
- Local mode uses node-llama-cpp and may require `pnpm approve-builds`.

**File: docs/concepts/memory.md (L107-120)**

### QMD backend (experimental)

Set `memory.backend: qmd` to swap the built-in SQLite indexer for [QMD](https://github.com/tobi/qmd): a local-first search sidecar that combines BM25, vectors, and reranking. Markdown stays the source of truth; OpenClaw shells out to QMD for retrieval.

Key points:

**Prereqs**

- Disabled by default. Opt in per config (`memory.backend: qmd`).
- Install the QMD CLI separately (`bun install -g https://github.com/tobi/qmd` or grab a release) and make sure the `qmd` binary is on the gateway's PATH.
- QMD needs an SQLite build that allows extensions (`brew install sqlite` on macOS).

**File: docs/concepts/memory.md (L181-197)**

- `searchMode` (default `search`): pick which QMD command backs `memory_search` (`search`, `vsearch`, `query`).
- `includeDefaultMemory` (default `true`): auto-index `MEMORY.md` + `memory/**/*.md`.
- `paths[]`: add extra directories/files (`path`, optional `pattern`, optional stable `name`).
- `sessions`: opt into session JSONL indexing (`enabled`, `retentionDays`, `exportDir`).
- `update`: controls refresh cadence and maintenance execution (`interval`, `debounceMs`, `onBoot`, `waitForBootSync`, `embedInterval`, `commandTimeoutMs`, `updateTimeoutMs`, `embedTimeoutMs`).
- `limits`: clamp recall payload (`maxResults`, `maxSnippetChars`, `maxInjectedChars`, `timeoutMs`).
- `scope`: same schema as [session.sendPolicy](/gateway/configuration#session). Default is DM-only (deny all, allow direct chats); loosen it to surface QMD hits in groups/channels.
- When scope denies a search, OpenClaw logs a warning with the derived channel/chatType so empty results are easier to debug.

**File: docs/concepts/memory.md (L312-326)**

Batch indexing (OpenAI / Gemini / Voyage):

- Disabled by default. Set `agents.defaults.memorySearch.remote.batch.enabled: true` to enable it for large-corpus indexing (OpenAI, Gemini, and Voyage).
- Default behavior waits for batch completion; tune `remote.batch.wait`, `remote.batch.pollIntervalMs`, and `remote.batch.timeoutMinutes` if needed.
- Set `remote.batch.concurrency` to control how many batch jobs are submitted in parallel (default: 2).
- Batch mode applies when `memorySearch.provider` is `openai` or `gemini` and uses the corresponding API key.
- Gemini batch jobs use the async embeddings batch endpoint and require Gemini Batch API availability.

Why OpenAI batch is fast and cheap:

- For large backfills, OpenAI is typically the fastest option we support because we can submit many embedding requests in a single batch job and let OpenAI process them asynchronously.
- OpenAI offers discounted pricing for Batch API workloads, so large indexing runs are usually cheaper than sending the same requests synchronously.
- See the OpenAI Batch API docs and pricing for details:
  - [https://platform.openai.com/docs/api-reference/batch](https://platform.openai.com/docs/api-reference/batch)

**File: docs/concepts/memory.md (L347-351)**

Tools:

- `memory_search` — returns snippets with file + line ranges.
- `memory_get` — read memory file content by path.

**File: docs/concepts/memory.md (L354-360)**

- Set `agents.defaults.memorySearch.provider: local`.
- Provide `agents.defaults.memorySearch.local.modelPath` (GGUF or `hf:` URI).
- Optional: set `agents.defaults.memorySearch.fallback: none` to avoid remote fallback.

### How the memory tools work

- `memory_search` semantically searches Markdown chunks (~400-token target, 80-token overlap) from `MEMORY.md` + `memory/**/*.md`. It returns snippet text (capped at ~700 chars), file path, line range, score, provider/model, and whether we fell back from local → remote embeddings. No full file payload is returned.

**File: docs/concepts/memory.md (L364-370)**

### What gets indexed (and when)

- File type: Markdown only (`MEMORY.md`, `memory/**/*.md`).
- Index storage: per-agent SQLite at `~/.openclaw/memory/agentId.sqlite` (configurable via `agents.defaults.memorySearch.store.path`, supports an `{agentId}` token).
- Freshness: a watcher on `MEMORY.md` + `memory/` marks the index dirty (debounce 1.5s). Sync is scheduled on session start, on search, or on an interval, and runs asynchronously. Session transcripts use delta thresholds to trigger background sync.
- Reindex triggers: the index stores the embedding **provider/model, endpoint fingerprint, and chunking params**.
If any of those change, OpenClaw automatically resets and reindexes the entire store.

**File: docs/concepts/memory.md (L371-396)**

### Hybrid search (BM25 + vector)

When enabled, OpenClaw combines:

- **Vector similarity** (semantic match, wording can differ)
- **BM25 keyword relevance** (exact tokens like IDs, env vars, code symbols)

If full-text search is unavailable on your platform, OpenClaw falls back to vector-only search.

#### Why hybrid?

Vector search is great at "this means the same thing":

- "Mac Studio gateway host" vs "the machine running the gateway"
- "debounce file updates" vs "avoid indexing on every write"

But it can be weak at exact, high-signal tokens:

- IDs (`a828e60`, `b3b9895a`…)
- code symbols (`memorySearch.query.hybrid`)
- error strings ("sqlite-vec unavailable")

BM25 (full-text) is the opposite: strong at exact tokens, weaker at paraphrases. Hybrid search is the pragmatic middle ground: **use both retrieval signals** so you get good results for both "natural language" queries and "needle in a haystack" queries.

**File: docs/cli/memory.md (L21-28)**

```bash
openclaw memory status
openclaw memory status --deep
openclaw memory status --deep --index
openclaw memory status --deep --index --verbose
openclaw memory index
openclaw memory index --verbose
openclaw memory search release checklist
```

**File: docs/cli/memory.md (L37-38)**

- `--agent <id>`: scope to a single agent (default: all configured agents).
- `--verbose`: emit detailed logs during probes and indexing.

**File: docs/help/faq.md (L1208-1217)**

### Memory keeps forgetting things. How do I make it stick?

Ask the bot to **write the fact to memory**. Long-term notes belong in `MEMORY.md`; short-term context goes into `memory/YYYY-MM-DD.md`.

This is still an area we are improving. It helps to remind the model to store memories; it will know what to do. If it keeps forgetting, verify the Gateway is using the same workspace on every run.
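The weighted blend behind hybrid search described earlier can be sketched in a few lines. The 0.7/0.3 split mirrors the documented default weights; normalizing both scores to [0, 1] before blending is an assumption of this illustration, not OpenClaw's actual implementation.

```python
def hybrid_score(vector_score: float, bm25_score: float,
                 vector_weight: float = 0.7, bm25_weight: float = 0.3) -> float:
    """Blend a semantic (vector) score with a BM25 keyword score.

    Both inputs are assumed pre-normalized to [0, 1]; the default
    weights match the documented 0.7/0.3 split.
    """
    return vector_weight * vector_score + bm25_weight * bm25_score

# A paraphrase match: strong vector signal, weak keyword overlap.
paraphrase = hybrid_score(vector_score=0.9, bm25_score=0.1)

# An exact-token match (e.g. a commit ID): weak vector, strong BM25.
exact_token = hybrid_score(vector_score=0.2, bm25_score=1.0)
```

Either signal alone would rank one of these two hits poorly; the blend keeps both kinds of query usable.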
Docs: [Memory](/concepts/memory), [Agent workspace](/concepts/agent-workspace).

**File: docs/help/faq.md (L1239-1246)**

### Does memory persist forever? What are the limits?

Memory files live on disk and persist until you delete them. The limit is your storage, not the model.

The **session context** is still limited by the model context window, so long conversations can compact or truncate. That is why memory search exists - it pulls only the relevant parts back into context.

Docs: [Memory](/concepts/memory), [Context](/concepts/context).

**File: docs/zh-CN/concepts/memory.md (L16-33)** (translated)

# Memory

OpenClaw memory is **plain Markdown files in the agent workspace**. These files are the single source of truth; the model only remembers what is written to disk.

Memory search tools are provided by the active memory plugin (default: `memory-core`). Disable memory plugins with `plugins.slots.memory: none`.

## Memory files (Markdown)

The default workspace layout uses two memory layers:

- `memory/YYYY-MM-DD.md`
  - Daily log (append-only).
  - Today's and yesterday's content is read at session start.
- `MEMORY.md` (optional)
  - Curated long-term memory.
  - **Only loaded in the main, private session** (never in group contexts).

These files live under the workspace (`agents.defaults.workspace`, default `~/.openclaw/workspace`). See [Agent workspace](/concepts/agent-workspace) for the full layout.

**File: docs/zh-CN/cli/memory.md (L16-38)** (translated)

# openclaw memory

Manage semantic memory indexing and search.

Provided by the active memory plugin (default: `memory-core`); set `plugins.slots.memory: none` to disable.

Related:

- Memory concept: [Memory](/concepts/memory)
- Plugins: [Plugins](/tools/plugin)

## Examples

```bash
openclaw memory status
openclaw memory status --deep
openclaw memory status --deep --index
openclaw memory status --deep --index --verbose
openclaw memory index
openclaw memory index --verbose
openclaw memory search release checklist
openclaw memory status --agent main
openclaw memory index --agent main --verbose
```